Comment by johnsmith1840 2 days ago

Yes, but it's similar to RNNs or energy-based models.

They try to keep a single continuous "state" that is always being updated.

It's more about going "farther" than the "go forever" that CL (continual learning) promises.

Scaling laws hold in the sense that infinite scale would 100% lead to AGI. But the problem is that you can't infinitely scale the computation per task.

RL solves this problem in principle, but it carries a deep assumption that the future is known. Step too far outside that box and it collapses.

The smallest natural brains handle unknown future states with a fixed computation budget per timestep which is truly incredible.
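To make the "single continuous state, fixed compute per timestep" idea concrete, here's a minimal sketch of an RNN-style update. The weights and inputs are made up for illustration; the point is only that each step does the same constant amount of work regardless of how long the sequence runs.

```python
import math

def rnn_step(state, x, w_s=0.5, w_x=0.5):
    # One fixed-cost update: the new state depends only on the
    # previous state and the current input. w_s and w_x are
    # illustrative scalar weights, not anything from the comment.
    return math.tanh(w_s * state + w_x * x)

state = 0.0
for x in [1.0, -0.5, 2.0, 0.0]:
    # Same amount of computation every timestep, no matter
    # how many inputs have been seen so far.
    state = rnn_step(state, x)
```

A transformer, by contrast, attends over the whole history, so per-step cost grows with sequence length; the fixed-budget update is what the RNN/energy-model family (and brains) share.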