Comment by byyoung3 a day ago

This is valid but also hard to back up with any alternatives. At the end of the day it's just a neural network with backprop, and new architectures will likely only be marginally better. So either we add new algorithms on top of it like RL, create a new learning algorithm (for example, forward-forward), or we figure out how to use more energy-efficient compute (analog, etc.) to scale several more orders of magnitude. It's gonna take some time
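For anyone unfamiliar with forward-forward: the core idea (from Hinton's paper) is that each layer is trained with a purely local objective instead of a backward pass. A rough sketch of one such layer in NumPy, just to illustrate the idea, not a faithful reproduction of the paper's setup:

```python
# Rough sketch of the forward-forward idea: each layer locally pushes its
# "goodness" (sum of squared activations) ABOVE a threshold for positive
# (real) data and BELOW it for negative (fake) data. No backward pass.
# Hyperparameters and layer shape here are arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # per-example goodness: sum of squared activations
    return np.sum(h * h, axis=1)

class FFLayer:
    """One layer trained with a local goodness objective."""
    def __init__(self, n_in, n_out, lr=0.03, theta=2.0):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.lr, self.theta = lr, theta

    def _normed(self, x):
        # length-normalize the input so only its direction (not its
        # magnitude, i.e. the previous layer's goodness) carries info
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(0.0, self._normed(x) @ self.W)  # ReLU

    def train_step(self, x_pos, x_neg):
        # logistic loss on (goodness - theta): raise it for positives,
        # lower it for negatives, using only this layer's activations
        for x, positive in ((x_pos, True), (x_neg, False)):
            xn = self._normed(x)
            h = np.maximum(0.0, xn @ self.W)
            p = 1.0 / (1.0 + np.exp(-(goodness(h) - self.theta)))
            coef = (1.0 - p) if positive else -p
            self.W += self.lr * (xn.T @ (2.0 * h * coef[:, None])) / len(x)
```

Each layer updates independently on its own inputs, which is what makes it interesting for analog or low-power hardware: no gradients need to flow backward through the stack.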

devmor a day ago

Yeah, that's fair - it's very easy to tell that LLMs are not the end state, but it's near impossible to know what comes next.

Personally I think LLMs will be relegated to transforming the input and output of whatever new logic system is brought forth, rather than pretending to do logic by aggregating static corpora, as they do now.

  • MoonGhost 20 hours ago

    They can already do calculations by using tools, without pretending. Why not make them write code for logic too? That would extend their 'range'. The end user could be shown just a summary to keep things looking simple.
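The pattern MoonGhost describes can be sketched in a few lines: the model emits a small program, the host executes it, and the user only sees the summary. The `fake_llm` stub below is a hypothetical stand-in for a real model call, and the whitelist-based evaluator is one safe way a host might run model-written arithmetic:

```python
# Sketch of "the model writes the logic, the host runs it": the model
# (stubbed out here) translates a question into code, a restricted
# evaluator executes it, and the user gets only a summary string.
import ast
import operator

# whitelist of AST operator nodes, so we never call eval() on model output
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate a model-written arithmetic expression, nothing else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed syntax in model output")
    return walk(ast.parse(expr, mode="eval"))

def fake_llm(question: str) -> str:
    # hypothetical stand-in: a real model would translate the
    # question into code; here the response is hard-coded
    return "(17 * 24) + 3"

def answer(question: str) -> str:
    code = fake_llm(question)    # model writes the logic
    result = safe_eval(code)     # host executes it, no pretending
    return f"Result: {result:g}" # user sees only the summary

print(answer("What is 17 times 24, plus 3?"))  # Result: 411
```

Anything outside the whitelist (function calls, attribute access, imports) raises instead of executing, which is the part that keeps the extended 'range' from becoming an arbitrary-code-execution hole.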