Comment by perrygeo
Everything I've read in the last 5 months says otherwise. It's probably best described by the Apple ML group's paper called The Illusion of Thinking. It empirically works, but the explanation could just be that making the stochastic parrot squawk longer yields a better response.
In any case, this is a far cry from what I was discussing. At best, this shows an ability for LLMs to "learn" within the context window, which should already be somewhat obvious (that's what the attention mechanism does). There are no weight updates and no global knowledge base, not until the content gets published, rescraped, and trained into the next version. That does demonstrate a learning feedback loop, albeit one that takes months or years and is driven by external forces: the company that trains it. But it's way too slow to be considered intelligent, and it can't learn on its own without help.
A system that truly learned, i.e., incorporated empirical data from its environment into its model of the world, would need to do this in millisecond time frames. Single-celled organisms can do this. Where you at, AGI?
> explanation could just be that making the stochastic parrot squawk longer yields a better response
No one in the research and science communities has ever said anything contrary to this, and if they did, they wouldn't last long (although I imagine many of them would take issue with your stochastic parrot reference).
The Apple paper has a stronger title than its actual premise. Basically, they found that "thinking" definitely works but falls apart past a certain problem difficulty, and that simply scaling "thinking" up doesn't help for these harder problems.
It never said "thinking" doesn't work. People are just combining the title with their existing prejudices to draw the conclusion they _want_ to see.