Comment by hnuser123456 a day ago
When we get to the point where an LLM can say "oh, I made that mistake because I saw this in my training data, which caused these specific weights to be suboptimal, let me update them", that'll be AGI.
But as you say, currently they have zero "self-awareness".
That’s holding LLMs to a significantly higher standard than humans. When I realize there’s a flaw in my reasoning, I don’t know that it was caused by specific incorrect neuron connections or action potentials in my brain; I think about the flaw in domain-specific terms, using language or something like it.
Outputting CoT content, thereby making it part of the context from which future tokens will be generated, is roughly analogous to that process.
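To make the analogy concrete: because generation is autoregressive, every CoT token the model emits gets appended to the context and conditions everything generated after it. A rough sketch of that loop (generate_next_token and generate_with_cot are hypothetical stand-ins, not any real library's API):

    def generate_next_token(context: list[str]) -> str:
        """Hypothetical stand-in for an LLM's sampling step."""
        # A real implementation would run the model over `context`;
        # this toy version just ends generation immediately.
        return "<eos>"

    def generate_with_cot(prompt: list[str], max_tokens: int = 256) -> list[str]:
        context = list(prompt)
        for _ in range(max_tokens):
            token = generate_next_token(context)  # conditioned on prompt + all prior output
            context.append(token)                 # emitted CoT ("wait, that step is wrong...")
                                                  # now shapes every later token
            if token == "<eos>":
                break
        return context[len(prompt):]

The weights never change during this loop; any "self-correction" lives entirely in the growing context, which is the point of the analogy.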