Comment by dragonwriter a day ago
> When we get to the point where a LLM can say "oh, I made that mistake because I saw this in my training data, which caused these specific weights to be suboptimal, let me update it", that'll be AGI.
While I believe we are far from AGI, I don't think the standard for AGI is an AI doing things a human absolutely cannot do.
All that was described here is learning from a mistake, which is something I hope all humans are capable of.