Comment by dragonwriter 2 days ago
No, what was described was specifically reporting to an external party the neural connections involved in the mistake and the sources in past training data that caused them, as well as learning from new data.
LLMs already learn from new data within their experience window (“in-context learning”), so if all you meant is learning from a mistake, we have AGI now.
> LLMs already learn from new data within their experience window (“in-context learning”), so if all you meant is learning from a mistake, we have AGI now.
They don't learn from the mistake, though; they mostly just repeat it.
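For readers unfamiliar with the term being debated: "in-context learning" means the model's behavior is steered by examples or corrections placed in the prompt, not by any update to its weights. A minimal sketch of the idea (the questions, answers, and `build_prompt` helper are hypothetical illustrations, not any particular API):

```python
def build_prompt(history: list[tuple[str, str]], new_question: str) -> str:
    """Concatenate prior (question, corrected answer) pairs ahead of a new query.

    The correction exists only inside the prompt (the "experience window");
    nothing about the model itself changes, which is the crux of the
    disagreement above.
    """
    parts = []
    for question, corrected_answer in history:
        parts.append(f"Q: {question}\nA: {corrected_answer}")
    parts.append(f"Q: {new_question}\nA:")
    return "\n\n".join(parts)


# A past mistake and its correction are carried along as context.
prompt = build_prompt(
    [("What is 7 * 8?", "56 (an earlier answer of 54 was corrected)")],
    "What is 7 * 9?",
)
print(prompt)
```

Whether conditioning on such a window counts as "learning from a mistake" is exactly what the two comments disagree about: the correction disappears as soon as it scrolls out of the context.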