Comment by redeux
All that was described here is learning from a mistake, which is something I hope all humans are capable of.
Yes, thank you, that's what I was getting at. Obviously it's a huge technical challenge on top of just training a coherent LLM in the first place, yet it's something humans do every day to stay adaptive.
No, what was described was specifically reporting to an external party the neural connections involved in the mistake and the source in past training data that caused them, as well as learning from new data.
LLMs already learn from new data within their context window ("in-context learning"), so if all you meant was learning from a mistake, then we already have AGI.