Comment by Meganet
You actually don't know that.
An LLM has ingested a huge amount of data. It can create character profiles, audiences, personas, etc.
Why wouldn't it potentially have learned to 'understand' what 'being aware of your limitations' means?
Right now, 'change of reasoning' feels to me a bit like querying the existing meta space through the reasoning process to adjust the weights; basically priming the model.
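To make the 'priming' idea concrete, here is a minimal sketch, assuming the Hugging Face transformers library and "gpt2" as a stand-in model (both are illustrative choices, not anything confirmed about the model under discussion). The weights stay frozen at inference; prepending a reasoning-style prefix only changes the context the same model conditions on.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model purely for illustration; any causal LM would do.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

question = "Q: Is the model aware of its own limitations?\nA:"
primed = ("Think step by step and explicitly say what you do not know.\n"
          + question)

for prompt in (question, primed):
    inputs = tokenizer(prompt, return_tensors="pt")
    # The weights are untouched; only the context (the "prime") differs,
    # yet the continuation can change substantially.
    output = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    print("----")
```

Whether that counts as the model 'adjusting' anything or just being steered by context is exactly the open question in the thread.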
I also wouldn't call it just a 'trick'. It looks simple, weird, or whatnot, but I do believe this is part of AI thinking-process research.
It's a good question, though: what did they train? A new architecture? More parameters? Is this training a mix of experiments they did? Some auto-optimization mechanism?
It might understand the concept of having limitations, but AFAIK it can't reliably recognize when it does or doesn't know something, or when it has actually hit a limitation.