Comment by blueprint
It's very simple: the model itself doesn't know and can't verify it. It knows that it doesn't know. Do you deny that? Or do you think a general intelligence would be in the habit of lying to people and concealing why? At the end of the day, that would be not only unintelligent but hostile. And there is such a thing as "the truth": it can be verified by anyone, repeatably, under the requisite (fair, accurate) circumstances, and it's not based on word games.
All I asked was for the OP to substantiate their claim that LLMs are not AGI. I'm agnostic on that - either way seems plausible.
I don't think there's even an agreed-upon criterion for what AGI is. Current models can easily pass the Turing test (except for some gotchas, which don't really test intelligence anyway).