Comment by mcv
The best argument I've heard for why LLMs aren't there yet is that they don't have a real world model. They interact only with text and images, not with the real world. They have no concept of the real world, and therefore no real concept of truth: they learn by interacting with text, not with the world.
I don't know if that argument is true, but it does make some sense.
In fact, you might argue that modern chess engines have more of a world model (although an extremely limited one): they interact with the chess game itself. They learn not merely by studying the rules, but by playing the game millions of times. Of course that's only the "world" of the chess game, but it's something, and as a result they know what works in chess. They have a concept of truth within the chess rules. That concept is super limited, of course, but it might be more than what LLMs have.
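To make "learning by playing" concrete, here is a toy sketch (not how any real engine works) using tic-tac-toe as a stand-in for chess: the program plays against itself many times and nudges a value table toward observed outcomes, so whatever it "knows" comes from interacting with the game rather than from reading text about it. All names and constants below are illustrative.

```python
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)   # position -> learned value from X's perspective
LEARNING_RATE = 0.1

def play_one_game(explore=0.2):
    """Self-play one game, then nudge each visited position toward the outcome."""
    board, player, visited = ['.'] * 9, 'X', []
    while True:
        moves = [i for i, cell in enumerate(board) if cell == '.']
        w = winner(board)
        if w or not moves:
            outcome = 1.0 if w == 'X' else (-1.0 if w == 'O' else 0.0)
            break
        # Mostly greedy with respect to learned values, sometimes random (exploration).
        if random.random() < explore:
            move = random.choice(moves)
        else:
            def score(m):
                trial = board[:]
                trial[m] = player
                v = values[''.join(trial)]
                return v if player == 'X' else -v
            move = max(moves, key=score)
        board[move] = player
        visited.append(''.join(board))
        player = 'O' if player == 'X' else 'X'
    for state in visited:
        values[state] += LEARNING_RATE * (outcome - values[state])

for _ in range(20000):        # "playing the game many times"
    play_one_game()
print(f"learned values for {len(values)} positions")
```

The point of the sketch is only that the feedback signal comes from the game itself, which is the sense in which such a program has a (tiny) world to be right or wrong about.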
It doesn't make any sense. You aren't interacting with neutrinos either. Nothing, really, beyond some local excitations of electric fields and EM waves in a certain frequency range.