Comment by sdwr
Text comes in, text goes out, but there's a lot of complexity in the middle. It's not a "world model", but there's definitely modeling of the world going on inside.
> Text comes in, text goes out, but there's a lot of complexity in the middle. It's not a "world model", but there's definitely modeling of the world going on inside.
There is zero modeling of the world going on inside, for the very simple reason that it has never seen the world. It has only been given text, which means it has no idea why that text was written. This is the fundamental limitation of all LLMs: they are trained only on text that humans wrote after processing the world. You can't "uncompress" the text to recover the world state that led to it being written.