Comment by Arkhaine_kupo, 4 hours ago
> the sentence "LLMs don't think because they predict the next token" is logically as wrong
It isn't, depending on the definition of "think".
If you believe that thought is the process whereby an agent with a world model takes in input, analyses the circumstances, predicts an outcome, and adjusts its behaviour based on that prediction, then the sentence "LLMs don't think because they predict a token" is entirely correct.
They cannot have a world model. They could, in some sense, be said to receive sensory input through the prompt, but they are neither analysing that prompt against their own subjectivity, nor predicting outcomes, coming up with a plan, or changing their actions/responses/behaviour as a result.
Any definition of "think" that requires agency or a world model (which, as far as I know, is all of them) would exclude an LLM by definition.