Comment by viccis
>The problem with all this is that we don't actually know what human cognition is doing either.
We do know what it's not doing, and that is operating only through reproducing linguistic patterns. There's no more cause to think LLMs approximate our thought (thought being something they are incapable of) than there is to think a Naive Bayes spam filter approximates our thought.
My point is that we know very little about the sort of "thought" that we are capable of either. I agree that LLMs cannot do what we typically refer to as "thought", but I think it is possible that we do a LOT less of that than we think when we are "thinking" (or more precisely, having the experience of thinking).