Comment by mtts
I'm as bearish as anyone on the current AI hype, but this particular ship has sailed. Research is revealing that these humongous neural networks of weights trained for next-token prediction exhibit underlying structures that seem to map, in some way, onto a form of knowledge about the world that is, however imperfectly, extracted from all the text they're trained on.
I personally wouldn't be comfortable arguing that this is meaningfully different from what happens in our own brains.
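To give a concrete, if toy, sense of what that kind of work typically does: freeze the model and train a simple linear "probe" on its hidden states to see whether some property of the world is decodable from them. Here's a minimal sketch, assuming a small open model like gpt2 and made-up toy sentences/labels, so it's purely illustrative of the setup, not a reproduction of any particular paper:

```python
# Minimal linear-probe sketch: can a world property be read out linearly
# from a frozen LM's hidden states? Model choice, sentences, and labels
# are illustrative assumptions, not taken from any specific study.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"  # any small causal LM works; gpt2 runs locally
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# Toy dataset: statements paired with a geographic truth label (assumed).
texts = [
    "Paris is north of Madrid.",   # true
    "Madrid is north of Paris.",   # false
    "Oslo is north of Rome.",      # true
    "Rome is north of Oslo.",      # false
]
labels = [1, 0, 1, 0]

def last_token_state(text, layer=-1):
    """Hidden state of the final token at a given layer (model frozen)."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[layer][0, -1].numpy()

X = [last_token_state(t) for t in texts]
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("probe accuracy on the toy training set:", probe.score(X, labels))
```

With only four training sentences this obviously proves nothing; the point is the shape of the experiment. When a linear readout like this recovers a property well above chance on held-out examples, that's the sort of "underlying structure" being claimed.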
> Research is revealing that these humongous neural networks of weights trained for next-token prediction exhibit underlying structures that seem to map, in some way, onto a form of knowledge about the world that is
[[citation needed]]
I am sorry, but I need exceptionally strong proof of that statement. I think it is totally untrue.