Comment by andsoitis a day ago

> I’m curious what your mental model is for how human cognition works. Is it any less mechanical in your view?

Human cognition is not constrained to pattern recognition and prediction over text and symbols.

greenpizza13 17 hours ago

The thesis of "What is Intelligence" is that intelligence is just that:

> Intelligence is the ability to model, predict, and influence one’s future; it can evolve in relation to other intelligences to create a larger symbiotic intelligence.

The book is worth a read. But I don't believe its definition limits intelligence to humans. Then again, I'm only halfway through the book :).

[https://mitpress.mit.edu/9780262049955/what-is-intelligence/]

  • andsoitis an hour ago

    > Intelligence is the ability to model, predict, and influence one’s future

    LLMs pattern-match and predict over textual symbols.

    Human brains pattern-match and predict beyond mere text.

    LLMs also do not learn in the moment, which I would argue is a sign of a lack of intelligence.

  • dap 11 hours ago

    It seems obvious to me that "the ability to model, predict, and influence one’s future" is far more general and capable than "constrained to pattern recognition and prediction of text and symbols." How do you conclude that those are the same?

    I do like that definition because it seems to capture what's different between LLMs and people even when they come up with the same answers. If you give a person a high-school physics question about projectile motion, they'll use a mental model that combines explicit physical principles with algebraic equations. They might talk to themselves or use human language to work through it, but one can point to a clear underlying model (principles, laws, and formulas) that is agnostic to the human language they're using to work through them.
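    The kind of explicit model meant here can be sketched in a few lines of Python; the function name and the numbers below are illustrative only, not from the book or this thread:

```python
import math

# Illustrative sketch: the range of a projectile launched at speed v0 and
# angle theta, ignoring air resistance. The answer falls out of an explicit
# kinematic formula, R = v0^2 * sin(2*theta) / g, rather than any
# pattern-matching over text.

def projectile_range(v0, theta_deg, g=9.81):
    """Horizontal distance traveled, in meters."""
    theta = math.radians(theta_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# The model makes language-independent predictions, e.g. that range
# is maximal at a 45-degree launch angle for a given speed.
print(round(projectile_range(20.0, 45.0), 2))  # → 40.77
```

    The point is that the same formula yields the same prediction no matter what language the solver thinks in.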

    I realize some people believe (and it could be) that ultimately it really is the same process: either the LLM has such a model encoded implicitly in all those numbers, or human thought using those principles and formulas is the same kind of statistical walk the LLM is doing. At the very least, that seems far from clear, and it seems reflected in results like the OP's.