Comment by adleyjulian 2 days ago

> LLMs get over-analyzed. They’re predictive text models trained to match patterns in their data, statistical algorithms, not brains, not systems with “psychology” in any human sense.

Per the predictive processing theory of mind, human brains are similarly predictive machines. "Psychology" is an emergent property.

I think it's overly dismissive to point to the fundamentals being simple, i.e. that it's a token prediction algorithm, when it's clearly the unexpected emergent properties of LLMs that everyone is interested in.
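To make "token prediction algorithm" concrete, here is a toy sketch of the objective at its most stripped-down: a count-based bigram model. The function names and the tiny corpus are purely illustrative; a real LLM replaces the counting with a transformer and billions of parameters, but the task, predicting the next token, is the same.

```python
from collections import Counter, defaultdict

def train(text):
    """Count, for each token, which token tends to follow it."""
    tokens = text.split()
    model = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1
    return model

def predict(model, token):
    """Predict the most likely next token."""
    return model[token].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat ran")
print(predict(model, "the"))  # -> "cat" (the most frequent continuation)
```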

xoac 2 days ago

The fact that a theory exists does not mean that it is not garbage.

  • estearum 2 days ago

    So surely you can demonstrate that the brain is doing something much different from this, and go collect your Nobel?

    • sfn42 a day ago

      It is not our job to disprove your claim. It is your job to prove it.

      And then you can go collect your Nobel.

      • estearum a day ago

        Yeah sorry but if you call a hypothesis "garbage," you should have a few bullets to back it up.

        And no, there's no such thing as positive proof.

  • ubersketch 19 hours ago

    Predictive processing is absolutely not garbage. The dish of neurons that learned to play Pong was trained with a method directly based on the principles of predictive processing. Also, I don't think there's really any competitor for the niche predictive processing fills, namely closing the gap between neuroscience and psychology.

imiric 2 days ago

The difference is that we know how LLMs work. We know exactly what they process, how they process it, and for what purpose. Our inability to explain and predict their behavior is due to the mind-boggling amount of data and processing complexity that no human can comprehend.

In contrast, we know very little about human brains. We know how they work at a fundamental level, and we have a vague understanding of brain regions and their functions, but we have little knowledge of how the complex behavior we observe actually arises. The complexity is also orders of magnitude greater than what we can model with current technology, and it's very much an open question whether our current deep learning architectures are even the right approach to modeling it.

So, sure, emergent behavior is neat and interesting, but just because we can't intuitively understand a system doesn't mean we're on the right track to model human intelligence. After all, we find the patterns of the Game of Life interesting, yet the rules of that system are very simple (see the sketch below). LLMs are similar, only far more complex. We find the patterns they generate interesting, and potentially very useful, but anthropomorphizing this technology, or thinking that we have invented "intelligence", is wishful thinking and hubris. Especially since we struggle to define that word to begin with.
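As a concrete reference point, here is a minimal sketch of the Game of Life. The entire rule set is the two conditions at the end of step(), yet it produces gliders, oscillators, and famously even universal computation.

```python
from collections import Counter

def step(live):
    """One generation. `live` is the set of (x, y) coordinates of live cells."""
    # Count live neighbours of every cell that borders a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The complete "physics": a cell is alive next turn iff it has exactly
    # 3 live neighbours, or 2 live neighbours and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: five cells whose emergent behaviour is to crawl across the grid.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same shape, shifted one cell diagonally
```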

  • intull 2 days ago

    I think what the comment-OP above means is this: given what we know (or don't) about awareness, consciousness, intelligence, and the like, let alone the human experience of it all, we currently have no way to scientifically rule out the possibility that LLMs are self-aware/conscious entities of their own; and that's even before we start arguing about their "intelligence", whatever that is understood to mean.

    What we do know so far, across disciplines, plus the fact that neural nets were loosely modeled on what we've learned about the human brain, makes it not impossible to propose that LLMs _could_ be more than just "token prediction machines". There may be 10,000 ways of arguing that they are indeed simply that, but there are also a few ways of arguing that they could be more than what they seem. We can talk about probabilities, but we can't yet make a definitive case one way or the other, scientifically speaking. Those few arguments are worth not ignoring or dismissing.

    • sfn42 a day ago

      > we currently have no way to scientifically rule out the possibility that LLMs are self-aware/conscious entities of their own

      That may be. We also don't have a way to scientifically rule out the possibility that a teapot is orbiting Pluto.

      Just because you can't disprove something doesn't make it plausible.

      • intull 20 hours ago

        Is this what we are reduced to now, to snap back with a wannabe-witty remark just because you don't like how an idea sounds? Have we completely forgotten and given up on good-faith scientific discourse? Even on HN?

        • sfn42 17 hours ago

          I'm happy to participate in good-faith discourse, but honestly the idea that LLMs are conscious is ridiculous.

          We are talking about a computer program. It does nothing until it is invoked with an input, and then it produces a deterministic output unless a random component is injected to break that determinism (sketched below).

          That's all it does. It does not live a life of its own between invocations. It does not have a will of its own. Of course it isn't conscious, lol. How could anyone possibly believe it's conscious? It's an illusion. Don't be fooled.
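          To make the determinism point concrete, here is a minimal sketch of decoding; the token scores ("logits") are made-up numbers, purely for illustration. With temperature 0 the output is fully deterministic; any variation comes only from an explicitly injected random draw.

          ```python
          import math, random

          def sample_next_token(logits, temperature, rng):
              """Pick the next token given a score per candidate token."""
              if temperature == 0:
                  # Greedy decoding: no randomness at all, so the same
                  # input always yields the same output.
                  return max(logits, key=logits.get)
              # Softmax with temperature, then one explicit random draw.
              weights = {t: math.exp(s / temperature) for t, s in logits.items()}
              r = rng.random() * sum(weights.values())
              for token, w in weights.items():
                  r -= w
                  if r <= 0:
                      return token
              return token  # floating-point safety fallback

          # Made-up scores for the token after "The cat sat on the".
          logits = {"mat": 3.2, "sofa": 2.9, "moon": 0.4}
          print(sample_next_token(logits, 0.0, random.Random()))    # always "mat"
          print(sample_next_token(logits, 1.0, random.Random(42)))  # depends on the seed
          ```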

    • imiric a day ago

      I agree with that.

      But the problem is the narrative around this tech. It is marketed as if we have accomplished a major breakthrough in modeling intelligence. Companies are built on illusions and promises that AGI is right around the corner. The public is being deluded into thinking that the current tech will cure diseases, solve world hunger, and bring worldwide prosperity, when all we have actually achieved is throwing large amounts of data at a statistical trick that sometimes produces interesting patterns. Which isn't to say that this isn't or can't be useful, but it is a far cry from what is being suggested.

      > We can talk about probabilities, but not make a definitive case one way or the other yet, scientifically speaking.

      Precisely. But the burden of proof is on the one making the claim. They're telling us this is "intelligence", and because the term is so loosely defined, the claim can't be challenged in either direction. It would be more scientifically honest and accurate to describe what the tech actually is and does, instead of ascribing human-like qualities to it. But that won't make anyone much money, so here we are.

  • adleyjulian 2 days ago

    At no point did I say LLMs have human intelligence nor that they model human intelligence. I also didn't say that they are the correct path towards it, though the truth is we don't know.

    The point is that one could be similarly dismissive of human brains, saying they're prediction machines built on basic blocks of neurochemistry, and such a view would be asinine.

  • stevenhuang 2 days ago

    > The difference is that we know how LLMs work. We know exactly what they process, how they process it, and for what purpose

    All of this is false.