Comment by vladsh 2 days ago

24 replies

LLMs get over-analyzed. They’re predictive text models trained to match patterns in their data, statistical algorithms, not brains, not systems with “psychology” in any human sense.

Agents, however, are products. They should have clear UX boundaries: show what context they’re using, communicate uncertainty, validate outputs where possible, and expose performance data so users can understand when and why they fail.
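
A minimal sketch of what that kind of boundary could look like in code (the type and field names here are hypothetical, just to illustrate the idea):

```python
from dataclasses import dataclass, field

@dataclass
class AgentAnswer:
    """Hypothetical structured output an agent could return instead of bare text."""
    text: str                                         # the answer itself
    sources: list[str] = field(default_factory=list)  # which context/documents were used
    confidence: float = 0.0                           # model- or heuristic-derived certainty, 0..1
    validated: bool = False                           # did an external check (tests, schema, citations) pass?
    failure_notes: str = ""                           # why validation failed, if it did

def present(answer: AgentAnswer) -> str:
    """Render the answer with its provenance and uncertainty made visible to the user."""
    header = f"[confidence {answer.confidence:.0%}, validated: {answer.validated}]"
    sources = "\n".join(f"  - {s}" for s in answer.sources) or "  (no sources recorded)"
    return f"{header}\n{answer.text}\nContext used:\n{sources}"
```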

IMO the real issue is that raw, general-purpose models were released directly to consumers. That normalized under-specified consumer products and created the expectation that users would interpret model behavior, define their own success criteria, and manually handle edge cases, sometimes with severe real-world consequences.

I’m sure the market will fix itself with time, but I hope more people learn when not to use these half-baked AGI “products”.

DuperPower 2 days ago

Because they wanted to sell the illusion of consciousness. ChatGPT, Gemini, and Claude are human simulators, which is lame. I want autocomplete prediction, not this personality and retention stuff, which only makes the agents dumber.

  • metalliqaz a day ago

    Since their goal is to acquire funding, it is much less important for the product to be useful than it is for the product to be sci-fi.

    Remember when the point was revenue and profits? Man, those were the good old days.

nowittyusername 2 days ago

You hit the nail on the head. Anyone who's been working intimately with LLMs comes to the same conclusion. The LLM itself is only one small but important part, meant to be used in a more complicated and capable system. And that system will not have the same limitations as the raw LLM itself.

andreyk 2 days ago

To say that LLMs are 'predictive text models trained to match patterns in their data, statistical algorithms, not brains, not systems with “psychology” in any human sense.' is not entirely accurate. Classic LLMs like GPT-3, sure. But LLM-powered chatbots (ChatGPT, Claude - which is what this article is really about) go through much more than just predict-next-token training (RLHF, presumably now reasoning training, who knows what else).
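
To give a sense of what that extra training looks like: the reward-model stage of RLHF is typically fit on human preference pairs with a loss roughly like the one below (a toy sketch, not any lab's actual code; the stand-in reward model is made up just so it runs):

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, chosen_ids, rejected_ids):
    """Bradley-Terry style objective: push the score of the human-preferred
    response above the score of the rejected one."""
    r_chosen = reward_model(chosen_ids)      # one scalar score per sequence
    r_rejected = reward_model(rejected_ids)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Stand-in "reward model": embed tokens, mean-pool, score. Real ones are full transformers.
emb = torch.nn.Embedding(1000, 8)
head = torch.nn.Linear(8, 1)

def toy_reward(ids: torch.Tensor) -> torch.Tensor:
    return head(emb(ids).mean(dim=1)).squeeze(-1)

chosen = torch.randint(0, 1000, (4, 16))    # token ids of responses raters preferred
rejected = torch.randint(0, 1000, (4, 16))  # token ids of responses raters rejected
loss = preference_loss(toy_reward, chosen, rejected)
```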

  • mrbungie a day ago

    > go through much more than just predict-next-token training (RLHF, presumably now reasoning training, who knows what else).

    Yep, but...

    > To say they LLMs are 'predictive text models trained to match patterns in their data, statistical algorithms, not brains, not systems with “psychology” in any human sense.' is not entirely accurate.

    That's a logical leap; you'd need to bridge the gap between "more than next-token prediction" and similarity to wetware brains and "systems with psychology".

basch 2 days ago

They are human in the sense that they are reinforced to exhibit human-like behavior, by humans. A human byproduct.

  • NebulaStorm456 2 days ago

    Is the solution to sycophancy just a very good, clever prompt that forces logical reasoning? Do we want our LLMs to be scientifically accurate and truthful, or creative and exploratory in nature? Fuzzy systems like LLMs will always have these kinds of tradeoffs, and there should be a better UI with accessible "traits" (devil's advocate, therapist, expert doctor, finance advisor) that one can invoke.
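
    Even something this simple would go a long way (trait names and prompts are made up for illustration):

    ```python
    # hypothetical mapping from a user-selectable "trait" to a system prompt
    TRAITS = {
        "devil's advocate": "Challenge the user's assumptions and argue the strongest opposing view.",
        "expert doctor": "Be precise, state uncertainty explicitly, and recommend seeing a clinician.",
        "finance advisor": "Be conservative, state your assumptions, and never guarantee returns.",
    }

    def build_messages(trait: str, user_prompt: str) -> list[dict]:
        """Prepend the chosen trait as a system message, so the user (not the vendor)
        picks the persona and its tradeoffs for the conversation."""
        return [
            {"role": "system", "content": TRAITS[trait]},
            {"role": "user", "content": user_prompt},
        ]
    ```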

adleyjulian 2 days ago

> LLMs get over-analyzed. They’re predictive text models trained to match patterns in their data, statistical algorithms, not brains, not systems with “psychology” in any human sense.

Per the predictive processing theory of mind, human brains are similarly predictive machines. "Psychology" is an emergent property.

I think it's overly dismissive to point to the fundamentals being simple, i.e. that it's a token prediction algorithm, when it's clearly the unexpected emergent properties of LLMs that everyone is interested in.

  • xoac 2 days ago

    The fact that a theory exists does not mean that it is not garbage.

    • estearum 2 days ago

      So surely you can demonstrate how the brain is doing something much different from this, and go collect your Nobel?

      • sfn42 a day ago

        It is not our job to disprove your claim. It is your job to prove it.

        And then you can go collect your Nobel.

        • estearum a day ago

          Yeah sorry but if you call a hypothesis "garbage," you should have a few bullets to back it up.

          And no, there's no such thing as positive proof.

    • ubersketch 19 hours ago

      Predictive processing is absolutely not garbage. The dish of neurons that learned to play Pong was trained using a method directly based on the principles of predictive processing. Also, I don't think there's really any competitor for the niche predictive processing is filling, or for closing the gap between neuroscience and psychology.

  • imiric 2 days ago

    The difference is that we know how LLMs work. We know exactly what they process, how they process it, and for what purpose. Our inability to explain and predict their behavior is due to the mind-boggling amount of data and processing complexity that no human can comprehend.

    In contrast, we know very little about human brains. We know how they work at a fundamental level, and we have a vague understanding of brain regions and their functions, but we have little knowledge of how the complex behavior we observe actually arises. The complexity is also orders of magnitude greater than what we can model with current technology, and it's very much an open question whether our current deep learning architectures are even the right approach to model it.

    So, sure, emergent behavior is neat and interesting, but just because we can't intuitively understand a system doesn't mean that we're on the right track to model human intelligence. After all, we find the patterns of the Game of Life interesting, yet the rules of that system are very simple. LLMs are similar, only far more complex. We find the patterns they generate interesting, and potentially very useful, but anthropomorphizing this technology, or thinking that we have invented "intelligence", is wishful thinking and hubris. Especially since we struggle to define that word in the first place.
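
    To make the Game of Life comparison concrete, the entire rule set fits in a few lines, yet it produces patterns people have studied for decades (a quick sketch):

    ```python
    from collections import Counter

    def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
        """One generation of Conway's Game of Life over a set of live cells."""
        neighbors = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # a cell is alive next step if it has 3 live neighbors,
        # or 2 live neighbors and is already alive
        return {c for c, n in neighbors.items() if n == 3 or (n == 2 and c in live)}

    # a "glider": five cells that keep travelling across the grid forever
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    ```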

    • intull 2 days ago

      I think what the comment-OP above means to point at is this: given what we know (or don't) about awareness, consciousness, intelligence, and the like, let alone the human experience of it all, today we do not have a way to scientifically rule out the possibility that LLMs are self-aware/conscious entities of their own, even before we start arguing about their "intelligence", whatever that may be understood as.

      What we do know and have so far, across disciplines, and from the fact that neural nets are modeled after what we've learned about the human brain, is that it isn't impossible to propose that LLMs _could_ be more than just "token prediction machines". There may be 10,000 ways of arguing that they are indeed simply that, but there are also a few ways of arguing that they could be more than what they seem. We can talk about probabilities, but not make a definitive case one way or the other yet, scientifically speaking. Those few arguments are worth not ignoring or dismissing.

      • sfn42 a day ago

        > we do not have a way to scientifically rule out the possibility that LLMs are self-aware/conscious entities of their own

        That may be. We also don't have a way to scientifically rule out the possibility that a teapot is orbiting Pluto.

        Just because you can't disprove something doesn't make it plausible.

      • imiric a day ago

        I agree with that.

        But the problem is the narrative around this tech. It is marketed as if we have accomplished a major breakthrough in modeling intelligence. Companies are built on illusions and promises that AGI is right around the corner. The public is being deluded into thinking that the current tech will cure diseases, solve world hunger, and bring worldwide prosperity, when all we have achieved is to throw large amounts of data at a statistical trick that sometimes produces interesting patterns. That isn't to say this isn't or can't be useful, but it is a far cry from what is being suggested.

        > We can talk about probabilities, but not make a definitive case one way or the other yet, scientifically speaking.

        Precisely. But the burden of proof is on the author. They're telling us this is "intelligence", and because the term is so loosely defined, this can't be challenged in either direction. It would be more scientifically honest and accurate to describe what the tech actually is and does, instead of ascribing human-like qualities to it. But that won't make anyone much money, so here we are.

    • adleyjulian 2 days ago

      At no point did I say that LLMs have human intelligence, or that they model human intelligence. I also didn't say that they are the correct path towards it, though the truth is we don't know.

      The point is that one could be similarly dismissive of human brains, saying they're prediction machines built on basic blocks of neurochemistry, and such a view would be asinine.

    • stevenhuang 2 days ago

      > The difference is that we know how LLMs work. We know exactly what they process, how they process it, and for what purpose

      All of this is false.

kcexn 2 days ago

A large part of that training is done by asking people if responses 'look right'.

It turns out that people are more likely to think a model is good when it kisses their ass than if it has a terrible personality. This is arguably a design flaw of the human brain.

more_corn 2 days ago

Sure, but they reflect all known human psychology because they’ve been trained on our writing. Look up the Anthropic tests. If you make an agent based on an LLM, it will display very human behaviors, including aggressive attempts to prevent being shut down.