Comment by no_wizard a day ago

55 replies

That's not at all on par with what I'm saying.

There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior. We shouldn't seek to muddy this.

EDIT: Generally it's accepted that a core trait of intelligence is an agent's ability to achieve goals in a wide range of environments. This means you must be able to generalize, which in turn allows intelligent beings to react to new environments and contexts without previous experience or input.

Nothing I'm aware of on the market can do this. LLMs are great at statistically inferring things, but they can't generalize, which means they lack reasoning. They also lack the ability to seek new information without prompting.

The fact that all LLMs boil down to (relatively) simple mathematics should be enough to prove the point as well. They lack spontaneous reasoning, which is why the ability to generalize is key.

byearthithatius a day ago

"There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior" not really. The whole point they are trying to make is that the capability of these models IS ALREADY muddying the definition of intelligence. We can't really test it because the distribution its learned is so vast. Hence why he have things like ARC now.

Even if it's just gradient-descent-based distribution learning and there is no "internal system" (whatever you think that should look like) to support learning the distribution, the question is whether that is more than what we are doing, or whether we are starting to replicate our own mechanisms of learning.

  • jdhwosnhw a day ago

    People's memories are so short. Ten years ago the "well accepted definition of intelligence" was whether something could pass the Turing test. Now that goalpost has been completely blown out of the water and people are scrabbling to come up with a new one that precludes LLMs.

    A useful definition of intelligence needs to be measurable, based on inputs/outputs, not internal state. Otherwise you run the risk of dictating how you think intelligence should manifest, rather than what it actually is. The former is a prescription, only the latter is a true definition.

    • fc417fc802 a day ago

      I frequently see this characterization and can't agree with it. If I say "well I suppose you'd at least need to do A to qualify" and then later say "huh I guess A wasn't sufficient, looks like you'll also need B" that is not shifting the goalposts.

      At worst it's an incomplete and ad hoc specification.

      More realistically it was never more than an educated guess to begin with, about something that didn't exist at the time, still doesn't appear to exist, is highly subjective, lacks a single broadly accepted rigorous definition to this very day, and ultimately boils down to "I'll know it when I see it".

      I'll know it when I see it, and I still haven't seen it. QED

      • jdhwosnhw a day ago

        > If I say "well I suppose you'd at least need to do A to qualify" and then later say "huh I guess A wasn't sufficient, looks like you'll also need B" that is not shifting the goalposts.

        I dunno, that seems like a pretty good distillation of what moving the goalposts is.

        > I'll know it when I see it, and I still haven't seen it. QED

        While pithily put, that's not a compelling argument. You feel that LLMs are not intelligent. I feel that they may be intelligent. Without a decent definition of what intelligence is, the entire argument is silly.

    • Retric 21 hours ago

      LLMs can't pass an unrestricted Turing test. LLMs can mimic intelligence, but if you actually try to exploit their limitations, the deception is still trivial to unmask.

      Various chatbots have long been able to pass more limited versions of a Turing test. The most extreme constraint allows for simply replaying a canned conversation, which, with a helpful human assistant, is indistinguishable from a human. But exploiting limitations on a testing format doesn't have anything to do with testing for intelligence.

    • travisjungroth a day ago

      I've realized while reading these comments that my estimation of LLMs' intelligence has significantly increased. Rather than argue any specific test, I believe no one can come up with a text-based intelligence test that 90% of literate adults can pass but the top LLMs fail.

      This would mean there’s no definition of intelligence you could tie to a test where humans would be intelligent but LLMs wouldn’t.

      A maybe more palatable idea is that treating "intelligence" as a binary is insufficient. I think it's more of an extremely skewed distribution. With how far humans are above the rest, you didn't have to nail the cutoff point to get us on one side and everything else on the other. Maybe chimpanzees and dolphins slip in. But now the LLMs are much closer to humans. That line is harder to draw. Actually, it's not possible to draw it so that people are on one side and LLMs on the other.

      • fc417fc802 a day ago

        Why presuppose that it's possible to test intelligence via text? Most humans have been illiterate for most of human history.

        I don't mean to claim that it isn't possible, just that I'm not clear why we should assume that it is or that there would be an obvious way of going about it.

      • nl a day ago

        Or maybe accept that LLMs are intelligent and it's human bias that is the oddity here.

        • travisjungroth a day ago

          My whole comment was accepting LLMs as intelligent. It’s the first sentence.

  • dingnuts a day ago

    How does an LLM muddy the definition of intelligence any more than a database or search engine does? They are lossy databases with a natural language interface, nothing more.

    • tibbar a day ago

      Ah, but what is in the database? At this point it's clearly not just facts, but problem-solving strategies and an execution engine. A database of problem-solving strategies which you can query with a natural language description of your problem and it returns an answer to your problem... well... sounds like intelligence to me.

      • uoaei a day ago

        > problem-solving strategies and an execution engine

        Extremely unfounded claims. See: the root comment of this tree.

        • travisjungroth a day ago

          …things that look like problem-solving strategies in performance, then.

    • madethisnow a day ago

      Datasets and search engines are deterministic. Humans and LLMs are not.

      • semiquaver a day ago

        LLMs are completely deterministic. Their fundamental output is a vector representing a probability distribution of the next token given the model weights and context. Given the same inputs an identical output vector will be produced 100% of the time.

        This fact is relied upon by, for example, https://bellard.org/ts_zip/ (a lossless compression system that would not work if LLMs were nondeterministic).

        In practice most LLM systems use this distribution (along with a “temperature” multiplier) to make a weighted random choice among the tokens, giving the illusion of nondeterminism. But there’s no fundamental reason you couldn’t for example always choose the most likely token, yielding totally deterministic output.
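
        To make this concrete, here is a minimal sketch of the token choice in a decoding loop (with made-up logits, not from any real model): at temperature 0 the choice reduces to argmax and is fully deterministic; the weighted random choice is the only place randomness enters.

          import numpy as np

          rng = np.random.default_rng()

          def next_token(logits, temperature=1.0):
              """Pick a next-token id from raw model logits."""
              if temperature == 0.0:
                  # Greedy decoding: always the most likely token, fully deterministic.
                  return int(np.argmax(logits))
              # Softmax with temperature: higher values flatten the distribution.
              scaled = logits / temperature
              probs = np.exp(scaled - scaled.max())
              probs /= probs.sum()
              # This RNG call is the sole source of the apparent nondeterminism.
              return int(rng.choice(len(probs), p=probs))

          logits = np.array([2.0, 1.0, 0.1])  # made-up logits for a 3-token vocabulary
          print(next_token(logits, temperature=0.0))  # always 0
          print(next_token(logits, temperature=1.0))  # varies from run to run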

        This is an excellent and accessible series going over how transformer systems work if you want to learn more. https://youtu.be/wjZofJX0v4M

      • hatefulmoron a day ago

        The LLM's output is chaotic relative to the input, but it's deterministic right? Same settings, same model, same input, .. same output? Where does the chain get broken here?

      • daveguy a day ago

        The only reason LLMs are stochastic instead of deterministic is a random number generator. There is nothing inherently non-deterministic about LLM algorithms unless you turn up the "temperature" of selecting the next word. The fact that determinism can be changed by turning a knob is clear evidence that they are closer to a database or search engine than a human.

david-gpu a day ago

> There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior.

Go on. We are listening.

nmarinov a day ago

I think the confusion is because you're referring to a common understanding of what AI is, but the definition of AI differs from person to person.

Can you give your definition of AI? Also what is the "generally accepted baseline definition for what crosses the threshold of intelligent behavior"?

voidspark a day ago

You are doubling down on a muddled vague non-technical intuition about these terms.

Please tell us what that "baseline definition" is.

appleorchard46 a day ago

> Generally it's accepted that a core trait of intelligence is an agent's ability to achieve goals in a wide range of environments.

Be that as it may, a core trait is very different from a generally accepted threshold. What exactly is the threshold? Which environments are you referring to? How is it being measured? What are the goals?

You may have quantitative and unambiguous answers to these questions, but I don't think they would be commonly agreed upon.

highfrequency a day ago

What is that baseline threshold for intelligence? Could you provide concrete and objective results, that if demonstrated by a computer system would satisfy your criteria for intelligence?

  • no_wizard a day ago

    See the edit. It boils down to the ability to generalize, and LLMs can't generalize. I'm not the only one who holds this view either. Francois Chollet, a former AI researcher at Google, also shares this view.

    • highfrequency a day ago

      Are you able to formulate "generalization" in a concrete and objective way that could be achieved unambiguously, and is currently achieved by a typical human? A lot of people would say that LLMs generalize pretty well - they certainly can understand natural language sequences that are not present in their training data.

      • whilenot-dev 16 hours ago

        > A lot of people would say that LLMs generalize pretty well

        What do you mean here? For "a lot of people", the LLM is the trained model, i.e. the inference engine.

        > they certainly can understand natural language sequences that are not present in their training data

        Keeping in mind that the trained model is the LLM, I think learning a language involves generalization and is typically achieved by a human, so I'll try to formulate a test:

        Can a trained LLM learn a language that wasn't in its training set just by chatting/prompting? Given that all Korean text was excluded from the training set, could Korean be learned? Would it at least work with languages from the same language family (Spanish in the training set, Italian to be learned)?

    • voidspark a day ago

      Chollet's argument was that it's not "true" generalization, which would be at the level of human cognition. He sets the bar so high that it becomes a No True Scotsman fallacy. The deep neural networks are practically generalizing well enough to solve many tasks better than humans.

      • daveguy a day ago

        No. His argument is definitely closer to "LLMs can't generalize." I think you would benefit from re-reading the paper. The point is that a puzzle consisting of simple reasoning about simple priors should be a fairly low bar for "intelligence" (necessary but not sufficient). LLMs perform abysmally because they were trained toward a very specific goal that is different from solving the ARC puzzles. Humans solve these easily. And committees of humans do so perfectly. If LLMs were intelligent they would be able to construct algorithms consisting of simple applications of the priors.

        Training to a specific task and getting better at it is completely orthogonal to generalized search and application of priors. Humans do a mix of both: searching over the operations and pattern-matching to recognize the difference between the start and stop states. That is because their "algorithm" is so general-purpose. And we have very little idea how the two are combined efficiently.
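
        As a toy illustration of that search-plus-priors idea (a deliberately simplified sketch; the real ARC tasks and Chollet's proposals are far richer): brute-force search over a tiny DSL of grid operations, returning any program consistent with the training pairs.

          from itertools import product

          import numpy as np

          # Hypothetical primitives standing in for a few simple priors.
          PRIMITIVES = {
              "flip_lr": np.fliplr,
              "flip_ud": np.flipud,
              "rot90": np.rot90,
          }

          def solve(train_pairs, max_depth=2):
              """Search compositions of primitives that explain all training pairs."""
              for depth in range(1, max_depth + 1):
                  for names in product(PRIMITIVES, repeat=depth):
                      def program(grid, names=names):
                          for name in names:
                              grid = PRIMITIVES[name](grid)
                          return grid
                      if all(np.array_equal(program(i), o) for i, o in train_pairs):
                          return names  # a program explaining the examples
              return None

          # Worked example: the hidden rule is "mirror left-right".
          grid = np.array([[1, 0], [2, 3]])
          print(solve([(grid, np.fliplr(grid))]))  # -> ('flip_lr',)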

        At least this is how I interpreted the paper.

    • stevenAthompson a day ago

      > Francois Chollet, a former intelligence researcher at Google also shares this view.

      Great, now there are two of you.

aj7 a day ago

LLMs are statistically great at inferring things? Pray tell, how often is Google's AI search paragraph at the top correct or useful? Is that statistically great?

nl a day ago

> Generally it's accepted that a core trait of intelligence is an agent's ability to achieve goals in a wide range of environments.

This is the embodiment argument - that intelligence requires the ability to interact with its environment. Far from being generally accepted, it's a controversial take.

Could Stephen Hawking achieve goals in a wide range of environments without help?

And yet it's still generally accepted that Stephen Hawking was intelligent.

nurettin a day ago

> intelligence is an agent’s ability to achieve goals in a wide range of environments. This means you must be able to generalize, which in turn allows intelligent beings to react to new environments and contexts without previous experience or input.

I applaud the bravery of trying to one-shot a definition of intelligence, but no intelligent being acts without previous experience or input. If you're talking about in-sample vs. out-of-sample, LLMs do that all the time. At some point in the conversation, they encounter something completely new and react to it in a way that emulates an intelligent agent.

What really makes them tick is that language is a huge part of the intelligence puzzle, and language is something LLMs can generate at will. When we discover and learn to emulate the rest, we will get closer and closer to superintelligence.