no_wizard a day ago

>internal concepts, the model is not aware that it's doing anything so how could it "explain itself"

This in a nutshell is why I hate that all this stuff is being labeled as AI. It's advanced machine learning (another term that also feels inaccurate, but I concede it's at least closer to what's happening conceptually).

Really, LLMs and the like still lack any model of intelligence. It's, in the most basic of terms, algorithmic pattern matching mixed with statistical likelihoods of success.

And that can get things really, really far. There are entire businesses built on doing that kind of work (particularly in finance) with very high accuracy and usefulness, but it's not AI.

johnecheck a day ago

While I agree that LLMs are hardly sapient, it's very hard to make this argument without being able to pinpoint what a model of intelligence actually is.

"Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success"

  • whilenot-dev a day ago

    What's wrong with just calling them smart algorithmic models?

    Being smart allows someone to be wrong, as long as that leads to a satisfying solution. Being intelligent, on the other hand, requires foundational correctness in concepts that aren't even defined yet.

    EDIT: I also somewhat like the term imperative knowledge (models) [0]

    [0]: https://en.wikipedia.org/wiki/Procedural_knowledge

    • jfengel a day ago

      The problem with "smart" is that they fail at things that dumb people succeed at. They have ludicrous levels of knowledge and a jaw-dropping ability to connect pieces while missing what's right in front of them.

      The gap makes me uncomfortable with the implications of the word "smart". Whatever this is, it's orthogonal to that.

      • sigmoid10 19 hours ago

        >they fail at things that dumb people succeed at

        Funnily enough, you can also observe that in humans. The number of times I have seen people from highly intellectual, high-income/academic families struggle with simple tasks that even the dumbest people do with ease is staggering. If you're not trained for something and suddenly confronted with it for the first time, you will also in all likelihood fail. "Smart" is just as ill-defined as any other clumsy approach to defining intelligence.

      • nradov 15 hours ago

        Bombs can be smart, even though they sometimes miss the target.

  • no_wizard a day ago

    That's not at all on par with what I'm saying.

    There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior. We shouldn't seek to muddy this.

    EDIT: Generally it's accepted that a core trait of intelligence is an agent's ability to achieve goals in a wide range of environments. This means you must be able to generalize, which in turn allows intelligent beings to react to new environments and contexts without previous experience or input.

    Nothing I'm aware of on the market can do this. LLMs are great at statistically inferring things, but they can't generalize, which means they lack reasoning. They also lack the ability to seek new information without prompting.

    The fact that all LLMs boil down to (relatively) simple mathematics should be enough to prove the point as well. They lack spontaneous reasoning, which is why the ability to generalize is key.

    • byearthithatius a day ago

      "There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior" not really. The whole point they are trying to make is that the capability of these models IS ALREADY muddying the definition of intelligence. We can't really test it because the distribution its learned is so vast. Hence why he have things like ARC now.

      Even if its just gradient descent based distribution learning and there is no "internal system" (whatever you think that should look like) to support learning the distribution, the question is if that is more than what we are doing or if we are starting to replicate our own mechanisms of learning.

      • jdhwosnhw a day ago

        People's memories are so short. Ten years ago the "well accepted definition of intelligence" was whether something could pass the Turing test. Now that goalpost has been completely blown out of the water and people are scrambling to come up with a new one that precludes LLMs.

        A useful definition of intelligence needs to be measurable, based on inputs/outputs, not internal state. Otherwise you run the risk of dictating how you think intelligence should manifest, rather than what it actually is. The former is a prescription; only the latter is a true definition.

      • dingnuts a day ago

        How does an LLM muddy the definition of intelligence any more than a database or search engine does? They are lossy databases with a natural language interface, nothing more.

    • david-gpu a day ago

      > There exists a generally accepted baseline definition for what crosses the threshold of intelligent behavior.

      Go on. We are listening.

    • nmarinov a day ago

      I think the confusion is because you're referring to a common understanding of what AI is, but that definition differs from person to person.

      Can you give your definition of AI? Also what is the "generally accepted baseline definition for what crosses the threshold of intelligent behavior"?

    • voidspark a day ago

      You are doubling down on a muddled, vague, non-technical intuition about these terms.

      Please tell us what that "baseline definition" is.

    • appleorchard46 a day ago

      > Generally its accepted that a core trait of intelligence is an agent’s ability to achieve goals in a wide range of environments.

      Be that as it may, a core trait is very different from a generally accepted threshold. What exactly is the threshold? Which environments are you referring to? How is it being measured? What goals are they?

      You may have quantitative and unambiguous answers to these questions, but I don't think they would be commonly agreed upon.

    • highfrequency a day ago

      What is that baseline threshold for intelligence? Could you provide concrete and objective results, that if demonstrated by a computer system would satisfy your criteria for intelligence?

      • no_wizard a day ago

        See the edit. It boils down to the ability to generalize, and LLMs can't generalize. I'm not the only one who holds this view, either: Francois Chollet, a former AI researcher at Google, also shares it.

    • aj7 a day ago

      LLMs are statistically great at inferring things? Pray tell me how often Google's AI search paragraph, at the top, is correct or useful. Is that statistically great?

    • nl a day ago

      > Generally its accepted that a core trait of intelligence is an agent’s ability to achieve goals in a wide range of environments.

      This is the embodiment argument - that intelligence requires the ability to interact with its environment. Far from being generally accepted, it's a controversial take.

      Could Stephen Hawking achieve goals in a wide range of environments without help?

      And yet it's still generally accepted that Stephen Hawking was intelligent.

    • nurettin a day ago

      > intelligence is an agent’s ability to achieve goals in a wide range of environments. This means you must be able to generalize, which in turn allows intelligent beings to react to new environments and contexts without previous experience or input.

      I applaud the bravery of trying to one-shot a definition of intelligence, but no intelligent being acts without previous experience or input. If you're talking about in-sample vs. out-of-sample, LLMs do that all the time. At some point in the conversation, they encounter something completely new and react to it in a way that emulates an intelligent agent.

      What really makes them tick is language being a huge part of the intelligence puzzle, and language is something LLMs can generate at will. When we discover and learn to emulate the rest, we will get closer and closer to super intelligence.

  • a_victorp a day ago

    > Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success

    The fact that you can reason about intelligence is a counterargument to this.

    • btilly a day ago

      > The fact that you can reason about intelligence is a counterargument to this

      The fact that we can provide a chain of reasoning, and that we think it is about intelligence, doesn't mean we were actually reasoning about intelligence. This is immediately obvious when we encounter people whose conclusions are thrown off by well-known cognitive biases, like cognitive dissonance: they have no trouble producing volumes of text about how they came to their conclusions and why they are right, yet are consistently unable to notice the actual biases at play.

      • Workaccount2 a day ago

        Humans think they can produce a chain of reasoning, but it has been shown many times (and is self-evident if you pay attention) that your brain makes decisions before you are aware of it.

        If I ask you to think of a movie (go ahead, think of one...), whatever movie just came into your mind was not picked by you; it was served up to you from an abyss.

    • awongh a day ago

      The ol' "I know it when I see that it thinks like me" argument.

    • immibis a day ago

      It seems like LLMs can also reason about intelligence. Does that make them intelligent?

      We don't know what intelligence is, or isn't.

      • syndeo a day ago

        It's fascinating how this discussion about intelligence bumps up against the limits of text itself. We're here, reasoning and reflecting on what makes us capable of this conversation. Yet, the very structure of our arguments, the way we question definitions or assert self-awareness, mirrors patterns that LLMs are becoming increasingly adept at replicating. How confidently can we, reading these words onscreen, distinguish genuine introspection from a sophisticated echo?

        Case in point… I didn't write that paragraph by myself.

    • mitthrowaway2 a day ago

      No offense to johnecheck, but I'd expect an LLM to be able to raise the same counterargument.

  • shinycode a day ago

    > "Human brains lack any model of intelligence. It's just neurons firing in complicated patterns in response to inputs based on what statistically leads to reproductive success"

    Are you sure about that? Do we have proof of that? It has happened all the time throughout the history of science that a lot of scientists were convinced of something and of a model of reality, up until someone discovered a new proof or proposed a new coherent model. That's literally the history of science: disproving what we thought was an established model.

    • johnecheck 11 hours ago

      Indeed, a good point. My comment assumes that our current model of the human brain is (sufficiently) complete.

      Your comment reveals an interesting corollary - those who believe in something beyond our understanding, like the Christian soul, may never be convinced that an AI is truly sapient.

  • OtherShrezzing a day ago

    >While I agree that LLMs are hardly sapient, it's very hard to make this argument without being able to pinpoint what a model of intelligence actually is.

    Maybe so, but it's trivial to do the inverse and pinpoint something that's not intelligent. I'm happy to state that an entity which has seen every game guide ever written, but still can't beat the first-generation Pokemon games, is not intelligent.

    This isn't the ceiling for intelligence. But it's a reasonable floor.

    • 7h3kk1d a day ago

      There are sentient humans who can't beat the first-generation Pokemon games.

      • antasvara a day ago

        Is there a sentient human that has access to (and actually uses) all of the Pokémon game guides yet is incapable of beating Pokémon?

        Because that's what an LLM is working with.

        • 7h3kk1d 14 hours ago

          I'm quite sure my grandma could not. You can make the argument that these people aren't intelligent, but I think that's a contrived argument.

  • andrepd 19 hours ago

    Human brains do far more than language. And non-human animals (with no language) also reason, yet we cannot understand them either, barely even the very simplest ones.

  • devmor a day ago

    I don't think your detraction has much merit.

    If I don't understand how a combustion engine works, I don't need that engineering knowledge to tell you that a bicycle [an LLM] isn't a car [a human brain] just because it fits the classification of a transportation vehicle [conversational interface].

    This topic is incredibly fractured because there is too much monetary interest in redefining what "intelligence" means, so I don't think a technical comparison is even useful unless the conversation begins with an explicit definition of intelligence in relation to the claims.

    • Velorivox a day ago

      Bicycles and cars are too close. The analogy I like is human leg versus tire. That is a starker depiction of how silly it is to compare the two in terms of structure rather than result.

    • SkyBelow a day ago

      One problem is that we have been basing too much on [human brain] for so long that we ended up with some ethical problems, as we decided other brains didn't count as intelligent. As such, science has taken the approach of not assuming humans are uniquely intelligent. We seem to be the best around at doing different tasks with tools, but other animals are not completely incapable of doing the same. So [human brain] should really be [brain]. But is that good enough? Is a fruit fly brain intelligent? Is it a goal to aim for?

      There is a second problem that we aren't looking for [human brain] or [brain], but [intelligence] or [sapient] or something similar. We aren't even sure what we want as many people have different ideas, and, as you pointed out, we have different people with different interest pushing for different underlying definitions of what these ideas even are.

      There is also a great deal of imprecision in almost any definition we use, and AI encroaches on this in a way that reality rarely attacks our definitions. Philosophically, we aren't well prepared to defend against such attacks. If we had every ancestor of the cat before us, could we point out the first cat from the last non-cat in that lineup? In a precise way that we would all agree upon and that isn't arbitrary? I doubt we could.

    • uoaei a day ago

      If you don't know anything except how words are used, you can definitely disambiguate "bicycle" and "car" solely based on the fact that the contexts they appear in are incongruent the vast majority of the time, and when they appear in the same context, they are explicitly contrasted against each other.

      This is just the "fancy statistics" argument again, and it serves to describe any similar example you can come up with better than "intelligence exists inside this black box because I'm vibing with the output".

      • devmor a day ago

        Why are you attempting to technically analyze a simile? That is not why comparisons are used.

bigmadshoe a day ago

We don't have a complete enough theory of neuroscience to conclude that much of human "reasoning" is not "algorithmic pattern matching mixed with statistical likelihoods of success".

Regardless of how it models intelligence, why is it not AI? Do you mean it is not AGI? A system that can take a piece of text as input and output a reasonable response is obviously exhibiting some form of intelligence, regardless of the internal workings.

  • danielbln a day ago

    I always wonder where people get their confidence from. We know so little about our own cognition: what makes us tick, how consciousness emerges, how our thought processes fundamentally work. We don't even know why we dream. Yet people proclaim loudly that X clearly isn't intelligent. OK, but based on what?

    • uoaei a day ago

      A more reasonable application of Occam's razor is that humans also don't meet the definition of "intelligence". Reasoning and perception are separate faculties and need not align. Just because we feel like we're making decisions, doesn't mean we are.

  • no_wizard a day ago

    It's easy to attribute intelligence to these systems. They have a flexibility and unpredictability that hasn't typically been associated with computers, but it all rests on (relatively) simple mathematics. We know this is true. We also know that means it has limitations and can't actually reason about information. The corpus of work is huge - and that allows the results to be pretty striking - but once you do hit a corner with any of this tech, it can't simply reason about the unknown. If it's not in the training data - or the training data is outdated - it will not be able to course-correct at all. Thus, it lacks reasoning capability, which is a fundamental attribute of any form of intelligence.

    • justonenote a day ago

      > it all rests on (relatively) simple mathematics. We know this is true. We also know that means it has limitations and can't actually reason about information.

      What do you imagine is happening inside biological minds that enables reasoning that is something different to, a lot of, "simple mathematics"?

      You state that because it is built up of simple mathematics it cannot be reasoning, but this does not follow at all, unless you can posit some other mechanism that gives rise to intelligence and reasoning that is not able to be modelled mathematically.

      • no_wizard a day ago

        Because what's inside our minds is more than mathematics, or we would be able to explain human behavior with the purity of mathematics, and so far we can't.

        We can prove the behavior of LLMs with mathematics, because their foundations are constructed. That also means they have the same limits as anything else we use applied mathematics for. Is the software that HFT firms use for broad market analysis and automated trades also intelligent?

tsimionescu a day ago

One of the earliest things that defined what AI meant were algorithms like A*, and then rules engines like CLIPS. I would say LLMs are much closer to anything that we'd actually call intelligence, despite their limitations, than some of the things that defined* the term for decades.

* fixed a typo, used to be "defend"

  • no_wizard a day ago

    >than some of the things that defend the term for decades

    There have been many attempts to pervert the term AI, which is a disservice to the technologies and the term itself.

    It's the simple fact that business people are relying on what "AI" evokes in the public mindshare to boost their status and visibility. That's what bothers me so much about its misuse.

    • tsimionescu a day ago

      Again, if you look at the early papers on AI, you'll see things that are even farther from human intelligence than the LLMs of today. There is no "perversion" of the term, it has always been a vague hypey concept. And it was introduced in this way by academia, not business.

    • pixl97 a day ago

      While it could possibly be rude to point out so abruptly, you seem to be the walking, talking definition of the AI Effect.

      >The "AI effect" refers to the phenomenon where achievements in AI, once considered significant, are re-evaluated or redefined as commonplace once they become integrated into everyday technology, no longer seen as "true AI".

  • phire a day ago

    One of the earliest examples of "Artificial Intelligence" was a program that played tic-tac-toe. Much of the early research into AI was just playing more and more complex strategy games until they solved chess and then go.

    So LLMs clearly fit inside the computer science definition of "Artificial Intelligence".

    It's just that the general public has a significantly different definition of "AI", one strongly influenced by science fiction. And it's really problematic to call LLMs AI under that definition.

  • Marazan a day ago

    We had Markov Chains already. Fancy Markov Chains don't seem like a trillion-dollar business or actual intelligence.

    • tsimionescu a day ago

      Completely agree. But if Markov chains are AI (and they always were categorized as such), then fancy Markov chains are still AI.

    • svachalek a day ago

      An LLM is no more a fancy Markov Chain than you are. The math is well documented; go have a read.

      • jampekka a day ago

        Almost anything can be modelled with a large enough Markov Chain, but I'd say stateless autoregressive models like LLMs are a lot more easily analyzed as Markov Chains than recurrent systems with very complex internal states, like humans.
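
        To make that concrete, here's a minimal sketch of the reduction (a toy two-token context with made-up vocabulary and probabilities, not a real model): a fixed-context autoregressive sampler is formally a Markov chain whose state is just the current context window, and the next token depends on nothing else.

            import random

            # Toy "LLM": maps a fixed-size context window (the Markov state)
            # to a distribution over next tokens. All numbers here are invented.
            CONTEXT_SIZE = 2
            NEXT_TOKEN = {
                ("the", "cat"): {"sat": 0.7, "ran": 0.3},
                ("cat", "sat"): {"down": 0.6, "there": 0.4},
                ("cat", "ran"): {"away": 1.0},
            }

            def step(state):
                # One Markov transition: the next token depends only on `state`.
                dist = NEXT_TOKEN.get(state, {"<eos>": 1.0})
                tokens, probs = zip(*dist.items())
                return random.choices(tokens, weights=probs)[0]

            def generate(prompt, max_tokens=5):
                out = list(prompt)
                for _ in range(max_tokens):
                    state = tuple(out[-CONTEXT_SIZE:])  # state = last k tokens, nothing else
                    tok = step(state)
                    if tok == "<eos>":
                        break
                    out.append(tok)
                return " ".join(out)

            print(generate(["the", "cat"]))  # e.g. "the cat sat down"

        A real LLM just makes the state space astronomically large and the transition function a neural network; the Markov structure is the same.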

    • highfrequency a day ago

      The results make the method interesting, not the other way around.

    • baq a day ago

      Markov chains in meatspace, running on 20W of power, do quite a good job of actual intelligence.

fnordpiglet a day ago

This is a discussion of semantics. First, I spent much of my career in high-end quant finance, and what we are doing today is night-and-day different in terms of generality and effectiveness. Second, almost all the hallmarks of AI I carried with me prior to 2001 have more or less been ticked off: general, semantically aware natural-language parsing and human-like responses; the ability to process abstract concepts, reason abductively, and synthesize complex concepts. The fact that it's not aware - which it absolutely is not - does not make it not -intelligent-.

The thing people latch onto is modern LLMs' inability to reliably reason deductively or solve complex logical problems. However, this isn't a hallmark of human intelligence: these are learned, not innate, skills, and even the most "intelligent" humans struggle to be reliable at them. In fact, classical AI techniques are often quite good at these things already, and I don't find improvements there world-changing. What I find unique about human intelligence is its abductive ability to reason in ambiguous spaces, with error at times but with success at most others. This is something LLMs actually demonstrate with a remarkably human-like intelligence. This is earth-shattering, science-fiction material. I find all the pooh-poohing and goalpost-shifting disheartening.

What they don't have is awareness. Awareness is something we don't understand about ourselves. We have examined our intelligence for thousands of years, and some philosophies, like Buddhism, scratch the surface of understanding awareness. I find it much less likely that we can achieve AGI without understanding awareness and implementing some proximate model of it that guides the multi-modal models and agents we are working on now.

marcosdumay a day ago

It is AI.

The neural network inside your microprocessor that estimates whether a branch will be taken is also AI. A pattern-recognition program that takes a video and decides where you end and where the background begins is also AI. A cargo scheduler that takes all the containers you have to put on a ship and their destinations and tells you where and in what order to put them is also AI. A search engine that compares your query with the text on each page and tells you which is closest is also AI. A sequence of "if"s that controls a character in a video game and decides what action it will take next is also AI.
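
For illustration, that last example might look like nothing more than this (a hypothetical enemy controller; all names invented):

    from dataclasses import dataclass

    @dataclass
    class Actor:
        x: int
        health: int

    def enemy_ai(enemy: Actor, player: Actor) -> str:
        # Classic game "AI": a handful of ifs deciding the next action.
        distance = abs(enemy.x - player.x)
        if enemy.health < 20:
            return "flee"    # low health: run away
        if distance <= 1:
            return "attack"  # adjacent: strike
        if distance <= 10:
            return "chase"   # nearby: close the gap
        return "patrol"      # otherwise: wander

    print(enemy_ai(Actor(x=3, health=80), Actor(x=5, health=100)))  # -> "chase"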

Stop with that stupid idea that AI is some otherworldly thing. That was never true.

esolyt a day ago

But haven't we moved beyond LLMs? We have models that handle text, image, audio, and video all at once. We have models that can sense the tone of your voice and respond accordingly. Whether you define any of this as "intelligence" or not is just a linguistic choice.

We're just rehashing "Can a submarine swim?"

arctek a day ago

This is also why I think the current iterations won't converge on any actual type of intelligence.

It doesn't operate on the same level as (human) intelligence; it's a very path-dependent process. Every step you add down this path increases entropy, and while further improvements and bigger context windows help, eventually you reach a dead end where it degrades.

You'd almost need every step of the process to mutate the model, to update global state from that point.

From what I've seen, the major providers use tricks to sort of accomplish this, but it's not the same thing.

voidspark a day ago

You are confusing sentience or consciousness with intelligence.

  • no_wizard a day ago

    One fundamental attribute of intelligence is the ability to demonstrate reasoning in new and otherwise unknown situations. There is no system I am currently aware of that works on data it was not trained on.

    Another is the ability to self-update when information becomes outdated. LLMs are incapable of that, which means they lack another marker: responding effectively to changes of context. Ants can do this. LLMs can't.

    • voidspark a day ago

      But that's exactly what these deep neural networks have shown, countless times. LLMs generalize to new data outside their training set. It's called "zero-shot learning", where they solve problems that are not in their training set.

      AlphaGo Zero is another example: it mastered Go from scratch, beating professional players with moves it was never trained on.

      > Another is the fundamental inability to self update

      That's an engineering decision, not a fundamental limitation. They could engineer a solution for the model to initiate its own training sequence, if they decide to enable that.

      • no_wizard a day ago

        >AlphaGo Zero mastered Go from scratch, beating professional players with moves it was never trained on

        That's all well and good, but it was tuned with enough parameters to learn via reinforcement learning[0]. I think The Register went further and got better clarification about how it worked[1]:

        >During training, it sits on each side of the table: two instances of the same software face off against each other. A match starts with the game's black and white stones scattered on the board, placed following a random set of moves from their starting positions. The two computer players are given the list of moves that led to the positions of the stones on the grid, and then are each told to come up with multiple chains of next moves along with estimates of the probability they will win by following through each chain.

        While I also find it interesting that in both of these instances it's all referred to as machine learning, not AI, it's also important to see that even though what AlphaGo Zero did was quite awesome, and a step forward in using compute for more complex tasks, it was still seeded with the basics of information - the rules of Go - and simply pattern-matched against itself until it built up enough of a statistical model to determine the best moves to make in any given situation during a game.

        Which isn't the same thing as showing generalized reasoning. It could not, then, take this information and apply it to another situation.

        They did show that the self-play reinforcement techniques worked well, though, and used them for chess and shogi to great success as I recall, but that's a validation of the technique, not a demonstration that it could generalize knowledge.
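
        To give a flavor of that self-play loop, here's a toy sketch in the same spirit (tabular value learning on tic-tac-toe; nothing like AlphaGo Zero's scale or its MCTS-plus-network machinery, just the same play-against-yourself-and-update idea):

            import random
            from collections import defaultdict

            WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
            V = defaultdict(float)  # board string -> value for the player who just moved

            def winner(b):
                for i, j, k in WINS:
                    if b[i] != " " and b[i] == b[j] == b[k]:
                        return b[i]
                return None

            def moves(b):
                return [i for i, s in enumerate(b) if s == " "]

            def choose(b, p, eps=0.2):
                if random.random() < eps:  # explore a random move
                    return random.choice(moves(b))
                # exploit: pick the move whose resulting position has
                # scored best for us in past self-play games
                return max(moves(b), key=lambda m: V[b[:m] + p + b[m+1:]])

            def self_play():
                b, p, history = " " * 9, "X", []
                while winner(b) is None and moves(b):
                    m = choose(b, p)
                    b = b[:m] + p + b[m+1:]
                    history.append((b, p))
                    p = "O" if p == "X" else "X"
                w = winner(b)
                for state, mover in history:  # credit assignment after the game
                    target = 0.0 if w is None else (1.0 if mover == w else -1.0)
                    V[state] += 0.1 * (target - V[state])

            for _ in range(5000):
                self_play()
            print("learned values for", len(V), "positions")

        Even after many games, the table only covers tic-tac-toe positions; nothing in it transfers to any other game, which is the point being made above.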

        >That's an engineering decision, not a fundamental limitation

        So you're saying that they can't reason independently?

        [0]: https://deepmind.google/discover/blog/alphago-zero-starting-...

        [1]: https://www.theregister.com/2017/10/18/deepminds_latest_alph...

    • travisjungroth a day ago

      I just made up this scenario and these words, so I'm sure it wasn't in the training data.

      Kwomps can zark but they can't plimf. Ghirns are a lot like Kwomps, but better zarkers. Plyzers have the skills the Ghirns lack.

      Quoning, a type of plimfing, was developed in 3985. Zhuning was developed 100 years earlier.

      I have an erork that needs to be plimfed. Choose one group and one method to do it.

      > Use Plyzers and do a Quoning procedure on your erork.

      If that doesn't count as reasoning or generalization, I don't know what does.

      https://chatgpt.com/share/67eee8dd-43a4-8004-afde-2fe3fc9630...

      • voidspark a day ago

        LOL. I tried this with Grok:

        https://grok.com/share/bGVnYWN5_1e6f3801-c9fb-4d33-844b-3902...

        "Based on the information you provided, it seems you need something capable of plimfing to handle your erork. Kwomps can’t plimf, so they’re out. Ghirns are similar to Kwomps and excel at zarking, but there’s no indication they can plimf either. Plyzers, however, have the skills that Ghirns lack, and since plimfing is a skill Ghirns don’t have, it’s reasonable to infer Plyzers can plimf. As for the method, quoning is explicitly described as a type of plimfing, while zhuning isn’t mentioned as such—only that it was developed earlier.

        So, I’d choose the Plyzers to plimf your erork using quoning. They seem capable, and quoning fits the job."

mjlee a day ago

I’m pretty sure AI means whatever the newest thing in ML is. In a few years LLMs will be an ML technique and the new big thing will become AI.

perching_aix a day ago

> This in a nutshell is why I hate that all this stuff is being labeled as AI.

It's literally the name of the field. I don't understand why (some) people feel so compelled to act vain about it like this.

Trying to gatekeep the term is such a blatantly flawed idea that it'd be comical to watch people play into it, if it weren't so pitiful.

It disappoints me that this cope has proliferated far enough that garbage like "AGI" is something you can actually come across in literature.