Comment by roenxi 2 days ago

Has anyone come up with a definition of AGI where humans are near-universally capable of GI? These articles seem to be slowly pushing the boundaries past the point where slower humans are disbarred from intelligence.

Many years ago I bumped into Towers of Hanoi in a computer game and failed to solve it algorithmically, so I suppose I'm lucky I only work a knowledge job rather than an intelligence-based one.

cardanome 2 days ago

People confuse performance and internal representation.

A simple calculator is vastly better at adding numbers than any human. A chess engine will rival any human grandmaster. No one would say that this got us closer to AGI.

We could absolutely see LLMs that produce poetry that humans cannot tell apart from human-made poetry, or even prefer to it. We could have LLMs that are perfectly able to convince humans that they have consciousness and emotions.

Would we have achieved AGI then? Does that mean those LLMs have gotten consciousness and emotions? No.

The question of consciousness is based on what is going on on the inside, how the reasoning is happening, and not the output. In fact, the first AGI might perform significantly worse at most tasks than current LLMs.

LLMs are extremely impressive but they are not thinking. They do not have consciousness. It might be technically impossible for them to develop anything like that or at least it would require significantly bigger models.

> where slower humans are disbarred from intelligence

Humans have value for being humans. Whether they are slow or fast at thinking. Whether they are neurodivergent or neurotypical. We all have feelings, we are all capable of suffering, we are all alive.

See also the problems with AI Welfare research: https://substack.com/home/post/p-165615548

  • saberience 2 days ago

    The problem with your argument is the idea that there is this special thing called "consciousness" that humans have and AI "doesn't".

    Philosophers, scientists, thinkers have been trying to define "consciousness" for 100+ years at this point and no one has managed to either a) define it, or b) find ways to test for it.

    Saying we have "consciousness" and AI "doesn't" is like saying we have a soul, a ghost in the machine, and AI doesn't. Do we really have a ghost in the machine? Or are we really just a big deterministic machine that we don't fully understand yet, rather like AI?

    So before you assert that we are "conscious", you should first define what you mean by that term and how we test for it conclusively.

    • staticman2 2 days ago

      Before you assert nobody has defined consciousness you should maybe consult the dictionary?

      • saberience a day ago

        Are you trying to misunderstand me purposefully?

        I'm talking about a precise, technical, scientific definition that scientists all agree on, which doesn't rely on the definitions of other words and can also be reliably tested.

        There has been constant debate about what consciousness means among scientists, philosophers, psychologists for as long as the word has existed. And there has never been any consistent and agreed upon test for consciousness.

        The Google definition of consciousness is: "the state of being aware of and responsive to one's surroundings."

        By that definition, a Tesla self driving car is conscious, it is aware of and responsive to its surroundings...

  • 542354234235 2 days ago

    >The question of consciousness is based on what is going on on the inside, how the reasoning is happening, and not the output.

    But we don’t really understand how the reasoning is happening in humans. Tests show that our subconscious, completely outside our conscious understanding, makes decisions before we perceive that we consciously decide something [1]. Our consciousness is the output, but we don’t really know what is running in the subconscious. If something looked at it from an outside perspective, would they say that it was just unconscious programming, giving the appearance of conscious reasoning?

    I’m not saying LLMs are conscious. But since we don’t really know what gives us the feeling of consciousness, and we didn’t build and don’t understand the underlying “programming”, it is hard to actually judge a non-organic mind that claims the feeling of consciousness. If you found out today that you were actually a computer program, would you say you weren’t conscious? Would you be able to convince “real” people that you were conscious?

    [1] https://qz.com/1569158/neuroscientists-read-unconscious-brai...

    • cardanome 2 days ago

      My point was that we can't prove that LLMs have consciousness. Yes, the reverse is also true. It is possible that we wouldn't really be able to tell if an AI gained consciousness, as that might look very different from what we expect.

      An important standard for any scientific theory or hypothesis is that it be falsifiable. Good old Russell's teapot: we can't disprove that a teapot, too small to be seen by telescopes, orbits the Sun somewhere in space between the Earth and Mars. So should we assume it is true? No, the burden of proof lies on those who make the claim.

      So yes, I can't 100 percent disprove that certain LLMs show signs of consciousness, but that is reversing the burden of proof. Those that make the claim that LLMs are capable of suffering, that they show signs of consciousness, need to deliver. If they can't, it is reasonable to assume they are full of shit.

      People here accuse me of being scholastic and too philosophical, but the reverse is true. Yes, we barely know how human brains work and how consciousness evolved, but whoever doesn't see the qualitative difference between a human being and an LLM really needs to touch grass.

      • 542354234235 a day ago

        I am not saying that LLMs are conscious. What I am saying is that since we don’t really understand what gives rise to our subjective feeling of consciousness, evaluating a non-organic mind is difficult.

        For instance, say we had a Westworld-type robot that perceived pain, pleasure, happiness, and sadness, and reacted accordingly, but we understood its underlying program. Would we say it wasn’t conscious? If we understood our own underlying programming, would we not be “really conscious”?

        We say that LLMs “fake” empathy or feelings, but at some point digital minds will be faking it in a complex way that involves inner “thoughts”, motivations that they “perceive” internally as positive and negative, and various other subjective experiences. It gets very squishy trying to explain why our consciousness isn’t just a fake abstraction on top of the unconscious program while a digital mind’s abstractions are.

      • Tadpole9181 2 days ago

        In one breath: scientific rigor required for your opposition.

        In the next breath: "anyone who disagrees with me is a loser."

        > Those that make the claim that LLMs are capable of suffering, that they show signs of consciousness, need to deliver. If they can't, it is reasonable to assume they are full of shit.

        Replace LLM with any marginalized group. Black people, Jews, etc. I can easily just use this to excuse any heinous crime I want - because you cannot prove that you aren't a philosophical zombie to me.

        Defaulting to cruelty in the face of unfalsifiability is absurd.

  • _aavaa_ 2 days ago

    Consciousness is irrelevant to discussions of intelligence (much less AGI) unless you pick a circular definition for both.

    This is “how many angels dance on the head of a pin” territory.

  • cwillu 2 days ago

    I would absolutely say both the calculator and strong chess engines brought us closer.

  • mcv a day ago

    The best argument I've heard for why LLMs aren't there yet, is that they don't have a real world model. They only interact with text and images, and not with the real world. They have no concept of the real world, and therefore also no real concept of truth. They learn by interacting with text, not with the world.

    I don't know if that argument is true, but it does make some sense.

    In fact, I think you might argue that modern chess engines have more of a world model (although an extremely limited one): they interact with the chess game. They learn not merely by studying the rules, but by playing the game millions of times. Of course that's only the "world" of the chess game, but it's something, and as a result, they know what works in chess. They have a concept of truth within the chess rules. Which is super limited of course, but it might be more than what LLMs have.

    • lostmsu a day ago

      It doesn't make any sense. You aren't interacting with neutrinos either. Nothing really beyond some local excitations of electric fields and EM waves in a certain frequency range.

      • mcv 17 hours ago

        What do neutrinos have to do with this?

  • virgilp 2 days ago

    > Does that mean those LLMs have gotten consciousness and emotions? No.

    Is this a belief statement, or a provable one?

    • Lerc 2 days ago

      I think it is clearly true that it doesn't show that they have consciousness and emotions.

      The problem is that people assume that failing to show that they do means that they don't.

      It's very hard to show that something doesn't have consciousness. Try and conclusively prove that a rock does not have consciousness.

      • virgilp 2 days ago

        The problem with consciousness is kinda the same as the problem with AGI. Trying to prove that someone/something has or does not have consciousness is largely, as a commenter said, the same as debating whether or not it has plipnikop, i.e. something that is not well defined or understood, and that may mean different things to different people.

        I think it's even hard to conclusively prove that LLMs don't have any emotions. They can definitely express emotions (they typically don't, but largely because they're trained/tuned to avoid expressing emotions). Now, are those fake? Maybe, most likely even... but not "clearly" (or provably) so.

parodysbird 2 days ago

The original Turing Test was one of the more interesting standards... An expert judge talks with two subjects in order to determine which is the human: one is a human who knows the point of the test, and one is a machine trying to fool the judge into being no better than a coin flip at correctly choosing who was human. Allow for many judges and experience with each, etc.

The brilliance of the test, which was strangely lost on Turing, is that it is unlikely to be passed with any enduring consistency. Intelligence is actually more of a social description. Solving puzzles, playing tricky games, etc. is only intelligent if we agree that the actor involved faces normal human constraints or more. We don't actually think machines fulfill that (they obviously do not; that's why we build them: to overcome our own constraints), and so this is why calculating logarithms or playing chess ultimately do not end up counting as actual intelligence when a machine does them.

Asraelite 2 days ago

Years ago when online discussion around this topic was mostly done by small communities talking about the singularity and such, I felt like there was a pretty clear definition.

Humans are capable of consistently making scientific progress. That means being taught knowledge about the world by their ancestors, performing new experiments, and building upon that knowledge for future generations. Critically, there doesn't seem to be an end to this for the foreseeable future for any field of research. Nobody is predicting that all scientific progress will halt in a few decades because after a certain point it becomes too hard for humans to understand anything, although that probably would eventually become true.

So an AI with at least the same capabilities as a human would be able to do any type of scientific research, including research into AI itself. This is the "general" part: no matter where the research takes it, it must always be able to make progress, even if slowly. Once such an AI exists, the singularity begins.

I think the fact that AI is now a real thing with a tangible economic impact has drawn the attention of a lot of people who wouldn't have otherwise cared about the long-term implications for humanity of exponential intelligence growth. The question that's immediately important now is "will this replace my job?" and so the definitions of AGI that people choose to use are shifting more toward definitions that address those questions.

morsecodist 2 days ago

AGI is a marketing term. It has no consistent definition. It's not very useful when trying to reason about AI's capabilities.

littlestymaar 2 days ago

> These articles seem to be slowly pushing the boundaries past the point where slower humans are disbarred from intelligence.

It's not really pushing boundaries: a non-trivial number of humans has always been excluded from the definition of “human intelligence” (and with the ageing of the population, this number is only going up), and it makes sense, much like how you don't consider blind individuals when comparing human sight to other animals'.

suddenlybananas 2 days ago

Someone who can reliably solve Towers of Hanoi with n=4 and who has been told the algorithm should be able to do it with n=6, 7, 8. Don't forget, these models aren't learning how to do it from scratch the way a child might.
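
For reference, a minimal sketch of that recursive procedure (in Python; the function name and peg labels are just illustrative), whose steps don't change as n grows:

    def hanoi(n, src, dst, aux):
        # Move n disks from src to dst, using aux as the spare peg.
        if n == 0:
            return []
        return (hanoi(n - 1, src, aux, dst)     # clear the top n-1 disks out of the way
                + [(src, dst)]                  # move the largest disk
                + hanoi(n - 1, aux, dst, src))  # put the n-1 disks back on top of it

    print(len(hanoi(6, "A", "C", "B")))  # 63 moves, i.e. 2**6 - 1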

James_K 2 days ago

It may genuinely be the case that slower humans are not generally intelligent. But that sounds rather snobbish so it's not an opinion I'd like to express frequently.

I think the complaint made by Apple is quite logical though, and you mischaracterise it here. The question asked in the Apple study was "if I give you the algorithm that solves a puzzle, can you solve that puzzle?" The answer for most humans should be yes. Indeed, the answer is yes for computers which are not generally intelligent. Models failed to execute the algorithm. This suggests that the models are far inferior to the human mind in terms of their computational ability, which precedes general intelligence if you ask me. It seems to indicate that the models are using more of a "guess and check" approach than actually thinking (see the sketch after this comment). (A specifically interesting result was that model performance did not substantially improve between a puzzle with the solution algorithm given, and one where no algorithm was given.)

You can sort of imagine the human mind as the head of a Turing Machine which operates on language tokens, and the goal of an LLM is to imitate the internal logic of that head. This paper seems to demonstrate that they are not very good at doing that. It makes a lot of sense when you think about it, because the models work by consuming their entire input at once, whereas the human mind operates with only a small working memory. A fundamental architectural difference which I suspect is the cause of the collapse noted in the Apple paper.
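
To make "executing the algorithm" versus "guess and check" concrete, here is a small sketch (in Python; the names and structure are my own illustration, not the study's actual harness) of how a produced move sequence can be mechanically checked for legality and success:

    def solves_hanoi(n, moves):
        # Pegs hold disk sizes, bottom to top; larger numbers are larger disks.
        pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
        for src, dst in moves:
            if not pegs[src]:
                return False                       # tried to move from an empty peg
            if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
                return False                       # can't put a larger disk on a smaller one
            pegs[dst].append(pegs[src].pop())
        return pegs["C"] == list(range(n, 0, -1))  # all disks ended up on the target peg

    print(solves_hanoi(2, [("A", "B"), ("A", "C"), ("B", "C")]))  # True

A system that actually executes the given algorithm should pass this kind of check for any n; a guess-and-check approach tends to break down as the move count grows.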

  • GrayShade 2 days ago

    I think a human will struggle to solve Hanoi using the recursive algorithm for even 6 disks, even given pen and paper.

    Does that change if you give them the algorithm description? No. Conversely, the LLMs already know the algorithm, so including it in the prompt makes no difference.

    • thaumasiotes 2 days ago

      > I think a human will struggle to solve Hanoi using the recursive algorithm for even 6 disks, even given pen and paper.

      Why? The whole point of the recursive algorithm is that it doesn't matter how many discs you're working with.

      The ordinary children's toys that implement the puzzle are essentially always sold with more than 6 discs.

      https://www.amazon.com/s?k=towers+of+hanoi

      • GrayShade 2 days ago

        The recursive solution has a stack depth proportional to the number of disks. That's three pieces of data (two pegs and how many disks to move) for each recursive call, so for 6 disks the "stack" will contain up to around 15 values, which is generally more than an unaided human can track.

        In addition, 63-255 moves (for 6-8 disks) is quite a lot, and I suspect people will generally lose focus before completing them.
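
        Those numbers are easy to check with a sketch like the one below (Python, names illustrative): it runs the recursion while counting moves and tracking how many pending calls must be remembered at once.

          def hanoi_stats(n, src="A", dst="C", aux="B", depth=1, stats=None):
              # Count moves and track the deepest recursion level, i.e. how many
              # pending (disk count, source, target) frames exist at once.
              if stats is None:
                  stats = {"moves": 0, "max_depth": 0}
              stats["max_depth"] = max(stats["max_depth"], depth)
              if n == 1:
                  stats["moves"] += 1
                  return stats
              hanoi_stats(n - 1, src, aux, dst, depth + 1, stats)
              stats["moves"] += 1
              hanoi_stats(n - 1, aux, dst, src, depth + 1, stats)
              return stats

          for n in (6, 7, 8):
              print(n, hanoi_stats(n))
          # 6 {'moves': 63, 'max_depth': 6}
          # 7 {'moves': 127, 'max_depth': 7}
          # 8 {'moves': 255, 'max_depth': 8}

        With roughly three values per pending call, that is on the order of 15-18 values to hold at once for 6 disks, in line with the estimate above.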