Comment by cardanome 2 days ago

People confuse performance with internal representation.

A simple calculator is vastly better at adding numbers than any human. A chess engine will rival any human grandmaster. No one would say that this got us closer to AGI.

We could absolutely see LLMs that produce poetry that humans cannot tell apart from human-made poetry, or even prefer to it. We could have LLMs that are perfectly able to convince humans that they have consciousness and emotions.

Would we have achieved AGI then? Does that mean those LLMs have gotten consciousness and emotions? No.

The question of consciousness is about what is going on inside, how the reasoning happens, not the output. In fact, the first AGI might perform significantly worse at most tasks than current LLMs.

LLMs are extremely impressive, but they are not thinking. They do not have consciousness. It might be technically impossible for them to develop anything like that, or at least it would require significantly bigger models.

> where slower humans are disbarred from intelligence

Humans have value for being humans. Whether they are slow or fast at thinking. Whether they are neurodivergent or neurotypical. We all have feelings, we are all capable of suffering, we are all alive.

See also the problems with AI Welfare research: https://substack.com/home/post/p-165615548

saberience 2 days ago

The problem with your argument is the idea that there is this special thing called "consciousness" that humans have and AI "doesn't".

Philosophers, scientists, thinkers have been trying to define "consciousness" for 100+ years at this point and no one has managed to either a) define it, or b) find ways to test for it.

Saying we have "consciousness" and AI "doesn't" is like saying we have a soul, a ghost in the machine, and AI doesn't. Do we really have a ghost in the machine? Or are we really just a big deterministic machine that we don't fully understand yet, rather like AI?

So before you assert that we are "conscious", you should first define what you mean by that term and how we test for it conclusively.

  • staticman2 2 days ago

    Before you assert nobody has defined consciousness you should maybe consult the dictionary?

    • saberience 2 days ago

      Are you trying to misunderstand me purposefully?

      I'm talking about a precise, technical, scientific definition that scientists all agree on, which doesn't rely on the definitions of other words and can be reliably tested.

      There has been constant debate about what consciousness means among scientists, philosophers, psychologists for as long as the word has existed. And there has never been any consistent and agreed upon test for consciousness.

      The Google definition of consciousness is: "the state of being aware of and responsive to one's surroundings."

      By that definition, a Tesla self-driving car is conscious: it is aware of and responsive to its surroundings...

      • staticman2 2 days ago

        If you meant a scientific definition why do you keep mentioning philosophy?

        An LLM tells me references such as the Oxford Dictionary of Science probably include a definition of consciousness, but I suppose that would be behind a paywall, so I can't verify it.

        Of course you are demanding one that "scientists all agree on", which is an impossibly high bar, so I don't think anyone is going to meet you there.

        • lostmsu 2 days ago

          Because it is clear the comment claiming "AIs have no consciousness" did not mean that dictionary definition, which is exactly the issue.

542354234235 2 days ago

>The question of consciousness is about what is going on inside, how the reasoning happens, not the output.

But we don’t really understand how the reasoning happens in humans. Tests show that our subconscious, completely outside our conscious awareness, makes decisions before we perceive that we consciously decide something [1]. Our consciousness is the output, but we don’t really know what is running in the subconscious. If something looked at it from an outside perspective, would they say that it was just unconscious programming, giving the appearance of conscious reasoning?

I’m not saying LLMs are conscious. But since we don’t really know what gives us the feeling of consciousness, and we didn’t build and don’t understand the underlying “programming”, it is hard to actually judge a non-organic mind that claims the feeling of consciousness. If you found out today that you were actually a computer program, would you say you weren’t conscious? Would you be able to convince “real” people that you were conscious?

[1] https://qz.com/1569158/neuroscientists-read-unconscious-brai...

  • cardanome 2 days ago

    My point was that we can't prove that LLMs have consciousness. Yes, the reverse is also true. It is possible that we wouldn't really be able to tell if an AI gained consciousness, as that might look very different from what we expect.

    An important standard for any scientific theory or hypothesis is to be falsifiable. Good old Russell's teapot: we can't disprove that a teapot too small to be seen by telescopes orbits the Sun somewhere in space between the Earth and Mars. So should we assume it is true? No, the burden of proof lies on those who make the claim.

    So yes, I can't 100 percent disprove that certain LLMs show signs of consciousness, but that is reversing the burden of proof. Those who claim that LLMs are capable of suffering, that they show signs of consciousness, need to deliver. If they can't, it is reasonable to assume they are full of shit.

    People here accuse me of being scholastic and too philosophical, but the reverse is true. Yes, we barely know how human brains work and how consciousness evolved, but whoever doesn't see the qualitative difference between a human being and an LLM really needs to touch grass.

    • 542354234235 a day ago

      I am not saying that LLMs are conscious. What I am saying is that since we don’t really understand what gives rise to our subjective feeling of consciousness, evaluating a non-organic mind is difficult.

      For instance, say we had a Westworld-type robot that perceived pain, pleasure, happiness, sadness, and reacted accordingly, but we understood the underlying program. Would we say it wasn’t conscious? If we understood our own underlying programming, would we not be “really conscious”?

      We say LLMs “fake” empathy or feelings, but at some point digital minds will be faking it in a complex way that involves inner “thoughts”, motivations that they “perceive” internally as positive and negative, and various other subjective experiences. It gets very squishy trying to explain how our consciousness isn’t just a fake abstraction on top of the unconscious program, while a digital mind’s abstractions are fake.

    • Tadpole9181 2 days ago

      In one breath: scientific rigor required of your opposition.

      In the next breath: "anyone who disagrees with me is a loser."

      > Those who claim that LLMs are capable of suffering, that they show signs of consciousness, need to deliver. If they can't, it is reasonable to assume they are full of shit.

      Replace LLM with any marginalized group. Black people, Jews, etc. I can easily just use this to excuse any heinous crime I want - because you cannot prove that you aren't a philosophical zombie to me.

      Defaulting to cruelty in the face of unfalsifiability is absurd.

      • suddenlybananas 2 days ago

        >Replace LLM with any marginalized group. Black people, Jews, etc. I can easily just use this to excuse any heinous crime I want - because you cannot prove that you aren't a philosophical zombie to me.

        This is so flatly ridiculous an analogy that it becomes racist itself. Maybe the bread I eat is conscious and feels pain (the ancient Manichaeans thought so!). Are you now going to refrain from eating bread in case it causes suffering? You can't prove bread doesn't feel pain, you might be "defaulting to cruelty"!

_aavaa_ 2 days ago

Consciousness is irrelevant to discussions of intelligence (much less AGI) unless you pick a circular definition for both.

This is “how many angels dance on the head of a pin” territory.

cwillu 2 days ago

I would absolutely say both the calculator and strong chess engines brought us closer.

mcv 2 days ago

The best argument I've heard for why LLMs aren't there yet is that they don't have a real world model. They only interact with text and images, not with the real world. They have no concept of the real world, and therefore no real concept of truth. They learn by interacting with text, not with the world.

I don't know if that argument is true, but it does make some sense.

In fact, I think you might argue that modern chess engines might have more of a world model (although an extremely limited one): they interact with the chess game. They learn not merely by studying the rules, but by playing the game millions of times. Of course that's only the "world" of the chess game, but it's something, and as a result, they know what works in chess. They have a concept of truth within the chess rules. Which is super limited of course, but it might be more than what LLMs have.

  • lostmsu 2 days ago

    It doesn't make any sense. You aren't interacting with neutrinos either. Nothing, really, beyond some local excitations of electric fields and EM waves in a certain frequency range.

    • mcv 20 hours ago

      What do neutrinos have to do with this?

virgilp 2 days ago

> Does that mean those LLMs have gotten consciousness and emotions? No.

Is this a belief statement, or a provable one?

  • Lerc 2 days ago

    I think it is clearly true that it doesn't show that they have consciousness and emotions.

    The problem is that people assume that failing to show that they do means that they don't.

    It's very hard to show that something doesn't have consciousness. Try and conclusively prove that a rock does not have consciousness.

    • virgilp 2 days ago

      The problem with consciousness is kinda the same as the problem with AGI. Trying to prove that someone/something has or does not have consciousness is largely, as a commenter said, the same as debating whether or not it has plipnikop. I.e., something that is not well defined or understood, that may mean different things to different people.

      I think it's even hard to conclusively prove that LLMs don't have any emotions. They can definitely express emotions (they typically don't, but largely because they're trained/tuned to avoid expressing them). Now, are those fake? Maybe, most likely even... but not "clearly" (or provably) so.