Comment by intull 2 days ago

I think what comment-OP above means to point at is this: given what we know (or don't know) about awareness, consciousness, intelligence, and the like, let alone the human experience of them, we currently have no way to scientifically rule out the possibility that LLMs are self-aware or conscious entities in their own right; and that's before we even start arguing about their "intelligence", however that is understood.

What we do know so far, across disciplines, and from the fact that neural nets are modeled after what we've learned about the human brain, is that it isn't an impossibility to propose that LLMs _could_ be more than just "token prediction machines". There may be 10,000 ways of arguing that they are indeed simply that, but there are also a few ways of arguing that they could be more than they seem. We can talk about probabilities, but we cannot yet make a definitive case one way or the other, scientifically speaking. Those few arguments are worth not ignoring or dismissing.

sfn42 a day ago

> we do not have a way to scientifically rule out the possibility that LLMs are self-aware/conscious entities of their own

That may be. We also don't have a way to scientifically rule out the possibility that a teapot is orbiting Pluto.

Just because you can't disprove something doesn't make it plausible.

  • intull 20 hours ago

    Is this what we are reduced to now, to snap back with a wannabe-witty remark just because you don't like how an idea sounds? Have we completely forgotten and given up on good-faith scientific discourse? Even on HN?

    • sfn42 17 hours ago

      I'm happy to participate in good faith discourse but honestly the idea that LLMs are conscious is ridiculous.

We are talking about a computer program. It does nothing until it is invoked with an input, and then it produces a deterministic output unless a random component is deliberately added to the sampling step.

      That's all it does. It does not live a life of its own between invocations. It does not have a will of its own. Of course it isn't conscious lol how could anyone possibly believe it's conscious? It's an illusion. Don't be fooled.
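The determinism point above can be sketched concretely. This is a minimal toy illustration, not an actual LLM: the next-token distribution below is made up for the example. Greedy decoding always returns the same token for the same distribution, while temperature sampling only varies because an explicit random source is injected:

```python
import math
import random

def greedy(logprobs):
    # Greedy decoding: always pick the highest-probability token.
    # Same input distribution -> same output, every invocation.
    return max(logprobs, key=logprobs.get)

def sample(logprobs, temperature, rng):
    # Temperature sampling: randomness comes only from the rng we
    # pass in; temperature flattens or sharpens the distribution.
    weights = {t: math.exp(lp / temperature) for t, lp in logprobs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for tok, w in weights.items():
        acc += w
        if r <= acc:
            return tok
    return tok  # guard against floating-point edge cases

# Hypothetical toy next-token log-probabilities (illustrative only).
dist = {"cat": math.log(0.6), "dog": math.log(0.3), "bird": math.log(0.1)}

# Deterministic: repeated calls give an identical result.
outputs = {greedy(dist) for _ in range(10)}

# Non-deterministic only because we supplied a random component.
rng = random.Random(0)
samples = {sample(dist, temperature=1.0, rng=rng) for _ in range(100)}
```

With a fixed seed even the sampling run is reproducible, which is the sense in which the program, by itself, has no spontaneity between invocations.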

imiric a day ago

I agree with that.

But the problem is the narrative around this tech. It is marketed as if we have accomplished a major breakthrough in modeling intelligence. Companies are built on illusions and promises that AGI is right around the corner. The public is being deluded into thinking that the current tech will cure diseases, solve world hunger, and bring worldwide prosperity, when all we have achieved is to throw large amounts of data at a statistical trick that sometimes produces interesting patterns. Which isn't to say this can't be useful, but it is a far cry from what is being suggested.

> We can talk about probabilities, but not make a definitive case one way or the other yet, scientifically speaking.

Precisely. But the burden of proof is on the author. They're telling us this is "intelligence", and because the term is so loosely defined, this can't be challenged in either direction. It would be more scientifically honest and accurate to describe what the tech actually is and does, instead of ascribing human-like qualities to it. But that won't make anyone much money, so here we are.