Comment by esafak 2 days ago

Interesting stuff. I don't have time to read a dissertation so I skimmed his latest paper instead: Why Is Anything Conscious? https://arxiv.org/abs/2409.14545

In it he proposes a hierarchy of consciousness with six levels (0-5):

0 : Inert (e.g. a rock)

1 : Hard Coded (e.g. protozoan)

2 : Learning (e.g. nematode)

3 : First Order Self (e.g. housefly). Where phenomenal consciousness, or subjective experience, begins. https://en.wikipedia.org/wiki/Consciousness#Types

4 : Second Order Selves (e.g. cat). Where access consciousness begins. Theory of mind. Self-awareness. Inner narrative. Anticipating the reactions of predator or prey, or navigating a social hierarchy.

5 : Third Order Selves (e.g. human). The ability to model the internal dialogues of others.

The paper claims to dissolve the hard problem of consciousness (https://en.wikipedia.org/wiki/Hard_problem_of_consciousness) by reversing the traditional approach. Instead of starting with abstract mental states, it begins with the embodied biological organism. The authors argue that understanding consciousness requires focusing on how organisms self-organize to interpret sensory information based on valence (https://en.wikipedia.org/wiki/Valence_(psychology)).

The claim is that phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.

The paper does not seem to elaborate on how to assess which stage the organism belongs to, and to what degree. This is the more interesting question to me. One approach is IIT: http://www.scholarpedia.org/article/Integrated_information_t...

The author's web site: https://michaeltimothybennett.com/

root_axis a day ago

> The claim is that phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.

This doesn't really address the hard problem, it just asserts that the hard problem doesn't exist. The meat of the problem is that subjective experience exists at all, even though in principle there's no clear reason why it should need to.

Simply declaring it as functional is begging the question.

For example, we can imagine a hypothetical robot that could remove its hand from a stove if its sensors determine that the surface is too hot. We don't need subjective experience to explain how a system like that could be designed, so why do we need it for an organism?
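
For concreteness, here's a minimal sketch in Python of the reflex described above (all names and the threshold are made up for illustration): a sensor read, a comparison, an actuator call. Nothing in the loop requires or implies subjective experience, which is the intuition behind the hard problem.

    PAIN_THRESHOLD_C = 60.0  # arbitrary cutoff for "too hot"

    def read_surface_temperature() -> float:
        """Stand-in for a real sensor read; returns degrees Celsius."""
        return 75.0

    def withdraw_hand() -> None:
        print("actuator: retracting hand")

    def control_loop_step() -> None:
        # Purely functional stimulus-response: no inner experience anywhere.
        if read_surface_temperature() > PAIN_THRESHOLD_C:
            withdraw_hand()

    control_loop_step()  # prints "actuator: retracting hand"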

  • simonh a day ago

    A claim is not an assertion. I don’t see any assertion here that the hard problem doesn’t exist, just the expression of a belief that it may be solvable and an outline of maybe how.

    > Simply declaring it as functional is begging the question.

    Nobody is ‘declaring’ any such thing. I loathe this kind of lazy pejorative attack accusing someone of asserting, declaring something, just for having the temerity to offer a proposed explanation you happen to disagree with.

    What your last paragraph is saying is that stage 1 isn’t conscious therefore stage 5 isn’t. To argue against stage 5 you need to actually address stage 5, against which there are plenty of legitimate lines of criticism.

    • root_axis a day ago

      > Nobody is ‘declaring’ any such thing

      Yes, they are.

      > The claim is that phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.

      They're explicitly defining the hard problem out of existence.

      > I loathe this kind of lazy pejorative attack accusing someone of asserting

      Take it easy. Nothing I wrote here is a "pejorative attack"; I'm directly addressing what was written by the OP.

phrotoma 2 days ago

Dang, this is great stuff. You may enjoy this piece that tackles similar themes but focuses on what use evolution has for consciousness.

My reading of it is that the author suggests global workspace theory is a plausible reason for evolution to spend so much time and energy developing phenomenal consciousness.

https://www.frontiersin.org/journals/psychology/articles/10....

lordnacho a day ago

First of all, how is 5 different from 4? Modelling the internal monologue of someone else is Theory of Mind, isn't it?

Next, we gotta ask ourselves, could you have substrate independence? A thing that isn't biological, but can model other level-5 creatures?

My guess is yes. There's all sorts of other substrate independence.

  • pengstrom a day ago

    My stab:

    2: Implicit world. Reacted to, but not modeled.

    3: Explicit world, and your separation from it.

    4: A model that includes other level-3 intelligences you have to take into consideration. World resources can be shared or competed for.

    5: Language. A model of others as yourself, whose models include yours too. Mutual recursion. Information can be transmitted mind-to-mind.

    • blamestross 21 hours ago

      Looks a lot like the game theory concept of the "level" of a player (how many levels of metacognition in others it can practically take into account).
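
      A toy illustration of what "level" means here, assuming the classic "guess 2/3 of the average" game as the example: a level-0 player guesses 50 on average, and each higher level best-responds to players one level below.

          def level_k_guess(k: int, level0_guess: float = 50.0) -> float:
              # Each level best-responds to opponents assumed to be one level lower.
              guess = level0_guess
              for _ in range(k):
                  guess *= 2.0 / 3.0
              return guess

          for k in range(4):
              print(k, round(level_k_guess(k), 2))  # 50.0, 33.33, 22.22, 14.81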

antonvs a day ago

> making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible.

Would, or does, the author then argue that ChatGPT must be conscious?

  • aswegs8 a day ago

    Not sure why this is getting downvoted. According to the above definition, LLMs are level 5 consciousness, since they have a theory of self and others.

flimflamm a day ago

I wonder if 6 would be understanding our own thinking. Currently humans don't understand this. Thoughts just pop into our heads and we try to explain what caused them.

  • Animats a day ago

    7. Full scalability. Can operate organizations of large size without confusion.

    • tempodox a day ago

      One can dream. Seeing how people already start to get confused when a logical negation is in play, I'm not optimistic.

      • pengstrom a day ago

        I'm more optimistic but cynical. Everybody has the capacity, but can't be bothered for your sake specifically. A highly intelligent person can casually entertain several theoretical notions. A lesser one can too, but it requires more effort: effort that might be better spent elsewhere, or effort that makes social interaction awkward.

        • aswegs8 a day ago

          Higher consciousness does not imply cooperation, even though we idealize it to do so. Cooperation is another dimension - it is easy to imagine a being that has a higher form of consciousness but is not interested in cooperation or does not engage in it unless it can take advantage of others.

  • root_axis a day ago

    Not sure what you mean. It seems like thoughts must necessarily pop into our heads; how would we know our thoughts before we think them?

  • matt-attack 11 hours ago

    This was one of the most insightful things I learned about myself. Sam Harris has a fascinating 2hr podcast on the subject of free will. The idea that we don't choose what we think is so obvious once it's explained, but I don't know that I'd ever have realized it on my own.

signal-intel 2 days ago

Do you (or this paper) think consciousness exists in the humans out there who have no inner narrative?

  • jbotz 2 days ago

    Maybe all humans (and indeed other intelligent mammals) have an inner narrative, but it doesn't necessarily involve language. A mime or a silent film can tell a story without words, and the inner narrative can likewise take a visual or other sensory form.

  • esafak 2 days ago

    That's a fair question. I don't know that the theory of mind mentioned here is the same as an internal monologue. I think one could model other people's minds without conducting an internal monologue, by visualizing it, for example. Maybe the anendophasiacs in the audience can enlighten us.

    The author also has a Youtube channel: https://www.youtube.com/@michaeltimothybennett

    • Lerc 2 days ago

      I can think words in conversations as if I am writing a story (actually, thinking about it, it's more like reading a script), but as far as I can tell I don't experience what most people describe as an internal monologue. I also have aphantasia, which I understand frequently co-occurs with the lack of an internal monologue.

      Obviously I'm conscious (but a zombie would say that too). I can certainly consider the mental states of others, sometimes embarrassingly so: there are a few board games where you have to anticipate the actions of others, where the other players are making choices based upon what they think others might do rather than a strictly analytical 'best' move. I'm quite good at those. I am not a poker player, but I imagine that professional players have that ability at a much higher level than I do.

      So yeah, my brain doesn't talk to me, but I can 'simulate' others inside my mind.

      Does it bother anyone else that those simulations of others that you run in your mind might, in themselves, be conscious? If so, do we kill them when we stop thinking about them? If we start thinking about them again do we resurrect them or make a new one?

      • jbotz 2 days ago

        The key to your difficulty is "my brain doesn't talk to me"... the solution is to realize that there is no "me" that's separate from your brain for it to talk to. You are the sum of the processes occurring in your brain, and when it simulates others inside your mind, that's nothing but narrative. A simulation is a narrative. You may not perceive this narrative as a sequence of words, a monologue, but it certainly is the result of different parts of your brain communicating with each other, passing information back and forth to model a plausible sequence of events.

        So yes, you're conscious. So is my dog, but my dog can't post his thoughts about this on Hacker News, so you are more conscious than my dog.

      • erwan577 a day ago

        I also lack an internal monologue and have strong aphantasia, so the idea that I might not be conscious made me a bit uneasy—it just felt wrong, somehow. For now, the best I can say is that my worldview, which includes self-consciousness, is abstract. I can put it into words, but most of the time, it doesn’t feel necessary.

      • exe34 2 days ago

        > Obviously I'm conscious

        I'm not trying to be pedantic - how do you know? What does consciousness mean to you? Do you experience "qualia"? When you notice something, say "the toast is burning", what goes on in your mind?

        > but I can 'simulate' others inside my mind.

        Do you mean in the sense of working out how they will react to something? What sort of reactions can they exhibit in your mind?

        Sorry if these questions are invasive, but you're as close to an alien intelligence as I'll ever meet unless LLMs go full Prime Intellect on us.

  • the_gipsy a day ago

    Do humans without inner narrative really exist? It could be just a misunderstanding about what "inner narrative" is, or the degree of perception.

    • xcf_seetan a day ago

      In some systems, the interruption of the inner dialogue, which leads to an inner silence, opens a door to an expansion of perception into other realms.

    • mewpmewp2 a day ago

      Considering how I observe my inner monologue, it is hard for me to believe that someone wouldn't notice they had it, if in fact they did.

photonthug 2 days ago

IIT has always interested me, and after reading some of the detractors[1] I get that it has problems, but I still don't get the general lack of attention/interest or even awareness of it. It seems like a step in the right direction, establishing a viable middle ground between work in CS or neuroscience that measures and models but is far too reluctant to ever speculate or create a unifying theory, and a more philosophical approach to theory of mind that always dives all the way into speculation.

[1] https://scottaaronson.blog/?p=1799

  • fsmv 2 days ago

    The creator of IIT doesn't understand the universality of Turing machines. He thinks that because the physical transistors in CPUs don't have as many connections as neurons in the brain, a CPU is fundamentally limited and cannot be conscious.

    He even goes as far as to say that you cannot simulate the brain on a CPU and make it conscious, because the hardware is still connection-limited. If you understand computer science you know this is absurd: Turing machines can compute any computable function.

    He says "you're not worried you will fall into a simulated black hole are you?" but that is an entirely different kind of thing. The only difference we would get by building a machine with hundreds of thousands of connections per node is that it would be faster and more energy efficient. The computation would be the same.

    • candlemas a day ago

      Maybe he doesn't think consciousness is a computation.

    • exe34 2 days ago

      > The computation would be the same.

      Assuming of course that Penrose is cuckoo when it comes to consciousness (which I'm happy to assume).

    • photonthug 2 days ago

      This is a typical critique that sort of assumes you have to be right about everything to be right about anything. Maybe a useful point of comparison is aether theories in physics. Wrong, sure, but useless? People might argue whether it was always pure distraction or a useful building block, but I wonder what Maxwell or Einstein would say. If nothing else, one needs something to ground one's thinking and to argue against, which is why replacement theories usually need to acknowledge or address what came before. And typically we try to fix bad theories rather than simply discarding them, especially if there's no alternative available. What are the other available "grand unifying theories" of consciousness? To quote the end of Aaronson's rebuttal:

      > In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.

      Of course, it's not on Aaronson to rescue the theory he's trying to disprove, but notice that he is out to disprove it and spends his time on that, rather than imagining what axioms might be added or replaced, etc. Proving that having a large Φ-value is not a sufficient condition for consciousness hardly seems devastating "to the core", because finding better descriptions of necessary conditions would still represent significant progress.

      Similarly a critique like

      > He thinks that because in CPUs the physical transistors don't have as many connections as neurons in the brain, that it's fundamentally limited and cannot be conscious.

      seems a little bit narrow. I do agree it seems to misunderstand universality, but on the other hand, maybe it's just distracted by counting IO pins on chips, and what it should focus on more is counting nodes/edges in neural net layers, and whether connection-counting in hardware vs. software might need weighting coefficients, etc. HN loves to celebrate things like the bitter lesson, the rise of LLMs and ML, and the failure of classical logic, rule-based reasoning, and NLP. Is all of that not soft evidence for the relevance, if not the completeness, of IIT?

      • NoMoreNicksLeft a day ago

        >This is a typical critique that sort of assumes you have to be right about everything to be right about anything.

        If you don't understand the fundamentals and basics of the underlying science, then you can't really be right about anything at all. It should shock and disturb you to listen to someone get it this wrong, this "not even wrong" level of nonsense. There's no insight to be found in such prattle.

        • pengstrom a day ago

          Strange. My knowledge of the fundamentals and processes in humans still leaves me jealous of the apparent ease with which others fare in social situations. Clearly there's more to it than it seems. I'd be wary of treating bottom-up and top-down as principally equivalent.

klabb3 2 days ago

> 4 : Second Order Selves (e.g. cat). Where access consciousness begins. Theory of mind. Self-awareness. Inner narrative. Anticipating the reactions of predator or prey, or navigating a social hierarchy.

Cats and dogs most definitely anticipate actions of other animals and navigate (and establish) social hierarchy. Is this even a trait of consciousness?

I’ve spent much time thinking about qualitative differences between humans and closely related animals. I do think "narrative" is probably one such construct. Narratives come early (seemingly before language). This lays the foundation for sequential, step-by-step thinking. Basically, it lets you have intermediate virtual (in-mind) steps that support the next steps, whether that’s through writing, oral communication, or episodic memory.

Animals can 100% recall and associate memories, such as when mentioning the name of a playmate to a dog (= tail wagging). However, it seems like they can neither remember nor project "what happens next" and continue to build on it. Is it a degree of ability or a fundamental qualitative difference? Not sure.

In either case, we should be careful not to overfit human traits into the definition of consciousness, particularly language. Besides, many humans have non-verbal thoughts, and we are no less conscious during those times.

  • jijijijij 2 days ago

    There is this popular video of a crow repeatedly riding down a snow covered roof on a piece of plastic, basically snowboarding. Seemingly just for fun/play.

    For me, it's hard to imagine how such behavior could be expressed without the pure conscious experience of abstract joy and the anticipation thereof. It's not the sort of play that prepares a young animal for the specific challenges of its species (e.g. hunting or fighting). I don't think you could snowboard on a piece of bark or something. Maybe ice, but not repeatedly, by dragging it up the hill again. It's an activity greatly inspired by man-made, light, smooth materials, novelties on evolutionary timescales. It may even be inspired by observing humans...

    I think it's all there, but the question of degree of ability vs. qualitative difference may be moot. I mean, trivially there is a continuous evolutionary lineage of "feature progression", unless we would expect our extent of consciousness to be down to "a single gene". But it's also moot because evolutionary specialization may well be as fundamental a difference as the existence of a whole new organ. E.g. the energy economics of a bird are restricted by gravity. We wouldn't see central nervous systems without the evolutionary legacy of predation -> movement -> directionality -> sensory concentration at the front. And we simply cannot relate to solitary animals (who just don't care about love and friendship)... Abilities are somewhat locked in by niche and physics constraints.

    I think the fundamental difference between humans and animals is the degree of freedom we have progressively gained over the environment, life, death, and reproduction. Of course we are governed by the wider idea of evolution like all matter, but in the sense of classical theory we don't really have a specific niche, except "doing whatever with our big, expensive brain". I mean, we're at the point where we play meta-evolution in the laboratory. This freedom may have brought extended universality into cognition. Energy economics, an omnivorous diet, bipedal walking, hands with freely movable thumbs, language, a useful lifespan... I think the sum of all these makes the difference. In some way, we are like we are exactly because we are like that. Getting here wasn't guided by plans and abstractions.

    If it's a concert of all the things in our past and present, we may never find a simpler line between us and the crow, yet we are fundamentally different.

  • MoonGhost a day ago

    > An animal can 100% recall and associate memories, such as mentioning the name of a playmate to a dog (=tail wagging). However, it seems like they can neither remember nor project ”what happens next” and continue to build on it.

    There are videos of dogs stopping kids from falling in the water. They definitely can project 'what happens next': what the kid is doing, why, and what's going to happen. Moreover, the dog brings back the toy the kid wanted from the water. In other words, animals are not as primitive and stupid as some want them to be to fit their theories. BTW, parrots often are really talking, not just reproducing random words.

  • pengstrom a day ago

    I've never gotten the impression an animal was aware it could change me. Sure, it'd make its wants clear until it got what it wanted or got bored, but that's a very primitive form of conduct. The cat clearly knows I can get it more food between meals. The communication is limited, but I've never seen him come up with a better argument than that he really, really wants more food. Dogs are stranger and clearly have a concept of social structure that cats don't, both from their background as pack animals and from deliberate domestication for assisting humans in work.

  • NL807 a day ago

    These stages are part of a spectrum. There are no hard boundaries.

  • ben_w 2 days ago

    > Is this even a trait of consciousness?

    There are 40 or so different definitions of the word, so it depends which one you're using when you ask the question.

    For me, and not just when it comes to machine minds, the meaning I find most interesting is qualia. Unfortunately, I have no particular reason to think this hierarchy helps with that: while there might be a good evolutionary reason for us to have subjective experience rather than mere unfeeling circuits of impulse and response, (1) it's not clear why this would have been selected for, given that evolution does do things at random and only selects for/against when they actually matter, (2) it's not clear when in our evolution it may have happened, and (3) it's not clear how to test for it.

kazinator 2 days ago

Where does a human under anaesthesia fit in?

  • wwweston 2 days ago

    Unconscious, in my experience.

    But not aconscious.

    • kazinator 2 days ago

      Is there a definition of unconscious distinct from and more useful than "temporarily aconscious with most memory intact"?

thrance 2 days ago

I'm wary of any classification that puts humans in a special category of their own, as the crown jewel of the tree of life (many such cases).

> The ability to model the internal dialogues of others.

It feels like someone spent a lot of time searching for something only humans can do, and landed on something related to language (ignoring animals that communicate with sounds too). How is this ability any different than the "Theory of mind"? And why is it so important that it requires a new category of its own?

  • mtbennett 2 days ago

    Fair points. However I don't put humans in a special category, so much as I say I know at least humans are this conscious. I then cite some research on Australian magpies which suggests they may be so conscious too.

    It is not different from theory of mind; theory of mind is an important part of it, just not the whole picture. I argue access consciousness and theory of mind go hand in hand, which is a significant departure from how access consciousness is traditionally understood.

aswegs8 a day ago

I am usually not a fan of fanboying (pun intended) but getting an award with the presenter being Joscha Bach is so cool!

verisimi a day ago

The step from 0 to 1, rock to hard coded (protozoan), assumes life. There's no way I would describe hard coding as life.

  • pengstrom a day ago

    Does it assume life, or are the qualities that make an organism act in spite of us, and agency, one and the same?

moffkalast 2 days ago

> phenomenal consciousness is fundamentally functional, making the existence of philosophical zombies (entities that behave like conscious beings but lack subjective experience) impossible

That's interesting, but I think that only applies if the consciousness is actually consistent across some wide set of situations? Like, if you dump a few decent answers into a database and it answers correctly when asked exactly the right questions, a la Eliza or the Chinese room, does that mean SQL's SELECT is conscious?
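
A literal version of that lookup-table scenario, using sqlite3 from the Python standard library (the table and canned answer are made up for illustration): the system "answers correctly" only for exactly the right question, which is why it seems odd to credit the SELECT itself with consciousness.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE canned (question TEXT PRIMARY KEY, answer TEXT)")
    db.execute("INSERT INTO canned VALUES ('Are you conscious?', 'Of course I am.')")

    def reply(question: str) -> str:
        # Exact-match lookup or nothing, a la Eliza / the Chinese room.
        row = db.execute(
            "SELECT answer FROM canned WHERE question = ?", (question,)
        ).fetchone()
        return row[0] if row else "I do not understand."

    print(reply("Are you conscious?"))   # Of course I am.
    print(reply("Are you conscious??"))  # I do not understand.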

With LLMs it's not entirely clear if we've expanded that database to near infinity with lossy compression or if they are a simplistic barely functional actual consciousness. Sometimes it feels like it's both at the same time.

mock-possum 2 days ago

Well shit I wonder what level 6 looks like