Comment by Zarathruster 10 hours ago

36 replies

Of all the things I studied at Berkeley, the Philosophy of Mind class he taught is the one I think back on most often. The subject matter has only grown in relevance with time.

In general, I think he's spectacularly misunderstood. For instance: he believed that it was entirely possible to create conscious artificial beings (at least in principle). So why do so many people misunderstand the Chinese Room argument to be saying the opposite? My theory is that most people encounter his ideas from secondary sources that subtly misrepresent his argument.

At the risk of following in their footsteps, I'll try to very succinctly summarize my understanding. He doesn't argue that consciousness can only emerge from biological neurons. His argument is much narrower: consciousness can't be instantiated purely in language. The Chinese Room argument might mislead people into thinking it's an epistemology claim ("knowing" the Chinese language) when it's really an ontology claim (consciousness and its objective, independent mode of existence).

If you think you disagree with him (as I once did), please consider the possibility that you've only been exposed to an ersatz characterization of his argument.

munch117 2 minutes ago

> If you think you disagree with him (as I once did), please consider the possibility that you've only been exposed to an ersatz characterization of his argument.

My first exposure was a video of Searle himself explaining the Chinese room argument.

It came across as a claim that a whole can never be more than its parts. It made as much sense as claiming that a car cannot possibly drive, as it consists of parts that separately cannot drive.

tsimionescu 9 hours ago

> His argument is much narrower: consciousness can't be instantiated purely in language.

No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example. But his belief, which he articulates very explicitly in the article, is that you couldn't create a machine consciousness by running even a perfect simulation of a biological brain on a digital computer, neuron for neuron and synapse for synapse. He likens this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building.

Instead, he believed that you could create a machine consciousness by building a brain of electronic neurons, with a capacitor for every biological dendrite, or whatever the right electrical component would be. He believed that this is somehow different from a simulation, with no clear reason whatsoever as to why. His ideas are quite muddy, and while he accused others of supporting Cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate, it is in fact obvious that he held dualistic notions of his own, in which there is something obviously special about the mind-brain interaction that is not purely computational.

  • sharts 4 hours ago

    I remember the guy saying that disembodied AI couldn’t possibly understand meaning.

    We see this now with LLMs. They just generate text. They get more accurate over time. But how can they understand a concept such as “soft” or “sharp” without actual sensory data with which to understand the concept and varying degrees of “softness” or “sharpness”?

    The fact is that they can’t.

    Humans aren’t symbol manipulation machines. They are metaphor machines. And metaphors we care about require a physical basis on one side of that comparison to have any real fundamental understanding of the other side.

    Yes, you can approach human intelligence almost perfectly with AI software. But that’s not consciousness. There is no first person subjective experience there to give rise to mental features.

    • lostmsu 3 hours ago

      > I remember the guy saying that disembodied AI couldn’t possibly understand meaning.

      As far as I understand Popper, this is not a theory (or it is one, but a false one), because the only way to check understanding that I know of is to ask questions, and LLMs pass that test. So in order to satisfy falsifiability, another test must be devised.

      • pegasus an hour ago

        I think the claim would be that an LLM would only ever pass a strict subset of the questions testing a particular understanding. As we gather more and more text to feed these models, finding those questions will necessarily require more and more out-of-the-box thinking... or a (un)lucky draw. Giveaways will always be lurking just beyond the inference horizon, ready to yet again deflate our high hopes of having finally created a machine which actually understands our everyday world.

        I find this thesis very plausible. LLMs inhabit the world of language, not our human everyday world, so their understanding of it will always be second-hand. An approximation of our own understanding of that world, itself imperfect, but at least aiming for the real thing.

        The part about overcoming this limitation by instantiating the system in hardware I find less convincing, but I think I know where he comes from with that as well: given hardware sensors, the machine would not have to simulate the outside world as well, on top of the inner one.

        The inner world can more easily be imagined as finite, at least. Many people seem to take this as a given, actually, but there's no good reason to expect that it is. Planck limits from QM are often brought up as an argument for digital physics, but in fact they are only a limit on our knowledge of the world, not on the physical systems themselves.

  • mjburgess 7 hours ago

    > this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building

    > with no clear reason whatsoever as to why

    It's not clear to me how you can understand that fire has particular causal powers (to burn, and so on) that are not instantiated in a simulation of fire; and yet not understand the same for biological processes.

    The world is a particular set of causal relationships. "Computational" descriptions do not have a causal semantics, so they aren't about properties had in the world. The program itself has no causal semantics; it is about numbers.

    A program which computes the fibonacci sequence describes equally-well the growth of a sunflower's seeds and the agglomeration of galactic matter in certain galaxies.
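    To put the point in code (a toy sketch of my own, not anything Searle wrote): nothing inside the program fixes what its symbols are about; the mapping onto sunflowers, galaxies, or anything else is supplied entirely by us.

        # Toy illustration: a purely formal procedure with no intrinsic subject matter.
        def fib(n):
            a, b = 0, 1
            for _ in range(n):
                a, b = b, a + b
            return a

        # The interpretation comes from outside the program:
        x = fib(10)  # read as a count of seed spirals, or as any quantity that happens to fit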

    A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- which necessarily, lacks the causal relations of what is being described. A simulation of fire is, by definition, not on fire -- that is fire.

    A simulation is a game to help us think about the world: the ability to derive some descriptive statements about a system without instantiating the properties of that system is a trivial thing, and it is always disappointing how easily it fools our species. You can move beads of wood around and compute the temperature of the sun -- this means nothing.

    • mattclarkdotnet 7 hours ago

      Because simulated fire burns other things in the simulation just as much as “real” fire burns real things. Searle & co. assert that there is a real world that has special properties, without providing any way to show that we are living in it.

      • mjburgess 6 hours ago

        > Because simulated fire burns other things in the simulation just as much as “real” fire burns real things.

        What we mean by a simulation is, by definition, a certain kind of "inference game" we play (e.g., with beads and chalk) that helps us think about the world. By definition, if that simulation has substantial properties, it isn't a simulation.

        If the claim is that an electrical device can implement the actual properties of biological intelligence, then the claim is not about a simulation. It's that by manufacturing some electrical system, plugging various devices into it, and so on -- that this physical object has non-simulated properties.

        Searle, and most other scientific naturalists who appreciate that the world is real, are not ruling out that it could be possible to manufacture a device with the real properties of intelligence.

        It's just that merely by, e.g., implementing the fibonacci sequence, you haven't done anything. A computational description doesn't imply any implementation properties.

        Further, when one looks at the properties of these electronic systems and the kinds of causal relations they have with their environments via their devices, one finds very many reasons to suppose that they do not implement the relevant properties.

        Just as much as when one looks at a film strip under a microscope, one discovers that the picture on the screen was an illusion. Animals are very easily fooled, apes most of all -- living as we do in our own imaginations half the time.

        Science begins when you suspend this fantasy way of relating to the world and look at its actual properties.

        If your world view requires equivocating between fantasy and reality, then sure, anything goes. This is a high price to pay to cling on to the idea that the film is real, and there's a train racing towards you in your cinema seat.

        • mlsu 12 minutes ago

          > By definition, if that simulation has substantial properties, it isn't a simulation.

          This is kind of a no-true-Scotsman-esque argument though, isn't it? "Substantial properties" are... what, exactly? It's not a subjective question. One could insist, and many have, that fire that really burns is merely a simulation. It would be impossible to tell from the inside. In that case, what is fantasy, and what is reality?

    • tsimionescu 6 hours ago

      There is a massive difference between chemical processes, like fire, and computational processes, which thinking likely is. A computer can absolutely be made to interact with the world in a way that assigns real physical meaning to the symbols it manipulates, a meaning entirely independent of any conscious being. For example, the computer that powers an automatic door has a clear meaning for its symbols intrinsic in its construction.

      Saying that the symbols in the computer don't mean anything, that it is only we who give them meaning, presupposes a notion of meaning as something that only human beings and some things similar to us possess. It is an entirely circular argument, similar to the notion of p-zombies or the "experience of seeing red" thought experiment.

      If indeed the brain is a biological computer, and if our mind, our thinking, is a computation carried out by this computer, with self-modeling abilities we call "qualia" and "consciousness", then none of these arguments hold. I fully admit that this is not at all an established fact, and we may still find out that our thinking is actually non-computational - though it is hard to imagine how that could be.

      • mjburgess 5 hours ago

        There are no such things as "computational processes". Any computational description of reality describes vastly different sets of causal relata; nothing which exists in the real world is essentially a computational process -- everything is essentially causal, with a circumstantially useful computational description.

    • bondarchuk 5 hours ago

      >A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- which necessarily, lacks the causal relations of what is being described.

      This notion of causality is interesting. When a human claims that they are conscious, there is a causal chain from the fact that they are conscious to their claiming so. When a neuron-level simulation of a human claims it is conscious, there must be a similar causal chain, with a similar fact at its origin.

    • 112233 2 hours ago

      There we go again. You claim that thinking is a biological process by definition, and use your definition to "prove" that software cannot be thinking. What if, instead of a software simulation of thinking, we had actual software that thinks? Your point would be to disregard it, not based on behaviour, but based on whatever your idea of "proper hardware for thinking" is. Pure troll and sophist, that Searle.

  • dvt 6 hours ago

    > while he accuses others of supporting cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate

    His views are perfectly consistent with non-dualism and if you think his views are muddy, that doesn't mean they are (they are definitively not muddy, per a large consensus). For the record, I am a substance dualist, and his arguments against dualism are pretty interesting, precisely because he argues that you can build something that functions in a different way than symbol manipulation while still doing something that looks like symbol manipulation (but also has this special property called consciousness, kind of like our brains).

    Is this true? I don't know (I, of course, would argue "no"), but it does seem at least somewhat plausible and there's no obvious counter-argument.

    • tsimionescu 6 hours ago

      I don't see how his views can be made sense of without dualism. He believed very much in this concept of qualia as some special property, and in the logical coherence of the concept of p-zombies, beings that would act exactly like a conscious being but without having qualia. This simply makes no sense unless you believe that consciousness is a non-physical property, one that the physical world acts upon but which can't itself act back upon it (as otherwise, there would obviously have to be some kind of meaningful physical difference between the being that possesses it and the being that doesn't).

      • dvt 4 hours ago

        > This simply makes no sense unless you believe that consciousness is a non-physical property

        It does make sense, and there's work being done on this front (Penrose & Hameroff's Orch OR comes to mind). We obviously don't know exactly what such a mechanism would look like, but the theory itself is not inconsistent. Also, there are all kinds of p-zombies, so we likely need some specificity here.

  • Zarathruster 5 hours ago

    > No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example.

    It's by no means irrelevant. The syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema

    Side note: while the Chinese Room put him on the map, he had as much to say about Philosophy of Language as he did of Mind. It was of more than passing interest to him.

    > Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.

    I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.

    I have, however, heard him say the following:

    1. The structure and arrangement of neurons in the human nervous system creates consciousness.

    2. The exact causal mechanism for this phenomenon is unknown.

    3. If we were to engineer a set of circumstances such that the causal mechanism for consciousness (whatever it may be) were present, we would have to conclude that the resulting entity, be it biological, mechanical, etc., is conscious.

    He didn't have anything definitive to say about the causal mechanism of consciousness, and indeed he didn't see that as his job. That was to be an exercise left to the neuroscientists, or in his preferred terminology, "brain stabbers." He was confident only in his assertion that it couldn't be caused by mere symbol manipulation.

    > it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.

    He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism:

    https://faculty.wcas.northwestern.edu/paller/dialogue/proper...

    • tsimionescu 4 hours ago

      > It's by no means irrelevant- the syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema

      The Chinese room is an argument caked in notions of language, but it is in fact about consciousness more broadly. Syntax and semantics are not merely linguistic concepts, though they originate in that area. And while Searle may have been interested in language as well, that is not what this particular argument is mainly about (the title of the article is Minds, Brains, and Programs - the first hint that it's not about language).

      > I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.

      He said both things in the paper that introduced the Chinese room concept, as an answer to the potential rebuttals.

      Here is a quote about the brain that would be run in software:

      > 3. The Brain Simulator reply (MIT and Berkley)

      > [...] The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.

      And here is the bit about creating a real electrical brain, that he considers could be conscious:

      > "Yes, but could an artifact, a man-made machine, think?"

      > Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.

      > He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism: https://faculty.wcas.northwestern.edu/paller/dialogue/proper...

      I don't find this paper convincing. He admits at every step that materialism makes more sense, and then he asserts that still, consciousness is not ontologically the same thing as the neurobiological states/phenomena that create it. He admits that usually being causally reducible means being ontologically reducible as well, but he claims this is not necessarily the case, without giving any other example or explanation as to what justifies this distinction. I am simply not convinced.

      • Zarathruster an hour ago

        > The Chinese room is an argument caked in notions of language, but it is in fact about consciousness more broadly.

        At this point I'm pretty sure we've had a misunderstanding. When I referred to "language" in my original post, you seem to have construed this as a reference to the Chinese language in the thought experiment. On the contrary, I was referring to software specifically, in the sense that a computer program is definitionally a sequence of logical propositions. In other words, a speech act.

        > [...] The problem with the brain simulator is that it is simulating the wrong things about the brain.

        This quote is weird and a bit unfortunate. It seems to suggest an opening: the brain simulator doesn't work because it simulates the "wrong things," but maybe a program that simulates the "right things" could be conscious. Out of context, you could easily reach that conclusion, and I suspect that if he could rewrite that part of the paper he probably would, because the rest of the paper is full of blanket denials that any simulation would be sufficient. Like this one:

        > The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn't confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms.

        Regarding the electrical brain:

        > Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.

        Right, so he describes one example of an "electrical brain" that seems like it'd satisfy the conditions for consciousness, while clearly remaining open to the possibility that a different kind of artificial (non-electrical) brain might also be conscious. I'll assume you're using this quote to support your previous statement:

        > Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.

        I think it's fairly obvious why this is different from a simulation. If you build a system that reproduces the consciousness-causing mechanism of neurons, then... it will cause consciousness. Not simulated consciousness, but the real deal. If you build a robot that can reproduce the ignition-causing mechanism of a match striking a tinderbox, then it will start a real fire, not a simulated one. You seem to think that Searle owes us an explanation for this. Why? How are simulations even relevant to the topic?

        > I don't find this paper convincing.

        The title of the paper is "Why I Am Not a Property Dualist." Its purpose is to explain why he's not a property dualist. Arguments against materialism are made in brief.

        > He admits at every step that materialism makes more sense

        Did we read the same paper?

        > He admits that usually being causally reducible means being ontologically reducible as well,

        Wrong, but irrelevant

        > but he claims this is not necessarily the case, without giving any other example or explanation as to what justifies this distinction.

        Examples and explanations are easy to provide, because there are several:

        > But in the case of consciousness, causal reducibility does not lead to ontological reducibility. From the fact that consciousness is entirely accounted for causally by neuron firings, for example, it does not follow that consciousness is nothing but neuron firings. Why not? What is the difference between consciousness and other phenomena that undergo an ontological reduction on the basis of a causal reduction, phenomena such as color and solidity? The difference is that consciousness has a first person ontology; that is, it only exists as experienced by some human or animal, and therefore, it cannot be reduced to something that has a third person ontology, something that exists independently of experiences. It is as simple as that.

        First-person vs. third-person ontologies are the key, whether you buy them or not. Consciousness is the only possible example of a first-person ontology, because it's the only one we know of.

        > “Consciousness” does not name a distinct, separate phenomenon, something over and above its neurobiological base, rather it names a state that the neurobiological system can be in. Just as the shape of the piston and the solidity of the cylinder block are not something over and above the molecular phenomena, but are rather states of the system of molecules, so the consciousness of the brain is not something over and above the neuronal phenomena, but rather a state that the neuronal system is in.

        I could paste a bunch more examples of this, but the key takeaway is that consciousness is a state, not a property.

  • jll29 7 hours ago

    Hardware and software are of course equivalent, as every computer scientist (but not every philosopher) knows.

    D.R. Hofstadter posited that we can extract/separate the software from the hardware it runs on (the program-brain dichotomy), whereas Searle believed that these were not two layers, but that consciousness was in effect a property of the hardware. And from that, as you say, it follows that you may re-create the property if your replica hardware is close enough to the real brain.

    IMHO, philosophers should be rated by the debate their ideas create, and by that measure, Searle was part of the top group.

  • xtiansimon 5 hours ago

    >> “His argument is much narrower: consciousness can't be instantiated purely in language.”

    > “No, his argument is that consciousness can't be instantiated purely in software…“

    The confusion is very interesting to me, maybe because I’m a complete neophyte on the subject. That said, I’ve often wondered whether consciousness is necessarily _embodied_, or whether it emerges from pure presence into language & body. Maybe the confusion is intentional?

gwd 7 hours ago

> He doesn't argue that consciousness can only emerge from biological neurons. His argument is much narrower: consciousness can't be instantiated purely in language.

I haven't read loads of his work directly, but this quote from him would seem to contradict your claim:

> I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. [1]

Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding.

[1] https://plato.stanford.edu/entries/chinese-room/

  • Zarathruster 5 hours ago

    Sorry, I've reread this a few times and I'm not sure which part of Searle's argument you think I mischaracterized. Could you clarify? For emphasis:

    > "consciousness can't be instantiated purely in language" (mine)

    > "we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else" (Searle)

    I get that the mapping isn't 1:1 but if you think the loss of precision is significant, I'd like to know where.

    > Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding.

    There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here: https://plato.stanford.edu/entries/chinese-room/#SystRepl

    • gwd 4 hours ago

      > I get that the mapping isn't 1:1 but if you think the loss of precision is significant, I'd like to know where.

      I'm far from an expert in this; my knowledge of the syntax / semantics distinction primarily comes from discussions w/ ChatGPT (and a bit from my friend who is a Catholic priest, who had some training in philosophy).

      But, the quote says "purely formally or syntactically". My understanding is that Searle (probably thinking about the Prolog / GPS-type attempts at logical artificial intelligence prevalent in the 70's and 80's) is thinking of AI in terms of pushing symbols around. So, in this sense, the adder circuit in a processor doesn't semantically add numbers; it only syntactically adds numbers.
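      To make "syntactic" concrete (a toy sketch of my own, not Searle's): a one-bit adder built from boolean operations just maps bit patterns to bit patterns; that those patterns encode numbers being added is an interpretation we bring to it.

          # One-bit full adder as pure symbol shuffling (illustrative toy, not from the paper).
          # XOR/AND/OR push bits around; nothing here "knows" these are numbers.
          def full_adder(a, b, carry_in):
              s = a ^ b ^ carry_in                        # "sum" bit
              carry_out = (a & b) | (carry_in & (a ^ b))  # "carry" bit
              return s, carry_out

          # Chain the rule to get multi-bit "addition" -- still just pattern rules:
          def add_bits(xs, ys):
              out, carry = [], 0
              for a, b in zip(xs, ys):  # bits, least significant first
                  s, carry = full_adder(a, b, carry)
                  out.append(s)
              return out + [carry]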

      When you said, "consciousness can't be instantiated purely in language", I took you to mean human language; it seems to leave the door open to consciousness (and thus semantics) being instantiated by a computer program in some other way. Whereas, the quote from Searle very clearly says, "...the computer program _by itself_ is not sufficient for consciousness..." (emphasis mine) -- seeming to rule out any possible computer program, not just those that work at the language level.

      > There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here:

      I mean, yeah, I read that. Let me quote the relevant part for those reading along:

      > Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.

      I mean, it sounds to me like Searle didn't understand the "Systems Reply" argument; because as the end of that section says, he's just moved the program and state part of the <processor, program, state> tuple out of the room and into his head. The fact that the processor (Searle's own conscious mind) is now storing the program and the state in his own memory rather than externally doesn't fundamentally change the argument: If that tuple can "understand" things, then computers can "understand" things; and if that tuple can't "understand" things, then computers can't "understand" things.
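      To make that tuple concrete (a toy sketch of my own, with made-up rules, not from Searle or the SEP entry): whether the rulebook and scratch state live on paper in the room or are memorized by the operator, the same <processor, program, state> is being run, so whatever "understanding" the system has or lacks is unchanged.

          # Hypothetical "rulebook" interpreter -- the rules are invented for illustration.
          # The operator (the "processor") applies rules without understanding them.
          RULEBOOK = {("start", "你好"): ("start", "你好!"),
                      ("start", "再见"): ("done", "再见!")}

          def step(program, state, symbol_in):
              # program: the rulebook; state: scratch notes; symbol_in: incoming squiggle
              return program.get((state, symbol_in), (state, "???"))

          # Same computation whether RULEBOOK sits on paper or in the operator's memory:
          print(step(RULEBOOK, "start", "你好"))   # -> ('start', '你好!')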

      One must, of course, be humble when saying of a world-renowned expert, "He didn't understand the objection to his argument". But was Searle himself a programmer? Did he ever take a hard drive out of one laptop, pop it into another, and have the experience of the same familiar environment? Did he ever build an adder circuit, a simple register system, and a simple working computer out of logic gates, and see it suddenly come to life and execute programs? If he had, I can't help but think his intuitions regarding the syntax / semantic distinction would be different.

      EDIT: I mean, I'm personally a Christian, and do believe in the existence of eternal souls (though I'm not sure exactly what those look like). But I'm one of those annoying people who will quibble with an argument whose conclusion I agree with (or to which I am sympathetic), because I don't think it's actually a good argument.

      • Zarathruster 37 minutes ago

        Ah ok, gotcha.

        > When you said, "consciousness can't be instantiated purely in language", I took you to mean human language

        No, I definitely meant the statement to apply to any kind of language, but it seems clear that I sacrificed clarity for the sake of brevity. You're not the only one who read it that way, but yeah, we're in agreement on the substance.

        • gwd 22 minutes ago

          I think I'm still a bit confused... so, in the languages which cannot produce understanding and consciousness, you mean to include "machine language"? (And thus, any computer language which can be compiled to machine language?)

          On your interpretation, are there any sorts of computation that Searle believes would potentially allow consciousness?

          ETA: The other issue I have with this whole idea is the claim that "understanding requires semantics, and semantics requires consciousness". If you want to say that LLMs don't "understand" in that sense, because they're not conscious, I'm fine with that as long as you limit it to technical philosophical jargon. In plain English, in a practical sense, it's obvious to me that LLMs understand quite a lot -- at least, I haven't found a better word to describe LLMs' relationship with concepts.

dr_dshiv 9 hours ago

This is true of many philosophers. Once you read the source materials, you realize the depth of their thinking.

112233 41 minutes ago

I have yet to see anything to convince me that he was not being a troll, making that argument deliberately jumbled up and in bad faith.

First of all, what purpose does the person in the room serve, but to confuse and misdirect? Replace that person with a machine, and the argument loses any impact.

His response to the systems reply is extremely egregious. How can that have been made in good faith? (To paraphrase: "the whole system understands Chinese." "No, a person could run the system in their head, which means the system cannot understand anything that the person running it does not.") What kind of nonsense response is that? Either the guy was an LV80 troll, or I dunno...