Comment by danielbarla 11 hours ago
> In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese.
I mean, I guess all arguments eventually boil down to something that is "obviously" A to one person and "obviously" B to me.
I would encourage deeply digging into the intuition that brain states and computer states are the same. Start with what you know, then work backwards and see whether you still believe they are the same. For example, we have an intuitive understanding of which flavors (for us) are delicious and which are not, which sounds are pleasant and which are not, and so on. If I close my eyes, I can see the color purple. I know that Nutella is delicious, and I can imagine its flavor at will. I share Searle's intuition that the universe would be a strange place if these feelings of understanding (and pleasantness!) were functions not of physical states but of abstract program states.

Keep in mind: what counts as a bit is simply a matter of convention. In one computer system, it could be a minute difference in voltage across a transistor. In another, the presence of one element versus another. In another, whether a chamber contains water or not. In another, markings on a page. On and on. On the strong AI thesis, any system that runs the steps of this program would not just produce output functionally equivalent to a brain's; it would be forced to have mental states too, like imagining the taste of Nutella. To me, that's implausible.

Once you start digging in, you realize that either the Chinese Room is missing something, our understanding of physical reality is incomplete, OR you have to bite the bullet that the universe creates mental states whenever a system implements the right program. But then you're left with the puzzle of how the physical world is tied to the abstract world of symbols (how can making a mark on a page cause mental states?).
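To make the convention point concrete, here is a minimal Python sketch (hypothetical; the decode functions and the XOR stand-in are my own illustration, not anything from the argument above). The same abstract program is realized over three physically different encodings of a bit, and the program logic never sees the substrate:

    # Three conventions for what physically counts as a bit.
    def decode_voltage(volts):        # convention 1: above 0.7 V counts as 1
        return 1 if volts > 0.7 else 0

    def decode_chamber(has_water):    # convention 2: water present counts as 1
        return 1 if has_water else 0

    def decode_mark(symbol):          # convention 3: an "X" on the page counts as 1
        return 1 if symbol == "X" else 0

    def program(a, b):
        # The abstract program (XOR as a stand-in for "the right program"):
        # it operates only on the bits the conventions yield.
        return a ^ b

    # Physically different systems, identical abstract computation:
    print(program(decode_voltage(0.9), decode_voltage(0.1)))     # transistors -> 1
    print(program(decode_chamber(True), decode_chamber(False)))  # water pipes -> 1
    print(program(decode_mark("X"), decode_mark(" ")))           # marks on a page -> 1

On the strong AI thesis, each of these realizations would equally have to instantiate whatever mental states the program is supposed to produce, which is exactly the bullet described above.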
So what's the physical cause of consciousness and understanding that is not computable? It's worth noting that once you really start digging in, if you took, for example, the hypothesis that "consciousness is a sequence of microtubule-orchestrated collapses of the quantum wavefunction" [1], you can see a set of physical requirements for consciousness and understanding that would force all conscious beings onto: 1) roughly the same clock (because consciousness shares a cause), and 2) the same reality (because consciousness causes wavefunction collapses). That's something you could not get merely by simulating certain brain processes in a closed system.
[1] Not saying this is correct, but it invites one to imagine that consciousness could have physical requirements that play into some of the oddities of the (shared) quantum world. https://x.com/StuartHameroff/status/1977419279801954744