Comment by simpaticoder

Okay, I'll bite: what's so "terrifying" about developing a physical theory of consciousness?

I have to admit having similar reactions to other "profound" questions - for example, does free will exist? To that one I say: as long as weather exists, even deterministic intelligences will be as unpredictable as ones with free will. A machine fed chaotic inputs will itself be chaotic enough to be indistinguishable from one acting freely.
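To make the "deterministic but unpredictable" point concrete, here is a minimal sketch in Python (my own illustration, not anything from the original comment) using the logistic map, a textbook chaotic system: two runs whose starting points differ by one part in a billion become completely uncorrelated within a few dozen steps.

    # Logistic map: fully deterministic, yet hypersensitive to initial conditions.
    def logistic(x, r=3.9):  # r=3.9 puts the map in its chaotic regime
        return r * x * (1 - x)

    a, b = 0.5, 0.5 + 1e-9  # identical inputs up to a tiny "weather-like" nudge
    for step in range(1, 51):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step}: a={a:.6f}  b={b:.6f}  diff={abs(a - b):.2e}")
    # By step ~40-50 the two trajectories bear no resemblance to each other.

No randomness anywhere in that program, yet predicting its output in practice requires knowing the input to absurd precision - which is all "free will" needs to look like from the outside.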

Regarding consciousness, I think there is a category error born of (understandable) hubris: the conceit that you can carve "consciousness" out of the holistic physical phenomenon of "humans" or, more generally, "life". It's kind of a package deal. Humans might (and probably will) make conscious machines, but whether they "really" are conscious will forever be an unanswerable philosophical question, just as it is with other humans. In the end it's best to "zoom out" and consider the subject in the context of the Fermi paradox - will such an invention help or harm humanity? (Does replacement imply harm? If we are replaced by our children, is that harm?)

In any event, it's all above my pay grade, so to speak. For what it's worth, I tend to think that a) life is common in the universe, b) intelligent life is very uncommon, and c) humanity got some really serious help from the cosmos/won a few lotteries. We got a moon of almost exactly the same angular size as the sun, allowing us to, e.g., verify general relativity with ease. We got an atmosphere that lets us see the stars clearly and still breathe. We got a third-generation star and a planet with a nice mix of light and heavy elements, and plenty of energy runway left in the sun. We got abiogenesis (~common) and eukaryotic cells (~uncommon). We got some timely 99% extinctions (but not 100%) that cleared the path for us, and which coincidentally left vast energy reserves underground for us to bootstrap our way out of the Middle Ages. We got a celestial moat, almost impossible to cross (special relativity's speed limit; thermodynamic limits) for all but the most advanced (and therefore presumably wisest) civilizations, keeping us safe from colonization. The latter is a bit of a golden cage, and I consider getting out of that cage the highest civilizational goal possible.
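As a back-of-envelope check on the angular-size coincidence (my numbers, mean values; the real figures vary a bit over each orbit):

    import math

    # diameter and mean distance from Earth, both in km
    moon_diam, moon_dist = 3_474.8, 384_400
    sun_diam, sun_dist = 1_391_400, 149_600_000

    for name, d, dist in [("Moon", moon_diam, moon_dist), ("Sun", sun_diam, sun_dist)]:
        angle = math.degrees(2 * math.atan(d / (2 * dist)))
        print(f"{name}: {angle:.2f} deg")  # both come out near half a degree

Both subtend about 0.5 degrees, which is why a total solar eclipse just barely covers the photosphere and exposes the corona - the observation the 1919 Eddington expedition used to test general relativity.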

Within this picture, AI can fit in many places, with positive and negative effects. I have to admit that I do not like the trend I see in humanity to become unmoored from the physical world, to venture out unarmed, without critical thinking skills, like lambs to the slaughter in the barbaric free-for-all that is the modern info-sphere, whose ultimate goals are the same as they ever were: money and power. A stupid self-own like nuclear war, autonomous AI weapons, bio-warfare, or catastrophic global climate change is still all too likely, and getting more likely as intelligent, balanced minds are selected against. We can't do anything about a caldera explosion or a nearby supernova, or even about being stuck in-system while the sun burns out, but we can and should avoid shooting ourselves while playing with daddy's gun.