Comment by nixpulvis 3 hours ago

I think it's that the issues are still so prevalent that people will justify poor arguments for being skeptical, because skepticism matches their feelings and articulating the actual problem is harder.

ants_everywhere 3 hours ago

It's exactly the same as the literal Luddites, synthesizers, cameras, etc. The actual concern is economic: people don't want to be replaced.

But the arguments are couched in moral or quality terms for sympathy. Machine-knitted textiles are inferior to hand-made textiles. Synthesizers are inferior to live orchestras. Daguerreotypes are inferior to hand-painted portraits.

It's a form of intellectual insincerity, but it happens predictably with every major technological advance because people are scared.

  • nixpulvis 2 hours ago

    I don't completely disagree. But it's incorrect to claim that there's nothing but fear of losing jobs at the heart of the AI concern.

    I think a lot of people like myself are concerned with how quickly we're becoming dependent on something with limited accuracy and accountability.

    • ants_everywhere 2 hours ago

      Would your concerns be lessened or heightened if AI was more accurate? The doomsday scenario was always a highly competent AI like Skynet.

      • nixpulvis 2 hours ago

        I think it would ease some of my concerns, but it wouldn't put me in the camp that believes we should race toward it without thinking about how to control it, or without plans in place to both identify and react to its risks.

        There are two doomsdays. The dramatic one, where AIs control the military and we end up living in the Matrix. And the less dramatic one, where we as humans forget how to do things for ourselves and then slowly watch the AIs become less and less capable of keeping us happy and alive. Maybe the end state of both scenarios is similar, but one would take decades while the other could happen overnight.

        Accuracy alone doesn't fix either doomsday scenario. But it would slow some of the issues I already see forming, with people replacing research skills and informational reporting with AIs that can lie or be very misleading.