Comment by trevithick 2 hours ago

I have not heard or read anything about AI that could be construed as positive for an ordinary person. Step one is "lose your job with no possibility of finding another one, but still have to buy stuff to survive." That will also be the last step for a huge number of people. Is there a bull case for some hypothetical regular person with a desk job? I haven't seen one.

infinitezest an hour ago

This is something I've been waiting to hear as well. We hear about how jobs will be eliminated, and occasionally we hear that this will free up time for the things we want to do, but it seems like AI is already doing the things we want to do. And then, of course, there's the question of how the rest of us are going to provide for ourselves if none of us have jobs. Those at the top already seem quite reluctant to share with the rest of us. I can't imagine that will get better once we no longer provide any value to them that a computer can't provide more cheaply.

  • FuriouslyAdrift an hour ago

    There's always crime...

    • boilerupnc 32 minutes ago

      I suspect the early variants will fall into two camps:

      1. Traditional garden-variety crime that already happens today: human to human, computer to computer, and computer to human.

      2. Human-to-computer (AI) crime, misdeeds, and bullying. Stuff like:

      - Sabotage and poison your AI agent colleague to make it look bad, inefficient, and ineffectual in small but high-volume ways. Delegate all risky, bad-versus-worse decision making to the AI and let the algorithm take the reputational damage.

      - Go beat up on automated bots, cars, drones, etc. How should it feel to kick a robot dog?

      For a humorous read on automation, bots, and AI in a dystopian world, take a look at QualityLand [0]. Really enjoyed it. As a teaser, imagine drones suffering from a fear of heights, hence being deemed faulty and sentenced to destruction. Do faulty bots or AI have value in this world even if they don't deliver on their original intended use?

      [0] https://www.goodreads.com/book/show/36216607-qualityland

    • A4ET8a8uTh0_v2 an hour ago

      The interesting thing is that the signs suggest 'the rich' are prepping for such an outcome (you will see the occasional article here and there about bunkers being bought). Naturally, if one were to suggest that maybe we could try working towards some semblance of a 'new New Deal', they would be called some sort of crazy person who is a communist and hates democracy (as opposed to someone simply trying to save the system from imploding).

      • achierius 43 minutes ago

        Then why bother? Why not go all the way and try to find a way to a new, better system, rather than gambling that these people who so totally hate you would one day become willing to compromise in order to save the current one (which benefits them most of all, not you)?

        • A4ET8a8uTh0_v2 34 minutes ago

          Because, in real life, power realignments of that magnitude tend to be... chaotic. I like my life. I also want my kid to survive long enough to fend for themselves. Both of these become a big gamble if we do not work within the existing system.

          I am saying this as a person who had a front-row seat to something similar as a kid. It was relatively peaceful, and it still managed to upend the lives of millions (because, at the end of the day, people don't really change).

bpodgursky 2 minutes ago

The critical problem is that individual countries, or even multinational orgs like the EU, can't "opt out". In a digital economy, you can't keep AI workers from crossing borders. You can pretend to make it illegal, but the Philippine "contractors" you hire will just be fake!

Or more likely, your entire enterprise collapses against international rivals. Or your entire country turns into North Sentinelese islanders just surviving at the whim of hypertechnical industrialized neighbors.

I'm all for international cooperation on how to preserve a place for humans, I truly am, but "let's just not do it" is frustratingly naive and not an actual plan.

dontlaugh an hour ago

Especially given the bleak future described here. Why would I want to be one of the few simulated? I’d rather stay dead.

If you think this may be the future, surely the only rational response is to do everything in your power to prevent it.

  • trevithick an hour ago

    Yes. I've always planned to stay dead forever after I die. AI is not changing that.

cjs_ac an hour ago

The bull case is that when the venture capitalists stop subsidising LLM providers and expect them to turn a profit, the actual end-user cost will exceed the cost of employing a human. I don't know whether this is actually true, but it might happen.

  • Zigurd an hour ago

    Either that, or the AI bubble bursts hard and those people lose their jobs too, and nobody has a prospect of getting their job back. That then causes the market to lose enough value that it becomes impossible for PE firms to exit any of their investments.

  • trevithick an hour ago

    True, my question assumed AI "progress" and adoption follow the hype trajectory. Reality could be closer to the scenario you laid out: the bubble pops, some AI tools maybe improve things in some areas, and societal disintegration gets kicked down the road a few years.

  • empath75 an hour ago

    Even if AI is better and more cost-efficient at doing everything that humans do, there will still be work for humans to do. AI development will focus on the things AI is best and most efficient at. There will be many things that AI may be better at than humans but that nevertheless would not be the best use of AI, and humans can still do that work.