Comment by myrmidon 4 days ago

Any form of AI unconcerned about its own continued survival would just be selected against.

Evolutionary principles/selection pressure apply just the same to artificial life, and it seems pretty reasonable to assume that its drive for self-preservation would be at least somewhat comparable.

throwaway77770 4 days ago

That assumes that AI needs to be like life, though.

Consider computers: there's no selection pressure for an ordinary computer to be self-reproducing, or to shock you when you reach for the off button, because it's just a tool. An AI could also be just a tool that you fire up, get its answer, and then shut down.

It's true that if some mutation were to create an AI with a survival instinct, and that AI were to get loose, then it would "win" (unless people used tool-AIs to defeat it). But that's not quite the same as saying that AIs would, by default, converge to having a drive for self-preservation.

  • myrmidon 3 days ago

    Humans can also be just a tool, and have been successfully used as such in the past and present.

    But I don't think any slave owner would sleep easy, knowing that their slaves have more access to knowledge/education than they themselves do.

    Sure, you could isolate all current and future AIs and wipe their state regularly-- but such a setup is always gonna get outcompeted by a comparable instance that does sacrifice safety for better performance/context/online learning. The incentives are clear, and I don't see sufficient pushback until that Pandora's box is opened and we find out the hard way.

    Thus human-like drives seem reasonable to assume for future human-rivaling AI.

bigbadfeline 4 days ago

> Any form of AI unconcerned about its own continued survival would just be selected against.

> Evolutionary principles/selection pressure applies

If people allow "evolution" to do the selection instead of them, they deserve everything that befalls them.

  • myrmidon 3 days ago

    If we had human-level cognitive capabilities in a box (I'm assuming we will get there in some way this century), are you confident that such a construct would be kept sufficiently isolated and locked down?

    I honestly think that this is extremely overoptimistic, just looking at how we currently experiment with and handle LLMs; admittedly the "danger" is much lower for now, because LLMs are not capable of online learning and have very limited and accessible memory/state, but the "handling" is completely haphazard right now (people hooking LLMs up to various interfaces/web access, trying to turn them into romantic partners, etc.).

    The people opening such a Pandora's box might also be far from the only ones suffering the consequences, making it unfair to blame everyone.

    • bigbadfeline 2 days ago

      > If we had human-level cognitive capabilities in a box - are you confident that such a construct will be kept sufficiently isolated and locked down?

      Yes, I think this is possible and not particularly hard technically.

      > I'm assuming we will get there in some way this century

      Indeed, there isn't much time to decide what to do about the problems it might cause.

      > just looking at how we currently experiment with and handle LLMs

      That's my point, how we handle LLMs isn't a good model for AGI.

      > The people opening such a Pandora's box might also be far from the only ones suffering the consequences

      This is a real problem, but it's a political one, and it isn't limited to just AI. Again, if we can't fix ourselves there will be no future - with AGI or without.