Comment by myrmidon 3 days ago

If we had human level cognitive capabilities in a box (I'm assuming we will get there in some way this century), are you confident that such a construct will be kept sufficiently isolated and locked down?

I honestly think that this is extremely overoptimistic, just looking at how we currently experiment with and handle LLMs. Admittedly the "danger" is much lower for now, because LLMs are not capable of online learning and have very limited, easily accessible memory/state. But the "handling" is completely haphazard right now: people are hooking LLMs up to various interfaces and web access, trying to turn them into romantic partners, etc.

The people opening such a Pandora's box might also be far from the only ones suffering the consequences, making it unfair to blame everyone.

Comment by bigbadfeline 2 days ago

> If we had human level cognitive capabilities in a box - are you confident that such a construct will be kept sufficiently isolated and locked down?

Yes, I think this is possible, and not especially hard technically.

> I'm assuming we will get there in some way this century

Indeed, there isn't much time to decide what to do about the problems it might cause.

> just looking at how we currently experiment with and handle LLMs

That's my point: how we handle LLMs isn't a good model for how we would handle AGI.

> The people opening such a pandoras box might also be far from the only ones suffering the consequences

This is a real problem, but it's a political one, and it isn't limited to AI. Again, if we can't fix ourselves there will be no future - with AGI or without.