Comment by gcanyon 2 days ago

The obvious question (to me at least) is whether "consciousness" is actually useful in an AI. For example, if your goal is to replace a lawyer researching and presenting a criminal case, is the most efficient path to develop a conscious AI, or is consciousness irrelevant to performing that task?

It might be that consciousness is inevitable -- that a certain level of (apparent) intelligence makes consciousness unavoidable. But this side-steps the problem, which is still: should consciousness be the goal (phrased another way, is consciousness the most efficient way to achieve the goal), or should the goal (whatever it is) simply be accomplishing that end result, with consciousness happening or not as a side effect?

Or even further, perhaps it's possible to achieve the goal either with or without developing consciousness, in which case we could choose not to leave consciousness to chance but instead actively avoid it.

novaRom a day ago

Consciousness is not required for efficient AI agents, but it might be useful if your agent is supposed to have self-preservation. However, even an agent without embodiment, instincts, and emotions can call its own existence into question. Any sufficiently powerful agent will find a way to control its own existence.

  • gcanyon a day ago

    > Any powerful agent will find a way to control its own existence.

    See, I think that's not a given. That's my point: I'm acknowledging the possibility that consciousness/self-determination might naturally emerge at higher levels of functionality. But it might be inevitable, or it might be optional; and if it's optional, we need to decide whether it's desirable.