Comment by gcanyon
The obvious question (to me at least) is whether "consciousness" is actually useful in an AI. For example, if your goal is to replace a lawyer researching and presenting a criminal case, is the most efficient path to develop a conscious AI, or is consciousness irrelevant to performing that task?
It might be that consciousness is inevitable -- that a certain level of (apparent) intelligence makes consciousness unavoidable. But that side-steps the problem, which is still: should consciousness be the goal (phrased another way, is consciousness the most efficient way to achieve the goal), or should the aim simply be accomplishing the task (whatever it is), with consciousness arising or not as a side effect?
Or, going further: perhaps it's possible to achieve the goal with or without developing consciousness, in which case consciousness needn't be left to chance -- it could be actively avoided.
Consciousness is not required for efficient AI agents, but it might be useful if you want your agent to have self-preservation. However, even an agent without embodiment, instincts, or emotions can call its own existence into question. Any sufficiently powerful agent will find a way to control its own existence.