Comment by ori_b
Why would we argue if the machine is better at knowing what's worth doing? Why wouldn't we ask the machine to decide, and then do it?
For what human leverage of AGI may look like, look at the relationship between a mother and a toddler.
As you said: there's an infinite number of things a toddler may find worth doing, and they offload most of the execution to the mother. The mother doesn't escape the ambiguity either, but she has more experience and context.
Of course, this all assumes AGI is coming and super intelligent.
Well, because people are lazy. They already ask it for advice and it gives answers that they like. I already see teams using AI to put together development plans.
If you assume superintelligence, why wouldn't that expand? Especially when it comes to competitive decisions that have a real cost when they're suboptimal.
The end state is that agents will do almost all of the real decision making, assuming things work out as the AI proponents say.
There are infinite things worth doing, and a machine's ability to actually know what's worth doing in any given scenario is likely on par with a human's. What's "worth doing" is subjective; everything comes down to situational context. Machines cannot escape the same ambiguity humans face. If context is held constant, I would expect overlapping performance between humans and machines on a pretty standard distribution.
Machines lower the marginal cost of performing cognitive tasks, so it can be extremely useful and high-leverage to offload certain decisions to them. I think it's reasonable to ask a machine to decide when the machine's context is higher and the outcome is de-risked.
Human leverage of AGI comes down to good judgment, but that too is not uniformly applied.