Comment by whatever1
But humans generalize very well across tasks. You can have an employee driving a forklift who stops to pick up a pallet blocking his way, then continues.
And robots will not do that either. What if the employee used hearing to check for a hazard (another moving vehicle nearby) before jumping out to pick up the pallet? How would the robot know just by "looking"? How should it prioritise vision, audio, and other senses?