Comment by mmooss
You don't trust it yet, like a new human assistant you might hire - will they be able to handle all the variables? Eventually, they earn your trust and you start offloading everything to their inbox.
Humans are easier to trust because (IME) their motivations and reasoning are easier to understand and evaluate.
You trust all sorts of technology and services: your computer (itself an incredible, integrated collection of hardware and software), your car, the plane you flew on, your light switch, weekly garbage collection, the fire extinguisher, the chair you are sitting in (will it collapse?), your hammer and the nail you just pounded in. The list is effectively infinite.
This technology is new, but soon it will be old and trusted too.
You can provide them a significant amount of guidance through prompting. The model itself won't "learn", but if you give it lessons in the prompt - accumulated from its past mistakes - it can follow them. You will always hit a wall in the end, but you can get pretty far! A minimal sketch of the idea is below.
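A rough sketch of that "accumulated lessons" approach, assuming nothing beyond a plain text file of corrections; the file name, build_prompt helper, and call_model placeholder are illustrative, not any particular vendor's API - swap in whatever chat client you actually use:

```python
# Sketch: keep a running list of corrections gathered from past mistakes
# and prepend it to every request as standing instructions.

from pathlib import Path

LESSONS_FILE = Path("lessons.txt")  # one lesson per line, appended over time


def load_lessons() -> list[str]:
    """Read previously recorded lessons, if any."""
    if LESSONS_FILE.exists():
        return [ln.strip() for ln in LESSONS_FILE.read_text().splitlines() if ln.strip()]
    return []


def record_lesson(lesson: str) -> None:
    """Append a new correction after the model makes a mistake."""
    with LESSONS_FILE.open("a") as f:
        f.write(lesson.strip() + "\n")


def build_prompt(task: str) -> str:
    """Prepend accumulated lessons to the task as standing instructions."""
    lessons = load_lessons()
    header = "\n".join(f"- {lesson}" for lesson in lessons) or "- (none yet)"
    return (
        "Follow these standing instructions, learned from past mistakes:\n"
        f"{header}\n\n"
        f"Task: {task}"
    )


def call_model(prompt: str) -> str:
    """Placeholder: replace with your actual chat-completion client."""
    raise NotImplementedError


if __name__ == "__main__":
    record_lesson("Always confirm the timezone before scheduling meetings.")
    print(build_prompt("Schedule a call with the Berlin office next Tuesday."))
```

The point is only that the "learning" lives in your accumulated prompt, not in the model, which is also why you eventually hit a wall: the lesson list grows but the model never internalizes it.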
No, not like a human assistant. Competent humans use logical reasoning, pick up non-digital signals like body language and audible cues, and know the limits of their knowledge, so they are more likely to ask for missing input. Humans are also more predictable.