Comment by conartist6 19 hours ago

I can only assume you meant to write "shouldn't" instead of "should", but if you study human factors you'll discover that certain kinds of shortcut-taking behavior are inevitable when dealing with humans. Speeding when we drive, for example. We know we are creating a material risk of getting pulled over and fined, but we basically decide to ignore that risk because, for most of us, it is outweighed by the convenience (and real value) of getting everywhere we're going faster.

As always, considering how a person would interact with an intern is surprisingly instructive as to how they will form a working relationship with a non-sentient tool like a language model. You would expect them to give it a probationary period to earn their trust, after which, if they are satisfied, they will almost certainly express that trust by giving the tool a greater and greater degree of freedom with less active (and less critical) oversight.

It is not the initial state that worries me, where the officers still mistrust a new technology and are vigilant about it. What worries me is the late stage where they have learned to trust it (because it has learned to cover their asses correctly) and the AI itself actually ends up exercising power in human social structures, because people have a surprising bias towards not speaking up when it would be safer to keep their heads down and go with the flow, even when the flow is letting AI take operational control of society inch by inch.