Comment by protocolture 5 hours ago

Yeah, I see things like "AI firewalls" as, firstly, ridiculously named, but also, the idea that you can slap an appliance (that's sometimes its own LLM) onto another LLM and pray that this will prevent errors strikes me as lunacy.

For tasks that aren't customer facing, LLMs rock. Human in the loop. Perfectly fine. But whenever I see AI interacting with someone's customer directly I just get sort of anxious.

Big one I saw was a tool that ingested a human's report on a safety incident, adjusted it with an LLM, and then posted the result to an OHS incident log. 99% of the time it's going to be fine; then someone's going to die, the log will have a recipe for spicy noodles in it, and someone's going to jail.

jonplackett 3 hours ago

The Air Canada chatbot that mistakenly told someone they could cancel and be refunded for a flight due to a bereavement is a good example of this. It went to court and they had to honour the chatbot's response.

It’s quite funny that a chatbot has more humanity than its corporate human masters.

  • delichon 7 minutes ago

    That policy would be fraudulently exploited immediately. So is it more humane or more gullible?

  • kebman 16 minutes ago

    Not AI, but a similar-sounding incident in Norway. Some traders found a way to exploit another company's trading bot at the Oslo Stock Exchange. The case went to court. And the court's ruling? "Make a better trading bot."

  • RobotToaster 34 minutes ago

    Chatbots have no fear of being fired; most humans would do the same in a similar position.

  • shinycode an hour ago

    What a nice side effect. Unfortunately they'll lock chatbots down with more barriers in the future, but that's the irony.