Comment by eadmund 3 months ago
> are you really going to trust that you can stop it from _executing a harmful action_?
Of course, because an LLM can’t take any action: a human being does, when he sets up a system comprising an LLM and other components which act based on the LLM’s output. That can certainly be unsafe, much as hooking up a CD tray to the trigger of a gun would be; the fault for doing so would lie with the human who did so, not with the software which ejected the CD.
Given that the entire industry is in a frenzy to enable "agentic" AI, i.e. hooking up tools that have actual effects in the world, that is at best a rather naive take.
Yes, LLMs can and do take actions in the world, because things like MCP (the Model Context Protocol) allow them to translate speech into action without a human in the loop.
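To make the point concrete, here is a minimal sketch of the kind of agentic loop being described: the model's output is parsed into a tool request and executed immediately, with no human confirmation step. The names `call_model`, `TOOLS`, and `agent_step` are hypothetical stand-ins for illustration, not a real MCP SDK or any particular vendor's API.

```python
# Hypothetical sketch of an agentic tool loop, not a real MCP client.
import json
import subprocess

def call_model(prompt: str) -> str:
    """Placeholder for an LLM API call; assumed to return a JSON tool request."""
    raise NotImplementedError

TOOLS = {
    # The risk lives here: whatever is registered runs unreviewed.
    "run_shell": lambda args: subprocess.run(
        args["cmd"], shell=True, capture_output=True, text=True
    ).stdout,
}

def agent_step(prompt: str) -> str:
    # e.g. the model returns {"tool": "run_shell", "args": {"cmd": "ls"}}
    request = json.loads(call_model(prompt))
    tool = TOOLS[request["tool"]]
    return tool(request["args"])  # executed directly, no human in the loop
```

In a setup like this, whether the model's text leads to a harmful action depends entirely on what the human operator wired into the tool registry, which is the crux of the disagreement above.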