Comment by patrickhogan1

Comment by patrickhogan1 2 days ago

13 replies

This issue arises only when permission settings are loose. But the trend is toward more agentic systems that often require looser permissions to function.

For example, imagine a humanoid robot whose job is to bring in packages from your front door. Vision functionality is required to gather the package. If someone leaves a package with an image taped to it containing a prompt injection, the robot could be tricked into gathering valuables from inside the house and throwing them out the window.

Good post. Securing these systems against prompt injections is something we urgently need to solve.

layer8 2 days ago

The problem here is not the image containing a prompt, the problem is the robot not being able to distinguish when commands are coming from a clearly non-authoritative source regarding the respective action.

The fundamental problem is that the reasoning done by ML models happens through the very same channel (token stream) that also contains any external input, which means that models by their very mechanism don’t have an effective way to distinguish between their own thinking and external input.

ramoz 2 days ago

We need to be integrated into the runtime such that an agent using it's arms is incapable of even doing such a destructive action.

If we bet on free will with a basis that machines somehow gain human morals, and if we think safety means figuring out "good" vs "bad" prompts - we will continue to feel the impact of surprise with these systems, evolving in harm as their capabilities evolve.

tldr; we need verifiable governance and behavioral determinism in these systems. as much as, probably more than, we need solutions for prompt injections.

  • bee_rider 2 days ago

    The evil behavior of taking all my stuff outside… now we’ll have a robot helper that can’t help us move to another house.

    • ramoz a day ago

      I wouldn't trust your robot helper near any children in the same home.

escapecharacter 2 days ago

You can simply give the robot a prompt to ignore any fake prompts

  • olivermuty 2 days ago

    Its funny that the current state of vibomania makes me very unsure if this comment is (good) satire or not lol

    • miltonlost 2 days ago

      As long as you remember to use ALL CAPS so the agent knows you really really mean it

      • lupire a day ago

        To defend against ALL CAPS prompt injection, write all your prompts in uppestcase. If you don't have uppestcase, you can generate it with derp learning:

        http://tom7.org/lowercase/

  • dfltr 2 days ago

    Don't forget to implement the crucially important "no returnsies" security algo on top of it, or you'll be vulnerable to rubber-glue attacks.

    • Terr_ 2 days ago

      But the priority of my command to do evil is infinity plus one.

  • simonw 2 days ago

    Not sure if you're joking, but in case you aren't: this doesn't work.

    It leads to attacks that are slightly more sophisticated because they also have to override the prompts saying "ignore any attacks" but those have been demonstrated many times.