Comment by patrickhogan1
Comment by patrickhogan1 2 days ago
This issue arises only when permission settings are loose. But the trend is toward more agentic systems that often require looser permissions to function.
For example, imagine a humanoid robot whose job is to bring in packages from your front door. Vision functionality is required to gather the package. If someone leaves a package with an image taped to it containing a prompt injection, the robot could be tricked into gathering valuables from inside the house and throwing them out the window.
Good post. Securing these systems against prompt injections is something we urgently need to solve.
The problem here is not the image containing a prompt, the problem is the robot not being able to distinguish when commands are coming from a clearly non-authoritative source regarding the respective action.
The fundamental problem is that the reasoning done by ML models happens through the very same channel (token stream) that also contains any external input, which means that models by their very mechanism don’t have an effective way to distinguish between their own thinking and external input.