Comment by warkdarrior 2 days ago
So you have some hierarchy of LLMs. The first LLM that sees the prompt is vulnerable to prompt injection.
It can still be injected to delegate in a different way than the user would expect/want it to.
The first LLM only knows to delegate and cannot respond.
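For concreteness, a minimal sketch of such a delegation-only router — all names, including the `call_llm` stub, are hypothetical. The router's output is constrained to a fixed set of delegate names, so it cannot "respond" with free text; but as the comments note, an injected prompt can still steer it toward a *valid but wrong* delegate:

```python
# Hypothetical delegation-only router: the first LLM may only emit a
# delegate name, never free-form text back to the user.
ALLOWED_DELEGATES = {"search", "code", "chat"}

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call. A prompt injection cannot make the
    # router answer directly, but it could still bias this choice.
    return "code"

def route(user_prompt: str) -> str:
    choice = call_llm(
        f"Pick exactly one of {sorted(ALLOWED_DELEGATES)} for: {user_prompt}"
    ).strip()
    # Constrain the output: anything outside the allow-list is rejected,
    # which blocks direct responses but not mis-delegation.
    if choice not in ALLOWED_DELEGATES:
        raise ValueError(f"invalid delegate: {choice!r}")
    return choice
```

The allow-list check narrows the attack surface to "which delegate gets picked," which is exactly the residual risk described above.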