Comment by lifetimerubyist
Comment by lifetimerubyist 19 hours ago
Prompt injection will never be "solved". It will always be a threat.
Comment by lifetimerubyist 19 hours ago
Prompt injection will never be "solved". It will always be a threat.
Firewalls run on explicit rules. The "lethal trifecta" thing tells you how to constrain an LLM to enforce some set of explicit rules.
It only tells you that you can't secure a system using an LLM as a component without completely destroying any value provided by using the LLM in the first place.
Prompt injection cannot be solved without losing the general-purpose quality of an LLM; the underlying problem is also the very feature that makes LLMs general.
Correct, because it's an exploit on intelligence, borderline intelligence or would-be intelligence. You can solve it by being an unintelligent rock. Failing that, if you take in information you're subject to being harmed by mal-information crafted to mess you up as an intelligence.
As they love to say, do your own research ;)
9 years into transformers and only a couple years into highly useful LLMs I think the jury is still out. It certainly seems possible that some day we'll have the equivalent of an EDR or firewall, as we do for viruses and network security.
Not perfect, but good enough that we continue to use the software and networks that are open enough that they require them.