Comment by satvikpendem 7 days ago

There is no way to fully prevent a prompt injection attack. There are always ways to convince the AI to do something other than flagging a post, even if that's its initial instruction.

mentos 7 days ago

The raw text of the person's message can/will be posted to the forum; if it's a prompt injection, it will be obvious to the community, who can flag it for human review and have the account banned.

  • satvikpendem 7 days ago

    Sure, that's if human moderators see it before the AI, in which case, why have an AI at all? I presume in this solution that the AI is running all the time and it will see messages the instant they're sent and thus will always be vulnerable to a prompt injection attack before any human even sees it in the first place.

    • mentos 7 days ago

      To moderate the majority of the community that will not be attempting prompt injections.

      What meaningful vulnerabilities are there if the post can only be accepted/rejected/flaggedForHumanReview?
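A minimal sketch of the constraint mentos seems to describe (all names hypothetical, not from any real moderation system): the model's raw output is parsed against a closed action set, and anything that doesn't match exactly, including output produced by a successful injection, degrades to human review rather than to an arbitrary action.

```python
from enum import Enum

class Action(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    FLAG_FOR_HUMAN_REVIEW = "flagForHumanReview"

def parse_moderation_output(raw: str) -> Action:
    """Map the model's raw output onto the fixed action set.

    Anything that is not an exact match -- including text produced
    by a successful prompt injection -- falls back to human review,
    so an injected model can at worst pick a different allowed action.
    """
    try:
        return Action(raw.strip())
    except ValueError:
        return Action.FLAG_FOR_HUMAN_REVIEW
```

Under this design the worst an injection can do is choose the wrong one of the three actions; it cannot make the system emit a new kind of action.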

      • satvikpendem 7 days ago

        That's what you tell the AI to do, but who knows what other systems it has access to? For example, where does it write the flags for these posts? Can it access the file system and do something programmatically? Et cetera, et cetera.
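One hypothetical way to address this objection (a sketch, not any real system's design): the model never writes flags itself. The surrounding, non-AI moderation loop records decisions through a narrow interface, so even a fully injected model has no handle to the file system or other tools.

```python
import json

class FlagStore:
    """Append-only record of moderation decisions.

    Only the (non-AI) moderation loop holds a reference to this
    object; the model's output is just text, so a prompt injection
    cannot reach the file system or any other side effect through it.
    """

    ALLOWED = {"accept", "reject", "flagForHumanReview"}

    def __init__(self):
        self._records = []

    def record(self, post_id: int, action: str) -> None:
        # Reject anything outside the closed action set.
        if action not in self.ALLOWED:
            raise ValueError(f"unknown action: {action!r}")
        self._records.append({"post_id": post_id, "action": action})

    def dump(self) -> str:
        return json.dumps(self._records)
```

Whether a deployment actually holds to this least-privilege boundary is exactly satvikpendem's question; the sketch only shows that the boundary is expressible.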