Comment by mentos 7 days ago

To moderate the majority of the community that will not be attempting prompt injections.

What meaningful vulnerabilities are there if the post can only be accepted/rejected/flaggedForHumanReview?

satvikpendem 7 days ago

That's what you tell the AI to do, but who knows what other systems it has access to? For example, where is it writing the flags for these posts? Can it access the file system and do something programmatically? Et cetera, et cetera.

  • mentos 6 days ago

    The same way OpenAI offers its service to hundreds of millions of users without compromising any other systems it’s running on.

    • satvikpendem 6 days ago

      OpenAI doesn't allow write access to any file system. If you are recording posts to be reviewed, then you must necessarily store that information somewhere, at which point you will be allowing the AI to access some sort of data storage system, whether it be a file system or a database.

      • dijit 6 days ago

        Is that really an issue in practice?

        I'm sure you can coax OpenAI into sending an HTTP request, at which point you can just queue up automated reports.

        • cutemonster 6 days ago

          No, it's not. Well, if you design the system badly it can be, but that can be said about anything.

          There's no need to do this: (from GP)

          > > at which point you will be allowing the AI to access

          No need to allow the AI to access anything.

          Send it the comment thread, what the forum is about, and the user's profile text, and then the AI outputs a number. Any security problem is then because of bugs the humans wrote in their code.

          Prompt injection? Yes, so there still need to be ways to report comments manually, and to review them.
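The constrained design described in the comment above can be sketched roughly as follows (all names here are hypothetical, and the model call is stubbed out): the model only ever sees text and returns one of a closed set of verdicts, so even a successful prompt injection can at worst mis-route a single post, never touch storage.

```python
# Sketch of a read-only moderation flow. The model is handed text and
# asked for a verdict; application code, not the model, does any writing.

ALLOWED_VERDICTS = {"accept", "reject", "flag_for_human_review"}

def build_prompt(forum_description, thread, profile_text, post):
    """Assemble the read-only context the model is shown."""
    return (
        f"Forum: {forum_description}\n"
        f"Author profile: {profile_text}\n"
        f"Thread:\n{thread}\n"
        f"Post to moderate:\n{post}\n"
        "Reply with exactly one word: accept, reject, or flag_for_human_review."
    )

def moderate(model, forum_description, thread, profile_text, post):
    """Call the model and coerce its output into the closed verdict set.

    Anything unexpected -- including a prompt injection that talks the
    model into free-form output -- degrades safely to human review.
    """
    raw = model(build_prompt(forum_description, thread, profile_text, post))
    verdict = raw.strip().lower()
    return verdict if verdict in ALLOWED_VERDICTS else "flag_for_human_review"

# Example with a stub standing in for the LLM call:
stub = lambda prompt: "ACCEPT "
print(moderate(stub, "gardening forum", "(thread)", "(profile)", "Nice tomatoes!"))
```

The point of the design is that the blast radius of an injected prompt is limited to one wrong verdict, which the manual-report path mentioned above can still catch.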

      • mentos 6 days ago

        CustomGPTs have write access to change their name and icon. OpenAI has a memory feature which persists between chat sessions. What are you talking about?