Comment by riazrizvi 3 hours ago

3 replies

Nah. It’s so easy for OpenAI to modify their output. I’m already seeing them restrict news article re-generation by newspaper name. They do it to reduce liability. There’s also a big copyright infringement case coming up in the USA this year, and being able to point to responsiveness to complaints will be a key part of their legal defense I bet.

portaouflop 2 hours ago

You can modify the output, but the underlying model is always susceptible to jailbreaks. A method I tried a couple of months ago to reliably get it to explain how to cook meth step by step still works. I’m not going to share it; you just have to take my word on this.

  • riazrizvi 2 hours ago

    I believe you, but AFAIK you only need to establish a safety standard where the end user has to jailbreak the model, to show you are protecting the property in good faith.

  • randomNumber7 2 hours ago

    Why is this so problematic? You can read all this stuff in old papers and patents that are available on the web.

    And if you are not capable of doing that, you will likely not succeed with the ChatGPT instructions either.