portaouflop 2 hours ago

You can modify the output, but the underlying model is always susceptible to jailbreaks. A method I tried a couple of months ago to reliably get it to explain how to cook meth step by step still works. I'm not gonna share it; you just have to take my word on this.

riazrizvi 2 hours ago

I believe you, but AFAIK you only need to establish a safety standard where the end-user is required to jailbreak the model, in order to show you are protecting property in good faith.

randomNumber7 2 hours ago

Why is this so problematic? You can read all of this in old papers and patents that are available on the web.

And if you aren't capable of doing that, you will likely not succeed with the ChatGPT instructions either.