Comment by int_19h
Indeed. But for stuff like this, ChatGPT is overkill. It's better to get a dedicated RP finetune of LLaMA, Qwen, or some other open-weights model (you can still run it in the cloud if you don't have the hardware to run it locally). There are enough finetunes around by now that you can "dial in" how dark you want it. A couple of examples:
https://huggingface.co/jukofyork/Dark-Miqu-70B
https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_7...
Just curious, how do you keep up to date on these models? Is there a community out there that discusses them?