Comment by int_19h 4 days ago

Indeed. But for stuff like this, ChatGPT is overkill. It's better to get a dedicated RP finetune of LLaMA, Qwen, or some other open-weights model (you can still run it in the cloud if you don't have the hardware to run it locally). There are enough finetunes around by now that you can "dial in" how dark you want it. A couple of examples, with a loading sketch after the links:

https://huggingface.co/jukofyork/Dark-Miqu-70B

https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_7...
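
Running one of these is the same as running any other Hugging Face causal LM. A minimal sketch with transformers, assuming you have the hardware (a 70B model wants multiple GPUs or aggressive quantization; the model ID and prompt here are just illustrative):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "jukofyork/Dark-Miqu-70B"  # one of the finetunes linked above

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # needs the `accelerate` package; spreads layers across GPUs/CPU
        torch_dtype="auto",  # keep the dtype the checkpoint was saved in
    )

    prompt = "You are the narrator of a grim fantasy tale. Begin:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.9)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

If you don't have the VRAM, finetunes like these usually have GGUF quants you can run with llama.cpp, or you can rent a GPU and do the above in the cloud.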

emptysongglass 4 days ago

Just curious, how do you keep up to date on these models? Is there a community out there that discusses them?

    int_19h 3 days ago

    r/LocalLLaMA has the discussions, but on top of that, I just periodically browse new model lists on Hugging Face. There's a lot of stuff, but most low-effort finetunes tend to focus on small models (since that's much cheaper and faster), so if you only look at 70B+ ones, there's a lot less garbage there.
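
    If you'd rather script that browse, the huggingface_hub client can do it. A rough sketch (the "70B" name search is a crude stand-in for a size filter, since the listing API doesn't filter by parameter count):

        from huggingface_hub import HfApi

        api = HfApi()
        # Most recently updated models tagged "text-generation" whose names
        # mention "70B"; name matching is a heuristic, not a real size filter.
        for m in api.list_models(
            filter="text-generation",
            search="70B",
            sort="lastModified",
            direction=-1,
            limit=20,
        ):
            print(m.id)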