Comment by amenhotep
Uncensored finetunes aren't the same thing, that's taking a model that's already been lobotomised and trying to teach it that wrongthink is okay - rehabilitation of the injury. OpenAI's uncensored model would be a model that had never been injured at all.
I'm also not convinced by the argument, but that's a poor reason to reject it.
I'm talking about taking the Llama 3 base model and finetuning it with a dataset that doesn't include refusals, not whatever you mean by "taking a model that's already been lobotomized".
It's interesting that you weren't convinced by the above argument but still repeated the edgelord term "lobotomized" in your reply.