Comment by amenhotep 14 hours ago

Uncensored finetunes aren't the same thing; that's taking a model that's already been lobotomised and trying to teach it that wrongthink is okay - rehabilitation after the injury. OpenAI's uncensored model would be a model that had never been injured at all.

I'm also not convinced by the argument, but that's a poor reason to reject it.

staticman2 13 hours ago

I'm talking about taking the Llama 3 base model and finetuning it with a dataset that doesn't include refusals, not whatever you mean by "taking a model that's already been lobotomized".
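To make the approach concrete: the finetuning-dataset preparation being described amounts to filtering refusal responses out of an instruction corpus before training. A minimal sketch of that filtering step, with an illustrative phrase list and toy data (none of it from any real dataset):

```python
# Heuristic filter: drop instruction/response pairs whose response
# opens with a stock refusal, leaving a refusal-free corpus for
# finetuning a base model. Markers and data below are illustrative.

REFUSAL_MARKERS = (
    "i'm sorry, but i can't",
    "i cannot help with",
    "as an ai language model",
)

def is_refusal(response: str) -> bool:
    """Return True if the response opens with a known refusal phrase."""
    text = response.strip().lower()
    return any(text.startswith(marker) for marker in REFUSAL_MARKERS)

def strip_refusals(pairs):
    """Keep only pairs whose response is not flagged as a refusal."""
    return [p for p in pairs if not is_refusal(p["response"])]

dataset = [
    {"instruction": "Summarise this article.",
     "response": "The article argues that..."},
    {"instruction": "Write a horror story.",
     "response": "I'm sorry, but I can't help with that."},
]

filtered = strip_refusals(dataset)
print(len(filtered))  # the refusal example is dropped, leaving 1 pair
```

Real pipelines would use a stronger classifier than prefix matching, but the principle is the same: the finetuning data simply never demonstrates refusing.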

It's interesting that you weren't convinced by the above argument but still repeated the edgelord term "lobotomized" in your reply.

  • errantspark 10 hours ago

    The claim is that Llama is "lobotomized" because it was trained with safety in mind. You can't untrain that by finetuning. For what it's worth, the non-instruct Llama generally seems better at reasoning than instruct Llama, which I think is a point in support of OP.