Comment by staticman2 12 hours ago

I'm talking about taking the Llama 3 base model and finetuning it with a dataset that doesn't include refusals, not whatever you mean by "taking a model that's already been lobotomized".
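The approach described — finetuning a base model on a dataset with no refusals — implies first filtering refusal-style completions out of the training data. A minimal sketch in plain Python (the marker phrases and the record format are my assumptions, not any particular pipeline):

```python
# Sketch: strip refusal-style completions from a finetuning dataset
# before SFT on a base model. The refusal markers and the record
# shape ({"prompt": ..., "completion": ...}) are assumptions.

REFUSAL_MARKERS = [
    "i can't help with that",
    "i cannot assist",
    "as an ai",
    "i'm sorry, but",
]

def is_refusal(completion: str) -> bool:
    """Heuristic: flag completions that contain a refusal phrase near the start."""
    text = completion.strip().lower()
    return any(marker in text[:120] for marker in REFUSAL_MARKERS)

def filter_dataset(records: list[dict]) -> list[dict]:
    """Keep only records whose completion is not flagged as a refusal."""
    return [r for r in records if not is_refusal(r["completion"])]

data = [
    {"prompt": "Explain TCP slow start.",
     "completion": "TCP slow start ramps the congestion window up gradually..."},
    {"prompt": "Write a limerick.",
     "completion": "I'm sorry, but I can't help with that."},
]
clean = filter_dataset(data)  # only the first record survives
```

In practice a keyword heuristic like this is crude; real pipelines often use a classifier, but the principle — drop refusal completions before SFT — is the same.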

It's interesting that you weren't convinced by the above argument but still repeated the edgelord term "lobotomized" in your reply.

errantspark 10 hours ago

The claim is that Llama is "lobotomized" because it was trained with safety in mind. You can't untrain that by finetuning. For what it's worth, the non-instruct Llama generally seems better at reasoning than instruct Llama, which I think is a point in support of OP.