Comment by behnamoh 2 days ago
Lots of research shows post-training dumbs down models, but no one listens, because people are too lazy to learn proper prompt programming and would rather have a model that already understands the concept of a conversation.
"Post-training" is too much of a conflation, because there are many post-training methods and each of them has its own quirky failure modes.
That being said, RLHF on user feedback data is model poison.
Users are NOT reliable model evaluators, and user feedback data should be handled with the same precautions you would apply to radioactive waste.
Professionals are not very reliable either, but users are far worse.