behnamoh 2 days ago

Lots of research shows that post-training dumbs down models, but no one listens because people are too lazy to learn proper prompt programming and would rather have a model that already understands the concept of a conversation.

ACCount37 2 days ago

"Post-training" is too much of a conflation, because there are many post-training methods and each of them has its own quirky failure modes.

That being said? RLHF on user feedback data is model poison.

Users are NOT reliable model evaluators, and user feedback data should be treated with the same level of precaution you would treat radioactive waste.

Professionals are not very reliable either, but users are so much worse.

CuriouslyC 2 days ago

Some distributional collapse is good in terms of making these things reliable tools. Creativity and divergent thinking do take a hit, but humans are better at those anyhow, so I view it as a net win.

  • ACCount37 2 days ago

    This. A default LLM is "do whatever seems to fit the circumstances". An LLM that was RLVR'd heavily? "Do whatever seems to work in those circumstances".

    Very much a must for many long-term and complex tasks.

CGMthrowaway 2 days ago

How do you take a raw model and use it without chatting? Asking as a layman.

  • swatcoder 2 days ago

    You lob it the beginning of a document and let it toss back the rest.

    That's all that the LLM itself does at the end of the day.

    All the post-training to bias results, the routing between models, the tool calling for command execution and text insertion, the injected "system prompts" that shape the user experience, etc. are just layers built on top of the "magic" of text completion.

    And if your question was more practical: where it's made available, you get access to that underlying layer via an API or through a self-hosted model, making use of it with your own code or with a third-party site/software product. For example:
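    A minimal sketch of that, using the Hugging Face `transformers` library; the model name, prompt, and sampling parameters here are illustrative, not anything swatcoder specified:

    ```python
    # Raw completion: hand a base (non-chat) model the start of a
    # document and let it write the rest. "gpt2" is just an example
    # of a base checkpoint with no chat post-training.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # Lob it the beginning of a document...
    prompt = "The three most common causes of memory leaks in C are:"

    # ...and let it toss back the rest.
    out = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)
    print(out[0]["generated_text"])
    ```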

  • behnamoh 2 days ago

    The same way we used GPT-3: "The following is a conversation between the user and the assistant. ..."
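    A sketch of that trick, reusing the same kind of completion call as above; the prompt wording and the stop handling are illustrative assumptions:

    ```python
    # GPT-3-style chat over a pure completion model: frame the dialogue
    # as a document prefix and let the base model continue it.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # any base model

    prompt = (
        "The following is a conversation between the user and the assistant.\n"
        "User: What is a raw language model?\n"
        "Assistant:"
    )

    out = generator(prompt, max_new_tokens=80, do_sample=True)
    text = out[0]["generated_text"]

    # The model will happily keep writing both sides of the dialogue,
    # so cut its reply off at the next "User:" turn ourselves.
    reply = text[len(prompt):].split("\nUser:")[0].strip()
    print(reply)
    ```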

nomel 2 days ago

The "alignment tax".