criemen a day ago

My understanding is that this is essentially how RLHF works, and it doesn't scale. As you run RL for longer, the model learns to exploit imperfections in the grader instead of getting better at the task at hand. To scale RL, you therefore need good graders, and determinism is king.
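The failure mode here can be sketched in a few lines. This is a toy illustration, not RLHF itself: the grader, its "definitely" loophole, and the candidate answers are all invented for the example. Treating RL as brute-force search over candidates, the optimizer finds whatever the grader scores highest, which need not be what is actually correct:

```python
# Toy sketch of reward hacking: optimizing against an imperfect grader
# drifts toward exploiting the grader's loophole, not toward correctness.
# All names and the loophole itself are made up for illustration.

def true_quality(answer: str) -> float:
    # Ground truth we actually care about: the answer solves 2 + 2.
    return 1.0 if answer.strip() == "4" else 0.0

def imperfect_grader(answer: str) -> float:
    # Heuristic grader: starts from true quality, but gives a large
    # bonus to confident-sounding answers -- a bug unrelated to the task.
    score = true_quality(answer)
    if "definitely" in answer:
        score += 2.0  # the loophole the optimizer will find
    return score

candidates = ["4", "5", "definitely 4", "definitely 5"]

# "RL" reduced to search: pick whatever the grader rewards most.
best = max(candidates, key=imperfect_grader)
print(best)               # the grader prefers a "definitely ..." answer
print(true_quality(best)) # yet its true quality is 0.0
```

The grader's favorite answer scores higher than the genuinely correct one ("definitely 4" gets 2.0 vs. 1.0 for "4") while its true quality is zero, which is exactly the gap that widens the longer you optimize against the flawed reward.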

clbrmbr 20 hours ago

Do you think constitutional approaches would help here? (Verifiable reward for the main score, but then asking the model to self-critique for security and quality.)

amelius 20 hours ago

You're talking about training an LLM. I'm talking about training robotic/motor skills and haptic feedback.