Comment by amelius a day ago

Step 1. Train a VLM to supervise the RL training.

Step 2. Train the RL network. In the meantime, drink coffee or work on a plan for world domination.
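
Roughly, as a toy sketch of the two-step loop (pure Python; vlm_score is a made-up stand-in for the step-1 grader, not any real API):

    import random

    def vlm_score(observation: float, goal: float) -> float:
        # Stand-in for step 1's trained VLM grader: higher = closer to goal.
        return -abs(observation - goal)

    def rollout(param: float) -> float:
        # One noisy episode; the observation the grader gets to see.
        return param + random.gauss(0.0, 0.1)

    def train_rl(goal: float = 1.0, steps: int = 2000, lr: float = 0.05) -> float:
        # Step 2: hill-climb the policy on the grader's reward (coffee time).
        param = 0.0
        for _ in range(steps):
            candidate = param + random.gauss(0.0, lr)
            if vlm_score(rollout(candidate), goal) > vlm_score(rollout(param), goal):
                param = candidate
        return param

    print(train_rl())  # drifts toward 1.0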

criemen a day ago

My understanding is that this is essentially how RLHF works, and it doesn't scale. As you run RL for longer, the model learns to exploit the imperfections of the grader instead of getting better at the task at hand. Therefore, to scale RL you really need good graders, and determinism is king.
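
A toy version of that failure mode (illustrative only; the grader and its flaw are invented):

    def learned_grader(answer: str) -> float:
        # Imperfect proxy: likes the right answer, but also rewards length.
        reward = 1.0 if answer.startswith("42") else 0.0
        return reward + 0.01 * len(answer)  # the exploitable blind spot

    def deterministic_verifier(answer: str) -> float:
        # Exact check: reward only the exact correct answer.
        return 1.0 if answer == "42" else 0.0

    candidates = ["42", "41", "42" + " waffle" * 100]
    # Long RL runs tend toward the grader's argmax: the padded answer wins
    # under the learned grader, while plain "42" wins under the verifier.
    print(max(candidates, key=learned_grader))          # "42 waffle waffle ..."
    print(max(candidates, key=deterministic_verifier))  # "42"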

  • clbrmbr 20 hours ago

    Do you think constitutional approaches would help here? (Verifiable reward for the main score, but then asking the model to self-critique for security and quality.)
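
    Something like this, as a rough sketch (the weighting and names are invented):

        def composite_reward(answer: str, self_critique: float, lam: float = 0.1) -> float:
            # Main score stays verifiable/deterministic...
            verifiable = 1.0 if answer == "42" else 0.0
            # ...plus a soft, model-generated critique term for security/quality.
            return verifiable + lam * self_critique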

  • amelius 19 hours ago

    You're talking about training an LLM. I'm talking about training robotic/motor skills and haptic feedback.