blackbear_ 4 days ago

They mention reinforcement learning, so I guess they used some sort of Monte Carlo tree search (the same algorithm used for AlphaGo).

In this case, the model would explore several chains of thought during training, but only output a single chain during inference (as the sibling comment suggests).
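
To make the train/inference distinction concrete, here is a toy sketch (purely illustrative; sample_chain and reward are hypothetical stand-ins, not anything the announcement confirms):

  import random

  def sample_chain(prompt, rng):
      # Stand-in for sampling one chain of thought from the policy.
      return f"{prompt} -> step {rng.randint(0, 9)}"

  def reward(chain):
      # Stand-in for a verifier / learned value model scoring a chain.
      return len(chain)

  def train_step(prompt, n_rollouts=8, seed=0):
      # Training: explore several candidate chains...
      rng = random.Random(seed)
      chains = [sample_chain(prompt, rng) for _ in range(n_rollouts)]
      # ...and reinforce the policy toward the best-scoring one.
      return max(chains, key=reward)

  def inference(prompt):
      # Inference: emit a single chain only.
      return sample_chain(prompt, random.Random())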

whimsicalism 4 days ago

as someone who works in this field: this comment is obviously uninformed, even about old public research trends

  • ricardobeat 4 days ago

    Care to elaborate? Your comment would be a lot more useful if it explained why. Otherwise it's just teasing readers while smearing the author without anything to back it up.

    • whimsicalism 4 days ago

      reinforcement learning with ppo doesn’t involve mcts and has been the bread and butter of aligning LLMs since 2020. nothing about saying they use rl implies mcts
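
      to be concrete, the core ppo update is just a clipped policy-gradient step over tokens sampled from the model, with no search anywhere. schematic pytorch sketch, not any lab's actual training code:

        import torch

        def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
            # clipped ppo surrogate: reweight sampled tokens by advantage,
            # clipping the probability ratio so updates stay near the old policy.
            ratio = torch.exp(logp_new - logp_old)
            unclipped = ratio * advantages
            clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
            return -torch.min(unclipped, clipped).mean()

      in rlhf-style alignment the advantages come from a reward model score plus a kl penalty against the reference policy; still no tree search.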

      • janalsncm 4 days ago

        > nothing about saying they use rl implies mcts

        We can say the same thing about RL implying PPO. However, there are pretty big hints here, namely Noam Brown being involved. Many of the things Noam Brown has worked on involve RL in tree-search contexts.

        He has also consistently advocated using additional test-time compute to solve search problems, which is consistent with the messaging around the reasoning tokens. There is likely some learned tree search involved, such as a learned policy/value function as in AlphaGo.
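
        To show what that speculation would mean mechanically, here is a minimal AlphaGo-style MCTS sketch. policy_value (a learned policy/value net) and step_fn (an environment step) are hypothetical stand-ins; nothing here is confirmed about their system:

          import math

          class Node:
              def __init__(self, prior):
                  self.prior, self.visits, self.value_sum = prior, 0, 0.0
                  self.children = {}  # action -> child Node

              def value(self):
                  return self.value_sum / self.visits if self.visits else 0.0

          def select(node, c_puct=1.5):
              # PUCT rule as in AlphaGo: exploit mean value, explore by prior.
              total = math.sqrt(node.visits + 1)
              return max(node.children.items(),
                         key=lambda kv: kv[1].value()
                         + c_puct * kv[1].prior * total / (1 + kv[1].visits))

          def simulate(root, state, policy_value, step_fn):
              # One simulation: descend with PUCT, expand the leaf with the
              # learned policy's priors, then back up the learned value.
              path, node = [root], root
              while node.children:
                  action, node = select(node)
                  state = step_fn(state, action)
                  path.append(node)
              priors, value = policy_value(state)  # hypothetical learned net
              node.children = {a: Node(p) for a, p in priors.items()}
              for n in path:
                  n.visits += 1
                  n.value_sum += value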

        It’s all speculation until we have an actual paper. So we can’t categorically say MCTS/learned tree search isn’t involved.