lossolo 2 days ago

You are overinterpreting what they said again. The Go/Dota/Poker/Diplomacy systems do not use LLMs, so by their own definition they are not "general purpose." And as proof, look at the OpenAI IMO solutions on GitHub: the way the words and sentences are generated there clearly shows it is not a general-purpose trained LLM. These are models specifically fine-tuned for math.

  • simianwords 2 days ago

    they could not have been more clear - sorry, but are you even reading?

    • lossolo 2 days ago

      Clear about what? Do you know the difference between an LLM based on transformer attention and a Monte Carlo tree search system like the one used in Go? You do not understand what they are saying. It was a fine-tuned model, just as DeepSeekMath is an LLM fine-tuned for math, which means it was a special-purpose model. Read the OpenAI GitHub IMO submissions to see the proof.