Comment by simianwords 2 days ago
I think you're being misleading as well. Thinking models do recursively generate the final "best" prompt to get the most accurate output. Unless you're genuinely adding new, useful information, structuring the prompt one way or another is largely pointless, because reasoning models can generate the intermediate steps that produce the best output themselves. The evidence on this is clear: benchmarks show that thinking models perform far better.
You're both partly right. Order matters less for reasoning models, but if you read thinking traces carefully you'll find that the final answer sometimes differs from the last intermediate result. On slightly harder problems LLMs flip-flop quite a bit, and ordering the output cleverly can lift the result. That may stop being true for newer or future models, but I iterated on this quite a bit for Sonnet 4.
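To make the flip-flop claim concrete, here is a minimal sketch of how one might check a trace for it. The trace format and the `Answer:` marker are hypothetical assumptions for illustration, not any real model's output format or API:

```python
import re

def detect_flip_flop(trace: str, final_answer: str) -> bool:
    """Return True if the last intermediate answer in a thinking
    trace differs from the model's final answer.

    Assumes (hypothetically) that intermediate conclusions appear
    on lines like 'Answer: <value>' inside the trace.
    """
    intermediates = re.findall(r"Answer:\s*(.+)", trace)
    if not intermediates:
        return False  # nothing to compare against
    return intermediates[-1].strip() != final_answer.strip()

# Toy trace: the model lands on 41 mid-trace but finally outputs 42.
trace = """Let me try a direct computation.
Answer: 42
Hmm, re-checking the carry...
Answer: 41
Actually the first approach double-counted."""
print(detect_flip_flop(trace, "42"))  # last intermediate (41) != final (42)
```

Running a check like this over a batch of traces is one cheap way to estimate how often a model's final answer diverges from its last intermediate conclusion.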