reaperman a day ago

Edit: wahnfrieden corrected me. I incorrectly posited that CoT was only included in the context window during the reasoning task and left out entirely afterward. Edited to remove potential misinformation.

monsieurbanana a day ago

In which case the model couldn't possibly know that the number was correct.

  • Me1000 a day ago

    I'm also confused by that, but it could just be the model being agreeable. I've seen multiple examples posted online, though, where it's fairly clear that the CoT output is not included in subsequent turns. I don't believe Anthropic is public about it (could be wrong), but I know the Qwen team specifically recommends against including CoT tokens from previous inferences.
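
    Concretely, that recommendation amounts to stripping the reasoning span out of earlier assistant turns before resending the history. A minimal sketch, assuming a Qwen-style model that wraps its reasoning in <think>...</think> tags (the tag format and helper name are illustrative, not from any official doc):

      import re

      # Drop <think>...</think> reasoning spans from earlier assistant
      # turns, so only the visible answers are resent on the next turn.
      THINK_RE = re.compile(r"<think>.*?</think>", flags=re.DOTALL)

      def strip_cot(messages):
          cleaned = []
          for msg in messages:
              if msg["role"] == "assistant":
                  msg = {**msg, "content": THINK_RE.sub("", msg["content"]).strip()}
              cleaned.append(msg)
          return cleaned

      history = [
          {"role": "user", "content": "What is 17 * 24?"},
          {"role": "assistant",
           "content": "<think>17*24 = 17*20 + 17*4 = 340 + 68 = 408</think>408"},
      ]
      # The next request is built from the cleaned history plus the new
      # question, so the model never sees its own earlier reasoning.
      next_turn = strip_cot(history) + [
          {"role": "user", "content": "Are you sure that number is correct?"},
      ]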

    • thomassmith65 a day ago

      Claude has some awareness of its CoT. As an experiment, it's easy, for example, to ask Claude to "think of a city, but only reply with the word 'ready'", and then to ask "what is the first letter of the city you thought of?"
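
      Here's roughly what that experiment looks like in code; a sketch only, assuming the Anthropic Python SDK (the model alias and token limit are assumptions):

        import anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

        setup = "Think of a city, but only reply with the word 'ready'."

        # Turn 1: the model picks a city but only says "ready".
        first = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=50,
            messages=[{"role": "user", "content": setup}],
        )

        # Turn 2: resend only the visible transcript and probe the "thought".
        second = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=50,
            messages=[
                {"role": "user", "content": setup},
                {"role": "assistant", "content": first.content[0].text},
                {"role": "user",
                 "content": "What is the first letter of the city you thought of?"},
            ],
        )
        # If nothing beyond "ready" was resent, a confident answer here is
        # confabulated rather than recalled from the first turn's thinking.
        print(second.content[0].text)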

wahnfrieden a day ago

No, the CoT is not simply extra context: the models are specifically trained to use CoT, and that training includes treating it as unspoken thought.

  • reaperman a day ago

    Huge thank you for correcting me. Do you have any good resources I could look at to learn how the previous CoT is included in the input tokens and treated differently?

    • wahnfrieden a day ago

      I've only read the marketing materials of closed models. So they could be lying, too. But I don't think CoT is something you can do with pre-CoT models via prompting and context manipulation. You can do something that looks a little like CoT, but the model won't have been trained specifically on how to make good use of it and will treat it like Q&A context.