Comment by no-name-here 2 days ago
Do such instructions take up a tiny bit more attention/context from LLMs, and consequently is it better to leave them off and just ignore such output?
I have to balance this with what I know about my reptile brain. It's distracting to me when Claude declares that I'm "absolutely right!" or that I've made a "brilliant insight," so it's worth it to me to spend the couple of context tokens to tell it to avoid these cliches.
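For instance, here's a minimal sketch of the kind of instruction I mean, placed in a project's CLAUDE.md (the wording is my own, not anything official):

```markdown
<!-- CLAUDE.md — hypothetical example of anti-sycophancy instructions -->
- Do not open replies with flattery such as "You're absolutely right!"
  or "Brilliant insight."
- Skip praise and filler; respond directly to the technical content.
```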
(The latest Claude Code has a `/context` command that's great at measuring this stuff, btw)