Comment by kasey_junk 3 days ago

What’s irritating is that the LLMs haven’t learned this about themselves yet. If you ask an LLM to improve its instructions, those are the sorts of improvements it will suggest.

It is the thing I find most irritating about working with LLMs and agents: they seem forever a generation behind in capabilities that are self-referential.

danielbln 3 days ago

LLMs will also happily put time estimates on work packages that are based on pre-LLM turnaround times.

"Phase 2 will take about one week"

No, Claude, it won't, because you you and I will bang this thing out in a few hours.

  • mceachen 3 days ago

    "Refrain from including estimated task completion times." has been in my ~/.claude/CLAUDE.md for a while. It helps.

    • no-name-here 2 days ago

      Do such instructions take up a tiny bit more attention/context from the LLM, and consequently is it better to leave them out and just ignore such output?

      • mceachen 2 days ago

        I have to balance this with what I know about my reptile brain. It’s distracting to me when Claude declares that I’m “absolutely right!” or that I’ve made a “brilliant insight,” so it’s worth it to me to spend the couple of context tokens and tell it to avoid these clichés.

        (The latest Claude has a `/context` command that’s great at measuring this stuff btw)
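
        For reference, a minimal sketch of what such a ~/.claude/CLAUDE.md section could look like; the first line is the instruction quoted above, and the second is only illustrative wording for the anti-cliché instruction, not a verbatim quote:

            # Output style
            - Refrain from including estimated task completion times.
            - Avoid filler praise such as "You're absolutely right!" or "brilliant insight."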

conorcleary 2 days ago

Comments like yours on posts like these by humans like us will create a philosophical lens out of the ether that future LLMs will harvest for free and then paywall.