EMM_386 4 days ago

This is great.

When I work with AI on large, tricky codebases, I try to set up a collaboration where it hands off to me the tasks that may burn a large number of tokens (excess tool calls, imprecise searches, verbose output, reading large files without a range specified, etc.).

This will help narrow down exactly which tasks to keep handling manually to best stay within token budgets.
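One of the handoffs mentioned above can be sketched concretely: reading only a line range from a large file instead of the whole thing. A minimal sketch (the function name and range convention are illustrative, not from any specific tool):

```python
def read_range(path: str, start: int, end: int) -> str:
    """Return lines start..end (1-indexed, inclusive) of a file.

    Streams the file line by line, so a huge file never has to be
    loaded into memory (or into a model's context) all at once.
    """
    with open(path) as f:
        return "".join(
            line for i, line in enumerate(f, 1) if start <= i <= end
        )
```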

Note: "yourusername" in the git clone line of the install instructions should be replaced.

winchester6788 4 days ago

I had a similar problem: when Claude Code (or Codex) is running in a sandbox, I wanted to put a cap on large contexts, or at least get notified.

Especially because once x0K words are crossed, the output gets worse.

https://github.com/quilrai/LLMWatcher

I made this Mac app for the same purpose. Any thoughts would be appreciated.

cedws 4 days ago

I've been trying to get token usage down by instructing Claude to stop being so verbose (announcing what it's going to do beforehand, recapping what it just did, spitting out pointless file trees), but it ignores my instructions. It could be that the model is just hard to steer away from doing that... or Anthropic wants it to waste tokens so you burn through your usage quickly.

  • egberts1 4 days ago

    Simply assert that:

    you are a professional (insert concise occupation).

    Be terse.

    Skip the summary.

    Give me the nitty-gritty details.

    You can send all that using your AI client settings.
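The terse-mode instructions above can be composed into a system prompt programmatically. A minimal sketch assuming the Anthropic Messages API shape; the occupation, model id, and helper name are placeholders, and the SDK call itself is only referenced in the docstring rather than executed:

```python
# egberts1's instructions, joined into one system prompt.
TERSE_SYSTEM_PROMPT = "\n".join([
    "You are a professional systems engineer.",  # insert your occupation
    "Be terse.",
    "Skip the summary.",
    "Give me the nitty-gritty details.",
])

def build_request(user_message: str) -> dict:
    """Request body for client.messages.create(**build_request(...))."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model id
        "max_tokens": 1024,
        "system": TERSE_SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_message}],
    }
```

Sending the instructions via the `system` field keeps them out of the visible conversation and applies them to every turn.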

kej 4 days ago

Would you mind sharing more details about how you do this? What do you add to your AI prompts to make it hand those tasks off to you?

jmuncor 4 days ago

Hahaha, just fixed it, thank you so much! Think of extending this into a prompt admin; I'm sure there is a lot of trash the system sends on every query, and I think we can improve that.