Comment by dhorthy
For the record I do think the AI community tries to unnecessarily reinvent the wheel on crap all the time.
sure, readme.md is a great place to put content. But there are things I'd put in a readme that I'd never put in a claude.md if we want to squeeze the most out of these models.
Further, claude/agents.md files have special quality-of-life mechanics in the coding agent harnesses, e.g. `injecting this file into the context window whenever an agent touches this directory, no matter whether the model wants to read it or not`
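To make that mechanic concrete, here's a minimal sketch of the idea, not any real harness's API. The names (`collect_agent_docs`, `build_context`, `DOC_NAMES`) are hypothetical; the assumption is simply that when the agent touches a file, every claude/agents.md from the repo root down to that file's directory gets prepended to the prompt, whether or not the model asked for it:

```python
from pathlib import Path

# Illustrative names only; not any real harness's API.
DOC_NAMES = ("CLAUDE.md", "AGENTS.md")

def collect_agent_docs(touched_file: Path, repo_root: Path) -> list[Path]:
    """Collect instruction files from repo_root down to the touched file's
    directory, outermost first, so the most specific doc lands last."""
    repo_root = repo_root.resolve()
    directory = touched_file.resolve().parent
    # Keep only directories inside the repo (the root itself and its descendants).
    chain = [d for d in (directory, *directory.parents)
             if d == repo_root or repo_root in d.parents]
    docs = []
    for d in reversed(chain):  # walk from the root down toward the file
        for name in DOC_NAMES:
            candidate = d / name
            if candidate.is_file():
                docs.append(candidate)
    return docs

def build_context(touched_file: Path, repo_root: Path, task: str) -> str:
    """Prepend every collected doc to the prompt, regardless of whether
    the model would have chosen to read it."""
    sections = [p.read_text() for p in collect_agent_docs(touched_file, repo_root)]
    sections.append(task)
    return "\n\n---\n\n".join(sections)
```

The point is that the injection is unconditional and directory-scoped, which is exactly the quality-of-life bit a plain readme doesn't get.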
> What people often forget about LLMs is that they are largely trained on public information which means that nothing new needs to be invented.
I don't think this is relevant at all. When you're working with coding agents, the more you can finesse and manage every token that goes into your model and how it's presented, the better the results you can get. And the public data that went into the models is near useless if you're working in a complex codebase, compared to the results you can get if you invest time in how context is collected and presented to your agent.
> For the record I do think the AI community tries to unnecessarily reinvent the wheel on crap all the time.
On Reddit's LLM subreddits, people are rediscovering the very basics of software project management as massive insights daily, or at the very least weekly.
Who would've guessed that proper planning, accessible and up-to-date documentation, and splitting tasks into manageable, testable chunks produce good code? Amazing!
Then they write a massive blog post or even some MCP monstrosity for it and post it everywhere as a new discovery =)