Comment by motorest
> In code bases like this you can't just wing it without breaking a million things, so LLM agents tend to perform really poorly.
This is a false premise. LLMs themselves don't force you to introduce breaking changes into your code.
In fact, coding agents were lauded from the start as a major improvement to the developer experience precisely because they let the LLM react automatically to feedback from test suites, speeding up implementation while preventing regressions.
If tweaking your code can break a million things, that is a problem with your code and with how little was done to make it resilient. LLMs can only introduce regressions if your automated tests fail to catch any of those million things breaking. If that is the case, your problems are far greater than the existence of LLMs, and at best LLMs are just pointing out the elephant in the room.
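To make that concrete, the loop most coding agents run is roughly the sketch below. This is a rough illustration, not any particular tool's real API; `propose_patch`, `apply_patch`, and `revert_patch` are hypothetical stand-ins for whatever the agent framework provides, and the test suite, not the model, decides whether a change lands.

```python
# Rough sketch of an agent edit loop gated by the project's own test suite.
# propose_patch / apply_patch / revert_patch are hypothetical placeholders.
import subprocess
from typing import Callable

def tests_pass() -> bool:
    # The existing test suite is the safety net, not the LLM.
    return subprocess.run(["pytest", "-q"]).returncode == 0

def agent_loop(task: str,
               propose_patch: Callable[[str], str],
               apply_patch: Callable[[str], None],
               revert_patch: Callable[[str], None],
               max_attempts: int = 5) -> bool:
    feedback = task
    for _ in range(max_attempts):
        patch = propose_patch(feedback)   # ask the model for a change
        apply_patch(patch)                # try it in the working tree
        if tests_pass():
            return True                   # regression-free, as far as the suite can tell
        revert_patch(patch)               # roll back and feed the failure back to the model
        feedback = task + "\nThe last attempt failed the test suite; fix the failing tests."
    return False
```

If the suite is too thin to catch the breakage, this loop happily "succeeds" while shipping regressions, which is exactly the point: the gap is in the tests, not in the agent.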