Comment by jillesvangurp 9 hours ago
All of this is subjective. What does it mean for code to be high quality?
If you can express that in a form that can be easily tested, you can just instruct an agentic coding tool to do something about it. Most of my experience is with codex. Every time I catch it doing something I don't like, I try to codify it in a skill, in my Agents.md, or in a test. I've been using codex specifically to address technical debt in my own code bases. There's a lot of stuff I never got around to fixing that I'm now actually tackling, because it stopped being a monster project that would take weeks. You can actually nudge a code base in the right direction with agentic coding tools.
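To give a sense of what codifying those preferences looks like, here's a hypothetical Agents.md fragment (not my actual file, just the shape of it):

```
## Conventions for this repository

- Do not add new runtime dependencies without asking first.
- Every bug fix gets a regression test that fails before the fix and passes after.
- Prefer extending an existing module over creating a near-duplicate one.
- Run the linter and the full test suite before proposing a diff.
```

Once a rule lives in a file like that, the tool picks it up on every run instead of you repeating it in prompts.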
The same things that make it hard for people to iterate on code bases (complexity, technical debt, poor architectural decisions, etc.) also make it hard for LLMs to work on code bases. So, as soon as you start working on making those things better, you might get better results.
If you have a lot of regressions when iterating with an LLM, you don't have good enough regression tests. If your code produces runtime type errors, maybe use something with a better type checker that can catch those bugs before they happen. If you see a lot of duplication, tell it to do something about it, and/or use code quality tools that flag such issues and have it address what they report. This stuff requires a bit of discipline and skill, but these are fixable problems. And the usual excuse that you can't be bothered doesn't apply here; just make the coding tools fix it for you.
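On the regression-test point, even a tiny characterization test goes a long way. A minimal pytest sketch, where the module and the expected numbers are made up for illustration:

```python
# Characterization test: pin down current behaviour before letting an
# agent refactor the code underneath it. Names and values are hypothetical.
import pytest

from invoicing import total_with_vat  # hypothetical module under test


@pytest.mark.parametrize(
    "net, rate, expected",
    [
        (100.0, 0.19, 119.0),
        (0.0, 0.19, 0.0),
        (49.99, 0.07, 53.49),
    ],
)
def test_total_with_vat_matches_current_behaviour(net, rate, expected):
    assert total_with_vat(net, rate) == pytest.approx(expected, abs=0.01)
```

If an agent's refactor breaks one of these, you find out in the feedback loop rather than in production.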
As for evidence, the amount of money being spent on these tools by well-respected people in the industry keeps increasing. That might not be the kind of evidence you'd like, but it's a clear indication that people are getting some value out of them.
I'm definitely getting more predictable results. I find myself merging most proposed changes after a few iterations, and that percentage has been trending up over the last few months. I can only speak for myself, but essentially everybody I know and respect is using this stuff at this point, with very mixed results. But people are getting shit done. I think there are lots of things to improve with these tools. I'd like them to be faster and require less micromanagement. I'd like them to work across multiple repositories and issue trackers instead of suffering from perpetual tunnel vision. Mostly when I get bad results, it's a context problem. Some of these things are frustrating to fix. But in the end this is about good feedback loops, not about models magically getting what you want.