Comment by cypherfox 8 hours ago

Let me give a concrete example. I had a tool I built ten years ago on Rails 5.2. It's decent, mildly complex for a 1-man project, and I wanted to refresh it. Current Rails is 8. I've done upgrades before, and it's... rough when you're going more than one version up. It's _such_ a pain to get it right.

I pointed Claude Code at it, and a few hours later, it had done all of the hard work.

I babysat it, but I was doing other things while it worked. I didn't verify all the code changes (although I did skim the resultant PR, especially for security concerns), but it worked. It rewrote my extensive hand-rolled CoffeeScript into modern JavaScript, which was also nice; it did it perfectly. The tests passed, and it even uncovered some issues that I had it fix afterwards. (Places where my security settings weren't as good as they should have been, or edge cases I hadn't thought of ten years ago.)
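To give a flavor of the kind of translation involved (a made-up snippet, not from my actual codebase): CoffeeScript comprehensions, fat arrows, and string interpolation all have direct modern-JavaScript equivalents.

```javascript
// Hypothetical example of a CoffeeScript -> modern JavaScript rewrite.
//
// Original CoffeeScript (roughly):
//   activeNames = (u.name for u in users when u.active)
//   greet = (name) => "Hello, #{name}"

const users = [
  { name: "Ada", active: true },
  { name: "Bob", active: false },
  { name: "Cyn", active: true },
];

// A list comprehension becomes filter/map:
const activeNames = users.filter((u) => u.active).map((u) => u.name);
// activeNames is ["Ada", "Cyn"]

// A fat arrow with string interpolation becomes an arrow function
// with a template literal:
const greet = (name) => `Hello, ${name}`;
// greet("Ada") returns "Hello, Ada"
```

Mechanical on its own, but multiplied across a whole app's worth of hand-rolled CoffeeScript, it's exactly the tedious-but-verifiable work I was glad to hand off.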

Now could I have done this? Yes, of course. I've done it before with other projects. But it *SUCKS* to do manually. Some folks suggest that you should only use these tools for tasks you COULD do, but would be annoyed to do. I kind of like that metric, but I bet my bar for annoyance will go down over time.

My experience with these systems is that they aren't significantly faster, ultimately, but I hate the sucky parts of my job VASTLY less. And there are a lot of sucky parts to even the code-creation side of programming. I *love* my career and have been doing it for 36 years, but like anything that you're very experienced in, you know the parts that suck.

Like some others, it helps that my most recent role was Staff Software Engineer, so I was delegating and reviewing the results of other folks' work more than hand-rolling code. The 'suggest and review' pattern is one I'm very comfortable with, along with clearly separated, small-scale plan-and-execute steps.

Ultimately I find these tools reduce cognitive load, which makes me happier when I'm building systems, so I don't care as much if I'm strictly faster. If at the end of the day I made progress and am not exhausted, that's a win. And the LLM coding tools deliver that for me, at least.

One of the things I've also had to come to terms with _in large companies_ is that the code is __never__ high quality. If you drill into almost any part of a huge codebase, you're going to start questioning your sanity (obligatory 'Programming Sucks' reference). Whether it's a single complex 750-line C++ function at the heart of a billion-dollar payment system, or 2,000 lines in a single authentication function in a major CRM tool, or a microservice with complex deployment rules that exists just to unwrap a JWT, or 13 not-quite-identical date-time picker libraries in one codebase, the code in any major system is not universally high quality. But it works. And there are always *very good reasons* why it was built that way: those are the forces that were acting on the development team when it was built, you usually don't know them, and you mustn't be a jerk about it. Many folks new to a team don't get that, create a lot of friction, and learn Chesterton's Fence all over again.

Coming to terms with this over the course of my career has also made coming to terms with the output of LLMs being functional, but not high quality, easier. I'm sure some folks will call this 'accepting mediocrity' and that's okay. I'd rather ship working code. (_And to be clear, this is excepting security vulnerabilities and things that will lose data. You always review for those kinds of errors, but even for those, reviews are made somewhat easier with LLMs._)

N.b. I pay for Claude Code, but I regularly test local coding models on my ML server in my homelab. The local models and tooling are getting surprisingly good... but not there yet.