qsort 3 days ago

> Important: there is a lot of human coding, too.

I'm not highlighting this to gloat or to prove a point. If anything in the past I have underestimated how big LLMs were going to be. Anyone so inclined can take the chance to point and laugh at how stupid and wrong that was. Done? Great.

I don't think I've been intentionally avoiding coding assistants; in fact, I've been using Claude Code since the literal day it first previewed. And yet it doesn't feel, not even one bit, like you can take your hands off the wheel. Many are acting as if writing any code manually means "you're holding it wrong", which I feel is just not true.

  • simonw 3 days ago

    Yeah, my current opinion on this is that AI tools make development harder work. You can get big productivity boosts out of them but you have to be working at the top of your game - I often find I'm mentally exhausted after just a couple of hours.

    • dotinvoke 3 days ago

      My experience with AI tools is the opposite. The biggest energy thieves for me are configuration issues, library quirks, or trivial mistakes that are hard to spot. With AI I can often just bulldoze past those things and spend more time on tangible results.

      When using it for code or architecture or design, I’m always watching for signs that it is going off the rails. Then I usually write code myself for a while, to keep the structure and key details of whatever I’m doing correct.

      • troupo 3 days ago

        For me, LLMs always, without fail, get important details wrong:

        - incessantly duplicating already existing functionality: utility functions, UI components etc.

        - skipping required parameters like passing current user/actor to DB-related functions

        - completely ignoring large and small chunks of existing UI and UI-related functionality like layouts or existing styles

        - using ad-hoc DB queries or even iterating over full datasets in memory instead of setting up proper DB queries

        And so on and so forth.

        YMMV of course, depending on language and project.

    • james_marks 3 days ago

      100%. It’s like managing an employee who always turns their work in 30 seconds later; you never get a break.

      I also have to remember all of the new code that’s coming together, and keep it from re-inventing other parts of the codebase, etc.

      More productive, but hard work.

    • sawmurai 3 days ago

      I have a similar experience. It feels like riding your bike in a higher gear: you can go faster, but it takes more effort, and you need the power (stronger legs) to make use of it.

      • specproc 3 days ago

        It's more like shifting from a normal to an electric bike.

        You can go further and faster, but you can get to a point where you're out of juice miles from home, and getting back is a chuffing nightmare.

        Also, you discover that you're putting on weight and not getting that same buzz you got on your old pushbike.

        • truetraveller 3 days ago

          Hey, that's a great analogy, 10/10! This explains in a few words what an entire article might explain.

    • jstummbillig 3 days ago

      Considering the last 2 years, has it become harder or easier?

      • simonw 3 days ago

        Definitely harder.

        A year ago I was using GitHub Copilot autocomplete in VS Code and occasionally asking ChatGPT or Claude to help write me a short function or two.

        Today I have Claude Code and Codex CLI and Codex Web running, often in parallel, hunting down and resolving bugs and proposing system designs and collaborating with me on detailed specs and then turning those specs into working code with passing tests.

        The cognitive overhead today is far higher than it was a year ago.

        • dingdingdang 3 days ago

          Also better and faster though!! It's close to a Daft Punk type situation.

    • truetraveller 3 days ago

      Woah, that's huge coming from you. This comment itself is worth an article. Do it. Call it "AI tools make development harder work".

      P.S. I always thought you were one of those irrational AI bros. Later, I found that you were super reasonable. That's the way it should be. And thank you!

  • Pannoniae 3 days ago

    In fact, I've been writing more code myself since these tools have existed. Maybe I'm not a real developer, but in the past I might have tried to find a library online, or find something on the internet to copy-paste and adapt; nowadays I give it a shot myself with Claude.

    For context, I mainly do game development so I'm viewing it through that lens - but I find it easier to debug something bad than to write it from scratch. It's more intensive than doing it yourself but probably more productive too.

  • scuff3d 2 days ago

    > Many are acting as if writing any code manually means "you're holding it wrong", which I feel it's just not true.

    It's funny, because not far below this comment there is someone literally doing this.

  • oblio 3 days ago

    LLMs are level-2 autonomous driving.

j_bum 3 days ago

This was a fun read.

I’ve similarly been using spec.md and running to-do.md files that capture detailed descriptions of the problems and their scoped history. I mark each of my to-do’s with informational tags: [BUG], [FEAT], etc.

I point the LLM to the exact to-do (or section of to-do’s) with the spec.md in memory and let it work.

This has been working very well for me.
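For anyone wanting to try this workflow, here is a hypothetical sketch of what such a to-do.md might look like; the tags, layout, and filenames are illustrative, not the commenter's exact format:

```markdown
# to-do.md (work items; see spec.md for full requirements)

## Open
- [ ] [BUG] Window title not updated after tab rename (repro: rename tab, switch focus)
- [ ] [FEAT] Add `--config` flag to override default config path (scope: CLI parsing only)

## Done
- [x] [FEAT] Persist window size across restarts (2025-10-01, see spec.md §3.2)
```

The point is that each item is specific enough to hand to the agent on its own: "work on the [BUG] item under Open, with spec.md as context."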

nightski 3 days ago

Even though the author refers to it as "non-trivial", and I can see why that conclusion is made, I would argue it is in fact trivial. There's very little domain specific knowledge needed, this is purely a technical exercise integrating with existing libraries for which there is ample documentation online. In addition, it is a relatively isolated feature in the app.

On top of that, it doesn't sound enjoyable. Anti slop sessions? Seriously?

Lastly, the largest problem I have with LLMs is that they are seemingly incapable of stopping to ask clarifying questions. This is because they do not have a true model of what is going on; they truly are next-token generators. A software engineer would never just slop out an entire feature based on the first discussion with a stakeholder and then expect the stakeholder to continuously refine their statement until the right thing is slopped out. That's just not how it works and it makes very little sense.

  • simonw 3 days ago

    The hardest problem in computer science in 2025 is presenting an example of AI-assisted programming that somebody won't call "trivial".

    • nightski 3 days ago

      If all I did was call it trivial that would be a fair critique. But it was followed up with a lot more justification than that.

      • simonw 3 days ago

        Here's the PR. It touched 21 files. https://github.com/ghostty-org/ghostty/pull/9116/files

        If that's your idea of trivial then you and I have very different standards in terms of what's a trivial change and what isn't.

        • groby_b 3 days ago

          It's trivial in the sense that a lot of the work isn't high cognitive load. But... that's exactly the point of LLMs. It takes the noise away so you can focus on high-impact outcomes.

          Yes, the core of that pull request is an hour or two of thinking; the rest is ancillary noise. The LLM took away the need for the noise.

          If your definition of trivial is signal/noise ratio, then, sure, relatively little signal in a lot of noise. If your definition of "trivial" hinges on total complexity over time, then this beats the pants off manual writing.

          I'd assume OP did the classic senior-engineer shtick of "I can understand the core idea quickly, therefore it can't be hard". Whereas Mitchell did the heavy lifting of actually shipping the "not hard" idea: still understanding the core idea quickly, and then not getting bogged down in unnecessary details.

          That's the beauty of LLMs: they turn the dream of "I could write that in a weekend" into actual reality, where before it was always empty bluster.

  • kannanvijayan 3 days ago

    I've wondered about exposing this "asking clarifying questions" as a tool the AI could use. I'm not building AI tooling so I haven't done this - but what if you added an MCP endpoint whose description was "treat this endpoint as an oracle that will answer questions and clarify intent where necessary" (paraphrased), and have that tool just wire back to a user prompt.

    If asking clarifying questions is plausible output text for LLMs, this may work effectively.
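    One way to picture this, sketched here without any real MCP SDK (all names below, `tool`, `ask_user`, `dispatch`, are illustrative, not an actual API):

    ```python
    # Illustrative sketch of a "clarifying questions" tool an agent loop could expose.
    # A real implementation would register this through an MCP server; here a plain
    # registry stands in for that machinery.

    TOOLS = {}

    def tool(name, description):
        """Register a function as a tool, with a description the model sees."""
        def register(fn):
            TOOLS[name] = {"description": description, "fn": fn}
            return fn
        return register

    @tool("ask_user", "Treat this tool as an oracle that will answer questions "
                      "and clarify intent where necessary.")
    def ask_user(question: str, prompt_fn=input) -> str:
        # Route the model's question straight back to the human at the terminal.
        return prompt_fn(f"[agent asks] {question}\n> ")

    def dispatch(tool_name: str, **kwargs) -> str:
        """What the agent loop would call when the model emits a tool invocation."""
        return TOOLS[tool_name]["fn"](**kwargs)
    ```

    Because the tool's description frames it as an oracle, a model that already produces clarifying-question text has a natural place to send it, which is the parent comment's bet.
    
    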

    • simonw 3 days ago

      I think the asking clarifying questions thing is solved already. Tell a coding agent to "ask clarifying questions" and watch what it does!

      • nightski 3 days ago

        Obviously if you instruct the autocomplete engine to fill in questions it will. That's not the point. The LLM has no model of the problem it is trying to solve, nor does it attempt to understand the problem better. It is merely regurgitating. This can be extremely useful. But it is very limiting when it comes to using as an agent to write code.

      • danielbln 3 days ago

        I've added "amcq means ask me clarifying questions" to my global Claude.md so I can spam "amcq" at various points in time, to great avail.

  • antonvs 3 days ago

    > A software engineer would never just slop out an entire feature based on the first discussion with a stakeholder and then expect the stakeholder to continuously refine their statement until the right thing is slopped out. That's just not how it works and it makes very little sense.

    Didn’t you just describe Agile?

    • [removed] 3 days ago
      [deleted]
    • Retric 3 days ago

      Who hurt you?

      Sorry, couldn’t resist. Agile’s point was getting feedback during the process rather than after something is complete enough to be shipped, thus minimizing risk and avoiding wasted effort.

      Instead people are splitting up major projects into tiny shippable features and calling that agile while missing the point.

      • rkomorn 3 days ago

        I've never seen a working scrum/agile/sprint/whatever product/project management system and I'm convinced it's because I've just never seen an actual implementation of one.

        "Splitting up major projects into tiny shippable features and calling that agile" feels like a much more accurate description of what I've experienced.

        I wish I'd gotten to see the real thing(s) so I could at least have an informed opinion.

      • antonvs 3 days ago

        Agile’s point was to get feedback based on actual demoable functionality, and iterate on that. If you ignore the “slop” pejorative, in the context of LLMs, what I quoted seems to fit the intent of Agile.