Comment by gloosx 2 days ago

It's not as simple as putting all programmers into one category. There can be an oversupply of web developers and at the same time an undersupply of COBOL developers. If you are a very good developer, you will always be in demand.

ben_w 2 days ago

> If you are a very good developer, you will always be in demand.

"Always", in the same way that five years ago we'd "never" have an AI that can do a code review.

Don't get me wrong: I've watched a decade of promises that "self driving cars are coming real soon now honest", and the latest news about Teslas is that they can't cope with leaves. I certainly *hope* that a decade from now we'll still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.

  • nradov 2 days ago

    Five years ago we had pretty good static analysis tools for popular languages which could automate certain aspects of code reviews and catch many common defects. Those tools didn't even use AI, just deterministic pattern matching. And yet due to laziness and incompetence many developers didn't even bother taking full advantage of those tools to maximize their own productivity.

    • ben_w 2 days ago

      The devs themselves can still be lazy; claude and copilot code review can be automated on all pull requests at the PM's demand, and the PM can be lazy and ask the LLMs to integrate themselves.

      And the LLMs can use the static analysis tools.

      • lmm 11 hours ago

        An LLM can run the static analysis tool and copy/paste its output onto your PR, sure. I'm not sure I would call that "doing code review".

        • ben_w 3 hours ago

          > copy/paste

          I did not say that.

          That it can *also* use tools to help, doesn't mean it can *only* get there by using tools.

          They can *also* just do a code review themselves.

          As in, I cloned a repo of some of my old manually-written code, cd'd into it, ran `claude`, and gave it the prompt "code review" (or something close to that), and it told me a whole bunch of things wrong with it, in natural language, even though I didn't have the relevant static analysis tools for those languages installed.

      • lisbbb a day ago

        I can't even imagine what time wasting bs the LLMs are finding with static analysis tools! It's all just a circle jerk everywhere now.

    • lisbbb a day ago

      Static analysis was pretty limited imho. It wasn't finding anything that interesting. I spent untold hours trying to satisfy SonarQube in 2021 & 2022. It was total shit busy work they stuck me with because all our APIs had to have at least 80% code coverage and meet a moving target of code analysis profiles that were updated quarterly. I had to do a ton of refactoring on a lot of projects just to make them testable. I barely found any bugs and after working on over 100 of those stupid things, I was basically done with that company and its bs. What an utter waste of time for a senior dev. They had to have been trying to get me to quit.

  • gloosx a day ago

    Even if someday we get AI that can generalize well, the need for a person who actually develops things using AI is not going anywhere. The thing with AI is that you cannot make it responsible: there will still be a human in the loop who is responsible for conveying ideas to the AI and controlling its results, and that person will be the developer. Senior developers are not hired just because they are smart or can write code or build systems; they are also hired to share the load of responsibility.

    Someone with a name, an employment contract, and accountability is needed to sign off on decisions. Tools can be infinitely smart, but they cannot be responsible, so AI will shift how developers work, not whether they are needed.

    • ben_w a day ago

      Even where a human in the loop is a legal obligation, it can be QA or a PM, roles as different from "developer" as "developer" is from "circuit designer".

      • gloosx a day ago

        A PM or QA can sign off only on process or outcome quality. They cannot replace the person who actually understands the architecture and the implications of technical decisions. Responsibility is about being able to judge whether the system is correct, safe, maintainable, and aligned with real-world constraints.

        If AI becomes powerful enough to generate entire systems, the person supervising and validating those systems is, functionally, a developer — because they must understand the technical details well enough to take responsibility for them.

        Titles can shift, but the role doesn't disappear. Someone with deep technical judgment will still be required to translate intent into implementation and to sign off on the risks. You can call that person "developer", "AI engineer", or something else, but the core responsibility remains technical. PMs and QA do not fill that gap.

  • marcelr 2 days ago

    ai can do code review? do people actually believe this? we have an MR LLM bot, it is wrong 95% of the time

    • ben_w 2 days ago

      I have used it for code review.

      Like everything else they do, it's amazing how far you can get even if you're incredibly lazy and let it do everything itself, though of course that's a bad idea because it's got all the skill and quality of result you'd expect if I said "endless horde of fresh grads unwilling to say 'no' except on ethical grounds".

  • tonyhart7 2 days ago

    waymo and tesla already operate in certain areas. even where the tech is ready, regulation is still very much a thing

    • afavour 2 days ago

      “certain areas” is a very important qualifier, though. Typically areas with very predictable weather. Not discounting the achievement just noting that we’re still far away from ubiquity.

      • duderific 2 days ago

        Waymo is doing very well around San Francisco, which is certainly very challenging city driving. Yes, it doesn't snow there. Maybe areas with winter storms will never have autonomous vehicles. That doesn't mean there isn't a lot of utility created even now.

        • ben_w a day ago

          My original point, clearly badly phrased given the responses I got, is that the promises have been exceeding the reality for a decade.

          Musk's claims about what Teslas would be able to do weren't limited to just "a few locations"; they were "complete autonomy" and "you'll be able to summon your car from across the country"… by 2018.

          And yet, 2025, leaves: https://news.ycombinator.com/item?id=46095867

  • pb7 2 days ago

    I've been taking self-driving cars to get around regularly for a year or more.