the_af a day ago

> The best AI coders are positioned as tools for developers, rather than replacements for them.

I agree with this. However, we must not delude ourselves: corporate is pushing for replacement. So there will be a big push to improve on tools like Devin. This is not a conspiracy theory; many companies (my wife's, for example) are openly stating it: we are going to reduce (aka "lay off") the engineering staff and use as many AI solutions as possible.

I wonder how many of us here understand that many jobs are going away if/when this works out for the companies. The usual coping mechanisms ("it will only be for low hanging fruit", "it will never happen to me because my $SKILL is not replaceable") will eventually not save you. Sure, if you are a unique expert in a unique field, but many of us don't have that luxury. Not everyone can be a cream-of-the-crop specialist. And it'll be used to drive down salaries, too.

lolinder a day ago

I remember when I was first getting started in the industry, the big fear of the time was that offshoring was going to take all of our jobs and drive down the salaries of those who remained. In fact, the opposite happened: over the next 10 years salaries ballooned and tech had a hiring bubble.

Companies always want to reduce staff and bad companies always try to do so before the solution has really proven itself. That's what we're seeing now. But having deep experience with these tools over many years, I'm very confident that this will backfire on companies in the medium term and create even more work for human developers who will need to come in and clean up what was left behind.

(Incidentally, this also happened with offshoring: many companies ended up with large, convoluted code bases that they didn't understand and that almost did what they wanted but were wrong in important ways. These companies needed local engineers to untangle the mess and get things back on track.)

  • senordevnyc a day ago

    > But having deep experience with these tools over many years, I'm very confident...

    No one has had deep experience with these tools for any amount of time, let alone many years. They're literally just now hitting the market and are rapidly expanding their capabilities. We're at a fundamentally different place than we were just twelve months ago, and there's no reason to think 2025 will be any different.

    • lolinder a day ago

      I was building things with GPT-2 in 2019. I have as much experience engineering with them as anyone who wasn't an AI researcher before then.

      And no, we're not at a fundamentally different place than we were just 12 months ago. The last 12 months had much slower growth than the 12 months before that, which had slower growth than the 12 months before that. And in the end these tools have the same weaknesses that I saw in GPT-2, just to a lesser degree.

      The only aspect in which we are in a fundamentally different place is that the hype has gone through the roof. The tools themselves are better, but not fundamentally different.

      • senordevnyc 20 hours ago

        It’s genuinely difficult to take seriously a claim that coding using Sonnet has “the same weaknesses” as GPT-2, which was effectively useless for the task. It’s like suggesting that a flamethrower has the same weaknesses as a matchstick because they both can be put out by water.

        We’ll have to agree to disagree about whether the last 12 months have had as much innovation as the preceding 12 months. We started 2024 with no models better than GPT-4, and we ended the year with multiple open source models that beat GPT-4 and can run on your laptop, not to mention a bunch of models that trounce it. Plus tons of other innovations: dramatically cheaper training and inference costs, reasoning models, expanded multi-modal capabilities, etc.

        I’m guessing you’ve already seen and dismissed it, but in case you’re interested in an overview, this is a good one: https://simonwillison.net/2024/Dec/31/llms-in-2024/

        • dimitri-vs 19 hours ago

          I'm paying for o1-pro (just for one month) and have been using LLMs since GPT-2 (via AI Dungeon). Progress is absolutely flattening when you're looking at practical applications versus benchmarks.

          o1 is actually surprisingly "meh" and I just don't see how they can justify the price when the latest Sonnet 3.5 is almost as good, 10x as fast, and doesn't even have "reasoning".

          I've been spending half my day, every day, for the past few years using LLMs in one way or another. They still confidently (and unpredictably) hallucinate, even o1. They have no memory, can't build up experience, performance rapidly degrades with long conversations, etc.

          I'm not saying progress isn't being made, but the rate of progress is definitely slowing.

  • the_af a day ago

    I think it's qualitatively different this time.

    Unlike offshoring, this is a technological solution, which is understandably received more enthusiastically on HN. I get it. It's interesting as tech! And it has achieved remarkable things. But unlike offshoring (which is a people thing) or magical NOCODE/CASE/etc "solutions", the consensus seems to be that AI coding assistants will eventually get there. At least a portion of HN seems to think so. And some are cheering!

    The coping mechanism seems to be "it won't happen to me" or "my knowledge is too specialized", but I think this will become increasingly false. And even if your knowledge really is too specialized for AI to replace, most engineers aren't in that position. "Well, become more specialized" is unrealistic advice, and in any case, the employment pool will shrink.

    PS: I am offshoring (in a way). I'm not based in the US but I work remotely for a US company.

    • lolinder a day ago

      > But unlike offshoring (which is a people thing) or magical NOCODE/CASE/etc "solutions", the consensus seems to be that AI coding assistants will eventually get there.

      There's no consensus on that point. There are a few loud hype artists, most of whom are employed in AI and so have conflicts of interest, and who are pre-filtered to be true believers. Their logic is basically "See this trend? Trends continue, so this is inevitable!"

      That's bad logic. Trends do not always continue, they often slow or reverse, and this one is showing all signs of doing so already. OpenAI has come straight out and said that they don't expect to see another jump like GPT-3 to 4, and have resorted to throwing more tokens at the problems, which works with diminishing returns. I do not expect to see a return to the rapid growth we had for a year or two there.

      > PS: I am offshoring (in a way). I'm not based in the US but I work remotely for a US company.

      Yes, and this is a good example: there's a place for offshoring, but it didn't replace US devs. The same thing will happen here.

      • senordevnyc a day ago

        > Trends do not always continue, they often slow or reverse, and this one is showing all signs of doing so already. OpenAI has come straight out and said that they don't expect to see another jump like GPT-3 to 4, and have resorted to throwing more tokens at the problems, which works with diminishing returns. I do not expect to see a return to the rapid growth we had for a year or two there.

        This feels like the declaration of someone who has spent almost no time playing with these models or keeping up with AI over the last two years. Go look at the benchmarks and leaderboards for the last 18 months and tell me we're not progressing far beyond GPT-4. Meanwhile, models are getting faster and cheaper, gaining multi-modal capabilities, becoming cheaper to train for a given capability, etc.

        And of course there are diminishing returns, the latest public models are in the 90s on many of their benchmarks!

nyarlathotep_ a day ago

> I wonder how many of us here understand that many jobs are going away if/when this works out for the companies. The usual coping mechanisms ("it will only be for low hanging fruit", "it will never happen to me because my $SKILL is not replaceable") will eventually not save you. Sure, if you are a unique expert in a unique field, but many of us don't have that luxury. And it'll be used to drive down salaries, too.

Yeah it's maddening.

The cope is bizarre too: "writing code is the least important part of the job"

Ok, then why does nearly every company make people write code for interviews or do take-home programming projects?

Why do people list programming languages on their resumes if it's "least important"?

Also bizarre to see people cheering on their replacements as they use all this stuff.

  • s1mplicissimus a day ago

    > Ok, then why does nearly every company make people write code for interviews or do take-home programming projects?

    For the same reason they pose leetcode problems to "test" an applicant's skill, or have them write mergesort on a chalkboard by hand. It gives them a warm fuzzy feeling in the tummy, because now they can say "we did something to check they are competent". Why, you ask? Well, it's mostly impossible to come up with a test to verify a competency you don't have yourself. Imagine you can't distinguish red and green, are not aware of it, but want to hire people who can. That's their situation, but they cannot admit it, because doing so would be clear evidence that they are not a good fit for their current role. Use this information responsibly ;)
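
    (For reference, the chalkboard classic in question fits in about a dozen lines of Python; a minimal sketch:)

        # Top-down merge sort: split in half, sort each half recursively,
        # then merge the two sorted halves back together in order.
        def merge_sort(xs):
            if len(xs) <= 1:
                return xs
            mid = len(xs) // 2
            left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
            merged, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
            # at most one side still has leftovers; append them
            return merged + left[i:] + right[j:]

        merge_sort([5, 2, 4, 1, 3])  # -> [1, 2, 3, 4, 5]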

    > Why do people list programming languages on their resumes if it's "least important"?

    You put the programming languages in there alongside the HR-soothing stuff because you hope that an actual software person gets to see your resume and gives you an extra vote for being a good match. Notice that most guides recommend a relatively small amount of technical content vs. lots of "using my awesomeness I managed to blafoo the dingleberries in a more efficient manner to earn the company a higher bottom line".

    If you don't want to be a software developer, that's fine. But your questions point me towards the conclusion that you don't know a lot about software development in the first place, which doesn't say much for your ability to estimate how easy it will be to automate it using LLMs.

    • the_af a day ago

      Arguing about programming is not the point, in my opinion.

      When AI becomes able to do most non-programming tasks too, say design or solving open-ended problems (which, except in trivial cases, it cannot -- for now), we can have this conversation again...

      I think saying "well, programming is not important, what matters is $THING" is a coping mechanism. Eventually AI will do $THING acceptably enough for the bean counters to push for more layoffs.

      • ZephyrBlu a day ago

        When AI can do the software engineering tasks that require expertise outside of coding, like system design, scoping problems, cross-team/domain work, etc., then it will be AGI, at which point the fact that SWE jobs are automated would be the least of everyone's worries.

        The main problem I perceive with AI being able to do that kind of work is that it requires an unprecedented level of agency and context-gathering. Right now agents are very much like juniors in that they work in an insular, not collaborative, way.

        Another big problem is that these higher level problems often require piecing together a lot of fragmented context. If the AI already had access to the information, sure, it would probably be able to achieve the task. But the hard bit is finding the information. Some logs here, some code there, a conversation with someone on a different team, etc. It's often a highly intuitive and tacit process, not easily explicitly defined. There's a reason that defining what a "Senior" is tends to be very difficult.

        • the_af a day ago

          > When AI can do the software engineering tasks that require expertise outside of coding, like system design, scoping problems, cross-team/domain work, etc., then it will be AGI

          I think you're talking about the really general case, but in my opinion that's not what matters. All it takes is for AI solutions to cover (in the near future) the average case -- where most engineers actually work -- in a mediocre but cost-effective manner for this to have huge repercussions on the job market and salaries.

          > But the hard bit is finding the information. Some logs here, some code there, a conversation with someone on a different team, etc.

          I've no problem believing they will become more and more successful at this. This is information retrieval, which machines can do faster, and making sense of it all together is where advances in AI will need to happen. I think there's a high chance those advances will happen eventually, at least enough to cobble together projects that will make the leadership happy (maybe after some review/adjustment by a few human experts they retain?). They don't even have to be particularly successful -- how many human-staffed engineering projects succeed, anyway?

      • epicureanideal a day ago

        Also, because the economy is no longer based on competition but is controlled by a bunch of industry-specific oligopolies, even if the bean counters are wrong it won't matter, because every other company will be similarly inefficient. Everybody loses, but the people in charge are too dumb to know. Our free market is currently broken.

  • dimitri-vs 19 hours ago

    Is spending 4 years of your life on an education that will likely be only 10-20% applicable to your job any less bizarre? It's just another hoop employers want to see you capable of jumping through.

    If you ignore the syntax, programming is just writing detailed instructions. Just because AI is able to translate English to code doesn't mean the hundreds of decisions that need to be made go away. Someone still needs to write very detailed instructions, even if they are in English, and it sure isn't going to be the people sitting in meetings all day.

    And let's pretend that I can now be 10x more productive with AI. Great, now I can ship 10x more features in the same timeframe and nothing changes: the development backlog is literally infinite. There are always more features or bugs to work on.

    • hatefulmoron 18 hours ago

      > Just because AI is able to translate English to code doesn't mean the hundreds of decisions that need to be made go away. Someone still needs to write very detailed instructions, even if they are in English, and it sure isn't going to be the people sitting in meetings all day.

      What makes you think it will be you? The machines seem increasingly capable of converting English into different English, and if we take it as a given that they can convert English into code... what are you there for? The people sitting in meetings might as well talk to the machine, to the extent they're willing to talk to you.

      To be clear, the professional "meeting participants" are as much on the chopping block as we are, although that's not commonly pointed out.