Comment by cushychicken 3 days ago

23 replies

Answers like this are sort of what makes me wonder what most engineers are smoking when they think AI isn’t valuable.

I don’t think the outright dismissal of AI is smart. (And, OP, I don’t mean to imply that you are doing that. I mean this generally.)

I also suspect people who level these criticisms have never really used a frontier LLM.

Feeding in a whole codebase that I’m familiar with, and hearing the LLM give good answers about its purpose and implementation from a completely cold read is very impressive.

Even if the LLM never writes a line of code - this is still valuable, because helping humans understand software faster means you can help humans write software faster.

linotype 3 days ago

Many devs still think their job is to write code, not build the products their business needs. I use LLMs extensively, and they've helped me work better and faster.

  • grugagag 3 days ago

    LLMs excel at some things and work very poorly at others. People working on different problems have had different experiences, sometimes opposite ends of the spectrum.

    • danieldk 2 days ago

      I think the people who claim 10x-100x productivity improvements are working on tasks where LLMs work really well. There is a lot of development work out there that is relatively simple CRUD, and LLMs are very good at it. At the complete opposite end we have designing new algorithms/data structures or extending them in a novel way, or implementing drivers for new hardware from incomplete specs. LLMs do not do well on these tasks and may even slow developers down 10x.

      So, I think the claims of improved productivity and of regressed productivity can both be true at the same time (and it's not simply that people who don't find LLMs productive are prompting them wrong).

      I think most can be gained by learning in which areas LLMs can give large productivity boosts and where it's better to avoid using them. Of course, this is a continuous process, given that LLMs are still getting better.

      Personally, I am quite happy with LLMs. They cannot replace me, but they can do a chunk of the boring/repetitive work (e.g. boilerplate), so as a result I can focus on the interesting problems. As long as we don't have human-like performance (and I don't feel like we are close yet), LLMs make programming more interesting.

      They are also a great learning aid. E.g., this morning I wanted to make a 3D model for something I needed, but I don't know OpenSCAD. I iteratively made the design with Claude. At some point the problem becomes too difficult for Claude, but with the code generated at that point, I have learned enough about OpenSCAD that I can fix the more difficult parts of the project. The project would have taken me a few hours (to learn the language, etc.), but now I was done in 30 minutes and learned some OpenSCAD in a pleasant way.

      • kaycebasques 2 days ago

        Your OpenSCAD experience is an important point that is often not discussed in the productivity debates. A lot of projects that were previously impossible are now feasible. 10 years ago, you might have searched the OpenSCAD docs, watched videos, felt like it was impossible to find the info you needed, and given up. Claude and similar tools have gotten me past that initial blocker many times. Finding a way to unblock 0-to-1 productivity is perhaps as important as (or maybe even more important than) enabling 1-to-10 or 1-to-100.

      • iteria 2 days ago

        You don't even need such fancy examples. There are plenty of codebases where people are working with code that is over a decade old, has several paradigms intermixed, and relies on a lot of tribal knowledge that isn't documented in code or a wiki. That is where AI sucks. It will not be able to make meaningful changes in that environment.

        There is also the frontend, and those codebases don't need to be very old at all before AI falls down. Between NPM packages and clashing styles in a codebase, AI has not been very helpful to me at all.

        Generally speaking, while AI is a fine enhancement to autocomplete, I haven't seen it do anything more serious in a mature codebase. The moment business rules and tech debt sneak in in any capacity, AI becomes so unreliable that it's faster to just write it yourself. If I can't trust the AI to automatically generate the list of exports in an index.ts file, what can I trust it for?
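
        (For context, the index.ts in question is a "barrel" file that does nothing but re-export a package's modules. A minimal sketch of that kind of file, with purely hypothetical module names:)

          // index.ts "barrel" file: nothing but re-exports of sibling modules.
          // Module names are hypothetical, purely for illustration.
          export { Button } from "./Button";
          export { Modal } from "./Modal";
          export type { ModalProps } from "./Modal";
          export * from "./hooks";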

kaycebasques 2 days ago

> hearing the LLM give good answers about its purpose and implementation from a completely cold read

Cold read ability for this particular tool is still an open question. As others have mentioned, a lot of the example tutorials are for very popular codebases that are probably well-represented in the language model's training data. I'm personally going to test it on my private, undocumented repos.

tossandthrow 3 days ago

> Even if the LLM never writes a line of code - this is still valuable, because helping humans understand software faster means you can help humans write software faster.

IMHO, AI text additions are generally not valuable, and I assume, until proven wrong, that AI-generated text provides little to no value.

I have seen so many startups fold after they made some AI product that appeared impressive on the surface but provided no substantial value.

Now, I will be impressed by the AI that can remove code without affecting the product.

  • jonahx 3 days ago

    > Now, I will be impressed by the AI that can remove code without affecting the product.

    Current AIs can already do this decently, with the usual caveats about possible mistakes/oversights.

panny 3 days ago

>Answers like this are sort of what makes me wonder what most engineers are smoking when they think AI isn’t valuable.

I'll just wait for a winner to shake out and learn that one. I've gotten tired of trying AIs only to get slop.

otabdeveloper4 15 hours ago

Summarization is one thing LLMs can do well, yes. (That's not what this current hype cycle is selling, though.)

CodeMage 3 days ago

> Answers like this are sort of what makes me wonder what most engineers are smoking when they think AI isn’t valuable.

Honestly, I wonder if I'm living in some parallel universe, because my experience is that "most engineers" are far from that position. The reactions I'm seeing are either "AI is the future" or "I have serious objections to and/or problems with AI".

If you're calling the latter group "the outright dismissal of AI", I would disagree. If I had to call it the outright dismissal of anything, it would be of AI hype.

> I also suspect people who level these criticisms have never really used a frontier LLM.

It's possible. At my workplace, we did a trial of an LLM-based bot that would generate summaries for our GitHub PRs. I have no idea whether it's a "frontier" LLM or not, but I came out of that trial equally impressed, disappointed, and terrified.

Impressed, because its summaries got so many details right. I could immediately see the use for a tool like that: even when the PR author provides a summary of the PR, it's often hard to figure out where to start looking at the PR and in which order to go through changes. The bulleted list of changes from the bot's summary was incredibly useful, especially because it was almost always correct.

Disappointed, because it would often get the most important thing wrong. For the very first PR that I made, it got the whole list of changes right, but the explanation of what the PR did was the opposite of the truth. I made a change to make certain behavior disabled by default and added an option to enable it for testing purposes, and the bot claimed that the behavior was impossible before this change and the PR made it possible if you used this option.

Terrified, because I can see how alluring it is for people to think that they can replace critical thinking with AI. Maybe it's my borderline burnout speaking, but I can easily imagine the future where the pressure from above to be more "efficient" and to reduce costs brings us to the point where we start trusting faulty AI and the small mistakes start accumulating to the point where great damage is done to millions of people.

> Even if the LLM never writes a line of code - this is still valuable, because helping humans understand software faster means you can help humans write software faster.

I have my doubts about this. Yes, if we get an AI that is reliable and doesn't make these mistakes, it can help us understand software faster, as long as we're willing to make the effort to actually understand it, rather than delegating to the AI's understanding.

What I mean by that is that there are different levels of understanding. How deep do you dive before you decide it's "deep enough" and trust what the AI said? This is even more important if you also start using the AI to write the code and not just read it. Now you have even less motivation to understand the code, because you no longer need that understanding to write your own code.

I'll keep learning how to use LLMs, because it's necessary, but I'm very worried about what we seem to want from them. I can't think of any previous technological advance that aimed to replace human critical thinking and creativity. Why are we even pursuing efficiency if it isn't to give us more time and freedom to be creative?

  • doug_durham 3 days ago

    The value is that it got the details correct, as you admit. That alone is worth the price of admission. Even if I need to rewrite or edit parts, it has saved me time, and it has raised the quality of PRs being submitted across the board. The key point with these tools is *Accountability*. As an engineer you are still accountable for your work. Using any tool doesn't take that away. If the PR tool gets it wrong and you still submit it, that's on the engineer. If you have a culture of accountability, then there is nothing to be terrified of. And by the way, the most recent tools are really, really good at PRs and commit messages.

    • svieira 2 days ago

      Are you accountable for CPU bugs in new machines added to your Kubernetes fleet? The trusting-trust problem only works if there is someone to trust.

voidUpdate 2 days ago

Well, companies lock "frontier LLMs" behind paywalls, and I don't want to pay for something that still might not be of any use to me.

  • GaggiX 2 days ago

    Gemini 2.5 Pro Experimental (a frontier model) has a free tier of 5 requests per minute (RPM) and 25 requests per day (RPD).

    Gemini 2.5 Flash Preview 04-17, another powerful model, has 10 RPM and 500 RPD.

    OpenAI also allows you to use their API for free if you agree to share the tokens.