Comment by nlh 6 days ago

I am but a small humble minority voice here but perhaps I represent a larger non-HN group:

I am not a professional SWE; I am not fluent in C or Rust or bash (or even TypeScript) and I don't use Emacs as my editor or tmux in the terminal;

I am just a nerdy product guy who knows enough to code dangerously. I run my own small business and the software that I've written powers the entire business (and our website).

I have probably gotten AT LEAST a 500-1000% speedup in my personal software productivity over the past year that I've really leaned into using Claude/Gemini (amazing that GPT isn't on that list anymore, but that's another topic...). I am able to spec out new features and get them live in production in hours vs. days, and for bigger stuff, days vs. weeks (or even months). It has changed the pace and the way in which I'm able to build things. I literally wrote an entire image-editing workflow that goes from RAW camera shot to fully processed product image on our ecommerce store, cutting out actual, real dozens of hours of time spent previously.

Is the code I'm producing perfect? Absolutely not. Do I have 100% test coverage? Nope. Would it pass muster if I were a software engineer at Google? Probably not.

Is it working, getting to production faster, and helping my business perform better and insanely more efficiently? Absolutely.

Draiken 6 days ago

I think that tracks with what I see: LLMs enable non-experts to do something really fast.

If I want to, let's say, write some code in a language I've never worked in, an LLM will definitely make me more "productive" by spewing out code far faster than I could write it myself. Same if I try to quickly learn about a topic I'm not familiar with, especially if you don't care too much about quality, maintainability, etc.

But if I'm already a software developer with 15 years of experience dealing with technology I use every day, it's not going to increase my productivity in any meaningful way.

This is the dissonance I see with AI talk here. If you're not a software developer, the things LLMs enable you to do are game-changers. But if you are a good software developer, on its best days it's a smarter autocomplete, a rubber-duck substitute (for when you can't talk to a smart person), or a mildly faster Google search that can be very inaccurate.

If you go from 0 to 1, that's literally infinitely better, but if you go from 100 to 105, it's barely noticeable. Maybe everyone with these absurd productivity gains is coming from zero or very little knowledge, but as someone who's been past that point, I can't believe these claims.