Comment by kenjackson 18 hours ago

I only generate the code once with GenAI and typically fix a bug or two - or at worst use its structure. Rarely do I toss a full PR.

It’s interesting some folks can use them to build functioning systems and others can’t get a PR out of them.

omnicognate 16 hours ago

The problem is that at this stage we mostly just have people's estimates of their own success to go on, and nobody thinks they're incompetent. Nobody's going to say "AI works really well for me, but I just pump out dross my colleagues have to fix" or "AI doesn't work for me, but I'm an unproductive, burnt-out hack pretending I'm some sort of craftsman as the world leaves me behind".

This will only be resolved out there in the real world. If AI turns a bad developer, or even a non-developer, into somebody that can replace a good developer, the workplace will transform extremely quickly.

So I'll wait for the world to prove me wrong but my expectation, and observation so far, is that AI multiplies the "productivity" of the worst sort of developer: the ones that think they are factory workers who produce a product called "code". I expect that to increase, not decrease, the value of the best sort of developer: the ones who spend the week thinking, then on Friday write 100 lines of code, delete 2000 and leave a system that solves more problems than it did the week before.

  • mwcampbell 8 hours ago

    I aspire to live up to your description of the best sort of developer. But I think there's also a danger that this approach becomes an excuse for spending the week overthinking (possibly while goofing off as well; I've done it), then writing a first cut on Friday, leaving no time for the multiple iterations that are often necessary to reach the best solution. In other words, I think sometimes it's necessary to just start coding sooner than we'd like so we can start iterating toward the right solution. But that "unproductive, burnt-out hack" line hits a bit too close to home for me these days, and I'm starting to entertain the possibility that an LLM-based agent might have more energy for those multiple iterations than I do.

  • autobodie 12 hours ago

    My experiences so far suggest that you might be right.

dagw 17 hours ago

It’s interesting some folks can use them to build functioning systems and others can’t get a PR out of them.

It is 100% a function of what you are trying to build, what language and libraries you are building it in, and how sensitive the result is to factors like performance and getting the architecture just right. I've seen it build functioning systems with hardly any intervention, and I've also repeatedly failed to get code that even compiles after over an hour of effort. There exists a small but popular subset of programming tasks where gen AI excels, and a massive tail of tasks where it is much less useful.