The hidden cost of AI coding
(terriblesoftware.org)289 points by Sharpie4679 2 days ago
289 points by Sharpie4679 2 days ago
I see this "prompting is an art" stuff a lot. I gave Claude a list of 10 <Route> objects and asked it to make an adjustment to all of them. It gave me 9 back. When I asked it to try again it gave me 10 but one didn't work. What's "prompt engineering" there, telling it to try again until it gets it right? I'd rather just do it right the first time.
Then don't use it? Nobody is making you.
I am also barely using LLMs at the moment. Even 10% of the time would be generous.
What I was saying is that I have tried different ways of interacting with LLMs and was happy to discover that the way I describe stuff to another senior dev actually works quite fine with an LLM. So I stuck to that.
Again, if an LLM is not up to your task, don't waste your time with it. I am not advocating for "forget everything you knew and just go ask Mr. AI". I am advocating for enabling and productivity-boosting. Some tasks I hate, for some I lack the deeper expertise, others are just verbose and require a ton of typing. If you can prompt the LLM well and vet the code yourself after (something many commenters here deliberately omit so they can happily tear down their straw man) then the LLM will be a net positive.
It's one more tool in the box. That's all there is to it really. No idea why people get so polarizing.
Prompt engineering is just trying that task on a variety of models and prompt variations until you can better understand the syntax needed to get the desired outcome, if the desired outcome can be gotten.
Honestly you’re trying to prove AI is ineffective by telling us it didn’t work with your ineffective protocol. That is not a strong argument.
What should I have done there? Tell it to make sure that it gives me all 10 objects I give it back? Tell it to not put brackets in the wrong place? This is a real question --- what would you have done?
Each activity we engage in has different use, value, and subjective enjoyment to different people. Some people love knitting! Personally, I do know how to sew small tears, which is more than most people in the US these days.
Just because I utilize the services of others for some things does not mean that it should be expected I want to utilize the service of others for all things.
This is a preposterous generalization and exactly why I said the OP premise is laughable.
Further, you’ve shifted OP’s point from subjective enjoyment of an activity to getting “paid well” - this is an irrelevant tangent to whether “most” people in general would delegate work if they could.
Obviously my comment was shortened for brevity and it is kind of telling that you couldn't tell and rushed to tear down the straw man that you saw.
Answering your question:
- That there are annoying tasks none of us look forward to doing.
- That sometimes you have knowledge gaps and LLMs serve as a much better search engine.
- That you have a bad day but the task is due tomorrow. Happened to us all.
I am not "laughably projecting on others", no. I am enumerating human traits and work conditions that we all have or had.
OBVIOUSLY I did not mean that I would delegate all my work tomorrow if I could. I actually do love programming.
> Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
Hard disagree, I get to hyperfocus on making magical things that surprise and delight me every day.
Nice. I've got a whole lot of magical things that I need built for my day job. Want to connect so I can hand the work over to you? I'll still collect the paychecks, but you can have the joy. :)
My work is the magical stuff, I don’t write much code outside of work, I don’t have time with two young kids.
> Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
I don’t think this is the case, if anything the opposite is true. Most of us would like to do the work code but have realized, at some career point, that you’re paid more to abstract yourself away from that and get others to do it either in technical leadership or management.
> I don’t think this is the case, if anything the opposite is true
I'll be a radical and say that I think it depends and is very subjective.
Author above you seems to enjoy working on code by itself. You seem to have a different motivation. My motivation is solving problems I encounter, code just happen to be one way out of many possible ones. The author of the submission article seems to love the craft of programming in itself, maybe the problem itself doesn't even matter. Some people program just for the money, and so on.
> most of us would delegate our work code to somebody else or something else if we could.
I saw your objections to other comments on the basis of them seemingly not having a disdainful attitude towards coding they do for work, specifically.
I absolutely do have tasks, coding included, that I don't want to do, and find no joy in. If I can have my manager assign the task to someone else, great! But using an LLM isn't that, so I'm still on the hook for ensuring all the most boring parts of that task (bugfixing, reworks, integration, tests, etc) get done.
My experience with LLMs is that they simply shift the division of time away from coding, and towards all the other bits.
And it can't possibly just be about prompting. How many hundreds of lines of prompting would you need to get an LLM to understand your coding conventions, security baselines, documentation reqs, logging, tests, allowed libraries, OSS license restrictions (i.e. disallowed libraries), etc? Or are you just refactoring for all that afterwards?
Maybe you work somewhere that doesn't require that level of rigor, but that doesn't strike me as a good thing to be entrenching in the industry by increasing coders' reliance on LLMs.
A super necessary context here is that I barely use LLM at all still. Maybe I should have said so but I figured that too much nuance would ruin a top-level comment and mostly casually commented on a tradeoff of using or not using LLMs.
Where I use LLMs:
1. Super boring and annoying tasks. Yes, my prompts for those include various coding style instructions, requests for small clarifying comments where the goal of the code is not obvious, tests. So, no OSS license restrictions. Libraries I specify most of the times I used LLMs (and only once did I ask it to suggest a library). Logging and telemetry I add myself. So long story short, I use the LLM to show me a draft of a solution and then mercilessly refactor it to match my practices and guidelines. I don't do 50 exchanges out of laziness, no.
2. Tasks where my expertise is lacking. I recently used an LLM to help me with making a `.clone()`-heavy Rust code to become nearly zero-copy for performance reasons -- it is a code on a hot path. As much as I love Rust and I am fairly good at it (realistically I'm IMO at 7.5 / 10), all the lifetimes and zero-copy semantics I still don't know yet. A long session with an LLM after, I emerged both better educated and with a faster code. IMO a win-win.
1. I work on enjoyable problems after I let the LLM do some of the tasks I have to do for money. The LLM frees me bandwidth for the stuff I truly love. I adore solving problems with code and that's not going to change ever.
2. Some of the modern LLMs generate very impressive code. Variables caching values that are reused several times, utility functions, even closure helpers scoped to a single function. I agree that when the LLM code's quality falls bellow a certain threshold then it's better in every way to just write it yourself instead.
> most of us would delegate our work code to somebody else or something else if we could
Not me. I code because I love to code, and I get paid to do what I love. If that's not you…find a different profession?
Needlessly polarizing. I love coding since 12 years old (so more than 30 years at this point) but most work tasks I'm given are fairly boring and uninteresting and don't move almost any science or knowledge forward.
Delegating part of that to an LLM so I can code the stuff I love is a big win for my motivation and is making me doing the work tasks with a bit more desire and pleasure.
Please don't forget that most of us out there can't code for money anything that their heart wants. If you can, I'd be happy for you (and envious) but please understand that's also a fairly privileged life you'd be having in that case.
> Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have
It requires magical incantations that may or may not work and where a missing comma in a prompt can break the output just as badly as the US waking up and draining compute resources.
Has nothing to do with eloquence
Totally. And yet rigorous proof is very difficult. Having done some mathematics involving nontrivial proofs, I respect even more how difficult rigor is.
Ah, I absolutely don't verify code in the mathematical sense of the word. More like utilize strong static typing (or hints / linters in weaker typed languages) and write a lot of tests.
Nothing is truly 100% safe or free of bugs. What I meant with my comment up-thread was that I have enough experience to have a fairly quick and critical eye of code, and that has saved my skin many times.
How did you get there from me agreeing 100% with someone who said that you should be ready to verify everything an LLM does for you and if you're not willing to do that you shouldn't use them at all?
Do you ever read my comments, or do you just imagine what I might have said and reply to that?
There's simply no way to verify everything that comes out of these things. Otherwise why use it? You also can't possibly truly know if you know more about a topic since by definition the models know more than you. This is automation bias. Do you not know the problems with even verifying or watching machines? This is a core part of the discussion of self driving vehicles. I guess I assumed you knew stuff about the field of AI!
> work on whatever code makes you happy without using an LLM?
This isn't how it works, psychologically. The whole time I'm manual coding, I'm wondering if it'd be "easier" to start prompting. I keep thinking about a passage from The Road To Wigan Pier where Orwell addresses this effect as it related to the industrial revolution:
>Mechanize the world as fully as it might be mechanized, and whichever way you turn there will be some machine cutting you off from the chance of working—that is, of living.
>At a first glance this might not seem to matter. Why should you not get on with your ‘creative work’ and disregard the machines that would do it for you? But it is not so simple as it sounds. Here am I, working eight hours a day in an insurance office; in my spare time I want to do something ‘creative’, so I choose to do a bit of carpentering—to make myself a table, for instance. Notice that from the very start there is a touch of artificiality about the whole business, for the factories can turn me out a far better table than I can make for myself. But even when I get to work on my table, it is not possible for me to feel towards it as the cabinet-maker of a hundred years ago felt towards his table, still less as Robinson Crusoe felt towards his. For before I start, most of the work has already been done for me by machinery. The tools I use demand the minimum of skill. I can get, for instance, planes which will cut out any moulding; the cabinet-maker of a hundred years ago would have had to do the work with chisel and gouge, which demanded real skill of eye and hand. The boards I buy are ready planed and the legs are ready turned by the lathe. I can even go to the wood-shop and buy all the parts of the table ready-made and only needing to be fitted together; my work being reduced to driving in a few pegs and using a piece of sandpaper. And if this is so at present, in the mechanized future it will be enormously more so. With the tools and materials available then, there will be no possibility of mistake, hence no room for skill. Making a table will be easier and duller than peeling a potato. In such circumstances it is nonsense to talk of ‘creative work’. In any case the arts of the hand (which have got to be transmitted by apprenticeship) would long since have disappeared. Some of them have disappeared already, under the competition of the machine. Look round any country churchyard and see whether you can find a decently-cut tombstone later than 1820. The art, or rather the craft, of stonework has died out so completely that it would take centuries to revive it.
>But it may be said, why not retain the machine and retain ‘creative work’? Why not cultivate anachronisms as a spare-time hobby? Many people have played with this idea; it seems to solve with such beautiful ease the problems set by the machine. The citizen of Utopia, we are told, coming home from his daily two hours of turning a handle in the tomato-canning factory, will deliberately revert to a more primitive way of life and solace his creative instincts with a bit of fretwork, pottery-glazing, or handloom-weaving. And why is this picture an absurdity—as it is, of course? Because of a principle that is not always recognized, though always acted upon: that so long as the machine is there, one is under an obligation to use it. No one draws water from the well when he can turn on the tap. One sees a good illustration of this in the matter of travel. Everyone who has travelled by primitive methods in an undeveloped country knows that the difference between that kind of travel and modern travel in trains, cars, etc., is the difference between life and death. The nomad who walks or rides, with his baggage stowed on a camel or an ox-cart, may suffer every kind of discomfort, but at least he is living while he is travelling; whereas for the passenger in an express train or a luxury liner his journey is an interregnum, a kind of temporary death. And yet so long as the railways exist, one has got to travel by train—or by car or aeroplane. Here am I, forty miles from London. When I want to go up to London why do I not pack my luggage on to a mule and set out on foot, making a two days of it? Because, with the Green Line buses whizzing past me every ten minutes, such a journey would be intolerably irksome. In order that one may enjoy primitive methods of travel, it is necessary that no other method should be available. No human being ever wants to do anything in a more cumbrous way than is necessary. Hence the absurdity of that picture of Utopians saving their souls with fretwork. In a world where everything could be done by machinery, everything would be done by machinery. Deliberately to revert to primitive methods to use archaic took, to put silly little difficulties in your own way, would be a piece of dilettantism, of pretty-pretty arty and craftiness. It would be like solemnly sitting down to eat your dinner with stone implements. Revert to handwork in a machine age, and you are back in Ye Olde Tea Shoppe or the Tudor villa with the sham beams tacked to the wall.
>The tendency of mechanical progress, then, is to frustrate the human need for effort and creation. It makes unnecessary and even impossible the activities of the eye and the hand. The apostle of ‘progress’ will sometimes declare that this does not matter, but you can usually drive him into a comer by pointing out the horrible lengths to which the process can be carried.
sorry it's so long
I’ve been struggling with a very similar feeling. I too am a manager now. Back in the day there was something very fulfilling about fully understanding and comprehending your solution. I find now with AI tools I don’t need to understand a lot. I find the job much less fulfilling.
The funny thing is I agree with other comments, it is just kind of like a really good stack overflow. It can’t automate the whole job, not even close, and yet I find the tasks that it cannot automate are so much more boring (the ones I end up doing).
I envy the people who say that AI tools free them up to focus on what they care about. I haven’t been able to achieve this building with ai, if anything it feels like my competence has decreased due to the tools. I’m fairly certain I know how to use the tools well, I just think that I don’t enjoy how the job has evolved.
When we outsource the parts of programming that used to demand our complete focus and creativity, do we also outsource the opportunity for satisfaction? Can we find the same fulfillment in prompt engineering that we once found in problem-solving through code?
Most of AI-generated programming content I use are comments/explanations for legacy code, closely followed by tailored "getting started" scripts and iterations on visualisation tasks (for shitty school assignments that want my pyplots to look nice). The rest requires an understanding, which AI can help you achieve faster (it's read many a book related to the topic, so it can recall information a lot like an experienced colleague may), but it can't confer capital K Knowledge or understanding upon you. Some of the tasks it performs are grueling, take a lot of time to do manually, and provide little mental stimulation. Some may be described as lobotomizing and (in my opinion) may mentally damage you in the "Jack Torrance typewriter" kinda way.
It makes me able to work on the fun parts of my job which possess the qualities the article applauds.
I'm guessing you're referencing KRAZAM? https://youtu.be/eDr6_cMtfdA
This article resonates with me like no other has in years. I very recently retired after 40 years writing software because my role had evolved into a production-driven limbo. For the past decade I have scavenged and copied other peoples' code into bland cookie cutter utilities that fed, trained, ran, and summarized data mining ops. It has required not one whit of creative expression or 'flow', making my life's work as dis-engaging as that of... well... the most bland job you can imagine.
AI had nothing to do with my own loss of engagement, though certainly it won't cure what ailed me. In fact, AI promises to do to all of software development what the mechanized data mining process did to my sense of creative self-expression. It will squeeze all the fun out of it, reducing the joy of coding (and its design) to plug-and-chug, rinse, repeat.
IMHO the threat of AI to computer programming is not the loss of jobs. It's the loss of personal passionate engagement in the craft.
So long as your experience and skill allows you to produce work of higher quality than average for your industry, then you will always have a job which is to review that average quality work, and surgically correct it when it is wrong.
This has always been true in every craft, and it remains true for programmers in a post LLL world.
Most training data is open source code written by novice to average programmers publishing their first attempts at things and thus LLMS are heavily biased to replicate the naive, slow, insecure code largely uninformed by experience.
Honestly to most programmers early in their career right now, I would suggest spending more time reviewing code, and bugfixes, than writing code. Review is the skillset the industry needs most now.
But you will need to be above average as a software reviewer to be employable. Go out into FOSSland and find a bunch of CVEs, or contribute perf/stability/compat fixes, proving you review and improve things better than existing automated tools.
Trust me, there are bugs -everywhere- if you know how to look for them and proving you can find them is the resume you need now.
The days of anyone that can rub two HTML tags together having a high paying job are over.
> LLMS are heavily biased to replicate the naive, slow, insecure code largely uninformed by experience
The one time i pasted LLM code without reviewing it it belonged on accidentally quadratic.
It was obvious at first read, but probably not for a beginner. The accidental complexity was hidden behind API calls that weren't wrong, just grossly inefficient.
Problem might be, if you lose the "joy" and the "flow" you'll stop caring about things like that. And software is bloated enough already.
When I coding, most of time was used to search docs over internet. My first language is not english, search over hundrud of pages is quiet slow.
AI help me a lot, you don't need search, just ask AI, and it provide the answer directly. After using AI, I have more time used on coding, more fun.
I am mostly pretty underwhelmed with LLMs' code, but this is a use-case that makes perfect sense to me, and seems like a net-positive: using them as a reference manual/ translator/ training aid.
I just wish I saw more people doing this, rather than asking them to 'draw 80% of the owl'.
I always thought about the problem of AI taking jobs, that even if there are new jobs created to replace the older ones, it will come at a cost of decrease in satisfaction of overall populace.
The more people in general get disconnect from nature/physical world/reality. via layers of abstraction the more discontent they will become. These layers can be: 1) Automatics in agriculture. 2) Industries. 3) Electronics 4) Software 5) and now AI
Each higher layer depends on lower ones for its functioning without the need to worry about specifics and provides a framework for higher abstraction to work on.
The more we move up in hierarchy the more disconnected we become from the physical world.
To support this I observed that villagers in general are more jolly and content than city dwellers. In metropolis specially I saw that people are more rude, anxious and always agitated, while villagers are welcoming and peaceful.
Another good example is that of an artist finding it boring to guide AI even though he loves making paintings himself/herself.
The author is already an experienced programmer. Let me toss in an anecdote about the next generation of programmers. Vibe coding: also called playing pinball with the AI, hoping something useful comes out.
I taught a lecture in my first-semester programming course yesterday. This is in a program for older students, mostly working while going back to school. Each time, a few students are selected to present their code for an exercise that I pick randomly from those they were assigned.
This guy had fancy slides showing his code, but he was basically just reading the code off the page. So I ask him: “hey, that method you call, what exactly does it do?”.
Um…
So I ask "Ok, the result from that method is assigned to a variable. What kind of variable is it?" Note that this is Java, the data type is explicitly declared, so the answer is sitting there on his slide.
Um…
So I tear into him. You got this from ChatGPT. That’s fine, if you need the help, but you need to understand what you get. Otherwise you’ll never get a job in IT.
His answer: “I already have a job in IT.”
Fsck. There is your vibe coder. You really do not want them working on anything that you care about.
This is one of the biggest dangers imo. While I agree with the OP about the deflation of joy in experienced programmers, the related but more consequential effect seems to be dissuading people from learning. A generational threat to collective competence and a disservice to students and teachers everywhere
Does your course not have exams or in-lab assignments? Should sort itself out. Honestly, I'm all for homework fading away as professors can't figure out how to prevent people from using AI. It used to be the case that certain kids could get away with not doing much because they were popular enough to get people to let them copy their assignments (at least for certain subjects). Eventually the system will realize they can't detect AI and everything has to be in-person.
Sure, this guy is likely to fail the course. The point is: he is already working in the field. I don't know his exact job, but if it involves programming, or even scripting, he is faking his way with AI, not understanding what he's doing. That is frightening.
> I don't know his exact job, but if it involves programming, or even scripting, he is faking his way with AI, not understanding what he's doing. That is frightening.
That could be considered malpractice. I know our profession currently doesn't have professional standards, but it's just a side effect of it being very new and not yet solidified; it won't be long until some duty of care becomes required, and we're already starting to see some movement in that direction, with things like the EU CRA.
> After all, if we lose the joy in our craft, what exactly are we optimizing for?
For being one of the few lucky ones that gets to stay around taking care of the software factory robots, or designing them, while everyone else that used to work at the factory is now queueing somewhere else.
To me THIS is the most stressful part of the whole thing.
I like programming but I have other hobbies I find fulfilling, and nothing stops me from programming with a pen and paper.
The bad vibes are not caused by lack of programming, they're caused by headsman sharpening his axe behind me.
A few lucky programmers will be elevated to God status and we're all fighting for those spots now.
For me the most surprising part is the phase of wonder, from those that apparently never read anything in the history of industrial revolution, and think everyone will still have a place when we achieve Star Trek replicator level.
Not everyone gets a seat at the starship.
I think.. based on recent events.. that some of the corporate inefficiencies are very poorly captured. Last year we had an insane project that was thrown at us before end of the year, because, basically, company had a tiff with the vendor and would rather have us spend our time in meetings trying to do what they are doing rather than pay vendor for that thing. From simple money spent perspective, one would think company's simple amoral compass would be a boon.
AI coding is similar. We just had a minor issue with ai generated code that was clearly not vetted as closely as it should have been making output it generated over a couple of months not as accurate as it should be. Obviously, it had to be corrected, then vetted and so on, because there is always time to correct things...
edit: What I am getting at is the old-fashioned, penny smart, but pound foolish.
There is craft in business, in product, and in engineering.
A lot of these discussions focus on craft in engineering and there's lots of merit there regarding AI tools and how they change that process, but I've found that folks who enjoy both the product side of things and the engineering side of things are thriving while those who were very engineering focused understandably feel apprehensive.
I will say, in my day job, which is often at startups, I have to focus more on the business / product side just given the phase of the company. So, I get joy from engineering craft in side projects or other things I work on in my own time to scratch the itch.
I love that quote he led with.
In my case, I couldn't agree more, with the premise of the article, but my life today, is centered around writing software the very best that I can; regardless of value or price.
It's not very effective, if I were to be trying to make a profit.
It's really hard to argue for something, if the something doesn't result in value, as perceived by others.
For me, the value is the process. I often walk away from my work, once I have it up and shipping. I do like to take my work all the way through shipping, support, and maintenance, but find that my eye is always drawn towards new shores[0].
“A ship in harbor is safe, but that is not what ships are built for.”
–John A. Shedd
[0] https://littlegreenviper.com/miscellany/thats-not-what-ships...Flow Management
Flow comes when challenge meets skill
Too much skill and too little challenge creates boredom;
too little skill and too much challenge creates anxiety
AI has reduced the challenge needed for achieving your goal, creating boredom
Remedy: find greater challenges?
I will start by saying I don't have much experience with the latest AI coding tools.
From what I've seen using them would lead to more boredom. I like solving problems. I don't like doing code reviews. I wouldn't trust any AI generated code at this stage without reviewing it. If I could swap that around so I write code and AI gives me a reasonable code review and catches my mistakes I'd be much more interested.
It's 9am in the morning. I login to my workstation and muddle my way through the huge enterprise code base which doesn't fit into any model context window for the AI tool to be useful (and even if it did, we can't use any random model due to compliance and proprietary and whatnot).
I have thousands deadlines which are suddenly coming due and a bunch of code which is broken because some poor soul under the same pressure put something that "works" in. And it worked, until it didn't, and now it's my turn in the barrel.
Is this the joy?
I'm not complaining, I'm doing it for the good money.
Would you be happier and feel more flow if you were typing in assembly? What about hand-punching cards? To me this reads more as nostalgia than a genuine concern. Tools are always increasing in abstraction, but there’s no reason you can’t achieve flow with new tools. Learning to prompt is the new learning to type.
The post focuses on flow, but depending on what you mean by it, it isn't necessarily a good thing. Trying to solve something almost too difficult usually gets you out of flow. You still need concentration, though.
My main worry about AI is that people just keep using the garbage that exists instead of trying to produce something better, because AI takes away much of the pain of interacting with garbage. But most people are already perfectly fine using garbage, so probably not much will change here.
My experience has been almost the opposite.
Typing isn't the fun part of it for me. It's a necessary evil to realize a solution.
The fun part of being an engineer for me is figuring out how it all should work and fit together. Once that's done - I already basically have all of the code for the solution in my head - I've just got to get it out through my fingers and slog through all the little ways it isn't quite right, doesn't satisfy x or y best practice, needs to be reshaped to accommodate some legacy thing it has to integrate that is utterly uninteresting to me, etc.
In the old model, I'd enjoy the first few hours or days of working on something as I was designing it in my mind, figuring out how it was all going to work. Then would come the boring part. Toiling for days or weeks to actually get all the code just so and closing that long-tail gap from 90% done (and all interesting problems solved) to 100% done (and all frustrating minutia resolved).
AI has dramatically reduced the amount of time the unsatisfying latter part of a given effort lasts for me. As someone with high-functioning ADD, I'm able to stay in the "stimulation zone" of _thinking_ about the hard / enjoyable part of the problem and let AI do (50-70%, depending on domain / accuracy) of the "typing toil".
Really good prompts that specify _exactly_ what I want (in technical terms) are important and I still have to re-shape, clean up, correct things - but it's vastly different than it was before AI.
I'm seeing on the horizon an ability to materialize solutions as quickly as I can think / articulate - and that to me is very exciting.
I will say that I am ruthlessly pragmatic in my approach to development, focusing on the most direct solution to meet the need. For those that obsesses over beautiful, elegant code - personalizing their work as a reflection of their soul / identity or whatever, I can see how AI would suck all the joy from the process. Engineering vs. art, basically. AI art sucks and I expect that's as true for code as it is for anything else.
Honestly, most of the "real engineer" rhetoric is exhausting. Here's the thing: the people most obsessed with software craftsmanship, pattern orthodoxy, and layered complexity often create some of the most brittle, hostile, constantly mutating systems imaginable. You may be able to build abstractions, but if you're shipping stuff that users have to re-learn every quarter because someone needed to justify a promotion via another UI revamp or tech stack rewrite, you're not designing well. You're just changing loudly.
Also, stop gatekeeping AI tooling like it’s cheating. We’re not in a craft guild. The software landscape is full of shovelware and half-baked “best practices” that change more often than a JavaScript framework’s logo. I'm not here to honor the tradition of suffering through YAML hell or memorizing the 400 ways to configure a build pipeline. I’m here to make something work well, fast, and that includes leveraging AI like the power tool it is.
So yeah, you can keep polishing the turd pile of over-engineered “real” systems. The rest of us will be using AI to build, test, and ship faster than your weekly stand-up even finishes.
I found myself recently making decent superficial progress only to introduce a bug and had a system crash (unusual bc it’s python) bc I didn’t really understand how the package worked (bc I bypassed the docs for the AI examples). It did end up working out ok - I then went into the weeds and realised the AI has given me two examples that worked in isolation but not together - inconsistent API calls essentially. I do like understanding what I’m doing as much or more than getting it done, bc it always comes back to you, sooner or later.
The things I'm usually tabbing through in cursor are not the things that make me feel a lot of enjoyment in your work. The things that are most enjoyable are usually the system level design aspects, the refactorings to make things work better. These you can brainstorm with AI, but cannot delegate to AI today.
The rest is glorified boilerplate that I find usually saps me of my energy, not gives me energy. I'm a fan of anything that can help me skip over that and get to the more enjoyable work.
Yeah I wonder how do the code look after such professional AI development. I tried ChatGPT 1o to ask it about simple C function - what errors are there. It answered only after I directly asked about the aspects I was expecting it to tell about. It means that if I didn't know that the LLM wouldn't tell me...
Funny that I found this article going to hacker news as a pause in my work : I had to chose between using Aider or my brain to code a small algorithmic task, sorting items of a list based on dependences between items written in a YAML file.
Using Aider would probably solve the task in 5 minutes. Coding it in 30 minutes. The former choice would result in more time for other tasks or reading HN or having a hot beverage or walking in the sun. The second would challenge my rusting algorithmic skills and give me a better understanding of what I'm doing for the medium term.
Hard choice. In any case, I have a good salary, even with the latter option I can decide to spend good times.
As a scientist, I actually greatly enjoy the AI assisted coding because it can help with the boring/tedious side of coding. I.e. I occasionally have some new ideas/algorithms to try, and previously I did not have enough time to explore them out, because there was just too much boring code to be written. Now this part is essentially solved, and I can more easily focus on key algorithms/new ideas.
Typing isn't what makes programming fun.
AI coding preserves flow more than legacy coding. You never have to go read documentation for an hour. You can continuously code.
I had a lot of joy making an experimental DSL with a web server runtime using primarily LLM tools.
Then I shared it on HN and was subject to literal harassment.
Have you encounter anything regarding tech debt when using AI?
Don't see any mention regarding this in the post, which is the common objection people have regarding vibe coding.
One of the things people often overlook don't talk about in this arguments is the manager's point of view and how it's contributing to the shakeups in this industry.
As a developer I'm bullish on coding agents and GenAI tools, because they can save you time and can augment your abilities. I've experienced it, and I've seen it enough already. I love them, and want to see them continue to be used.
I'm bearish on the idea that "vibe coding" can produce much of value, and people without any engineering background becoming wildly productive at building great software. I know I'm not alone. If you're a good problem solver who doesn't know how to code, this is your gateway. And you better learn what's happening with the code while you can to avoid creating a huge mess later on.
Developers argue about the quality of "vibe coded" stuff. There are good arguments on both sides. At some point I think we all agree that AI will be able generate high quality software faster than a human, someday. But today is not that day. Many will try to convince you that it is.
Within a few years we'll see massive problems from AI generated code, and it's for one simple reason:
Managers and other Bureaucrats do not care about the quality of the software.
Read it again if you have to. It's an uncomfortable idea, but it's true. They don't care about your flow. They don't care about how much you love to build quality things. They don't care if software is good or bad they care about closing tickets and creating features. Most of them don't care, and have never cared about the "craft".
If you're a master mason crafting amazing brickwork, you're exactly the same as some amateur grabbing some bricks from home depot and slapping a wall together. A wall is a wall. That's how the majority of managers view software development today. By the time that shoddy wall crumbles they'll be at another company anyway so it's someone else's problem.
When I talk about the software industry collapsing now, and in a few years we're mired with garbage software everywhere, this is why. These people in "leadership" are salivating at the idea of finally getting something for nothing. Paying a few interns to "vibe code" piles of software while they high five each other and laugh.
It will crash. The bubble will pop.
Developers: Keep your skills sharp and weather out the storm. In a few years you'll be in high demand once again. When those walls crumble, they will need people who what they're doing to repair it. Ask for fair compensation to do so.
Even if I'm wrong about all of this I'm keeping my skills sharp. You should too.
This isn't meant to be anti-management, but it's based on what I've seen. Thanks for coming to my TED talk.
* And to the original point, In my experience the tools interrupt the "flow" but don't necessarily take the joy out of it. I cannot do suggestion/autocomplete because it breaks my flow. I love having a chat window with AI nearby when I get stuck or want to generate some boilerplate.
> If you're a master mason crafting amazing brickwork, you're exactly the same as some amateur grabbing some bricks from home depot and slapping a wall together.
IDK, there's still a place in society for master masons to work on 100+ year old buildings built by other master masons.
Same with the robots. They can implement solutions but I'm not sure I've heard of any inventing an algorithmic solution to a problem.
i dont know where you are working, but where I work i cant prompt 90% of my job away using cursor. in fact, I find all of these tools to be more and more useless and our codebase is growing and becoming more complex
based on the current state of AI and the progress im witnessing on a month-by-month basis - my current prediction is there is zero chance AI agents are going to be coding and replacing me in the next few years. if i could short the startups claiming this, I would.
Don't get distracted by claims that AI agents "replace programmers". Those are pure hype.
I'm willing to bet that in a few years most of the developers you know will be using LLMs on a daily basis, and will be more productive because of it (having learned how to use it).
I have the same experience. It‘s basically a better StackOverflow, but just like with SO you have to be very careful about the replies, and also just like SO its utility diminishes as you get more proficient.
As an example, just today I was trying to debug some weird WebSocket behaviour. None of the AI tools could help, not Cursor, not plain old ChatGPT with lots of prompting and careful phrasing of the problem. In fact every LLM I tried (Claude 3.7, GPT o4-mini-high, GPT 4.5) introduced errors into my debugging code.
I’m not saying it will stay this way, just that it’s been my experience.
I still love these tools though. It’s just that I really don’t trust the output, but as inspiration they are phenomenal. Most of the time I just use vanilla ChatGPT though; never had that much luck with Cursor.
Yeah, they're currently horrible at debugging -- there seems to be blind spots they just can't get past so end up running in circles.
A couple days ago I was looking for something to do so gave Claude a paper ("A parsing machine for PEGs") to ask it some questions and instead of answering me it spit out an almost complete implementation. Intrigued, I threw a couple more papers at it ("A Simple Graph-Based Intermediate Representation" && "A Text Pattern-Matching Tool based on Parsing Expression Grammars") where it fleshed out the implementation and, well... color me impressed.
Now, the struggle begins as the thing has to be debugged. With the help of both Claude and Deepseek we got it compiling and passing 2 out of 3 tests which is where they both got stuck. Round and round we go until I, the human who's supposed to be doing no work, figured out that Claude hard coded some values (instead of coding a general solution for all input) which they both missed. In applying ever more and more complicated solutions (to a well solved problem in compiler design) Claude finally broke all debugging output and I don't understand the algorithms enough to go in and debug it myself.
Of course I didn't use any sort of source code management so I could revert to a previous version before it was broken beyond all fixing...
Honestly, I don't even consider this a failure. I learned a lot more on what they are capable of and now know that you have to give them problems in smaller sections where they don't have to figure out the complexities of how a few different algorithms interact with each other. With this new knowledge in hand I started on what I originally intended to do before I got distracted with Claude's code solution to a simple question.
--edit--
Oh, the irony...
After typing this out and making an espresso I figured out the problem Claude and Deepseek couldn't see. So much for the "superior" intelligence.
This has become especially true for me in the past four months. The new long context reasoning models are shockingly good at digging through larger volumes of gnarly code. o3, o4-mini and Claude 3.7 Sonnet "thinking" all have 200,000 token context limits, and Gemini 2.5 Pro and Flash can do 1,000,000. As "reasoning" models they are much better suited to following the chain of a program to figure out the source of an obscure bug.
Makes me wonder how many of the people who continue to argue that LLMs can't help with large existing codebases are missing that you need to selectively copy the right chunks of that code into the model to get good results.
But 1 million tokens is like 50k lines of code or something. That's only medium sized. How does that help with large complex codebases?
What tools are you guys using? Are there none that can interactively probe the project in a way that a human would, e.g. use code intelligence to go-to-definition, find all references and so on?
> Fast forward to today, and that joy of coding is decreasing rapidly. Well, I’m a manager these days, so there’s that… But even when I do get technical, I usually just open Cursor and prompt my way out of 90% of it. It’s way more productive, but more passive as well.
Dude's an engineering manager who codes maybe 5% of the time and his joy is decreasing. AI is not the problem, it's being an engineering manager.
in the meantime im having lots of fun coding and using AI, reinventing every wheel i can. 0 stress cos i don't do it for money :). I think a lot of people are having a tantrum because programing is not sexy anymore, its getting easier, the bar is lower now , the quality is awful and nobody cares. its like any other boring soul crushing job.
also if you want to see the real cost (at least part of it) of AI coding or the whole fucked up IT industry, go to any mining town in the global south.
I think a lot of this discussion is moot - it all devolves into the same arguments rehashed between people who like using AI and people who do not.
What we really need are more studies on the productivity and skill outcomes of using AI tools. Microsoft did one, with results that were very negative towards AI tools [1]. I would like to see more (and much larger cohort) studies along this line, whether they validate Microsoft's conclusions or oppose them.
Personally I do not find AI coding tools to be useful at all - but I have not put extensive time into developing a "skillset" to use them optimally. Mainly because I believe, similar to what the study by MS found, that they are detrimental to my critical reasoning skills. If this turns out to be wrong, I would not mind evaluating changing course on that decision - but we need more data.
1. https://www.microsoft.com/en-us/research/wp-content/uploads/...
>"...the one thing that currently worries me most about using AI for software development: lack of joy."
I struggled with this at first too. But it just becomes another kind of joy. Think of it like jogging versus riding a motorcycle. Jogging is fun, people enjoy it, and they always will. But flying down a canyon road at 90MPH and racing through twists and turns is... way more fun. Once you've learned how to do it. But there's a gap there in which it stops being fun until you do.
That’s an interesting analogy but I do disagree with it.
I would say that programming without an AI is like riding a motorcycle. You’re in complete control and it’s down to your skill to get you we’re your going.
While using AI is like taking a train. You got to plan the route but you’re just along for the ride.
Which I think lines up to the article. If you want to get somewhere easily and fast, take a train. But that does take away the joy of the journey.
Earlier this year, a hackernews started quizzing me about the size and scope of the projects I worked on professionally, with the implication that I couldn't really be working on anything large or complex -- that I couldn't really be doing serious development, without using a full-fat IDE like IntelliJ. I wasn't going to dox myself or my professional work just so he could reach a conclusion he's already arrived at. The point is, to this person, beyond a certain complexity threshold -- simple command-line tools, say -- an IDE was a must, otherwise you were just leaving productivity on the table.
https://news.ycombinator.com/item?id=42511441
People are going to be making the same judgements about AI-assisted coding in the near future. Sure, you could code everything yourself for your own personal enrichment, or simply because it's fun. But that will be a pursuit for your own time. In the realm of business, it's a different story: you are either proompting, or you're effectively stealing money from your employer because you're making suboptimal use of the tools available. AI gets you to something working in production so much faster that you'd be remiss not to use it. After all, as Milt and Tim Bryce have shown, the hard work in business software is in requirements analysis and design; programming is just the last translation step.
So if I'm understanding this, there are two central arguments being made here.
1. AI Coding leads to a lack of flow.
2. A lack of flow leads to a lack of joy.
Personally, I can't find myself agreeing with the first argument. Flow happens for me when I use AI. It wouldn't surprise me if this differed developer to developer. Or maybe it is the size of requests I'm making, as mine tend to be on the smaller size where I already have an idea of what I want to write but think the AI can spit it out faster. I also don't really view myself as prompt engineering; instead it feels more like a natural back and forth with the AI to refine the output I'm looking for. There are times it gets stubborn and resistant to change but that is generally a sign that I might want to reconsider using AI for that particular task.
One trend I've been finding interesting over the past year is that a lot of engineers I know who moved into engineering management are writing code again - because LLMs mean they can get something productive done in a couple of hours where previously it would have taken them a full day.
Managers usually can't carve out a full day - but a couple of hours is manageable.
See also this quote from Gergely Orosz:
Despite being rusty with coding (I don't code every day
these days): since starting to use Windsurf / Cursor with
the recent increasingly capable models: I am SO back to
being as fast in coding as when I was coding every day
"in the zone" [...]
When you are driving with a firm grip on the steering
wheel - because you know exactly where you are going, and
when to steer hard or gently - it is just SUCH a big
boost.
I have a bunch of side projects and APIs that I operate -
but usually don't like to touch it because it's (my)
legacy code.
Not any more.
I'm making large changes, quickly. These tools really
feel like a massive multiplier for experienced devs -
those of us who have it in our head exactly what we want
to do and now the LLM tooling can move nearly as fast as
my thoughts!
From https://x.com/GergelyOrosz/status/1914863335457034422> a lot of engineers I know who moved into engineering management are writing code again
They should be managing instead. Not to say that they can't code their own tools, but the statement sounds like a construction supervisor nailing studs or welding steel bars. Can work for a small team, but that's not your primary job.
Hard disagree.
I've been an engineering manager and it's a lot easier to make useful decisions that your team find credible if you can keep your toes in the water just a little bit.
My golden rule is to stay out of the critical path of shipping a user-facing feature: if a product misses a deadline because the engineering manager slipped on their coding commitments, that's bad.
The trick is to use your minimal coding time for things that are outside of that critical path: internal tools, prototypes, helping review code to get people unstuck, that kind of thing.
Yeah I think flow is more about holding a lot of knowledge about the code and its control flow in your head at a time. I think there's an XKCD or something that illustrates that.
You still need to do that if you're using AI, otherwise how do you know if it's actually done a good job? Or are people really just vibe coding without even reading the code at all? That seems... unlikely to work.
The catch is that when AI handles 95% or 99% of a task, people say great, don't need humans. 99% is great.
But when that last 1% breaks and AI can’t fix it. That’s where you need the humans.
I don't know man, maybe prompt most of your work, eyeball it and verify it rigorously (which if you cannot do, you should absolutely never touch an LLM!), run a script to commit and push after 3 hours and then... work on whatever code makes you happy without using an LLM?
Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have. I have started deriving a lot of value from those LLMs I chose to interact with by specifying clear boundaries on what's the priority and what can wait for later and what should be completely ignored due to this or that objective (and a number of other parameters I am giving them). When you do that well, they are extremely useful.