Comment by jerf
If AIs were to plateau where they are for an extended period of time, I definitely worry about their net effect on software quality.
One of the things I worry about is people not even learning what they can ask the computer to do properly because they don't understand the underlying system well enough.
One of my little pet peeves, especially since I do a lot of work in the networking space, is code that works with strings instead of streams. For example, with proper languages and libraries it is not that difficult to write an HTTP POST handler that accepts a multi-gigabyte file and uploads it to an S3 bucket, perhaps gzip'ing it along the way, streaming the whole way so that a file of any size can be handled without reference to the RAM on the machine. The alternative, loading the entire upload into a string and then sending that string on to S3, requires massive amounts of RAM in the middle. There's still a lot of people and code out in the world that work that way. AIs are learning from all that code. The mass of not-very-well-written code can overwhelm the good stuff.
And that's just one example. A whole bunch of stuff that proliferates across a code base like that and you get yet another layer of sloppiness that chews through hardware and negates yet another few generations of hardware advances.
Another thing is that, at the moment, code that is good for an AI is also good for a human. They may not be quite 100% the same, but right now they're still largely in sync. (And if we are wise, we will work to keep it that way, which is another conversation, and we probably won't because we aren't going to be this wise at scale, which is yet another conversation.) I do a lot of little things like use little types to maintain invariants in my code [1]. This is good for humans, and good for AIs. The advantages of strong typing still work for AIs as well. Yet none of the AIs I've used seem to use this technique, even with a code base in context that uses this technique extensively, nor are they very good at it, at least in my experience. They almost never spontaneously realize they need a new type, and whenever they go to refactor one of these things they utterly annihilate all the utility of the type in the process, completely blind to the concept of invariants. Not only do they tend to code in typeless goo, they'll even turn well-typed code back into goo if you let them. And the AIs are not so amazing that they overcome the problems even so.
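For anyone unfamiliar with the technique, here's a small Go sketch of what I mean; BucketName and its validation rules are illustrative, not the real S3 naming spec:

```go
package main

import (
	"fmt"
	"strings"
)

// BucketName is a small type whose only constructor enforces its invariant,
// so any BucketName a function receives is already known to be valid.
type BucketName struct {
	name string
}

// NewBucketName is the only way to obtain a BucketName. (The rules here are
// a simplified stand-in for the real constraints.)
func NewBucketName(s string) (BucketName, error) {
	if len(s) < 3 || len(s) > 63 || strings.ToLower(s) != s {
		return BucketName{}, fmt.Errorf("invalid bucket name: %q", s)
	}
	return BucketName{s}, nil
}

// String gives read access; there is no setter, so the invariant can't be
// broken after construction.
func (b BucketName) String() string { return b.name }

func main() {
	b, err := NewBucketName("my-uploads")
	fmt.Println(b, err == nil)
	_, err = NewBucketName("Not A Bucket")
	fmt.Println(err != nil)
}
```

The refactoring failure I describe is typically an AI replacing BucketName with a bare string "for simplicity," which silently reopens every code path to unvalidated data.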
(The way these vibe coded code bases tend to become typeless formless goo as you scale your vibe coding up is one of the reasons why vibe coding doesn't scale up as well as it initially seems to. It's good goo, it's neat goo, it is no sarcasm really amazing that it can spew this goo at several lines per second, but it's still goo and if you need something stronger than goo you have problems. There are times when this is perfect; I'm just about to go spray some goo myself for doing some benchmarking where I just need some data generated. But not everything can be solved that way.)
And who is going to learn to shepherd them through writing better code, if nobody understands these principles anymore?
I started this post with an "if" statement, which wraps the whole rest of the body. Maybe AIs will advance to the point where they're really good at this, maybe better than humans, and it'll be OK that humans lose understanding of this. However, we remain a ways away from that, and it may yet be more years away than we'd like. After 10 or 15 years of accreting this sort of goo in our code bases, the AIs that actually can clean it up may have quite a hard time with what their predecessors left behind.
[1]: https://jerf.org/iri/post/2025/fp_lessons_types_as_assertion...