Comment by ithkuil 5 days ago

Deciding what to build and catching when things go sideways (and I'd add: engineering things so that you can better deal with things going sideways) was always the limiting factor.

Sure, writing code was slower before the agentic-coder era, but as people wrote code, their understanding of the system grew alongside it, and that understanding let them make informed decisions about what to do next and how to fix things when they went sideways.

Replacing the human who writes code with an agent that does it faster doesn't necessarily speed up the overall process by the same amount. Some of the time saved in producing code is simply shifted elsewhere: to reading, validating, and reconstructing the understanding that previously emerged naturally while writing. If the human still needs a sufficiently deep mental model of the system in order to make correct decisions, diagnose failures, and decide what to do next, then that understanding must be acquired one way or another. When it no longer forms incrementally during the act of coding, it has to be rebuilt after the fact, often under worse conditions and with less context. In that sense, the apparent speedup only holds if we ignore the cost of comprehension and review; once those are included, the comparison becomes less about raw code throughput and more about where and how understanding is generated in the process.

Many people understand this tradeoff in general terms, just as we generally understand the concept of technical debt.

But just as it's very hard to deal with classic technical debt, it will be very hard to counterbalance the short-term gains of AI producing endless streams of code.