Comment by Kiboneu
When coding agents are unavailable I just continue to code myself or focus on architecture specification / feature descriptions. This really helps me retain my skills, though there is some "skew" (I'm not sure how to describe it, it's a feeling). Writing instructions for LLMs feels to me pretty similar to doing the basic software architecture and specification work that a lot of people tend to skip (now there's no choice, and it's directly useful). When you skip specification for a sufficiently complex project, you likely introduce footguns along the way that slow down development significantly. So what would one expect when they run a bunch of agents based on a single sentence prompt?!
Like the architecture work and writing good quality specs, working on the code myself has a guiding effect on the coding agents. In a way, it also helps clarify items that are more ambiguous in the spec. If I write some of the code myself, the agent makes fewer assumptions about my intent when it touches that code (especially for things I didn't specify in the architecture or that are difficult to articulate in natural language).
I work in small iterations, with the agent checking back after each task. Because I spend a lot of time on architecture, I already have a model in my mind of how the small code snippets and features will connect.
Maybe my comfort with reviewing AI code comes from spending a large chunk of my life reverse engineering human code, understanding it to the extent that complex bugs and vulnerabilities emerge. I've spent a lot of time with different styles of code, from awful to "this programmer must have a permanent line to god to do this so elegantly". The models are trained on all of that, so I have a little cluster of neurons in my head that's shaped closely enough to follow the model's shape.