Comment by 8note
I think the review cycles we've been doing for the past decade or two are going to change to match the output of LLMs and how LLMs prefer to make whole, big changes.
I immediately see that the most important thing to have understand a change is future LLMs, more than people. We still need to understand what's going on, but if my LLM and my coworker's LLM are well aligned, chances are my coworker will have a better time working with the code I publish than if I got them to understand it well without their LLM understanding it.
With humans as the architects of LLM systems that build and maintain a codebase, I think the constraints are different, and we don't have a great idea yet of what the actual requirements are.
It certainly mismatches how we've been doing things: publishing small change requests that each do only a part of the whole.
I think any workflow that doesn't cater to human constraints is suspect, until genAI tooling is a lot more mature.
Or to put it another way: understandable piecemeal commits are a best practice for a fundamental human reason; moving away from them risks lip-service reviews and throwing AI code straight into production.
Which I imagine we'll get to (after there are much more robust auto-test/scan wrap-arounds), but that day isn't today.