Comment by Terr_
There's an implied assumption here that developers who end up spending all their time reviewing LLM code won't lose their skills or become homicidal. :p
I expect that comes from the contrast and synthesis between how the author anticipates things will develop or be explained, versus what the other person actually provided, and from trying to understand their thought process.
What happens if the reader no longer has enough of that authorial instinct, their own (opinionated) independent understanding?
I think the average experience would drift away from "I thought X was the obvious way, but now I see that by doing Y you avoid that other problem, cool" and towards "I don't see the LLM doing anything too unusual compared to when I ask it for things, LGTM."
It seems counterintuitive that the reader would lose that authorial instinct due to a lack of writing. Like, maybe they never had it, in which case, yes. But being exposed to a lot of different "writing opinions" tends to hone your own.
Let's say you're right though, and you lose that authorial instinct. If you've got five different proposals/PRs from five different models, each one critiqued by the other four, the need for authorial instinct diminishes significantly.
Fair enough. ;-)
I'm actually curious about the "lose their skills" angle though. In the open source community it's well understood that, if anything, reviewing a lot of code tends to sharpen your skills.