Comment by kjgkjhfkjf 4 days ago

If you want to remain relevant in the AI-enabled software engineering future, you MUST get very good at reviewing code that you did not write.

AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.

Educational establishments MUST prioritize teaching code review skills, and other high-level leadership skills.

ZYbCRq22HbJ2y7 4 days ago

> AI can already write very good code

Debatable. With similar experience, I'd say it depends on the language, the existing patterns, the code base, the base prompts, and the complexity of the task.

  • netghost 4 days ago

    How about: AI can write large amounts of code that might look good out of context?

    • ZYbCRq22HbJ2y7 4 days ago

      Yeah, LLMs can do that very well, IMO. As an experienced reviewer, I know the "shape" of the code shouldn't be treated as a signal of correctness, but it can be easy to fall into that pattern when you review code. In my experience, LLMs tend to conflate shape and correctness.

      • dragonwriter 4 days ago

        > As an experienced reviewer, I know the "shape" of the code shouldn't be treated as a signal of correctness, but it can be easy to fall into that pattern when you review code.

        For human-written code, shape correlates somewhat with correctness, largely because the shape and the correctness are both driven by the same human thought patterns generating the code.

        LLMs are trained very well at reproducing the shape of expected outputs, but the mechanism is different from human reasoning and is not represented the same way in the shape of the outputs. So the correlation is, at best, weaker with LLMs, if it is present at all.

        This is much the same effect that makes LLMs convincing purveyors of BS in natural language, only magnified for code: people are used to other people bluffing with shape in natural language, whereas churning out high-volume, well-shaped code with crappy substance has never been a particularly useful skill for humans to develop, so it is rarely encountered. Prior to AI code, reviewers simply weren't faced with it much.
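
        A hypothetical illustration of that effect (not from any real PR): the snippet below has the tidy shape reviewers tend to trust, with a docstring and clean names, but the loop bound is off by one, so it silently drops the final window.

            def moving_average(values, window):
                """Return the moving average of `values` over a sliding window."""
                averages = []
                # Off-by-one: the bound should be len(values) - window + 1,
                # so the last window is silently skipped.
                for i in range(len(values) - window):
                    chunk = values[i:i + window]
                    averages.append(sum(chunk) / window)
                return averages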

nop_slide 4 days ago

I’m considered one of the stronger code reviewers on the team. What grinds my gears is seeing large, obviously AI-heavy PRs and finding a ton of dumb things wrong with them: things like totally different patterns, and even bugs. I’ve lost trust that the person putting up the PR has even self-reviewed their own code and verified it does what they intend.

If you’re going to use AI, you have to be even more diligent and self-review your code; otherwise you’re being a shitty teammate.

  • kubectl_h 4 days ago

    Same. I work at a place that has gone pretty hard into AI coding, including onboarding managers onto it to get them into the dev lifecycle, and it definitely puts an inordinate amount of pressure on senior engineers to scrutinize PRs much more closely. That includes much more thorough reviews of tests, since AI writes both the implementation and the tests.

    It's also caused an uptick in inbound to the dev tooling and CI teams, because AI can break things in strange ways; it lacks common sense.

  • faangguyindia 4 days ago

    If you are seeing that, it just means they are not using the tool properly or they are using the wrong tool.

    AI-assisted commits on my team are "precise".

h4ny 4 days ago

> you MUST get very good at reviewing code that you did not write.

I find that interesting. That has always been the case at most places my friends and I have worked that have proper software engineering practices, at companies both very large and very small.

> AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.

I echo @ZYbCRq22HbJ2y7's opinion. For well-defined refactoring and for expanding on existing code in limited scope they do well, but I have not seen that for any substantial feature, especially full-stack ones, and that is what most senior engineers I know are finding too.

If you are really seeing that, then I would worry either about the quality of those senior+ software engineers or about the metrics you are using to assess the efficacy of AI vs. senior+ engineers. You don't even have to show us any code: just tell us how you objectively came to that conclusion and what framework you used to compare them.

> Educational establishments MUST prioritize teaching code review skills

Perhaps more is needed, but I'm not sure about "prioritizing". Code review isn't something you can teach as a self-contained skill.

> and other high-level leadership skills.

Not everyone needs to be a leader and not everyone wants to be a leader. What are leadership skills anyway? If you look around the world today, it looks like many people we call "leaders" are people accelerating us towards a dystopia.

jonahx 4 days ago

There is no reason to think that code review will magically be spared by the AI onslaught while code writing falls, especially as devs themselves lean more on the AI and have less and less experience coding every day.

There just haven't been as many resources poured into improving AI code review as there have been into code writing.

And in the end the whole paradigm itself may change.

im_lince 4 days ago

Totally agree with this. Code review is quickly becoming the most important skill for engineers in the AI era. Tools can generate solid code, but judgment, context, and maintainability come from humans. That’s exactly why we built LiveReview (https://hexmos.com/livereview/): to help teams get better at reviewing and learning from code they didn’t write.

gf000 4 days ago

> AI can write better code than most of them can at this point

So where are your 3 startups?

wfhrto 4 days ago

AI can review code. No need for human involvement.

  • gf000 4 days ago

    For styling and trivial issues, sure. And if it's free, do make use of it.

    But when reviewing, it is just as unable to properly reason about anything slightly more complex as it is when writing code.