000ooo000 5 days ago

>Essentially, the PR process at most tech companies is nothing more than a gatekeeping/compliance/CYA mechanism, and realizing this can definitely help your career.

It has value, particularly when the team has newcomers or less-experienced developers. Writing it off as 'CYA' discards an opportunity to give async feedback on the work.

sarchertech 5 days ago

Sure, that's fine when you're the one doing the review for a newcomer or less-experienced dev.

However, I think that a PR review, after the code has already been written, is the absolute worst time to do this.

  • hakunin 5 days ago

    I completely disagree. By writing the code, the author gains understanding of the hidden details of the problem. The reviewer gets to explore a full solution that accounts for those details and comes with tests. With the context still fresh, it's a great time for a new pair of eyes to explore, ask questions (which should lead to clarifications in the code), and think of ways to simplify and untangle the solution further while keeping the tests passing. In fact, it can take more time to keep speculating and discussing what the solution might be than to write the code and iterate on it. In an asynchronous/remote work environment, iterating on PRs is not only okay, but probably the most effective way to collaborate.

    • sarchertech 5 days ago

      I'm not saying I never fire up a draft PR so that I can nail some things down and get comments on it.

      But at the same time, not everything needs to be asynchronous, and if you try to make everything asynchronous, such that you never have ad hoc meetings or synchronous conversations, you're going to move much slower than necessary. I've been exclusively remote for 10 years now, so I love doing things async when I can, but I recognize the limits.

      By the time my PR is ready to merge, the assumption should be that it's ready to merge unless you see something actually wrong. Not just "hey, this could be a little easier to reason about if you completely redesigned your solution."

      9/10 when people do make those kinds of suggestions it's not objectively better, and 9/10 changing it won't move the needle with respect to maintainability or meaningful performance.

      So we have a funnel: most reviewers are only going to do surface-level reviews; of those who do deep dives, most aren't going to find any problems; and of those who do, most of the suggested changes won't make any kind of significant impact. Is it worth going through this whole process? My conclusion is that no, it is not.

      Comparing the software we produced before the whole "PR review for every merge request" thing became common with the software we produce now, I don't think software quality has improved at all. So for me to change my mind, I'm going to need to see hard data that says the opposite.

      • hakunin 5 days ago

        > 9/10 when people do make those kinds of suggestions it's not objectively better, and 9/10 changing it won't move the needle with respect to maintainability or meaningful performance.

        The 9/10 number seems at odds with my experience, but that could be because I often help set the PR review culture and encourage people to identify which question a comment is really asking: is it "how" (…is it working), "what" (…is it doing), or "why" (…is it done this way)? Recognizing those makes it much easier for the author to address the specifics without a total redesign. (How = the logic is hard to follow, what = better names are needed, why = comments are needed.)

        But sometimes a redesign can be super compelling to everyone, and the question becomes: do we want to be stuck with the subpar version? Because really, the closer you are to remembering how it all works, the easier it is to rewrite. Not later, when it has faded from memory and become overgrown with dependent code.

        > Comparing the software we produced before the whole "PR review for every merge request" thing became common with the software we produce now, I don't think software quality has improved at all. So for me to change my mind, I'm going to need to see hard data that says the opposite.

        The kinds of subjective reviews we're discussing (not security, performance, consistency, edge-case issues, or major simplifications) aren't really about quality; they're about your specific team understanding your specific code. Help your colleagues (just the small circle of maintainers) have more confidence coming in later to change this code. If you're working alone, this isn't as big an issue.

        To collect data on this, you would need to witness what happens when everyone architects things their own way and you have to track down the person who wrote each part to decipher it, and then compare the slowdown that causes. Nearly impossible to measure in practice, I think.

        > not everything needs to be asynchronous

        I completely agree. PRs provide great tools for discussing code asynchronously, so it's good to lean on them sometimes and to discuss the work on a call at other times. This is, however, a big departure from your original statement that a PR review is the absolute worst time to review the code.