Comment by shagie 17 hours ago

62 replies

> Now I just assume they're taking my feedback and feeding it right back to the LLM.

This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."

Part of the challenge (and I don't have an answer either) is there are some juniors who use AI to assist... and some who use it to delegate all of their work to.

It is especially frustrating that the second group doesn't become much more than a proxy for an LLM.

New juniors can progress in software engineering - but they have to take the road of disciplined use of AI and make sure that they're learning the material rather than delegating all their work to it... and that delegating work is very tempting... especially if that's what they did in college.

johnnyanmac 12 hours ago

I must ask once again why we are having these 5+ round interview cycles and we aren't able to filter for qualities that the work requires of its talent. What are all those rounds for if we're getting engineers who aren't as valued for the team's needs at the end of the pipeline?

  • getnormality 11 hours ago

    There's no fix for this problem in hiring upfront. Anyone can cram and fake if they expect a gravy train on the other end. If you want people to work after they're hired, you have to be able to give direct negative feedback, and if that doesn't work, fire quickly and easily.

    • JosephjackJR 2 hours ago

      The bar for “junior” has quietly turned into “mid-level with 3 years of production experience, a couple of open-source contributions, and perfect LeetCode” while still paying junior money. Companies list “0-2 years” but then grill candidates on system design, distributed tracing, and k8s internals like they’re hiring for staff roles. No wonder the pipeline looks broken. I’ve interviewed dozens of actual juniors in the last six months. Most can ship features, write clean code, and learn fast, but they get rejected for not knowing the exact failure modes of Raft or how to tune JVM garbage collection on day one. The same companies then complain they “can’t find talent” and keep raising the bar instead of actually training people.

      Real junior hiring used to mean taking someone raw, pairing them heavily for six months, and turning them into a solid mid. Now the default is “we’ll only hire someone who needs zero ramp-up” and then wonder why the market feels empty.

    • johnnyanmac 11 hours ago

      >Anyone can cram and fake if they expect a gravy train on the other end.

      If you're still asking trivia, yes. Maybe it's time to shift away from the old filter and update the process?

      If you can see in the job that a 30-minute PR is the problem, then maybe replace that 3rd leetcode round with 30 minutes of pair programming. It's hard to ChatGPT in real time without arousing suspicion.

      • nradov 10 hours ago

        That approach to interviewing will cause a lot of false negatives. Many developers, especially juniors, get anxious when thrown into a pair programming task with someone they don't know and will perform badly regardless of their actual skills.

  • locknitpicker 5 hours ago

    > I must ask once again why we are having these 5+ round interview cycles and we aren't able to filter for qualities that the work requires of its talent.

    Hiring well is hard, especially if compensation isn't competitive enough to attract talented individuals who have a choice. It's also hard to change institutional hiring practices. People don't get fired for buying IBM, and they also don't get fired for following the same hiring practices that were in place in 2016.

    > What are all those rounds for if we're getting engineers who aren't as valued for the team's needs at the end of the pipeline?

    Software development is a multidisciplinary field. It involves multiple non-overlapping skill sets, both hard skills and soft skills. Also, you need multiple people vetting a candidate to eliminate corruption and help weed out candidates who outright clash with company culture. You need to understand that hiring someone is a disruptive activity that impacts not only what skill sets are available in your organization but also the current team dynamics. If you read around, you'll stumble upon stories of people who switch roles in reaction to new arrivals. It's important to get this sort of stuff right.

    • johnnyanmac 5 hours ago

      >It's important to get this sort of stuff right.

      Well, I'm still waiting. Your second paragraph seems to contradict the first, which perfectly encapsulates the issue with hiring: too afraid to try new things, so instead we add bureaucracy to lessen accountability.

      • locknitpicker 5 hours ago

        > Well, I'm still waiting. Your second paragraph seems to contradict the first, which perfectly encapsulates the issue with hiring: too afraid to try new things, so instead we add bureaucracy to lessen accountability.

        I think you haven't spent much time thinking about the issue. Changing hiring practices does not mean they improve. It only means they changed. You are still faced with the task of hiring adequate talent, but if you change processes then you no longer have baselines and past experience to guide you. If you keep your hiring practices, you keep those baselines: you stick with something that is proven to work, albeit with debatable optimality, and you mitigate risk because your experience with the process helps you spot some red flags. The worst-case scenario is that you repeat old errors, but those will be systematic errors, downplayed by the fact that your whole organization is proof that your hiring practices are effective.

        • johnnyanmac 4 hours ago

          >Changing hiring practices does not mean they improve.

          No, but I'd like to at least see conversation on how to improve the process. We aren't even at that point. We're just barely past acknowledging that it's even an issue.

          >but if you change processes then you no longer have baselines and past experience to guide you.

          I argue we're already at this point. The reason we got past the above point of "acknowledging the problem" (a decade too late, arguably) is that the baselines are failing against new technology, which is increasing false positives.

          You have a point, but why does tech pick this point to finally decide not to "move fast and break things"? Not when it comes to law and ethics, but when acquiring new talent (which meanwhile is already disrupting their teams with this AI slop)?

          >those will be systematic errors which are downplayed by the fact that your whole organization is proof that your hiring practices are effective.

          okay, so back to step zero then. Do we have a hiring problem? The thesis of this article says yes.

          "it worked before" seems to be the antipattern the tech industry tried to fight back against for decades.

  • venturecruelty 12 hours ago

    It's the cargo cult kayfabe of it all. People do it because Google used to do it, now it's just spread like a folk religion. But nobody wants guilds or licensure, so we have to make everyone do a week-long take-home and then FizzBuzz in front of a very awkward committee. Might as well just read chicken bones, at least that would be less humiliating.

    • nradov 9 hours ago

      And who would write the guild membership or licensure criteria? How much should those focus on ReactJS versus validation criteria for cruise missile flight control software?

      • throwup238 9 hours ago

        Guild members? Who else?

        You’re asking these rhetorical questions as if we haven’t had centuries of precedent here, both bad and good. How does the AMA balance between neurosurgeons and optometrists? Bar associations between corporate litigators and family estate lawyers? Professional engineering associations between civil engineers and chemical engineers?

    • ThrowawayR2 9 hours ago

      Guilds and licensure perform gatekeeping, by definition, and the more useful they are at providing a good hiring signal, the more people get filtered out by the gatekeeping. So there's no support for it because everyone is afraid that effective guilds or licensing would leave them out in the cold.

    • johnnyanmac 11 hours ago

      Yeah, I'd be more than fine with licensing if I didn't have to keep going through 5 rounds of trivia only to be ghosted. Let me do that once and show I can code my way out of a paper bag.

  • ponector 11 hours ago

    I can understand such a process for a freshman, but for an industry veteran with 10+ years of experience, with recommendations from multiple senior managers?

    And yet welcome to leetcode grind.

    • johnnyanmac 11 hours ago

      Yeah, I was told I'd get less of this as I got real experience. More additions to the pile of lies and misconceptions.

      If you need to FizzBuzz me, fine. But why am I still making a word-search solver project in my free time as if I'm applying for a college internship?

      • zmgsabst 9 hours ago

        I’ve started using ChatGPT for their take home projects, with only minor edits or refactors myself. If they’re upset I saved a couple hours of tedium, they’re the wrong employer for me.

        And I’m being an accelerationist hoping the whole thing collapses under its own ridiculousness.

        • ponector 4 hours ago

          Also they explicitly say to not use AI assistance for such assignments.

          Recruitment is broken even more than before chatgpt.

locknitpicker 5 hours ago

> Part of the challenge (and I don't have an answer either) is there are some juniors who use AI to assist... and some who use it to delegate all of their work to.

This is not limited to junior devs. I had the displeasure of working with a guy who was hired as a senior dev and heavily delegated any work he did. He failed to do even the faintest review of what the coding agent produced, and of course did zero testing. At one point these stunts resulted in a major incident, where one of these glorious PRs pushed code that completely inverted a key business rule and resulted in paying customers being denied access to a paid product.

Sometimes people are slackers with little to no ownership or pride in their craftsmanship, and just stumbled into a career path they are not very good at. They start as juniors, but they can idle long enough to waddle their way into senior positions. This is not an LLM problem, nor caused by one.

mooreds 16 hours ago

> there are some juniors who use AI to assist... and some who use it to delegate all of their work to.

Hmmm. Is there any way to distinguish between these two categories? Because I agree, if someone is delegating all their work to an LLM or similar tool, cut out the middleman. Same as if someone just copy/pasted from Stackoverflow 5 years ago.

I think it is also important to think about incentives. What incentive does the newer developer have to understand the LLM output? There's the long term incentive, but is there a short term one?

  • supriyo-biswas 16 hours ago

    Dealing with an intern at work who I suspect is doing exactly this, I discussed it with a colleague. One way seems to be to organize a face-to-face meeting where you test their problem-solving skills without AI use; another may be to question them about their thought process as you review a PR.

    Unfortunately, the use of LLMs has brought about a lot of mistrust in the workplace. Earlier you'd simply assume that a junior making mistakes is part of being a junior and can be coached; whereas nowadays said junior may not be willing to take your advice, as they see it as sermonizing when an "easy" process to get "acceptable" results exists.

    • chairmansteve 11 hours ago

      The intern is not producing code that is up to the standard you expect, and will not change it?

      I saw a situation like this many years ago. The newly hired midlevel engineer thought he was smarter than the supervisor. Kept on arguing about code style, system design etc. He was fired after 6 months.

      But I was friendly with him, so we kept in touch. He ended up working at MSFT for 3 times the salary.

    • throwaway2037 12 hours ago

      > Earlier you'd simply assume that a junior making mistakes is part of being a junior and can be coached; whereas nowadays said junior may not be willing to take your advice

      Hot take: This reads like an old person looking down upon young people. Can you explain why it isn't? Else, this reads like: "When I was young, we worked hard and listened to our elders. These days, young people ignore our advice." Every time I see inter-generational commentary like this (which is inevitably from personal experience), I am immediately suspicious. I can assure you that when I was young, I did not listen to older people's advice and I tried to do everything my own way. Why would this be any different in the current generation? In my experience, it isn't.

      On a positive note: I can remember mentoring some young people and watching them comb through blogs to learn about programming. I am so old that my shelf is/was full of O'Reilly books. By the time I was mentoring them, few people under 25 were reading O'Reilly books. It opened my eyes that how people learn changes more than what they learn. Example: Someone is trying to learn about access control modifiers for classes/methods in a programming language. Old days: Get the O'Reilly book for that programming language. Look up access modifiers in the index. 10 years ago: Google for a blog with an intro to the programming language. There will be a tip about what access modifiers can do. Today: Ask ChatGPT. In my (somewhat contrived) example, the how is changing, but not the what.

      • ryandrake 11 hours ago

        > Old days: Get the O'Reilly book for that programming language. Look up access modifiers in the index. 10 years ago: Google for a blog with an intro to the programming language. There will be a tip about what access modifiers can do. Today: Ask ChatGPT.

        The answer to this (throughout the ages) should be the same: read the authoritative source of information. The official API docs, the official language specification, the man page, the textbook, the published paper, and so on.

        Maybe I am showing my age, but one of the more frustrating parts of being a senior mentoring a junior is when they come with a question or problem, and when I ask: “what does the official documentation say?” I get a blank stare. We have moved from consulting the primary source of information to using secondary sources (like O’Reilly, blogs and tutorials), now to tertiary sources like LLMs.

      • transfer92 12 hours ago

        > I can assure you that when I was young, I did not listen to older people's advice and I tried to do everything my own way.

        Hot take: This reads like a person who was difficult to work with.

        Senior people have responsibility, therefore in a business situation they have authority. Junior people who think they know it all don't like this. If there's a disagreement between a senior person and a junior person about something, they should, of course, listen to each other respectfully. If that's not happening, then one of them is not being a good employee. But if they are, then the supervisor makes the final call.

      • shagie 12 hours ago

        > Old days: Get the O'Reilly book for that programming language. Look up access modifiers in the index. 10 years ago: Google for a blog with an intro to the programming language. There will be a tip about what access modifiers can do. Today: Ask ChatGPT. In my (somewhat contrived) example, the how is changing, but not the what.

        The tangent to that is that what also changes is how much one internalizes about the problem domain and is able to apply later. Hard-fought knowledge from the old days is something that shapes how I design systems today.

        However, the tendency of people who reach for ChatGPT today to solve a problem results in them making the same mistakes again the next time, since the information is so easy to access. It also makes larger things more difficult: the "how do you architect this larger system" question is something you learn by building the smaller systems and learning about them, so that their advantages and disadvantages become an inherent part of how you conceive of the system as a whole. Being able to have ChatGPT do it means people often don't think about the larger problem or how it fits together.

        I believe it is harder for a junior who is using ChatGPT to advance to being a mid-level or senior developer than it was for a junior from the old days, because of the lack of retention of the knowledge of the problems and solutions.

        • moosedev 7 hours ago

          They’re going to get promoted anyway. The “senior” title will simply (continue to) lose meaning to inflation.

      • chinaexpert1 11 hours ago

        Yeah, I've got to agree with this hot take. Put yourself in the junior's shoes: if s/he wasn't there, you'd be pulling it out of Claude Code yourself, until you're satisfied enough with what comes out to start adding your "senior" touches. The fact is the way code is written has changed fundamentally, especially for kids straight out of college, and the answer is to embrace that everyone is using it, not all this shaming. If you're so senior, why not show the kid how to use the LLM right, so the work product is right from the start? It seems part of the problem is that dinosaurs are suspicious of the tech, and so don't know how to mentor for it. That being said, I'm a machine learning engineer, not a developer, and these LLMs have been a godsend. Assuming I do it correctly, there's just no way I could write a whole 10,000-line pipeline in under a week without it. While coding from outputs and error-driven development is the wrong way for software juniors, it's fine by me for my AI work. It comes down to knowing when there's a silent error, if you haven't been through everything line by line. I've been caught before, I'm not immune, it's embarrassing, but ever since GPT was in preview I have made it my business to master it.

        I have a friend who is a dev, a very senior one at that, who spins up 4 Claudes at once and does the whole enterprise's work. He's a "Senior AI Director" with nobody beneath him, not a single direct report, and NO knowledge of AI or ML, to my chagrin.

        So now I'm whining too...

        • JSR_FDED 8 hours ago

          This isn’t a question of the senior teaching the junior how to use the LLM correctly.

          Once you’re a senior you can exercise judgement on when/how to use LLMs.

          When you’re a junior you haven’t developed that judgement yet. That judgement comes from consulting documentation, actually writing code by hand, seeing how you can write a small program just fine, but noticing that some things need to change when the code gets a lot bigger.

          A junior without judgement isn’t very valuable unless he/she is working hard to develop that judgement. Passing assignments through to the LLM does not build judgement, so it’s not a winning strategy.

  • icedchai 15 hours ago

    There are some definite signs of over-reliance on AI: emojis in comments, updates completely unrelated to the task at hand. And if you ask "why did you make this change?", you'll typically get no answer.

    I don't mind if AI is used as a tool, but the output needs to be vetted.

    • throwaway2037 12 hours ago

      What is wrong with emojis in comments? I see no issue with it. Do I do it myself? No. Would I push back if a young person added emojis to comments? No. I am looking at "the content, not the colour".

      • chihuahua 12 hours ago

        I think GP may be thinking that emojis in PR comments (plus the other red flags they mentioned) are the result of copy/paste from LLM output, which might imply that the person who does mindless copy/pasting is not adding anything and could be replaced by LLM automation.

      • venturecruelty 12 hours ago

        The point is that heavy emoji use means AI was likely used to produce a changeset, not that emojis are inherently bad.

      • icedchai 11 hours ago

        The emojis are not a problem themselves. They're a warning sign: slop is (probably) present, look deeper.

    • wwweston 13 hours ago

      Exactly. Use LLMs as a tutor, a tool, and make sure you understand the output.

      • agumonkey 12 hours ago

        My favorite prompt is "your goal is to retire yourself"

  • hombre_fatal 16 hours ago

    Just like anything, anyone who did the work themself should be able to speak intelligently about the work and the decisions behind its idiosyncrasies.

    For software, I can imagine a process where junior developers create a PR and then run through it with another engineer side by side. The short-term incentive would be that they can do it, else they'd get exposed.

  • water-data-dude 14 hours ago

    Is/was copy/pasting from Stackoverflow considered harmful? You have a problem, you do a web search and you find someone who asked the same question on SO, and there's often a solution.

    You might be specifically talking about people who copy/paste without understanding, but I think it's still OK-ish to do that, since you can't make an entire [whatever you're coding up] by copy/pasting snippets from SO like you're cutting words out of a magazine for a ransom note. There's still thought involved, so it's more like training wheels that you eventually outgrow as you get more understanding.

    • vkou 14 hours ago

      > Is/was copy/pasting from Stackoverflow considered harmful?

      It at least forces you to tinker with whatever you copied over.

  • gunsch 14 hours ago

    Pair programming! Get hands-on with your junior engineers and their development process. Push them to think through things and not just ask the LLM everything.

    • johnnyanmac 12 hours ago

      I've seen some overly excessive pair programming initiatives out there, but it does baffle me why people who struggle with this don't do it more. Take even just 30 minutes to pair program on a problem and see their process, and it can reveal so much.

      But I suppose my question is rhetorical. We're laying off hundreds of thousands of engineers and making existing ones do the work of 3-4 engineers. Not much time left to help the juniors.

  • bryanrasmussen 14 hours ago

    Having dealt with a few people who just copy/pasted from Stackoverflow, I really feel that using an LLM is an improvement.

    That is, at least for the people who don't understand what they're doing, the LLM tends to come out with something I can at least turn into something useful.

    It might be reversed, though, for people who know what they're doing. If they know what they're doing, they might theoretically be able to put together some Stackoverflow results that make sense and build something up from that better than what gets generated by an LLM (I am not asserting this would happen, just thinking it might be the case).

    However, I don't know, as I've never known anyone who knew what they were doing who also just copy/pasted from Stackoverflow or delegated to an LLM significantly.

  • lll-o-lll 16 hours ago

    > Is there any way to distinguish between these two categories?

    Yes, it should be obvious. At least at the current state of LLMs.

    > There's the long term incentive, but is there a short term one?

    The short term incentive is keeping their job.

sevenseacat 3 hours ago

> This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."

And then in the next PR, you have to request the exact same changes

anal_reactor 12 hours ago

> This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."

I've learnt that saying this exact phrase does wonders when it comes to advancing your career. I used to argue against stupid ideas but not only did I achieve nothing, but I was also labelled uncooperative and technically incompetent. Then I became a "yes-man" and all problems went away.

  • shagie 12 hours ago

    I was attempting to mock Claude's "You are absolutely right" style of response when corrected.

    I have seen responses to PRs that appear to be a copy and paste of my feedback into it and a copy and paste of the response and fixes into the PR.

    It may be that the developer is incorporating the mannerisms of Claude into their own speech... that would be something to delve into (that was intentional). However, more often than not in today's world of software development, such responses are more likely to indicate a copy and paste of LLM-generated content.

    • anal_reactor 10 hours ago

      > However, more often than not in today's world of software development, such responses are more likely to indicate a copy and paste of LLM-generated content.

      This is nothing new. People rarely have independent thoughts; usually they just parrot whatever they've been told to parrot. LLMs created a common worldwide standard for this parroting, which makes the phenomenon more evident, but it doesn't change the fact that it existed before LLMs.

      Have you ever had a conversation with an intelligent person and thought "wow that's refreshing"? Yeah. There's a reason why it feels so good.

  • throwaway2037 12 hours ago

    This. May you have great success! The PR comments I get are so dumb. I can put the most obvious bugs in my code, but people are focused on the colour of the bike shed. I am happy to repaint the bike shed whatever colour they need it to be!