AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'
(theregister.com)
1653 points by JustExAWS 3 days ago
Time to apply the best analogy I've ever heard.
> "Measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs." -- Bill Gates
Do we reward the employee who has added the most weight? Do we celebrate when the AI has added a lot of weight?
At first it seems like no, we shouldn't, but actually it depends. If a person or AI is adding a lot of weight, but it is really important weight, like the engines or the main structure of the plane, then yeah, even though it adds a lot of weight, it's still doing genuinely impressive work. A heavy airplane is more impressive than a lightweight one (usually).
I just can’t help myself when airplanes come up in discussion.
I completely understand your analogy, and you are right. However, just to nitpick, it is actually super important to have weight on the airplane in the right place. You have to make sure that your aeroplane does not become tail-heavy, or it may not be recoverable from a stall. Also, a heavier aeroplane, within its gross weight, is actually safer, as the safe manoeuvring speed increases with weight.
> a heavier aeroplane … is actually safer
Just to nitpick your nitpick, that’s only true up to a point, and the range of safe weights isn’t all that big really: max payload on most planes is a fraction of the empty weight. And planes can be overweight; reducing weight is a good thing, and perhaps needed far more often than adding weight is. The point of the analogy was that over a certain weight, the plane doesn’t fly at all. If progress on a plane is safety, stability, or speed, we can measure those things directly. If weight distribution is important to those, that’s great: we can measure weight and distribution in service of stability, but weight isn’t the primary thing we measure.
Like with airplane weight, you absolutely need some code to get something done, and sometimes more is better. But is more better as a rule? Absolutely not.
Right, that's why it's a great analogy: you also need at least some code in a successful piece of software. But simply measuring by the amount of code leads to weird and perverse incentives; code added without thought is not good, and too much code can itself be a problem. Of course, the literal balancing aspect isn't as important.
This is a pretty narrow take on aviation safety. A heavier airplane has a higher stall speed, more energy for the brakes to dissipate, longer takeoff/landing distances, a worse climb rate… I’ll happily sacrifice maneuvering speed for better takeoff/landing/climb performance.
> the safe manoeuvring speed increases with weight
The reason this is true is that at a higher weight, you'll stall at max deflection before you can put enough stress on the airframe to be a problem. That is to say, at a given speed a heavier airplane will fall out of the air [hyperbole: it will merely stall, with significantly reduced lift] before it can rip the wings/elevator off [hyperbole: damage the airframe]. That makes it questionable whether heavier is safer; it just changes the failure mode.
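For the curious, a minimal sketch of the relationship using the standard steady-flight formulas (W is weight, rho air density, S wing area, C_{L,max} the maximum lift coefficient, n_{lim} the limit load factor; all textbook definitions, nothing specific to this thread):

    V_s = \sqrt{\frac{2W}{\rho \, S \, C_{L,\max}}}, \qquad V_A = V_{s1}\,\sqrt{n_{\mathrm{lim}}}

Stall speed grows with the square root of weight, and manoeuvring speed is the clean stall speed scaled by the square root of the limit load factor, so V_A grows with weight too: below V_A the wing stalls before the airframe can be loaded past n_lim.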
Progress on airplanes is often tracked by the number of engineering drawings released, which means that thousands of little clips, brackets, fittings, etc. can sometimes misrepresent the amount of engineering work that has taken place compared to preparing a giant monolithic bulkhead or spar for release. I have actually proposed to my PMs measuring progress by part weight instead of part count for this reason.
> the best analogy I've ever heard.
It’s an analogy that gets the job done and is targeted at non-tech managers.
It’s not perfect. Dead code has no “weight” unless you’re in a heavily storage-constrained environment, but 10,000 unnecessary rivets have an effect on the airplane everywhere, all the time.
> Dead code has no “weight”
Assuming it is truly dead and not executable (which someone would have to verify is, and remains, the case), dead code exerts a pressure on every human engineer who has to read (around) it, determine that it is still dead, etc. It also creates the risk that it will be inadvertently activated, creating, e.g., a security exposure.
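To make the "inadvertently activated" risk concrete, a hypothetical sketch (the function names and the EXPORT_MODE variable are invented for illustration): code that looks dead to a grep but is one environment variable away from running.

    # Hypothetical sketch: "dead" code that is one setting away from running.
    import os

    def legacy_export(records):
        # No direct call sites remain in the repo... except the dynamic
        # dispatch below, so grep-based "is it dead?" checks pass.
        return "\n".join(",".join(map(str, r)) for r in records)

    def modern_export(records):
        import json
        return json.dumps(records)

    EXPORTERS = {"legacy": legacy_export, "modern": modern_export}

    def export(records):
        # An old environment variable silently resurrects the legacy path,
        # bypassing any validation the modern path has gained since.
        mode = os.environ.get("EXPORT_MODE", "modern")
        return EXPORTERS[mode](records)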
In this analogy, I'd say dead code corresponds to airplane parts that aren't actually installed on the aircraft. When people talk about the folly of measuring productivity in lines of code, they aren't referring to the uselessness of dead code, they're referring to the harms that come from live code that's way bigger than it needs to be.
When you are thinking of development and refactoring, dead code absolutely has weight.
This reminds me of a piece on folklore.org by Andy Hertzfeld[0] about Bill Atkinson. A "KPI" was introduced at Apple in which engineers were required to report how many lines of code they had written over the week. Bill (allegedly) reported "-2000" (an astonishingly negative number), and supposedly the managers reconsidered the validity of the "KPI" and stopped using it.
I don't know how true this is in fact, but I do know how true this is in my work - you cannot apply some arbitrary "make the number bigger" goal to everything and expect it to improve anything. It feels a bit weird seeing "write more lines of code" becoming a key metric again. It never worked, and is damn-near provably never going to work. The value of source code is not in any way tied to its quantity, but value still proves hard to quantify, 40 years later.
0. https://www.folklore.org/Negative_2000_Lines_Of_Code.html
While most of us know that next-word guessing is how it works in reality…
That sentiment ignores the magic of how well this works. There are mind-blowing moments using AI coding; pretending that it’s “just autocorrect and tab complete” is just as deceiving as “you can vibe code complete programs”.
All that said, I'm very keen on companies telling me how much of their codebase was written by AI.
I just won't use that information in quite the excitable, optimistic way they offer it.
I want to have the model re-write patent applications, and if any portion of your patent filing was replicated by it, your patent is denied as obvious and derivative.
"...just raised a $20M Series B and are looking to expand the team and products offered. We are fully bought-in to generative AI — over 40% of our codebase is built and maintained by AI, and we expect this number to continue to grow as the tech evolves and the space matures."
"What does your availability over the next couple of weeks look like to chat about this opportunity?"
"Yeah, quite busy over the next couple of weeks actually… the next couple of decades, really - awful how quickly time fills by itself these days, right? I'd have contributed towards lowering that 40% number which seems contrary to your goals anyway. But here's my card, should you need help with debugging something tricky some time in the near future and nobody manages to figure it out internally. I may be able to make room for you if you can afford it. I might be VERY busy though."
Something I wonder about the percentage of code: I remember 5-10 years ago there was a series of articles about Google generating a lot of their code programmatically. I wonder if they just adapted their code gen to AI.
I bet Google has a lot of tools to, say, convert a library from one language to another or generate a library based on an API spec. The 30% of code these LLMs are supposedly writing is probably in this camp, not novel, net-new features.
It is a really attractive idea for lazy people who don’t want to learn things
In academia, the research pipeline is this:
Undergraduate -> Graduate Student -> Post-doc -> Tenure/Senior
Some exceptions occur, like people getting tenure without a post-doc or finishing an undergraduate degree in one or two years. But no one expects that we can skip the first two stages entirely and still get senior researchers.
The same idea applies anywhere: the rule is that if you don't have juniors, you don't get seniors, so you'd better prepare your bot to do everything.
As always, the truth is somewhere in the middle. AI is not going to replace everyone tomorrow, but I also don't think we can ignore productivity improvements from AI. It's not going to replace engineers completely now or in the near future, but AI will probably reduce the number of engineers needed to solve a problem.
I'm a technical co-founder rapidly building a software product. I've been coding since 2006. We have every incentive to have AI just build our product. But it can't. I keep trying to get it to... but it can't. Oh, it tries, but the code it writes is often overly complex and overly verbose.

I started out being amazed at the way it could solve problems, but that's because I gave it small, bounded, well-defined problems. As my expectations of agentic coding rose, I gave it more abstract problems, and it quickly hit the ceiling. As was said, the engineering task is identifying the problem and decomposing it.

I'd love to hear from someone who's used agentic coding with more success. So far I've tried Copilot, Windsurf, and Alex Sidebar for Xcode projects. The most success I've had is via a direct question with details to Gemini in the browser, usually a variant of "write a function to do X".
> As was said, the engineering task is identifying the problem and decomposing it.
In my experience if you do this and break the problem down into small pieces, the AI can implement the pieces for you.
It can save a lot of time typing and googling for docs.
That said, once the result exceeds a certain level of complexity, you can't really ask it to implement changes to existing code anymore, since it stops understanding it.
At which point you now have to do it yourself, but you know the codebase less well than if you'd hand written it.
So my takeaway so far is that it works great for small projects and for prototyping, but the gain past a certain level of complexity is probably quite small.
But then, I've also found quite some value in using it as a code search engine and to answer questions about the code, so maybe if nothing else that's where the benefit comes from.
> At which point you now have to do it yourself, but you know the codebase less well than if you'd hand written it.
Appreciate you saying this because it is my biggest gripe in these conversations. Even if it makes me faster I now have to put time into reading the code multiple times because I have to internalize it.
Since the code I merge into production "is still my responsibility", as the HN comments go, I need to read and think more deeply about what the AI wrote than I would about a teammate's PR code. In my case that is slower than the 20% speedup I get by applying AI to problems.
I'm sure I could get even more speed if I improved my prompts, when I use the AI, agentic vs. non-agentic, etc., but I just don't think the ceiling is high enough yet. Plus, I seem more prone than others to AI making me lazy, so I need to schedule when I use it and keep that time to a minimum.
Are we trying to guilt-trip corporations into doing the socially responsible thing regarding young workers' skill acquisition?
Haven't we learned that it almost always ends up as hollow PR and marketing theater?
Basically, the solution to this is extending education so that people entering the workforce are already at senior level. Of course, this can't be financed by the students, because their careers get shortened by the longer education. So we need higher taxes on the entities that reap the new spoils, namely those corporations that can now pass on hiring junior employees.
I do not agree. It was not even worth it without LLMs. Juniors will always take a LOT of time from seniors, and when a junior becomes good enough, they will find another job, and the senior will be stuck in this loop.
A junior plus an LLM is even worse: they become prompt engineers.
There is a dedicated website on this topic: https://learnhowtolearn.org
I was going to say something, then I realized my cynicism is already at maximum.
Junior staff will be necessary but you'll have to defend them from the bean-counters.
You need people who can validate LLM-generated code, and it takes people with testing and architecture expertise to do so. You only get that expertise by having humans gain it through experience.
>teach “how do you think and how do you decompose problems”
That's rich coming from AWS!
I think he meant "how do you think about adding unnecessary complexity to problems such that it can enable the maximum amount of meetings, design docs and promo packages for years to come"!
Two things will hurt us in the long run: working from home and AI. I'm generally in favour of both, but they hurt newbies, who are not spending enough face-to-face time with seniors to learn on the job.
And AI will hurt their development by taking over the tasks they would normally cut their teeth on.
We'll have to find newer ways of helping the younger generation get in the door.
A weekly one-hour call of pair programming or exploration of an ongoing issue or technical idea would be enough to replace face-to-face time with seniors. This has been working great for us at a multi-billion-dollar, profitable, fully remote public company.
I would argue that just being in the office, or not using AI, doesn't guarantee any better learning for younger generations. Without proper guidance, a junior would still struggle regardless of their location or AI copilot.
The challenge now is for companies, managers and mentors to adapt to more remote and AI assisted learning. If a junior can be taught that it's okay to reach out (and be given ample opportunities to do so), as well as how to productively use AI to explain concepts that they may feel too scared to ask because they're "basics", then I don't see why this would hurt in the long run.
I can't wait for that damn bubble to burst, really...
The atmosphere is becoming unbreathable for hackers.
It's already exploding.
The hype train is going to keep on moving for a while yet though.
A lot of companies that have stopped hiring junior employees are going to be really hurting in a couple of years, once all of their seniors have left and they have no replacements trained and ready to go.
If AI is so great and has PhD-level skills (Musk), then logic says you should be replacing all of your _senior_ developers. That is not the conclusion they reached, which implies that the coding ability is not that hot. Q.E.D.
Finally, someone in a top position said this. After all the trash CEOs have been spewing, sensationalizing every AI improvement, for a change a person in a non-engineering role speaks the truth.
Rather than AI that functions as many junior coders to make one senior programmer more efficient, having AI function as a senior programmer for lots of junior programmers, helping them learn and limiting the interruptions for human senior coders, makes so much more sense.
Yesterday, I was asked to scrape data from a website. My friend used ChatGPT to scrape the data but didn't succeed, even after spending 3+ hours. I looked at the website's code, understood it with my web knowledge, and did some research with an LLM. Then I described to the LLM how to scrape the data; it took 30 minutes overall. The LLM can't come up with the best approach on its own, but you can come up with it using the LLM. It's always the same: at the end of the day you need someone who can really think.
LLMs can do small things well, but you must compose the small parts into the big picture.
It is not that easy: the page has lazy loading that is triggered by scrolling specific sections. You need to find a clever way; there's no way to scrape it with bs4, and it's tough even with Selenium.
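For what it's worth, the usual trick for scroll-triggered lazy loading is to drive a real browser and scroll the lazy container itself rather than the window. A minimal Selenium sketch; the URL and the .results/.item selectors are placeholders, not from the actual site:

    # Minimal sketch: scroll a lazy-loading section until no new items appear.
    # The URL and CSS selectors are hypothetical placeholders.
    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/listings")  # placeholder URL

    section = driver.find_element(By.CSS_SELECTOR, ".results")  # lazy container
    prev_count = -1
    while True:
        items = section.find_elements(By.CSS_SELECTOR, ".item")
        if len(items) == prev_count:
            break  # nothing new loaded; we've reached the end
        prev_count = len(items)
        # Scroll the container (not the window) to trigger its lazy loader.
        driver.execute_script(
            "arguments[0].scrollTop = arguments[0].scrollHeight;", section
        )
        time.sleep(1.0)  # crude wait; an explicit wait on item count is more robust

    data = [item.text for item in items]
    driver.quit()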
Bravo... Finally, a voice of reason.
As someone who works in AI, any CEO who says that AI is going to replace junior workers has no f*cking clue what they are talking about.
The point is, nobody has figured out how much AI can replace people. There is so much hype out there, with every tech celebrity sharing opinions without the responsibility of owning them. We have to wait and see, and we can change course when we know the reality. Until then, do what we know well.
My respect for people who take this approach is very high. This is the right way to approach the integration of technology.
Can SOME people's jobs be replaced by AI? Maybe on paper. But there are tons of tradeoffs if you START with that approach and assume fidelity of outcome.
Agree. AI is, so far, almost as good as StackOverflow, except it lies confidently and generates questionable code.
That's right, it should be used to replace senior staff right away.
Remark is at 12:02 in the video.
CEOs that get paid the most don't care about problems like that.
Why do so many people act like they’re mutually exclusive? Junior staff can use AI too.
No one's getting replaced, but you may not hire that new person that otherwise would have been needed. Five years ago, you would have hired a junior to crank out UI components, or well specc'd CRUD endpoints for some big new feature initiative. Now you probably won't.
> well specc'd CRUD endpoints
I’m really tired of this trope. I’ve spent my whole career on “boring CRUD” and the number of relational db backed apps I’ve seen written by devs who’ve never heard of isolation levels is concerning (including myself for a time).
Coincidentally, as soon as these apps see any scale, issues pop up.
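For anyone who hasn't hit this: the canonical example is a read-modify-write lost update under the default READ COMMITTED level. A sketch with psycopg2 against a hypothetical accounts(id, balance) table; the DSN and schema are invented:

    # Sketch: the classic read-modify-write lost update, and a row-lock fix.
    # Assumes a hypothetical PostgreSQL "accounts" table with (id, balance).
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # placeholder DSN

    def add_bonus_racy(account_id, bonus):
        # Under the default READ COMMITTED, two concurrent calls can both
        # read the same balance, and one update silently clobbers the other.
        with conn, conn.cursor() as cur:
            cur.execute("SELECT balance FROM accounts WHERE id = %s",
                        (account_id,))
            (balance,) = cur.fetchone()
            cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                        (balance + bonus, account_id))

    def add_bonus_safe(account_id, bonus):
        # SELECT ... FOR UPDATE locks the row, so the second transaction
        # waits and then sees the first one's result instead of losing it.
        with conn, conn.cursor() as cur:
            cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                        (account_id,))
            (balance,) = cur.fetchone()
            cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                        (balance + bonus, account_id))

Running SERIALIZABLE isolation would also catch the race, at the cost of handling serialization-failure retries.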
On the other hand, that extra money can be used to expand the business in other ways. Plus, most kids coming out of college these days are going to be experts at getting jobs done with AI (although they will need a lot of training in writing actually secure and maintainable code).
Even the highest ranking engineers should be experts. I don’t understand why there’s this focus on juniors as the people who know AI best.
Using AI isn’t rocket science. You’re talking about using AI as if typing a prompt in English were some kind of hard-to-learn skill. Do you know English? Check. Can you give instructions? Check. Can you clarify instructions? Check.
> I don’t understand why there’s this focus on juniors as the people who know AI best.
Because junior engineers have no problem with wholeheartedly embracing AI - they don't have enough experience to know what doesn't work yet.
In my personal experience, engineers who have experience are much more hesitant to embrace AI and learn everything about it, because they've seen that there are no magic bullets out there. Or they're just set in their ways.
To management that's AI-obsessed, those juniors are preferable to anyone who would say, "Maybe AI isn't everything it's cracked up to be." And it really, really helps that junior engineers are the cheapest to hire.
> plus most kids coming out of college these days are going to be experts in getting jobs done with AI
“You won’t lose your job to AI, you’ll lose it to someone who uses AI better than you do”
Sure. First line tech support as well. In many situations customers will get vastly superior service if AI agent answers the call.
At least in my personal case, struggling with a renewal at Virgin Broadband, multiple humans wasted probably an hour of everyone's time overall on the phone: bouncing me around departments, unable to comprehend my request, trying to upsell and pitch irrelevant services, and applying contextually inappropriate scripts while never addressing what I was asking in the first place. Giving up on those brainless meat bags and engaging with their chatbot, I was able to resolve what I needed in 10 minutes.
It's strange that you have to write this.
In India, most banks now have apps that can do nearly all the banking you could do by visiting a branch in person. To that extent, this future is already here.
When I had to close my loan and visit a branch a few times, the manager told me that a significant portion of his people's time now goes into actual banking, which according to him means selling products (fixed deposits, insurance, credit cards), not customer support (which the bank thinks is not its job and does only because there is currently no alternative).
> Sure. First line tech support as well. In many situations customers will get vastly superior service if AI agent answers the call.
In IT, if AI would, at a minimum, triage the problem intelligently (and not sound like a bot while doing it), that would save my more expensive engineers a lot more time.
This is mostly because CS folks are given such sales and retention targets; and while I’ve never encountered a helpful support bot even in the age of LLMs, I presume in your case the company management was just happy to have a support bot talking to people without said metrics.
No it isn't, he's lying. Sorry guys.
Claude Code is better than a junior programmer by a lot, and these guys think it only gets better from there, and they have people with decades in the industry to burn through before they have to worry about retraining a new crop.
LLM defenders with the "yOu cAn't cRiTiCiZe iT WiThOuT MeNtIoNiNg tHe mOdEl aNd vErSiOn, It mUsT Be a lAnGuAgE LiMiTaTiOn" crack me up. I used code generation out of curiosity once, for a very simple script, and it fucked it up so badly I was laughing.
Please tell me which software you are building with AI so I can avoid it.
They're deliberately popping the bubble. If they'd actually thought this and cared, they'd have said it 2 years ago, before the layoffs.
Stop getting played.
I mean, I used Copilot, JetBrains tools, etc. on my code base, but for large-scale changes they did so much damage that it took me days to fix and actually slowed me down. These systems are just like juniors in their capabilities; actually worse, because junior developers are still people, able to think and interact with you coherently over days or weeks or months, and these models aren't even at that level, I think.
> “Often times fewer lines of code is way better than more lines of code,” he observed. “So I'm never really sure why that's the exciting metric that people like to brag about.”
I remember someone who had a .sig that I loved (can't remember where; if he's here, kudos!):
> I hate code, and want as little of it in my programs as possible.
[UPDATE] Is this a source?: https://softwarequotes.com/quote/i-hate-code--and-i-want-as-...
It's refreshing to finally see CEOs and other business leaders coming around to what experienced, skeptical engineers have been saying for this entire hype cycle.
I assumed it would happen at some point, but I am relieved that the change in sentiment has started before the bubble pops; maybe this will lessen the economic impact.
Yeah, the whole AI thing has very unpleasant similarities to the dot com bubble that burst to the massive detriment of the careers of the people that were working back then.
The way industry members talk about it is parallel as well. No one denies that the internet boom was important and impactful, but it's also undeniable that companies wasted unfathomable amounts of cash for no return, at the cost of worker well-being.
Finally some fucking sanity from a megacorp CEO. Been a long time.
Sounds like someone got the memo that the bubble's about to burst.
Time will show you are right and almost all the other dumbasses here on HN are wrong. Which is hardly surprising, since they are incapable of applying themselves to the problem of their coming replacement.
They are 100% engineers, and 100% of engineers have no ability to adapt to other professions. Coding is dead; if you think otherwise, then I hope you are right! But I doubt it, because so far I have only heard arguments to the contrary that are obviously wrong.
> Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.
You really want to believe, maybe even need to believe, that anyone who comes up with this idea in their head has never written a single line of code in their life.
It is on its face absurd. And yet I don't doubt for a second that Garman et al. have to fend off legions of hacks who froth at the mouth over this kind of thing.