Comment by devjab
I don't think that research will show what you're hoping it would. I'm not a big proponent of AI; you shouldn't bother going through my history, but it's there to back up my statement if you're bored. Anyway, even I find it hard to argue against AI agents for productivity, though I think it depends a lot on how you use them. As an anecdotal example, I mainly work with Python, C and Go, but once in a while I also work with TypeScript and C#. I've got 15 years of experience with js/ts, but when I've been away from it for a month it's not easy for me to remember the syntax, and before AI agents I'd need to go to https://developer.mozilla.org/en-US/docs/Web/JavaScript or similar quite a lot when I jumped back into it. AI agents let me do the same thing much quicker.
These AI agent tools can turn your intent into code rather quickly, and at least for me, often quicker than I can myself. They do it rather unobtrusively, with little effort on your part, and they present it with nice little pull-request-lite functionality.
The key "issue" here, and probably what this article is more about is that they can't reason as you likely know. The AI needs me to know what "we" are doing, because while they are good programmers they are horrible software engineers. Or in other words, the reason AI agents enhance my performance is because I know exactly what and how I want them to program something and I can quickly assess when they suck.
Python is a good language for examples of how they can quickly fail you if you don't actually know Python. When you want to iterate over something, you have to decide whether you want to do it in memory or not. In C#'s LINQ this choice is presented to you relatively easily with IEnumerable and IQueryable, which work and look the same. In Python, however, you'll often want to use a generator, which looks nothing like simply looping over a list. It's also something many Python programmers have never even heard of, similar to how many haven't heard of __slots__ or even dataclasses.

If you don't know what you're doing, you'll quickly end up with Python that works but doesn't scale, and when I say scale I'm not talking Netflix; I'm talking looping over a couple hundred thousand items without paying a ridiculous amount of money for cloud memory. This is very anecdotal, but I've found that LLMs are actually quite good at recognizing how to iterate in C# and quite terrible at it in both Python and TypeScript, despite LLMs generally (again, in my experience) being much worse at writing C#. If that isn't just anecdotal, then I guess they truly are what they eat.
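To make the generator point concrete, here's a minimal sketch (the function names are mine, invented for illustration): both versions produce the same values and are consumed the same way at the call site, but the list version holds every result in memory at once while the generator holds one at a time.

```python
import sys

def squares_list(n):
    # Materializes every result up front: memory grows linearly with n.
    return [i * i for i in range(n)]

def squares_gen(n):
    # Yields one result at a time: memory stays constant no matter how big n is.
    for i in range(n):
        yield i * i

n = 200_000
print(sys.getsizeof(squares_list(n)))  # roughly n * 8 bytes on CPython
print(sys.getsizeof(squares_gen(n)))   # a couple hundred bytes, regardless of n

# Call sites look identical, which is exactly why the wrong choice is easy to miss:
assert sum(squares_list(n)) == sum(squares_gen(n))
```

That last line is the trap: code that loops over either version looks the same and works the same in a test with a hundred items, so nothing flags the memory difference until n gets large.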
Anyway, I think similar research would show that AI is great for experienced software engineers and terrible for learning. What is worse is that I think it might show that a domain expert like an accountant might be better at building software for their domain with AI than an inexperienced software engineer.
You're proving the point of the actual research. Programmers who only use AI for learning/coding will never build this knowledge (of Python, for example) that you gained by actually "doing" it.