Comment by tguvot 6 months ago
Now, let's do the same exercise but with programming and over a longer period of time.
Would really like to present it to the management that pushes AI assistance for coding.
> ai assistance for coding
I honestly think it's gonna take a decade to define this domain, and it's going to come with significant productivity costs. We need a git, but one that prevents LLMs from stabbing themselves in the face. At that point you can establish an actual workflow for unfucking agents when they inevitably fuck themselves. After some time and some battery of testing you can also automate this process. This will take time, but eventually, one day, you can have a tedious process of describing an application you want to use over and over again until it actually works... on some level, not guaranteed to be anything close to the quality of hand-crafted apps (which is in line with the transition from assembly to high-level languages and now to whatever the fuck you want to call the katamari-damacy zombie that is the browser).
If by "cognitive debt", you mean "you don't really understand the code of the application that we're trying to extend/maintain", then yes, it's almost certainly going to apply to programming.
If I write the application, I have an internal map that corresponds (more or less) to what's going on in the code. I built that map as I was writing it, and I use that map as I debug, maintain, and extend the application.
But if I use AI, my map is much less clear. I become dependent on AI to help me understand the code well enough to debug it. Given AI's current limitations in actual understanding, that should give you pause...
> Would really like to present it to management that pushes ai assistance for coding
Your management presumably cares more about results than about your long-term cognitive decline?
I guess one of the questions is how quickly cognitive decline sets in and how it influences system stability (we have a big system with a very high SLA due to the nature of the system, and it takes some serious cognitive ability to reason about its operation).
If today's productivity comes at the cost of longer-term stability, I am not sure that's a risk they would like to take.
Companies don't own employees: workers can leave at any time.
Thus protecting employees' productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)
To quote myself:
> Companies don't own employees: workers can leave at any time.
> Thus protecting employees' productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)
You are talking about productivity; I'm talking about knowledge. You may come up with a product, then fire all the engineers who built it. Then what? It's not sustainable for a business to start from scratch every other year. Your LLM won't be a substitute for owning your product.
> There’s got to be the world’s largest class action lawsuit
You'd have to articulate harm, so this is basically dead in the water (in the US). Good luck.
I don't think that research will show what you're hoping it would. I'm not a big proponent of AI; you shouldn't bother going through my history, but it's there to back up my statement if you're bored. Anyway, even I find it hard to argue against AI agents for productivity, but I think it depends a lot on how you use them. As an anecdotal example, I mainly work with Python, C and Go, but once in a while I also work with TypeScript and C#. I've got 15 years of experience with js/ts, but when I've been away from it for a month it's not easy for me to remember the syntax, and before AI agents I'd need to go to https://developer.mozilla.org/en-US/docs/Web/JavaScript or similar quite a lot when I jumped back into it. AI agents let me do the same thing so much quicker.
These AI agent tools can turn your intent into code rather quickly, and at least for me, quicker than I often can. They do it rather unintrusively, with little effort on your part, and they present it with nice little pull-request-lite functionality.
The key "issue" here, and probably what this article is more about is that they can't reason as you likely know. The AI needs me to know what "we" are doing, because while they are good programmers they are horrible software engineers. Or in other words, the reason AI agents enhance my performance is because I know exactly what and how I want them to program something and I can quickly assess when they suck.
Python is a good language for examples of how they can quickly fail you if you don't actually know Python. When you want to iterate over something you have to decide whether you want to do it in memory or not; in C#'s LINQ this is presented to you relatively easily with IEnumerable and IQueryable, which work and look the same. In Python, however, you're often going to want to use a generator, which looks nothing like simply looping over a list. It's also something many Python programmers have never even heard of, similar to how many haven't heard of __slots__ or even dataclasses. If you don't know what you're doing, you'll quickly end up with Python that works but doesn't scale, and when I say scale I'm not talking Netflix, I'm talking looping over a couple hundred thousand items without paying a ridiculous amount of money for cloud memory. This is very anecdotal, but I've found that LLMs are actually quite good at recognizing how to iterate in C# and quite terrible at it in both Python and TypeScript, despite LLMs generally (again, in my experience) being much worse at writing C#. If that isn't just anecdotal, then I guess they truly are what they eat.
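To make the generator point concrete, here is a minimal sketch of the list-versus-generator difference; the record count and function names are hypothetical, just for illustration:

    import sys

    def records_as_list(n):
        # Materializes every record up front: memory grows linearly with n.
        return [{"id": i, "value": i * 2} for i in range(n)]

    def records_as_generator(n):
        # Yields one record at a time: peak memory stays roughly constant
        # no matter how many items you iterate over.
        for i in range(n):
            yield {"id": i, "value": i * 2}

    if __name__ == "__main__":
        n = 300_000  # roughly "a couple hundred thousand items"

        as_list = records_as_list(n)
        print("list container alone:", sys.getsizeof(as_list), "bytes (plus every dict it holds)")

        total = 0
        for record in records_as_generator(n):
            total += record["value"]  # each record is processed, then discarded
        print("sum via generator:", total)

The generator version only ever holds one record at a time, so it scales to however many items you stream through it, which is exactly the cloud-memory concern above.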
Anyway, I think similar research would show that AI is great for experienced software engineers and terrible for learning. What is worse is that I think it might show that a domain expert like an accountant might be better at building software for their domain with AI than an inexperienced software engineer.
You're proving the point of the actual research. Programmers who only use AI for learning/coding will lose the knowledge (of Python, for example) that you gained by actually "doing" it.
I'll add this quote from the article:
> Perhaps one of the more concerning findings is that participants in the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis (see topics COURAGE, FORETHOUGHT, and PERFECT in Figures 82, 83, and 85, respectively) and supported by interview responses. This repetition suggests that many participants may not have engaged deeply with the topics or critically examined the material provided by the LLM.
> When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.
> Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.
This opinion is the exact thinking that has led to the massive layoffs in the design industry. Their jobs are being destroyed because they think lawsuits and the current state of the art will show they are right. These models actually can't produce unique output, and if you use them for ideation they only help you get to already-solved problems.
But engineers aren't being fired in droves, because we have adapted. The human can still break down the problem, tell the LLM to come up with multiple different ways of solving it, throw all of them away, and ask for more. My most effective use is usually looking at what I would do normally, breaking it down, asking for it in chunks that make sense and touch multiple places, then coding the details. It's just a shift in thinking, like knowing when to copy and paste versus staying DRY.
Designers are screwing themselves right now by waiting for case law and shaming the tools, instead of using their talents to make the one unique thing not in the training set and boost their productivity.
In the future it will be a competitive disadvantage for the short-sighted companies that took humans out of the loop completely, but any company not using the tech at all will be the horseshoe makers who weren't worried because of all the mechanical issues with horseless carriages.