Comment by waprin a day ago

75 replies

To some degree, traditional coding and AI coding are not the same thing, so it's not surprising that some people are better at one than the other. The author is basically saying that he's much better at traditional coding than at AI coding.

But it's important to realize that AI coding is itself a skill that you can develop. It's not just a matter of picking the best tool and letting it go. Managing prompts and managing context has a much higher skill ceiling than many people realize. You might prefer manual coding, but you might just be bad at AI coding, and you might come to prefer it if you improved at it.

With that said, I'm still very skeptical of letting the AI drive the majority of the software work, despite meeting people who swear it works. Personally, my current preference is to let the AI do most of the grunt work while I get good at managing it and shepherding the high-level software design.

It's a tiny bit like drawing vs. photography, and if you look through that lens it's obvious that many people who draw might not like photography.

dspillett 18 hours ago

> To some degree, traditional coding and AI coding are not the same thing

LLM-based¹ coding, at least beyond simple auto-complete enhancements (using it directly & interactively as what it is: Glorified Predictive Text) is more akin to managing a junior or outsourcing your work. You give a definition/prompt, some work is done, you refine the prompt and repeat (or fix any issues yourself), much like you would with an external human. The key differences are turnaround time (in favour of LLMs), reliability (in favour of humans, though that is mitigated largely by the quick turnaround), and (though I suspect this is a limit that will go away with time, possibly not much time) lack of usefulness for "bigger picture" work.

This is one of my (several) objections to using it: I want to deal with and understand the minutia of what I am doing, I got into programming, database bothering, and infrastructure kicking, because I enjoyed it, enjoyed learning it, and wanted to do it. For years I've avoided managing people at all, at the known expense of reduced salary potential, for similar reasons: I want to be a tinkerer, not a manager of tinkerers. Perhaps call me back when you have an AGI that I can work alongside.

--------

[1] Yes, I'm a bit of a stick-in-the-mud about calling these things AI. Next decade they won't generally be considered AI like many things previously called AI are not now. I'll call something AI when it is, or very closely approaches, AGI.

  • rwmj 16 hours ago

    Another difference is that your junior will, over time, learn, and you'll also get a sense of whether you can trust them. If after a while they aren't learning and you can't trust them, you get rid of them. GenAI doesn't gain knowledge in the same way, and you're always going to have the same level of trust in it (which in my experience is limited).

    Also, if my junior argued back and was wrong repeatedly, that'd be bad. Luckily that has never happened with AIs ...

    • averageRoyalty 16 hours ago

      Cline, Roocode etc have the concept of rules that can be added to over time. There are heaps of memory bank and orchestration methods for AI.

      LLMs absolutely can improve over time.
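
      For illustration, a rules file in these tools is just plain-language instructions checked into the repo and injected into every session. A hypothetical sketch (the exact filename and conventions vary by tool; Cline, for instance, reads a .clinerules file):

      ```
      # .clinerules (invented example, not from any real project)
      - Run the full test suite before declaring any task complete.
      - Never edit files under vendor/; they are generated.
      - Database schema changes go through migrations in db/migrations/.
      ```

      Each time the model repeats a mistake, you append a rule - which is the closest thing it has to "learning" between sessions.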

  • danielbln 13 hours ago

    > I want to be a tinkerer, not a manager of tinkerers.

    We all want many things; that doesn't mean someone will pay you for them. You want to tinker? Great, awesome, more power to you, tinker on personal projects to your heart's content. However, if someone pays you to solve a problem, then it is your job to find the best, most efficient way to solve it cleanly. Can LLMs do this on their own most of the time? I think not, not right now at least. The combination of a skilled human and an LLM? Most likely, yes.

    • dspillett 9 hours ago

      If it gets to the point where I can't compete in the role with those using LLMs, I'll move on. I'm not happy with remote teams essentially being the only way of working these days (if you aren't working alone) anyway, nor with various other directions the industry has moved in (the shit-show that is the client-side stack, for instance!).

      Maybe I'll retrain for lab work, I know a few people in the area, yeah I'd need a pay cut, but… Heck, I've got the mortgage paid, so I could take quite a cut and not be destitute, especially if I get sensible and keep my savings where they are and building instead of getting tempted to spend them! I don't think it'll get to that point for quite a few years though, and I might have been due to throw the towel in by that point anyway. It might be nice to reclaim tinkering as a hobby rather than a chore!

  • thefz 12 hours ago

    > I want to deal with and understand the minutia of what I am doing, I got into programming, database bothering, and infrastructure kicking, because I enjoyed it, enjoyed learning it, and wanted to do it

    A million times yes.

    And we live in a time in which people want to be called "programmers" because it's oh-so-cool, but won't do the work necessary to earn the title.

mitthrowaway2 21 hours ago

The skill ceiling might be "high", but it's not like investing years of practice to become a great pianist. The most experienced AI coder in the world has about three years of practice working this way, much of which has been obsoleted because the models have changed to the point where some lessons learned on GPT-3.5 don't transfer. There aren't teachers with decades of experience to learn from, either.

  • freehorse 17 hours ago

    Moreover, the "ceiling" may still be below the "code works" level, and you have no idea when you start if it is or not.

  • dr_dshiv 19 hours ago

    It’s mostly attitude that you are learning. Playfulness, persistence and a willingness to start from scratch again and again.

    • suddenlybananas 19 hours ago

      > persistence and a willingness to start from scratch again and again.

      i.e. continually gambling and praying the model spits something out that works instead of thinking.

      • tsurba 18 hours ago

        Gambling is where I end up if I’m tired and try to get an LLM to build my hobby project for me from scratch in one go, not really bothering to read the code properly. It’s stupid and a waste of time. Sometimes it’s easier to get started this way though.

        But more seriously: in the ideal case, refining a prompt after the LLM misunderstands it because of ambiguity in your task description is actually doing the meaningful part of the work in software development. It is exactly about defining the edge cases and converting into language what it is that you need for a task. Iterating on that is not gambling.

        But of course, if you are not doing that, and are instead just trying to coax a "smarter" LLM with (hopefully now-deprecated) "prompt engineering" tricks, then you are building yourself a skill that can become useless tomorrow.

      • chii 16 hours ago

        Why is the process important? If they can continuously trial-and-error their way into a good output/result, then it's a fine outcome.

notnullorvoid 21 hours ago

Is it a skill worth learning though? How much does the output quality improve? How transferable is it across models and tools of today, and of the future?

From what I see of AI programming tools today, I highly doubt the skills developed are going to transfer to tools we'll see even a year from now.

  • vidarh 18 hours ago

    Given I see people insisting these tools don't work for them at all, and some of my results recently include spitting out a 1k line API client with about 5 brief paragraphs of prompts, and designing a website (the lot, including CSS, HTML, copy, database access) and populating the directory on it with entries, I'd think the output quality improves a very great deal.

    From what I see of the tools, I think the skills developed largely consists of skills you need to develop as you get more senior anyway, namely writing detail-oriented specs and understanding how to chunk tasks. Those skills aren't going to stop having value.

    • notnullorvoid 11 hours ago

      If I had a greenfield project that was low-novelty, I would happily use AI to get a prototype out the door quickly. I basically never work on those kinds of projects, though, and I've seen AI tools royally screw up enough times, given clear direction, on both novel and trivial tasks in existing code bases.

      Detailed specs are certainly a transferable skill, what isn't is the tedious hand holding and defensive prompting. In my entire career I've worked with a lot of people, only one required as much hand holding as AI. That person was using AI to do all their work.

  • npilk 13 hours ago

    Maybe this is yet another application of the bitter lesson. It's not worth learning complex processes for partnering with AI models, because any productivity gains will pale in comparison to the performance improvement from future generations.

    • notnullorvoid 11 hours ago

      Perhaps... Even if I'm being optimistic though there is a ceiling for just how much productivity can be gained. Natural language is much more lossy compared to programming languages, so you'll still need a lot of natural language input to get the desired output.

  • serpix 21 hours ago

    Regarding using AI tools for programming: it is not an all-or-nothing choice. You can pick a grunt-work task such as "Tag every such-and-such Terraform resource with a UUID" and let it do just that. It's nothing to do with quality, but everything to do with handing off a simple task and not having to bother with the tedium.
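
    To make the grunt work concrete, the edit being delegated is the sort of mechanical change repeated across every resource in a module. A hypothetical example (resource and UUID invented):

    ```hcl
    # Before: the resource carries no tracking tag.
    # After: the LLM adds the same tag block to every resource in the module.
    resource "aws_s3_bucket" "logs" {
      bucket = "example-logs"

      tags = {
        deployment_id = "3f2c9a1e-8b4d-4f6a-9c0e-1d2e3f4a5b6c"
      }
    }
    ```

    Trivial in isolation, tedious across dozens of files.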

    • autobodie 20 hours ago

      Why use AI to do something so simple? You're only increasing the possibility that it gets done wrong. Multi-cursor editing will be faster anyway.

      • barsonme 20 hours ago

        Why not? I regularly have a couple Claude instances running in the background chewing through simple yet time consuming tasks. It’s saved me many hours of work and given me more time to focus on the important parts.

    • notnullorvoid 12 hours ago

      With such tedious tasks, does it not take you just as long to verify it didn't screw up as it would have taken to do it yourself?

  • jyounker 9 hours ago

    Describing things in enough detail that someone else can implement them is a pretty important skill. Learning how to break up a large project into smaller tasks that you can then delegate to others is also a pretty important skill.

skydhash a day ago

> But it's important to realize that AI coding is itself a skill that you can develop. It's not just a matter of picking the best tool and letting it go. Managing prompts and managing context has a much higher skill ceiling than many people realize

No, it's not. It's something you can pick up in a few minutes (or an hour if you're using more advanced tooling, mostly spent setting things up). It's not like GDB or using UNIX as an IDE, where you need a whole book just to get started.

> It's a tiny bit like drawing vs photography and if you look through that lens it's obvious that many drawers might not like photography.

While they share a lot of principles (around composition, poses, ...), they are different activities with different outputs. No one conflates the two. You don't draw and think you're going to capture a moment in time; the intent is to share an observation with the world.

  • furyofantares 21 hours ago

    > No, it's not. It's something you can pick up in a few minutes (or an hour if you're using more advanced tooling, mostly spent setting things up). It's not like GDB or using UNIX as an IDE, where you need a whole book just to get started.

    The skill floor is something you can pick up in a few minutes and find useful, yes. I have been spending dedicated effort toward finding the skill ceiling and haven't found it.

    I've picked up lots of skills in my career, some of which were easy, but some of which required dedicated learning, or practice, or experimentation. LLM-assisted coding is probably in the top 3 in terms of effort I've put into learning it.

    I'm trying to learn the right patterns to use to keep the LLM on track and keeping the codebase in check. Most importantly, and quite relevant to OP, I'd like to use LLMs to get work done much faster while still becoming an expert in the system that is produced.

    Finding the line has been really tough. You can get a LOT done fast without this requirement, but personally I don't want to work anywhere that has a bunch of systems that nobody's an expert in. On the flip side, as in the OP, you can have this requirement and end up slower by using an LLM than by writing the code yourself.

  • oxidant a day ago

    I do not agree it is something you can pick up in an hour. You have to learn what AI is good at, how different models code, how to prompt to get the results you want.

    If anything, prompting well is akin to learning a new programming language. What words do you use to explain what you want to achieve? How do you reference files/sections so you don't waste context on meaningless things?

    I've been using AI tools to code for the past year and a half (Github Copilot, Cursor, Claude Code, OpenAI APIs) and they all need slightly different things to be successful and they're all better at different things.

    AI isn't a panacea, but it can be the right tool for the job.

    • 15123123 a day ago

      I am also interested in how much of these skills are at the mercy of OpenAI? Like, IIRC 1 or 2 years ago there was an uproar from AI "artists" saying that their art was ruined because of model changes (or maybe the system prompt changed).

      > I do not agree it is something you can pick up in an hour.

      But it's also interesting that the industry is selling the opposite (with AI, anyone can code / write / draw / make music).

      > You have to learn what AI is good at.

      More often than not, I find you need to learn what the AI is bad at, and this is not a fun experience.

      • oxidant 20 hours ago

        Of course that's what the industry is selling, because they want to make money. Yes, it's easy to create a proof of concept, but once you get out of greenfield territory and into needing 50-100k tokens of context (reading multiple 500-line files, thinking, etc.), the quality drops, and you need to know how to focus the models to maintain it.

        "Write me a server in Go" only gets you so far. What is the auth strategy, what endpoints do you need, do you need to integrate with a library or API, are there any security issues, how easy is the code to extend, how do you get it to follow existing patterns?

        I find I need to think AND write more than I would if I was doing it myself because the feedback loop is longer. Like the article says, you have to review the code instead of having implicit knowledge of what was written.

        That being said, it is faster for some tasks, like writing tests (if you have good examples) and doing basic scaffolding. It needs quite a bit of hand holding which is why I believe those with more experience get more value from AI code because they have a better bullshit meter.

      • solumunus 21 hours ago

        OpenAI? They are far from the forefront here. No one is using their models for this.

        • 15123123 20 hours ago

          You can substitute whatever SaaS company you like.

  • viraptor a day ago

    > It's something you can pick up in a few minutes

    You can start in a few minutes, sure. (Also you can start using gdb in minutes) But GP is talking about the ceiling. Do you know which models work better for what kind of task? Do you know what format is better for extra files? Do you know when it's beneficial to restart / compress context? Are you using single prompts or multi stage planning trees? How are you managing project-specific expectations? What type of testing gives better results in guiding the model? What kind of issues are more common for which languages?

    Correct prompting is what makes the difference these days in benchmarks like SWE-bench Verified.

    • sothatsit a day ago

      I feel like there is also a very high ceiling to how much scaffolding you can produce for the agents to get them to work better. This includes custom prompts, custom CLAUDE.md files, other documentation files for Claude to read, and especially how well and quickly your linting and tests can run, and how much functionality they cover. That's not to mention MCP and getting Claude to talk to your database or open your website using Playwright, which I have not even tried yet.

      For example, I have a custom planning prompt that I will give a paragraph or two of information to, and then it will produce a specification document from that by searching the web and reading the code and documentation. And then I will review that specification document before passing it back to Claude Code to implement the change.

      This works because it is a lot easier to review a specification document than it is to review the final code changes. So, if I understand it and guide it towards how I would want the feature to be implemented at the specification stage, that sets me up to have a much easier time reviewing the final result as well. Because it will more closely match my own mental model of the codebase and how things should be implemented.

      And it feels like that is barely scratching the surface of setting up the coding environment for Claude Code to work in.
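
      As a sketch of what such a planning prompt can look like (an invented example, not the commenter's actual prompt; Claude Code picks up reusable slash commands from markdown files under .claude/commands/, with $ARGUMENTS substituted at invocation):

      ```markdown
      <!-- .claude/commands/plan.md - hypothetical planning command -->
      Produce a specification document for the following change: $ARGUMENTS

      1. Read the relevant source files and any linked documentation; search
         the web if the change touches an external API.
      2. Write SPEC.md covering: goals, non-goals, affected files and
         functions, proposed data-model changes, edge cases, and a
         step-by-step implementation plan.
      3. Do not write any implementation code yet.
      ```

      The spec becomes the review artifact, and implementation starts only once it matches your mental model of the codebase.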

      • freehorse 17 hours ago

        And where will all this skill go when, a year from now, newer models use different tools and require different scaffolding?

        The problem with overinvesting in a brand new, developing field is that you acquire skills that are soon to be redundant. You can hope that the skills will transfer to what is needed after, but I am not sure that will be the case here. There was a lot of talk about prompting techniques ("prompt engineering") last year, and now most of these are redundant. I really don't think I have learnt something that is useful enough for the new models, nor have I actually understood anything; it is all tips-and-tricks-level, shallow stuff.

        I think these skills are just like learning how to use some tools in an IDE. They increase productivity, which is great, but if you have to switch IDEs they may not actually help you with the new things you have to learn in the new environment. Moreover, these are just skills in how to use some tools; they let you do things, but learning how to use tools cannot be compared with actually learning and understanding the structure of a program. The former is obviously a shallow form of knowledge/skill, easily replaced, easily made redundant, and probably not transferable (in the current context). I would rather invest more time in the latter and actually get somewhere.

        • sothatsit 15 hours ago

          A lot of the changes needed to get agents to work well are just good practice anyway. That's what is nice about getting these agents to work well - often it just involves improving your dev tooling and documentation, which helps real human developers as well. I don't think this is going to become irrelevant any time soon.

          The things that will change may be prompts or MCP setups or more specific optimisations like subagents. Those may require more consideration of how much you want to invest in setting them up. But the majority of setup you do for Claude Code is not only useful to Claude Code. It is useful to human developers and other agent systems as well.

          > There was a lot of talk about prompting techniques ("prompt engineering") last year and now most of these are redundant.

          Not true, prompting techniques still matter a lot to a lot of applications. It's just less flashy now. In fact, prompting techniques matter a ton for optimising Claude Code and creating commands like the planning prompt I created. It matters a lot when you are trying to optimise for costs and use cheaper models.

          > I think these skills are just like learning how to use some tools in an IDE.
          > If you have to switch IDEs they may not actually help you.

          A lot of the skills you learn in one IDE do transfer to new IDEs. I started using Eclipse and that was a steep learning curve. But later I switched to IntelliJ IDEA and all I had to re-learn were key-bindings and some other minor differences. The core functionality is the same.

          Similarly, a lot of these "agent frameworks" like Claude Code are very similar in functionality, and switching between them as the landscape shifts is probably not as large of a cost as you think it is. Often it is just a matter of changing a model parameter or changing the command that you pass your prompt to.

          Of course it is a tradeoff, and that tradeoff probably changes a lot depending upon what type of work you do, your level of experience, how old your codebases are, how big your codebases are, the size of your team, etc... it's not a slam dunk that it is definitely worthwhile, but it is at least interesting.

      • viraptor a day ago

        > then it will produce a specification document from that

        I like a similar workflow where I iterate on the spec, then convert that into a plan, then feed that step by step to the agent, forcing full feature testing after each one.

      • bcrosby95 21 hours ago

        When you say specification, what, specifically, does that mean? Do you have an example?

        I've actually been playing around with languages that separate implementation from specification under the theory that it will be better for this sort of stuff, but that leaves an extremely limited number of options (C, C++, Ada... not sure what else).

        I've been using C and the various LLMs I've tried seem to have issues with the lack of memory safety there.
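
        For what it's worth, the specification/implementation split being referred to in C is the classic header/source division - a minimal made-up example (two files shown in one block):

        ```c
        /* vec.h - the specification: the contract callers may rely on */
        /* Returns the dot product of two 3-component vectors. */
        double vec3_dot(const double a[3], const double b[3]);

        /* vec.c - the implementation, free to change behind the interface */
        #include "vec.h"
        double vec3_dot(const double a[3], const double b[3]) {
            return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
        }
        ```

        The appeal, presumably, is that you can hand the model the headers as a compact contract without flooding the context with implementation detail.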

  • sagarpatil a day ago

    Yeah, you can't do sh*t in an hour. I spend a good 6-8 hours every day using Claude Code, and I actually spend an hour every day trying new AI tools; it's a constant process.

    Here's what today's task list looks like:

    1. Test TRAE / Refact.ai / Zencoder: 70% on SWE-bench Verified
    2. https://github.com/kbwo/ccmanager: use git worktrees to manage multiple Claude Code sessions
    3. https://github.com/julep-ai/julep/blob/dev/AGENTS.md: read and implement
    4. https://github.com/snagasuri/deebo-prototype: autonomous debugging agent (MCP)
    5. https://github.com/claude-did-this/claude-hub: connects Claude Code to GitHub repositories

  • __MatrixMan__ a day ago

    It definitely takes more than minutes to discover the ways that your model is going to repeatedly piss you off and set up guardrails to mitigate those problems.

  • JimDabell 21 hours ago

    > It's something you can pick up in a few minutes (or an hour if you're using more advanced tooling, mostly spent setting things up).

    This doesn’t give you any time to experiment with alternative approaches. It’s equivalent to saying that the first approach you try as a beginner will be as good as it possibly gets, that there’s nothing at all to learn.

dingnuts a day ago

> You might prefer manual coding, but you might just be bad at AI coding, and you might come to prefer it if you improved at it.

OK, but how much am I supposed to spend before I supposedly just "get good"? Because based on the free trials and the pocket change I've spent, I don't consider the ROI worth it.

  • qinsig a day ago

    Avoid using agents that can just blow through money (Cline, Roo Code, Claude Code with an API key, etc.).

    Instead you can get comfortable prompting and managing context with aider.

    Or you can use claude code with a pro subscription for a fair amount of usage.

    I agree that seeing the tools waste several dollars just to make a mess you need to discard is frustrating.

  • goalieca a day ago

    And how often do your prompting skills change as the models evolve?

  • badsectoracula 21 hours ago

    It won't be the hippest of solutions, but you can use something like Devstral Small with a fully open-source setup to start experimenting with local LLMs and a bunch of tools - or just chat with it through a chat interface. Some time ago I ping-ponged between Devstral running in a chat interface and my regular text editor to make a toy raytracer project [0] (output) [1] (code).

    While it wasn't the fanciest integration (nor the best of codegen), it was good enough to "get going" (the loop was to ask the LLM to do something, do something else myself in the background, then fix and merge the changes it made - even though I often had to fix stuff [2], sometimes it was less of a hassle than starting from scratch [3]).

    It can give you a vague idea that with more dedicated tooling (i.e. something that does automatically what you'd do by hand [4]) you could do more interesting things (combining it with some sort of LSP functionality to pass function bodies to the LLM would also help), though personally I'm not a fan of the "dedicated editor" approach that seems common, and I think something more LSP-like (especially if it can also work with existing LSPs) would be neat.

    IMO it can be useful for a bunch of boilerplate-y or boring work. The biggest issue I can see is that the context is too small to include everything (imagine, e.g., throwing the entire Blender source code at an LLM, which I don't think even the largest of cloud-hosted LLMs can handle), so there needs to be some external way to store stuff dynamically, with the LLM knowing that the external stuff is available, looking it up, and storing things as needed. I'm not sure how exactly that would work, though, to the point where you could - say - open up a random Blender source file, point to a function, ask the LLM to make a modification, have it reuse any existing functions in the codebase where appropriate (without you pointing them out), and then, if needed, have the LLM also update the code where the function you modified is used (e.g. if you added/removed some argument or changed the semantics of its use).

    [0] https://i.imgur.com/FevOm0o.png

    [1] https://app.filen.io/#/d/e05ae468-6741-453c-a18d-e83dcc3de92...

    [2] e.g. when I asked it to implement a BVH to speed things up, it made something that wasn't hierarchical and actually slowed things down

    [3] the code it produced for [2] was fixable into a simple BVH

    [4] I tried a larger project and wrote a script that `cat`ed and `xclip`ed a bunch of header files to pass to the LLM so it would know the available functions; each function had a single-line comment about what it does, and when the LLM wrote new functions it added that comment too. 99% of these one-liner comments were actually written by the LLM.
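
    (The script in [4] amounts to something like the following - a reconstruction from the description above, not the actual script:)

    ```sh
    #!/bin/sh
    # Gather the project headers (each function carries a one-line comment)
    # and put them on the clipboard for pasting into the LLM chat.
    cat src/*.h | xclip -selection clipboard
    ```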

  • grogenaut a day ago

    how much time did you spend learning your last language to become comfortable with it?

  • stray a day ago

    You're going to spend a little over $1k to ramp up your skills with AI-aided coding. It's dirt cheap in the grand scheme of things.

    • viraptor a day ago

      Not even close. I'm still under $100, creating full apps. Stick to reasonable models and you can achieve and learn a lot. You don't need the latest and greatest in max mode (or whatever the new one calls it) for the majority of tasks. You don't have to throw the whole project context at the service every time either.

    • dingnuts a day ago

      Do I get a refund if I spend a grand and I'm still not convinced? At some point I'm going to start lying to myself to justify the cost, and I don't know how much y'all earn, but $1k is getting close.

      • theoreticalmal a day ago

        Would you ask for a refund from a university class if you didn’t get a job or skill from it? Investing in a potential skill is a risk and carries an opportunity cost, that’s part of what makes it a risk

      • HDThoreaun a day ago

        No one is forcing you to improve. If you don’t want to invest in yourself that is fine, you’ll just be left behind.

    • asciimov a day ago

      How are those without that kind of scratch supposed to keep up with those that do?

      • theoreticalmal a day ago

        This kind of seems like asking “how are poor people supposed to keep up with rich people” which we seem to not have a long term viable answer for right now

      • wiseowise 21 hours ago

        What makes you think those without that kind of scratch are supposed to keep up?

      • throwawaysleep a day ago

        If you lack "that kind of scratch", you are at the learning stage for software development, not the keeping up stage. Either that or horribly underpaid.