iamleppert 3 months ago

68 replies

There's nothing stopping you from coding if you enjoy it. It's not like they have taken away your keyboard. I have found that AI frees me up to focus on the parts of coding I'm actually interested in, which is maybe 5-10% of the project. The rest is the boilerplate, cargo-culted Dockerfile, build-system, and bash-environment-variable-passing circle of hell that I couldn't care less about. I care about certain things that I know will make the product better, and achieve its goals in a clever and satisfying way.

Even when I'm stuck in hell, fighting the latest undocumented change in some obscure library or other grey-bearded creation, the LLM, although not always right, is there for me to talk to, when before I'd often have no one. It doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if it's not always right, because it's at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.

melvinroest 3 months ago

> The rest is the boilerplate, cargo-culted Dockerfile, build-system, and bash-environment-variable-passing circle of hell that I couldn't care less about.

Even more so: I remember making a Chrome extension and feeling intimidated. I knew that I'd be comfortable with most of it, given that it's mostly JS, but I just didn't know where to start.

With an LLM it is way faster to spin up some default config and get going versus reading a tutorial. What I've noticed in that respect is that I just read what it does and then immediately reason about why it's there. "Oh, there's a manifest.json file with permissions and a few other things, fair, makes sense. Oh, so you have the HTML/CSS/JS of the extension, you have the HTML/CSS/JS of the page you're injecting some code into, and you have the JS of a background worker. Ah yea, I get that."
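
For reference, the skeleton it spins up boils down to something like this minimal Manifest V3 sketch (the file names here are placeholders, not anything the LLM is guaranteed to pick):

  {
    "manifest_version": 3,
    "name": "My Extension",
    "version": "1.0",
    "permissions": ["storage"],
    "action": { "default_popup": "popup.html" },
    "content_scripts": [
      { "matches": ["https://example.com/*"], "js": ["content.js"] }
    ],
    "background": { "service_worker": "background.js" }
  }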

And then I just immediately get on with coding.

  • dxroshan 3 months ago

    > What I've noticed in that respect is that I just read what it does and then immediately reason about why it's there ...

    What if it hallucinates and gives you wrong code and explanations? It is better to read documentation and tutorials first.

    • doix 3 months ago

      > What if it hallucinates and gives you wrong code

      Then the code won't compile, or more likely your editor/IDE will flag it as invalid. If you're using something like Cursor in agent mode, invalid code gets detected and the LLM keeps re-running until something valid comes out.

      > It is better to read documentation and tutorials first.

      I "trust" LLM's more than tutorials, there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.

      • milesrout 3 months ago

        Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".

        As for my editor saying it is invalid...? That is just as untrustworthy as an LLM.

        >I "trust" LLM's more than tutorials, there's so much garbage out there.

        Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.

    • selfhoster11 3 months ago

      Do you mean the laconic and incomplete documentation? And the tutorials that range from "here's how you do a hello world" to "draw the rest of the fucking owl" [0], with nothing in between to actually show you how to organise a code base or file structure for a mid-level project?

      Hallucinations are a thing. With a competent human on the other end of the screen, they are not such an issue. And the benefits you can reap from having LLMs as a sometimes-mistaken advisory tool in your personal toolbox are immense.

      [0]: https://knowyourmeme.com/memes/how-to-draw-an-owl

      • skydhash 3 months ago

        The kind of documentation you’re looking for is called a tutorial or a guide, and you can always buy a book for it.

        Also, some things are meant to be approached with the correct foundational knowledge (you can't do 3D without geometry, trigonometry, and matrices, plus a healthy dose of physics). Almost every time I see people struggling with documentation, it's because they lack domain knowledge.

    • melvinroest 3 months ago

      Fair question. So far I've seen two things:

      1. Code doesn't compile. In this case it's obvious what to do.

      2. Code does compile.

      I don't work in Cursor; I read the code quickly to see the intent, and when done with that I decide to copy/paste it and test the output.

      You can learn a lot by simply reading the code. For example, when I see a `group_by` function call in polars, even if I didn't know polars could do that, I now know what it does because I know SQL. Then I check the output; if it corresponds to what I expect a group-by to do, I move on.
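
      A minimal sketch of that situation (made-up data; recent polars spells the method `group_by`):

        import polars as pl

        df = pl.DataFrame({
            "city": ["Oslo", "Oslo", "Bergen"],
            "temp": [12, 14, 9],
        })

        # Reads like SQL: SELECT city, AVG(temp) FROM df GROUP BY city
        print(df.group_by("city").agg(pl.col("temp").mean()))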

      There comes a point where I need more granularity and more precision. That's the moment I ditch the AI and start to use things such as documentation and my own mind. This happens one to two hours after bootstrapping a project with AI in a language/library/framework I initially knew nothing about. But now I do know it; I know a few hours' worth of it. That's enough to roughly know where everything is and not be in setup hell and similar things. Moreover, just by reading the code, I get a rough idea of how beginner-to-intermediate programmers think about the problem space the code is written in, as there's always a certain style of writing certain code. This points me in the direction of how to think about it. I see it as a hint, not as the definitive answer. I suspect that experts think differently about it, but given that I'm just a "few hours old" in the particular language/lib/framework, I think knowing all of this is already really amazing.

      AI helps with quicker bootstrapping by virtue of reading code. And when it gets actually complicated and/or interesting, then I ditch it :)

    • gilbetron 3 months ago

      What do you do if you "hallucinate" and write the wrong code? Or if the docs/tutorial you read are out of date, incorrect, or for a different version than you expect?

      That's not a jab, but a serious question. We act like people don't "hallucinate" all the time - modern software engineering and devops are all about putting in guardrails to detect such "hallucinations".

    • Spivak 3 months ago

      Even when it hallucinates, it still solves most of the unknown unknowns, which is good for getting you unblocked. It's probably close enough to get some terms to search for.

      • 59nadir 3 months ago

        Have you tried using AI only for things you already know? I almost exclusively do (because I haven't found that LLMs speed up my actual process much), and I can tell you that the things LLMs generally leave out/forget/don't "know" about are plentiful. They result in tons of debugging and usually require me to "metagame" heavily and ask pointed questions that someone without my knowledge simply wouldn't know to ask in order to fix the generated code. An LLM can't even give you basic OpenGL code in C for some basic framebuffer blitting without missing stuff that'll cost you hours or a whole day in debugging time.

        Add to this that someone who uses an LLM to "just do things" for them like this is very unlikely to have much useful knowledge, and so can't really resolve these issues themselves. It's a recipe for disaster, not a time saver over simply learning and doing it yourself.

        For what it's worth, I've found that LLMs are pretty much only good for well-understood basic theory that can give you a direction to look in, and that's about it. I used to use GitHub Copilot (which years ago was (much?) better than Cursor with Claude Sonnet just a few months ago) to tab-complete boilerplate and the like, but concluded that overall I wasn't really saving time and energy: as nice as tab-completing boilerplate sometimes was, it invariably turned into "it suggested something interesting, let's see if I can mold it into something useful", which took up valuable time, led nowhere good in general, and was just generally disruptive.

      • dxroshan 3 months ago

        I don't think so. How can you be so sure it solves the 'unknown unknowns'?

    • brigandish 3 months ago

      Most tutorials fail to include meta info like the system they're using and the versions of things, which can be a real pain.

apothegm 3 months ago

So much this. The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts) and lets me focus on the interesting bits like what it is I want to build and how the pieces should fit together. And debugging, which I find satisfying.

Sadly, I find it sorely lacking at dealing with build systems and that particular type of boilerplate, mostly because it seems to mix up different versions of things too much and gives you totally broken setups more often than not. I'd just as soon never deal with the hell that is front-end build/lint/test config again.

  • dxroshan 3 months ago

    > The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts)

    AI generated tests are a bad idea.

    • simonw 3 months ago

      AI generated tests are genuinely fantastic, if you treat them like any other AI generated code and review them thoroughly.

      I've been writing Python for 20+ years and I still can't use unittest.mock without looking up the details every time. ChatGPT and Claude are great at that, which means I use it more often because I don't have to deal with the frustration of figuring it out.
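
      For instance, the kind of detail I always have to look up, as a runnable sketch (the function under test is made up for illustration):

        import unittest
        from unittest.mock import patch
        import requests

        def fetch_user(user_id):
            # Hypothetical function under test, standing in for real app code.
            resp = requests.get(f"https://api.example.com/users/{user_id}")
            return resp.json()

        class TestFetchUser(unittest.TestCase):
            @patch("requests.get")  # patch where the name is looked up
            def test_fetch_user(self, mock_get):
                mock_get.return_value.json.return_value = {"id": 1, "name": "Ada"}
                self.assertEqual(fetch_user(1)["name"], "Ada")
                mock_get.assert_called_once_with("https://api.example.com/users/1")

        if __name__ == "__main__":
            unittest.main()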

    • apothegm 3 months ago

      Just as with anything else AI, you never accept test code without reviewing it. And often it needs debugging. But it handles about 90% of it correctly and saves a lot of time and aggravation.

    • otabdeveloper4 3 months ago

      Well, maybe they just need X lines of so-called "tests" to satisfy some bullshit-job metrics.

  • tcfhgj 3 months ago

    Aren't stdlib functions the ones you know by heart after a while anyways?

    • apothegm 3 months ago

      Depends on the language. Python, for instance, has a massive standard library, and there are entire modules I use anywhere from once a year to once a decade, or never at all until some new project needs them.

    • danielbln 3 months ago

      Not everyone works in a single language and/or deep in some singular code base.

      • BigJono 3 months ago

        Gee do you think maybe that's why all our software sucks balls these days?

      • skydhash 3 months ago

        I struggle to think how one person is supposed to interact with that many languages on a daily (or even weekly) basis.

        I’ve been on projects with multiple languages, but the truly active code was done in only two. The other languages were used in completed modules where we do routine maintenance and rare alterations.

        • simonw 3 months ago

          "I struggle to think how one person is supposed to interact with that many languages on a daily (or even weekly) basis."

          LLMs. I've expanded the circle of languages I use on a frequent basis quite dramatically since I started leaning on LLMs more. I used to be Python, SQL and JavaScript only. These days I'm using jq, AppleScript, Bash, Go, awk, sed, ffmpeg and so many more.

          I used to avoid infrequently used DSLs because I couldn't hold them in my memory. Now I'll happily use "the best tool for the job" without worrying about spinning up on all the details first.

wvh 3 months ago

I think the fear, for those of us who love coding, stability, and security, is that we are going to be confronted with apples that are rotten on the inside, and that our work, our love, is going to turn (even more so) into pain. The challenge in computing is that the powers that decide have little overview of the actual quality and longevity of any endeavour.

I work as a consultant assessing other people's code, and it's hard not to lose my religion, so to speak.

righthand 3 months ago

They perhaps haven't taken away your keyboard, but anecdotally a few friends work at places where their boss is requiring them to use LLMs. So you may not have to code with them, but some people are starting to be chained to them.

  • hermanradtke 3 months ago

    Yes, there are bad places to work. There are also places that require detailed time tracking, do not allow any time to write tests, have very long hours, tons of on-call alerts, etc.

    • godelski 3 months ago

      You write that like the latter is in opposition to the former. Yet the content suggests the latter is the former.

  • godelski 3 months ago

    And even when that's not the case, you are still indirectly working with them, because your coworker is, and "somehow" their code has gotten worse.

voidUpdate 3 months ago

> The rest is the boilerplate, cargo-culted Dockerfile, build-system, and bash-environment-variable-passing

I keep seeing people saying to use an LLM to write boilerplate, but like... do you not just copy that from another project where you already wrote it?

  • handzhiev 3 months ago

    No, because it's usually a few years old and already obsolete - the frameworks and the language have gone through a gazillion changes and what you did in 2021 suddenly no longer works at all.

    • moooo99 3 months ago

      I mean, the training data also has a cutoff date, and changes beyond that are not reflected in the code suggestions.

      Also, I know that people love to joke about modern software and JS in particular. But if you take React code from 2020 and drop it into a new React codebase, it still works. Even class-based components work. Yes, if you jumped on the newest framework bandwagon every time, stuff will break all the time, but AI won't be able to help you with that either. If you went for relatively stable frameworks, you can reuse boilerplate completely or with relatively minimal adjustments.

      • whstl 3 months ago

        React is alright but the framework tooling around it changes a lot.

        If you take a project from 2020 it's a bit of a pain to upgrade it.

      • scarface_74 3 months ago

        True. But LLMs have access to the web. I've told ChatGPT plenty of times to verify an SDK API, or if I knew the API was new, I just gave it a link to the documentation. This was mostly around various AWS SDKs.

        • simonw 3 months ago

          The search improvements to o3 and o4-mini have made a huge difference in the last couple of weeks.

          I ran this prompt (and others like it) and it actually worked!

            This code needs to be upgraded to the new
            recommended JavaScript library from
            Google. Figure out what that is and
            then look up enough documentation to
            port this code to it
          
          https://simonwillison.net/2025/Apr/18/gemini-image-segmentat...

    • jay_kyburz 3 months ago

      lol, I've been cutting and pasting from the same projects I started in 2010. When you work in vanilla js it doesn't change.

    • asdff 3 months ago

      Ehh most people are good about at least throwing a warning before they break a legacy pattern. And you can also just use old versions of your tools. I'm sure the 2021 tool still does the job. Most people aren't working on the bleeding edge here. Old versions of numpy are fine.

  • moooo99 3 months ago

    I keep seeing that suggestion as well, and the only sensible way I see it working would be for one-off boilerplate; anything else does not make sense.

    If you reuse boilerplate only once in a while, copying it from elsewhere is fine. If you reuse it all the time, just set up a macro in your editor of choice. IMHO that is way more efficient than asking AI to produce somewhat-consistent boilerplate.

    • pelagicAustral 3 months ago

      You know, I have my boilerplate in Rails and it is just a work of art... I simply clone my BP repo, bundle, migrate, run, and I have user management, auth, an SMTP client, SMS alerts, and literally everything I need to get started. And it was just this same week that I decided to try a code assistant, and my result was shockingly good. Once you provide the assistant with a good, clean starting point, and if you are very clear on what you want to build, the results are just too good to be dismissed.

      So yes, boilerplate, but also yes, there is definitely something to be gained from using AI assistants.

jrapdx3 3 months ago

Like many others writing here, I enjoy coding (well, mostly anyway), especially when it requires deep thought and patient experimentation to get anywhere. It's also great to preside over finally wiring together the routines (modules, libraries) that bind a project into a coherent whole.

Haven't much used AI to assist. After all, it's hard enough finding authentic humans capable of and willing to voluntarily review/critique one's code. So far AI doesn't consistently provide that kind of help. OTOH it seems almost certain that over time AI systems will improve in terms of specific and comprehensive "insights" into the particular types of code one is writing.

I think an issue is that human creativity is hard to measure. Likely enough, AI is even tougher to assess. Probably AI will increasingly be assigned tasks like constructing project skeletons, assuring parts can be joined together without undue strain, and handling "boilerplate" and other routine chores. To be sure, the landscape will look different in 50 years; I'm certain we'd be amazed were we able to see what future systems will be doing.

In any case, we shouldn't hesitate to use tools that genuinely boost our creativity. One badly needed role would be enabling development of higher-reliability software. Still, that's a far cry from the contributions emanating from the best of human originality, talent, and motivation.

2snakes 3 months ago

I read one characterization, which is that LLMs don't give new information (except to the user who is learning), but rather reorganize old information.

  • docmechanic 3 months ago

    That’s only true if you tokenize words rather than characters. Character tokenization generates new content outside the training vocabulary.

    • selfhoster11 3 months ago

      All major tokenisers have explicit support for encoding arbitrary byte sequences. There's usually a consecutive range of tokens reserved for 0x00 to 0xFF, and you can encode any novel UTF-8 words or structures with it. Including emoji and characters that weren't a part of the model's initial training, if you show it some examples.
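
      A quick way to see the byte fallback in practice, for example with OpenAI's tiktoken (a sketch; assumes tiktoken is installed, and the exact token split depends on the vocabulary):

        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")

        # A rare emoji usually isn't a single token; it falls back to
        # tokens covering fragments of its raw UTF-8 bytes.
        tokens = enc.encode("🦜")
        print(tokens)
        print([enc.decode_single_token_bytes(t) for t in tokens])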

      • docmechanic 3 months ago

        Pretty sure we're talking apples and oranges. Yes to the arbitrary byte sequences used by tokenizers, but that is not the topic of discussion. The question is whether the tokenizer will come up with words not in the training vocabulary. Word tokenizers don't, but character tokenizers do.

        Source: Generative Deep Learning by David Foster, 2nd edition, published in 2023. From “Tokenization” on page 134.

        “If you use word tokens: ... will never be able to predict words outside of the training vocabulary.”

        “If you use character tokens: The model may generate sequences of characters that form words outside the training vocabulary.”

        • selfhoster11 3 months ago

          Those tokens won't come up during training, but LLMs are capable of in-context learning. If you give one some examples of how to create new words/characters in this manner as part of the prompt, it will be able to use those tokens at inference time. Show it some examples of how to compose a Thai or Chinese sentence out of byte tokens, give it a description of the hypothetical Unicode range of a custom alphabet, and a sufficiently strong LLM will be able to just output bytes, despite those codepoints not technically existing.

          And like I said, single-byte tokens very much are a part of word tokenisers, or to be precise, of their token selection. "Word tokeniser" is a misnomer in any case: they are word-piece tokenisers. English is simple enough that word pieces can be entire words. With languages where you have numerous suffixes, prefixes, and even infixes as part of one "word" (as defined by "one or more characters preceded or followed by a space" - though the truth is more complicated than that), you have not so much "word tokenisers" as "subword tokenisers". A character tokeniser is just a special case of a subword tokeniser where the length of each subword is exactly 1.

    • asdff 3 months ago

      Why stop there? Just have it spit out the state of the bits on the hardware. English seems like a serious shackle for an LLM.

    • emaro 3 months ago

      Kind of, but character-based tokens make it a lot harder and more expensive to learn semantics.

      • docmechanic 3 months ago

        Source: Generative Deep Learning by David Foster, 2nd edition, published in 2023. From “Tokenization” on page 134.

        “If you use word tokens: ... will never be able to predict words outside of the training vocabulary.”

        “If you use character tokens: The model may generate sequences of characters that form words outside the training vocabulary.”

skydhash 3 months ago

> doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if it's not always right, because it's at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.

That’s a lot of trauma you’re dealing with.

godelski 3 months ago

I think you're robbing yourself.

Of course, it all depends how you use the LLM. While the same can be true for StackOverflow, the LLMs just scale the issues up.

  > The rest is the boilerplate, cargo-culted Dockerfile, build-system, and bash-environment-variable-passing circle of hell that I couldn't care less about.

Except you do care. It's why you're frustrated and annoyed. And good!!! That feeling is because what you're describing requires solving. If something is routine, automate it. But it's really not good to automate in a statistical way, especially when that statistical tool is optimized for human preference. Because remember, that also means mistakes are optimized to be missed by humans.[0]

With expertise in anything, I'm sorry, but you've also got to do the shit work. To be a great musician you've got to practice boring scales. It's true even if you just want to be a subpar one.

But a little grumpiness is good. It drives you to fix things, and frankly, that's our job. The things that are annoying and create friction don't need to be repeated over and over; they need alternative solutions. The scripts you build are valuable. The "useless" knowledge you gain isn't so useless. Those little details add up without you knowing and make you better.

That undocumented code makes you frustrated and reminds you to document your own. You don't want to be a hypocrite. The author of the thing you're using probably thought the same thing: "No one is gonna use this garbage, I'm not going to waste my time documenting it." Yet here we are, over and over again, and we don't learn the lesson.

I'm not gonna deny there's assholes. There are. But even assholes teach you. At worst, they teach you how not to act.

And some people are telling you to RTM, not RTFM. Sure, the manual has lots of extra information in it that you don't need to get your specific job done, but have you also considered that it has lots of extra information in it? The person that wrote it clearly thought the context was important. Maybe it isn't. In that case, you learned a lesson in how not to write documentation!

What I'm getting at is that there's a lot of learning done all over the place. Trying to take out all the work and only have "the fun" is harming yourself, and has a large potential to leave less time for the fun stuff[0]. I'd be surprised if I'm alone in this, but a lot of stuff I enjoy now was stuff that originally frustrated me. IME this is pretty common! It's true for every person I know. Similarly, it's also true for things I learned that I thought I'd never use again. It always has a way of coming back.

I'm not going to pretend it's all fun and games. I'm explicitly saying it's not. But I'm confident that in the long run it's better. Despite the lack of accuracy, I use LLMs (and Google, and even the TFM) like I would a solution guide for homework problems when I was in school. Try first, then consult. The struggle is an investment in your future. It sucks, but if all the best things in life were easy then we'd all have them. I'm just trying to convince you that it pays off.

I'm completely aware this is all context dependent. There's a time and place for everything. But given the percentages you mention (even taken as exaggeration), something sounds wrong. It's hard to suggest specific solutions without details, but I'd be surprised if there weren't better and more rewarding solutions than having the LLM do it for you.

[0] That's the big danger and what drives distrust in them. Because you need to work extra hard to find mistakes, increasing workload, not decreasing, because debugging is most of the job!

  • jspdown 3 months ago

    I share the same opinion.

    While it looks like a productivity boost, there's a clear price to pay. The more you use it, the less you learn and the less you are able to assess quality.

    • godelski 3 months ago

      Worse, it feels productive. But I'd bet if you watched the clock and tracked progress of a non-trivial project, you'd find what we've always known to be true: there are no shortcuts.

      I'm sure it's faster in the short term. Just like copy-paste-from-Stack-Overflow is. But it is debt. The shit builds and builds. But I think the problem is we're so surrounded by shit we've just normalized it. It is incredible how much bloat and low-hanging fruit there is that could be cheaply resolved, but there is no will to. And in my experience, it isn't just a lack of will, it is a lack of recognition. If the engineers can't recognize shit, then how do we build anything better? It is literally our job to find problems.

johngladtj 3 months ago

This.

Frankly I don't want to spend 2 hours reading documentation just to find out some arcane incantation that gets the computer to do what I want it to do.

The interesting part of programming to me is designing the logic. It's the 'this, then that, except when this' flow that I'm really interested in, not the search for some obscure library that has some function that will parse this CSV.

LLMs are great for that, and they let me get away from the pointless grind and into the things that I enjoy and that actually provide value.

The pair-programming aspect is also a super good thing. I work best when I can spitball and throw out random ideas and get quick feedback. LLMs let me do that without bothering others who have their own work to do.

  • bluefirebrand 3 months ago

    > Frankly I don't want to spend 2 hours reading documentation just to find out some arcane incantation that gets the computer to do what I want it to do

    Then you are just straight up not cut out to be a software developer

    The existence of LLMs may reduce the need to slog through documentation, but it will not remove that need

    • johngladtj 3 months ago

      You're welcome to believe what you will, but the fact is I've written code that serves a purpose and provides value to businesses, and at the end of the day that's all that matters, not some arbitrary purity test you just made up.

      The purpose of programming is to provide value for people, not to read documents.

      • bluefirebrand 3 months ago

        This isn't some arbitrary purity test

        There is more to "providing value" than simply producing working code

        Does it have known security exploits built in that you have no idea about because you couldn't be bothered to read documentation?

        Is the "value" you provided extremely temporary because someone is going to come along and exploit your shitty LLM generated code to steal all of your client's customer data?

        Software engineering isn't just about writing code; it is about understanding what you're building, because if you don't, other people will exploit that.