dspillett 18 hours ago

> To some degree, traditional coding and AI coding are not the same thing

LLM-based¹ coding, at least beyond simple auto-complete enhancements (using it directly & interactively as what it is: Glorified Predictive Text), is more akin to managing a junior or outsourcing your work. You give a definition/prompt, some work is done, you refine the prompt and repeat (or fix any issues yourself), much like you would with an external human. The key differences are turnaround time (in favour of LLMs), reliability (in favour of humans, though that is largely mitigated by the quick turnaround), and lack of usefulness for "bigger picture" work (though I suspect that is a limit that will go away with time, possibly not much time).

This is one of my (several) objections to using it: I want to deal with and understand the minutiae of what I am doing. I got into programming, database bothering, and infrastructure kicking because I enjoyed it, enjoyed learning it, and wanted to do it. For years I've avoided managing people at all, at the known expense of reduced salary potential, for similar reasons: I want to be a tinkerer, not a manager of tinkerers. Perhaps call me back when you have an AGI that I can work alongside.

--------

[1] Yes, I'm a bit of a stick-in-the-mud about calling these things AI. Next decade they won't generally be considered AI, just as many things previously called AI are no longer considered AI now. I'll call something AI when it is, or very closely approaches, AGI.

rwmj 16 hours ago

Another difference: your junior will, over time, learn, and you'll also get a sense of whether you can trust them. If after a while they aren't learning and you can't trust them, you get rid of them. GenAI doesn't gain knowledge in the same way, and you're always going to have the same level of trust in it (which in my experience is limited).

Also, if my junior argued back and was wrong repeatedly, that'd be bad. Luckily that has never happened with AIs ...

  • averageRoyalty 16 hours ago

    Cline, Roocode, etc. have the concept of rules that can be added to over time. There are heaps of memory-bank and orchestration methods for AI.

    LLMs absolutely can improve over time.
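
    For illustration, a minimal sketch of what such a rules file might contain. Cline, for instance, reads standing project instructions from a `.clinerules` file in the repo root (file names and conventions vary by tool, and these particular rules are hypothetical):

        # .clinerules — standing instructions the agent re-reads on every task
        - Use TypeScript strict mode; never introduce `any`.
        - All database access goes through src/db/repository.ts; no raw SQL in handlers.
        - Run `npm test` and fix any failures before declaring a task complete.

        # Lessons learned from past sessions get appended below,
        # which is where the "improving over time" comes from.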

danielbln 14 hours ago

> I want to be a tinkerer, not a manager of tinkerers.

We all want many things; that doesn't mean someone will pay you for them. You want to tinker? Great, awesome, more power to you, tinker on personal projects to your heart's content. However, if someone pays you to solve a problem, then it is your job to find the best, most efficient way to do it cleanly. Can LLMs do this on their own most of the time? I think not, not right now at least. The combination of a skilled human and an LLM? Most likely, yes.

  • dspillett 9 hours ago

    If it gets to the point where I can't compete in the role with those using LLMs, I'll move on. I'm not happy with remote teams essentially being the only way of working these days (if you aren't working alone) anyway, nor with various other directions the industry has moved in (the shit-show that is the client-side stack, for instance!).

    Maybe I'll retrain for lab work; I know a few people in the area. Yeah, I'd need to take a pay cut, but… heck, I've got the mortgage paid, so I could take quite a cut and not be destitute, especially if I get sensible and keep my savings where they are, still building, instead of getting tempted to spend them! I don't think it'll get to that point for quite a few years though, and I might have been due to throw the towel in by that point anyway. It might be nice to reclaim tinkering as a hobby rather than a chore!

thefz 13 hours ago

> I want to deal with and understand the minutiae of what I am doing. I got into programming, database bothering, and infrastructure kicking because I enjoyed it, enjoyed learning it, and wanted to do it

A million times yes.

And we live in a time in which people want to be called "programmers" because it's oh-so-cool, without doing the work necessary to earn the title.