Comment by lambdaone 4 days ago

Hmm. I got ChatGPT-4o to write some code for me today. The results, while very impressive-looking, simply didn't work. By the time I'd finished debugging it, I'd probably spent 80% of the time I would have spent writing it from scratch.

None of which is to discount the future potential of LLMs, or the amazing abilities they have right now - I've solved other, simpler problems almost entirely with LLMs. But they are not a panacea.

Yet.

ern 4 days ago

Something interesting I observed after introducing LLMs to my team is that the most experienced team members reached out to me spontaneously to say it boosted their productivity (although when I asked the other team members, every single one was using LLMs too).

My current feeling is that LLMs are great at dealing with known unknowns: you know what you want, but don’t know how to do it, or it’s too tedious to do yourself.

throw101010 4 days ago

> I probably spent 80% of the time I would have spent writing it from scratch.

A 20% time improvement sounds like a big win to me. That time can now be spent learning/improving skills.

Obviously learning when to use a specific tool to solve a problem is important... just like you wouldn't use a hammer to clean your windows, using an LLM for problems you know have never really been tackled before will often yield subpar/non-functional results. But even in those cases the answers can be a source of inspiration for me, even if I end up having to solve the problem "manually".

One question I've been thinking about lately is how this will work for people who have always had this LLM "crutch" from the moment they started learning how to solve problems. Will they skip a lot of the steps that currently help me know when to use an LLM and when it's rather pointless?

And I've started thinking of LLMs for coding as a form of abstraction, just like we've had the "crutch" of high-level programming languages for years: many people never learned, or even needed to learn, any low-level programming and still became proficient developers.

Obviously it isn't a perfect form of abstraction - these models can have major issues with hallucinations - so the parallel isn't great... I'm still wondering how they will integrate with the ways humans learn.

cageface 4 days ago

The thing that limits my use of these tools is that it massively disrupts my mental flow to shift from coding to prompting and then to debugging the generated code.

For self-contained tasks that aren't that complex they can save a lot of time, but for features that require careful integration into a complex architecture I find them worse than useless in their current state.

Mc91 4 days ago

I've been using ChatGPT (paid) and Perplexity (unpaid) to help with different coding tasks, and I've found them very helpful in some situations. There are some instructions I give almost every time - for example, "don't use Kotlin non-null assertions". Sometimes the code doesn't work, but I have some idea of their strengths and limitations and have definitely found them useful. I understand there are other AI programming tools out there too.
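
For anyone unfamiliar with Kotlin: the non-null assertion operator `!!` force-unwraps a nullable value and throws a NullPointerException at runtime if it turns out to be null, which is why I ask for it to be avoided. Here's a minimal sketch of the safer pattern I'd rather get back - the function and names are just my own illustration, not from any real prompt:

    // Hypothetical example: looking up a display name by user id.
    fun greet(users: Map<String, String>, id: String): String {
        // Non-null assertion - crashes with NullPointerException if id is absent:
        // val name = users[id]!!

        // Safer: the Elvis operator supplies a fallback instead of crashing.
        val name = users[id] ?: "guest"
        return "Hello, $name"
    }

    fun main() {
        val users = mapOf("u1" to "Ada")
        println(greet(users, "u1")) // Hello, Ada
        println(greet(users, "u2")) // Hello, guest
    }

Generated code tends to reach for `!!` because it type-checks easily; the `?:` fallback (or a safe call with `?.`) forces you to decide what should actually happen in the null case.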