Comment by mrinfinitiesx 4 days ago

I can know literally nothing about a programming language, ask an LLM to make me functions and a small program to do something, then read documentation and start building off of that base immediately. It accelerates my learning, letting me find new passions for new languages and new perspectives on systems. Whatever's going on in the AI world, assisting with learning curves and learning disabilities is something it's proving strong in. It's given me a way forward with trying new tech. If it can do that for me, it can do that for others.

Diminishing returns for investors maybe, but not for humans like me.

EternalFury 4 days ago

If you "know literally nothing about a programming language", there are two key consequences: 1) you cannot determine whether the code is idiomatic to that language, and 2) you may miss subtle deficiencies that could cause problems at scale.

I've used LLMs for initial conversion between languages I'm familiar with. It saved me a lot of time, but I still had to invest effort to get things right. I will never claim that LLMs aren't useful, nor will I deny that they're going to disrupt many industries...this much is obvious. However, it's equally clear that much of the drama surrounding LLMs stems from the gap between the grand promises (AGI, ASI) and the likely limits of what these models can actually deliver.

The challenge for OpenAI is this: if the path ahead isn't as long as they initially thought, they'll need to develop application-focused business lines to cover the costs of training and inference. That's a people business, rather than a data+GPU business.

I once worked for an employer that used multi-linear regression to predict they'd be making $5 trillion in revenue by 2020. Their "scaling law" didn't disappoint for more than a decade; but then it stopped working. That's the thing with best-fit models and their projections: they work until they don't, because the physical world is not a math equation.

  • mewpmewp2 4 days ago

    It still requires effort, but it removes so many of those early hurdles, which I often face and which demotivate me. E.g. I have constant "why" questions, which I can keep asking an LLM forever, and it has infinite patience. Answers to those are very difficult to find by Googling.

lambdaone 4 days ago

Hmm. I got ChatGPT-4o to write some code for me today. The results, while very impressive looking, simply didn't work. By the time I'd finished debugging it, I probably spent 80% of the time I would have spent writing it from scratch.

None of which is to discount the future potential of LLMs, or the amazing ability they have right now - I've solved other, simpler problems almost entirely with LLMs. But they are not a panacea.

Yet.

  • ern 4 days ago

    Something interesting I observed after introducing LLMs to my team is that the most experienced team members reached out to me spontaneously to say it boosted their productivity (although when I asked the other team members, every single one was using LLMs).

    My current feeling is that LLMs are great at dealing with known unknowns: you know what you want, but don't know how to do it, or it's too tedious to do yourself.

  • throw101010 4 days ago

    > I probably spent 80% of the time I would have spent writing it from scratch.

    A 20% time improvement sounds like a big win to me. That time can now be spent learning/improving skills.

    Obviously learning when to use a specific tool to solve a problem is important... just like you wouldn't use a hammer to clean your windows, using an LLM for problems you know have never really been tackled before will often yield subpar/non-functional results. But even in these cases the answers can be a source of inspiration for me, even if I end up having to solve the problem "manually".

    One question I've been thinking about lately is how this will work for people who have always had the LLM "crutch" from the moment they started learning how to solve problems. Will they skip a lot of the steps that currently help me know when to use an LLM and when it's rather pointless?

    And I've started thinking of LLMs for coding as a form of abstraction, just like we have had the "crutch" of high-level programming languages for years, many people never learned or even needed to learn any low-level programming and still became proficient developers.

    Obviously it isn't a perfect form of abstraction and they can have major issues with hallucinations, so the parallel isn't great... I'm still wondering how these models will integrate with the ways humans learn.

    • cageface 4 days ago

      The thing that limits my use of these tools is that it massively disrupts my mental flow to shift from coding to prompting and debugging the generated code.

      For self-contained tasks that aren't that complex they can save a lot of time but for features that require careful integration into a complex architecture I find them more than useless in their current state.

  • Mc91 4 days ago

    I've been using ChatGPT (paid) and Perplexity (unpaid) to help with different coding stuff. I've found it very helpful in some situations. There are some instructions I give it almost every time - "don't use Kotlin non-null assertions". Sometimes the code doesn't work. I have some idea of its strengths and limitations and have definitely found them useful. I understand there are other AI programming tools out there too.
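    To make the "don't use Kotlin non-null assertions" instruction concrete, here's a minimal sketch (the `shout` function is a hypothetical example, not from the thread) of why `!!` is often banned in generated code: it throws at runtime on null, whereas safe-call with a fallback handles null explicitly.

    ```kotlin
    // Hypothetical example: handling a nullable String without `!!`.
    // name!!.uppercase() would throw a NullPointerException when name == null;
    // the safe-call (?.) plus elvis (?:) version degrades gracefully instead.
    fun shout(name: String?): String {
        return name?.uppercase() ?: "ANON"  // null-safe, with an explicit default
    }

    fun main() {
        println(shout("ada"))  // ADA
        println(shout(null))   // ANON
    }
    ```

    LLMs trained on quick-and-dirty snippets reach for `!!` a lot, which is exactly the kind of subtle deficiency a standing instruction can head off.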

Eddy_Viscosity2 4 days ago

Diminishing returns means it's not getting better; it says nothing about the current state. So it's great that its current capabilities meet your needs, but if you had a different use case where it didn't quite work that well and were just waiting for the next version, your wait will be longer than past progress would suggest.

  • mewpmewp2 4 days ago

    It seems like it would still be too early to tell, especially since modern-level LLMs have been here for such a short period of time. And this person tried to predict the wall before GPT-4, which was a massive leap seemingly out of nowhere.

swatcoder 4 days ago

We've been learning new languages by tinkering on examples and following leads for decades longer than many people on this website have been alive.

Learning new programming languages wasn't a hurdle or mystery for anyone experienced in programming previously, and learning programming (well) in the first place ultimately needs a real mentor to intervene sooner rather than later anyway.

AI can replace following rote tutorials and engaging with real people on SO/forums/IRC, and deceive one into thinking they don't need a mentor, but all those alternatives are already there, already easily available, and provide very significant benefits for actual quality of learning.

Learning to code or to code in new languages with the help of AI is a thing now. But it's no revolution yet, and the diminishing returns problem suggests it probably won't become one.

__MatrixMan__ 4 days ago

I find that its capability is massively dependent on the availability of training data. It really struggles to write syntactically correct nushell, but it appears to be an emacs-lisp wizard. So even if we're up against some kind of ceiling, there's a lot of growth opportunity in getting it to be uniformly capable, rather than capable only in certain areas.

SoftTalker 4 days ago

You can do that with "hello, world" in any programming language.

poink 4 days ago

> Diminishing returns for investors maybe, but not for humans like me.

The diminishing returns for humans like you are in the training cost vs. the value you get out of it compared to simply reading a blog post or code sample (which is basically what the LLM is doing) and implementing yourself.

Sure, you might be happy at the current price point, but the current price point is lighting investor money on fire. How much are you willing to pay?

thefz 4 days ago

Then if you don't know anything about the language, good luck fixing the eventual bugs in the generated code.

player1234 3 days ago

Super cool bro'! Hey VCs, look here, I got the killer app, let's get our 100s of billions back. /s