Comment by EternalFury 4 days ago

If you "know literally nothing about a programming language", there are two key consequences: 1) You cannot determine if the code is idiomatic to that language, and 2) You may miss subtle deficiencies that could cause problems at scale. I’ve used LLMs for initial language conversion between languages I’m familiar with. It saved me a lot of time, but I still had to invest effort to get things right. I will never claim that LLMs aren’t useful, nor will I deny that they’re going to disrupt many industries...this much is obvious. However, it’s equally clear that much of the drama surrounding LLMs stems from the gap between the grand promises (AGI, ASI) and the likely limits of what these models can actually deliver. The challenge for OpenAI is this: If the path ahead isn’t as long as they initially thought, they’ll need to develop application-focused business lines to cover the costs of training and inference. That's a people business, rather than a data+GPU business. I once worked for an employer that used multi-linear regression to predict they’d be making $5 trillion in revenue by 2020. Their "scaling law" didn’t disappoint for more than a decade; but then it stopped working. That’s the thing with best-fit models and their projections: they work until they don’t, because the physical world is not a math equation.

mewpmewp2 4 days ago

It still requires effort, but it removes so many of those early hurdles that I often face and that demotivate me. E.g. I have constant "why" questions, which I can keep asking an LLM forever, since it has infinite patience. Answers to those are very difficult to find by Googling.