ahofmann 2 days ago

Ok, I'll bite. I predict that everything in this article is horse manure. AGI will not happen. LLMs will be tools that can automate stuff away, as they do today, and they will get slightly, or quite a bit, better at it. That will be all. See you in two years; I'm excited to see what the truth turns out to be.

Tenoke 2 days ago

That seems naive in a status-quo-bias way to me. Why and where do you expect AI progress to stop? It sounds like somewhere very close to where we are now, in your eyes. Why do you think there won't be many further improvements?

  • PollardsRho 2 days ago

    It seems to me that much of recent AI progress has not changed the fundamental scaling principles underlying the tech. Reasoning models are more effective, but at the cost of more computation: it's more for more, not more for less. The logarithmic relationship between model resources and model quality (as Altman himself has characterized it), phrased a different way, means that you need exponentially more energy and resources for each marginal increase in capabilities. GPT-4.5 is unimpressive in comparison to GPT-4, and at least from the outside it seems like it cost an awful lot of money. Maybe GPT-5 is slightly less unimpressive and significantly more expensive: is that the through-line that will lead to the singularity?
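    A minimal numeric sketch of that logarithmic claim (assuming quality ≈ k·log₂(compute), a deliberately simplified form of the relationship described above): each fixed gain in quality requires multiplying compute by a constant factor.

    ```python
    import math

    # Toy model (an assumption for illustration, not a fitted scaling law):
    # quality grows logarithmically with compute.
    def quality(compute, k=1.0):
        return k * math.log2(compute)

    # Inverse: compute required to reach a target quality grows exponentially.
    def compute_needed(target_quality, k=1.0):
        return 2 ** (target_quality / k)

    # Each +1 step in quality doubles the required compute.
    print([compute_needed(q) for q in range(1, 5)])  # [2.0, 4.0, 8.0, 16.0]
    ```

    Under this toy model, going from quality 10 to quality 11 costs as much additional compute as everything spent to reach quality 10 in the first place, which is the "exponentially more resources for each marginal increase" point.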

    Compare the automobile. Automobiles today are a lot nicer than they were 50 years ago, and a lot more efficient. Does that mean cars that never need fuel or recharging are coming soon, just because the trend has been higher efficiency? No, because the fundamental physical realities of drag still limit efficiency. Moreover, it turns out that making 100% efficient engines with 100% efficient regenerative brakes is really hard, and "just throw more research at it" isn't a silver bullet. That's not "there won't be many future improvements", but it is "those future improvements probably won't be any bigger than the jump from GPT-3 to o1, which does not extrapolate to what OP claims their models will do in 2027."

    AI in 2027 might be the metaphorical brand-new Lexus to today's beat-up Kia. That doesn't mean it will drive ten times faster, or take ten times less fuel. Even if high-end cars can be significantly more efficient than what average people drive, that doesn't mean the extra expense is actually worth it.

  • ahofmann 2 days ago

I write bog-standard PHP software. When GPT-4 came out, I was very frightened that my job could soon be automated away, because for PHP/Laravel/MySQL there must be a lot of training data.

The reality now is that current LLMs still often create stuff that costs me more time to fix than doing it myself would. So I still write a lot of code myself. It is very impressive that I can even consider giving up writing code myself. But my job as a software developer is very, very secure.

LLMs are very much unable to build maintainable software. They are unable to understand what humans want and what the codebase needs. The stuff they build is good-looking garbage. One example I saw yesterday: a dev committed code where the LLM created 50 lines of React code, complete with all those useless comments and, for good measure, a setTimeout() for something that should have been one HTML DIV with two Tailwind classes. They can't write idiomatic code, because they write exactly the code they were prompted for.

    Almost daily I get code, commit messages, and even issue discussions that are clearly AI-generated. And it costs me time to deal with good-looking but useless content.

To be honest, I hope that LLMs get better soon, because right now we are in an annoying phase where software developers bog me down with AI-generated stuff. It just looks good but doesn't help write usable software that can be deployed to production.

To get past this point, LLMs need to get maybe a hundred times faster, maybe a thousand or ten thousand times. They need a much bigger context window. Then they could have an inner dialogue in which they really "understand" how some feature should be built in a given codebase. That would be very useful. But it would also use so much energy that I doubt it would be cheaper to run those "thinking" steps over and over again than to pay a human to build the software. Perhaps this will be feasible in five or eight years. But not two.

    And this won't be AGI. This will still be a very, very fast stochastic parrot.

  • AnimalMuppet 2 days ago

ahofmann didn't expect AI progress to stop. They expected it to continue, but not to lead to AGI, which would not lead to superintelligence, which would not lead to a self-accelerating process of improvement.

    So the question is, do you think the current road leads to AGI? How far down the road is it? As far as I can see, there is not a "status quo bias" answer to those questions.

bayarearefugee 2 days ago

I predict AGI will be solved 5 years after full self driving which itself is 1 year out (same as it has been for the past 10 years).

mitthrowaway2 2 days ago

What's an example of an intellectual task that you don't think AI will be capable of by 2027?

  • jdauriemma 2 days ago

    Being accountable for telling the truth

    • myhf 2 days ago

      accountability sinks are all you need

  • kubb 2 days ago

It won't be able to write a compelling novel, build a software system solving a real-world problem, operate heavy machinery, create a sprite sheet or 3D models, design a building, or teach.

Long-term planning and execution, and operating in the physical world, are not within reach. Slight variations of known problems should be possible (as long as the size of the solution is small enough).

  • coolThingsFirst 2 days ago

    programming

    • lumenwrites 2 days ago

      Why would it get 60-80% as good as human programmers (which is what the current state of things feels like to me, as a programmer, using these tools for hours every day), but stop there?

      • burningion 2 days ago

        So I think there's an assumption you've made here, that the models are currently "60-80% as good as human programmers".

        If you look at code being generated by non-programmers (where you would expect to see these results!), you don't see output that is 60-80% of the output of domain experts (programmers) steering the models.

        I think we're extremely imprecise when we communicate in natural language, and this is part of the discrepancy between belief systems.

        Will an LLM model read a person's mind about what they want to build better than they can communicate?

        That's already what recommender systems (like the TikTok algorithm) do.

        But will LLMs be able to orchestrate and fill in the blanks of imprecision in our requests on their own, or will they need human steering?

        I think that's where there's a gap in (basically) belief systems of the future.

        If we truly get post human-level intelligence everywhere, there is no amount of "preparing" or "working with" the LLMs ahead of time that will save you from being rendered economically useless.

        This is mostly a question about how long the moat of human judgement lasts. I think there's an opportunity to work together to make things better than before, using these LLMs as tools that work _with_ us.

      • kody 2 days ago

        It's 60-80% as good as Stack Overflow copy-pasting programmers, sure, but those programmers were already providing questionable value.

        It's nowhere near as good as someone actually building and maintaining systems. It's barely able to vomit out an MVP and it's almost never capable of making a meaningful change to that MVP.

        If your experiences have been different that's fine, but in my day job I am spending more and more time just fixing crappy LLM code produced and merged by STAFF engineers. I really don't see that changing any time soon.

      • boringg 2 days ago

Because we still haven't figured out fusion, and it's been promised for decades. Why would everything that's been promised by people with highly vested interests pan out any differently?

        One is inherently a more challenging physics problem.

      • coolThingsFirst 2 days ago

        Try this, launch Cursor.

        Type: print all prime numbers which are divisible by 3 up to 1M

The result is that it will do a sieve. There's no need for this; the only such prime is 3.
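To spell out why a sieve is overkill here: any multiple of 3 larger than 3 has 3 as a proper factor and so cannot be prime. A minimal sketch (function names are my own, for illustration):

```python
# The only prime divisible by 3 is 3 itself: any larger multiple of 3
# has 3 as a proper factor, so no sieve is needed.
def primes_divisible_by_3(limit):
    return [3] if limit >= 3 else []

# Naive trial-division primality test, used only to sanity-check the claim.
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(primes_divisible_by_3(1_000_000))  # [3]
```

Brute-forcing even a small range confirms it: among all primes below 1000, only 3 is divisible by 3.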

        • mysfi 2 days ago

Just tried this with Gemini 2.5 Pro. It got it right, with a meaningful thought process.

    • mitthrowaway2 2 days ago

      Can you phrase this in a concrete way, so that in 2027 we can all agree whether it's true or false, rather than circling a "no true scotsman" argument?

      • abecedarius a day ago

        Good question. I tried to phrase a concrete-enough prediction 3.5 years ago, for 5 years out at the time: https://news.ycombinator.com/item?id=29020401

        It was surpassed around the beginning of this year, so you'll need to come up with a new one for 2027. Note that the other opinions in that older HN thread almost all expected less.

kristopolous 2 days ago

People want to live their lives free of finance and centralized personal information.

If you think most people like this stuff, you're living in a bubble. I use it every day, but the vast majority of people have no interest in using these nightmares of Philip K. Dick imagined by silicon dreamers.

jstummbillig 2 days ago

When is the earliest that you would have predicted where we are today?

meroes a day ago

I’m also unafraid to say it’s BS. I don’t even want to call it sci-fi. It’s propaganda.