Comment by alsetmusic 3 days ago

It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.

A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure). I was waiting for her to put her foot in her mouth and buy into the hype. She skillfully navigated the question in a way that won my respect.

I personally believe that a lot of investment money is going to evaporate before the market resets. What we're calling AI will continue to have certain uses, but investors will realize that the moonshot being promised is undeliverable and a lot of jobs will disappear. This will hurt the wider industry, and the economy by extension.

miki123211 3 days ago

I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.

This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not quite useless; there's a lot you can do with AI already. But many use cases that are obvious, and not only in retrospect, will only become possible once the tech matures.

  • bryanlarsen 3 days ago

    Some people even figured it out in the 80's. Sears founded (together with IBM) and ran Prodigy, a large BBS and eventually ISP. They were trying to set themselves up to become Amazon. Not only that: Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.

    Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.

    • tombert 3 days ago

      Today I learned that Sears founded Prodigy!

      Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.

      • kens 2 days ago

        On the topic of how Sears used to be high-tech: back in 1981, when IBM introduced the IBM PC, it was the first time that they needed to sell computers through retail. So they partnered with Sears, along with the Computerland chain of computer stores, since Sears was considered a reasonable place for a businessperson to buy a computer. To plan this, meetings were held at the Sears Tower, which was the world's tallest building at the time.

      • dh2022 3 days ago

        My favorite anecdote about Sears is from Starbucks' current HQ: the building used to be a warehouse for Sears. Before renovation, the first-floor walls next to the elevators displayed Sears' "commitment to customers" (or something like that).

        To me it read like it was written by Amazon decades earlier. Something about how Sears promised that customers would be 100% satisfied with their purchase, and if for whatever reason that was not the case, customers could return it to Sears and Sears would pay the return transportation charges.

      • gcanyon 2 days ago

        :-) Then it's going to blow your mind that CompuServe (while not founded by them) was a product of H&R Block.

      • htrp 3 days ago

        Blame short-sighted investors asking Sears to "focus".

    • djtango 2 days ago

      This is a great example that I hadn't heard of, and it reminds me of when Nintendo tried to become an ISP by building the Family Computer Network System in 1988.

      A16Z once talked about how the scars of being too early cause investors/companies to become fixated on the idea that something will never work. Then some new, younger people who never got burned try the same idea, and it works.

      Prodigy and the Faminet probably fall into that bucket, along with a lot of early internet companies that tried things early, got burned, and then were possibly too late to capitalise when the time was finally right for the idea to flourish.

      • mschuster91 2 days ago

        Reminds me of Elon not taking no for an answer. He did it twice, with massive success.

        A true shame to see how he's completely lost the plot with Tesla; the competition, particularly from China, is eating them alive. And in space, it's a matter of years until the rest of the world catches up.

        And now he's run out of tricks, and more importantly, out of public support. He can't pivot any more; his entire brand is too toxic to touch.

    • tracker1 2 days ago

      On the flip side, they didn't actually learn that lesson... that it was a matter of immature tech with relatively limited reach... by the time the mid-90's rolled around, "the internet is just a fad" was pretty much the sentiment from Sears' leadership...

      They literally killed their catalog sales right when they should have been ramping them up and putting the catalog online. They could easily have beaten Amazon at everything other than books.

    • Imustaskforhelp 2 days ago

      My cousin used to tell me that things work because they were the right thing at the right time. I think he gave Amazon as the example, too.

      But I guess in startup culture, one has to die trying to find the right time; sure, one can run surveys to get a feel for it, but the only way to ever know whether it's the right time is user feedback once it's launched, and over time.

    • cyanydeez 2 days ago

      the problem is that ISPs became a utility, not some fountain of unlimited growth.

      What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for.

      I agree though, it's fundamentally a utility, which means there's more value in proper government authority than in private interests.

      • bryanlarsen 2 days ago

        Sears started Prodigy to become Amazon, not Comcast.

        • cyanydeez 2 days ago

          The product itself determines whether it's a utility, not the business interest. Assuming democracy works correctly, only a dysfunctional government ignores natural monopolies.

    • outside1234 3 days ago

      Newton at Apple is another great one, though they of course got there eventually.

      • platevoltage 2 days ago

        They sure did. This reminds me of when I was in the local Mac Dealer right after the iPod came out. The employees were laughing together saying “nobody is going to buy this thing”.

  • deegles 2 days ago

    > We're clearly seeing what AI will eventually be able to do

    Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?

    Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.

    For example: the "there are three 'b's in blueberry" problem was caused by so much training data responding to "there are two r's in strawberry". It's a systemic issue. No amount of data will solve it, because LLMs will -never- be sentient.
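
    (A rough aside on why letter-counting trips LLMs up: models consume tokens, not characters. A minimal sketch, assuming the third-party tiktoken library is installed; the exact splits it prints are illustrative and vary by tokenizer.)

      import tiktoken

      # GPT-4-era tokenizer; other models use different vocabularies.
      enc = tiktoken.get_encoding("cl100k_base")

      for word in ["strawberry", "blueberry"]:
          pieces = [enc.decode_single_token_bytes(t).decode("utf-8")
                    for t in enc.encode(word)]
          # The model sees multi-character chunks, not letters, so a
          # question like "how many r's?" is not directly readable
          # off its own input.
          print(word, "->", pieces)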

    Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.

    • hnfong 2 days ago

      I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc.

      Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.

      • deegles 2 days ago

        You know, you're right. It -also- sucks at those tasks because on top of the issue you mention, unedited LLM text is identifiable if you get used to its patterns.

      • h4ck_th3_pl4n3t 2 days ago

        By definition, transformers can never exceed average.

        That is the thing, and what companies pushing LLMs don't seem to realize yet.

      • mdemare 2 days ago

        Exactly. Books are still being translated by human translators.

        I have a text on my computer, the first couple of paragraphs of the Dutch novel "De aanslag", and every few years I feed it to the leading machine translation sites. Invariably, the results are atrocious. Don't get me wrong, the translation is quite understandable, but the text is wooden, and it contains 3 or 4 outright translation blunders.

        GPT-5 output for example:

        Far, far away in the Second World War, a certain Anton Steenwijk lived with his parents and his brother on the edge of Haarlem. Along a quay, which ran for a hundred meters beside the water and then, with a gentle curve, turned back into an ordinary street, stood four houses not far apart. Each surrounded by a garden, with their small balconies, bay windows, and steep roofs, they had the appearance of villas, although they were more small than large; in the upstairs rooms, all the walls slanted. They stood there with peeling paint and somewhat dilapidated, for even in the thirties little had been done to them. Each bore a respectable, bourgeois name from more carefree days:

        Welgelegen
        Buitenrust
        Nooitgedacht
        Rustenburg

        Anton lived in the second house from the left: the one with the thatched roof. It already had that name when his parents rented it shortly before the war; his father had first called it Eleutheria or something like that, but then written in Greek letters. Even before the catastrophe occurred, Anton had not understood the name Buitenrust as the calm of being outside, but rather as something that was outside rest—just as extraordinary does not refer to the ordinary nature of the outside (and still less to living outside in general), but to something that is precisely not ordinary.

    • rstuart4133 2 days ago

      > Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end.

      I consider myself an LLM skeptic, but gee, saying they are a "dead end" seems harsh.

      Before LLMs came along, computers understanding human language was a graveyard that academics went to end their careers in. Now computers are better at it, and far faster, than most humans.

      LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in GB, and it seems to have a pretty good general knowledge of everything on the internet. Again, far better than any human could do. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt.

      Merely being able to understand language or having a good memory is not sufficient to code or do a lot else on its own. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.

      • deegles 2 days ago

        > it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.

        That's just it. LLMs are a component; they generate text or images from a higher-level description but are not themselves "intelligent". If you imagine the language center of your brain being replaced with a tiny LLM-powered chip, you would not say the chip is sentient. It translates your thoughts into words, which you then choose to speak or not. That's all modulated by consciousness.

    • miki123211 2 days ago

      > If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?

      When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. During the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future.

      The real problem with LLM advice is that it is harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong.

      • mrtranscendence 2 days ago

        Microsoft claims that they have an AI setup that outperforms human doctors on diagnosis tasks: https://microsoft.ai/new/the-path-to-medical-superintelligen...

        "MAI-DxO boosted the diagnostic performance of every model we tested. The best performing setup was MAI-DxO paired with OpenAI’s o3, which correctly solved 85.5% of the NEJM benchmark cases. For comparison, we also evaluated 21 practicing physicians from the US and UK, each with 5-20 years of clinical experience. On the same tasks, these experts achieved a mean accuracy of 20% across completed cases."

        Of course, AI "doctors" can't do physical examinations and the best performing models cost thousands to run per case. This is also a test of diagnosis, not of treatment.

      • randomNumber7 2 days ago

        If you consider how little time doctors have to look at you (at least in Germany's half-broken public health sector) and how little they actually care...

        I think x is already higher than y for me.

        • deegles 2 days ago

          That's fair. Reliable access to a 70% expert is better than no access to a 99% expert.

    • [removed] 2 days ago
      [deleted]
  • me551ah 3 days ago

    Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips, and Moore's law is already dead.

    So newer chips will not be exponentially better, just incremental improvements; unless the price of electricity comes down exponentially, we might never see AGI at a price point that's cheaper than hiring a human.

    Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.

    The reason the internet, smartphones, and computers saw exponential growth from the 90s onward is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.

    • mindcrime 3 days ago

      > Scaling AI will require an exponential increase in compute and processing power,

      A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, one that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that, of course, but I take the human brain as an existence proof that some kind of machine can provide human-level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPUs.

      • lawlessone 3 days ago

        DeepMind were experimenting with this https://github.com/google-deepmind/lab a few years ago.

        Having AI agents learn to see, navigate, and complete tasks in a 3D environment. I feel like it had more potential than LLMs to become AGI (if that is possible).

        They haven't touched it in a long time though. But Genie 3 makes me think they haven't completely dropped it.

      • fluoridation 2 days ago

        If we suppose that ANNs are more or less accurate models of real neural networks, the reason they're so inefficient is not algorithmic but purely architectural: they're just software. We have these huge tables of numbers and we're trying to squeeze them as hard as possible through a relatively small number of multipliers and adders. Meanwhile, a brain can perform a trillion fundamental operations simultaneously, because every neuron is a complete processing element independent of every other one. To bring that back into more concrete terms: if we took an arbitrary model and turned it into a bespoke piece of hardware, it would certainly be at least one or two orders of magnitude faster and more efficient, with the downside that, being dead silicon, it could not be changed and iterated on.

    • foobarian 2 days ago

      > Scaling AI will require an exponential increase in compute and processing power,

      I think there is something more happening with AI scaling: the incremental cost per user is a lot higher. Compare to the big initial internet companies: you added one server and could handle thousands more users; the incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue can cover that it's hard to break even, even with an actual paid subscription.

      • RugnirViking 2 days ago

        I don't fully get why; inference costs are way lower than training costs, no?

    • thfuran 2 days ago

      We know for a fact that human level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere from about 20-100W, depending on how much of the rest of the body's metabolism you attribute to supporting it.
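
      (A back-of-the-envelope version of that 20-100W range, as a plain-Python sketch; the metabolic figures are rough textbook numbers, not something from this thread.)

        body_watts = 100       # approximate resting metabolic rate of an adult, in watts
        brain_fraction = 0.20  # commonly cited share of resting metabolism used by the brain

        brain_only = brain_fraction * body_watts  # ~20 W: the organ by itself
        brain_plus_support = body_watts           # ~100 W: charge the whole body's support systems to it

        print(f"{brain_only:.0f} W to {brain_plus_support:.0f} W")  # 20 W to 100 W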

      • hyperbovine 2 days ago

        The fact that the human brain, heck all brains, are so much more efficient than “state of the art” nnets, in terms of architecture, power consumption, training cost, what have you … while also being way more versatile and robust … is what convinces me that this is not the path that leads to AGI.

    • miki123211 2 days ago

      > We are already at the limit of how small we can scale chips

      I strongly suspect this is not true for LLMs. Once progress stabilizes, doing things like embedding the weights of a model directly into the chip will suddenly become economical, and that's going to cut costs dramatically.

      Then there's distillation, which basically makes smaller models get better as bigger models get better. You don't necessarily need to run a big model all of the time to reap its benefits.

      > so unless the price of electricity comes down exponentially

      This is more likely than you think. AI is extremely bandwidth-efficient and not too latency-sensitive (unlike e.g. Netflix et al), so it's pretty trivial to offload AI work to places where electricity is abundant and power generation is lightly regulated.

      > Most companies are already running AI models at a loss, scaling the models to be bigger (like GPT 4.5) only makes them more expensive to run.

      "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." Sam Altman, OpenAI CEO[1].

      [1] https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...

      • thfuran a day ago

        > doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.

        An implementation of inference for some specific ANN in fixed-function analog hardware can probably pretty easily beat a commodity GPU by a couple orders of magnitude in perf per watt, too.

      • mrtranscendence 2 days ago

        > "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."

        That's OpenAI (though I'd be curious if that statement holds for subscriptions as opposed to API use). What about the downstream companies that use OpenAI models? I'm not sure the picture is as rosy for them.

  • armada651 2 days ago

    > The groundwork has been laid, and it's not too hard to see the shape of things to come.

    The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.

    • jimbokun 2 days ago

      Is it still giving people headaches and making them nauseous?

      • armada651 2 days ago

        Yes, it still gives people headaches because the convergence-accommodation conflict remains unsolved. We have a few different technologies to address that, but they're expensive, don't fully address the issue, and none of them have moved beyond the prototype stage.

        Motion sickness can be mostly addressed with game design, but some people will still get sick regardless of what you try. Mind you, some people also get motion sick by watching a first-person shooter on a flat screen, so I'm not sure we'll ever get to a point where no one ever gets motion sick in VR.

        • duderific 2 days ago

          > Mind you, some people also get motion sick by watching a first-person shooter on a flat screen

          Yep I'm that guy. I blame it on being old.

  • matthewdgreen 2 days ago

    As someone who was a customer of Netflix from the dialup to the broadband world, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" phase of, say, streaming video in 2001 -- whereas I think you mean to indicate we're trying to do Netflix back in the 1980s, when the tech for widespread broadband was just fundamentally not available.

    • tracker1 2 days ago

      Oh, like RealPlayer in the late 90's (buffering... buffering...)

      • matthewdgreen 2 days ago

        RealPlayer in the late 90s turned into (working) Napster, Gnutella and then the iPod in 2001, Podcasts (without the name) immediately after, with the name in 2004, Pandora in 2005, Spotify in 2008. So a decade from crummy idea to the companies we’re familiar with today, but slowed down by tremendous need for new (distributed) broadband infrastructure and complicated by IP arrangements. I guess 10 years seems like a long time from the front end, but looking back it’s nothing. Don’t go buying yourself a Tower Records.

  • nutjob2 3 days ago

    > We're clearly seeing what AI will eventually be able to do

    I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.

    Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.

  • skeezyboy 3 days ago

    > I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

    I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games.

    • dml2135 3 days ago

      It's a logical fallacy to assume that because some technology experienced a period of exponential growth, all technology will always experience constant exponential growth.

      There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.

      We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.

      Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.

      The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.

      "Progress" moves in fits and starts. It is the furthest thing from inevitable.

      • novembermike 2 days ago

        Most growth is actually logistic: an S-shaped curve that starts out exponential but slows down rapidly as it approaches some asymptote. In fact, basically everything we see as exponential in the real world is logistic.

      • jopsen 2 days ago

        True, but adoption of AI has certainly seen exponential growth.

        Improvement of models may not continue to be exponential.

        But models might be good enough; at this point it seems more like they need integration and context.

        I could be wrong :)

    • echelon 3 days ago

      Speaking of Netflix -

      I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.

      Image and video AI is nothing short of revolutionary. It's already having a huge impact, and it's disrupting every single business it touches.

      I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.

      For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking on ten times the workload.

      • AnotherGoodName 3 days ago

        Fwiw, LLMs are also revolutionary. There's currently more anti-AI hype than AI hype imho. As in, there are literally people claiming it's completely useless and not going to change a thing. Which is crazy.

      • didibus 3 days ago

        You're right, and I also think LLMs have an impact.

        The issue is that, the way the market is investing, they are looking for massive growth, in the multiples.

        That growth can't really come from cutting costs. It has to come from creating new demand for new things.

        I think that's what hasn't happened yet.

        Are diffusion models increasing the demand for video and image content? Are they getting customers to spend more on shows, games, and so on? Will they lead to the creation of a whole new consumption medium?

      • jaimebuelta 3 days ago

      I see the point at the moment for "low quality advertising", but we are still far from high-quality AI-generated video.

      It's the equivalent of those cheap digital effects. They look bad in a Hollywood movie, but they allow students to shoot their action home movies.

      • mh- 3 days ago

        It's quite incredible how fast the generative media stuff is moving.

        The self-hostable models are improving rapidly. How capable and accessible WAN 2.2 is (text+image to video; fully local if you have the VRAM) would have felt unimaginable last year, when OpenAI released Sora (closed/hosted).

    • dormento 2 days ago

      > I did think the same thing about the 8bit era of video games.

      Can you elaborate? That sounds interesting.

      • skeezyboy 2 days ago

        Too soon to get to market, though it obviously all sold perfectly well; people were sufficiently wowed by it.

  • Q6T46nT668w6i3m 3 days ago

    There’s no evidence that it’ll scale like that. Progress in AI has always been a step function.

    • ghurtado 3 days ago

      There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs.

      > Progress in AI has always been a step function.

      There's decisively no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement.

      • ezst 2 days ago

        > There's also no evidence that it won't

        There are signs, though. Every "AI" cycle, ever, has revolved around some algorithmic discovery, followed by a winter spent in search of the next one. This one is no different, propped up by LLMs, whose limitations we know quite well by now: "intelligence" is elusive, throwing more compute at them produces vastly diminishing returns, and throwing more training data at them is no longer feasible (we came up short even before the well got poisoned). Now the competitors are stuck at the same level, within percentage points of one another, with the differences explained by fine-tuning techniques rather than technical prowess. Unless a cool new technique came along yesterday to dislodge LLMs, we are in for a new winter.

        • mschuster91 2 days ago

          Oh, I believe that while LLMs are a dead end now, the applications of AI in vision and in the physical world (i.e. robots with limbs) will usher in yet another wrecking of the lower classes of society.

          Just as AI has killed off all demand for lower-skill work in copywriting, translation, design, and coding, it will do so for manufacturing. And that will be a dangerous bloodbath, because there will not be enough juniors any more to replace the seniors aging out or quitting in frustration at being reduced to cleaning up AI crap.

      • dml2135 3 days ago

        What is your definition of "evidence" here? The evidence, in my view, is physical (as in, available computing power) and algorithmic limitations.

        We don't expect steel to suddenly have new properties, and we don't expect bubble sort to suddenly run in O(n) time. You could ask -- well what is the evidence they won't, but it's a silly question -- the evidence is our knowledge of how things work.

        Saying that improvement in AI is inevitable depends on the assumption of new discoveries and new algorithms beyond the current corpus of machine learning. They may happen, or they may not, but I think the burden of proof is higher on those spending money in a way that assumes it will happen.

      • Q6T46nT668w6i3m 2 days ago

        I don’t follow. We have benchmarks that have survived decades and illustrate the steps.

    • the8472 3 days ago

      rodent -> homo sapiens brain scales just fine? It's tenuous evidence, but not zero.

    • ninetyninenine 3 days ago

      Uh, there have been multiple repeated step-ups in the last 15 years. The trend line is up, up, up.

    • eichin 2 days ago

      The innovation here is that the step function didn't traditionally go down.

  • [removed] 2 days ago
    [deleted]
  • i_love_retros 3 days ago

    Is some potential AGI breakthrough in the future going to come from LLMs, or will they plateau in terms of capabilities?

    It's hard for me to imagine Skynet growing from ChatGPT.

    • whatevaa 2 days ago

      The old story of the paperclip AI shows that AGI is not needed for a sufficiently smart computer to be dangerous.

  • thefourthchime 3 days ago

    I'm starting to agree with this viewpoint. As the technology seems to solidify to roughly what we can do now, the aspirations are going to have to get cut back until there's a couple more breakthroughs.

    • [removed] 3 days ago
      [deleted]
  • kokanee 2 days ago

    I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.

    What will it mean if the cutting-edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider's; you might as well be DigitalOcean.

StopDisinfo910 2 days ago

> A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure)

If you had actually invested in AI pure plays and Nvidia, the shovel seller, a couple of years ago and sold today, you would have made a pretty penny.

The hard thing with potential bubbles is not avoiding them entirely; it's getting in early enough and not being left holding the bag at the end.

  • bcrosby95 2 days ago

    Financial advisors usually work on holistic plans, not short-term ones. It isn't about timing markets; it's about a steady hand that doesn't panic and makes sure you don't get caught with your pants down when you need cash.

    • what_ever 2 days ago

      Hard to know what OP asked for, but if they asked about AI specifically, the advice does not need to be holistic.

  • aksss 2 days ago

    Are you bearish on the shovel seller? Is now the time to sell out? I'm still +40% on nvda - quite late to the game but people still seem to be buying the shovels.

    • BrawnyBadger53 2 days ago

      Personal opinion, I'm bearish on the shovel seller long term because the companies that are training AI are likely to build their own hardware. Google already does this. Seems like a matter of time for the rest of the mag 7 to join. The rest of the buyers aren't growing enough to offset that loss imo.

      • godelski 2 days ago

        FWIW, Nvidia's moat isn't hardware, and they know this (they even talk about it). Hardware-wise, AMD is neck and neck with them, but AMD still doesn't have a CUDA equivalent. CUDA is the moat. As painful as it is to use, there's a long way to go for companies like AMD to compete here. Their software is still pretty far behind, despite their rapid and impressive advancements. It will also take time for developer experience to saturate the market, and that will likely mean AMD needs a real edge over Nvidia, like adding things Nvidia can't do or being much more cost-competitive. And that's not something like adding more VRAM or just taking smaller profit margins, because Nvidia can respond to those fairly easily.

        That said, I still suggested the parent sell. Real money is better than potential money. Classic gambler's fallacy, right? FOMO is letting hindsight get in the way of foresight.

    • godelski 2 days ago

      What's the old Rockefeller quip? When your shoe shiner is giving you stock advice, it's time to sell (you may have heard the taxicab driver version).

      It depends on how risk-averse you are and how much money you have in there.

      If you're happy with those returns, sell. FOMO is dumb. You can't time the market; the information just isn't available. If those shares are worth a meaningful amount of money, sell. Take your wins and walk away. A bird in the hand is worth two in the bush, right? That money isn't worth anything until it is realized[0].

      Think about it this way: how much more would you need to make to risk making nothing? Or losing money? This is probably the most important question when investing.

      If you're a little risk-averse, or a good chunk of your portfolio is in it, sell 50-80% of it and then diversify. You're taking wins and restructuring.

      If you wanna YOLO, then YOLO.

      My advice? Don't let hindsight get in the way of foresight.

      [0] I had some Nvidia stock at 450 and sold at 900 (before the split, so it would be $90 today). I definitely would have made more money if I had kept it, almost double if I had sold today! But I don't look back for a second. I sold those shares and was able to pay off my student debt. Having that debt paid off is still the better decision in my mind, because I can't predict the future. I could have sold 2 weeks later and made less! Or in April of this year and made the same amount.

    • StopDisinfo910 2 days ago

      I have absolutely no clue whatsoever. I have zero insider information. For all I know, the bubble could pop tomorrow or we might be at the beginning of a shift of a similar magnitude to the industrial revolution. If I could reliably tell, I wouldn’t tell you anyway. I would be getting rich.

      I'm just amused by people who think they are financially more clever by taking conservative positions. At that point, just buy an ETF. That's even more diversification than buying Microsoft.

torginus 3 days ago

It boggles the mind that this kind of management is what it takes to create one of the most valuable companies in the world (and to become one of the world's richest people in the process).

  • benterix 3 days ago

    It's a cliche but people really underestimate and try to downplay the role of luck[0].

    [0] https://www.scientificamerican.com/blog/beautiful-minds/the-...

    • Aurornis 2 days ago

      People also underestimate the value of maximizing opportunities for luck. If we think of luck as random external chance that we can't control, then what can we control? Doing things that increase your exposure to opportunities without spreading yourself too thin is the key. Easier said than done to strike that balance, but getting out there and trying a lot of things is a viable strategy even if only a few of them pay off. The trick is deciding how long to stick with something that doesn't appear to be working out.

      • [removed] 2 days ago
        [deleted]
      • not_the_fda 2 days ago

        Sure helps to be born wealthy, go to private school, and attend an Ivy League college.

    • jauntywundrkind 3 days ago

      Luck. And capturing strong network effects.

      The ascents of the era all feel like examples of anti-markets: of having gotten yourself into an intermediary position where you control both sides' access.

    • ericd 2 days ago

      Ability vastly increases your luck surface area. A single poker hand has a lot of luck, and even a single game does, but over long periods ability starts to strongly differentiate people's results.
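
      (A quick simulation of that point, with made-up numbers: each hand is mostly noise, but a small per-hand edge dominates once enough hands are played.)

        import random

        def cumulative_winnings(edge: float, hands: int) -> float:
            # Each hand: a large random swing (luck) plus a small skill edge.
            return sum(random.gauss(edge, 10.0) for _ in range(hands))

        random.seed(1)
        for hands in (1, 100, 100_000):
            skilled = cumulative_winnings(edge=0.5, hands=hands)
            unskilled = cumulative_winnings(edge=0.0, hands=hands)
            # Few hands: noise dominates. Many hands: the edge dominates.
            print(hands, round(skilled), round(unskilled))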

      • quantified 2 days ago

        Win a monster pot and you can play a lot of more interesting hands.

      • whatevertrevor 2 days ago

        Except you can play hundreds of thousands of poker hands in your lifetime, but only have time/energy/money to start a handful of businesses.

    • marknutter 3 days ago

      [flagged]

      • Miraste 3 days ago

        This might be true for a normal definition of success, but not lottery-winner-style success like Facebook. If you look at Microsoft, Netflix, Apple, Amazon, Google, and so on, the founders all had few or zero previous attempts at starting a business. My theory is that this leads them to pursue risky behavior that more experienced leaders wouldn't try, and because they were in the right place at the right time, that earned them the largest rewards.

        • technotony 3 days ago

          Not true of Netflix; the founder came from PayPal. And Apple required its founder to leave and learn at a bunch of other companies, like Pixar and NeXT.

      • oa335 3 days ago

        What "massive string of failed attempts" did Zuckerberg or Bezos ever accumulate?

      • michaelt 3 days ago

        This is just cope for people with a massive string of failed attempts and no successes.

        Daddy's giving you another $50,000 because he loves you, not because he expects your seventh business (blockchain for yoga studio class bookings) is going to go any better than the last six.

      • tovej 3 days ago

        IMO this strengthens the case for luck. If the probability of winning the lottery is P, then trying N times gives you a probability of 1-(1-P)^N of winning at least once.

        Who's more likely to win, someone with one lottery ticket or someone with a hundred?
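
        (A quick worked example of that formula in plain Python, with an arbitrary illustrative P:)

          # P(at least one win in N tries) = 1 - (1 - P)**N
          def p_any_win(p: float, n: int) -> float:
              return 1 - (1 - p) ** n

          p = 1e-6  # a one-in-a-million ticket, purely illustrative
          print(p_any_win(p, 1))    # ~1.0e-06
          print(p_any_win(p, 100))  # ~1.0e-04, roughly 100x the single-ticket odds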

      • ghurtado 3 days ago

        "Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires."

        Some will read this and laser in on the "socialism" part, but obviously the interesting bit is the second half of the quote.

        • belter 3 days ago

          That phrase explains the US Health Care System

    • UltraSane 3 days ago

      Every billionaire could have died from childhood cancer.

  • jocaal 3 days ago

    Past a certain point, skill doesn't contribute to the magnitude of success; it becomes all luck. There are plenty of smart people on earth, but there can only be one founder of Facebook.

    • vovavili 3 days ago

      Plenty of smart people prefer not to try their luck, though. A smart but risk-avoidant person will never be the one to create Facebook either.

      • estearum 3 days ago

        Plenty of them do try and fail, and then one succeeds, and it doesn't mean that person is intrinsically smarter/wiser/better/etc than the others.

        There are far, far more external factors on a business's success than internal ones, especially early on.

      • dgfitz 3 days ago

        What risk was there in creating Facebook? I don't see it.

        Dude makes a website in his dorm room and I guess eventually accepts free money he is not obligated to pay back.

        What risk?

        • CamperBob2 3 days ago

          Once you go deep enough into a personal passion project like that, you run a serious risk of flunking out of school. For most people that feels like a big deal. And for those of us with fewer alternatives in life, it's usually enough to keep us on the straight and narrow path.

          People from wealthy backgrounds often have less fear of failure, which is a big reason why success disproportionately favors that clique. But frankly, most people in that position are more likely to abuse it or ignore it than to take advantage of it. For people like Zuckerberg and Dell and Gates, the easiest thing to do would have been to slack off, chill out, play their expected role and coast through life... just like most of their peers did.

    • miki123211 3 days ago

      I view success as the product of three factors: luck, skill, and hard work.

      If any of these is 0, you fail, regardless of how high the other two are. Extraordinary success needs all three to be extremely high.

      • whodidntante 3 days ago

        There is another dimension, mostly but not fully characterized as perseverance, and many times with an added dose of ruthlessness.

        Microsoft, Facebook, Uber, Google, and many others all had strong doses of ruthlessness.

      • benterix 3 days ago

        Or you can just have rich parents and do nothing, and still be considered successful. What you say only applies to people who start from zero, and even then I'd call luck the dominant factor (based on observing my skillful and hardworking but not really successful friends).

      • nirav72 3 days ago

        > luck, skill and hard work.

        Another key component is knowing the right people or the network you're in. I've known a few people that lacked 2 of those 3 things and yet somehow succeeded. Simply because of the people they knew.

  • ninetyninenine 3 days ago

    Giving a $1.5 million salary is nothing for these people.

    It shouldn't be mind-boggling. They see revolutionary technology that has the potential to change the world and is changing the world already. Making a gamble like that is worth it, because losing is trivial compared to the upside of success.

    You are where you are and not where they are because your mind is boggled by winning strategies that are designed to arrive at success through losing and dancing around the risk of losing.

    Obviously Mark is where he is also because of luck. But he's not an idiot, and clearly it's not all luck.

    • epolanski 3 days ago

      But how is it worth it for Meta, since they won't really monetize it?

      At least the others can kinda bundle it as a service.

      After spending tens of billions on AI, how has it added a single dollar to Meta's revenue?

      • amalcon 2 days ago

        The not-so-secret truth is that the "killer apps" for deep neural networks are not LLMs or diffusion models. Those are very useful, but the most valuable applications in this space are content recommendation and ad targeting. It's obvious how Meta can use those things.

        The genAI stuff is likely part talent play (bring in good people with the hot field and they'll help with the boring one), part R&D play (innovations in genAI are frequently applicable to ad targeting), and part moonshot (if it really does pan out in the way boosters seem to think, monetization won't really be a problem).

      • ninetyninenine 3 days ago

        > But how is it worth it for Meta, since they won't really monetize it?

        Meta needs growth, as their main platform is slowing down. To move forward they need to gamble on potential growth. VR was a gamble; they bombed that one. This is another gamble.

        They're not stupid. All the risks you're aware of, they're also aware of. They were aware of the risks for VR too. They need to find a new high-growth niche. Gambling on something with even a 40% chance of exploding into success is a good bet for them, given their massive resources.

      • anshumankmr 2 days ago

        Isn't Meta doing some limited rollout of Llama as an API? Still, I haven't gotten my hands on it, so I can't say for sure whether it is currently paid or not, but that could drive some revenue.

  • ghurtado 3 days ago

    When you start to think about who exactly determines what makes a valuable company, and if you believe in the buffalo herd theory, then it makes a little bit of sense.

  • saubeidl 3 days ago

    It all makes much more sense when you start to realize that capitalism is a casino in which the already rich have a lot more chips to bet and meritocracy is a comforting lie.

    • aspenmayer 2 days ago

      > meritocracy is a comforting lie.

      Meritocracy used to be a dirty word, before my time, of course, but for different reasons than you may think. Think about the racial quotas in college admissions and you’ll maybe see why the powers that be didn’t want merit to be a determining factor at that time.

      Now that the status quo is in charge of college admissions, we don’t need those quotas generally, and yet meritocracy still can’t save us. The problem of merit is that we rarely need the best person for a given job, and those with means can be groomed their entire life to do that job, if it’s profitable enough. Work shouldn’t be charity either, as work needs to get done, after all, and it’s called work instead of charity or slavery for good reasons, but being too good at your job at your current pay rate can make you unpromotable, which is a trap just as hard to see as the trap of meritocracy.

      Meritocracy is ego-stroking writ large if you get picked, just so we can remind you that you’re just the best one for our job that applied, and we can replace you at any time, likely for less money.

  • PhantomHour 3 days ago

    The answer is fairly straightforward. It's fraud, and lots of it.

    An honest businessman wouldn't put his company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked.

    An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about.

    An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, illegal both within the US and internationally as "dumping".

    Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If the bubble takes too long to burst, if Zuckerberg gets too much time to shit up Facebook, too much time for advertisers to wise up to how many of their impressions are bots, they might collapse entirely.

    The rest of Big Tech is not much better. Microsoft's and Google's CEOs are fools who run their mouths. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.

    • NickC25 3 days ago

      As I've said in other comments - expecting honesty and ethical behavior from Mark Zuckerberg is a fool's errand at best. He has unchecked power and cannot be voted out by shareholders.

      He will say whatever he wants, and because the returns have been pretty decent so far, people will just take his word for it. There aren't enough Class A shares to actually force his hand on anything he doesn't want to do.

      • PhantomHour 3 days ago

        Zuckerberg started as a sex pest and got not an iota better.

        But we could, as a society, stop rewarding him for this shit. He'd be an irrelevant fool if we had appropriate regulations around the most severe of his misdeeds.

        • NickC25 2 days ago

          Unfortunately I think that ship has sailed.

          And since we live in the era of the real golden rule (i.e. "he who has the gold makes the rules"), we'll never get the chance to catch that ship. Mark lives in his own world, because we gave him a quarter trillion dollars and never so much as slapped him on the wrist.

    • dgs_sgd 2 days ago

      What is a good resource to read about the ad fraud? This is the first I'm hearing of that.

      • jbreckmckye 2 days ago

        I used to work in adtech. I don't have any direct information, but I assume this relates to the persistent rumours that Facebook inflates impressions and turns a blind eye to bot activity.

    • travisgriggs 3 days ago

      Ha ha.

      You used “honest” and “businessman” in the same sentence.

      Good one.

  • balamatom 3 days ago

    I'll differ from the siblingposters who compare it to the luck of the draw, essentially explaining this away as the excusable randomness of confusion rather than the insidious evil of stupidity; while the "it's fraud" perspective presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about.

    Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant.

    Gee, what makes it grow so big though? The power of human ambition?

    And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist.

    To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept because it corresponds to a mental image of a human or group of humans. That's some sort of default framing, such as can only be provided to boggle the mind, considering that they'll keep doing this and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person", but he's not the one with the impostor syndrome.

    For ours are human minds, optimized to view things in person-terms and Dunbar counts; even the Invisible Hand of the market is hand-shaped. But last time I checked, my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead I would posit that for an entity like Facebook, performing an action that does not look completely ridiculous from the viewpoint of an individual observer is the equivalent of an anatomical impossibility. It did evolve, after all, from American college students.

    See also: "Beyond Power / Knowledge", Graeber 2006.

    • ghurtado 3 days ago

      why is there so much of this on HN? I'm on a few social networks, but this is the only one where I find these quasi-spiritual, stream-of-consciousness, steadily lengthening, pseudo-technical word-salad diatribes.

      It's very unique to this site, and these types of comments all have an eerily similar vibe.

      • Karrot_Kream 2 days ago

        This is pretty common on HN but not unique to it. Lots of rationalist adjacent content (like stuff on LessWrong, replies to Scott Alexander's substack, etc) has it also. Here I think it comes from users that try to intellectualize their not-very-intellectual, stream of consciousness style thoughts, as if using technical jargon to convey your feelings makes them more rational and less emotional.

      • JumpCrisscross 2 days ago

        Between “presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about,” before going on and sharing an opinion on that subject, and “even the Invisible Hand of the market is hand-shaped,” I think it may just be AI slop.

      • balamatom 3 days ago

        >why is there so much of this on HN?

        Where?

blitzar 3 days ago

> record-setting bonuses they were doling out to hire the top minds in AI

That was soooo 2 weeks ago.

mrits 3 days ago

I think we will see the opposite. If we made no progress with LLMs, we'd still have huge advancements and growth opportunities from enhancing the workflows and tuning them to domain-specific tasks.

  • evilduck 3 days ago

    I think you could both be right at the same time. We will see a large number of VC funded AI startup companies and feature clones vanish soon, and we will also see current or future LLMs continue to make inroads into existing business processes and increase productivity and profitability.

    Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other along with all other non-VS Code IDEs cloning those features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, Opencode all doing basically the same thing with their special branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally creating and doing the same thing globally. Every single software vendor can't innovate and integrate AI features faster than AI companies themselves can build better tooling to automate that company's tools for them. It reeks of the 90's when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead or have a slightly longer runway and claim the whole thing.

    • [removed] 3 days ago
      [deleted]
  • sebstefan 3 days ago

    I agree with this, but how will these companies make money? Short of a breakthrough, the consumer isn't ready to pay for it, and even if they were, open-source models would just catch up.

    My feeling is that most of the "huge advancements" are not going to benefit the people selling AI.

    I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.

    • thinkharderdev 2 days ago

      Yeah, I've always found it a bit puzzling how companies like OpenAI/Anthropic have such high valuations. What is the actual business model? You can sell inference-as-a-service, of course, but given that there are a half-dozen SOTA frontier models and the compute cost of inference is still very high, it just seems like there is no margin in it. Nvidia captures so much value on the compute infrastructure, competition pushes inference prices down, and what is left?

    • Schiendelman 2 days ago

      The people who make money serving end users will be the ones with the best integrations. Those are harder to do, require business relationships, and are massively differentiating.

      You'll probably have a player that sells privacy as well.

  • OtherShrezzing 3 days ago

    I don't see how this works, as the cost of running inference is so much higher than the revenue earned by the frontier labs. Anthropic and OpenAI don't continue to exist long-term in a world where models at the GPT-5 and Claude 4.1 cost-quality level are SOTA.

    • HDThoreaun 3 days ago

      With GPT-5 I'm not sure this is true. Certainly OpenAI is still losing money, but if they stopped research and just focused on productionizing inference use cases, I think they'd be profitable.

      • criddell 2 days ago

        But would they be profitable enough? They've taken on more than $50 billion of investment.

        I think it's relatively easy for Meta to plow billions into AI. Last quarter their revenue was something like $15 billion. OpenAI will be lucky to generate that over the next year.

        • HDThoreaun 2 days ago

          Meta's net profit last quarter was over $18 billion, so yeah, the big tech players definitely have a lot more runway.

      • epicureanideal 3 days ago

        > if they stopped research and just focused on productionizing inference use cases I think they’d be profitable

        For a couple of years, until someone who did keep doing research pulled ahead a bit with a similarly good UI.

raydev 2 days ago

> It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted

Or the more likely explanation is that they feel they've completed the hiring necessary to figure out what's next.

baxtr 3 days ago

> …lot of jobs will disappear.

So it’s true that AI will kill jobs, but not in the way they’ve imagined?!

epolanski 3 days ago

> A couple of years ago, I asked a financial investment person about AI as a trick question.

Why do you assume these people know any better than the average Joe on the street?

Study after study demonstrates that they can't even keep up with market benchmarks, so how would they be any wiser about what is or isn't a fad?

  • quantified 2 days ago

    I think the point of the question was to differentiate this person from the average Jane on the Street.

hbosch 2 days ago

>It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.

Everything Zuck has done since the "dawn of AI" has been intended to subvert and sabotage existing AI players, because otherwise Meta would be too far behind. In the same way that AI threatens Search, we are seeing that AI also threatens social networks: you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as real a way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?

I believe exactly 0 percent of the decision to make Llama open-source and free was altruistic; it was simply an attempt to push the margins of Anthropic, OpenAI, etc. downward. Indeed, I feel like even the fearmongering of this article is strategically intended to devalue AI incumbents. AI is very much an existential threat to Meta.

Is AI currently fulfilling the immense hype around it? In my opinion, maybe not, but the potential value is obvious. Much more obvious than, for example, NFTs and crypto just a few years ago.

  • azinman2 2 days ago

    > AI is very much an existential threat to Meta.

    How so?

    • hdgvhicv 2 days ago

      “you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as real a way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?”

      • azinman2 2 days ago

        Meta doesn't really serve companionship. Its value is keeping you connected to others in your social graph, which AI cannot replace. If IG still has the eyeballs, people can put AI-generated content on it with or without Meta's permission.

        Like with most things, people will want what's expensive, not what's cheap. AI is cheap; real humans are not. Why buy diamonds when you can't tell the difference from cubic zirconia? And yet demand for diamonds only increases.

la64710 2 days ago

Correction, if I may: a lot of AI jobs will disappear. A lot of ordinary jobs that were put on hold will return. This is good news for most of humankind.

FrustratedMonky 2 days ago

"little shortsighted"

Or they knew this could not be sustained, so they scooped up all the talent they wanted, all at once, with big carrots, before anybody could react. Then they hit the pause button to let all that new talent figure out the next step.

throawaywpg 3 days ago

The line was to buy Amazon as undervalued, à la IBM or Apple, based on its cloud computing capacity relative to the projected future needs of AI.

snihalani 2 days ago

When will the investors run out of money and stop funding hype?

baby 3 days ago

As someone using LLMs daily, it's always interesting to read something about AI being a bubble or just hype. I think you're going to miss the train; I'm personally convinced this is the technology of our lifetime.

  • GoatInGrey 3 days ago

    You are welcome to share how AI has transformed a revenue-generating role. Personally, I have never seen a durable example of it, despite my excitement about the tech.

    In my world, AI has been little more than a productivity boost in very narrowly scoped areas. For instance, generating an initial data mapping of source data against a manually built schema for the individual to then review and clean up. In this case, AI is helping the individual get results faster, but they're still "doing" data migrations themselves. AI is simply a tool in their toolbox.
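
    To make that concrete, here is a minimal sketch of the kind of draft-mapping step described above (it assumes the openai Python package, a hypothetical model choice, and made-up field names; the output is only a draft for a person to review):

      import json
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # Hypothetical source columns and a manually built target schema
      source_columns = ["cust_nm", "dob", "addr_1"]
      target_fields = ["customer_name", "date_of_birth", "address_line_1"]

      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative; any chat model would do
          messages=[{
              "role": "user",
              "content": (
                  f"Map each source column in {source_columns} to the most "
                  f"likely target field in {target_fields}. Respond with a "
                  "JSON object mapping source column to target field, and "
                  "nothing else."
              ),
          }],
      )

      # The model's draft mapping; a human still reviews and cleans it up.
      # (json.loads can fail if the model wraps the JSON in prose.)
      draft_mapping = json.loads(resp.choices[0].message.content)
      print(draft_mapping)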

    • AnotherGoodName 3 days ago

      What you've described is reasonable, and a clear takeaway is that AI is a timesaving tool you should learn.

      Where I share the parent's concern is with claims that AI is useless. That isn't coming from your post at all, but I have definitely seen it in the programmer community to this day, so the parent's worry that some programmers are missing the train is unfortunately completely warranted.

      • gspencley 2 days ago

        I went through the parent comments looking for a claim somewhere that AI was "useless." I couldn't find it.

        Yes, there are lots of skeptics among programmers when it comes to AI. I was one myself (and still am, depending on what we're talking about). My skepticism was rooted in the fact that AI is trained on human-generated output. Most human-written code is not very good, so AI is going to produce not-very-good code by design, because that's what it was trained on.

        Then you add to that the context problem. AI is not very good at understanding your business goals, or the nuanced intricacies of your problem domain.

        All of this pointed to the fact, very early on, that AI would not be a good tool to replace programmers. And THAT'S the crux of why so many programmers pushed back. Because the hype was claiming that automation was coming for engineering jobs.

        I have started to use LLMs regularly for a variety of tasks, including some engineering. But I always end up spending a lot of time refactoring the code LLMs produce for me. And much of the time I find that I'm still learning which tasks the LLMs truly save me time on, versus what would have been faster to just write myself in the first place.

        LLMs are not useless. But if only 20% of a programmer's time is actually spent writing code on average, then even if you can net a 50% increase in coding productivity, you're only netting roughly a 10% overall productivity gain for an engineer, BEST CASE SCENARIO (see the quick arithmetic check at the end of this comment).

        And that's not "useless," but compared to the hype and bullshit coming out of the mouths of CEOs, it's as good as useless. It fits with the MIT study finding that only 5% of generative AI projects have netted ANY measurable returns for the business.
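
        (The quick arithmetic check promised above, treating this as an Amdahl's-law problem in which only the coding fraction of the job speeds up; a minimal sketch, with the 20% and 50% figures taken from this comment's assumptions rather than from any measurement:)

          def overall_gain(coding_share: float, coding_speedup: float) -> float:
              """Net productivity gain when only `coding_share` of total time
              runs `coding_speedup`x faster and everything else is unchanged."""
              new_total_time = (1.0 - coding_share) + coding_share / coding_speedup
              return 1.0 / new_total_time - 1.0

          # 20% of an engineer's time is coding; coding gets 50% faster (1.5x).
          print(f"{overall_gain(0.20, 1.5):.1%}")  # ~7.1%, so ~10% really is the best case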

    • cm2012 3 days ago

      I know a company that replaced its sales call center with an AI calling bot. The bot got better sales numbers and higher feedback scores from customers.

  • agos 3 days ago

    Why is it a train? If it's so transformative, surely I can join in a year or so?

  • conartist6 3 days ago

    I'll say it again, since I've said it a million times: it can be useful and still be a bubble. The logic of investors before the last market crash was something like "houses are useful, so no amount of hype around the housing market could be a bubble."

    • Windchaser 3 days ago

      Or, quite similarly, the internet bubble of the late '90s.

      Very obviously the internet is useful, and has radically changed our lives. Also obviously, most of the high stock valuations of the ‘90s didn’t pan out.

  • skywhopper 2 days ago

    How are you using it? The execs and investors believe the road to profit is getting rid of your role in the process. Do you think that'd be possible?

  • eulers_secret 3 days ago

    If you really think this, `baby` is an apt name! The internet, smartphones, and social media will all be more impactful than LLMs could possibly be... but hey, if you're like 18 years old, then sure, maybe LLMs are the biggest.

    I also disagree about missing the train: these tools are so easy to use that a monkey (not even a smart one like an ape, more like a howler) can use them effectively. Add in that the tooling landscape is changing rapidly; e.g., everyone loved Cursor, but now it's fallen behind and everyone loves Claude Code. There's some sense in waiting for this to calm down and become more open. (Why are users so OK with vendor lock-in? It's bothersome.)

    The hard parts are running LLMs locally (what quant do I use? K/V-cache quant? Tradeoffs? llama.cpp or ollama or vLLM? What model? How much context can I cram into my VRAM? What about CPU inference? Fine-tuning? etc.) and creating/training the models themselves.
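
    For anyone staring down that list, here is a minimal local-inference sketch using llama-cpp-python, one of the options named above (the model file, quant, and parameters are illustrative assumptions, not recommendations):

      from llama_cpp import Llama  # pip install llama-cpp-python

      llm = Llama(
          model_path="./qwen2.5-7b-instruct-q4_k_m.gguf",  # Q4_K_M is a common size/quality tradeoff
          n_gpu_layers=-1,  # offload every layer to the GPU; lower this if VRAM runs out
          n_ctx=8192,       # context window; a bigger window means a bigger KV cache in VRAM
      )

      out = llm.create_chat_completion(
          messages=[{"role": "user", "content": "In one sentence, what is a GGUF quant?"}]
      )
      print(out["choices"][0]["message"]["content"])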

hearsathought 3 days ago

> It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.

If AI is going to be integral to society going forward, how is it shortsighted?

> She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure).

So you prefer a 2x gain over a 10x gain from the likes of Nvidia or Broadcom? You should check how much better META has done compared to MSFT over the past few years. Also, a "financial investment person"? The anecdote feels made up.

> She skillfully navigated the question in a way that won my respect.

She won your respect by giving you advice that led to far lower returns than you could have gotten otherwise?

> I personally believe that a lot of investment money is going to evaporate before the market resets.

But you believe investing in MSFT was a better AI play than going with the "hype" even when objective facts show otherwise. Why should anyone care what you think about AI, investments, and the market when you clearly know nothing about it?