alsetmusic 2 days ago

It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.

A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure). I was waiting for her to put her foot in her mouth and buy into the hype. She skillfully navigated the question in a way that won my respect.

I personally believe that a lot of investment money is going to evaporate before the market resets. What we're calling AI will continue to have certain uses, but investors will realize that the moonshot being promised is undeliverable and a lot of jobs will disappear. This will hurt the wider industry, and the economy by extension.

  • miki123211 2 days ago

    I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

    We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.

    This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not quite useless, and there's already a lot you can do with AI, but many use cases that seem obvious (and not only in retrospect) will only be possible once the tech matures.

    • bryanlarsen 2 days ago

      Some people even figured it out in the 80's. Sears co-founded (with IBM) and ran Prodigy, a large online service and eventually ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.

      Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.

      • tombert 2 days ago

        Today I learned that Sears founded Prodigy!

        Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.

      • djtango 2 days ago

        This is a great example that I hadn't heard of. It reminds me of when Nintendo tried to become an ISP by building the Family Computer Network System in 1988.

        A16Z once talked about how the scars of being too early cause investors/companies to become fixated on the idea that something will never work. Then some new, younger people who never got burned will try the same idea, and it will work.

        Prodigy and the Faminet probably fall into that bucket, along with a lot of early internet companies that tried things early, got burned, and then were possibly too late to capitalise when it was finally the right time for the idea to flourish.

      • tracker1 2 days ago

        On the flip side, they didn't actually learn that lesson... that it was a matter of immature tech with relatively limited reach... by the time the mid-90's came through, "the internet is just a fad" was pretty much the sentiment from Sears' leadership...

        They literally killed their catalog sales right when they should have been ramping up and putting them online. They could easily have beaten out Amazon for everything other than books.

      • Imustaskforhelp 2 days ago

        My cousin used to tell me that things work because they were the right thing at the right time. I think Amazon was the example he gave.

        But I guess in startup culture, one has to die trying to find the right time. Sure, one can do surveys to get a feel for it, but the only way to find out whether it's really the right time is user feedback once it's launched / over time.

      • cyanydeez 2 days ago

        the problem is that ISPs became a utility, not some fountain of unlimited growth.

        What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for.

        I agree though, it's fundamentally a utility, which means there's more value in proper government authority than private interests.

      • outside1234 2 days ago

        Newton at Apple is another great one, though they of course got there.

        • platevoltage 2 days ago

          They sure did. This reminds me of when I was in the local Mac Dealer right after the iPod came out. The employees were laughing together saying “nobody is going to buy this thing”.

    • deegles 2 days ago

      > We're clearly seeing what AI will eventually be able to do

      Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?

      Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.

      For example: the "there are three 'b's in blueberry" problem was caused by so much training data in response to "there are two r's in strawberry". it's a systemic issue. no amount of data will solve it because LLMs will -never- be sentient.
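
      The letter-counting failures above are a failure of the model, not the task; for ordinary code it's trivial, as this quick Python sanity check shows:

      ```python
      # Counting letters deterministically: the task LLMs famously fumble.
      print("blueberry".count("b"))   # prints 2, not 3
      print("strawberry".count("r"))  # prints 3
      ```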

      Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.

      • hnfong 2 days ago

        I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc.

        Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.

      • rstuart4133 2 days ago

        > Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end.

        I consider myself an LLM skeptic, but gee saying they are a "dead end" seems harsh.

        Before LLMs came along, computers understanding human language was a graveyard that academics went to in order to end their careers. Now computers are better at it, and far faster, than most humans.

        LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in GB, and it seems to have a pretty good general knowledge of everything on the internet. Again, far better than any human could do. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt.

        Merely being able to understand language or having a good memory is not sufficient to code or do a lot else on its own. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.

        • deegles 2 days ago

          > it's hard to imagine a AI that can competently code that doesn't have an LLM as a component.

          That's just it. LLMs are a component; they generate text or images from a higher-level description, but are not themselves "intelligent". If you imagine the language center of your brain being replaced with a tiny LLM-powered chip, you would not say it's sentient. It translates your thoughts into words, which you then choose to speak or not. That's all modulated by consciousness.

      • miki123211 2 days ago

        > If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?

        When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. During the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future.

        The real problem with LLM advice is that it is harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong.

      • [removed] 2 days ago
        [deleted]
    • me551ah 2 days ago

      Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips and Moore’s law is already dead.

      So newer chips will not be exponentially better but will be more of incremental improvements, so unless the price of electricity comes down exponentially we might never see AGI at a price point that’s cheaper than hiring a human.

      Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.

      The reason the internet, smartphones, and computers saw exponential growth from the 90s onward is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.

      • mindcrime 2 days ago

        > Scaling AI will require an exponential increase in compute and processing power

        A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach, and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, one that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that, of course, but I take the human brain as an existence proof that some kind of machine can provide human-level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPUs.

      • foobarian 2 days ago

        > Scaling AI will require an exponential increase in compute and processing power,

        I think there is something more happening with AI scaling; the scaling factor per user is a lot higher and a lot more expensive. Compare to the big early internet companies: if you added one server, you could handle thousands more users; the incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue that it's hard to break even, even with an actual paid subscription.

        • RugnirViking 2 days ago

          I don't fully get why; inference costs are way lower than training costs, no?

      • thfuran 2 days ago

        We know for a fact that human level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere from about 20-100W, depending on how much of the rest of the body's metabolism you attribute to supporting it.

        • hyperbovine 2 days ago

          The fact that the human brain, heck all brains, are so much more efficient than “state of the art” nnets, in terms of architecture, power consumption, training cost, what have you … while also being way more versatile and robust … is what convinces me that this is not the path that leads to AGI.

      • miki123211 2 days ago

        > We are already at the limit of how small we can scale chips

        I strongly suspect this is not true for LLMs. Once progress stabilizes, doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.

        Then there's distillation, which basically makes smaller models get better as bigger models get better. You don't necessarily need to run a big model all of the time to reap its benefits.

        > so unless the price of electricity comes down exponentially

        This is more likely than you think. AI is extremely bandwidth-efficient and not too latency-sensitive (unlike e.g. Netflix et al), so it's pretty trivial to offload AI work to places where electricity is abundant and power generation is lightly regulated.

        > Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.

        "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." Sam Altman, OpenAI CEO[1].

        [1] https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...

    • armada651 2 days ago

      > The groundwork has been laid, and it's not too hard to see the shape of things to come.

      The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.

      • jimbokun 2 days ago

        Is it still giving people headaches and making them nauseous?

    • matthewdgreen 2 days ago

      As someone who was a customer of Netflix from the dialup to broadband world, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" of, say, streaming video in 2001 -- whereas I think you mean to indicate we're trying to do Netflix back in the 1980s where the tech for widespread broadband was just fundamentally not available.

      • tracker1 2 days ago

        Oh, like RealPlayer in the late 90's (buffering... buffering...)

    • nutjob2 2 days ago

      > We're clearly seeing what AI will eventually be able to do

      I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.

      Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.

    • skeezyboy 2 days ago

      >I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."

      I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games.

      • dml2135 2 days ago

        It's a logical fallacy that just because some technology experienced some period of exponential growth, all technology will always experience constant exponential growth.

        There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.

        We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.

        Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.

        The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.

        "Progress" moves in fits and starts. It is the furthest thing from inevitable.

      • echelon 2 days ago

        Speaking of Netflix -

        I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.

        Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches.

        I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.

        For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking ten times the workload.

      • dormento 2 days ago

        > I did think the same thing about the 8bit era of video games.

        Can you elaborate? That sounds interesting.

        • skeezyboy 2 days ago

          too soon to get it to market, though it obviously all sold perfectly well, people were sufficiently wowed by it

    • Q6T46nT668w6i3m 2 days ago

      There’s no evidence that it’ll scale like that. Progress in AI has always been a step function.

      • ghurtado 2 days ago

        There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs.

        > Progress in AI has always been a step function.

        There's decisively no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement.

      • the8472 2 days ago

        rodent -> homo sapiens brain scales just fine? It's tenuous evidence, but not zero.

      • ninetyninenine 2 days ago

        Uh it’s been multiple repeated step ups in the last 15 years. The trend line is up up up.

      • eichin 2 days ago

        The innovation here is that the step function didn't traditionally go down

    • [removed] 2 days ago
      [deleted]
    • i_love_retros 2 days ago

      Is some potential AGI breakthrough in the future going to be from LLMs or will they plateau in terms of capabilities?

      Its hard for me to imagine Skynet growing from chatgpt

      • whatevaa 2 days ago

        The old story of the paperclip AI shows that AGI is not needed for a sufficiently smart computer to be dangerous.

    • thefourthchime 2 days ago

      I'm starting to agree with this viewpoint. As the technology seems to solidify to roughly what we can do now, the aspirations are going to have to get cut back until there's a couple more breakthroughs.

      • [removed] 2 days ago
        [deleted]
    • kokanee 2 days ago

      I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.

      What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean.

  • StopDisinfo910 2 days ago

    > A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure)

    If you had actually invested in AI pure players and Nvidia, the shovel seller, a couple years ago and were selling today, you would have made a pretty penny.

    The hard thing with potential bubbles is not entirely avoiding them, it’s being there early enough and not being left at the end holding the bag.

    • bcrosby95 2 days ago

      Financial advisors usually work on holistic plans, not short-term ones. It isn't about timing markets; it's about a steady hand that doesn't panic and makes sure you don't get caught with your pants down when you need cash.

      • what_ever 2 days ago

        Hard to know what OP asked for, but if they asked about AI specifically, the advice does not need to be holistic.

    • aksss 2 days ago

      Are you bearish on the shovel seller? Is now the time to sell out? I'm still +40% on nvda - quite late to the game but people still seem to be buying the shovels.

      • BrawnyBadger53 2 days ago

        Personal opinion, I'm bearish on the shovel seller long term because the companies that are training AI are likely to build their own hardware. Google already does this. Seems like a matter of time for the rest of the mag 7 to join. The rest of the buyers aren't growing enough to offset that loss imo.

        • godelski 2 days ago

          FWIW, Nvidia's moat isn't hardware and they know this (they even talk about it). Hardware-wise, AMD is neck and neck with them, but AMD still doesn't have a CUDA equivalent. CUDA is the moat. As painful as it is to use, there's a long way to go for companies like AMD to compete here. Their software is still pretty far behind, despite their rapid and impressive advancements. It will also take time for developer experience to saturate within the market, and that will likely mean AMD needs some real edge over Nvidia, like adding things Nvidia can't do or being much more cost competitive. And that's not something like adding more VRAM or just taking smaller profit margins, because Nvidia can respond to those fairly easily.

          That said, I still suggested the parent sell. Real money is better than potential money. Classic gambler's fallacy, right? FOMO is letting hindsight get in the way of foresight.

      • godelski 2 days ago

        What's the old Rockefeller quip? When your shoe shiner is giving you stock advice, it's time to sell (you may have heard the taxicab-driver version).

        It depends on how risk-averse you are and how much money you have in it.

        If you're happy with those returns, sell. FOMO is dumb. You can't time the market, the information just isn't available. If those shares are worth a meaningful amount of money, sell. Take your wins and walk away. A bird in your hand is worth more than two in the bush, right? That money isn't worth anything until it is realized[0].

        Think about it this way: how much more would you need to make to risk making nothing? Or losing money? This is probably the most important question when investing.

        If you're a little risk-averse, or a good chunk of your portfolio is in it, sell 50-80% of it and then diversify. You're taking wins and restructuring.

        If you wanna YOLO, then YOLO.

        My advice? Don't let hindsight get in the way of foresight.

        [0] I had some Nvidia stocks at 450 and sold at 900 (before the split, so would be $90 today). I definitely would have made more money if I kept them. Almost double if I sold today! But I don't look back for a second. I sold those shares and was able to pay off my student debt. Having this debt paid off is still a better decision in my mind because I can't predict the future. I could have sold 2 weeks later and made less! Or even in April of this year and made the same amount of money.

      • StopDisinfo910 2 days ago

        I have absolutely no clue whatsoever. I have zero insider information. For all I know, the bubble could pop tomorrow or we might be at the beginning of a shift of a similar magnitude to the industrial revolution. If I could reliably tell, I wouldn’t tell you anyway. I would be getting rich.

        I’m just amused by people who think they are financially more clever by taking conservative positions. At that point, just buy an ETF. That’s even more diversification than buying Microsoft.

  • torginus 2 days ago

    It boggles the mind that this kind of management is what it takes to create one of the most valuable companies in the world (and becoming one of the world's richest in the process).

    • benterix 2 days ago

      It's a cliche but people really underestimate and try to downplay the role of luck[0].

      [0] https://www.scientificamerican.com/blog/beautiful-minds/the-...

      • Aurornis 2 days ago

        People also underestimate the value of maximizing opportunities for luck. If we think of luck as random external chance that we can't control, then what can we control? Doing things that increase your exposure to opportunities without spreading yourself too thin is the key. Easier said than done to strike that balance, but getting out there and trying a lot of things is a viable strategy even if only a few of them pay off. The trick is deciding how long to stick with something that doesn't appear to be working out.

      • jauntywundrkind 2 days ago

        Luck. And capturing strong network effect.

        The ascents of the era all feel like examples of anti-markets, of having gotten yourself into an intermediary position where you control both side's access.

      • ericd 2 days ago

        Ability vastly increases your luck surface area. A single poker hand has a lot of luck, and even a game, but over long periods, ability starts to strongly differentiate peoples' results.

      • UltraSane 2 days ago

        Every billionaire could have died from childhood cancer.

    • jocaal 2 days ago

      Past a certain point, skill doesn't contribute to the magnitude of success and it becomes all luck. There are plenty of smart people on earth, but there can only be one founder of Facebook.

      • vovavili 2 days ago

        Plenty of smart people prefer not to try their luck, though. A smart but risk-avoidant person will never be the one to create Facebook either.

      • miki123211 2 days ago

        I view success as the product of three factors, luck, skill and hard work.

        If any of these is 0, you fail, regardless of how high the other two are. Extraordinary success needs all three to be extremely high.

    • ninetyninenine 2 days ago

      Giving a 1.5 million salary is nothing for these people.

      It shouldn’t be mind boggling. They see revolutionary technology that has potential to change the world and is changing the world already. Making a gamble like that is worth it because losing is trivial compared to the upside of success.

      You are where you are and not where they are because your mind is boggled by winning strategies that are designed to arrive at success through losing and dancing around the risk of losing.

      Obviously, Mark is where he is partly because of luck. But he's not an idiot, and clearly it's not all luck.

      • epolanski 2 days ago

        But how is it worth it for Meta, since they won't really monetize it?

        At least the others can kinda bundle it as a service.

        After spending tens of billions on AI, how has it impacted a single dollar of Meta's revenue?

    • ghurtado 2 days ago

      When you start to think about who exactly determines what makes a valuable company, and if you believe in the buffalo herd theory, then it makes a little bit of sense.

    • saubeidl 2 days ago

      It all makes much more sense when you start to realize that capitalism is a casino in which the already rich have a lot more chips to bet and meritocracy is a comforting lie.

      • aspenmayer 2 days ago

        > meritocracy is a comforting lie.

        Meritocracy used to be a dirty word, before my time, of course, but for different reasons than you may think. Think about the racial quotas in college admissions and you’ll maybe see why the powers that be didn’t want merit to be a determining factor at that time.

        Now that the status quo is in charge of college admissions, we don’t need those quotas generally, and yet meritocracy still can’t save us. The problem of merit is that we rarely need the best person for a given job, and those with means can be groomed their entire life to do that job, if it’s profitable enough. Work shouldn’t be charity either, as work needs to get done, after all, and it’s called work instead of charity or slavery for good reasons, but being too good at your job at your current pay rate can make you unpromotable, which is a trap just as hard to see as the trap of meritocracy.

        Meritocracy is ego-stroking writ large if you get picked, just so we can remind you that you’re just the best one for our job that applied, and we can replace you at any time, likely for less money.

    • PhantomHour 2 days ago

      The answer is fairly straightforward. It's fraud, and lots of it.

      An honest businessman wouldn't put their company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked.

      An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about.

      An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, illegal both within the US and internationally as "dumping".

      Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If it takes too long for the bubble to burst, if Zuckerberg gets too much time to shit up Facebook, too much time for advertisers to wise up to how many of their impressions are bots, they might collapse entirely.

      The rest of Big Tech is not much better. Microsoft and Google's CEOs are fools who run their mouth. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.

      • NickC25 2 days ago

        As I've said in other comments - expecting honesty and ethical behavior from Mark Zuckerberg is a fool's errand at best. He has unchecked power and cannot be voted out by shareholders.

        He will say whatever he wants and because the returns have been pretty decent so far, people will just take his word for it. There's not enough class A shares to actually force his hand to do anything he doesn't want to do.

      • dgs_sgd 2 days ago

        What is a good resource to read about the ad fraud? This is the first I'm hearing of that.

        • jbreckmckye 2 days ago

          I used to work in adtech. I don't have any direct information but, I assume this relates to the persistent rumours that Facebook inflates impressions and turns a blind eye to bot activity.

      • travisgriggs 2 days ago

        Ha ha.

        You used “honest” and “businessman” in the same sentence.

        Good one.

    • balamatom 2 days ago

      I'll differ from the siblingposters who compare it to the luck of the draw, essentially explaining this away as the excusable randomness of confusion rather than the insidious evil of stupidity; while the "it's fraud" perspective presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about.

      Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant.

      Gee, what makes it grow so big though? The power of human ambition?

      And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist.

      To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept corresponding to a mental image of a human or group of humans. That's some sort of default framing, such as can only be provided to boggle the mind; considering that they'll keep doing this, and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person", but he's not the one with the impostor syndrome.

      For ours are human minds, optimized to view things in person-terms and Dunbar-counts; even the Invisible Hand of the market is hand-shaped. But last time I checked, my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead I would posit that for an entity like Facebook, to perform an action that does not look completely ridiculous from the viewpoint of an individual observer is the equivalent of an anatomical impossibility. It did evolve, after all, from American college students.

      See also: "Beyond Power / Knowledge", Graeber 2006.

      • ghurtado 2 days ago

        Why is there so much of this on HN? I'm on a few social networks, but this is the only one where I find these quasi-spiritual, stream-of-consciousness, steadily lengthening, pseudo-technical word-salad diatribes.

        It's unique to this site, and these types of comments all have an eerily similar vibe.

  • blitzar 2 days ago

    > record-setting bonuses they were dolling out to hire the top minds in AI

    That was soooo 2 weeks ago.

  • mrits 2 days ago

    I think we will see the opposite. Even if we made no further progress with LLMs, we'd still have huge advancements and growth opportunities from enhancing workflows and tuning them to domain-specific tasks.

    • evilduck 2 days ago

      I think you could both be right at the same time. We will see a large number of VC funded AI startup companies and feature clones vanish soon, and we will also see current or future LLMs continue to make inroads into existing business processes and increase productivity and profitability.

      Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other, along with all the non-VS Code IDEs cloning those features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, and Opencode all doing basically the same thing with their special branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally creating and doing the same thing globally. No single software vendor can innovate and integrate AI features faster than the AI companies themselves can build better tooling to automate that vendor's product for them. It reeks of the '90s, when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead or have a slightly longer runway and claim the whole thing.

      • [removed] 2 days ago
        [deleted]
    • sebstefan 2 days ago

      I agree with this, but how will these companies make money? Short of a breakthrough, the consumer isn't ready to pay for it, and even if they were, open-source models would just catch up.

      My feelings are that most of the "huge advancements" are not going to benefit the people selling AI.

      I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.

      • thinkharderdev 2 days ago

        Yeah, I've always found it a bit puzzling how companies like OpenAI/Anthropic have such high valuations. Like what is the actual business model? You can sell inference-as-a-service of course but given that there are a half-dozen SOTA frontier models and the compute cost of inference is still very high it just seems like there is no margin in it. Nvidia captures so much value on the compute infrastructure and competition pushes prices down for inference and what is left?

      • Schiendelman 2 days ago

        The people who make money serving end users will be the ones with the best integrations. Those are harder to build, require business relationships, and are massively differentiating.

        You'll probably have a player that sells privacy as well.

    • OtherShrezzing 2 days ago

      I don't see how this works, as the cost of running inference is so much higher than the revenues earned by the frontier labs. Anthropic and OpenAI don't continue to exist long-term in a world where models at GPT-5 and Claude 4.1 cost-quality levels are SOTA.

      • HDThoreaun 2 days ago

        With GPT-5 I’m not sure this is true. Certainly OpenAI is still losing money, but if they stopped research and just focused on productionizing inference use cases, I think they’d be profitable.

  • raydev 2 days ago

    > It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted

    Or the more likely explanation is that they feel they've completed the hiring necessary to figure out what's next.

  • baxtr 2 days ago

    > …lot of jobs will disappear.

    So it’s true that AI will kill jobs, but not in the way they’ve imagined?!

  • epolanski 2 days ago

    > A couple of years ago, I asked a financial investment person about AI as a trick question.

    Why do you assume these people know any better than the average Joe on the street?

    Study after study demonstrates they can't even keep up with market benchmarks, so how would they be any wiser about what's a fad and what isn't?

    • quantified 2 days ago

      I think the point of the question was to differentiate this person from the average Jane on the Street.

  • hbosch 2 days ago

    >It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.

    Everything Zuck has done since the "dawn of AI" has been to intentionally subvert and sabotage existing AI players, because otherwise Meta would be too far behind. In the same way that AI threatens Search, we are now seeing that AI also threatens social networks -- you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as real a way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?

    I believe exactly zero percent of the decision to make Llama open-source and free was altruistic; it was simply an attempt to push the margins of Anthropic, OpenAI, etc. downward. Indeed, I feel like even the fearmongering of this article is strategically intended to devalue AI incumbents. AI is very much an existential threat to Meta.

    Is AI currently fulfilling the immense hype around it? In my opinion, maybe not, but the potential value is obvious. Much more obvious than, for example, NFTs and crypto just a few years ago.

    • azinman2 2 days ago

      > AI is very much an existential threat to Meta.

      How so?

      • hdgvhicv 2 days ago

        “you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?”

        • azinman2 2 days ago

          Meta doesn’t really serve companionship. It used to keep you connected to others in your social graph, which AI cannot replace. If IG still has the eyeballs, people can put AI-generated content on it with or without Meta’s permission.

          Like with most things, people will want what’s expensive and not what’s cheap. AI is cheap, real humans are not. Why buy diamonds when you can’t tell the difference with cubic zirconia? And yet demand for diamonds only increases.

  • la64710 2 days ago

    Correction, if I may: a lot of AI jobs will disappear. A lot of usual jobs that were put on hold will return. This is good news for most of humankind.

  • FrustratedMonky 2 days ago

    "little shortsighted"

    Or, this knowingly could not be sustained. So they scooped up all the talent they wanted before anybody could react, all at once, with big carrots, and then hit the pause button to let all that new talent figure out the next step.

  • [removed] 2 days ago
    [deleted]
  • throawaywpg 2 days ago

    The line was to buy Amazon as it was undervalued a la IBM or Apple based on its cloud computing capabilities relative to the future (projected) needs of AI.

  • snihalani 2 days ago

    When will the investors run out of money and stop funding hypes?

  • baby 2 days ago

    As someone using LLMs daily, it's always interesting to read something about AI being a bubble or just hype. I think you're going to miss the train, I am personally convinced this is the technology of our lifetime.

    • GoatInGrey 2 days ago

      You are welcome to share how AI has transformed a revenue-generating role. Personally, I have never seen a durable example of it, despite my excitement about the tech.

      In my world, AI has been little more than a productivity boost in very narrowly scoped areas. For instance, generating an initial data mapping of source data against a manually built schema for the individual to then review and clean up. In this case, AI is helping the individual get results faster, but they're still "doing" data migrations themselves. AI is simply a tool in their toolbox.

      • AnotherGoodName 2 days ago

        What you've described is reasonable and a clear takeaway is that AI is a timesaving tool you should learn.

        Where I share the parent's concern is with claims that AI is useless. That isn't coming from your post at all, but I have definitely seen instances of it in the programmer community to this day. The parent's concern that some programmers are missing the train is, unfortunately, completely warranted.

        • gspencley 2 days ago

          I went through the parent comments, looking for a claim somewhere that AI was "useless." I couldn't find one.

          Yes there are lots of skeptics amongst programmers when it comes to AI. I was one myself (and still am depending on what we're talking about). My skepticism was rooted in the fact that AI is trained on human-generated output. Most human written code is not very good, and so AI is going to produce not very good code by design because that's what it was trained on.

          Then you add to that the context problem. AI is not very good at understanding your business goals, or the nuanced intricacies of your problem domain.

          All of this pointed to the fact, very early on, that AI would not be a good tool to replace programmers. And THAT'S the crux of why so many programmers pushed back. Because the hype was claiming that automation was coming for engineering jobs.

          I have started to use LLMs regularly for a variety of tasks, including some engineering ones. But I always end up spending a lot of time refactoring the code LLMs produce for me. And much of the time I find that I'm still learning what the LLMs can do that truly saves me time, versus what would have been faster to just write myself in the first place.

          LLMs are not useless. But if only 20% of a programmer's time is actually spent writing code on average then even if you can net a 50% increase in coding productivity... you're only netting a 10% overall productivity optimization for an engineer BEST CASE SCENARIO.

          And that's not "useless," but compared to the hype and bullshit coming out of the mouths of CEOs, it might as well be. It's in line with the MIT study finding that only 5% of generative AI projects have netted ANY measurable returns for the business.

      • cm2012 2 days ago

        I know a company that replaced their sales call center with an AI calling bot instead. The bot got better sales and higher feedback scores from customers.

    • agos 2 days ago

      Why is it a train? If it's so transformative, surely I can join in a year or so?

    • conartist6 2 days ago

      I'll say it again, since I've said it a million times: it can be both useful and a bubble. The logic of investors before the last market crash was something like "houses are useful, so no amount of hype around the housing market could be a bubble."

      • Windchaser 2 days ago

        Or, quite similarly, the internet bubble of the late ‘90s.

        Very obviously the internet is useful, and has radically changed our lives. Also obviously, most of the high stock valuations of the ‘90s didn’t pan out.

    • skywhopper 2 days ago

      How are you using it? The execs and investors believe the road to profit is by getting rid of your role in the process. Do you think that’d be possible?

    • eulers_secret 2 days ago

      If you really think this, `baby` is an apt name! The Internet, smartphones, and social media will all be more impactful than LLMs could possibly be... but hey, if you're around 18 years old, then sure, maybe LLMs are the biggest.

      I also disagree about missing the train: these tools are so easy to use that a monkey (not even a smart one like an ape, more like a howler) can use them effectively. Add in that the tooling landscape is changing rapidly; e.g., everyone loved Cursor, but now it's fallen behind and everyone loves Claude Code. There's some sense in waiting for this to calm down and become more open. (Why are users so OK with vendor lock-in??? It's bothersome.)

      The hard parts are running LLMs locally (what quant do I use? K/V quant? Tradeoffs? Llama.cpp or ollama or vllm? What model? How much context can I cram in my vram? What if I do CPU inference? Fine tuning? etc..) and creating/training them.

  • hearsathought 2 days ago

    > It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.

    If AI is going to be integral to society going forward, how is it shortsighted?

    > She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure).

    So you prefer a 2x gain rather than a 10x gain from the likes of Nvidia or Broadcom? You should check how much better META has done compared to MSFT over the past few years. Also, a "financial investment person"? The anecdote feels made up.

    > She skillfully navigated the question in a way that won my respect.

    She won your respect by giving you advice that led to far less returns than you could have gotten otherwise?

    > I personally believe that a lot of investment money is going to evaporate before the market resets.

    But you believe investing in MSFT was a better AI play than going with the "hype," even when objective facts show otherwise. Why should anyone care what you think about AI, investments, and the market when you clearly know nothing about them?