JCM9 a day ago

These numbers are pretty ugly. You always expect new tech to operate at a loss initially, but the structure of their losses is not something one easily scales out of. In fact, it gets more painful as they scale. Unless something fundamentally changes, and fast, this is gonna get ugly real quick.

spacebanana7 a day ago

The real answer is in advertising/referral revenue.

My life insurance broker got £1k in commission; I think my mortgage broker got roughly the same. I’d gladly let OpenAI take the commission if ChatGPT could get me better deals.

  • ecommerceguy 18 hours ago

    Insurance agents—unlike many tech-focused sales jobs—are licensed and regulated, requiring specific training, background checks, and ongoing compliance to sell products that directly affect customers’ financial stability and wellbeing. Mortgage brokers also adhere to licensing and compliance regulations, and their market expertise, negotiation ability, and compliance duties are not easily replaced by AI tools or platforms.

    t. perplexity ai

    • stogot 8 hours ago

      Yeah, I don’t want my mortgage recommendations to come from a prompt injection.

  • lkramer a day ago

    This could be solved with comparison websites, which seem to be exactly what those brokers are using anyway. I had a broker proudly declare that he could get me the best deal, which turned out to be exactly the same as what moneysavingexperts found for me. He wanted £150 for the privilege of searching some DB + god knows how much commission he would get on top of that...

    • spacebanana7 a day ago

      Even if ChatGPT just becomes a comparison site for its existing customer base, that’s a great business.

mannyv 3 hours ago

No, they're not.

$4.3B in revenue is tremendous.

What are you comparing them to?

anthonypasq a day ago

They could keep the current model in ChatGPT the same forever and 99% of users wouldn't know or care, and unless you think hardware isn't going to improve, the cost of serving it will basically decrease to 0.

  • impossiblefork a day ago

    For programming it's okay, for maths it's almost okay. For things like stories and actually dealing with reality, the models aren't even close to okay.

    I didn't understand how bad it was until this weekend, when I sat down and tried GPT-5, first without the thinking mode and then with it, and it misunderstood sentences, generated crazy things, lost track of everything -- completely beyond how bad I thought it could possibly be.

    I've fiddled with stories because I saw that LLMs had trouble with them, but I did not understand that this was where we were in NLP. At first I couldn't even fully believe it, because these things don't fail to follow instructions when you talk about programming.

    This extends to analyzing discussions. It simply misunderstands what people say. If you try to do this kind of thing you will realise the degree to which these things are just sequence models, with no ability to think, with really short attention spans and no ability to operate in a context. I experimented with stories set in established contexts, and the model repeatedly generated things that were impossible in those contexts.

    When you do this kind of thing, their character as sequence models that do not really integrate things from different sequences becomes apparent.

  • davidcbc 20 hours ago

    This just doesn't match the claims that people are using it as a replacement for Google. If your facts are out of date, you're useless as a search engine.

    • anthonypasq 2 hours ago

      All these models just use web search now to stay up to date, so knowledge cutoffs aren't as important. Also, fine-tuning new data into the base model after the fact is way cheaper than having to retrain the whole thing from scratch.

    • treyd 18 hours ago

      Which is why there's so much effort to build RAG workflows so that you can progressively add to the pool of information that the chatbot has access to, beyond what's baked into the underlying model(s).
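
      A minimal sketch of the idea (a toy keyword retriever standing in for embeddings plus a vector store; all names and data here are illustrative, not any real pipeline): fetch the best-matching documents for a question and prepend them to the prompt, so the model answers from fresh material rather than whatever was baked in at training time.

        # Toy RAG sketch: lexical retrieval + prompt assembly (illustrative only)
        from collections import Counter

        def tokenize(text: str) -> Counter:
            return Counter(text.lower().split())

        def score(query: Counter, doc: Counter) -> int:
            # Crude word-overlap score; a real system would use embeddings + a vector DB
            return sum((query & doc).values())

        def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
            q = tokenize(question)
            return sorted(corpus, key=lambda d: score(q, tokenize(d)), reverse=True)[:k]

        def build_prompt(question: str, corpus: list[str]) -> str:
            context = "\n".join(retrieve(question, corpus))
            return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

        # Documents newer than any training cutoff can be added to the pool at any time
        corpus = [
            "OpenAI reported $4.3B in first-half revenue.",
            "GPT-5 shipped with an optional thinking mode.",
        ]
        print(build_prompt("What was OpenAI's recent revenue?", corpus))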

  • jampa a day ago

    The enterprise customers will care, and they're probably the ones who bring in significant revenue.

  • toshinoriyagi a day ago

    The cost of old models decreases a lot, but the cost of frontier models, which are what people use 99% of the time, is hardly decreasing. Plus, many of the best models rely on thinking or reasoning, which uses 10-100x as many tokens for the same prompt. That doesn't work on a fixed-cost monthly subscription.
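
    A back-of-envelope sketch of that subscription math (every price and usage figure below is an invented assumption, not OpenAI's actual numbers):

      # Illustrative only: how a reasoning-token multiplier eats a flat fee
      PRICE_PER_M_TOKENS = 10.00  # assumed $/1M output tokens for a frontier model
      SUBSCRIPTION = 20.00        # assumed flat monthly fee

      def monthly_cost(prompts_per_day: int, tokens_per_reply: int, multiplier: int) -> float:
          tokens = prompts_per_day * 30 * tokens_per_reply * multiplier
          return tokens / 1e6 * PRICE_PER_M_TOKENS

      for mult in (1, 10, 100):
          cost = monthly_cost(prompts_per_day=20, tokens_per_reply=500, multiplier=mult)
          status = "covered" if cost <= SUBSCRIPTION else "loses money"
          print(f"{mult:>3}x tokens -> ${cost:,.2f}/month vs ${SUBSCRIPTION:.2f} fee ({status})")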

    • anthonypasq a day ago

      I'm not sure you read what I just said. Almost no one using ChatGPT would care if they were still talking to GPT-5 two years from now. If compute per watt doubles in the next two years, then the cost of serving GPT-5 just got cut in half, purely on the hardware side, not to mention we are getting better at making smaller models smarter.

      • serf 20 hours ago

        I don't really believe that premise in a world with competition, and the strategy it supports -- let AI companies produce profit off of old models -- ignores the need for SOTA advancement and expansion by these very same companies.

        In other words, yes, GPT-X might work well enough for most people, but the newer demo for ShinyNewModelZ is going to pull GPT-X's customers away regardless of whether both fulfill the customer's needs. There is a persistent need for advancement (or at least marketing that indicates as much) in order to have positive numbers at the end of the churn cycle.

        I have major doubts that can be done without actually pushing new features or SOTA models, short of straight lying or deception.

whizzter a day ago

I've said it before and I'll say it again... if I could know how long it takes for bubbles to pop, I would've shorted many of the players long ago.

  • Esophagus4 18 hours ago

    Eh, this seems like a cop out.

    It’s so easy for people to shout bubble on the internet without actually putting their own money on the line. Talk is cheap - it doesn’t matter how many times you say it, I think you don’t have conviction if you’re not willing to put your own skin in the game. (Which is fine, you don’t have to put your money on the line. But it just annoys me when everyone cries “bubble” from the sidelines without actually getting in the ring.)

    After all, “a bubble is just a bull market you don’t have a position in.”

    • zoul 14 hours ago

      Believe it or not, many people just don’t care about the stock market. But they may still care about the economy that could crash badly if the AI bubble gets too big before it pops.

      • Esophagus4 13 hours ago

        People find all kinds of things to worry about if it gives them something to do, I guess.

        In the same way that my elderly grandmother binge watches CNN to have something to worry about.

        But the commenter I responded to DID care about the stock market, despite your attempt to grandstand.

        And my point was, and still is, if you really believe it’s a bubble and you don’t actually have a short position, then you don’t actually believe it’s a bubble deep down.

        Talk is cheap - let’s see your positions.

        It would be like saying “I’ve got this great idea for a company, I’m sure it would do really well, but I don’t believe it enough to actually start a company.”

        Ok, then what does that actually say about your belief in your idea?

    • lawn 13 hours ago

      You can correctly identify a bubble without being able to identify when it'll burst (which is arguably the much harder problem).

      The statistically correct play is therefore not to short (and just keep buying).

      • Esophagus4 12 hours ago

        Then no, you haven’t identified a bubble.

        You’ve just said, “I think something will go down at some point.” Which… like… sure, but in a pointlessly trivial way? Even a broken clock is right eventually?

        That’s not “identifying a bubble” that’s boring dinner small talk. “Wow, this Bitcoin thing is such a bubble huh!” “Yeah, sure is crazy!”

        And even more so, if you're long on something you call a bubble, that by definition says either you don't think it's that much of a bubble, or that you're a goon for betting on something you believe is all hot air?

adventured a day ago

There is an exceptionally obvious solution for OpenAI & ChatGPT: ads.

In fact it's an unavoidable solution. There is no future for OpenAI that doesn't involve a gigantic, highly lucrative ad network attached to ChatGPT.

One of the dumbest things in tech at present is OpenAI not having already deployed this. It's an attitude they can't actually afford to maintain much longer.

Ads are a hyper-margin product that is very well understood at this juncture, with numerous very large ad platforms. Meta has a soon-to-be $200 billion per year ad system. There's no reason ChatGPT can't be a $20+ billion per year ad system (and likely far beyond that).

Their path to profitability is very straightforward. It's practically turn-key. They would have to be the biggest fools in tech history not to flip that switch, thinking they can just fund-raise their way magically indefinitely. The AI spending bubble will burst in 2026-2027, sharply curtailing the party; it'd be better for OpenAI to get ahead of that quickly (their valuation will not hold up in a negative environment).

  • thewebguyd a day ago

    > They would have to be the biggest fools in tech history to not flip that switch

    As much as I don't want ads infiltrating this, it's inevitable and I agree. OpenAI could seriously put a dent in Google's ad monopoly here; Altman would be an absolute idiot not to take advantage of their position and do it.

    If they don't, Google certainly will, as will Meta, and Microsoft.

    I wonder if their plan for the weird Sora 2 social network thing is ads.

    Investors are going to want to see some returns... eventually. They can't rely on daddy Microsoft forever either; now that MS is exploring Claude for Copilot, they seem to have soured a bit on OpenAI.

  • dreamcompiler a day ago

    Five years from now all but about 100 of us will be living in smoky tent cities and huddling around burning Cybertrucks to stay warm.

    But there will still be thousands of screens everywhere running nonstop ads for things that will never sell because nobody has a job or any money.

  • singron 16 hours ago

    Will people use ChatGPT if it's stuffed full of ads? It seems like the belief that ads are turn-key is useful to their valuation, but if ads actually bomb, then they will take a huge hit.

  • jhallenworld a day ago

    Google didn't have inline ads until 2010, but they did have separate ads nearly from the beginning. I assume ads will be inline for OpenAI; I mean, the only case where they could be separate is in ChatGPT, but I doubt that will be their largest use case.

    • kridsdale1 16 hours ago

      I think it was actually about 5 years from founding to ads on Google.com.

  • gizajob a day ago

    ChatGPT slipping ads in halfway through its answer is going to be totally rad.

    • silon42 9 hours ago

      Imagine the emails / reports with copy-pasted ads.

  • Spooky23 20 hours ago

    No way. It’s 2025, society is totally different, and you have to think about what the new normal is. They are too big to fail at this point — so much of the S&P 500’s valuation is tied to AI (Microsoft, Google, Tesla, etc.) that they are arguably strategic to the US.

    Fascist corporatism will throw them in with whatever Intel rescue plan Nvidia is forced to participate in. If the midterms flip Congress or we have another presidential election, maybe something will change.

    • jfyi 4 hours ago

      I agree. If OpenAI isn't strategic to the US yet, making it so is damn sure Altman's current goal. The moment he can close the sale on "we have to get there before China," ad revenue won't be a concern anymore.

      I'd say it's a bit of a Hail Mary and could go either way, but that's as an outsider looking in. Who really knows?

  • JCM9 a day ago

    For using GenAI as search I’d agree with you, but I don’t think it’s as easy or obvious for most other use cases.

    • flyinglizard a day ago

      I'm sure lots of ChatGPT interactions are about making buying decisions, and just how easy would it be to push certain products to the top? This is where the real money is. With SEO, you were making the purchase decision and companies paid to get their wares in front of you; now with AI, it's making the buy decision mostly on its own.
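
      A hypothetical sketch of that dynamic (all names, numbers, and weights here are invented; no real ad system is claimed to work this way): blend the model's relevance estimate with an advertiser bid and watch the ranking flip.

        # Illustrative sponsored re-ranking: bids buying position in a recommendation
        from dataclasses import dataclass

        @dataclass
        class Product:
            name: str
            relevance: float  # assistant's estimate of fit for the query, 0..1
            bid: float        # advertiser's payment per referral, in dollars

        def rank(products: list[Product], ad_weight: float = 0.0) -> list[Product]:
            # ad_weight=0 ranks purely on relevance; raising it lets bids buy position
            return sorted(products, key=lambda p: p.relevance + ad_weight * p.bid, reverse=True)

        catalog = [
            Product("BestFitCo", relevance=0.9, bid=0.10),
            Product("BigSpendCo", relevance=0.6, bid=5.00),
        ]
        print([p.name for p in rank(catalog)])                 # ['BestFitCo', 'BigSpendCo']
        print([p.name for p in rank(catalog, ad_weight=0.1)])  # ['BigSpendCo', 'BestFitCo']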

deepnotderp a day ago

New hardware could greatly reduce inference and training costs and solve that issue.

  • samtp a day ago

    That's extremely hopeful and also ignores the fact that new hardware will have incredibly high upfront costs.

  • leptons a day ago

    Great, so they just have to spend another ~$10 billion on new hardware to save how many billion in training costs? I don't see a path to profitability here, unless they massively raise their prices to consumers, and nobody really needs AI that badly.