Comment by bgirard 16 hours ago

73 replies

It's a fun trope to repeat, but that's not what OpenAI is doing. I get a ton of value from ChatGPT and Codex from my subscription. As long as the inference is not done at a loss, this analogy doesn't hold. They're not paying me to use it. They are generating output that is very valuable to me, much more than my subscription cost.

I've been able to set up cross-app automation for my partner's business, remodel my house, plan a trip to Japan with help navigating the cultural barrier, vibe-code apps, get technical support, and so much more.

bambax 2 minutes ago

The parent isn't arguing that you're not getting good value out of the product. It's saying that users' payments don't cover production costs, which may or may not be true, but that doesn't have much to do with how much value users get from it.

bloppe 16 hours ago

To be fair, I would get a ton of value out of someone selling dollars for 20 cents apiece.

But ya, OAI is clearly making a ton of revenue. That doesn't mean it's a good business, though. Given a 20-year horizon, shareholders will be very upset unless the firm can deliver about a trillion in profit, not revenue, to justify the $100B (so far) in investment, and even that would barely beat the long-term S&P 500 average return.
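The compounding math behind that claim can be sketched. The return rates below are rough assumptions, not the commenter's numbers; the ~10% figure is the commonly cited long-term S&P 500 nominal average, and the trillion-dollar target corresponds to a somewhat higher rate:

```python
# Back-of-envelope: what a $100B investment must grow to over 20 years
# to match broad-market returns. Rates are rough assumptions.
invested = 100e9
years = 20

for annual_return in (0.10, 0.12):
    required = invested * (1 + annual_return) ** years
    print(f"at {annual_return:.0%}/year: ~${required / 1e9:.0f}B")
```

At ~10%/year the $100B would need to become roughly $670B; the trillion-dollar figure implies a return closer to 12%/year.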

But Altman himself has said he'll need much more investment in the coming years. And even if OAI became profitable by jacking up prices and flooding GPT with ads, the underlying technology is so commoditized that they'd never be able to achieve a high margin, assuming they can turn a profit at all.

  • safety1st 30 minutes ago

    The whole US economy is so deep into La-La Land at this point that they don't really need to be a good business. There are already murmurings that they may pull off a trillion-dollar IPO, and I don't see why they wouldn't; Amazon was making it cool to lose money hand over fist during your IPO as far back as 1997. They have the President willing to pump up their joint ventures with executive orders. We may just see tech become more like the financial industry, where a handful of companies are dubbed "too big to fail" based on political connections and get bailed out at the taxpayer's expense when things get too rough. None of these guys function according to the real rules of the economy, or even the legal system, at this point; they just make stuff up as they go along, and if they're big enough, or know someone big enough, they often get away with it.

  • usef- 11 hours ago

    People did say the same thing about Youtube, which was unprofitable and extremely expensive to run in the early years. I remember thinking everyone would leave once ads were added.

    At YouTube's ad income rate (~$13 per user per year), the current (but growing) ~800 million ChatGPT users would add ~$10 billion. At Facebook's rate (~$40-50 per user per year), $32-40 billion. Potentially, an assistant would be more integrated into your life than either of those two.
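    The arithmetic above can be checked directly (user count and per-user ad revenue figures are the comment's rough estimates, not official numbers):

```python
# Rough ad-revenue comparison: per-user annual ad revenue x user count.
users = 800e6              # ~800M ChatGPT users (comment's estimate)
youtube_arpu = 13          # ~$13/user/year (YouTube-like rate)
facebook_arpu = (40, 50)   # ~$40-50/user/year (Facebook-like rate)

print(f"YouTube-like: ~${users * youtube_arpu / 1e9:.1f}B/year")
lo, hi = (users * r / 1e9 for r in facebook_arpu)
print(f"Facebook-like: ~${lo:.0f}-{hi:.0f}B/year")
```

    That gives ~$10.4B/year at YouTube-like rates and $32-40B/year at Facebook-like rates, matching the figures above.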

    Audience retention is the key question, not whether they'd be profitable if they maintained their current audience. I've been surprised how many non-technical people I know don't want to try other models. "ChatGPT knows me".

    • adgjlsfhk1 9 hours ago

      the problem with the YouTube analogy is that media platforms have significant network effects that NN providers don't. OpenAI can't command a premium, because every year that goes by, the cost to train an equivalent model to theirs decreases.

      • usef- 9 hours ago

        Youtube didn't either at the time. The front page was widely seen as garbage, and everyone I knew watched videos because they were embedded or linked from external sites. "If they introduce ads, people will just switch to other video hosts, won't they?" Many of the cooler creators used Vimeo. It was the good recommendation algorithm, which came later, that I think allowed an actual network effect, and I don't remember people predicting that.

        The field is too young to know what will keep users, but there are definitely things that plausibly could create a lock-in effect. I mentioned one ("ChatGPT knows me"), which could grow over time as people share more of themselves with ChatGPT. There are also pilots of multi-person chats, and the social elements in Sora. Some people already feel compelled to stick with the "person" they're comfortable talking to. The chance of OpenAI finding something isn't zero.

    • hansmayer an hour ago

      Minor difference : YT does not cost literally a human trip to Mars and back to operate

    • bloppe 6 hours ago

      The network effects aren't the same. All the viewers watch youtube because it has all the content, and all the creators post on youtube because it has all the viewers.

      How can a model achieve this kind of stickiness? By "knowing you"? I don't think that's the same at all. Personally, one of the reasons I prefer Claude is that it doesn't pretend to know me. I can control the context better.

    • usef- 7 hours ago

      I suspect some of the downvoters hate the idea of ads, which is understandable.

      But a lot of HN users use gmail, which has the same model. And there are plenty of paid email providers which seem far less popular (I use one). Ads didn't end up being a problem for most people provided they were kept independent of the content itself.

      • skydhash 3 hours ago

        1. Gmail is free

        2. I’ve never seen ads on the Gmail webapp (though it certainly does data collection)

  • littlestymaar 15 hours ago

    I'd be a little bit more nuanced:

    I think there's something off with their plans right now: it's pretty clear at this point that they can't own the technological frontier, Google is just too close already and from a purely technological PoV they are much better suited to have the best tech in the medium term. (There's no moat and Google has way more data and compute available, and also tons of cash to burn without depending on external funding).

    But ChatGPT is an insane brand, and for most (free) customers I don't think model capabilities (aka “intelligence”) are that important. So if they stopped training frontier models right now and focused on driving their costs down by optimizing their inference compute budget while serving ads, they could make a lot of money from their user base.

    But that would probably mean losing most of their paying customers over the long run (companies won't keep buying mediocre tokens at a premium for long), and more importantly it would require abandoning the AGI bullshit narrative, which I'm not sure Altman is willing to do. (And even if he were, how to do that without collapsing from lack of liquidity due to investors feeling betrayed is an open question.)

    • array_key_first 12 hours ago

      Being an insane brand means literally nothing if people can trivially switch to competitors, which they can.

      There isn't even a tenth of enough money if you group together all of advertising. Like, the entire industry. Ads are a bad, bad plan that won't work. Advertising is also extremely overvalued. And even at its overvalued price tag, it's nowhere near enough.

      • whalee 11 hours ago

        People could trivially switch their search engine to Bing or Yahoo, but they don't.

        If ads are so overpriced, how big is your short position on Google? Also, ads are extremely inefficient in terms of conversion. Ads rendered by an intelligent, personalized system will be orders of magnitude more efficient, negating most of the "overvalue".

        I'm not saying they should serve ads. It's a terrible strategy for other reasons.

      • sophia01 11 hours ago

        It's Coca Cola vs Pepsi. Yes some might even say Pepsi has been shown to taste better, but people still buy loads of Coke.

        Of course the tech savvy enterprises will use the best models. But the plumber down the road doesn't care whether she asks Gemini or ChatGPT about the sizing of some fittings.

      • pjaoko 11 hours ago

        > Being an insane brand means literally nothing if people can trivially switch to competitors, which they can.

        Logically speaking, yes, it is easy to switch between OAI and Gemini, or Coke and Pepsi. But brand loyalty is more about emotions (comfort, familiarity, ...) than logical reasoning.

    • bloppe 14 hours ago

      The best way to drive inference cost down right now is to use TPUs. Either that or invest tons of additional money and manpower into silicon design like Google did, but they already have a 10 year lead there.

      • littlestymaar 5 hours ago

        > The best way to drive inference cost down right now is to use TPUs

        TPUs are cool, but the best leverage remains reducing your (active) parameter count.

    • TheOtherHobbes 14 hours ago

      Altman's main interest is Altman. ChatGPT will be acquihired, most people will be let go, the brand will become a shadow of its former self, and Altman will emerge with a major payday and no obvious dent in his self-made reputation as a leading AGI thinkfluencer, etc.

      I don't think ads are that easy, because the hard part of ads isn't taking money and serving up ad slop, it's providing convincing tracking and analytics.

      As soon as ad slop appears a lot of customers will run - not all, but enough to make monetisation problematic.

      • a_victorp 6 hours ago

        This! Most people who don't work in adtech have no idea how hard it is to: 1. Build a platform that offers new advertising inventory that advertisers can buy. 2. Convince advertisers to advertise on your platform. 3. Show advertisers that their advertising campaigns on your platform are more successful than on the several other places they can advertise.

    • po 8 hours ago

      as long as the business model is:

      - users want the best/smartest LLM

      - the best performance for inference is found by spending more and more tokens (deep thinking)

      - pricing is based on cost per token

      Then the inference providers/hyperscalers will take all of the margin available to app makers (and then hand it to Nvidia, apparently). It is a bad business to be in, and not viable for OpenAI at their valuation.

      • littlestymaar 5 hours ago

        What I'm saying is that I'm not sure the first point is true.

        I think they all have become sufficiently good for most people to stick to what they are used to (especially in terms of tone/“personality” + the memory shared between conversations).

    • riffraff 15 hours ago

      > But ChatGPT is an insane brand

      I mean, so was Netscape.

      • cmiles8 14 hours ago

        This. Netscape was THE browser in the early phases of the Internet. Then Microsoft just packaged IE into Windows and it was game over. The brand means nothing long term. If Google broadly incorporates Gemini into all the Google-owned things everyone already has then it’s game over for OpenAI.

        The mass commoditization of the tech is rapidly driving AI to be a feature, not a product. And Google is very strongly positioned to take advantage of that. Microsoft too, and of course they have a relationship with OpenAI but that’s fraying.

        • cruffle_duffle 10 hours ago

          To be completely fair, the later versions of Netscape were increasingly giant, bloated piles of crap, while IE slowly caught up and surpassed it in terms of speed and features. The first versions of IE were only good for downloading Netscape.

          Netscape, to a large degree, killed itself.

          Not to say IE turned into anything good, though. But it did have its heyday.

      • littlestymaar 15 hours ago

        Maybe, I was too young to remember that.

        • littlestymaar an hour ago

          What's up with the flock of downvotes? I'd never got a comment with so many as this one… Is being younger than 45 not allowed in here?

felixfurtak 16 hours ago

All of which you will be able to do with your bundled assistant in the not-too-distant future.

OpenAI is a basket case:

- Too expensive and inconvenient to compete with commoditized, bundled assistants (from Google/ Microsoft/Apple)

- Too closed to compete with cheap, customizable open-source models

- Too dependent on partners

- Too late to establish its own platform lock-in

It echoes what happened to:

- Netscape (squeezed by Microsoft bundling + open protocols)

- BlackBerry (squeezed by Apple ecosystem + open Android OS)

- Dropbox (squeezed by iCloud, Google Drive, OneDrive + open tools like rclone)

When you live between giants and open-source, your margin collapses from both sides.

  • deathhand 14 hours ago

    So why does Salesforce still prosper? They are just a fancy database.

    • felixfurtak 14 hours ago

      Good question. Salesforce does well because they provide the application layer to the data.

      The WWW in the 1990s was an explosion of data. To the casual observer, the web browser appeared to be the internet. But it wasn't, and in itself it could never make money (see Netscape). The internet was the data.

      The people who built the infrastructure for the WWW (WorldCom, Nortel, Cisco, etc.) found the whole enterprise to be an extremely loss-making activity. Many of them failed.

      Google succeeded because it provided an application layer of search that helped people to navigate the WWW and ultimately helped people make sense of it. It helped people to connect with businesses. Selling subtle advertising along the way is what made them successful.

      Facebook did the same with social media. It allowed people to connect with other people and monetized that.

      Over time, as they became more dominant, the advertising got less subtle and then the income really started to flow.

      Salesforce is similar in that it helps businesses connect with and do business with each other. They just use a subscription model, rather than advertising. This works because the businesses that use it can see a direct link to it and their profitability.

    • array_key_first 12 hours ago

      Because they lock you in. ChatGPT has no lock-in; in fact, none of the LLMs do, just because of how they work.

      Salesforce doesn't make a good product, and certainly not the best product. It doesn't matter, you don't need to if you can convince idiots with money to invest in you. And then the switching cost is too much, too late.

      That business model is a dying one, and all the software companies know it. That's why Microsoft has spent the last 15 years opening up their ecosystems. As automation increases, switching costs decrease. You can't rely on it.

    • jasondigitized 14 hours ago

      Because they locked-in a ton of enterprise customers and have an army of certified consultants who build custom solutions for you.

    • esafak 9 hours ago

      If it was 'just' a database, it would never have gotten off the ground. It is obviously not just a database; there is an application around it.

mrwrong 23 minutes ago

> I've been able to help setup cross app automation for my partner's business, remodel my house, plan a trip of Japan and assist with the cultural barrier, vibe code apps, technical support and so much more.

you could have done all of this without a chatbot.

hansmayer an hour ago

> It's a fun trope to repeat but that's not what OpenAI is doing.

This is literally what OpenAI is doing. They are bleeding cash, i.e. spending more than they earn. How useful it is to you is not relevant in the context of sustainability. You know what is also super useful to some people? Private yachts and jets. That does not mean they are good for society as a whole. But even leaving the holistic view aside for a moment: their business model is not sustainable unless they manage to convince politicians to declare them national infrastructure or something like that, and have taxpayers continue to finance them, which is what they already probed for in recent months. Out of interest, why would you want ChatGPT to plan your trip to Japan? Isn't planning it yourself part of the excitement?

tartoran 15 hours ago

There's no doubt you're getting a lot of value from OpenAI; I am too. And yes, the subscription is worth a lot more than what you pay for it. That's because they're burning investors' money, and that's not sustainable. Once the money runs out they'll have to jack up prices, and that's the moment of truth: we'll see what users are willing to pay for what. Google or another company may be able to provide all that much cheaper.

  • HDThoreaun 10 hours ago

    Inference is profitable for OpenAI as far as I can tell. They don't need to jack up prices much; what they really need is users who are paying or consuming ads. They're burning money on free-tier users and on data center expansion so they can serve more users.

    • NBJack 4 hours ago

      This assumes your model is static and never needs to be improved or updated.

      Inference is cheap because the final model, despite its size, is vastly less resource-intensive to use than it is to produce.

      ChatGPT in its latest form isn't bad by any means, but it is falling behind. Catching up requires significant overhead, both to train and to iterate on model architecture. It is often a variable cost as well.

      • HDThoreaun an hour ago

        As long as revenue rises faster than training costs, and inference remains profitable, I don't think this is an issue. Eventually they'll be able to profitably amortize training across all the users.

rglullis 16 hours ago

> They're not paying me to use it.

Of course they are.

> As long as the inference is not done at a loss.

If making money on inference alone was possible, there would be a dozen different smaller providers who'd be taking the open weights models and offering that as service. But it seems that every provider is anchored at $20/month, so you can bet that none of them can go any lower.

  • FeepingCreature 14 hours ago

    > If making money on inference alone was possible, there would be a dozen different smaller providers who'd be taking the open weights models and offering that as service.

    There are! Look through the provider list for some open model on https://openrouter.ai . For instance, DeepSeek 3.1 has a dozen providers. It would not make any sense to offer those below cost because you have neither moat nor branding.

  • threeducks 2 hours ago

    You need a certain level of batch parallelism to make inference efficient, but you also need enough capacity to handle request floods. Being a small provider is not easy.
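    The batching point can be illustrated with a toy cost model. All numbers here are hypothetical (a made-up GPU rental price and decode speed, not any provider's real figures), and for simplicity it ignores that per-sequence decode speed degrades as the batch grows, but the direction holds:

```python
# Illustrative only: why batch parallelism matters for inference cost.
# gpu_cost and token rate are hypothetical, not real provider figures.
gpu_cost_per_hour = 2.0        # hypothetical GPU rental price, $/hour
tokens_per_sec_per_seq = 50    # hypothetical decode speed per sequence

for batch_size in (1, 8, 64):
    tokens_per_hour = tokens_per_sec_per_seq * batch_size * 3600
    cost_per_m = gpu_cost_per_hour / tokens_per_hour * 1e6
    print(f"batch={batch_size:>2}: ~${cost_per_m:.2f} per million tokens")
```

    With these assumptions, serving one sequence at a time costs ~$11 per million tokens, while a full batch of 64 drops that to under $0.20, which is why a small provider without enough traffic to fill batches struggles on cost.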

  • dragonwriter 16 hours ago

    > If making money on inference alone was possible

    Maybe, but arguably a major reason you can't make money on inference right now is that the useful life of models is too short to amortize development costs over. There is so much investment in the field that everyone is developing new models (shortening useful life in a competitive market) while simultaneously driving up the cost of the inputs needed to develop them (increasing the costs that must be amortized over that short useful life). Perversely, the AI bubble popping and resolving those issues may make profitability much easier for the survivors that have strong revenue streams.

  • rprend 12 hours ago

    They do make money on inference.

  • HDThoreaun 10 hours ago

    The open models suck. AWS hosts them for less than closed models cost, but no one uses them, because they suck.

    • rglullis 3 hours ago

      It's not the open models that suck, it's the infrastructure around them. None of the current "open weights providers" have:

         - good tools for agentic workflows
         - tools for context management
         - infrastructure for input token caching

      These are solvable without having to pay anything to OpenAI/Anthropic/Google.

      • threeducks 2 hours ago

        Why would the open weights providers need their own tools for agentic workflows when you can just plug their OpenAI-compatible API URL into existing tools?

        Also, there are many providers of open source models with caching (Moonshot AI, Groq, DeepSeek, FireWorks AI, MiniMax): https://openrouter.ai/docs/guides/best-practices/prompt-cach...

        • rglullis 41 minutes ago

          > when you can just plug their OpenAI-compatible API URL into existing tools?

          Only the self-hosting diehards will bother with that. Those that want to compete with Claude Code, Gemini CLI, Codex et caterva will have to provide the whole package and do it at a price point that is competitive even at low volumes, which is hard because the big LLM providers are all subsidizing their offerings.

munk-a 16 hours ago

As a developer, ChatGPT doesn't hold a candle to Claude for coding-related tasks, and it underperforms for arbitrary-format document parsing[1]. It still has value and can handle a lot of tasks that would amaze someone in 2020, but it is simply falling behind, and spending much more while doing so.

1. It actually underperforms Claude, Gemini, and even some of the Grok models for accuracy in our use case of parsing PDFs and other rather arbitrarily formatted files.

cush 11 hours ago

> I get a ton of value from ChatGPT and Codex from my subscription

I think that’s what they’re saying. OpenAI is selling you a $1 product for $0.20.

Tokens are too cheap right now and nobody is working on a path to dial up the cost

  • esafak 9 hours ago

    Predictions are supposedly profitable, but not enough to amortize everything else. I don't see how they would justify their investments even if predictions cost them nothing.

mirthflat83 16 hours ago

Well, don't you think you're getting a ton of value because they're selling each of their dollars for 0.2 dollars?

jfb 15 hours ago

That the product is useful does not mean the supplier of the product has a good business; and of course, vice versa. OpenAI has a terrible business at the moment, and the question is, do they have a plausible path to a good one?

steveBK123 16 hours ago

If the subscription cost 5x as much would you still pay and feel you are getting such a great value?

  • atonse 11 hours ago

    I pay $100/month for Claude Max, and as I've said before, I would go up to $500 a month without hesitating for a second. I'd probably start to hesitate at $1,000, only cuz I know I wouldn't be able to use it enough to maximize that value. But I might still suck it up and pay for it (I don't use it enough yet to need the $200/month tier, but if I started hitting limits faster, I would upgrade), or at that point start looking for alternatives.

    It's worth that much to me in the time saved. But I'm a business owner, so I think the calculus might be quite different (since I can find ways to recoup those costs) from an individual, who pays out of their main income.

    I outlined examples of how I used CC/AI a couple months ago [1]. Since then I've used it even more, to help reduce our cloud bills.

    1: https://news.ycombinator.com/item?id=45382337

    • steveBK123 an hour ago

      Right I am sure some find it is worth 5-10x the cost.

      The challenge is that, if the numbers are accurate, they need 5-10x that to break even on inference compute costs, before getting into training costs and all the other overhead of running a company, like compensation.

      Will everyone be willing to pay 5-10x? Probably no.

      Will half of users pay 10-20x? Or a quarter pay 20x++?

      Or we end up with ads … which already seem to be in motion

    • mrweasel 26 minutes ago

      95% of ChatGPT users aren't paying customers; if they won't pay $10 per month, there's zero chance of them paying $100 or $500.

      That's not to say there aren't many people, like you, for whom $500 is a perfectly good deal; there just aren't nearly enough of them for OpenAI to ever turn a profit.

    • viking123 4 hours ago

      I mean Claude is good for business use-cases, other than that it's completely censored cuck garbage and the CEO is worse than the pope. With Grok you can actually roleplay without it wagging its finger at you. OH MY GOSH YOU SAID BOOB!

      Normies literally see no difference between GPT and Claude, just that Claude is much more expensive and CEO is even more of a dummie than Altman.

  • dosinga 15 hours ago

    If there are no free alternatives, yes. 100 USD a month for ChatGPT seems great value

PantaloonFlames 12 hours ago

You are mostly missing the point. You're saying you get value out of what OpenAI is offering you. That's not at issue here.

The question is, does OpenAI get value out of the exchange?

You touched on it ever so briefly: "as long as inference is not done at a loss". That is it, isn't it? Or more generally: as long as OpenAI is making money. But they are not.

There’s the rub.

It’s not only about whether you think giving them your money is a good exchange. It needs to be a good exchange for both sides, for the business to be viable.

ReptileMan 16 hours ago

> As long as the inference is not done at a loss, this analogy doesn't hold.

I think there was an article here that claimed even inference is done at a loss, per subscriber. I think it was about their $200 subscription.

In a way, we will soon be in a "deal with it" situation where they just impose metered pricing instead of subscriptions.

csomar 5 hours ago

That's not the parent's point, though. Their point is that comparable models are widely available and there are better competitors, so what's the point of ChatGPT? Maybe you decide to stick with ChatGPT for whatever reason, but people will move to cheaper and better alternatives.