bgirard 14 hours ago

It's a fun trope to repeat, but that's not what OpenAI is doing. I get a ton of value from ChatGPT and Codex from my subscription. As long as the inference is not done at a loss, this analogy doesn't hold. They're not paying me to use it. They are generating output that is very valuable to me, much more than my subscription cost.

I've been able to help set up cross-app automation for my partner's business, remodel my house, plan a trip to Japan and navigate the cultural barrier, vibe code apps, get technical support, and so much more.

  • bloppe 14 hours ago

    To be fair, I would get a ton of value out of someone selling dollars for 20 cents apiece.

    But ya, OAI is clearly making a ton of revenue. That doesn't mean it's a good business, though. Granting them a 20-year horizon, shareholders will be very upset unless the firm can deliver about a trillion in profit, not revenue, to justify the $100B (so far) in investment, and even that would barely beat the long-term S&P 500 average return.

    But Altman himself has said he'll need much more investment in the coming years. And even if OAI became profitable by jacking up prices and flooding gpt with ads, the underlying technology is so commodified, they'd never be able to achieve a high margin, assuming they can turn a profit at all.

    • usef- 9 hours ago

      People said the same thing about YouTube, which was unprofitable and extremely expensive to run in its early years. I remember thinking everyone would leave once ads were added.

      At YouTube's per-user ad revenue rate (~$13/year), the current (but growing) ~800 million ChatGPT users would add ~$10 billion. At Facebook's rate (~$40-50/year), $32-40 billion. Potentially, an assistant would be more integrated into your life than either of those two.
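
      A quick sanity check of that arithmetic (the user count and per-user rates are the rough figures above, not audited numbers):

```python
# Back-of-envelope ad revenue at different per-user rates.
# All inputs are rough assumptions from the comment above, not official figures.
users = 800_000_000          # ~800M ChatGPT users (approximate, growing)

youtube_rate = 13            # ~$13 ad revenue per user per year
fb_low, fb_high = 40, 50     # ~$40-50 per user per year

print(f"YouTube rate:  ${users * youtube_rate / 1e9:.1f}B/year")              # ~$10.4B
print(f"Facebook rate: ${users * fb_low / 1e9:.0f}-{users * fb_high / 1e9:.0f}B/year")  # ~$32-40B
```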

      The "audience retention" is the key question, not the profitability if they maintain their current audience. I've been surprised how many non-technical people I know don't want to try other models. "ChatGPT knows me".

      • adgjlsfhk1 7 hours ago

        The problem with the YouTube analogy is that media platforms have significant network effects that NN providers don't. OpenAI can't command a premium because, with every year that goes by, the cost to train a model equivalent to theirs decreases.

      • bloppe 4 hours ago

        The network effects aren't the same. All the viewers watch youtube because it has all the content, and all the creators post on youtube because it has all the viewers.

        How can a model achieve this kind of stickiness? By "knowing you"? I don't think that's the same at all. Personally, one of the reasons I prefer Claude is that it doesn't pretend to know me. I can control the context better.

      • usef- 5 hours ago

        I suspect some of the downvoters hate the idea of ads, which is understandable.

        But a lot of HN users use Gmail, which has the same model. And there are plenty of paid email providers that seem far less popular (I use one). Ads didn't end up being a problem for most people, provided they were kept separate from the content itself.

        • skydhash an hour ago

          1. Gmail is free

          2. I've never seen ads in the Gmail web app (it sure does collect data, though)

    • littlestymaar 13 hours ago

      I'd be a little bit more nuanced:

      I think there's something off with their plans right now: it's pretty clear at this point that they can't own the technological frontier. Google is already too close, and from a purely technological PoV it is much better positioned to have the best tech in the medium term. (There's no moat, and Google has far more data and compute available, plus tons of cash to burn without depending on external funding.)

      But ChatGPT is an insane brand, and for most (free) customers I don't think model capabilities (aka “intelligence”) are that important. So if they stopped training frontier models right now and focused on driving their costs down by optimizing their inference compute budget while serving ads, they could make a lot of money from their user base.

      But that would probably mean losing most of their paying customers over the long run (companies won't keep buying mediocre tokens at a premium), and more importantly it would require abandoning the AGI bullshit narrative, which I'm not sure Altman is willing to do. (And even if he were, how to do that without collapsing from lack of liquidity as investors feel betrayed is an open question.)

      • array_key_first 10 hours ago

        Being an insane brand means literally nothing if people can trivially switch to competitors, which they can.

        There isn't a tenth of enough money even if you group together all of advertising. Like, the entire industry. Ads are a bad, bad plan that won't work. Advertising is also extremely overvalued, and even at its overvalued price tag, it's nowhere near enough.

      • bloppe 12 hours ago

        The best way to drive inference cost down right now is to use TPUs. Either that, or invest tons of additional money and manpower into silicon design like Google did, but Google already has a 10-year lead there.

        • littlestymaar 3 hours ago

          > The best way to drive inference cost down right now is to use TPUs

          TPUs are cool, but the best leverage remains reducing your (active) parameter count.

      • TheOtherHobbes 12 hours ago

        Altman's main interest is Altman. ChatGPT will be acquihired, most people will be let go, the brand will become a shadow of its former self, and Altman will emerge with a major payday and no obvious dent in his self-made reputation as a leading AGI thinkfluencer, etc.

        I don't think ads are that easy, because the hard part of ads isn't taking money and serving up ad slop, it's providing convincing tracking and analytics.

        As soon as ad slop appears, a lot of customers will run. Not all, but enough to make monetisation problematic.

        • a_victorp 4 hours ago

          This! Most people who don't work in adtech have no idea how hard it is to:

          1. Build a platform that offers new advertising inventory that advertisers can buy

          2. Convince advertisers to advertise on your platform

          3. Show advertisers that their advertising campaigns on your platform are more successful than in the several other places they can advertise

      • po 6 hours ago

        As long as the business model is:

        - users want the best/smartest LLM

        - the best performance for inference is found by spending more and more tokens (deep thinking)

        - pricing is based on cost per token

        Then the inference providers/hyperscalers will capture all of the margin available to app makers (and then hand it to Nvidia, apparently). It is a bad business to be in, and not viable for OpenAI at their valuation.

        • littlestymaar 3 hours ago

          What I'm saying is that I'm not sure the first point is true.

          I think they have all become sufficiently good that most people stick with what they are used to (especially in terms of tone/“personality”, plus the memory shared between conversations).

      • riffraff 13 hours ago

        > But ChatGPT is an insane brand

        I mean, so was Netscape.

  • felixfurtak 14 hours ago

    All of which you will be able to do with your bundled assistant in the not-too-distant future.

    OpenAI is a basket case:

    - Too expensive and inconvenient to compete with commoditized, bundled assistants (from Google/ Microsoft/Apple)

    - Too closed to compete with cheap, customizable open-source models

    - Too dependent on partners

    - Too late to establish its own platform lock-in

    It echoes what happened to:

    - Netscape (squeezed by Microsoft bundling + open protocols)

    - BlackBerry (squeezed by Apple ecosystem + open Android OS)

    - Dropbox (squeezed by iCloud, Google Drive, OneDrive + open tools like rclone)

    When you live between giants and open-source, your margin collapses from both sides.

    • deathhand 12 hours ago

      So why does Salesforce still prosper? They are just a fancy database.

      • felixfurtak 12 hours ago

        Good question. Salesforce does well because they provide the application layer to the data.

        The WWW in the 1990s was an explosion of data. To the casual observer, the web browser appeared to be the internet. But it wasn't, and in itself it could never make money (see Netscape). The internet was the data.

        The people who built the infrastructure for the WWW (Worldcom, Nortel, Cisco, etc.) found the whole enterprise to be an extremely loss-making activity. Many of them failed.

        Google succeeded because it provided an application layer of search that helped people to navigate the WWW and ultimately helped people make sense of it. It helped people to connect with businesses. Selling subtle advertising along the way is what made them successful.

        Facebook did the same with social media. It allowed people to connect with other people and monetized that.

        Over time, as they became more dominant, the advertising got less subtle and then the income really started to flow.

        Salesforce is similar in that it helps businesses connect with and do business with each other. They just use a subscription model rather than advertising. This works because the businesses that use it can see a direct link between it and their profitability.

      • array_key_first 10 hours ago

        Because they lock you in. ChatGPT has no lock-in; in fact, none of the LLMs do, just because of how they work.

        Salesforce doesn't make a good product, and certainly not the best product. It doesn't matter; you don't need to if you can convince idiots with money to invest in you. And then the switching cost is too high, too late.

        That business model is a dying one, and all the software companies know it. That's why Microsoft has spent the last 15 years opening up its ecosystems. As automation increases, switching costs decrease. You can't rely on them.

      • jasondigitized 12 hours ago

        Because they locked-in a ton of enterprise customers and have an army of certified consultants who build custom solutions for you.

      • esafak 7 hours ago

        If it were 'just' a database, it would never have gotten off the ground. It is obviously not just a database; there is an application around it.

  • tartoran 13 hours ago

    There's no doubt you're getting a lot of value from OpenAI; I am too. And yes, the subscription is worth a lot more than what you pay for it. That's because they're burning investors' money, and it's not sustainable. Once the money runs out they'll have to jack up prices, and that's the moment of truth: we'll see what users are willing to pay for what. Google or another company may be able to provide all of that much cheaper.

    • HDThoreaun 8 hours ago

      Inference is profitable for OpenAI as far as I can tell. They don't need to jack up prices much; what they really need is users who are paying or consuming ads. They're burning money on free-tier users and on data center expansion so they can serve more users.

      • NBJack 2 hours ago

        This assumes your model is static and never needs to be improved or updated.

        Inference is cheap because the final model, despite its size, is ridiculously less resource intensive to use than it is to produce.

        ChatGPT in its latest form isn't bad by any means, but it is falling behind. Catching up requires significant overhead, both to train and to iterate on model architecture, and it is often a variable cost as well.

  • rglullis 14 hours ago

    > They're not paying me to use it.

    Of course they are.

    > As long as the inference is not done at a loss.

    If making money on inference alone were possible, there would be a dozen smaller providers taking the open-weights models and offering them as a service. But it seems every provider is anchored at $20/month, so you can bet that none of them can go any lower.

    • threeducks 31 minutes ago

      You need a certain level of batch parallelism to make inference efficient, but you also need enough capacity to handle request floods. Being a small provider is not easy.
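
      A toy sketch of the batching economics (every number is an invented assumption, not a real provider's cost): a GPU's hourly rental cost is roughly fixed, so cost per token falls with batch size until memory or latency limits bite. This is why a small provider with thin traffic struggles to hit the efficient regime.

```python
# Toy cost-per-token model: fixed hourly GPU cost amortized over a batch.
# All numbers are illustrative assumptions, not measured figures.
GPU_COST_PER_HOUR = 2.00      # hypothetical GPU rental price, $/hour
TOKENS_PER_SEC_ONE_REQ = 50   # hypothetical decode speed for a single request

def cost_per_million_tokens(batch_size: int, efficiency: float = 0.8) -> float:
    """Cost ($) per 1M output tokens when `batch_size` requests share the GPU.

    `efficiency` crudely models batching overhead (memory pressure,
    ragged batches), so throughput scales sublinearly with batch size.
    """
    throughput = TOKENS_PER_SEC_ONE_REQ * batch_size * efficiency  # tokens/sec
    tokens_per_hour = throughput * 3600
    return GPU_COST_PER_HOUR / tokens_per_hour * 1_000_000

for b in (1, 8, 64):
    print(f"batch={b:3d}: ${cost_per_million_tokens(b):.2f} per 1M tokens")
```

      Under these made-up numbers, a lone request costs tens of dollars per million tokens while a well-filled batch costs cents, which is the capacity-vs-utilization bind the comment describes.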

    • FeepingCreature 12 hours ago

      > If making money on inference alone was possible, there would be a dozen different smaller providers who'd be taking the open weights models and offering that as service.

      There are! Look through the provider list for some open model on https://openrouter.ai . For instance, DeepSeek 3.1 has a dozen providers. It would not make any sense to offer those below cost because you have neither moat nor branding.

    • dragonwriter 14 hours ago

      > If making money on inference alone was possible

      Maybe, but arguably a major reason you can't make money on inference right now is that the useful life of models is too short, so development costs can't be amortized over much time. There is so much investment in the field that everyone is developing new models (shortening useful life in a competitive market) while simultaneously driving up the costs of the inputs needed to develop them (increasing the costs that have to be amortized over that short useful life). Perversely, the AI bubble popping and resolving those issues may make profitability much easier for survivors with strong revenue streams.
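
      The amortization squeeze can be sketched with made-up numbers (purely illustrative, not OpenAI's actual costs): the same training bill gets far harder to recoup as the model's useful life shrinks.

```python
# Toy amortization model for a frontier model. All figures are invented
# assumptions to illustrate the argument above, not real OpenAI numbers.
def required_monthly_margin(training_cost: float, useful_life_months: int) -> float:
    """Inference gross margin ($/month) needed just to recoup training cost."""
    return training_cost / useful_life_months

TRAINING_COST = 1_000_000_000  # hypothetical $1B to train a frontier model

for months in (36, 18, 9):
    margin_m = required_monthly_margin(TRAINING_COST, months) / 1e6
    print(f"useful life {months:2d} months -> ${margin_m:.0f}M/month just to break even")
```

      Halving the useful life doubles the monthly margin required, before any compute-cost inflation from competitors bidding up the same inputs.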

    • rprend 10 hours ago

      They do make money on inference.

    • HDThoreaun 8 hours ago

      The open models suck. AWS hosts them for less than closed models cost, but no one uses them, because they suck.

      • rglullis 2 hours ago

        It's not the open models that suck, it's the infrastructure around them. None of the current "open weights" providers have:

           - good tools for agentic workflows
   - tools for context management
           - infrastructure for input token caching
        
        These are solvable without having to pay anything to OpenAI/Anthropic/Google.

  • cush 9 hours ago

    > I get a ton of value from ChatGPT and Codex from my subscription

    I think that’s what they’re saying. OpenAI is selling you a $1 product for $0.2

    Tokens are too cheap right now and nobody is working on a path to dial up the cost

    • esafak 7 hours ago

      Predictions are supposedly profitable but not enough to amortize everything else. I don't see how they would justify their investments even if predictions cost them nothing.

  • munk-a 14 hours ago

    As a developer: ChatGPT doesn't hold a candle to Claude for coding-related tasks, and it underperforms for arbitrary-format document parsing[1]. It still has value and can handle a lot of tasks that would amaze someone in 2020, but it is simply falling behind while spending much more to do so.

    1. It actually underperforms Claude, Gemini, and even some of the Grok models in accuracy for our use case of parsing PDFs and other rather arbitrarily formatted files.

  • mirthflat83 14 hours ago

    Well, don't you think you're getting a ton of value because they're selling each of their dollars for 0.2 dollars?

  • jfb 13 hours ago

    That the product is useful does not mean the supplier of the product has a good business; and of course, vice versa. OpenAI has a terrible business at the moment, and the question is, do they have a plausible path to a good one?

  • steveBK123 14 hours ago

    If the subscription cost 5x as much would you still pay and feel you are getting such a great value?

    • atonse 9 hours ago

      I pay $100/month for Claude Max, and as I've said before, I would go up to $500 a month without hesitating for a second. I'd probably start to hesitate at $1,000, only cuz I know I wouldn't be able to use it enough to maximize that value. But I might still suck it up and pay (I don't use it enough yet to need the $200/month tier, but if I started hitting limits faster, I would upgrade), or at that point start looking for alternatives.

      It's worth that much to me in the time saved. But I'm a business owner, so I think the calculus might be quite different (since I can find ways to recoup those costs) from an individual, who pays out of their main income.

      I outlined examples of how I used CC/AI a couple months ago [1]. Since then I've used it even more, to help reduce our cloud bills.

      1: https://news.ycombinator.com/item?id=45382337

      • viking123 2 hours ago

        I mean, Claude is good for business use cases; other than that it's completely censored cuck garbage and the CEO is worse than the pope. With Grok you can actually roleplay without it wagging its finger at you. OH MY GOSH YOU SAID BOOB!

        Normies literally see no difference between GPT and Claude, just that Claude is much more expensive and its CEO is even more of a dummy than Altman.

    • dosinga 13 hours ago

      If there are no free alternatives, yes. 100 USD a month for ChatGPT seems great value

  • PantaloonFlames 10 hours ago

    You are mostly missing the point. You're saying you get value out of what OpenAI is offering you. That's not at issue here.

    The question is, does OpenAI get value out of the exchange?

    You touched on it ever so briefly: “as long as inference is not done at a loss”. That is it, isn't it? Or more generally: as long as OpenAI is making money. But they are not.

    There’s the rub.

    It’s not only about whether you think giving them your money is a good exchange. It needs to be a good exchange for both sides, for the business to be viable.

  • ReptileMan 14 hours ago

    > As long as the inference is not done at a loss this analogy doesn't hold.

    I think there were some articles here claiming that even inference is done at a loss, per subscriber. I think it was for their $200 subscription.

    In a way, we will soon be in a "deal with it" situation where they just impose metered pricing instead of subscriptions.

  • csomar 3 hours ago

    That's not the parent's point, though. Their point is that if equivalent models are widely available and there are better competitors, then what's the point of ChatGPT? Maybe you decide to stick with ChatGPT for whatever reason, but people will move to cheaper and better alternatives.

umanwizard 13 hours ago

This analogy only really works for companies whose gross margin is negative, which as far as I know isn’t the case for OpenAI (though I could be wrong).

It’s an especially good analogy if there is no plausible path to positive gross margin (e.g. the old MoviePass) which I think is even less likely to be true for OpenAI.

  • techblueberry 13 hours ago

    Why is it that I feel like your confidence in OpenAI's path to profitability exceeds Sam Altman's?

    • umanwizard 12 hours ago

      I'm not confident at all. I didn't say "there is definitely a path". I said the existence of such a path is plausible. I'm sure Sam Altman believes that too, or he'd have changed jobs ages ago.

eli_gottlieb 13 hours ago

We should perhaps say "profit" when we are talking about revenue minus cost, and "revenue" when we only mean the first term in the subtraction.

postflopclarity 14 hours ago

very clever! I hadn't seen anybody make this point before in any of these threads /s

obviously the nature of OpenAI's revenue is very different from selling $1 for $0.2, because their customers are buying an actual service, not anything with resale value or obviously fungible for dollars

  • runako 13 hours ago

    FWIW, "selling $1 for $0.2" is widely applied to any business that is selling goods below cost.

    For example: free shipping at Amazon has no resale value and is not obviously fungible, but everyone understands Amazon is eating a cost that would otherwise be borne by its customers. The suggestion is that OpenAI is doing something similar, though it is harder to tease out because their books are opaque.

  • array_key_first 10 hours ago

    They're not selling a service, they're selling access to a service. You can access a more or less equivalent service from multiple companies.

    The value of an LLM isn't the LLM itself; that's entirely, 100% fungible. The value is exclusively what it produces.

    If other people can produce the same thing, your LLM value approaches 0.

    • rprend 9 hours ago

      They sell a product, not a model. ChatGPT is a product, GPT5 is a technology.

      If you hope that ChatGPT will be worthless because the underlying technology will commodify, then you are naive and will be disappointed.

      If that logic made sense, why has it never happened before? Servers and computers have been commodified for decades! Salesforce is just a database, social media is just a relational database, Uber is just a GPS wrapper, AWS is just a server.

      People pay money, set up subscriptions, and download apps to solve a problem, and once they solve that problem they rarely switch. ChatGPT is the fifth most visited website in the world! Facebook and DeepSeek releasing open-source models means you can make your own ChatGPT, just like you can make your own Google, and nobody will use it, just like nobody uses the dozens of “better” search engines out there.

m3kw9 9 hours ago

You sell a dollar for 1 penny; they sell it for more like 70. Different skill level.

signatoremo 14 hours ago

Can you? What are you selling? Who are you and why should I believe in you? What would I get in return?

  • stavros 14 hours ago

    He can. He's selling dollars. He's a person who sells dollars for fewer dollars. You'd get dollars.