stephc_int13 17 hours ago

Everyone is trying to compare AI companies with something that happened in the past, but I don't think we can predict much from that.

GPUs are not railroads or fiber optics.

The cost structure of ChatGPT and other LLM-based services is entirely different from the web: they are very expensive to build, but they also cost a lot to serve.

Companies like Meta, Microsoft, Amazon, and Google would all survive if their massive investments do not pay off.

On the other hand, OpenAI, Anthropic, and others could soon find themselves in a difficult position and be at the mercy of Nvidia.

  • wood_spirit 17 hours ago

    Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027. It won’t retain much value the way the infrastructure of previous bubbles did, will it?

    • christina97 16 hours ago

      The A100 came out 5.5 years ago and is still the staple for many AI/ML workloads. Even AI hardware just doesn’t depreciate that quickly.

      • Ianjit 4 hours ago

        Users are waiting for Blackwell. Then Rubin. CRWV depreciates GPUs over 6 years. Rails last a lot longer.

      • littlestymaar 3 hours ago

        Unless you care about FLOP/Watt, which big players definitely do.

      • oblio 10 hours ago

        Don't they degrade physically from being run at full blast 24/7 for so many years?

      • dzhiurgis 14 hours ago

        This. There’s even a market for them being built (DRW).

    • layoric 17 hours ago

      > Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.

      I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8-bit/4-bit precision, and I believe yields are getting harder to achieve as things get much smaller.

      Betting against compute getting better/cheaper/faster is probably a bad idea, but I think fundamental improvements will come a lot slower over the next decade as shrinking gets a lot harder.

      • palmotea 16 hours ago

        >> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.

        > I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8-bit/4-bit precision, and I believe yields are getting harder to achieve as things get much smaller.

        I'm no expert, but my understanding is that as feature sizes shrink, semiconductors become more prone to failure over time. Those GPUs probably aren't going to all fry themselves in two years, but even if GPUs stagnate, chip longevity may limit the medium/long term value of the (massive) investment.

      • spiderice 17 hours ago

        Unfortunately changing 2027 to 2030 doesn't make the math much better

      • skywhopper 16 hours ago

        Unfortunately the chips themselves probably won’t physically last much longer than that under the workloads they are being put to. So, yes, they won’t be totally obsolete as technology in 2028, but they may still have to be replaced.

      • [removed] 7 hours ago
        [deleted]
    • potatolicious 17 hours ago

      Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending are ridiculously long.

      Effectively every single H100 in existence now will be e-waste in 5 years or less. Not exactly railroad infrastructure here, or even dark fiber.

      • 9rx 16 hours ago

        > Yep, we are (unfortunately) still running on railroad infrastructure built a century ago.

        That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There were, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.

        When we look back in 100 years, the total amortization cost for the "winner" won’t look so bad. The “picks and axes” (i.e. H100s) that soon wore down but were needed to build the grander vision won’t even be a second thought in hindsight.

      • hyperbovine 13 hours ago

        > Effectively every single H100 in existence now will be e-waste in 5 years or less.

        This is definitely not true, the A100 came out just over 5 years ago and still goes for low five figures used on eBay.

      • fooker 16 hours ago

        > Effectively every single H100 in existence now will be e-waste in 5 years or less.

        This remains to be seen. H100 is 3 years old now, and is still the workhorse of all the major AI shops. When there's something that is obviously better for training, these are still going to be used for inference.

        If what you say is true, you could find an A100 for cheap/free right now. But check out the prices.

      • Spooky23 14 hours ago

        How was your trip down the third Avenue El? Did your goods arrive via boxcar to 111 8th Ave?

        • selimthegrim 11 hours ago

          At the rate they are throwing obstacles at the promised subway (the one they got rid of the 3rd Ave El for), maybe his/her grandkids will finish the trip.

      • SJC_Hacker 16 hours ago

        > Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending is ridiculously long.

        Are we? I was under the impression that the tracks degraded due to stresses like heat/rain/etc. and had to be replaced periodically.

    • mcswell 13 hours ago

      "...all the best compute in 2025 will be lacklustre in 2027": How does the compute (I assume you mean on PCs) of 2025 compare with the compute of 2023?

      Oh wait, the computer I'm typing this on was manufactured in 2020...

      • brianwawok 13 hours ago

        Neato. How’s that 1999-era laptop? Because 25-year-old trains are still running and 25-year-old train track is still almost new. It’s not the same and you know it.

        • 1oooqooq 11 hours ago

          Last month HN was talking about a Win95 machine with floppy drives handling rail in Germany, no less.

    • Spooky23 14 hours ago

      Unlike 1875, we have Saudi and other trillionaires/billionaires willing to commit almost any amount to own the future of business.

      • rchaud 13 hours ago

        Except they behave less like shrewd investors and more like bandwagon jumpers looking to buy influence or get rich quick. Crypto, Twitter, ridesharing, office sharing and now AI. None of these have been the future of business.

        Business looks a lot like what it has throughout history. Building physical transport infrastructure, trade links, improving agricultural and manufacturing productivity and investing in military advancements. In the latter respect, countries like Turkey and Iran are decades ahead of Saudi in terms of building internal security capacity with drone tech for example.

        • Spooky23 13 hours ago

          Agreed - I don’t think they are particularly brilliant as a category. Hereditary kleptocracy has limits.

          But… I don’t think there’s an example in modern history of this much capital moving around based on whim.

          The “bet on red” mentality has produced some odd leaders with absolute authority in their domain. One of the most influential figures on the US government claims to believe that he is saving society from the antichrist. Another thinks he’s the protagonist in a sci-fi novel.

          We have the madness of monarchy with modern weapons and power. Yikes.

    • Analemma_ 17 hours ago

      Exactly: when was the last time you used ChatGPT-3.5? Its value depreciated to zero after, what, two-and-a-half years? (And the Nvidia chips used to train it have barely retained any value either.)

      The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.

      • falcor84 17 hours ago

        I would think that it's more like a general codebase - even if after 2.5 years, 95 percent of the lines were rewritten, and even if the whole thing was rewritten in a different language, there is no point in time at which its value diminished, as you arguably couldn't have built the new version without all the knowledge (and institutional knowledge) from the older version.

        • spwa4 17 hours ago

          I rejoined a previous employer of mine, someone everyone here knows ... and I found that half their networking equipment is still being maintained by code I wrote in 2012-2014. It has not been rewritten. Hell, I rewrote a few parts that badly needed it despite joining another part of the company.

      • tim333 15 hours ago

        OpenAI is now valued at $500bn though. I doubt the investors are too wrecked yet.

        It may be like looking at the early Google and saying they're spending loads on compute and haven't even figured out how to monetize search, so the investors are doomed.

      • CompoundEyes 13 hours ago

        I really did, a few days ago. gpt-3.5-fast is a great model for certain tasks and cost-wise via the API. Lots of solutions being built on today’s latest are for tomorrow’s legacy model — if it works, just pin the version.

      • fooker 16 hours ago

        > And the Nvidia chips used to train it have barely retained any value either

        Oh, I'd love to get a cheap H100! Where can I find one? You'll find it costs almost as much used as it does new.

      • cj 17 hours ago

        > money on fire forever just to jog in place.

        Why?

        I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?

        I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.

        We don't need better models for the vast majority of chats taking place today. E.g. kids using it for help with homework - are today's models really not good enough?

      • mattmanser 17 hours ago

        But is it a bit like a game of musical chairs?

        At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.

  • conartist6 14 hours ago

    It's not that the investments just won't pay off, it's that the global markets are likely to crash, like what happened with the subprime mortgage crisis.

    • vitaflo 13 hours ago

      This is much closer to the dotcom boom than the subprime stuff. The dotcom boom/bust affected tech more than anything else. It didn’t involve consumers like the housing crash did.

      • CodingJeebus 3 hours ago

        We are starting to see larger economic exposure to AI.

        Banks are handing out huge loans to the neocloud companies that are being collateralized with GPUs. These loans could easily go south if the bottom falls out of the GPU market. Hopefully it’s a very small amount of liquidity tied up in those loans.

        Tech stocks make up a significant part of the stock market now. Where the tech stocks go, the market will follow. Everyday consumers invested in index funds will definitely see a hit to their portfolios if AI busts.

      • bobxmax 13 hours ago

        The dot com boom involved silly things like Pets.com IPOing pre-revenue. Claude code hit $500m in ARR in 3 months.

        The fact people don't see the difference between the two is unreal. Hacker news has gone full r* around this topic, you find better nuance even on Reddit than here.

      • digdugdirk 13 hours ago

        But it does involve a ton of commercial real estate investment, as well as a huge shakeup in the energy market. People may not lose their homes, but we'll all be paying for this one way or another.

    • mothballed 14 hours ago

      The Fed could still push the real value of stocks quite a bit by destroying the USD, if they want, by pinning interest rates near 0 and forcing a rush out of cash into stocks and other asset classes.

      • mcny 13 hours ago

        The point still stands though. All these other companies can pivot to something else if AI fails, but what will OpenAI do?

  • JCM9 17 hours ago

    Businesses are different but the fundamentals of business and finance stay consistent. In every bubble that reality is unavoidable, no matter how much people say/wish “but this time is different.”

  • redwood 14 hours ago

    I'm reminded of the quote "If you owe the bank $100 that's your problem. If you owe the bank $100 million, that's the bank's problem." - J. Paul Getty

    Nvidia may well be at the mercy of them! Hence the recent circular dealing

  • 01100011 9 hours ago

    The one thing smaller companies might have is allocated power budgets from power companies. Part of the mad dash to build datacenters right now is just to claim the power so your competitors can't. Now I do think the established players hold an edge here, but I don't think OpenAI/Anthropic/etc are without some bargaining power(hah).

  • bee_rider 13 hours ago

    The past/present business they remind me of the most is semiconductor fabs. Significant generation-to-generation R&D investment, significant hardware and infrastructure investment, quite winner-takes-all on the high end, obsoleted in a couple years at most.

    The main differences are that these models are early in their development curve, so the jumps are much bigger; they are entirely digital, so they get “shipped” much faster; and open weights seem to be possible. None of those factors seem to make it a more attractive business to be in.

  • LarsDu88 14 hours ago

    If you build the actual datacenter, less than half the cost is the compute itself. The other half is the datacenter infrastructure, power infrastructure, and cooling.

    So in that sense it's not that much different from Meta and Google which also used server infrastructure that depreciated over time. The difference is that I believe Meta and Google made money hand over fist even in their earliest days.

    • Lalo-ATX 12 hours ago

      Last time I ran the numbers:

      Data center facilities are ~$10k per kW

      IT gear is like $20k-$50k per kW

      Data center gear is good for 15-30 years. IT is like 2-6ish.

      Would love to see updated numbers. Got any?
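
      A rough annualized read of those numbers (the lifetimes below are midpoints of the ranges above, purely illustrative, not audited figures):

        # Back-of-the-envelope cost per kW-year from the figures above.
        facility_capex_per_kw = 10_000                               # $/kW, data center shell, power, cooling
        it_capex_per_kw_low, it_capex_per_kw_high = 20_000, 50_000   # $/kW, servers/GPUs
        facility_life_years = 20                                     # midpoint of the quoted 15-30 years
        it_life_years = 4                                            # midpoint of the quoted 2-6 years

        print(f"facility: ~${facility_capex_per_kw / facility_life_years:,.0f} per kW-year")  # ~$500
        print(f"IT gear:  ~${it_capex_per_kw_low / it_life_years:,.0f}-"
              f"${it_capex_per_kw_high / it_life_years:,.0f} per kW-year")                    # ~$5,000-$12,500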

  • EasyMark 11 hours ago

    In the end, Revenues > Costs or you have an issue. That "startup" money will eventually be gone, and you're back to MIMO (Money In vs Money Out), and if it's not >, you will go bankrupt.

  • lossolo 17 hours ago

    The funniest thing about all this is that the biggest difference between LLMs from Anthropic, Google, OpenAI, and Alibaba is not model architecture or training objectives, which are broadly similar, but the dataset. What people don't realize is how much of that data comes from massive undisclosed scrapes + synthetic data + countless hours of expert feedback shaping the models. As methodologies converge, the performance gap between these systems is already narrowing and will continue to diminish over time.

  • yieldcrv 17 hours ago

    Just because they have ongoing costs after purchasing them doesn't mean it's different from something we've seen before. What are you trying to articulate exactly? That this is a simple business that can get costs under control eventually, or not?

simonw 18 hours ago

I think the most interesting numbers in this piece (ignoring the stock compensation part) are:

$4.3 billion in revenue - presumably from ChatGPT customers and API fees

$6.7 billion spent on R&D

$2 billion on sales and marketing - anyone got any idea what this is? I don't remember seeing many ads for ChatGPT but clearly I've not been paying attention in the right places.

Open question for me: where does the cost of running the servers used for inference go? Is that part of R&D, or does the R&D number only cover servers used to train new models (and presumably their engineering staff costs)?

  • bfirsh 17 hours ago

    Free usage usually goes in sales and marketing. It's effectively a cost of acquiring a customer. This also means it is considered an operating expense rather than a cost of goods sold and doesn't impact your gross margin.

    Compute in R&D will be only training and development. Compute for inference will go under COGS. COGS is not reported here but can probably be, um, inferred by filling in the gaps on the income statement.

    (Source: I run an inference company.)
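
    As a rough illustration of that gap-filling, using the H1 figures reported in this thread and assuming the stock comp sits inside operating expenses (the COGS/G&A split isn't disclosed, so they're lumped together here):

      # All amounts in $ billions, H1 2025, as reported in the article/thread.
      revenue = 4.3
      r_and_d = 6.7
      sales_marketing = 2.0
      stock_comp = 2.5
      operating_loss = 7.8            # reported operating loss, as a positive number

      # operating_income = revenue - cogs - opex  =>  solve for the missing piece
      opex = r_and_d + sales_marketing + stock_comp
      implied_cogs_plus_ga = revenue - opex + operating_loss

      print(f"implied COGS + G&A: ~${implied_cogs_plus_ga:.1f}B")                       # ~$0.9B
      print(f"implied gross margin: ~{(revenue - implied_cogs_plus_ga) / revenue:.0%}") # ~79%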

    • singron 9 hours ago

      I think it makes the most sense this way, but I've seen it accounted for in other ways. E.g. if free users produce usage data that's valuable for R&D, then they could allocate a portion of the costs there.

      Also, if the costs are split, there usually has to be an estimation of how to allocate expenses. E.g. if you lease a datacenter that's used for training as well as paid and free inference, then you have to decide a percentage to put in COGS, S&M, and R&D, and there is room to juice the numbers a little. Public companies are usually much more particular about tracking this, but private companies might use a proxy like % of users that are paid.

      OpenAI has not been forthcoming about their financials, so I'd look at any ambiguity with skepticism. If it looked good, they would say it.

  • adamhartenz 17 hours ago

    Marketing != advertising. Although this budget probably does include some traditional advertising. It is most likely about building the brand and brand awareness, as well as partnerships etc. I would imagine the sales team is probably quite big, and host all kinds of events. But I would say a big chunk of this "sales and marketing" budget goes into lobbying and government relations. And they are winning big time on that front. So it is money well spent from their perspective (although not from ours). This is all just an educated guess from my experience with budgets from much smaller companies.

    • echelon 17 hours ago

      I agree - they're winning big and booking big revenue.

      If you discount R&D and "sales and marketing", they've got a net loss of "only" $500 million.

      They're trying to land grab as much surface area as they can. They're trying to magic themselves into a trillion dollar FAANG and kill their peers. At some point, you won't be able to train a model to compete with their core products, and they'll have a thousand times the distribution advantage.

      ChatGPT is already a new default "pane of glass" for normal people.

      Is this all really so unreasonable?

      I certainly want exposure to their stock.

      • runako 16 hours ago

        > If you discount R&D and "sales and marketing"

        If you discount sales & marketing, they will start losing enterprise deals (like the US government). The lack of a free tier will impact consumer/prosumer uptake (free usage usually comes out of the sales & marketing budget).

        If you discount R&D, there will be no point to the business in 12 months or so. Other foundation models will eclipse them and some open source models will likely reach parity.

        Both of these costs are likely to increase rather than decrease over time.

        > ChatGPT is already a new default "pane of glass" for normal people.

        OpenAI should certainly hope this is not true, because then the only way to scale the business is to get all those "normal" people to spend a lot more.

  • delaminator 16 hours ago

    We have ChatGPT advertising on bus stops here in the UK.

    Two people in a cafe having a meet-up; they are both happy, one is holding a phone, and they are both looking at it.

    And it has a big ChatGPT logo in the top right corner of the advertisement - transparent, just the black logo with ChatGPT written underneath.

    That's it. No text or anything telling you what the product is or does. Just the suggestion that it will make you happy during conversations with friends somehow.

  • gmerc 17 hours ago

    Stop R&D and the competition is at parity with 10x cheaper models in 3-6 months.

    Stop training and your code model generates tech debt after 3-6 months.

    • chermi 15 hours ago

      It's pretty well accepted now that for pre-training LLMs the curve is an S, not an exponential, right? Maybe it's all in RL post-training now, but my understanding(?) is that that's not nearly as expensive as pre-training. I don't think 3-6 months is the time to 10X improvement anymore (however that's measured); it seems closer to a year and growing, assuming the plateau is real. I'd love to know if there are solid estimates on “doubling times” these days.

      With the marginal gains diminishing, do we really think they (all of them) are going to continue spending that much more for each generation? Even the big guys with the money like Google can't justify increasing spending forever given this. The models are good enough for a lot of useful tasks for a lot of people. With all due respect to the amazing science and engineering, OpenAI (and probably the rest) have arrived at their performance with at least half of the credit going to brute-force compute, hence the cost. I don't think they'll continue that in the face of diminishing returns. Someone will ramp down and get much closer to making money, focusing on maximizing token cost efficiency to serve and utility to users with a fixed model(s). GPT-5 with its auto-routing between different performance models seems like a clear move in this direction. I bet their cost to serve the same performance as, say, Gemini 2.5 is much lower.

      Naively, my view is that there's some threshold raw performance that's good enough for 80% of users, and we're near it. There's always going to be demand for bleeding edge, but money is in mass market. So if you hit that threshold, you ramp down training costs and focus on tooling + ease of use and token generation efficiency to match 80% of use cases. Those 80% of users will be happy with slowly increasing performance past the threshold, like iphone updates. Except they probably won't charge that much more since the competition is still there. But anyway, now they're spending way less on R&D and training, and the cost to serve tokens @ the same performance continues to drop.

      All of this is to say, I don't think they're in that dreadful of a position. I can't even remember why I chose you to reply to; I think the "10x cheaper models in 3-6 months" caught me. I'm not saying they can drop R&D/training to 0. You wouldn't want to miss out on the efficiency of distillation, or whatever the latest innovations I don't know about are. Oh, and also, I am confident that whatever the real number N is for NX cheaper in 3-6 months, a large fraction of that will come from hardware gains that are common to all of the labs.

      • necovek 9 hours ago

        Someone brought up an interesting point: to get the latest data (news, scientific breakthroughs...) into the model, you need to constantly retrain it.

      • Spooky23 14 hours ago

        Google has the best story imo. Gemini > Azure - it will accelerate GCP growth.

    • Spivak 12 hours ago

      Also R&D, for tax purposes, likely includes everyone at the company who touches code so there's probably a lot of operational cost being hidden in that number.

  • diggan 18 hours ago

    > $2 billion on sales and marketing - anyone got any idea what this is?

    Not sure where/how I read it, but I remember coming across articles stating OpenAI has some agreements with schools, universities, and even the US government. The cost of making those happen would probably go into "sales & marketing".

    • JCM9 18 hours ago

      Most folks who are not engineers building the product are likely classified as “sales and marketing.” “Developer advocates,” “solutions architects,” and all that stuff included.

    • infecto 18 hours ago

      This will include the people cost of sales and marketing teams.

    • chermi 16 hours ago

      So probably just write-offs of tokens they give away?

  • hedayet 17 hours ago

    > $2 billion on sales and marketing - anyone got any idea what this is?

    enterprise sales are expensive. And selling to the US government is on a very different level.

  • rkharsan64 7 hours ago

    I see multiple banner ads promoting ChatGPT on my way to work. (India)

  • lanthissa 16 hours ago

    You see content about OpenAI everywhere; they spent $2B on marketing. You're in the right places, you're just used to ads being labeled as ads.

    Remember everyone freaking out about GPT-5 when it came out, only for it to be a bust once people got their hands on it? That's what paid media looks like in the new world.

  • abaymado 17 hours ago

    > $2 billion on sales and marketing - anyone got any idea what this is?

    I used to follow OpenAI on Instagram; all their posts were reposts from paid influencers making videos on “How to X with ChatGPT.” Most videos were redundant, but I guess there are still billions of people that the product has yet to reach.

    • gizajob 16 hours ago

      Seems like it’ll take billions more down the drain to serve them.

    • what 9 hours ago

      There’s a bunch of users here that are probably paid by them too.

      • rightbyte 4 hours ago

        I'd prefer the use of the Heroes 3 nomenclature on crowd sizes to be more precise. "Bunch" sounds too small. I would say a 'horde' i.e. 50 to 99.

  • xmprt 17 hours ago

    Free users typically fall into sales and marketing. The idea is that if they cut off the entire free tier, they would have still made the same revenue off of paying customers by spending $X on inference and not counting the inference spend on free users.

  • eterm 17 hours ago

    > I don't remember seeing many ads for ChatGPT

    FWIW I got spammed non-stop with ChatGPT adverts on Reddit.

  • lemonlearnings 10 hours ago

    I have seen tonnes of ChatGPT ads on Reddit, usually with image generation of a dog in Japanese cartoon style.

    • fennecbutt 2 hours ago

      This seems to be a common template for Reddit ads; it's not just OAI. I've seen loads of ads use the "this is fine" template.

      • lemonlearnings an hour ago

        For clarity, it wasn't a meme template (not the "this is fine" dog or any other). It was a picture of a real dog and, next to it, an AI-generated version of the same dog.

        I just loaded up Reddit and the ad was there. A bunny this time:

        https://imgur.com/a/VfnOJNH

    • necovek 9 hours ago

      The dog sitting in a house on fire proclaiming "this is fine" is an old meme, not an OpenAI generated image.

      Oh, not that dog? :)

  • epolanski 13 hours ago

    I've seen some OpenAI ads on Italian TV and they made no sense to me: they tried hard to be Apple-like, but realistically nobody knew what they were about.

    • joering2 13 hours ago

      Italian advertising is weird in general. A month ago, leaving Venice, we pulled over at a gas station and I started just going through the pages of some magazine. At some point I saw an advertisement for what looked like old-fashioned shoes: the owner of the company holding his son, with the sign "from generation to generation". Only thing, the ~3-year-old boy is completely naked, wearing only shoes, with his little pee pee sticking out. It shocked me, and I was unsure if it was just my American domestication or if there was really something wrong with it. I took a picture and wanted to send it to my friends in the USA to show them what Italian advertising looks like, before getting sweats that if I were caught with that picture in the US, I would get in some deep trouble. I quickly deleted it, just in case. Crazy story.

      • necovek 8 hours ago

        Not crazy, it's just a cultural thing.

        The US (and maybe the whole of the Anglo-Saxon world) is a bit mired in this "let's consider everything the worst-case scenario" mindset: no, having a photo in your messenger app of your friend's naked kiddo that they shared being funny at the beach or in the garden is not child pornography. The fact that there are extremely few people who might see it as sexual should not influence the overall population as much as it does.

        For me, I wouldn't blink an eye at such an ad, but due to my exposure to US culture, I do feel uneasy about having photos like the above on my devices (to the point of also having a thought pass my mind when it's of my own kids mucking about).

        I resist it because I believe it's the wrong cultural standard to adhere to: nakedness is not by default sexual, and especially with small kids before they develop any significant sexual characteristics.

      • matwood 6 hours ago

        If that made you uncomfortable, you better avoid the beaches in Italy and the rest of Europe.

      • epolanski 13 hours ago

        Nudity in general is not weird in Europe, let alone children's.

  • Jallal 17 hours ago

    I'm pretty sure I saw some ChatGPT ads on Duolingo. Also, never forget that the regular dude does not use ad blockers. The tech community often doesn't realize how polluted the Internet and mobile apps are.

  • hu3 13 hours ago

    OpenAI keeps spamming me with ads on instagram and reddit.

    Pretty sure I'm not a cheap audience to target ads at, for multiple reasons.

  • wood_spirit 17 hours ago

    Speculating, but do they pay to be integrated as the default AI in various places, the same way Google has paid to be the default search engine on things like the iPhone?

  • [removed] 17 hours ago
    [deleted]
  • actuallyalys 13 hours ago

    I’ve seen some on electronic street-level signs in Atlanta when I visited. So there is some genuine advertising.

  • zurfer 17 hours ago

    Inference etc. should go in this bucket: “Operating losses reached US$7.8 billion”

    That also includes their office and their lawyers etc., so it's hard to estimate without more info.

  • infecto 18 hours ago

    Hard to know where it is in this breakdown but I would expect them to have the proper breakdowns. We know on the inference side it’s profitable but not to what scale.

  • patrickhogan1 12 hours ago

    Sales people out in the field selling to enterprises + free credits to get people hooked.

  • Our_Benefactors 18 hours ago

    > $2 billion on sales and marketing

    Probably an accounting trick to account for non-paying customers or the week of “free” Cursor GPT-5 use.

  • [removed] 17 hours ago
    [deleted]
  • plaidfuji 2 hours ago

    I’m also curious about your last question. Cost of goods sold would not fall into R&D or sales as far as I know.

    So curious, in fact, that I asked Gemini to reconstruct their income statement from the info in this article :)

    There seems to be an assumption that the 20% payment to MS is the cost of compute for inference. I would bet that’s at a significant discount - but who knows how much…

    Line Item | Amount (USD) | Calculation / Note
    Revenue | $4.3 Billion | Given.
    Cost of Revenue (COGS) | ($0.86 Billion) | Assumed to be the 20% of revenue paid to Microsoft ($4.3B * 0.20) for compute/cloud services to run inference.
    Gross Profit | $3.44 Billion | Revenue - Cost of Revenue. This 80% gross margin is strong, typical of a software-like business.
    Operating Expenses | |
    Research & Development | ($6.7 Billion) | Given. This is the largest expense, focused on training new models.
    Sales & Ads | ($2.0 Billion) | Given. Reflects an aggressive push for customer acquisition.
    Stock-Based Compensation | ($2.5 Billion) | Given. A non-cash expense for employee equity.
    General & Administrative | ($0.04 Billion) | Implied figure to balance the reported operating loss.
    Total Operating Expenses | ($11.24 Billion) | Sum of all operating expenses.
    Operating Loss | ($7.8 Billion) | Confirmed. Gross Profit - Total Operating Expenses.
    Other (Non-Operating) Income / Expenses | ($5.7 Billion) | Calculated as Net Loss - Operating Loss. This is primarily the non-cash loss from the "remeasurement of convertible interest rights."
    Net Loss | ($13.5 Billion) | Given. The final "bottom line" loss.

    • vessenes an hour ago

      Thanks for doing the prompting work here.

      One thing I read - with $6.7bn R&D on $3.4bn in Gross Profit, you need a model to be viable for only one year to pay back.

      Another thing, with only $40mm / 5 months in G&A, basically the entire company is research, likely with senior execs nearly completely equity comped. That’s an amazingly lean admin for this much spend.

      On sales & ads - I too find this number surprisingly high. I guess they’re either very efficient (no need to pitch me, I already pay), or they’re so inefficient they don’t hit up channels I’m adjacent to. The team over there is excellent, so my priors would be on the first.

      As doom-saying journalists pore over this, it’s good to think of a few numbers:

      Growth is high. So, June was up over $1bn in revenues by all accounts. Possibly higher. If you believe that customers are sticky (i.e. you can stop sales and not lose customers), which I generally do, then if they keep R&D at this pace, a forward looking annual cashflow looks like:

      $12bn in revs, $9.6bn in gross operating margin, $13.5bn in R&D, so net cash impact of -$4bn.

      If you think they can grow to 1.5bn customers and won’t open up new paying lines of business then you’d have $20-25bn in revs -> maybe $4bn in sales -> +2-3bn in free cashflow, with the ability to take a breather and make that +15-18bn in free cashflow as needed. A lot of that R&D spend is on training which is probably more liquid than employees, as well.

      Upshot - they’re going to keep spending more cash as they get it. I would expect all these numbers to double in a year. The race is still on, and with a PE investment hat on, these guys still look really good to me - the first iconic consumer tech brand in many years, an amazing team, crazy fast growth, an ability to throw off billions in cash when they want to, and a shot at AGI/ASI. What’s not to like?
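
      In code, the forward-looking scenario above (every input is an assumption from this comment, not a reported figure):

        annual_revenue = 12.0      # $bn, assumed run-rate
        gross_margin = 0.80        # assumed, consistent with the ~80% above
        annual_r_and_d = 13.5      # $bn, assuming R&D keeps its current pace

        gross_profit = annual_revenue * gross_margin       # ~9.6
        net_cash_impact = gross_profit - annual_r_and_d    # ~ -3.9, ignoring sales and G&A as above

        print(f"gross profit: ~{gross_profit:.1f} $bn/yr, net cash impact: ~{net_cash_impact:.1f} $bn/yr")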

hmate9 18 hours ago

$2.5B in stock comp for about 3,000 employees. That’s roughly $830k per person in just six months. Almost 60% of their revenue went straight back to staff.
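
Back-of-the-envelope, with the ~3,000 headcount as the rough estimate it is (not a disclosed figure):

  stock_comp = 2.5e9      # $, H1 2025 stock-based compensation
  employees  = 3_000      # rough headcount estimate
  revenue    = 4.3e9      # $, H1 2025 revenue

  print(f"per employee (6 months): ~${stock_comp / employees:,.0f}")  # ~$833,333
  print(f"share of H1 revenue: ~{stock_comp / revenue:.0%}")          # ~58%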

  • darth_avocado 18 hours ago

    They have to compete with Zuckerberg throwing $100M comps to poach people. I think $830k per person is nothing in comparison.

    • toephu2 7 hours ago

      Zuck isn't throwing $100M comps at many people (maybe 1 or 2 at most?); that's a myth that was debunked.

      • jzl 7 hours ago

        It’s debatable that it was debunked. There was squirrelly wording about some specific claims. One person was reported to have been offered a package worth a billion dollars, which even if exaggerated was probably not exaggerated by 10x. The numbers line up when you consider that AI startup founders and early employees stand to potentially make well into 9 figures if not higher, and Meta is trying to cut them off at the pass. Obviously these kinds of offers, whatever they really look like, include significant conditions and performance requirements.

    • munk-a 18 hours ago

      Both numbers are entirely ludicrous - highly skilled people are certainly quite valuable. But it's insane that these companies aren't just training up more internally. The 50x developer is a pervasive myth in our industry and it's one that needs to be put to rest.

      • __turbobrew__ 17 hours ago

        The ∞x engineer exists, in my opinion. There are some things that can only be executed by a few people and that nobody else could execute. You could throw 10,000 engineers at a problem and they might not be able to solve it, but a single other person could.

        I have known several people who have gone to OAI, and I would firmly say they are 10x engineers, but they are just doing general infra stuff that all large tech companies have to do, so I wouldn’t say they are solving problems that only they can solve and nobody else.

      • lovecg 17 hours ago

        Do other professionals (lawyers, finance etc.) argue for reducing their own compensation with the same fervor that software engineers like to do? The market is great for us, let’s enjoy it while it lasts. The alternative is all those CEOs colluding and pushing the wages down, why is that any better?

      • bitexploder 18 hours ago

        The 50x distinguished engineer is real though. Companies and fortunes are won and lost on strategic decisions.

        • kridsdale1 10 hours ago

          Dave Cutler is a perfect example. Produced trillions of dollars in value with his code.

      • belval 13 hours ago

        > it's insane that these companies aren't just training up more internally

        Adding headcount to a fast growing company *to lower wages* is a sure way to kill your culture, lower the overall quality bar and increase communication overheads significantly.

        Yes they are paying a lot of their employees and the pool will grow, but adding bodies to a team that is running well in hopes that it will automatically lead to a bump in productivity is the part that is insane. It never works.

        What will happen is a completely new team (team B) will be formed and given ownership of a component that was previously owned by team A under the guise of "we will just agree on interfaces". Team B will start doing their thing and meeting with Team A's representative regularly, but integration issues will still arise, except that instead of a tight core of 10-20 developers, you now have 40. They will add a ticketing system to track changes better; now issues in Team B's service, which could have been addressed in an hour by the right engineer on team A, will take 3 days to get resolved as tickets get triaged/prioritized. Lo and behold, Team C has now appeared and will be owning a sub-component of Team B. Now when Team A has an issue with Team B's service, they cut a ticket, but the oncall on Team B investigates and finds that it's actually an issue with Team C's service, so they cut their own ticket.

        Suddenly every little issue takes days and weeks to get resolved because the original core of 10-20 developers is no longer empowered to just move fast. They eventually leave because they feel like their impact and influence has diminished (Team C's manager is very good at politics), Team A is hollowed out and you now have wall-to-wall mediocrity with 120 headcounts and nothing is ever anyone's fault.

        I had a director who always repeated that communication between N people is inherently N², and thus hiring should always weigh the fact that the candidate being "good" is not enough; they have to pull their weight and make up for the communication overhead that they add to the team.
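
        To make the N² point concrete (hypothetical team sizes):

          # Pairwise communication channels grow roughly quadratically with headcount.
          for n in (20, 40, 120):
              print(n, "people ->", n * (n - 1) // 2, "possible pairs")
          # 20 -> 190, 40 -> 780, 120 -> 7140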

        • kridsdale1 9 hours ago

          Have worked in BigCo three times scaling teams from 5 to 50 people. This post is bang on.

      • a4isms 17 hours ago

        > The 50x developer is a pervasive myth in our industry

        Doesn't it depend upon how you measure the 50x? If hiring five name-brand AI researchers gets you a billion dollars in funding, they're probably each worth 1,000x what I'm worth to the business.

      • hadlock 17 hours ago

        You have to out-pay to keep your talent from walking out the door. California does not have non-competes. With the number of AI startups in SF you don't need to relocate or even change your bus route in most cases.

      • xur17 17 hours ago

        If it's an all out race between the different AI providers, then it's logical for OpenAI to hire employees that are pre-trained rather than training up more internally.

      • saagarjha 10 hours ago

        I think the unfortunate reality is that training someone to reach the frontier is time taken away from actually pushing it. The opportunity cost alone is worth millions to them.

      • causalmodels 17 hours ago

        These numbers aren't that crazy when contextualized with the capex spend. One hundred million is nothing compared to a six hundred billion dollar data center buildout.

        Besides, people are actively being trained up. Some labs are just extending offers to people who score very highly on their conscription IQ tests.

      • epolanski 13 hours ago

        50x devs are not a myth.

        In any case, the talent that is able to push through good ideas is very scarce in AI/ML, so prices are going to be high for years.

      • xnx 18 hours ago

        > training up more internally

        Why would employees stay after getting trained if they have a better offer?

      • [removed] 15 hours ago
        [deleted]
      • charcircuit 18 hours ago

        It's not a myth, and with how much productivity AI tools can give others, there can be an order of magnitude difference compared to outside of AI.

      • gmerc 17 hours ago

        Zuck decided it's cheaper than building another Llama

    • saagarjha 10 hours ago

      Zuckerberg is not throwing $100 million at any random OpenAI employee. Also FWIW OpenAI competes on offers in the other direction.

  • tomasphan 18 hours ago

    That’s how it should be, spread the wealth.

    • Hamuko 18 hours ago

      It doesn't seem that spread out.

      • lemonlearnings 10 hours ago

        3000x: one person with $830k is living comfortably. It probably gets spent back into the general economy.

        1x: a person with billions probably spends it in a way that fucks everyone over.

    • onlyrealcuzzo 18 hours ago

      Spreading illiquid wealth *

      • gk1 18 hours ago

        They’ve had multiple secondary sales opportunities in the past few years, always at a higher valuation. By this point, if someone who’s been there >2 years hasn’t taken money off the table it’s most likely their decision.

        I don’t work there but know several early folks and I’m absolutely thrilled for them.

        • chermi 15 hours ago

          Secondaries open to all shareholders are on an upward trend across start-ups. I think it's a fantastic trend.

      • BhavdeepSethi 18 hours ago

        Funny since they have a tender offer that hits their accounts on Oct 7.

      • yieldcrv 17 hours ago

        private secondary markets are pretty liquid for momentum tech companies, there is an entire cottage industry of people making trusts to circumvent any transfer restrictions

        employees are very liquid if they want to be, or wait a year for the next 10x in valuation

      • Der_Einzige 18 hours ago

        Oh no, "greedy" AI researchers defrauding way greedier VCs and billionaires!

  • manquer 16 hours ago

    Stock compensation is not cash out; it just dilutes the other shareholders, so current cash flow should not have anything to do with the amount of stock issued.[1]

    While there is some flexibility in how options are issued and accounted for (see FASB - FAS 123), typically industry uses something like a 4 year vesting with 1 year cliffs.

    Every accounting firm and company is different; most would normally account for it for the entire period upfront, though the value could change when it vests and is exercised.

    So even if you want to compare it to revenue, it should at bare minimum be compared with the revenue generated during the entire period, say 4 years, plus the valuation of the IP created during the tenure of the options.

    ---

    [1] Unless the company starts buying back options/stock from employees from its cash reserves, then it is different.

    Even the secondary sales that OpenAI is reported to be facilitating for staff, worth $6.6 billion, have no direct bearing on its own financials, i.e. one third party (the new investor) is buying from another third party (the employee); the company is only facilitating the sales for morale, retention, and other HR reasons.

    There is a secondary impact, as in theory those could be shares the company sells directly to the new investor instead, keeping the cash itself; but it is not spending any existing cash it already has or generates, just forgoing some of the new funds.

  • [removed] 18 hours ago
    [deleted]
  • gizajob 16 hours ago

    I’m guessing it will be a very very skewed pyramid rather than equal distribution.

  • skybrian 16 hours ago

    It's not cashflow, though, and it's not really stock yet, I don't think? They haven't yet reorganized away from being a nonprofit.

    If all goes well, someday it will dilute earnings.

  • varenc 17 hours ago

    It's a bit misleading to frame stock comp as "60% of revenue" since their expenses are way larger than their revenue. R&D was $6.7B which would be 156% of revenue by the same math.

    A better way to look at it is they had about $12.1B in expenses. Stock was $2.5B, or roughly 21% of total costs.

  • kibwen 18 hours ago

    Sounds like they could improve that bottom line by firing all their staff and replacing them with AI. Maybe they can get a bulk discount on Claude?

  • datadrivenangel 18 hours ago

    If Meta is throwing tens of millions at hot AI staffers, then $1.6M average stock comp (annualized) starts looking less insane. A lot of that may also have been promised at a lower valuation, given how wild OpenAI's valuation is.

JCM9 18 hours ago

These numbers are pretty ugly. You always expect new tech to operate at a loss initially but the structure of their losses is not something one easily scales out of. In fact it gets more painful as they scale. Unless something fundamentally changes and fast this is gonna get ugly real quick.

  • spacebanana7 18 hours ago

    The real answer is in advertising/referral revenue.

    My life insurance broker got £1k in commission, I think my mortgage broker got roughly the same. I’d gladly let OpenAI take the commission if ChatGPT could get me better deals.

    • ecommerceguy 12 hours ago

      Insurance agents—unlike many tech-focused sales jobs—are licensed and regulated, requiring specific training, background checks, and ongoing compliance to sell products that directly affect customers’ financial stability and wellbeing. Mortgage brokers also adhere to licensing and compliance regulations, and their market expertise, negotiation ability, and compliance duties are not easily replaced by AI tools or platforms.

      t. perplexity ai

      • stogot an hour ago

        Yeah, I don’t want my mortgage recommendations to come from a prompt injection

    • lkramer 18 hours ago

      This could be solved with comparison websites, which seem to be exactly what those brokers are using anyway. I had a broker proudly declare that he could get me the best deal, which turned out to be exactly the same as what MoneySavingExpert found for me. He wanted £150 for the privilege of searching some DB + god knows how much commission he would get on top of that...

      • spacebanana7 18 hours ago

        Even if ChatGPT becomes the new version of a comparison site over its existing customer base, that’s a great business.

  • anthonypasq 18 hours ago

    They could keep the current model in ChatGPT the same forever and 99% of users wouldn't know or care, and unless you think hardware isn't going to improve, the cost of that will basically decrease to 0.

    • impossiblefork 17 hours ago

      For programming it's okay, for maths it's almost okay. For things like stories and actually dealing with reality, the models aren't even close to okay.

      I didn't understand how bad it was until this weekend when I sat down and tried GPT-5, first without the thinking mode and then with the thinking mode, and it misunderstood sentences, generated crazy things, lost track of everything-- completely beyond how bad I thought it could possibly be.

      I've fiddled with stories because I saw that LLMs had trouble, but I did not understand that this was where we were in NLP. At first I couldn't even fully believe it because the things don't fail to follow instructions when you talk about programming.

      This extends to analyzing discussions. It simply misunderstands what people say. If you try to do this kind of thing you will realise the degree to which these things are just sequence models, with no ability to think, with really short attention spans and no ability to operate in a context. I experimented with stories set in established contexts, and the model repeatedly generated things that were impossible in those contexts.

      When you do this kind of thing their character as sequence models that do not really integrate things from different sequences becomes apparent.

    • davidcbc 13 hours ago

      This just doesn't match with the claims that people are using it as a replacement for Google. If your facts are out of date you're useless as a search engine

      • treyd 12 hours ago

        Which is why there's so much effort to build RAG workflows so that you can progressively add to the pool of information that the chatbot has access to, beyond what's baked into the underlying model(s).
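
        A toy sketch of the pattern, with bag-of-words retrieval standing in for a real embedding model and a made-up document set:

          # Minimal retrieve-then-generate loop: pick the most relevant snippets,
          # then prepend them to the prompt so the model sees newer information
          # than whatever was baked in at training time.
          from collections import Counter
          import math

          docs = [
              "OpenAI reported $4.3B of revenue in H1 2025.",
              "The A100 GPU launched in 2020.",
              "Rail infrastructure can last for many decades.",
          ]

          def similarity(a, b):
              # Cosine similarity over word counts; a real system would use embeddings.
              wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
              dot = sum(wa[w] * wb[w] for w in wa)
              norm = math.sqrt(sum(v * v for v in wa.values())) * math.sqrt(sum(v * v for v in wb.values()))
              return dot / norm if norm else 0.0

          def retrieve(query, k=2):
              return sorted(docs, key=lambda d: similarity(query, d), reverse=True)[:k]

          query = "How much revenue did OpenAI report?"
          context = "\n".join(retrieve(query))
          prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
          print(prompt)  # this prompt would then be sent to the chat model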

    • jampa 18 hours ago

      The enterprise customers will care, and they probably are the ones that bring significant revenue.

    • toshinoriyagi 17 hours ago

      The cost of old models decreases a lot, but the cost of frontier models, which is what people use 99% of the time, is hardly decreasing. Plus, many of the best models rely on thinking or reasoning, which uses 10-100x as many tokens for the same prompt. That doesn't work on a fixed-cost monthly subscription.
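
      A rough illustration of why that strains a flat subscription (the prices and usage below are made-up placeholders, not OpenAI's actual rates):

        # Hypothetical numbers: how a 10-100x token multiplier eats a $20/month plan.
        price_per_million_output_tokens = 10.0   # $, placeholder API-style price
        base_tokens_per_reply = 1_000
        replies_per_month = 600                  # ~20 per day

        for multiplier in (1, 10, 100):
            tokens = base_tokens_per_reply * multiplier * replies_per_month
            cost = tokens / 1_000_000 * price_per_million_output_tokens
            print(f"{multiplier:>3}x tokens -> ~${cost:,.0f}/month to serve")
        # 1x -> ~$6, 10x -> ~$60, 100x -> ~$600, versus a ~$20 flat subscription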

      • anthonypasq 17 hours ago

        I'm not sure that you read what I just said. Almost no one using ChatGPT would care if they were still talking to GPT-5 two years from now. If compute per watt doubles in the next two years, then the cost of serving GPT-5 just got cut in half, purely on the hardware side, not to mention we are getting better at making smaller models smarter.

  • whizzter 17 hours ago

    I've said it before and I'll say it again: if I were able to know the time it takes for bubbles to pop, I would've shorted many of the players long ago.

    • Esophagus4 11 hours ago

      Eh, this seems like a cop out.

      It’s so easy for people to shout bubble on the internet without actually putting their own money on the line. Talk is cheap - it doesn’t matter how many times you say it, I think you don’t have conviction if you’re not willing to put your own skin in the game. (Which is fine, you don’t have to put your money on the line. But it just annoys me when everyone cries “bubble” from the sidelines without actually getting in the ring.)

      After all, “a bubble is just a bull market you don’t have a position in.”

      • zoul 7 hours ago

        Believe it or not, many people just don’t care about the stock market. But they may still care about the economy that could crash badly if the AI bubble gets too big before it pops.

        • Esophagus4 6 hours ago

          People find all kinds of things to worry about if it gives them something to do, I guess.

          In the same way that my elderly grandmother binge watches CNN to have something to worry about.

          But the commenter I responded to DID care about the stock market, despite your attempt to grandstand.

          And my point was, and still is, if you really believe it’s a bubble and you don’t actually have a short position, then you don’t actually believe it’s a bubble deep down.

          Talk is cheap - let’s see your positions.

          It would be like saying “I’ve got this great idea for a company, I’m sure it would do really well, but I don’t believe it enough to actually start a company.”

          Ok, then what does that actually say about your belief in your idea?

      • lawn 6 hours ago

        You can correctly identify a bubble without being able to identify when it'll burst (which is arguably the much harder problem).

        The statistically correct play is therefore not to short (and to just keep buying).

        • Esophagus4 5 hours ago

          Then no, you haven’t identified a bubble.

          You’ve just said, “I think something will go down at some point.” Which… like… sure, but in a pointlessly trivial way? Even a broken clock is right eventually?

          That’s not “identifying a bubble” that’s boring dinner small talk. “Wow, this Bitcoin thing is such a bubble huh!” “Yeah, sure is crazy!”

          And even more so, if you’re long into something you call a bubble, that by definition says either you don’t think it’s that much of a bubble, huh? Or you’re a goon for betting on something you believe is all hot air?

  • adventured 18 hours ago

    There is an exceptionally obvious solution for OpenAI & ChatGPT: ads.

    In fact it's an unavoidable solution. There is no future for OpenAI that doesn't involve a gigantic, highly lucrative ad network attached to ChatGPT.

    One of the dumbest things in tech at present is OpenAI not having already deployed this. It's an attitude they can't actually afford to maintain much longer.

    Ads are a hyper margin product that are very well understood at this juncture, with numerous very large ad platforms. Meta has a soon to be $200 billion per year ad system. There's no reason ChatGPT can't be a $20+ billion per year ad system (and likely far beyond that).

    Their path to profitability is very straight-forward. It's practically turn-key. They would have to be the biggest fools in tech history to not flip that switch, thinking they can just fund-raise their way magically indefinitely. The AI spending bubble will explode in 2026-2027, sharply curtailing the party; it'd be better for OpenAI if they quickly get ahead of that (their valuation will not hold up in a negative environment).

    • thewebguyd 18 hours ago

      > They would have to be the biggest fools in tech history to not flip that switch

      As much as I don't want ads infiltrating this, it's inevitable and I agree. OpenAI could seriously put a dent into Google's ad monopoly here, Altman would be an absolute idiot to not take advantage of their position and do it.

      If they don't, Google certainly will, as will Meta, and Microsoft.

      I wonder if their plan for the weird Sora 2 social network thing is ads.

      Investors are going to want to see some returns..eventually. They can't rely on daddy Microsoft forever either, now with MS exploring Claude for Copilot they seem to have soured a bit on OpenAI.

    • dreamcompiler 17 hours ago

      Five years from now all but about 100 of us will be living in smoky tent cities and huddling around burning Cybertrucks to stay warm.

      But there will still be thousands of screens everywhere running nonstop ads for things that will never sell because nobody has a job or any money.

    • singron 9 hours ago

      Will people use ChatGPT if it's stuffed full of ads? It seems like the belief that ads are turn-key is useful to their valuation, but if ads actually bomb, then they will take a huge hit.

    • jhallenworld 17 hours ago

      Google didn't have inline ads until 2010, but they did have separate ads nearly from the beginning. I assume ads will be inline for OpenAI- I mean the only case they could be separate is in ChatGPT, but I doubt that will be their largest use case.

      • kridsdale1 9 hours ago

        I think it was actually about 5 years from founding to ads on Google.com.

    • gizajob 16 hours ago

      ChatGPT chatting ads halfway through its answer is going to be totally rad.

      • silon42 2 hours ago

        Imagine the emails / reports with copy-pasted ads.

    • JCM9 18 hours ago

      For using GenAI as search I’d agree with you but I don’t think it’s as easy/obvious for most other use cases.

      • flyinglizard 17 hours ago

        I'm sure lots of ChatGPT interactions are for making buying decisions, and just how easy would it be to prioritize certain products to the top? This is where the real money is. With SEO, you were making the purchase decision and companies paid to get their wares in front of you; now with AI, it's making the buy decision mostly on its own.

    • Spooky23 13 hours ago

    No way. It’s 2025, society is totally different, you have to think about what the new normal is. They are too big to fail at this point — so much of the S&P 500 valuation is tied to AI (Microsoft, Google, Tesla, etc.) that they are arguably strategic to the US.

      Fascist corporatism will throw them in for whatever Intel rescue plan Nvidia is forced to participate in. If the midterms flip congress or if we have another presidential election, maybe something will change.

  • deepnotderp 18 hours ago

    New hardware could greatly reduce inference and training costs and solve that issue

    • samtp 17 hours ago

      That's extremely hopeful and also ignores the fact that new hardware will have incredibly high upfront costs.

    • leptons 17 hours ago

      Great, so they just have to spend another ~$10 billion on new hardware to save how many billion in training costs? I don't see a path to profitability here, unless they massively raise their prices to consumers, and nobody really needs AI that badly.

otterley 16 hours ago

That headline can't be correct. Income is revenues minus expenses (and a few other things). You can't have both an income and a loss at the same time.

It's $4.3B in revenue.
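
To make the distinction concrete, here is a quick sketch using the figures reported in this thread ($4.3B revenue, $13.5B net loss), rounded:

    # Net income = revenue minus expenses; a "loss" is just negative net income.
    revenue_b  = 4.3     # $B, reported revenue
    net_loss_b = 13.5    # $B, reported net loss
    implied_expenses_b = revenue_b + net_loss_b    # ~$17.8B, implied rather than reported
    print(f"revenue ${revenue_b:.1f}B - expenses ~${implied_expenses_b:.1f}B = net income ${revenue_b - implied_expenses_b:+.1f}B")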

  • sotix an hour ago

    I can only speak as a US CPA, but revenue and income are used interchangeably. You're thinking of net income or profit. In my opinion, though, it's certainly preferable to use the term revenue instead of income, to avoid misunderstandings like this.

  • rsynnott 4 hours ago

    Yeah, I initially thought maybe they were talking about _gross_ income, but, nah, it’s just revenue.

  • croes 12 hours ago

    Every indebted person can tell you that you can have an income and loss at the same time. Income is revenue.

    • otterley 9 hours ago

      We’re talking about a business here. Accounting terms are standard across the industry, and the meaning of income is well understood.

      Unfortunately, journalistic standards vary across the Internet. The WSJ or Financial Times would not make such a mistake.

cs702 18 hours ago

Correction: $4.3B in revenue.

Other than Nvidia and the cloud providers (AWS, Azure, GCP, Oracle, etc.), no one is earning a profit with AI, so far.

Nvidia and the cloud providers will do well only if capital spending on AI, per year, remains at current rates.

  • Ianjit 2 hours ago

    Are Azure/GCP making a profit with AI? ORCL definitely isn't; its FCF will go heavily negative.

  • whizzter 17 hours ago

    I really hope Nvidia doesn't get too comfortable with the AI income; it would be sad to see all progress in gaming disappear.

    • ares623 13 hours ago

      Personally I hope gaming gets back to a more sustainable state with regard to graphics (i.e. lower production costs, because you don't need 1,000 employees to build out a realistic world).

    • FridgeSeal 10 hours ago

      What progress in gaming would that be?

      2 generations of cards that amount to “just more of a fire hazard” and “idk bro just tell them to use more DLSS slop” to paper over actual card performance deficiencies.

      We have 3 generations of cards where 99% of games fall approximately into one of 2 categories:

      - indie game that runs on a potato

      - awfully optimised AAA-shitshow, which isn’t GPU bottlenecked most of the time anyway.

      There is the rare exception (Cyberpunk 2077), but they’re few and far between.

zurfer 18 hours ago

The $13.5B net loss doesn't mean they are in trouble; much of it is accounting losses. Actual cash burn in H1 2025 was $2.5B. With ~$17.5B on hand (based on the last funding round), that's about 3.5 years of runway at the current pace.
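
Spelled out, the runway arithmetic (all figures approximate, taken from the numbers above):

    # Runway sketch; assumes H2 burn looks like H1, which is an assumption, not a given.
    cash_on_hand_b = 17.5                    # $B on hand after the last funding round
    h1_2025_burn_b = 2.5                     # $B of actual cash burned in H1 2025
    annual_burn_b  = h1_2025_burn_b * 2      # annualized
    print(f"~{cash_on_hand_b / annual_burn_b:.1f} years of runway at the current pace")   # ~3.5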

  • fred_is_fred 17 hours ago

    Depreciation only gets worse for them as they build out, not better.

    • dwaltrip 16 hours ago

      It gets worse until we hit the ceiling on what current tech is capable of.

      Then they can stop burning cash on enormous training runs and have a shot at becoming profitable.

      • FridgeSeal 10 hours ago

        They survive through inertia and “new model novelty”.

        The minute they lose that (not just them, the whole sector), they’re toast.

        I suspect they know this too, hence Sam Altman admitting it's a bubble so that he can try to ride it down without blowing up.

throwacct 18 hours ago

At this point, every LLM startup out there is just trying to stay in the game long enough to outlast the others before the VC money runs out. This is basically a war of attrition. When the music stops, we'll see which startups fold and which survive.

  • hgomersall 17 hours ago

    Will any survive?

    • whizzter 17 hours ago

      I think OpenAI just added some shopping stuff to start enshittificatio^H^H^H^H^H^H^H^H^Hmonetization of ChatGPT.

      • spiderice 16 hours ago

        Apparently ^H is a shortcut for backspace. Good to know!

        • nick__m 11 hours ago

          It's because ^ represents the CTRL key. In ASCII, CTRL subtracts 0x40: H is 0x48 and backspace is 0x08...
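
          A quick Python check of that arithmetic, if it helps:

            # ^H: Ctrl clears bit 6, i.e. subtracts 0x40 from the letter's ASCII code.
            assert ord('H') == 0x48
            assert ord('H') - 0x40 == 0x08       # 0x08 is ASCII BS (backspace)
            print(repr(chr(ord('H') - 0x40)))    # '\x08'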

  • misiti3780 11 hours ago

    It's like the ride-sharing wars, except the valuations are an order of magnitude larger.

evanb 13 hours ago

I thought all this machine learning stuff was about minimizing the loss?

andruby 17 hours ago

Too bad the market can stay irrational longer than I can stay solvent. I feel like a stock market correction is well overdue, but I’ve been thinking that for a while now

  • stevenwoo 9 hours ago

    I would be bankrupt multiple times over if I had acted on just Tesla stock with a rational mindset.

xnx 18 hours ago

The only way OpenAI survives is if "ChatGPT" gets stuck in people's heads as the only or best AI tool.

If people have to choose between paying OpenAI $15/month and using something from Google or Microsoft for free, the quality difference is not enough to overcome that.

  • the_duke 14 hours ago

    Google has massive levers to push their own product onto users, like they did with Chrome. Just integrate it everywhere, have it installed by default on all Android phones, plaster Google results with ads.

  • lbreakjai 17 hours ago

    Do people at large even care, or do they use "chatGPT" as a generic term for LLM?

    • hamdingers 13 hours ago

      Of course they don't, but when they want to use an LLM they're going to type "chatgpt" into the address bar or app store and that's a tremendous advantage.

  • glenneroo 17 hours ago

    Just wait until the $20/month plan includes ads and you have to pay $100/month for the "pro" version without ads, a la streaming services as of late.

runako 16 hours ago

I am not willing to render my personal verdict here yet.

Yet it is certainly true that at ~700m MAUs it is hard to say the product has not reached scale. It's not mature, but it's hard to hand-wave and say they are going to make the economics work at some future scale when they don't work at this size.

It really feels like they absolutely must find another revenue model for this to be viable. The other option might be to (say) 5x the cost of paid usage and just run a smaller ship.

  • apinstein 16 hours ago

    It’s not a hand wave…

    The cost to serve a particular level of AI drops by something like 10x a year. AI has gotten good enough that next year people can keep using current-gen AI, and at that point it will be profitable. Probably 70%+ gross margin.

    Right now it’s a race for market share.

    But once that backs off, prices will adjust to profitability. Not unlike the Uber/Lyft wars.
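
    A toy illustration of that argument; the 10x/year cost decline is the claim above, and the price and starting cost below are made-up numbers, not anyone's actual pricing:

      # Toy gross-margin sketch: price held flat while the cost to serve falls 10x per year.
      # All numbers are hypothetical, chosen only to show the shape of the argument.
      price_per_1m_tokens = 10.0     # $, held constant
      cost_per_1m_tokens  = 8.0      # $, hypothetical starting cost to serve
      for year in range(3):
          margin = 1 - cost_per_1m_tokens / price_per_1m_tokens
          print(f"year {year}: gross margin {margin:.0%}")
          cost_per_1m_tokens /= 10   # the claimed 10x/year decline
      # year 0: 20%, year 1: 92%, year 2: 99% (if the cost curve holds and prices don't collapse)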

    • runako 15 hours ago

      The "hand wave" comment was more to preempt the common pushback that X has to get to scale for the economics to work. My contention is that 700m MAUs is "scale" so they need another lever to get to profit.

      > AI has gotten good enough that next year people can continue to use the current gen AI

      This is problematic because by next year, an OSS model will be as good. If they don't keep pushing the frontier, what competitive moat do they have to extract a 70% gross margin?

      If ChatGPT slows the pace of improvement, someone will certainly fund a competitor to build a clone that uses an OSS model and sets pricing at 70% less than ChatGPT. The curse of betting on being a tech leader is that your business can implode if you stop leading.

      This is also very similar to the argument that PCs were "good enough" in any given year and that R&D could come down. The one constant seems to be that people always want more.

      > Not unlike the Uber/Lyft wars

      Uber & Lyft both push CapEx onto their drivers. I think a more apt model might be AWS MySQL vs Oracle MySQL, or something similar. If the frontier providers stagnate, I fully expect people to switch to e.g. DeepSeek 6 for 10% the price.

      • babelfish 12 hours ago

        The thing is, consumers don't care about OSS models. Any non-technical person just wants to "use AI" and thinks of ChatGPT for that.

        • runako 12 hours ago

          Right, the model is a commodity to most users. So, all things being equal, a ChatGPT clone that costs (say) 70% less will steal share.

          Flipping it around: if the model is a commodity that lets one "use AI," why would anyone pay 2x or 3x as much to use ChatGPT?

xhrpost 17 hours ago

> OpenAI paid Microsoft 20% of its revenue under an existing agreement.

Wow, that's a great deal MSFT made; not sure what it cost them. Better than, say, a stock dividend, which would pay out of net income (if any), and probably even better than a bond payment, since this comes straight off the top of revenue.

  • manquer 16 hours ago

    Is it a great deal?

    They are paying for it with Azure hardware, which in today's DC economics is quite likely costing them more than they are making from OpenAI and the various Copilot programs.

yalogin 12 hours ago

I don't think they care; worst case, they will just go public and dump it on the market.

However, revenue generation for LLMs is still in its infancy. The most obvious path for OpenAI is to become a search competitor to Google, which is what Perplexity says it is. So they will try to outdo Perplexity. All these companies will go vertical and become all-encompassing.

  • cool_dude85 11 hours ago

    I think trying to compete with Google in search is a big problem. First, you have to deal with all the anticompetitive stuff they can do, since they control email, the browser, YouTube, etc. Second, they could probably cut the price of advertising by 5 times and still turn a profit. Will ads in ChatGPT be profitable competing against Google search ads at 1/5 the price, hypothetically?

    • almogo 9 hours ago

      If they're better - yes. ChatGPT is a very different product from Google Search. Return on Ad Spend could be significantly higher than even Google/Meta/ByteDance can offer.

codegeek 18 hours ago

I am curious to see how this compares against where Amazon was in 2000. I think Amazon had similar issues and was operating at massive losses until around 2005, when things started turning around as e-commerce really picked up.

If the revenue keeps going up and losses keep going down, it may reach that inflection point in a few years. For that to happen, the cost of AI datacenters has to go down massively.

  • crystal_revenge 17 hours ago

    > Amazon had similar issues and were operating at massive losses until circa 2005ish when they started turning things around with e-commerce really picking up.

    Amazon's worst year was 2000, when they lost around $1 billion on revenue of around $2.8 billion. I would not say this is anywhere near "similar" in scale to what we're seeing with OpenAI: Amazon was losing 0.5x revenue, OpenAI 3x.

    Not to mention that most of the OpenAI infrastructure spend has a very short life span. So it's not like Amazon, where they were figuring out how to build a nationwide logistics chain with large potential upside in exchange for a steep immediate cost.

    > If the revenue keeps going up and losses keep going down

    That would require better than "dogshit" unit economics [0]

    0. https://pluralistic.net/2025/09/27/econopocalypse/#subprime-...

  • pavlov 18 hours ago

    Amazon's loss in 2000 was 6% of sales. OpenAI's loss in 2025 is 314% of sales.

    https://s2.q4cdn.com/299287126/files/doc_financials/annual/0...

    "Ouch. It’s been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80% from when I wrote you last year. Nevertheless, by almost any measure, Amazon.com the company is in a stronger position now than at any time in its past.

    "We served 20 million customers in 2000, up from 14 million in 1999.

    "• Sales grew to $2.76 billion in 2000 from $1.64 billion in 1999.

    "• Pro forma operating loss shrank to 6% of sales in Q4 2000, from 26% of sales in Q4 1999.

    "• Pro forma operating loss in the U.S. shrank to 2% of sales in Q4 2000, from 24% of sales in Q4 1999."

  • JCM9 18 hours ago

    Fundamentally different business models.

    Amazon had huge capital investments that got less painful as it scaled. Amazon also focuses on cash flow vs. profit. Even early on it generated a lot of cash; it just reinvested that back into the business, which meant it made a "loss" on paper.

    OpenAI is very different. Their "capital" expense (model development) has a really ugly depreciation curve. It's not like building a fulfillment network that you can use for decades. That's not sustainable for much longer. They're simply burning cash like there's no tomorrow. That's only being kept afloat by the AI bubble hype, which looks very close to bursting. Absent a quick change, this will get really ugly.

    • Fade_Dance 17 hours ago

      OpenAI is raising at 500 billion and has partnerships with all of the trillion dollar tech corporations. They simply aren't going to have trouble with working capital for their core business for the foreseeable future, even if AI dies down as a narrative. If the hype does die down, in many ways it makes their job easier (the ridiculous compensation numbers would go way down, development could happen at a more sane pace, and the whole industry would lean up). They're not even at the point where they're considering an IPO, which could raise tens of billions in an instant, even assuming AI valuations get decimated.

      The exception is datacenter spend, since that carries a more severe and more real depreciation risk. But again, if the CoreWeaves of the world run into hardship, it's the leading consolidators like OpenAI that usually clean up (monetizing their comparatively rich equity to buy up the distressed players at fire-sale prices).

      • stackskipton 16 hours ago

        Depends on the raise terms, but most raises are not 100% guaranteed. I was at a company that said, "We have raised $100 million in Series B" ($25M over 4 years), but the Series B investors decided in year 2 of the 4-year payout that it was over, cancelled the remaining payouts, and the company folded. When people asked, "Hey, you said we had $100 million?", it came out that every year was an option.

        A lot of the finances for non-public companies are funny numbers. They're based on numbers the company can point to, but the number of asterisks behind those numbers is mind-blowing.

    • Analemma_ 17 hours ago

      Not to mention nobody bothered chasing Amazon; by the time potential competitors like Walmart realized what was up, it was way too late and Amazon had a 15-year head start. OpenAI had a head start with models for a bit, but now their models are basically as good as the ones from Anthropic and Google (maybe a little better, maybe a little worse), so they can't stay still for a second. Not to mention switching costs are minimal: you just can't have much of a moat around a product which is fundamentally a "function (prompt: String): String"; it can always be abstracted away, commoditized, and swapped out for a competitor.

      • robertjpayne 13 hours ago

        This right here. AI has no moat and none of these companies has a product that isn't easily replaced by another provider.

        Unless one of these companies really produces a leapfrog product or model that can't be replicated within a short timeframe I don't see how this changes.

        Most of OpenAI's users are freeloaders, and if they turn off the free plan, they'll just divert those users to Google.

        • aurareturn 4 hours ago

          AI has no moat, and yet I've been paying for ChatGPT Plus since the very start.

      • mike_hearn 3 hours ago

        Well, web search is also function(query: String): String in a sense, and that has one heck of a moat.

      • Sateeshm 7 hours ago

        It's not just the head start, it's the network effect.

t4TLLLSZ185x 7 hours ago

People in this comment section focus on brand ads too much.

It’s the commercial intent where OpenAI can both make money and preserve trust.

I already don't Google anymore. I just ask ChatGPT "give me an overview of the best Meshtastic devices to buy" and then eventually end with "give me links to where I can buy these in Europe".

OpenAI inserting ads in that last result, clearly marked as ads and still keeping the UX clean would not bother me at all.

And commercial queries are what, 40-50% of all Google revenue?

  • tpetry 5 hours ago

    But it's not clear whether the ad approach will work. It works so well for Google because the ads mimic real results so closely that many people don't see them as ads and click them.

VirgilShelton 11 hours ago

I'm old and have been on the Internet since the Prodigy days in '90. OpenAI has had the best start of any company I can remember. Even better than Google back in '98, when they were developing their algo and giving free, non-monetized search results to Yahoo.

These guys have had my $20 a month since Plus went live; they will indeed be more than fine.

  • btbuildem 10 hours ago

    Exactly. Early on their adoption curve was like nothing I've ever seen before.

    I am such a miser, I skimp, steal what I can, use the free alternatives majority of the time. If they got me to pay, they've got everyone else's money already.

    • VirgilShelton 10 hours ago

      Yup! I'm also super cheap and use open source everything, but I do have a MacBook Pro and will never buy a PC again. So when it's worth it, the wallet comes out, and OpenAI has not only my little $20 a month but will have my investment dollars once they go public.

  • Mistletoe 10 hours ago

    Do you really find it is worth it vs. the free Google Gemini? What do you use it for? I can't imagine needing more than Google Gemini 2.5 Flash or Pro, but I don't use it for programming or anything.

    • VirgilShelton 10 hours ago

      The best part is memory. If you use it daily like I do, for everything from programming tasks to SEO and digital marketing to budgeting for investing and bill reminders, it will really start to understand what you want and get your voice right when it writes a blog for you or you work on an idea with it.

rester324 13 hours ago

What a nice f@$%ing bubble this is. This will end very badly for many

thinkindie 16 hours ago

Today I tested Claude Code with small refactorings here and there in a medium-sized project. I was surprised by the number of tokens every command was generating, even when the output was just a few lines updated across a bunch of files.

If you were to consume the same amount of tokens via the API, you would pay far more than $20/month. Enjoy it while it lasts, because things will become pretty expensive pretty fast.
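
A rough sketch of that comparison; the token volumes and per-token prices below are placeholders picked for illustration, not Anthropic's actual pricing:

    # Hypothetical API-cost sketch; token volumes and prices are illustrative only.
    input_tokens_per_day  = 2_000_000       # assumed heavy agentic usage
    output_tokens_per_day = 200_000
    price_in_per_1m_tok   = 3.0             # $ per 1M input tokens (placeholder)
    price_out_per_1m_tok  = 15.0            # $ per 1M output tokens (placeholder)
    daily_input_cost  = input_tokens_per_day  / 1e6 * price_in_per_1m_tok
    daily_output_cost = output_tokens_per_day / 1e6 * price_out_per_1m_tok
    daily = daily_input_cost + daily_output_cost
    print(f"~${daily:.2f}/day, ~${daily * 22:.0f} over 22 working days, vs. a $20 flat plan")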

  • Ianjit 2 hours ago

    Providing verbose answers increases tokens. Demand is measured in tokens, so it looks like demand is skyrocketing. Valuation goes up.

    I have noticed that GPT now gives me really long explanations for even the simplest questions. Thankfully there is a stop button.

SeanAnderson 17 hours ago

I dunno. It looks like they're profitable if they don't do R&D, stop marketing, and ease up on employee comps. That's not the worst place to be. Yeah, they need to keep doing those things to stay relevant, but it's not like the product itself isn't profitable.

  • jplusequalt 13 hours ago

    So they're profitable if they put themselves at a disadvantage against Google, Meta, etc.?

    • SeanAnderson 13 hours ago

      Yes... but there were concerns previously that inference was so costly that the subscriptions/API billing weren't covering basic operating expenses. That's clearly not the case. People are willing to pay them enough that they can afford to run the models. That's a really positive sign.

      • Ianjit 2 hours ago

        Free user inference is probably accounted for in Sales and Marketing. Adjusted COGS.

  • ares623 13 hours ago

    So if they stop doing what got them there they’ll be profitable?

    If I stop buying groceries and paying electricity bills, I can pay off my mortgage in no time.

    • SeanAnderson 13 hours ago

      I can see why you'd make that analogy, but that wasn't quite what I was trying to say. I just meant that not all expenses are created equal.

      Plenty of companies have high burn rates due to high R&D costs. It can make them look unprofitable on paper, but it's a tactic used to scale quicker, get economies of scale, higher leverage in negotiating, etc. It's not a requirement that they invest in R&D indefinitely. In contrast, if a company is paying a heavy amount of interest on loans (think: WeWork), it's not nearly as practical for them to cut away at their spending to find profitability.

      • ares623 12 hours ago

        Apologies for the snark.

        I don't think they can stop the 3 things you mentioned though.

        - Stopping R&D means their top engineers and scientists will go elsewhere

        - Stopping marketing means they will slowly lose market share. I don't care for marketing personally but I can appreciate its importance in a corporation

        - Stopping/reducing compensation will also make them lose people

        The costs are an inherent part of the company. It can't exist without them. Sure, they can adjust some levers a little bit here and there, but not too much or it all comes crumbling down.