OpenAI's H1 2025: $4.3B in income, $13.5B in loss
(techinasia.com)
490 points by breadsniffer 18 hours ago
Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027. It won't retain value the way the infrastructure of previous bubbles did.
The A100 came out 5.5 years ago and is still the staple for many AI/ML workloads. Even AI hardware just doesn’t depreciate that quickly.
Unless you care about FLOP/Watt, which big players definitely do.
> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.
I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, lots of the gains are from better hardware support for 8bit/4bit precision, and I believe yields are getting harder to achieve as things get much smaller.
Betting against compute getting better/cheaper/faster is probably a bad idea, but fundamental improvements I think will be a lot slower over the next decade as shrinking gets a lot harder.
>> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.
> I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue it's efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8bit/4bit precision, I believe yields are getting harder to achieve as things get much smaller.
I'm no expert, but my understanding is that as feature sizes shrink, semiconductors become more prone to failure over time. Those GPUs probably aren't going to all fry themselves in two years, but even if GPUs stagnate, chip longevity may limit the medium/long term value of the (massive) investment.
Unfortunately the chips themselves probably won’t physically last much longer than that under the workloads they are being put to. So, yes, they won’t be totally obsolete as technology in 2028, but they may still have to be replaced.
Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending are ridiculously long.
Effectively every single H100 in existence now will be e-waste in 5 years or less. Not exactly railroad infrastructure here, or even dark fiber.
> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago.
That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.
When we look back in 100 years, the total amortization cost for the "winner" won't look so bad. The “picks and axes” (i.e. H100s) that soon wore down, but were needed to build the grander vision won't even be a second thought in hindsight.
> Effectively every single H100 in existence now will be e-waste in 5 years or less.
This is definitely not true, the A100 came out just over 5 years ago and still goes for low five figures used on eBay.
> Effectively every single H100 in existence now will be e-waste in 5 years or less.
This remains to be seen. H100 is 3 years old now, and is still the workhorse of all the major AI shops. When there's something that is obviously better for training, these are still going to be used for inference.
If what you say is true, you could find an A100 for cheap/free right now. But check out the prices.
At the rate they are throwing obstacles at the promised subway which they got rid of the 3rd Ave El for maybe his/her grandkids will finish the trip.
> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending is ridiculously long.
Are we? I was under the impression that the tracks degraded due to stresses like heat/rain/etc. and had to be replaced periodically.
Neato. How’s that 1999 era laptop? Because 25 year old trains are still running and 25 year old train track is still almost new. It’s not the same and you know it.
Except they behave less like shrewd investors and more like bandwagon jumpers looking to buy influence or get rich quick. Crypto, Twitter, ridesharing, office sharing and now AI. None of these have been the future of business.
Business looks a lot like what it has throughout history. Building physical transport infrastructure, trade links, improving agricultural and manufacturing productivity and investing in military advancements. In the latter respect, countries like Turkey and Iran are decades ahead of Saudi in terms of building internal security capacity with drone tech for example.
Agreed - I don’t think they are particularly brilliant as a category. Hereditary kleptocracy has limits.
But… I don’t think there’s an example in modern history of this much capital moving around based on whim.
The “bet on red” mentality has produced some odd leaders with absolute authority in their domain. One of the most influential figures on the US government claims to believe that he is saving society from the antichrist. Another thinks he’s the protagonist in a sci-fi novel.
We have the madness of monarchy with modern weapons and power. Yikes.
Exactly: when was the last time you used ChatGPT-3.5? Its value depreciated to zero after, what, two-and-a-half years? (And the Nvidia chips used to train it have barely retained any value either)
The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.
I would think that it's more like a general codebase - even if after 2.5 years, 95% of the lines were rewritten, and even if the whole thing was rewritten in a different language, there is no point in time at which its value diminished, as you arguably couldn't have built the new version without all the knowledge (and institutional knowledge) from the older version.
I rejoined a previous employer of mine, one everyone here knows ... and I found that half their networking equipment is still being maintained by code I wrote in 2012-2014. It has not been rewritten. Hell, I rewrote a few parts that badly needed it despite joining another part of the company.
I really did, a few days ago. gpt-3.5-fast is a great model for certain tasks, and cost-wise via the API. Lots of solutions being built on today's latest are for tomorrow's legacy model; if it works, just pin the version.
> money on fire forever just to jog in place.
Why?
I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?
I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.
We don't need better models for the vast majority of chats taking place today. E.g. kids using it for help with homework: are today's models really not good enough?
But is it a bit like a game of musical chairs?
At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.
It's not just that the investments won't pay off; it's that the global markets are likely to crash, as happened with the subprime mortgage crisis.
This is much closer to the dotcom boom than the subprime stuff. The dotcom boom/bust affected tech more than anything else. It didn’t involve consumers like the housing crash did.
We are starting to see larger economic exposure to AI.
Banks are handing out huge loans to the neocloud companies that are being collateralized with GPUs. These loans could easily go south if the bottom falls out of the GPU market. Hopefully it’s a very small amount of liquidity tied up in those loans.
Tech stocks make up a significant part of the stock market now. Where the tech stocks go, the market will follow. Everyday consumers invested in index funds will definitely see a hit to their portfolios if AI busts.
The dot com boom involved silly things like Pets.com IPOing pre-revenue. Claude code hit $500m in ARR in 3 months.
The fact people don't see the difference between the two is unreal. Hacker News has gone full r* around this topic; you find better nuance even on Reddit than here.
But it does involve a ton of commercial real estate investment, as well as a huge shakeup in the energy market. People may not lose their homes, but we'll all be paying for this one way or another.
The fed could still push the real value of stocks quite a bit by destroying the USD, if they want, by pinning interest rates near 0 and forcing a rush to the exits to buy stock and other asset classes.
The one thing smaller companies might have is allocated power budgets from power companies. Part of the mad dash to build datacenters right now is just to claim the power so your competitors can't. Now I do think the established players hold an edge here, but I don't think OpenAI/Anthropic/etc are without some bargaining power(hah).
The past/present business they remind me of the most is semiconductor fabs. Significant generation-to-generation R&D investment, significant hardware and infrastructure investment, quite winner-takes-all on the high end, obsoleted in a couple years at most.
The main differences are these models are early in their development curve so the jumps are much bigger, and they are entirely digital so they get “shipped” much faster, and open weights seem to be possible. None of those factors seem to make it a more attractive business to be in.
If you build the actual datacenter, less than half the cost is the compute itself. The rest is the datacenter building, power infrastructure, and cooling.
So in that sense it's not that much different from Meta and Google which also used server infrastructure that depreciated over time. The difference is that I believe Meta and Google made money hand over fist even in their earliest days.
The funniest thing about all this is that the biggest difference between LLMs from Anthropic, Google, OpenAI, and Alibaba is not model architecture or training objectives, which are broadly similar, but the dataset. What people don't realize is how much of that data comes from massive undisclosed scrapes + synthetic data + countless hours of expert feedback shaping the models. As methodologies converge, the performance gap between these systems is already narrowing and will continue to diminish over time.
I think the most interesting numbers in this piece (ignoring the stock compensation part) are:
$4.3 billion in revenue - presumably from ChatGPT customers and API fees
$6.7 billion spent on R&D
$2 billion on sales and marketing - anyone got any idea what this is? I don't remember seeing many ads for ChatGPT but clearly I've not been paying attention in the right places.
Open question for me: where does the cost of running the servers used for inference go? Is that part of R&D, or does the R&D number only cover servers used to train new models (and presumably their engineering staff costs)?
Free usage usually goes in sales and marketing. It's effectively a cost of acquiring a customer. This also means it is considered an operating expense rather than a cost of goods sold and doesn't impact your gross margin.
Compute in R&D will be only training and development. Compute for inference will go under COGS. COGS is not reported here but can probably be, um, inferred by filling in the gaps on the income statement.
(Source: I run an inference company.)
I think it makes the most sense this way, but I've seen it accounted for in other ways. E.g. if free users produce usage data that's valuable for R&D, then they could allocate a portion of the costs there.
Also, if the costs are split, there usually has to be an estimation of how to allocate expenses. E.g. if you lease a datacenter that's used for training as well as paid and free inference, then you have to decide a percentage to put in COGS, S&M, and R&D, and there is room to juice the numbers a little. Public companies are usually much more particular about tracking this, but private companies might use a proxy like % of users that are paid.
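A toy version of that split, with an invented lease cost and invented proxy shares (nothing here reflects OpenAI's actual allocation; it just shows the mechanics of allocating one shared datacenter expense across COGS, S&M, and R&D):

```python
# Hypothetical allocation of a shared datacenter lease across expense lines,
# using a usage proxy as described above. All numbers are invented.
lease_cost = 100.0  # $M for the period
usage_share = {
    "COGS (paid inference)": 0.55,
    "S&M (free-tier inference)": 0.30,
    "R&D (training/dev)": 0.15,
}
# Each expense line gets its proportional slice of the lease.
allocation = {line: lease_cost * share for line, share in usage_share.items()}
for line, amount in allocation.items():
    print(f"{line}: ${amount:.1f}M")
```

Nudging those proxy shares is exactly where the "room to juice the numbers" comes from: shifting a few points from COGS to S&M improves reported gross margin without changing total spend.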
OpenAI has not been forthcoming about their financials, so I'd look at any ambiguity with skepticism. If it looked good, they would say it.
Marketing != advertising. Although this budget probably does include some traditional advertising. It is most likely about building the brand and brand awareness, as well as partnerships etc. I would imagine the sales team is probably quite big, and host all kinds of events. But I would say a big chunk of this "sales and marketing" budget goes into lobbying and government relations. And they are winning big time on that front. So it is money well spent from their perspective (although not from ours). This is all just an educated guess from my experience with budgets from much smaller companies.
I agree - they're winning big and booking big revenue.
If you discount R&D and "sales and marketing", they've got a net loss of "only" $500 million.
They're trying to land grab as much surface area as they can. They're trying to magic themselves into a trillion dollar FAANG and kill their peers. At some point, you won't be able to train a model to compete with their core products, and they'll have a thousand times the distribution advantage.
ChatGPT is already a new default "pane of glass" for normal people.
Is this all really so unreasonable?
I certainly want exposure to their stock.
> If you discount R&D and "sales and marketing"
If you discount sales & marketing, they will start losing enterprise deals (like the US government). The lack of a free tier will impact consumer/prosumer uptake (free usage usually comes out of the sales & marketing budget).
If you discount R&D, there will be no point to the business in 12 months or so. Other foundation models will eclipse them and some open source models will likely reach parity.
Both of these costs are likely to increase rather than decrease over time.
> ChatGPT is already a new default "pane of glass" for normal people.
OpenAI should certainly hope this is not true, because then the only way to scale the business is to get all those "normal" people to spend a lot more.
We have ChatGPT advertising on bus stops here in the UK.
Two people in a cafe having a meet-up, they are both happy, one is holding a phone and they are both looking at it.
And it has a big ChatGPT logo in the top right corner of the advertisement: transparent, just the black logo with ChatGPT written underneath.
That's it. No text or anything telling you what the product is or does. Just it will make you happy during conversations with friends somehow.
It's pretty well accepted now that for pre-training LLMs the curve is an S, not an exponential, right? Maybe it's all in RL post-training now, but my understanding(?) is that it's not nearly as expensive as pre-training. I don't think 3-6 months is the time to 10X improvement anymore (however that's measured); it seems closer to a year and growing, assuming the plateau is real. I'd love to know if there are solid estimates on "doubling times" these days.
With the marginal gains diminishing, do we really think they (all of them) are going to continue spending that much more for each generation? Even the big guys with the money, like Google, can't justify increasing spending forever given this. The models are good enough for a lot of useful tasks for a lot of people. With all due respect to the amazing science and engineering, OpenAI (and probably the rest) have arrived at their performance with at least half of the credit going to brute-force compute, hence the cost. I don't think they'll continue that in the face of diminishing returns. Someone will ramp down and get much closer to making money, focusing on maximizing token cost efficiency to serve and utility to users with a fixed model(s). GPT-5 with its auto-routing between different performance models seems like a clear move in this direction. I bet their cost to serve the same performance as, say, Gemini 2.5 is much lower.
Naively, my view is that there's some threshold raw performance that's good enough for 80% of users, and we're near it. There's always going to be demand for bleeding edge, but money is in mass market. So if you hit that threshold, you ramp down training costs and focus on tooling + ease of use and token generation efficiency to match 80% of use cases. Those 80% of users will be happy with slowly increasing performance past the threshold, like iphone updates. Except they probably won't charge that much more since the competition is still there. But anyway, now they're spending way less on R&D and training, and the cost to serve tokens @ the same performance continues to drop.
All of this is to say, I don't think they're in that dreadful of a position. I can't even remember why I chose you to reply to, I think the "10x cheaper models in 3-6 months" caught me. I'm not saying they can drop R&D/training to 0. You wouldn't want to miss out on the efficiency of distillation, or whatever the latest innovations I don't know about are. Oh and also, I am confident that whatever the real number N is for NX cheaper in 3-6 months, a large fraction of that will come from hardware gains that are common to all of the labs.
> $2 billion on sales and marketing - anyone got any idea what this is?
Not sure where/how I read it, but remember coming across articles stating OpenAI has some agreements with schools, universities and even the US government. The cost of making those happen would probably go into "sales & marketing".
I see multiple banner ads promoting ChatGPT on my way to work. (India)
You see content about OpenAI everywhere. They spent $2B on marketing; you're in the right places, you're just used to seeing these things labeled as ads.
Remember everyone freaking out about GPT-5 when it came out, only for it to be a bust once people got their hands on it? That's what paid media looks like in the new world.
> $2 billion on sales and marketing - anyone got any idea what this is?
I used to follow OpenAI on Instagram, all their posts were reposts from paid influencers making videos on "How to X with ChatGPT." Most videos were redundant, but I guess there are still billions of people that the product has yet to reach.
I have seen tonnes of ChatGPT ads on Reddit. Usually with image generation of a dog in Japanese cartoon style.
This seems to be a common template for Reddit ads; it's not just OAI. I've seen loads of ads use the "this is fine" template.
For clarity, it wasn't a meme template (not the "this is fine" dog or any other). It was a picture of a real dog and, next to it, an AI-generated version of the same dog.
I just loaded up Reddit and the ad was there. A bunny this time:
Italian advertising is weird in general. A month ago, leaving Venice, we pulled over at a gas station and I started flipping through some magazine. At some point I see an ad for what looks like old-fashioned shoes, with the owner of the company holding his son under the sign "from generation to generation". Only thing: the ~3-year-old boy is completely naked, wearing only shoes, with his little pee pee sticking out. It shocked me, and I was unsure if it was just my American domestication or whether there was really something wrong with it. I took a picture and wanted to send it to my friends in the USA to show them what Italian advertising looks like, before getting sweats that if I were caught with that picture in the US, I would get in some deep trouble. I quickly deleted it, just in case. Crazy story.
Not crazy, it's just a cultural thing.
The US (and maybe the whole Anglo-Saxon world) is a bit mired in this let's-assume-the-worst-case-scenario thinking: no, having a photo in your messenger app of your friend's naked kiddo that they shared being funny at the beach or in the garden is not child pornography. The fact that there are extremely few people who might see it as sexual should not influence the overall population as much as it does.
For me, I wouldn't blink an eye to such an ad, but due to my exposure to US culture, I do feel uneasy about having photos like the above in my devices (to the point of also having a thought pass my mind when it's of my own kids mucking about).
I resist it because I believe it's the wrong cultural standard to adhere to: nakedness is not by default sexual, and especially with small kids before they develop any significant sexual characteristics.
Speculating, but do they pay to be integrated as the default AI in various places, the same way Google has paid to be the default search engine on things like the iPhone?
I’ve seen some on electronic street-level signs in Atlanta when I visited. So there is some genuine advertising.
Sales people out in the field selling to enterprises + free credits to get people hooked.
> $2 billion on sales and marketing
Probably an accounting trick to account for non-paying-customers or the week of “free” cursor GPT-5 use.
I’m also curious about your last question. Cost of goods sold would not fall into R&D or sales as far as I know.
So curious, in fact, that I asked Gemini to reconstruct their income statement from the info in this article :)
There seems to be an assumption that the 20% payment to MS is the cost of compute for inference. I would bet that’s at a significant discount - but who knows how much…
Line Item | Amount (USD) | Calculation / Note
Revenue | $4.3B | Given.
Cost of Revenue (COGS) | ($0.86B) | Assumed to be the 20% of revenue paid to Microsoft ($4.3B * 0.20) for compute/cloud services to run inference.
Gross Profit | $3.44B | Revenue - Cost of Revenue. This 80% gross margin is strong, typical of a software-like business.
Operating Expenses:
Research & Development | ($6.7B) | Given. The largest expense, focused on training new models.
Sales & Ads | ($2.0B) | Given. Reflects an aggressive push for customer acquisition.
Stock-Based Compensation | ($2.5B) | Given. A non-cash expense for employee equity.
General & Administrative | ($0.04B) | Implied figure to balance the reported operating loss.
Total Operating Expenses | ($11.24B) | Sum of all operating expenses.
Operating Loss | ($7.8B) | Confirmed. Gross Profit - Total Operating Expenses.
Other (Non-Operating) Income / Expenses | ($5.7B) | Calculated as Net Loss - Operating Loss. Primarily the non-cash loss from the "remeasurement of convertible interest rights."
Net Loss | ($13.5B) | Given. The final "bottom line" loss.
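As a sanity check, the line items above do tie out arithmetically. Bear in mind that the 20% Microsoft revenue share as COGS and the implied G&A figure are the reconstruction's assumptions, not disclosed numbers:

```python
# Recompute the reconstructed H1 2025 income statement. All figures in $B.
revenue = 4.3
cogs = round(revenue * 0.20, 2)            # assumed Microsoft compute share
gross_profit = revenue - cogs
opex = {"R&D": 6.7, "Sales & Ads": 2.0,
        "Stock-Based Compensation": 2.5, "G&A (implied)": 0.04}
operating_loss = gross_profit - sum(opex.values())
net_loss = -13.5                           # reported
non_operating = net_loss - operating_loss  # the convertible remeasurement
print(round(gross_profit, 2), round(operating_loss, 2), round(non_operating, 2))
# → 3.44 -7.8 -5.7
```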
Thanks for doing the prompting work here.
One thing I read - with $6.7bn R&D on $3.4bn in Gross Profit, you need a model to be viable for only one year to pay back.
Another thing, with only $40mm / 5 months in G&A, basically the entire company is research, likely with senior execs nearly completely equity comped. That’s an amazingly lean admin for this much spend.
On sales & ads - I too find this number surprisingly high. I guess they’re either very efficient (no need to pitch me, I already pay), or they’re so inefficient they don’t hit up channels I’m adjacent to. The team over there is excellent, so my priors would be on the first.
As doom-saying journalists pore over this, it's good to keep a few numbers in mind:
Growth is high. So, June was up over $1bn in revenues by all accounts. Possibly higher. If you believe that customers are sticky (i.e. you can stop sales and not lose customers), which I generally do, then if they keep R&D at this pace, a forward looking annual cashflow looks like:
$12bn in revs, $9.6bn in gross operating margin, $13.5bn in R&D, so net cash impact of -$4bn.
If you think they can grow to 1.5bn customers and won’t open up new paying lines of business then you’d have $20-25bn in revs -> maybe $4bn in sales -> +2-3bn in free cashflow, with the ability to take a breather and make that +15-18bn in free cashflow as needed. A lot of that R&D spend is on training which is probably more liquid than employees, as well.
Upshot - they’re going to keep spending more cash as they get it. I would expect all these numbers to double in a year. The race is still on, and with a PE investment hat on, these guys still look really good to me - the first iconic consumer tech brand in many years, an amazing team, crazy fast growth, an ability to throw off billions in cash when they want to, and a shot at AGI/ASI. What’s not to like?
$2.5B in stock comp for about 3,000 employees. That's roughly $830k per person in just six months. Almost 60% of their revenue went straight back to staff.
They have to compete with Zuckerberg throwing $100M comps to poach people. I think $830k per person is nothing in comparison.
It’s debatable that it was debunked. There was squirrelly wording about some specific claims. One person was reported to have been offered a package worth a billion dollars, which even if exaggerated was probably not exaggerated by 10x. The numbers line up when you consider that AI startup founders and early employees stand to potentially make well into 9 figures if not higher, and Meta is trying to cut them off at the pass. Obviously these kinds of offers, whatever they really look like, include significant conditions and performance requirements.
Both numbers are entirely ludicrous - highly skilled people are certainly quite valuable. But it's insane that these companies aren't just training up more internally. The 50x developer is a pervasive myth in our industry and it's one that needs to be put to rest.
The ∞x engineer exists, in my opinion. There are some things that can only be executed by a few people that nobody else could execute. You could throw 10,000 engineers at a problem and they might not be able to solve it, but a single other person could.
I have known several people who have gone to OAI and I would firmly say they are 10x engineers, but they are just doing general infra stuff that all large tech companies have to do, so I wouldn’t say they are solving problems that only they can solve and nobody else.
Do other professionals (lawyers, finance etc.) argue for reducing their own compensation with the same fervor that software engineers like to do? The market is great for us, let’s enjoy it while it lasts. The alternative is all those CEOs colluding and pushing the wages down, why is that any better?
The 50x distinguished engineer is real though. Companies and fortunes are won and lost on strategic decisions.
Dave Cutler is a perfect example. Produced trillions of dollars in value with his code.
> it's insane that these companies aren't just training up more internally
Adding headcount to a fast growing company *to lower wages* is a sure way to kill your culture, lower the overall quality bar and increase communication overheads significantly.
Yes they are paying a lot of their employees and the pool will grow, but adding bodies to a team that is running well in hopes that it will automatically lead to a bump in productivity is the part that is insane. It never works.
What will happen is a completely new team (Team B) will be formed and given ownership of a component that was previously owned by Team A, under the guise of "we will just agree on interfaces". Team B will start doing their thing and meeting with a Team A representative regularly, but integration issues will still arise, except that instead of a tight core of 10-20 developers, you now have 40. They will add a ticketing system to track changes better; now issues in Team B's service, which could have been addressed in an hour by the right engineer on Team A, take 3 days to get resolved as tickets get triaged/prioritized. Lo and behold, Team C has now appeared and owns a sub-component of Team B. Now when Team A has an issue with Team B's service, they cut a ticket, but the oncall on Team B investigates and finds that it's actually an issue with Team C's service, so they cut their own ticket.
Suddenly every little issue takes days and weeks to get resolved because the original core of 10-20 developers is no longer empowered to just move fast. They eventually leave because they feel like their impact and influence has diminished (Team C's manager is very good at politics), Team A is hollowed out and you now have wall-to-wall mediocrity with 120 headcounts and nothing is ever anyone's fault.
I had a director that always repeated that communication between N people is inherently N² and thus hiring should always weight in that the candidate being "good" is not enough, they have to pull their weight and make up for the communication overhead that they add to the team.
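That N² rule of thumb is really n(n-1)/2 pairwise channels; a quick sketch makes the director's point concrete:

```python
# Pairwise communication channels in a team of n people: n*(n-1)/2.
# Each new hire adds n new channels on top of whatever value they bring.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 20, 40, 120):
    print(n, channels(n))
# 10 -> 45, 20 -> 190, 40 -> 780, 120 -> 7140
```

Going from the "tight core of 10-20" to the 120-headcount org in the anecdote above multiplies the coordination surface by well over a hundred.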
Have worked in BigCo three times scaling teams from 5 to 50 people. This post is bang on.
These numbers aren't that crazy when contextualized with the capex spend. One hundred million is nothing compared to a six hundred billion dollar data center buildout.
Besides, people are actively being trained up. Some labs are just extending offers to people who score very highly on their conscription IQ tests.
It's not a myth, and with how much productivity AI tools can give others, the difference can be an order of magnitude larger than outside of AI.
3000x: one person with $830k is comfortable living, and it probably gets spent into the general economy.
1x: a person with billions probably spends it in a way that fucks everyone over.
They’ve had multiple secondary sales opportunities in the past few years, always at a higher valuation. By this point, if someone who’s been there >2 years hasn’t taken money off the table it’s most likely their decision.
I don’t work there but know several early folks and I’m absolutely thrilled for them.
Funny since they have a tender offer that hits their accounts on Oct 7.
private secondary markets are pretty liquid for momentum tech companies, there is an entire cottage industry of people making trusts to circumvent any transfer restrictions
employees are very liquid if they want to be, or wait a year for the next 10x in valuation
Oh no, "greedy" AI researchers defrauding way greedier VCs and billionaires!
Stock compensation is not a cash outflow, it just dilutes the other shareholders, so current cash flow has nothing to do with the amount of stock issued[1]
While there is some flexibility in how options are issued and accounted for (see FASB - FAS 123), typically industry uses something like a 4 year vesting with 1 year cliffs.
Every accounting firm and company is different; most would normally account for it for the entire period upfront, and the value could change when it vests and is exercised.
So even if you want to compare it to revenue, it should at a bare minimum be compared with the revenue generated during the entire period, say 4 years, plus the valuation of the IP created during the tenure of the options.
---
[1] Unless the company starts buying back options/stock from employees from its cash reserves, then it is different.
Even the secondary sale that OpenAI is reported to be facilitating for staff, worth $6.6 billion, has no direct bearing on its own financials, i.e. one third party (the new investor) is buying from another third party (the employee); the company is only facilitating the sale for morale, retention and other HR reasons.
There is secondary impact, as in theory that could be shares the company is selling directly to new investor instead and keeping the cash itself, but it is not spending any existing cash it already has or generating, just forgoing some of the new funds.
It's a bit misleading to frame stock comp as "60% of revenue" since their expenses are way larger than their revenue. R&D was $6.7B which would be 156% of revenue by the same math.
A better way to look at it is they had about $12.1B in expenses. Stock was $2.5B, or roughly 21% of total costs.
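Both framings in the thread can be computed directly (all figures in $B, taken from the comments above):

```python
# Stock comp as a share of revenue vs. as a share of total costs.
stock_comp, revenue, total_expenses, rnd = 2.5, 4.3, 12.1, 6.7
print(f"stock comp vs revenue:     {stock_comp / revenue:.0%}")       # ~58%
print(f"stock comp vs total costs: {stock_comp / total_expenses:.0%}")  # ~21%
print(f"R&D vs revenue:            {rnd / revenue:.0%}")              # ~156%
```

When expenses dwarf revenue, percent-of-revenue figures exceed 100% for several line items at once, which is why percent-of-total-costs is the less misleading denominator here.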
If Meta is throwing 10s of millions at hot AI staffers, then $1.6M average stock comp starts looking less insane; a lot of that may also have been promised at a lower valuation, given how wild OpenAI's valuation is.
These numbers are pretty ugly. You always expect new tech to operate at a loss initially but the structure of their losses is not something one easily scales out of. In fact it gets more painful as they scale. Unless something fundamentally changes and fast this is gonna get ugly real quick.
The real answer is in advertising/referral revenue.
My life insurance broker got £1k in commission, I think my mortgage broker got roughly the same. I’d gladly let OpenAI take the commission if ChatGPT could get me better deals.
Insurance agents—unlike many tech-focused sales jobs—are licensed and regulated, requiring specific training, background checks, and ongoing compliance to sell products that directly affect customers’ financial stability and wellbeing. Mortgage brokers also adhere to licensing and compliance regulations, and their market expertise, negotiation ability, and compliance duties are not easily replaced by AI tools or platforms.
t. perplexity ai
This could be solved with comparison websites which seems to be exactly what those brokers are using anyway. I had a broker proudly declare that he could get me the best deal, which turned out to be exactly the same as what moneysavingexperts found for me. He wanted £150 for the privilege of searching some DB + god knows how much commission he would get on top of that...
Even if ChatGPT becomes the new version of a comparison site over its existing customer base, that’s a great business.
They could keep the current model in ChatGPT the same forever and 99% of users wouldn't know or care, and unless you think hardware isn't going to improve, the cost of that will basically decrease to 0.
For programming it's okay, for maths it's almost okay. For things like stories and actually dealing with reality, the models aren't even close to okay.
I didn't understand how bad it was until this weekend when I sat down and tried GPT-5, first without the thinking mode and then with the thinking mode, and it misunderstood sentences, generated crazy things, lost track of everything-- completely beyond how bad I thought it could possibly be.
I've fiddled with stories because I saw that LLMs had trouble, but I did not understand that this was where we were in NLP. At first I couldn't even fully believe it because the things don't fail to follow instructions when you talk about programming.
This extends to analyzing discussions. It simply misunderstands what people say. If you try to do this kind of thing you will realise the degree to which these things are just sequence models, with no ability to think, with really short attention spans and no ability to operate in a context. I experimented with stories set in established contexts, and the model repeatedly generated things that were impossible in those contexts.
When you do this kind of thing their character as sequence models that do not really integrate things from different sequences becomes apparent.
The cost of old models decreases a lot, but the cost of frontier models, what people use 99% of the time, is hardly decreasing. Plus, many of the best models rely on thinking or reasoning, which use 10-100x as many tokens for the same prompt. That doesn't work on a fixed cost monthly subscription.
I'm not sure you read what I just said. Almost no one using ChatGPT would care if they were still talking to GPT-5 two years from now. If compute per watt doubles in the next two years, then the cost of serving GPT-5 just got cut in half, purely on the hardware side, not to mention we are getting better at making smaller models smarter.
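A toy model of that hardware claim. The doubling period here is an assumption for illustration, not a measured figure:

```python
# Toy model: if serving cost is dominated by compute/power, and
# compute-per-watt doubles every N years, the cost of serving a
# *fixed* model decays geometrically. Numbers are illustrative.
def serving_cost(initial_cost: float, years: float,
                 doubling_period_years: float = 2.0) -> float:
    """Cost to serve the same model after `years`, if efficiency
    doubles every `doubling_period_years`."""
    return initial_cost / (2 ** (years / doubling_period_years))

cost_now = 1.00  # normalized cost per query today
print(serving_cost(cost_now, 2))  # 0.5  -> half the cost in 2 years
print(serving_cost(cost_now, 4))  # 0.25 -> a quarter in 4 years
```

The model ignores everything besides hardware efficiency (power pricing, utilization, model changes), which is exactly the parent comment's caveat: it only holds if you keep serving the same model.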
Eh, this seems like a cop out.
It’s so easy for people to shout bubble on the internet without actually putting their own money on the line. Talk is cheap - it doesn’t matter how many times you say it, I think you don’t have conviction if you’re not willing to put your own skin in the game. (Which is fine, you don’t have to put your money on the line. But it just annoys me when everyone cries “bubble” from the sidelines without actually getting in the ring.)
After all, “a bubble is just a bull market you don’t have a position in.”
People find all kinds of things to worry about if it gives them something to do, I guess.
In the same way that my elderly grandmother binge watches CNN to have something to worry about.
But the commenter I responded to DID care about the stock market, despite your attempt to grandstand.
And my point was, and still is, if you really believe it’s a bubble and you don’t actually have a short position, then you don’t actually believe it’s a bubble deep down.
Talk is cheap - let’s see your positions.
It would be like saying “I’ve got this great idea for a company, I’m sure it would do really well, but I don’t believe it enough to actually start a company.”
Ok, then what does that actually say about your belief in your idea?
Then no, you haven’t identified a bubble.
You’ve just said, “I think something will go down at some point.” Which… like… sure, but in a pointlessly trivial way? Even a broken clock is right eventually?
That’s not “identifying a bubble” that’s boring dinner small talk. “Wow, this Bitcoin thing is such a bubble huh!” “Yeah, sure is crazy!”
And even more so, if you’re long into something you call a bubble, that by definition says either you don’t think it’s that much of a bubble, huh? Or you’re a goon for betting on something you believe is all hot air?
There is an exceptionally obvious solution for OpenAI & ChatGPT: ads.
In fact it's an unavoidable solution. There is no future for OpenAI that doesn't involve a gigantic, highly lucrative ad network attached to ChatGPT.
One of the dumbest things in tech at present is OpenAI not having already deployed this. It's an attitude they can't actually afford to maintain much longer.
Ads are a high-margin product that is very well understood at this juncture, with numerous very large ad platforms. Meta has a soon-to-be $200 billion per year ad system. There's no reason ChatGPT can't be a $20+ billion per year ad system (and likely far beyond that).
Their path to profitability is very straightforward. It's practically turn-key. They would have to be the biggest fools in tech history to not flip that switch, thinking they can just fund-raise their way magically indefinitely. The AI spending bubble will explode in 2026-2027, sharply curtailing the party; it'd be better for OpenAI if they quickly get ahead of that (their valuation will not hold up in a negative environment).
> They would have to be the biggest fools in tech history to not flip that switch
As much as I don't want ads infiltrating this, it's inevitable and I agree. OpenAI could seriously put a dent into Google's ad monopoly here, Altman would be an absolute idiot to not take advantage of their position and do it.
If they don't, Google certainly will, as will Meta, and Microsoft.
I wonder if their plan for the weird Sora 2 social network thing is ads.
Investors are going to want to see some returns... eventually. They can't rely on daddy Microsoft forever either; now with MS exploring Claude for Copilot, they seem to have soured a bit on OpenAI.
Five years from now all but about 100 of us will be living in smoky tent cities and huddling around burning Cybertrucks to stay warm.
But there will still be thousands of screens everywhere running nonstop ads for things that will never sell because nobody has a job or any money.
Google didn't have inline ads until 2010, but they did have separate ads nearly from the beginning. I assume ads will be inline for OpenAI. I mean, the only case where they could be separate is in ChatGPT, but I doubt that will be their largest use case.
I think it was actually about 5 years from founding to ads on Google.com.
I'm sure lots of ChatGPT interactions are for making buying decisions, and just how easy would it be to prioritize certain products to the top? This is where the real money is. With SEO, you were making the purchase decision and companies paid to get their wares in front of you; now with AI, it's making the buy decision mostly on its own.
No way. It's 2025, society is totally different; you have to think about what the new normal is. They are too big to fail at this point: so much of the S&P 500 valuation is tied to AI (Microsoft, Google, Tesla, etc) that they are arguably strategic to the US.
Fascist corporatism will throw them in for whatever Intel rescue plan Nvidia is forced to participate in. If the midterms flip congress or if we have another presidential election, maybe something will change.
New hardware could greatly reduce inference and training costs and solve that issue
Correction: 4.3B in revenues.
Other than Nvidia and the cloud providers (AWS, Azure, GCP, Oracle, etc.), no one is earning a profit with AI, so far.
Nvidia and the cloud providers will do well only if capital spending on AI, per year, remains at current rates.
What progress in gaming would that be?
2 generations of cards that amount to “just more of a fire hazard” and “idk bro just tell them to use more DLSS slop” to paper over actual card performance deficiencies.
We have 3 generations of cards where 99% of games fall approximately into one of 2 categories:
- indie game that runs on a potato
- awfully optimised AAA-shitshow, which isn’t GPU bottlenecked most of the time anyway.
There is the rare exception (Cyberpunk 2077), but they’re few and far between.
Depreciation only gets worse for them as they build out, not better.
They survive through inertia and “new model novelty”.
The minute they lose that (not just them, the whole sector), they’re toast.
I suspect they know this too, hence Sam Altman admitting it's a bubble so that he can try to ride it down without blowing up.
It's like the ride-sharing wars, except the valuations are an order of magnitude larger.
Correct. That's how Silicon Valley has worked for years.
The only way OpenAI survives is if "ChatGPT" gets stuck in people's heads as being the only or best AI tool.
If people have to choose between paying OpenAI $15/month and using something from Google or Microsoft for free, the quality difference is not enough to overcome that.
I am not willing to render my personal verdict here yet.
Yet it is certainly true that at ~700m MAUs it is hard to say the product has not reached scale. It's not mature, but it's hard to hand-wave and say they are going to make the economics work at some future scale when they don't work at this size.
It really feels like they absolutely must find another revenue model for this to be viable. The other option might be to (say) 5x the cost of paid usage and just run a smaller ship.
It’s not a hand wave…
The cost to serve a particular level of AI drops by like 10x a year. AI has gotten good enough that next year people can continue to use the current gen AI but at that point it will be profitable. Probably 70%+ gross margin.
Right now it’s a race for market share.
But once that backs off, prices will adjust to profitability. Not unlike the Uber/Lyft wars.
The "hand wave" comment was more to preempt the common pushback that X has to get to scale for the economics to work. My contention is that 700m MAUs is "scale" so they need another lever to get to profit.
> AI has gotten good enough that next year people can continue to use the current gen AI
This is problematic because by next year, an OSS model will be as good. If they don't keep pushing the frontier, what competitive moat do they have to extract a 70% gross margin?
If ChatGPT slows the pace of improvement, someone will certainly fund a competitor to build a clone that uses an OSS model and sets pricing at 70% less than ChatGPT. The curse of betting on being a tech leader is that your business can implode if you stop leading.
This is very similar to the argument that PCs were "good enough" in any given year and that R&D spend could come down. The one constant seems to be that people always want more.
> Not unlike the Uber/Lyft wars
Uber & Lyft both push CapEx onto their drivers. I think a more apt model might be AWS MySQL vs Oracle MySQL, or something similar. If the frontier providers stagnate, I fully expect people to switch to e.g. DeepSeek 6 for 10% the price.
> OpenAI paid Microsoft 20% of its revenue under an existing agreement.
Wow that's a great deal MSFT made, not sure what it cost them. Better than say a stock dividend which would pay out of net income (if any), even better than a bond payment probably, this is straight off the top of revenue.
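To put a number on it, using the $4.3B H1 2025 revenue figure from the article:

```python
# 20% revenue share applied to the H1 2025 revenue cited in the article.
h1_revenue = 4.3   # $B, H1 2025
msft_share = 0.20  # 20% of revenue, per the reported agreement

print(f"${h1_revenue * msft_share:.2f}B to Microsoft for H1 2025")  # $0.86B
```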
I don’t think they care, worst case scenario they will just go public and dump it on the market.
However, the revenue generation aspect for LLMs is still in its infancy. The most obvious path for OpenAI is to become a search competitor to Google, which is what Perplexity states it is. So they will try to outdo Perplexity. All these companies will go vertical and become all-encompassing.
I think trying to compete with Google in search is a big problem. First you have to deal with all the anticompetitive stuff they can do, since they control email and the browser and YouTube etc. Second, they could probably cut the price of advertising by 5 times and still be turning a profit. Will ads in ChatGPT be profitable competing against Google search ads at 1/5 the price, hypothetically?
I am curious to see how this compares against where Amazon was in 2000. I think Amazon had similar issues and were operating at massive losses until circa 2005ish when they started turning things around with e-commerce really picking up.
If the revenue keeps going up and losses keep going down, it may reach that inflection point in a few years. For that to happen, the cost of AI datacenters has to go down massively.
> Amazon had similar issues and were operating at massive losses until circa 2005ish when they started turning things around with e-commerce really picking up.
Amazon's worst year was 2000, when they lost around $1.4 billion on revenue of around $2.8 billion. I would not say this is anywhere near "similar" in scale to what we're seeing with OpenAI: Amazon was losing 0.5x revenue, OpenAI 3x.
Not to mention that most of the OpenAI infrastructure spend has a very short life span. It's not like Amazon, where they were figuring out how to build a nationwide logistics chain with large long-term upside for a steep immediate cost.
> If the revenue keeps going up and losses keep going down
That would require better than "dogshit" unit economics [0]
0. https://pluralistic.net/2025/09/27/econopocalypse/#subprime-...
Amazon's pro forma operating loss in Q4 2000 was 6% of sales. OpenAI's loss in H1 2025 is 314% of sales.
https://s2.q4cdn.com/299287126/files/doc_financials/annual/0...
"Ouch. It’s been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80% from when I wrote you last year. Nevertheless, by almost any measure, Amazon.com the company is in a stronger position now than at any time in its past.
"We served 20 million customers in 2000, up from 14 million in 1999.
"• Sales grew to $2.76 billion in 2000 from $1.64 billion in 1999.
"• Pro forma operating loss shrank to 6% of sales in Q4 2000, from 26% of sales in Q4 1999.
"• Pro forma operating loss in the U.S. shrank to 2% of sales in Q4 2000, from 24% of sales in Q4 1999."
Fundamentally different business models.
Amazon had huge capital investments that got less painful as it scaled. Amazon also focused on cash flow vs profit. Even early on it generated a lot of cash; it just reinvested that back into the business, which meant it made a "loss" on paper.
OpenAI is very different. Their "capital" expense (model development) has a really ugly depreciation curve. It's not like building a fulfillment network that you can use for decades. That's not sustainable for much longer. They're simply burning cash like there's no tomorrow. That's only being kept afloat by the AI bubble hype, which looks very close to bursting. Absent a quick change, this will get really ugly.
OpenAI is raising at 500 billion and has partnerships with all of the trillion dollar tech corporations. They simply aren't going to have trouble with working capital for their core business for the foreseeable future, even if AI dies down as a narrative. If the hype does die down, in many ways it makes their job easier (the ridiculous compensation numbers would go way down, development could happen at a more sane pace, and the whole industry would lean up). They're not even at the point where they're considering an IPO, which could raise tens of billions in an instant, even assuming AI valuations get decimated.
The exception is datacenter spend, since that has a more severe and more real depreciation risk. But again, if the Coreweaves of the world run into hardship, it's the leading consolidators like OpenAI that usually clean up (using their comparatively rich equity to pick up the distressed players at fire-sale prices).
Depends on the raise terms, but most raises are not 100% guaranteed. I was at a company that said "we have raised $100 million in Series B" ($25M per year over 4 years), but the Series B investors decided in year 2 of the 4-year payout that it was over, cancelled the remaining payouts, and the company folded. We asked "hey, you said we had $100 million?" and come to find out, every year was an option.
A lot of the finances for non-public companies are funny numbers. They're based on numbers the company can point to, but the number of asterisks in those numbers is mind-blowing.
Not to mention nobody bothered chasing Amazon-- by the time potential competitors like Walmart realized what was up, it was way too late and Amazon had a 15-year head start. OpenAI had a head start with models for a bit, but now their models are basically as good (maybe a little better, maybe a little worse) than the ones from Anthropic and Google, so they can't stay still for a second. Not to mention switching costs are minimal: you just can't have much of a moat around a product which is fundamentally a "function (prompt: String): String", it can always be abstracted away, commoditized, and swapped out for a competitor.
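The "function (prompt: String): String" point is concrete: once vendors are hidden behind that one signature, swapping them is a config change, not a rewrite. A minimal sketch (the provider names and canned responses here are hypothetical placeholders, not real vendor APIs):

```python
from typing import Callable, Dict

# An LLM provider, reduced to its essential interface: prompt in, text out.
Completion = Callable[[str], str]

# Hypothetical adapters -- in practice each would wrap a vendor SDK call.
def provider_a(prompt: str) -> str:
    return f"[provider-a] answer to: {prompt}"

def provider_b(prompt: str) -> str:
    return f"[provider-b] answer to: {prompt}"

PROVIDERS: Dict[str, Completion] = {"a": provider_a, "b": provider_b}

def complete(prompt: str, provider: str = "a") -> str:
    """Switching vendors is one argument, which is the whole moat problem."""
    return PROVIDERS[provider](prompt)

print(complete("hello", provider="b"))  # [provider-b] answer to: hello
```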
This right here. AI has no moat and none of these companies has a product that isn't easily replaced by another provider.
Unless one of these companies really produces a leapfrog product or model that can't be replicated within a short timeframe I don't see how this changes.
Most of OpenAI's users are freeloaders and if they turn off the free plan they're just going to divert those users to Google.
AI has no moat, yet here I am, having paid for ChatGPT Plus since the very start.
Well, web search is also function(query: String): String in a sense, and that has one heck of a moat.
People in this comment section focus on brand ads too much.
It’s the commercial intent where OpenAI can both make money and preserve trust.
I already don't Google anymore. I just ask ChatGPT "give me an overview of the best meshtastic devices to buy" and then eventually end with "give me links to where I can buy these in Europe".
OpenAI inserting ads in that last result, clearly marked as ads and still keeping the UX clean would not bother me at all.
And commercial queries are what, 40-50% of all Google revenue?
I'm old and have been on the Internet since the Prodigy days in '90. OpenAI has the best start of any company I can remember. Even better than Google back in '98, when they were developing their algo and giving free non-monetized search results to Yahoo.
These guys have had my $20 a month since Plus went live; they will indeed be more than fine.
Yup! I'm also super cheap and use open source everything, but I do have a MacBook Pro and will never buy a PC again. So when it's worth it, the wallet comes out, and OpenAI has not only my $20 a month but will have my investment dollars once they go public.
The best part is memory. If you use it daily like I do, for everything from programming tasks to SEO and digital marketing, to budget stuff for investing and bill reminders, it will really start to understand what you want and get your voice right when it writes a blog for you or you work on an idea with it.
Today I tested Claude Code with small refactorings here and there in a medium-sized project. I was surprised by the amount of tokens that every command was generating, even when the output was just a few lines updated across a bunch of files.
If you were to consume the same amount of tokens via the API you would pay far more than $20/month. Enjoy it while it lasts, because things will become pretty expensive pretty fast.
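A rough back-of-envelope for that gap. The token prices and daily volumes below are illustrative assumptions, not any vendor's quoted rates:

```python
# Back-of-envelope: heavy coding-agent use priced at API rates.
# All prices and volumes are illustrative placeholders.
price_per_million_input = 3.00    # $ per 1M input tokens (assumed)
price_per_million_output = 15.00  # $ per 1M output tokens (assumed)

# Suppose a day of agentic refactoring burns 5M input / 1M output tokens.
# Input dominates because agents re-send large context with every command.
daily_input_tokens = 5_000_000
daily_output_tokens = 1_000_000

daily_cost = (daily_input_tokens / 1e6) * price_per_million_input \
           + (daily_output_tokens / 1e6) * price_per_million_output
monthly_cost = daily_cost * 22  # working days

print(f"~${daily_cost:.0f}/day, ~${monthly_cost:.0f}/month")  # ~$30/day, ~$660/month
```

Even with generous error bars on the assumptions, heavy agentic use at API rates lands well above a $20/month subscription, which is the parent comment's point.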
Providing verbose answers increases tokens. Demand is measured in tokens, so it looks like demand is skyrocketing. Valuation goes up.
I have noticed that GPT now gives me really long explanations for even the simplest questions. Thankfully there is a stop button.
I dunno. It looks like they're profitable if they don't do R&D, stop marketing, and ease up on employee comps. That's not the worst place to be. Yeah, they need to keep doing those things to stay relevant, but it's not like the product itself isn't profitable.
So they're profitable if they put themselves at a disadvantage against Google, Meta, etc.?
Yes... but there were concerns previously that inference was so costly that the subscriptions/API billing weren't covering basic operating expenses. That's clearly not the case. People are willing to pay them enough that they can afford to run the models. That's a really positive sign.
I can see why you'd make that analogy, but that wasn't quite what I was trying to say. I just meant that not all expenses are created equal.
Plenty of companies have high burn rates due to high R&D costs. It can make them look unprofitable on paper, but it's a tactic used to scale quicker, get economies of scale, higher leverage in negotiating, etc. It's not a requirement that they invest in R&D indefinitely. In contrast, if a company is paying a heavy amount of interest on loans (think: WeWork), it's not nearly as practical for them to cut away at their spending to find profitability.
Apologies for the snark.
I don't think they can stop the 3 things you mentioned though.
- Stopping R&D means their top engineers and scientists will go elsewhere
- Stopping marketing means they will slowly lose market share. I don't care for marketing personally but I can appreciate its importance in a corporation
- Stopping/reducing compensation will also make them lose people
The costs are an inherent part of the company. It can't exist without it. Sure, they can adjust some levers a little bit here and there, but not too much or it all comes crumbling down.
On that $13.5B. How much of their massive spend on datacenters is obscured through various forms of Special Purpose Vehicles financing? (https://news.ycombinator.com/item?id=45448199)
You can also read more about it on the FT Alphaville blog from Financial Times (free to sign-up):
OpenAI’s era-defining money furnace
https://www.ft.com/content/908dc05b-5fcd-456a-88a3-eba1f77d3...
Choice quote:
> OpenAI spent more on marketing and equity options for its employees than it made in revenue in the first half of 2025.
Everyone is trying to compare AI companies with something that happened in the past, but I don't think we can predict much from that.
GPUs are not railroads or fiber optics.
The cost structure of ChatGPT and other LLM based services is entirely different than web, they are very expensive to build but also cost a lot to serve.
Companies like Meta, Microsoft, Amazon, Google would all survive if their massive investment does not pay off.
On the other hand, OpenAI, Anthropic and others could soon find themselves in a difficult position, at the mercy of Nvidia.