OpenAI's H1 2025: $4.3B in income, $13.5B in loss
(techinasia.com)
418 points by breadsniffer 12 hours ago
What is OpenAI's competitive moat? There's no product stickiness here.
What prevents people from just using Google, who can build AI stuff into their existing massive search/ads/video/email/browser infrastructure?
Normal, non-technical users can't tell the difference between these models at all, so their usage numbers are highly dependent on marketing. Google has massive distribution with world-wide brands that people already know, trust, and pay for, especially in enterprise.
Google doesn't have to go to the private markets to raise capital, they can spend as much of their own money as they like to market the living hell out of this stuff, just like they did with Chrome. The clock is running on OpenAI. At some point OpenAI's investors are going to want their money back.
I'm not saying Google is going to win, but if I had to bet on which company's money runs out faster, I'm not betting against Google.
Consumer brand quality is so massively underrated by tech people.
ChatGPT has a phenomenal brand. That's worth 100x more than "product stickiness". They have 700 million weekly users and growing much faster than Google.
I think your points on Google being well positioned are apt for capitalization reasons, but only one company has consumer mindshare on "AI" and it's the one with "ai" in its name.
I’ve got “normie” friends who I’d bet don’t even know that what Google has at the top of their search results is “AI” results and instead assume it’s just some extension of the normal search results we’ve all gotten used to (knowledge graph)
Every one of them refers to using “ChatGPT” when talking about AI.
How likely is it to stay that way? No idea, but OpenAI has clearly captured a notable amount of mindshare in this new era.
> They have 700 million weekly users and growing much faster than Google.
Years old company growing faster than decades old company!
2.5 billion people use Gmail. I assume people check their mail (and, more importantly, receive mail) much more often than weekly.
ChatGPT has a lot of growing to do to catch up, even if it's faster
I read that as OpenAI’s WAU is showing a steeper increase than Google ever did. Not saying it’s factually accurate, just that it’s not a fixed point-in-time comparison :)
> ChatGPT has a phenomenal brand.
If by "phenomenal" you mean "the premier slop and spam provider", then yes.
ChatGPT (and all the competitors) are trivially sticky products: I have a lot of ongoing conversations in there that I pick up all the time. Add more long-term memory features — a direction I am sure they will keep pushing — and all of a sudden there is a lot of personal data that you rely on it having, that makes the product better, and that most people will never care to replicate or transfer. Just being the product that people use makes you the product that people will use. "The other app doesn't know me" is the moat. The data that people put in it is the moat.
What prevents people from just using Google, who can build AI stuff into their existing massive search/ads/video/email/browser infrastructure?
Google has never had a viable competitor. Their moat on Search and Ads has been so incredibly hard to beat that no one has even come close. That has given them an immense amount of money from search ads. That means they've appeared to be impossible to beat, but if you look at literally all their other products, they aren't top in anything else despite essentially unlimited resources.
A company becoming a viable competitor to Google Search and/or Ads is not something we can easily predict the outcome of. Many companies in the past who have had a 'monopoly' have utterly fallen apart at the first sign of real competition. We even have a term for it that YC companies love to scatter around their pitch decks - 'disruption'. If OpenAI takes even just 5% of the market Google will need to either increase their revenue by $13bn (hard, or they'd have done that already) or they'll need to start cutting things. Or just make $13bn less profit I guess. I don't think that would go down well though.
>non-technical users can't tell the difference between these models at all
My non-tech friend said she prefers ChatGPT over Gemini, mostly due to its tone.
So non-tech people may not know the difference in technical detail, but they sure can have a bias.
I have a non-techy friend who used 4o for that exact reason. Compared to most readily available chatbots, 4o just provides more engaging answers to non-techy questions. He likes to have extended conversations about philosophy and consciousness with it. I showed him R1, and he was fascinated by the reasoning process. Makes sense, given the sorts of questions he likes to ask it.
I think OpenAI is pursuing a different market from Google right now. ChatGPT is a companion, Gemini is a tool. That's a totally arbitrary divide, though. Change out the system prompts and the web frontend. Ta-daa, you're in a different market segment now.
As history showed us numerous times, it doesn't even have to be the best to win. It rarely is, really. See the most pervasive programming languages for that.
ChatGPT has won. I talk to all the teens living nearby and they all use ChatGPT and not Google.
The teens don't know what OpenAI is, they don't know what Gemini is. They sure know what ChatGPT is.
All of these teens use Google Docs instead of OpenAI Docs, Google Meet instead of OpenAI Meet, Gmail instead of OpenAI Mail, etc.
I'm sure that far fewer people to go gemini.google.com than to chatgpt.com, but Google has LLMs seamlessly integrated in each of these products, and it's a part of people's workflows at school and at work.
For a while, I was convinced that OpenAI had won and that Google won't be able to recover, but this lack of vertical integration is becoming a liability. It's probably why OpenAI is trying to branch into weird stuff, like running a walled-garden TikTok clone.
Also keep in mind that unlike OpenAI, Google isn't under pressure to monetize AI products any time soon. They can keep subsidizing them until OpenAI runs out of other people's money. I'm not saying OpenAI has no path forward, but it's not all that clear-cut.
As a counter, you can buy a hell of a lot of brand for $8 billion though.
You can give your most active 50,000 users $160,000 each, for example.
You can run campaign ads in every billboard, radio, tv station and every facebook feed tarring and feathering ChatGPT
Hell, for only $200m you could just get the current admin to force ChatGPT to sell to Larry Ellison, and deport Sam Altman to Abu Dhabi like Nermal from Garfield.
So many options!
Users' chat history is the moat. The more you use it, the more it knows about you and can help you in ways that are customized to the particular user. That makes it sticky, more so than web search. Also brand recognition: ChatGPT is the default general-purpose LLM choice for most people. Everyone and their mom is using it.
I'm saying Google is going to win. They're not beholden to their current architecture as much as other shovelmakers and can pivot their TPU to offer the best inference perf/$. They also hold about as much personal data as anyone else and have plenty of stuff to train on in-house. I work for a competitor and even I think there's a good chance google "wins"(there's never a winner because the race never ends).
The problem is that Google is horrible at product. They have been so spot on at search that it's covered up all the other issues around products. YT is great, but they bought that. The Pixel should be the Android phone, but they do a poor job marketing it. They should be leading AI, but stumbled multiple times in the rollout. They normally get the tech right, and then fumble the productizing and marketing.
I think we are all forgetting that Google is a massive bureaucracy that has to move out of its own way to get anything done. The younger companies have a distinct advantage here. Hence the cycle of company growth and collapse. I think OpenAI and the like have a very good chance here.
yeah ... Polymarket and other prediction markets seem to be betting that Google, by year's end or sometime next year or so, will have the best gen AI models on the market ... but I've been using Claude Sonnet 4.5 with GitHub Copilot and swear by it.
anyway, it would be nice to really see some apples-to-apples benchmarks of TPU vs Nvidia hardware, but how would that work given CUDA is not hardware agnostic?
It's a distinctive brand, pleasant user experience, and a trustworthy product, like every other commodified technology on the planet.
That's all that matters now. We've passed the "good enough" bar for llms for the majority of consumer use cases.
From here out it's like selling cellphones and laptops
Yeah, anyone saying "Normal, non-technical users can't tell the difference between these models at all" isn't talking to that many normal, non-technical users.
> There's no product stickiness here.
Very few of those 700,000,000 active users have ever heard of Claude or DeepSeek or ________. Gemini maybe.
> ChatGPT has more consumer trust than Google at this point
That trust is gone the moment they start selling ad space. Where would they put the ads? In the answers? That would force more people to buy a subscription, just to avoid having the email to your boss contain a sponsored message. The numbers for Q2 look promising, sales are going up. And speaking of sales, Jif peanut butter is on sale this week.
If OpenAI plans on making money with ads, then all the investments made by Nvidia, Microsoft and SoftBank start to look incredibly stupid. Smartest AI in the world, but we can only make money by showing you gambling ads.
I also wonder if this means that even paid tiers will get ads. Google's ad revenue is only ~$30 per user per year, yet there is no paid, ad-free Google Premium, even though lots of users would gladly pay way more than $30/year to have an ad-free experience. There's no Google Premium because Google's ad revenue isn't uniformly distributed across users; it's heavily skewed towards the wealthiest users, exactly the users most likely to purchase an ad-free experience. In order to recoup the lost ad revenue from those wealthy users, Google would have to charge something exorbitant, which nobody would be willing to pay.
I fear the same will happen with chatbots. The users paying $20 or $200/month for premium tiers of ChatGPT are precisely the ones you don't want to exclude from generating ad revenue.
"Lots of users would gladly pay way more than $30/year have an ad-free experience"? Outside of ads embedded in Google Maps, a free and simple install of Ublock Origin essentially eliminates ads in Search, YouTube, etc. I'd expect that just like Facebook, people would be very unwilling to pay for Google to eliminate ads, since right now they aren't even willing to add a browser extension.
Anecdata, but my nontechnical friends have never heard of uBlock origin. They all know about ad-free youtube.
It worked for YouTube. I don't see why the assumption is that paid GPT models will follow Google and not YouTube, particularly when users are conditioned to pay for GPT already.
The average is $x. But that's global which means in some places like the US it is 10x. And in other less wealthy areas it is 0.1x.
There is also the strange paradox that the people who are willing to pay are actually the most desirable advertising targets (because they clearly have $ to spend). So my guess is that for that segment, the revenue is 100x.
> But as long as OpenAI remains the go-to for the average consumer, they'll be fine.
This is like the argument of a couple of years ago, "as long as Tesla remains ahead of the Chinese technology...". OpenAI can definitely become a profitable company, but I don't see anything to say they will have a moat and monopoly.
Huh? They're actively removing personality from current models as much as possible.
I think this is directionally right but to nitpick…Google has way more trust than OpenAI right now and it’s not close.
Acceleration is felt, not velocity.
Yeah, I agree with you.
Between Android, Chrome, YouTube, Gmail (including mx.google.com), Docs/Drive, Meet/Chat, and Google Search, claiming that Google "isn't more trusted" is just ludicrous. People may not be happy they have to trust Alphabet. But they certainly do.
And even when they insist they're Stallman, their friends do, their family does, their coworkers do, the businesses they interact with do, the schools they send their children to do.
Like it or not, Google has wormed their way into the fabric of modern life.
Chrome and Google Search are still the gateway to the internet outside China. Android has over 75% market share of all mobile(!). YouTube is somewhat uniquely the video internet with Instagram and Tiktok not really occupying the same mindshare for "search" and long form.
People can say they don't "trust" Google but the fact is that if the world didn't trust Google, it never would have gotten to where it is and it would quickly unravel from here.
Sent from my Android (begrudgingly)
With search you don't fully trust Google. You trust Google to find good results most of the time, then trust those results based on other factors.
But with AI you now have all trust in one place. For Google and OpenAI their AI bullshits. It will only be trusted by fools. Luckily for the corporations there is no end of fools to fool.
I agree with you, and my impression of the trust-level of Google is pretty much zero.
The only thing I trust google to do is abandon software and give me a terrible support experience
And to charge you for stuff you don't want and don't need as if you are using it every day through tied sales. Hm... wasn't that illegal?
The moment they start mixing ads into responses Ill stop using them. Open models are good enough, its just more convenient to use chatgpt right now, but that can change.
People said the same thing about so many other online services since the 90s. The issue is that you're imagining ChatGPT as it exists right now with your current use case but just with ads inserted into their product. That's not really how these things go... instead OpenAI will wait until their product becomes so ingrained in everyday usage that you can't just decide to stop using them. It is possible, although not certain, that their product becomes ubiquitous and using LLMs someway somehow just becomes a normal way of doing your job, or using your computer, or performing menial and ordinary tasks. Using an LLM will be like using email, or using Google maps, or some other common tool we don't think much of.
That's when services start to insert ads into their product.
> People said the same thing about so many other online services since the 90s.
And this leads to something I genuinely don't understand - because I don't see ads. I use adblocker, and don't bother with media with too many ads because there's other stuff to do. It's just too easy to switch off a show and start up a steam game or something. It's not the 90s anymore, people have so many options for things.
Idk, maybe I am wrong, but I really think there is something very broken in the ad world as a remnant from the era when Google/Facebook were brand new, the signal-to-noise ratio for advertisers was insanely high, and interest rates were low. Like a bunch of this activity is either bots or kids, and the latter isn't that easy to monetize.
Except it's hard to imagine a world where ChatGPT is head and shoulders above the other LLMs in capability. Google has no problem keeping up, and let's not forget that China has state-sponsored programs for AI development.
And if/when they reach that point, the average consumer will see the ad as an irksome fly. That's it.
Except that I have switched to Gemini and not missed anything from OpenAI
> moment they start mixing ads into responses Ill stop using them
Do you currently pay for it?
Why do people always think that just because you have a lot of users it automatically translates to ad revenue? Yahoo has been one of the most trafficked sites for decades and could never generate any reasonable amount of ad revenue.
The other side of the coin is that running an LLM will never be as cheap as a search engine.
There is also the IMO not exactly settled question of whether an advertiser is comfortable handing over its marketing to an AI.
Can any AI be sensibly and reliably instructed not to do product placement in inappropriate contexts?
> ads in the future.
It boggles my mind that people still think advertising can be a major part of the economy.
If AI is propping up the economy right now [0] how is it possible that the rest of the economy can possibly fund AI through profit sharing? That's fundamentally what advertising is: I give you a share of my revenue (hopefully from profits) in order to help increase my market share. The limit of what advertising spend can be is percent of profits minus some epsilon (for a functioning economy at least).
Advertising cannot be the lion's share of any economy because it derives its value from the rest of the economy.
Advertising is also a major bubble because my one assumption there (that it's a share of profits) is generally not the case. Unprofitable companies giving away a share of their revenue to other companies making those companies profitable is not sustainable.
Advertising could save AI if AI was a relatively small part of the US (or world) economy and could benefit by extracting a share of the profits from other companies. But if most of your GDP is from AI, how can it possibly cannibalize other companies in a sustainable way?
0. https://www.techspot.com/news/109626-ai-bubble-only-thing-ke...
You've run a false equivalency in your argument. Growth is not representative of the entire economy. The economy is, in aggregate, much more than tech - they have the biggest public companies which skews how people think. No exclusive sector makes up "most" of the economy, in fact the highest sector, which is finance only makes up 21% of the US economy.
https://www.statista.com/statistics/248004/percentage-added-...
> Growth is not representative of the entire economy
Our entire economy is based on debt, it cannot function without growth. This is demonstrated by the fact that:
> in fact the highest sector, which is finance only makes up 21% of the US economy
Every cent earned by the finance sector is ultimately derived from debt (i.e. growth has to pay it off). You just pointed out that the largest sector of our economy relies on rapid growth, and the majority of growth right now is coming from AI. AI, therefore, cannot derive the majority of its value by cannibalizing the growth of other sectors, because no other sector has sufficient growth to fund AI, itself, and the debt that needs to be repaid to make the entire thing make sense.
US GDP is $30T, so that revenue is less than 1% of it. But 1% of GDP is still an eye-popping amount. But remember, in the non-Google world that was split up among Yellow Pages, TV ads, etc., and possibly many ventures that weren't viable without targeted ads never came to fruition.
Google is tightly integrated vertically. It is going to be very hard to dislodge that.
Right now Gemini gives a youtube link in every response. That means they have already monetised their product using ads.
It's a completely optional purchase, and there's no clear way for ads to be included without muddying up the actual answer.
"The most popular brand of bread in America is........BUTTERNUT (AD)"
It's a sinkhole that they are destroying our environment for. It's not sustainable on a massive scale, and I expect to see Sam Altman join his 30 Under 30 cohorts, SBF and such, eventually.
They should be concerned with open-weight models that don't run on consumer hardware. The larger models from Qwen (Qwen Max) and ZLM (GLM and GLM Air) perform not too far from Claude Sonnet 4 and GPT-5. ZLM offers a $3 plan that is decently generous. I can pretty much use it in place of Sonnet 4 in Claude Code (I swear, Anthropic has been nerfing Sonnet 4 for people on the Pro plan).
You can run Qwen3-coder for free up to 1000 requests a day. Admittedly not state of the art, but it works about as well as 5o-mini.
I believe regular people will not change from ChatGPT if it has some ads. I know people who use "alternative" wrappers that have ads because they aren't tech savvy, and I agree with the OP that this could be a significant amount of money. We aren't the 700 million people that use it.
Definitely don’t argue against that, once people get into a habit of using something, it takes quite a bit to get away from it. Just that an American startup can literally run ZLM models themselves (open weight with permissive license) as a competitor to ChatGPT is pretty wild to think about
One of the side effects of having a chat interface, is that there is no moat around it. Using it is natural.
Changing from Windows to Mac or iOS to Android requires changing the User Interface. All of these chat applications have essentially the same interface. Changing between ChatGPT and Claude is essentially like buying a different flavor of potato chip. There is some brand loyalty and user preference, but there is very little friction.
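To make that "different flavor of potato chip" point concrete: most chat providers (and local servers) now expose OpenAI-compatible endpoints, so switching is often little more than a config change. A minimal sketch using the official openai Python client; the base URLs, model names, and key handling below are illustrative placeholders, not any provider's recommended setup.

```python
from openai import OpenAI

# Hypothetical provider table; URLs and model names are placeholders.
# "Switching providers" is just picking a different row.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "local":  {"base_url": "http://localhost:8080/v1",  "model": "qwen2.5-7b-instruct"},
}

def ask(provider: str, prompt: str, api_key: str = "sk-placeholder") -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=api_key)
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("openai", "Why do chat UIs have so little lock-in?"))
```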
If they overnight were able to capture as much revenue per user as Meta (about 50 bucks a year) they'd bring in a bucket of cash immediately.
But selling that much ad inventory overnight - especially if they want new formats vs "here's a video randomly inserted in your conversation" sorta stuff - is far from easy.
Their compute costs could easily go down as technology advances. That helps.
But can they ramp up the advertising fast enough to bring in sufficient profit before cheaper down-market alternatives become common?
They lack the social-network lock-in effect of Meta, or the content of ESPN, and it remains to be seen if they will have the "but Google has better results than Bing" stickiness of Google.
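For scale, a back-of-the-envelope version of that Meta comparison, using figures already cited in this thread (700 million weekly users, roughly $50 per user per year); these are rough inputs, not reported numbers.

```python
weekly_users = 700_000_000   # figure cited upthread
meta_like_arpu = 50          # rough $/user/year from the parent comment
annualized = weekly_users * meta_like_arpu
print(f"~${annualized / 1e9:.0f}B/year at Meta-like ad ARPU")  # ~$35B/year
```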
What is OpenAI's moat? There's plenty of competitors running their own models and tools. Sure, they have the ChatGPT name, but I don't see them massively out-competing the entire market unless the future model changes drastically improve over the 3->4->5 trajectory.
Google got a massive leg up on the rest by having a better service. When Bing first came out, I was not impressed with what I got, and never really bothered going back to it.
Search quality isn't what it used to be, but the inertia is still paying dividends. That same inertia also applied to Google ads.
I'm not nearly so convinced OpenAI has the same leg up with ChatGPT. ChatGPT hasn't become a verb quite like google or Kleenex, and it isn't an indispensable part of a product.
Google has always been much better than the competition. Even today with their enshittification, competitors still aren’t as good.
The only thing that has changed that status quo is the rise of audiovisual media and sites closing up so that Google can’t index them, which means web search lost a lot of relevance.
google's moat is a combination of it being free and either being equal to or outright better than competitors
It's Sam.
From what I understand he was the only one crazy enough to demand hundreds of GPUs for months to get ChatGPT going. Which at the time sounded crazy.
So yeah Sam is the guy with the guts and vision to stay ahead.
Past performance is no guarantee of future results.
You might see Sam as a Midas who can turn anything into gold. But history shows that very few people sustain that pattern.
This! The cost of training models inevitably goes down over time as FLOPS/$ and PB/$ increases relentlessly thanks to the exponential gains of Moore's law. Eventually we will end up with laptops and phones being Good Enough to run models locally. Once that happens, any competitor in the space that decides to actively support running locally will have operating costs that are a mere fraction of OpenAI's current business.
The pop of this bubble is going to be painful for a lot of people. Being too early to a market is just as bad as being too late, especially for something that can become a commodity due to a lack of moat.
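To make the "run it locally" scenario above concrete: a minimal sketch, assuming a quantized open-weight model in GGUF format and the llama-cpp-python bindings; the model file name is a placeholder for whatever fits your hardware.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path: any quantized open-weight chat model in GGUF format.
llm = Llama(model_path="./models/qwen2.5-7b-instruct-q4_k_m.gguf", n_ctx=4096)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this email thread in three bullet points."}]
)
print(resp["choices"][0]["message"]["content"])
```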
> increases relentlessly thanks to the exponential gains of Moore's law
Moore's so-called "law" hasn't been true for years.
Chinese AI defeated American companies because they spent effort to optimize the software.
Bad news on the Moore's Law front.
https://cap.csail.mit.edu/death-moores-law-what-it-means-and...
You just said that everyone will be able to run a powerful AI locally and then you said this would lead to a pop of the bubble.
Well, which is it? Is AI going to have huge demand for chips and get much bigger, or is the bubble going to pop? You can't have both.
My opinion is that local LLMs will do the bulk of low-value inference, such as mundane personal-life tasks, while cloud AI will be reserved for work and for advanced research purposes.
> google.com, youtube, chrome, android, gmail, google map etc
Of those, it's 50/50. The acquisitions were YT, Android, Maps. Search was obviously Google's original product, Chrome was an in-house effort to rejuvenate the web after IE had caused years of stagnation, and Gmail famously started as a 20% project.
There are of course criticisms that Google has not really created any major (say, billion-user) in-house products in the past 15 years.
Chrome's engine was WebKit originally, which they then forked. Not an acquisition, but benefitted greatly from prior work.
By this point I imagine it's a novelty to find any code from the original acquisition in those products.
A lot of that money comes from search result ads. Sometimes I click on an ad to visit a site I search for instead of scrolling to the same link in the actual search results. Many companies bid on keywords for their own name to prevent others from taking a customer who is interested in you.
You used to be able to be a useful site and sit at the top of the search results for some keywords; now you have to pay.
It's a lot more complicated, but yes advertising works.
There is a saying in India, whats seen is what is sold.
Not the hidden best product.
Yes, they do. Advertising works. "Free with ads" isn't really free because on average you'll end up spending more money than you would otherwise. You're also paying more than if it was a subscription because the producer has to create both the product and also advertise it.
>> ... underestimating the money they will make from ads in the future.
I would like AI to focus on helping consumers discover the right products for their stated needs as opposed to just being shown (personalized) ads. As of now, I frequently have a hard time finding the things I need via Amazon search, Google, as well as ChatGPT.
The problem with this is that I have moved to Gemini with zero loss in functionality, and I’m pretty sure that Google is 100x better at ads than OpenAI.
>"Look at how much companies overspend on cloud just to not have to do IT work."
I think they are doing it for different reasons. Some are legit, like renting this supercomputer for a day, and some are "everybody else is doing it." I am friends with a small company owner, and they have a sysadmin who picks his nose and does nothing, and then they pay a fortune to Amazon.
Ok, but there will be users using even more insanely powerful datacenter computers that will be able to out-AI the local AI users.
Nvidia/Apple (hardware companies) are the only winner in this case
> They generated $4.3B in revenue without any advertising program
To be clear, they bought/aired a Super Bowl advert. That is pretty expensive. You might argue that a "Super Bowl advert" versus $4B+ in revenue is inconsequential, but you cannot say there is no advertising. Also, their press release said:
> $2 billion spent on sales and marketing
Vague. Is this advertising? Eh, not sure, but that is a big chunk of money.
I think they mean OpenAI showing ads from other companies to users, not buying ads themselves.
Everyone is trying to compare AI companies with something that happened in the past, but I don't think we can predict much from that.
GPUs are not railroads or fiber optics.
The cost structure of ChatGPT and other LLM based services is entirely different than web, they are very expensive to build but also cost a lot to serve.
Companies like Meta, Microsoft, Amazon, Google would all survive if their massive investment does not pay off.
On the other hand, OpenAI, Anthropic and others could soon find themselves in a difficult position and be at the mercy of Nvidia.
Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027. It won’t retain much value in the same way as the infrastructure of previous bubbles did?
The A100 came out 5.5 years ago and is still the staple for many AI/ML workloads. Even AI hardware just doesn’t depreciate that quickly.
> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.
I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8-bit/4-bit precision, and I believe yields are getting harder to achieve as things get much smaller.
Betting against compute getting better/cheaper/faster is probably a bad idea, but fundamental improvements I think will be a lot slower over the next decade as shrinking gets a lot harder.
>> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.
> I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8-bit/4-bit precision, and I believe yields are getting harder to achieve as things get much smaller.
I'm no expert, but my understanding is that as feature sizes shrink, semiconductors become more prone to failure over time. Those GPUs probably aren't going to all fry themselves in two years, but even if GPUs stagnate, chip longevity may limit the medium/long-term value of the (massive) investment.
> changing 2027 to 2030 doesn't make the math much better
Could you show me?
Early turbines didn't last that long. Even modern ones are only rated for a few decades.
Unfortunately the chips themselves probably won’t physically last much longer than that under the workloads they are being put to. So, yes, they won’t be totally obsolete as technology in 2028, but they may still have to be replaced.
Neato. How’s that 1999 era laptop? Because 25 year old trains are still running and 25 year old train track is still almost new. It’s not the same and you know it.
Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending is ridiculously long.
Effectively every single H100 in existence now will be e-waste in 5 years or less. Not exactly railroad infrastructure here, or even dark fiber.
> Effectively every single H100 in existence now will be e-waste in 5 years or less.
This is definitely not true, the A100 came out just over 5 years ago and still goes for low five figures used on eBay.
> Effectively every single H100 in existence now will be e-waste in 5 years or less.
This remains to be seen. H100 is 3 years old now, and is still the workhorse of all the major AI shops. When there's something that is obviously better for training, these are still going to be used for inference.
If what you say is true, you could find a A100 for cheap/free right now. But check out the prices.
> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago.
That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.
When we look back in 100 years, the total amortization cost for the "winner" won't look so bad. The “picks and axes” (i.e. H100s) that soon wore down, but were needed to build the grander vision won't even be a second thought in hindsight.
At the rate they are throwing obstacles at the promised subway (which they got rid of the 3rd Ave El for), maybe his/her grandkids will finish the trip.
> Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending is ridiculously long.
Are we? I was under the impression that the tracks degraded due to stresses like heat/rain/etc. and had to be replaced periodically.
Except they behave less like shrewd investors and more like bandwagon jumpers looking to buy influence or get rich quick. Crypto, Twitter, ridesharing, office sharing and now AI. None of these have been the future of business.
Business looks a lot like what it has throughout history. Building physical transport infrastructure, trade links, improving agricultural and manufacturing productivity and investing in military advancements. In the latter respect, countries like Turkey and Iran are decades ahead of Saudi in terms of building internal security capacity with drone tech for example.
Agreed - I don’t think they are particularly brilliant as a category. Hereditary kleptocracy has limits.
But… I don't think there's an example in modern history of this much capital moving around based on whim.
The “bet on red” mentality has produced some odd leaders with absolute authority in their domain. One of the most influential figures on the US government claims to believe that he is saving society from the antichrist. Another thinks he’s the protagonist in a sci-fi novel.
We have the madness of monarchy with modern weapons and power. Yikes.
Exactly: when was the last time you used ChatGPT-3.5? Its value deprecated to zero after, what, two-and-a-half years? (And the Nvidia chips used to train it have barely retained any value either)
The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.
I would think that it's more like a general codebase - even if after 2.5 years, 95% percent of the lines were rewritten, and even if the whole thing was rewritten in a different language, there is no point in time at which its value diminished, as you arguably couldn't have built the new version without all the knowledge (and institutional knowledge) from the older version.
I rejoined a previous employer of mine, one everyone here knows ... and I found that half their networking equipment is still being maintained by code I wrote in 2012-2014. It has not been rewritten. Hell, I rewrote a few parts that badly needed it despite joining another part of the company.
I really did, a few days ago. gpt-3.5-fast is a great model for certain tasks and cost-wise via the API. Lots of solutions being built on today's latest are for tomorrow's legacy model — if it works, just pin the version.
> money on fire forever just to jog in place.
Why?
I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?
I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.
We don't need better models for the vast majority of chats taking place today. E.g. kids using it for help with homework - are today's models really not good enough?
But is it a bit like a game of musical chairs?
At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.
Not necessarily? That assumes that the first "good enough" model is a defensible moat - i.e., the first ones to get there becomes the sole purveyors of the Good AI.
In practice that hasn't borne out. You can download and run open weight models now that are spitting distance to state-of-the-art, and open weight models are at best a few months behind the proprietary stuff.
And even within the realm of proprietary models no player can maintain a lead. Any advances are rapidly matched by the other players.
More likely at some point the AI becomes "good enough"... and every single player will also get a "good enough" AI shortly thereafter. There doesn't seem like there's a scenario where any player can afford to stop setting cash on fire and start making money.
It's not that the investments just won't pay off, it's that the global markets are likely to crash like happened with the subprime mortgage crisis.
The dot com boom involved silly things like Pets.com IPOing pre-revenue. Claude code hit $500m in ARR in 3 months.
The fact people don't see the difference between the two is unreal. Hacker news has gone full r* around this topic, you find better nuance even on Reddit than here.
But it does involve a ton of commercial real estate investment, as well as a huge shakeup in the energy market. People may not lose their homes, but we'll all be paying for this one way or another.
The fed could still push the real value of stocks quite a bit by destroying the USD, if they want, by pinning interest rates near 0 and forcing a rush to the exits to buy stock and other asset classes.
The one thing smaller companies might have is allocated power budgets from power companies. Part of the mad dash to build datacenters right now is just to claim the power so your competitors can't. Now I do think the established players hold an edge here, but I don't think OpenAI/Anthropic/etc are without some bargaining power(hah).
The past/present company they remind me of the most is semiconductor fabs. Significant generation-to-generation R&D investment, significant hardware and infrastructure investment, quite winner-takes-all on the high end, obsoleted in a couple years at most.
The main differences are these models are early in their development curve so the jumps are much bigger, and they are entirely digital so they get “shipped” much faster, and open weights seem to be possible. None of those factors seem to make it a more attractive business to be in.
If you build the actual datacenter, less than half the cost is the actual compute. The other half is the actual datacenter infrastructure, power infrastructure, and cooling.
So in that sense it's not that much different from Meta and Google which also used server infrastructure that depreciated over time. The difference is that I believe Meta and Google made money hand over fist even in their earliest days.
The funniest thing about all this is that the biggest difference between LLMs from Anthropic, Google, OpenAI, and Alibaba is not model architecture or training objectives, which are broadly similar, but the dataset. What people don't realize is how much of that data comes from massive undisclosed scrapes + synthetic data + countless hours of expert feedback shaping the models. As methodologies converge, the performance gap between these systems is already narrowing and will continue to diminish over time.
I think the most interesting numbers in this piece (ignoring the stock compensation part) are:
$4.3 billion in revenue - presumably from ChatGPT customers and API fees
$6.7 billion spent on R&D
$2 billion on sales and marketing - anyone got any idea what this is? I don't remember seeing many ads for ChatGPT but clearly I've not been paying attention in the right places.
Open question for me: where does the cost of running the servers used for inference go? Is that part of R&D, or does the R&D number only cover servers used to train new models (and presumably their engineering staff costs)?
Free usage usually goes in sales and marketing. It's effectively a cost of acquiring a customer. This also means it is considered an operating expense rather than a cost of goods sold and doesn't impact your gross margin.
Compute in R&D will be only training and development. Compute for inference will go under COGS. COGS is not reported here but can probably be, um, inferred by filling in the gaps on the income statement.
(Source: I run an inference company.)
I think it makes the most sense this way, but I've seen it accounted for in other ways. E.g. if free users produce usage data that's valuable for R&D, then they could allocate a portion of the costs there.
Also, if the costs are split, there usually has to be an estimation of how to allocate expenses. E.g. if you lease a datacenter that's used for training as well as paid and free inference, then you have to decide a percentage to put in COGS, S&M, and R&D, and there is room to juice the numbers a little. Public companies are usually much more particular about tracking this, but private companies might use a proxy like % of users that are paid.
OpenAI has not been forthcoming about their financials, so I'd look at any ambiguity with skepticism. If it looked good, they would say it.
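To illustrate the allocation point with numbers: a toy sketch of how splitting one shared compute bill between COGS and R&D/S&M moves reported gross margin. The revenue figure is from the article; the compute total and the split percentages are made up purely for illustration.

```python
revenue = 4.3e9          # H1 2025 revenue, per the article
shared_compute = 5.0e9   # hypothetical combined training + inference spend

def gross_margin(cogs_share: float) -> float:
    # Only the slice allocated to paid inference hits COGS;
    # the rest is booked as R&D or sales & marketing (opex).
    cogs = shared_compute * cogs_share
    return (revenue - cogs) / revenue

for share in (0.2, 0.4, 0.6):
    print(f"{share:.0%} of compute in COGS -> gross margin {gross_margin(share):.0%}")
```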
Marketing != advertising, although this budget probably does include some traditional advertising. It is most likely about building the brand and brand awareness, as well as partnerships etc. I would imagine the sales team is probably quite big and hosts all kinds of events. But I would say a big chunk of this "sales and marketing" budget goes into lobbying and government relations. And they are winning big time on that front. So it is money well spent from their perspective (although not from ours). This is all just an educated guess from my experience with budgets from much smaller companies.
I agree - they're winning big and booking big revenue.
If you discount R&D and "sales and marketing", they've got a net loss of "only" $500 million.
They're trying to land grab as much surface area as they can. They're trying to magic themselves into a trillion dollar FAANG and kill their peers. At some point, you won't be able to train a model to compete with their core products, and they'll have a thousand times the distribution advantage.
ChatGPT is already a new default "pane of glass" for normal people.
Is this all really so unreasonable?
I certainly want exposure to their stock.
> If you discount R&D and "sales and marketing"
If you discount sales & marketing, they will start losing enterprise deals (like the US government). The lack of a free tier will impact consumer/prosumer uptake (free usage usually comes out of the sales & marketing budget).
If you discount R&D, there will be no point to the business in 12 months or so. Other foundation models will eclipse them and some open source models will likely reach parity.
Both of these costs are likely to increase rather than decrease over time.
> ChatGPT is already a new default "pane of glass" for normal people.
OpenAI should certainly hope this is not true, because then the only way to scale the business is to get all those "normal" people to spend a lot more.
We have ChatGPT advertising on bus stops here in the UK.
Two people in a cafe having a meet-up, they are both happy, one is holding a phone and they are both looking at it.
And it has a big ChatGPT logo in the top right corner of the advertisement - transparent just the black logo with ChatGPT written underneath.
That's it. No text or anything telling you what the product is or does. Just the suggestion that it will somehow make you happy during conversations with friends.
I see multiple banner ads promoting ChatGPT on my way to work. (India)
> $2 billion on sales and marketing - anyone got any idea what this is?
Not sure where/how I read it, but remember coming across articles stating OpenAI has some agreements with schools, universities and even the US government. The cost of making those happen would probably go into "sales & marketing".
It's pretty well accepted now that for pre-training LLMs the curve is an S, not an exponential, right? Maybe it's all in RL post-training now, but my understanding(?) is that it's not nearly as expensive as pre-training. I don't think 3-6 months is the time to 10X improvement anymore (however that's measured); it seems closer to a year and growing, assuming the plateau is real. I'd love to know if there are solid estimates on "doubling times" these days.
With the marginal gains diminishing, do we really think they (all of them) are going to continue spending that much more for each generation? Even the big guys with the money like Google can't justify increasing spending forever given this. The models are good enough for a lot of useful tasks for a lot of people. With all due respect to the amazing science and engineering, OpenAI (and probably the rest) have arrived at their performance with at least half of the credit going to brute-force compute, hence the cost. I don't think they'll continue that in the face of diminishing returns. Someone will ramp down and get much closer to making money, focusing on maximizing token cost efficiency to serve and utility to users with a fixed model(s). GPT-5 with its auto-routing between different performance models seems like a clear move in this direction. I bet their cost to serve the same performance as, say, Gemini 2.5 is much lower.
Naively, my view is that there's some threshold raw performance that's good enough for 80% of users, and we're near it. There's always going to be demand for bleeding edge, but money is in mass market. So if you hit that threshold, you ramp down training costs and focus on tooling + ease of use and token generation efficiency to match 80% of use cases. Those 80% of users will be happy with slowly increasing performance past the threshold, like iphone updates. Except they probably won't charge that much more since the competition is still there. But anyway, now they're spending way less on R&D and training, and the cost to serve tokens @ the same performance continues to drop.
All of this is to say, I don't think they're in that dreadful of a position. I can't even remember why I chose you to reply to, I think the "10x cheaper models in 3-6 months" caught me. I'm not saying they can drop R&D/training to 0. You wouldn't want to miss out on the efficiency of distillation, or whatever the latest innovations I don't know about are. Oh and also, I am confident that whatever the real number N is for NX cheaper in 3-6 months, a large fraction of that will come from hardware gains that are common to all of the labs.
I have seen tonnes of ChatGPT ads on Reddit, usually with image generation of a dog in Japanese cartoon style.
you see content about OpenAI everywhere; they spent $2B on marketing. You're in the right places, you're just used to seeing things labeled as ads.
you remember everyone freaking out about GPT-5 when it came out, only for it to be a bust once people got their hands on it? that's what paid media looks like in the new world.
> $2 billion on sales and marketing - anyone got any idea what this is?
I used to follow OpenAI on Instagram, all their posts were reposts from paid influencers making videos on "How to X with ChatGPT." Most videos were redundant, but I guess there are still billions of people that the product has yet to reach.
Italian advertising is weird in general. A month ago, leaving Venice, we pulled over at a gas station and I started just flipping through pages of some magazine. At some point I see an ad for what looks like old-fashioned shoes - the owner of the company holding his son, with the sign "from generation to generation". Only thing - the ~3 year old boy is completely naked, wearing only shoes, with his little pee pee sticking out. It shocked me, and I was unsure if it was just my American domestication or there was really something wrong with it. I took a picture and wanted to send it to my friends in the USA to show them what Italian advertising looks like, before getting sweats that if I were caught with that picture in the US, I would get in some deep trouble. I quickly deleted it, just in case. Crazy story..
Not crazy, it's just a cultural thing.
The US (and maybe the whole Anglo-Saxon world) is a bit mired in this "let's consider everything the worst case scenario" mindset: no, having a photo in your messenger app of your friend's naked kiddo being funny at the beach or in the garden is not child pornography. The fact that there are extremely few people who might see it as sexual should not influence the overall population as much as it does.
For me, I wouldn't blink an eye to such an ad, but due to my exposure to US culture, I do feel uneasy about having photos like the above in my devices (to the point of also having a thought pass my mind when it's of my own kids mucking about).
I resist it because I believe it's the wrong cultural standard to adhere to: nakedness is not by default sexual, and especially with small kids before they develop any significant sexual characteristics.
I’ve seen some on electronic street-level signs in Atlanta when I visited. So there is some genuine advertising.
Sales people out in the field selling to enterprises + free credits to get people hooked.
Speculating, but they pay to be integrated as the default AI in various places, the same way Google has paid to be the default search engine on things like the iPhone?
> $2 billion on sales and marketing
Probably an accounting trick to account for non-paying customers or the week of "free" Cursor GPT-5 use.
People in this comment section focus on brand ads too much.
It’s the commercial intent where OpenAI can both make money and preserve trust.
I already don’t Google anymore. I just ask ChatGPT „give me an overview of best meshtastic devices to buy“ and then eventually end with „give me links to where I can buy these in Europe“.
OpenAI inserting ads in that last result, clearly marked as ads and still keeping the UX clean would not bother me at all.
And commercial queries are what, 40-50% of all Google revenue?
I think people are massively underestimating the money they will make from ads in the future.
They generated $4.3B in revenue without any advertising program to monetise their 700 million weekly active users, most of whom use the free product.
Google earns essentially all of its revenue from ads: $264B in 2024. ChatGPT has more consumer trust than Google at this point, and numerous ways of inserting sponsored results, which they're starting to experiment with via the recent announcement of direct checkout.
The biggest concern IMO is how good the open weight models coming out of China are, on consumer hardware. But as long as OpenAI remains the go-to for the average consumer, they’ll be fine.