nojs 7 hours ago

I think people are massively underestimating the money that will come from ads in the future.

They generated $4.3B in revenue without any advertising program to monetise their 700 million weekly active users, most of whom use the free product.

Google earns essentially all of its revenue from ads: $264B in 2024. ChatGPT has more consumer trust than Google at this point, and numerous ways of inserting sponsored results, which they’re starting to experiment with via the recent announcement of direct checkout.

The biggest concern IMO is how good the open-weight models coming out of China are on consumer hardware. But as long as OpenAI remains the go-to for the average consumer, they’ll be fine.

  • asa400 4 hours ago

    What is OpenAI's competitive moat? There's no product stickiness here.

    What prevents people from just using Google, who can build AI stuff into their existing massive search/ads/video/email/browser infrastructure?

    Normal, non-technical users can't tell the difference between these models at all, so their usage numbers are highly dependent on marketing. Google has massive distribution with world-wide brands that people already know, trust, and pay for, especially in enterprise.

    Google doesn't have to go to the private markets to raise capital, they can spend as much of their own money as they like to market the living hell out of this stuff, just like they did with Chrome. The clock is running on OpenAI. At some point OpenAI's investors are going to want their money back.

    I'm not saying Google is going to win, but if I had to bet on which company's money runs out faster, I'm not betting against Google.

    • rajaman0 4 hours ago

      Consumer brand quality is so massively underrated by tech people.

      ChatGPT has a phenomenal brand. That's worth 100x more than "product stickiness". They have 700 million weekly users and growing much faster than Google.

      I think your points on Google being well positioned are apt for capitalization reasons, but only one company has consumer mindshare on "AI" and it's the one with "AI" in its name.

      • sharkweek 4 hours ago

        I’ve got “normie” friends who, I’d bet, don’t even know that what Google puts at the top of their search results is “AI” output, and instead assume it’s just some extension of the normal search results we’ve all gotten used to (the knowledge graph).

        Every one of them refers to using “ChatGPT” when talking about AI.

        How likely is it to stay that way? No idea, but OpenAI has clearly captured a notable amount of mindshare in this new era.

      • viking123 an hour ago

        Google has to be shitting its pants. No one knows what "gemini" is, probably some stupid nerd thing. Normies know ChatGPT and that is what matters.

      • parineum 3 hours ago

        > They have 700 million weekly users and growing much faster than Google.

        Years old company growing faster than decades old company!

        2.5 billion people use Gmail. I assume people check their mail (and, more importantly, receive mail) much more often than weekly.

        ChatGPT has a lot of growing to do to catch up, even if it's faster

        • boston_clone 25 minutes ago

          I read that as OpenAI’s WAU is showing a steeper increase than Google ever did. Not saying it’s factually accurate, just that it’s not a fixed point-in-time comparison :)

      • otabdeveloper4 an hour ago

        > ChatGPT has a phenomenal brand.

        If by "phenomenal" you mean "the premier slop and spam provider", then yes.

    • jstummbillig 12 minutes ago

      ChatGPT (and all the competitors) are trivially sticky products: I have a lot of ongoing conversations in there that I pick up all the time. Add more long-term memory features, a direction I am sure they will keep pushing, and all of a sudden there is a lot of personal data that you rely on it having, that makes the product better, and that most people will never care to replicate or transfer. Just being the product that people use makes you the product that people will use. "The other app doesn't know me" is the moat. The data that people put in it is the moat.

    • onion2k 8 minutes ago

      > What prevents people from just using Google, who can build AI stuff into their existing massive search/ads/video/email/browser infrastructure?

      Google have never had a viable competitor. Their moat on Search and Ads has been so incredibly hard to beat that no one has even come close. That has given them an immense amount of money from search ads. That means they've appeared to be impossible to beat, but if you look at literally all their other products they aren't top in anything else despite essentially unlimited resources.

      A company becoming a viable competitor to Google Search and/or Ads is not something we can easily predict the outcome of. Many companies in the past who have had a 'monopoly' have utterly fallen apart at the first sign of real competition. We even have a term for it that YC companies love to scatter around their pitch decks - 'disruption'. If OpenAI takes even just 5% of the market Google will need to either increase their revenue by $13bn (hard, or they'd have done that already) or they'll need to start cutting things. Or just make $13bn less profit I guess. I don't think that would go down well though.
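      A quick back-of-envelope check of that 5% scenario, using the $264B ad revenue figure cited upthread (the 5% share is this comment's hypothetical, not a measured number):

```python
# Back-of-envelope: what losing 5% of the search-ad market would cost Google.
# $264B is the 2024 Google ad revenue figure cited upthread; the 5% share
# is this comment's hypothetical, not a measured number.
google_ad_revenue_2024 = 264e9   # USD
hypothetical_share_lost = 0.05

revenue_at_risk = google_ad_revenue_2024 * hypothetical_share_lost
print(f"${revenue_at_risk / 1e9:.1f}B")  # prints $13.2B
```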

    • jofzar 2 hours ago

      > What is OpenAI's competitive moat? There's no product stickiness here.

      Would have agreed with you until I saw the meltdown of people losing their "friend" when GPT-5 was released. Somehow OpenAI has ended up with a "sticky" userbase.

    • mrheosuper 4 hours ago

      >non-technical users can't tell the difference between these models at all

      My non-tech friend said she prefers ChatGPT to Gemini, mostly due to its tone.

      So non-tech people may not know the difference in technical detail, but they sure can have a bias.

      • ryukoposting 4 hours ago

        I have a non-techy friend who used 4o for that exact reason. Compared to most readily available chatbots, 4o just provides more engaging answers to non-techy questions. He likes to have extended conversations about philosophy and consciousness with it. I showed him R1, and he was fascinated by the reasoning process. Makes sense, given the sorts of questions he likes to ask it.

        I think OpenAI is pursuing a different market from Google right now. ChatGPT is a companion, Gemini is a tool. That's a totally arbitrary divide, though. Change out the system prompts and the web frontend. Ta-daa, you're in a different market segment now.

    • wiseowise an hour ago

      > Normal, non-technical users can't tell the difference between these models at all, so their usage numbers are highly dependent on marketing.

      When I think as a “normal” user, I can definitely see the difference between them all.

      • louiskottmann 43 minutes ago

        As history showed us numerous times, it doesn't even have to be the best to win. It rarely is, really. See the most pervasive programming languages for that.

    • returnInfinity 4 hours ago

      ChatGPT has won. I talk to all the teens living nearby and they all use ChatGPT and not Google.

      The teens don't know what OpenAI is, they don't know what Gemini is. They sure know what ChatGPT is.

      • MountDoom 4 hours ago

        All of these teens use Google Docs instead of OpenAI Docs, Google Meet instead of OpenAI Meet, Gmail instead of OpenAI Mail, etc.

        I'm sure that far fewer people go to gemini.google.com than to chatgpt.com, but Google has LLMs seamlessly integrated into each of these products, and it's a part of people's workflows at school and at work.

        For a while, I was convinced that OpenAI had won and that Google won't be able to recover, but this lack of vertical integration is becoming a liability. It's probably why OpenAI is trying to branch into weird stuff, like running a walled-garden TikTok clone.

        Also keep in mind that unlike OpenAI, Google isn't under pressure to monetize AI products any time soon. They can keep subsidizing them until OpenAI runs out of other people's money. I'm not saying OpenAI has no path forward, but it's not all that clear-cut.

      • asa400 4 hours ago

        I'm not saying you're wrong, but people said the same thing about Yahoo, Excite, Lycos, etc. in 1999. Interesting times, then and now.

    • SamPatt an hour ago

      There is definitely stickiness if you're a frequent user. It has a history of hundreds of conversations. It knows a lot about me.

      Switching would be like coding with a brand new dev environment. Can I do it? Sure, but I don't want to.

    • raincole 3 hours ago

      Brand is the second most important moat (second only to regulatory capture).

      If normal people start saying "ChatGPT" to refer to AI, they win, just like "google" became a verb for search.

      It seems to be the case.

      • taurath 3 hours ago

        As a counter, you can buy a hell of a lot of brand for $8 billion though.

        You can give your most active 50,000 users $160,000 each, for example.

        You can run campaign ads on every billboard, radio station, TV station, and Facebook feed, tarring and feathering ChatGPT.

        Hell, for only $200m you could just get the current admin to force OpenAI to sell to Larry Ellison, and deport Sam Altman to Abu Dhabi like Nermal from Garfield.

        So many options!

    • loandbehold 4 hours ago

      Users' chat history is the moat. The more you use it, the more it knows about you and can help you in ways that are customized to particular user. That makes it sticky, more so than web search. Also brand recognition, ChatGPT is the default general purpose LLM choice for most people. Everyone and their mom is using it.

      • chrischen 4 hours ago

        Walling in my personal data would be a surefire way to get me to not use OpenAI.

        • remarkEon 3 hours ago

          Is there a way, today, for me to move the project folders I have in the paid version to another AI product?

      • what 4 hours ago

        Gemini is probably the default general purpose LLM since its answers are inserted into every google result.

    • 01100011 3 hours ago

      I'm saying Google is going to win. They're not beholden to their current architecture as much as other shovelmakers and can pivot their TPU to offer the best inference perf/$. They also hold about as much personal data as anyone else and have plenty of stuff to train on in-house. I work for a competitor and even I think there's a good chance google "wins"(there's never a winner because the race never ends).

      • matwood 14 minutes ago

        The problem is that Google is horrible at product. They have been so spot-on at search that it's covered up all the other issues around products. YT is great, but they bought that. The Pixel should be the Android phone, but they do a poor job marketing it. They should be leading AI, but stumbled multiple times in the rollout. They normally get the tech right, and then fumble the productizing and marketing.

      • nwellinghoff 2 hours ago

        I think we are all forgetting that Google is a massive bureaucracy that has to move out of its own way to get anything done. The younger companies have a distinct advantage here, hence the cycle of company growth and collapse. I think OpenAI and the like have a very good chance here.

      • fooker 3 hours ago

        Is there publicly available evidence that TPU perf/$ is better than Blackwell ?

        I know it seems intuitively true but was surprised to not really find evidence for it.

        • gigatexal 3 hours ago

          Yeah ... Polymarket and other prediction markets seem to be betting that Google, by year's end or sometime next year, will have the best gen-AI models on the market ... but I've been using Claude Sonnet 4.5 with GitHub Copilot and swear by it.

          Anyway, it would be nice to see some apples-to-apples benchmarks of TPU vs Nvidia hardware, but how would that work given CUDA is not hardware-agnostic?

    • kristopolous 4 hours ago

      It's a distinctive brand, pleasant user experience, and a trustworthy product, like every other commodified technology on the planet.

      That's all that matters now. We've passed the "good enough" bar for llms for the majority of consumer use cases.

      From here out it's like selling cellphones and laptops

    • xz0r 4 hours ago

      Let me direct you to the reddit AMA where people were literally begging to bring back 4o.

      • hn_throwaway_99 43 minutes ago

        Yeah, anyone saying "Normal, non-technical users can't tell the difference between these models at all" isn't talking to that many normal, non-technical users.

    • fspeech 4 hours ago

      Chats have contexts. While search engines try to track you it is spookier because it is unclear to the user how the contexts are formed. In chats at least the contexts are transparent to both the provider and the user.

    • voidfunc 4 hours ago

      Brand. Brand. Brand!

      Literally nobody but nerds know what a Claude is among many others.

      ChatGPT has name recognition and that matters massively.

    • MuffinFlavored 3 hours ago

      > There's no product stickiness here.

      Very few of those 700,000,000 active users have ever heard of Claude or DeepSeek or ________. Gemini maybe.

    • beacon294 4 hours ago

      AI has been incredibly sticky. Look at the outrage, OpenAI couldn't even deprecate 4o or whatever because it's incredibly popular. Those people aren't leaving OAI if they're not even leaving a last gen model.

  • mrweasel 15 minutes ago

    > ChatGPT has more consumer trust than Google at this point

    That trust is gone the moment they start selling ad space. Where would they put the ads? In the answers? That would force more people to buy a subscription, just to avoid having the email to your boss contain a sponsored message. The numbers for Q2 look promising, sales are going up. And speaking of sales, Jif peanut butter is on sale this week.

    If OpenAI plans on making money with ads, then all the investments made by Nvidia, Microsoft and SoftBank start to look incredibly stupid. Smartest AI in the world, but we can only make money by showing you gambling ads.

  • MontyCarloHall 6 hours ago

    I also wonder if this means that even paid tiers will get ads. Google's ad revenue is only ~$30 per user per year, yet there is no paid, ad-free Google Premium, even though lots of users would gladly pay way more than $30/year to have an ad-free experience. There's no Google Premium because Google's ad revenue isn't uniformly distributed across users; it's heavily skewed towards the wealthiest users, exactly the users most likely to purchase an ad-free experience. In order to recoup the lost ad revenue from those wealthy users, Google would have to charge something exorbitant, which nobody would be willing to pay.

    I fear the same will happen with chatbots. The users paying $20 or $200/month for premium tiers of ChatGPT are precisely the ones you don't want to exclude from generating ad revenue.

    • alex43578 4 hours ago

      "Lots of users would gladly pay way more than $30/year to have an ad-free experience"? Outside of ads embedded in Google Maps, a free and simple install of uBlock Origin essentially eliminates ads in Search, YouTube, etc. I'd expect that, just like with Facebook, people would be very unwilling to pay Google to eliminate ads, since right now they aren't even willing to add a browser extension.

      • catlifeonmars 3 hours ago

        Anecdata, but my nontechnical friends have never heard of uBlock origin. They all know about ad-free youtube.

      • hsbauauvhabzb 4 hours ago

        It worked for YouTube. I don’t see why paid GPT models would follow the Google pattern and not the YouTube one, particularly when users are already conditioned to pay for GPT.

    • psadri 5 hours ago

      The average is $x. But that's global which means in some places like the US it is 10x. And in other less wealthy areas it is 0.1x.

      There is also the strange paradox that the people who are willing to pay are actually the most desirable advertising targets (because they clearly have $ to spend). So my guess is that for that segment, the revenue is 100x.

    • m11a 5 hours ago

      I’d agree. The biggest exception I can think of is X, which post-Musk has plans to reduce/remove ads. Though I don’t know how much this tanked their ad revenue and whether it was worth it.

    • grogers 4 hours ago

      Why would it be any different for youtube premium? I think Google just doesn't think enough people will pay for ad-free search, not that it would cannibalize their ad revenue.

      • nickff 4 hours ago

        YouTube's ads are much lower-cost than the 'premium' AdWords ones, because the 'intent' is lower, and targeting is worse.

    • josvdwest 5 hours ago

      Pretty sure the reason they don't have a paid tier is because engagement (and results) is better when you include ads. Like Facebook found in the early days

  • twelvechairs 7 hours ago

    > But as long as OpenAI remains the go-to for the average consumer, they'll be fine.

    This is like the argument from a couple of years ago: "as long as Tesla remains ahead of the Chinese technology...". OpenAI can definitely become a profitable company, but I don't see anything to say they will have a moat and a monopoly.

    • muzani 6 hours ago

      They're the only ones making AI with a personality. Yeah, you don't need chocolate flavored protein shakes but if I'm taking it every day, I get sick of the vanilla flavor.

      • idiotsecant 5 hours ago

        Huh? They're actively removing personality from current models as much as possible.

  • iambateman 7 hours ago

    I think this is directionally right but to nitpick…Google has way more trust than OpenAI right now and it’s not close.

    Acceleration is felt, not velocity.

    • da_chicken 5 hours ago

      Yeah, I agree with you.

      Between Android, Chrome, YouTube, Gmail (including mx.google.com), Docs/Drive, Meet/Chat, and Google Search, claiming that Google "isn't more trusted" is just ludicrous. People may not be happy they have to trust Alphabet. But they certainly do.

      And even when they insist they're Stallman, their friends do, their family does, their coworkers do, the businesses they interact with do, the schools they send their children to do.

      • djtango 4 hours ago

        Like it or not, Google has wormed their way into the fabric of modern life.

        Chrome and Google Search are still the gateway to the internet outside China. Android has over 75% market share of all mobile(!). YouTube is somewhat uniquely the video internet with Instagram and Tiktok not really occupying the same mindshare for "search" and long form.

        People can say they don't "trust" Google but the fact is that if the world didn't trust Google, it never would have gotten to where it is and it would quickly unravel from here.

        Sent from my Android (begrudgingly)

      • lemonlearnings 3 hours ago

        With search you don't fully trust Google. You trust Google to find good results most of the time, then trust those results based on other factors.

        But with AI you now have all the trust in one place. For both Google and OpenAI, their AI bullshits. It will only be trusted by fools. Luckily for the corporations, there is no end of fools to fool.

    • jacquesm 6 hours ago

      I really don't trust either. Google because of what they've already done, OpenAI because it has a guy at the helm who doesn't know how to spell the word 'ethics'.

      • renewiltord 6 hours ago

        That's mostly because LLMs think in terms of tokens not letters, so spelling is hard.

        • floren 5 hours ago

          He knows there's no "I" in "ethics"

    • chmod775 6 hours ago

      This really depends on where you are. Some countries' populations, especially those known to value privacy, are extremely distrustful of anything associated with Facebook or Google.

    • anonymousiam 6 hours ago

      I agree with you, and my impression of the trust-level of Google is pretty much zero.

    • piskov 7 hours ago

      Google and trust are an oxymoron

  • hoppp 5 hours ago

    The moment they start mixing ads into responses I'll stop using them. Open models are good enough; it's just more convenient to use ChatGPT right now, but that can change.

    • Kranar 5 hours ago

      People said the same thing about so many other online services since the 90s. The issue is that you're imagining ChatGPT as it exists right now with your current use case but just with ads inserted into their product. That's not really how these things go... instead OpenAI will wait until their product becomes so ingrained in everyday usage that you can't just decide to stop using them. It is possible, although not certain, that their product becomes ubiquitous and using LLMs someway somehow just becomes a normal way of doing your job, or using your computer, or performing menial and ordinary tasks. Using an LLM will be like using email, or using Google maps, or some other common tool we don't think much of.

      That's when services start to insert ads into their product.

      • preommr 3 hours ago

        > People said the same thing about so many other online services since the 90s.

        And this leads to something I genuinely don't understand - because I don't see ads. I use adblocker, and don't bother with media with too many ads because there's other stuff to do. It's just too easy to switch off a show and start up a steam game or something. It's not the 90s anymore, people have so many options for things.

        Idk, maybe I am wrong, but I really think there is something very broken in the ad world, a remnant from the era when Google/Facebook were brand new, the signal-to-noise ratio for advertisers was insanely high, and interest rates were low. Like, a bunch of this activity is either bots or kids, and the latter isn't that easy to monetize.

      • byzantinegene 4 hours ago

        Except it's hard to imagine a world where ChatGPT is head and shoulders above the other LLMs in capability. Google has no problem keeping up, and let's not forget that China has state-sponsored programs for AI development.

      • abnercoimbre 4 hours ago

        And if/when they reach that point, the average consumer will see the ad as an irksome fly. That's it.

      • outside1234 3 hours ago

        Except that I have switched to Gemini and not missed anything from OpenAI

    • beeflet 5 hours ago

      I agree, but the question is whether or not normal people will stop using them.

      • _aavaa_ 5 hours ago

        I think the empirical answer is no. Look at how many ads there are in everything and people still use it.

    • JumpCrisscross 5 hours ago

      > moment they start mixing ads into responses Ill stop using them

      Do you currently pay for it?

  • skanaley 6 hours ago

    "Please help me with my factorial homework."

      buyACoke 0        = 1
      buyACoke rightNow = youShould * buyACoke (rightNow `at` walmart)
        where
          youShould = rightNow
          at        = (-)
          walmart   = 1
  • raw_anon_1111 2 hours ago

    Why do people always think that just because you have a lot of users it automatically translates to ad revenue? Yahoo has been one of the most trafficked sites for decades and could never generate any reasonable amount of ad revenue.

    The other side of the coin is that running an LLM will never be as cheap as running a search engine.

  • Rohansi 6 hours ago

    It'll be interesting to see the effect ads have on their trustworthiness. There's potential for it to end up worse than Google because sponsored content can blend in better and possibly not be reliably disclosed.

    • exasperaited 6 hours ago

      There is also the IMO not exactly settled question of whether an advertiser is comfortable handing over its marketing to an AI.

      Can any AI be sensibly and reliably instructed not to do product placement in inappropriate contexts?

      • typpilol 4 hours ago

        Also what effect will these extra instructions have on output?

        Every token of context can drastically change the output. That's a big issue right now with Claude and their long conversation reminders.

  • crystal_revenge 4 hours ago

    > ads in the future.

    It boggles my mind that people still think advertising can be a major part of the economy.

    If AI is propping up the economy right now [0], how is it possible for the rest of the economy to fund AI through profit sharing? That's fundamentally what advertising is: I give you a share of my revenue (hopefully out of profits) to help increase my market share. The ceiling on advertising spend is some percentage of profits minus epsilon (for a functioning economy, at least).

    Advertising cannot be the lion's share of any economy because it derives its value from the rest of the economy.

    Advertising is also a major bubble, because my one assumption there (that it comes out of profits) is generally not the case. Unprofitable companies giving away a share of their revenue to make other companies profitable is not sustainable.

    Advertising could save AI if AI were a relatively small part of the US (or world) economy and could benefit by extracting a share of the profits from other companies. But if most of your GDP is from AI, how can it possibly cannibalize other companies in a sustainable way?

    0. https://www.techspot.com/news/109626-ai-bubble-only-thing-ke...

    • StopHammoTime 3 hours ago

      You've made a false equivalence in your argument. Growth is not representative of the entire economy. The economy is, in aggregate, much more than tech; tech has the biggest public companies, which skews how people think. No single sector makes up "most" of the economy; in fact the largest sector, finance, only makes up 21% of the US economy.

      https://www.statista.com/statistics/248004/percentage-added-...

      • crystal_revenge 3 hours ago

        > Growth is not representative of the entire economy

        Our entire economy is based on debt, it cannot function without growth. This is demonstrated by the fact that:

        > in fact the highest sector, which is finance only makes up 21% of the US economy

        Every cent earned by the finance sector ultimately derives from debt (i.e. growth has to pay it off). You just pointed out that the largest sector of our economy relies on rapid growth, and the majority of growth right now is coming from AI. AI, therefore, cannot derive the majority of its value by cannibalizing the growth of other sectors, because no other sector has sufficient growth to fund AI, itself, and the debt that needs to be repaid to make the entire thing make sense.

    • lemonlearnings 3 hours ago

      US GDP is $30T, so that revenue is less than 1% of it. But 1% of GDP is still an eye-popping amount. And remember, in the pre-Google world that spend was split between the Yellow Pages, TV ads, etc., and many ventures possibly never came to fruition for lack of targeted ads.

  • lelanthran an hour ago

    Google is tightly integrated vertically. It is going to be very hard to dislodge that.

    Right now Gemini gives a YouTube link in every response. That means they have already monetised their product using ads.

  • ares623 an hour ago

    > ChatGPT has more consumer trust than Google at this point

    Gee I wonder why?

  • piskov 7 hours ago

    I don’t pay $200 per month to use a product tightly coupled to ad revenue (ahem, tracking).

    That’s why I use Kagi, Hey, Telegram, Apple (for now) etc.

    I really hope OpenAI can build a sustainable model which is not based on that.

    • nharada 7 hours ago

      I suspect ads would be an attempt to monetize the free users not people paying $200/mo for Pro. Though who knows...

      • piskov 6 hours ago

        First, as an advertiser you want those sweet-sweet people with money.

        Second, just because they put “display: none” on the ads for pro users doesn’t mean they will create and use an entirely different architecture, data flow, and storage just for those users.

  • StarterPro 2 hours ago

    It's a completely optional purchase, and there's no clear way for ads to be included without muddying up the actual answer.

    "The most popular brand of bread in America is........BUTTERNUT (AD)"

    It's a sinkhole that they are destroying our environment for. It's not sustainable at massive scale, and I expect to see Sam Altman eventually join his 30 Under 30 cohorts, SBF and such.

  • syntaxing 6 hours ago

    They should be concerned with open-weight models that don’t run on consumer hardware. The larger models from Qwen (Qwen Max) and Z.ai (GLM and GLM Air) perform not far from Claude Sonnet 4 and GPT-5. Z.ai offers a $3 plan that is decently generous. I can pretty much use it in place of Sonnet 4 in Claude Code (I swear, Anthropic has been nerfing Sonnet 4 for people on the Pro plan).

    You can run Qwen3-Coder for free, up to 1,000 requests a day. Admittedly not state of the art, but it works about as well as 5o-mini.

    • imachine1980_ 6 hours ago

      I believe regular people will not switch away from ChatGPT if it has some ads. I know people who use "alternative" wrappers that have ads because they aren't tech savvy, and I agree with the OP that this could be a significant amount of money. We aren't the 700 million people that use it.

      • syntaxing 6 hours ago

        I definitely won’t argue against that; once people get into the habit of using something, it takes quite a bit to pull them away from it. It’s just wild to think that an American startup could literally run Z.ai’s models themselves (open weight, with a permissive license) as a competitor to ChatGPT.

        • hx8 6 hours ago

          One of the side effects of having a chat interface is that there is no moat around it. Using it is natural.

          Changing from Windows to Mac or iOS to Android requires changing the User Interface. All of these chat applications have essentially the same interface. Changing between ChatGPT and Claude is essentially like buying a different flavor of potato chip. There is some brand loyalty and user preference, but there is very little friction.

  • majormajor 4 hours ago

    If they overnight were able to capture as much revenue per user as Meta (about 50 bucks a year) they'd bring in a bucket of cash immediately.
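    As a rough check on that comparison (both inputs are approximations from the thread: the ~$50/year Meta-like ARPU above and the 700 million weekly users cited upthread, not OpenAI disclosures):

```python
# Rough check: Meta-like ad ARPU applied to ChatGPT's reported user base.
# Both numbers are thread approximations, not OpenAI disclosures.
weekly_active_users = 700e6   # users, figure cited upthread
meta_like_arpu = 50.0         # USD per user per year, approximate

implied_annual_ad_revenue = weekly_active_users * meta_like_arpu
print(f"${implied_annual_ad_revenue / 1e9:.0f}B per year")  # prints $35B per year
```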

    But selling that much ad inventory overnight - especially if they want new formats vs "here's a video randomly inserted in your conversation" sorta stuff - is far from easy.

    Their compute costs could easily go down as technology advances. That helps.

    But can they ramp up the advertising fast enough to bring in sufficient profit before cheaper down-market alternatives become common?

    They lack the social-network lock-in effect of Meta, or the content of ESPN, and it remains to be seen if they will have the "but Google has better results than Bing" stickiness of Google.

  • zdragnar 7 hours ago

    What is OpenAI's moat? There's plenty of competitors running their own models and tools. Sure, they have the ChatGPT name, but I don't see them massively out-competing the entire market unless the future model changes drastically improve over the 3->4->5 trajectory.

    • nojs 6 hours ago

      It feels similar to Google to me - what is (was) their moat? Basically slightly better results and strong brand recognition. In the later days maybe privileged data access. But why does nobody use Bing?

      • zdragnar 5 hours ago

        Google got a massive leg up on the rest by having a better service. When Bing first came out, I was not impressed with what I got, and never really bothered going back to it.

        Search quality isn't what it used to be, but the inertia is still paying dividends. That same inertia also applied to Google ads.

        I'm not nearly so convinced OpenAI has the same leg up with ChatGPT. ChatGPT hasn't become a verb quite like google or Kleenex, and it isn't an indispensable part of a product.

        • typpilol 4 hours ago

          I actually find bing better now for more technical searches.

          Most technical Google searches end up at Windows forums or the official Microsoft support site, which basically just tells you that running sfc /scannow is the fix for everything.

      • balder1991 5 hours ago

        Google has always been much better than the competition. Even today with their enshittification, competitors still aren’t as good.

        The only thing that has changed that status quo is the rise of audiovisual media and sites closing up so that Google can’t index them, which means web search lost a lot of relevance.

      • HDThoreaun 3 hours ago

        Google's moat is a combination of being free and being either equal to or outright better than competitors

    • bobby_mcbrown 6 hours ago

      It's Sam.

      From what I understand he was the only one crazy enough to demand hundreds of GPUs for months to get ChatGPT going. Which at the time sounded crazy.

      So yeah Sam is the guy with the guts and vision to stay ahead.

      • shermantanktop 6 hours ago

        Past performance is no guarantee of future results.

        You might see Sam as a Midas who can turn anything into gold. But history shows that very few people sustain that pattern.

      • bix6 5 hours ago

        Ignoring Sutskever much?

      • croes 6 hours ago

        OpenAI isn't ahead

    • bcrl 6 hours ago

      This! The cost of training models inevitably goes down over time as FLOPS/$ and PB/$ increase relentlessly thanks to the exponential gains of Moore's law. Eventually we will end up with laptops and phones being Good Enough to run models locally. Once that happens, any competitor in the space that decides to actively support running locally will have operating costs that are a mere fraction of OpenAI's current business.

      The pop of this bubble is going to be painful for a lot of people. Being too early to a market is just as bad as being too late, especially for something that can become a commodity due to a lack of moat.

      • otabdeveloper4 an hour ago

        > increases relentlessly thanks to the exponential gains of Moore's law

        Moore's so-called "law" hasn't been true for years.

        Chinese AI defeated American companies because they spent effort to optimize the software.

      • aurareturn 6 hours ago

        You just said that everyone will be able to run a powerful AI locally, and then you said this would lead to the bubble popping.

        Well, which is it? Is AI going to have such huge demand for chips that it gets much bigger, or is the bubble going to pop? You can’t have both.

        My opinion is that local LLMs will do the bulk of low-value inference, such as mundane personal-life tasks, while cloud AI will be reserved for work and for advanced research purposes.

  • Thaxll 5 hours ago

    Google has google.com, youtube, chrome, android, gmail, google map etc ... I don't see OpenAI having a product close to that.

    • wslh 5 hours ago

      Google is older and many of the products you describe were acquisitions (inorganic growth).

      • vitus 4 hours ago

        > google.com, youtube, chrome, android, gmail, google map etc

        Of those, it's 50/50. The acquisitions were YT, Android, Maps. Search was obviously Google's original product, Chrome was an in-house effort to rejuvenate the web after IE had caused years of stagnation, and Gmail famously started as a 20% project.

        There are of course criticisms that Google has not really created any major (say, billion-user) in-house products in the past 15 years.

        • ppseafield 3 hours ago

          Chrome's engine was WebKit originally, which they then forked. Not an acquisition, but benefitted greatly from prior work.

      • Workaccount2 5 hours ago

        By this point I imagine it's a novelty to find any code from the original acquisition in those products.

  • zahlman 4 hours ago

    > $264B in 2024.

    Why is this much money spent on advertising? Surely it isn't really justified by increase in sales that could be attributed to the ads? You're telling me people actually buy these ridiculous products I see advertised?

    • ipaddr 4 hours ago

      A lot of that money comes from search result ads. Sometimes I click on an ad to visit a site I search for instead of scrolling to the same link in the actual search results. Many companies bid on keywords for their own name to prevent others from taking a customer who is interested in you.

      You used to be able to be a useful site and sit at the top of the search results for some keywords; now you have to pay.

    • returnInfinity 4 hours ago

      It's a lot more complicated, but yes advertising works.

      There is a saying in India: what's seen is what sells.

      Not the hidden best product.

    • HeatrayEnjoyer 2 hours ago

      Yes, they do. Advertising works. "Free with ads" isn't really free, because on average you'll end up spending more money than you would otherwise. You're also paying more than you would for a subscription, because the producer has to fund both the product and the advertising.

  • alok-g 6 hours ago

    >> ... underestimating the money they will come from ads in the future.

    I would like AI to focus on helping consumers discover the right products for their stated needs as opposed to just being shown (personalized) ads. As of now, I frequently have a hard time finding the things I need via Amazon search, Google, as well as ChatGPT.

  • bradly 6 hours ago

    Are they currently adding affiliate links to their outbound Amazon product links?

  • outside1234 4 hours ago

    The problem with this is that I have moved to Gemini with zero loss in functionality, and I’m pretty sure that Google is 100x better at ads than OpenAI.

  • deadbabe 7 hours ago

    In 10 years most serious users of AI will be using local LLMs on insanely powerful devices, with no ads. API based services will have limited horizon.

    • api 6 hours ago

      Some will but you’re underestimating the burning desire to avoid IT and sysadmin work. Look at how much companies overspend on cloud just to not have to do IT work. They’d rather pay 10X-100X more to not have to admin stuff.

      • beeflet 5 hours ago

        It is just downloading a program and using it

      • FpUser 6 hours ago

        >"Look at how much companies overspend on cloud just to not have to do IT work."

        I think they are doing it for different reasons. Some are legit, like renting a supercomputer for a day, and some are just "everybody else is doing it." I'm friends with a small company owner; they have a sysadmin who picks his nose and does nothing, and then they pay a fortune to Amazon

      • deadbabe 6 hours ago

        I’m talking about prepackaged offline local only on device LLMs.

        What you are describing will almost certainly happen even sooner, once AI tech stabilizes and investing in powerful hardware no longer means you will quickly become out of date.

    • IncreasePosts 6 hours ago

      Ok, but there will be users using even more insanely powerful datacenter computers that will be able to out-AI the local AI users.

      • bad_haircut72 5 hours ago

        Nvidia/Apple (hardware companies) are the only winner in this case

  • throwaway2037 5 hours ago

        > They generated $4.3B in revenue without any advertising program
    
    To be clear, they bought/aired a Super Bowl advert. That is pretty expensive. You might argue that a Super Bowl advert versus $4B+ in revenue is inconsequential, but you cannot say there is no advertising.

    Also, their press release said:

        > $2 billion spent on sales and marketing
    
    Vague. Is this advertising? Eh, not sure, but that is a big chunk of money.

    • NotMichaelBay 5 hours ago

      I think they mean OpenAI showing ads from other companies to users, not buying ads themselves.

  • wnevets 6 hours ago

    Banner ads would only be the start of the enshittification of AI chats. I can't wait for the bots to start recommending products and services of the highest bidder.

stephc_int13 11 hours ago

Everyone is trying to compare AI companies with something that happened in the past, but I don't think we can predict much from that.

GPUs are not railroads or fiber optics.

The cost structure of ChatGPT and other LLM-based services is entirely different from the web: they are very expensive to build, but they also cost a lot to serve.

Companies like Meta, Microsoft, Amazon, Google would all survive if their massive investment does not pay off.

On the other hand, OpenAI, Anthropic and others could soon find themselves in a difficult position and be at the mercy of Nvidia.

  • wood_spirit 11 hours ago

    Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027. It won’t retain much value in the same way as the infrastructure of previous bubbles did?

    • christina97 10 hours ago

      The A100 came out 5.5 years ago and is still the staple for many AI/ML workloads. Even AI hardware just doesn’t depreciate that quickly.

      • oblio 4 hours ago

        Don't they degrade physically from being run at full blast 24/7 for so many years?

      • dzhiurgis 7 hours ago

        This. There’s even a market for them being built (DRW).

    • layoric 11 hours ago

      > Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.

      I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue its efficiency gains of the past. Power consumption for these chips is climbing fast, many of the gains come from better hardware support for 8-bit/4-bit precision, and I believe yields are getting harder to achieve as things get much smaller.

      Betting against compute getting better/cheaper/faster is probably a bad idea, but fundamental improvements I think will be a lot slower over the next decade as shrinking gets a lot harder.

      • palmotea 10 hours ago

        >> Unlike railroads and fibre, all the best compute in 2025 will be lacklustre in 2027.

        > I definitely don't think compute is anything like railroads and fibre, but I'm not so sure compute will continue it's efficiency gains of the past. Power consumption for these chips is climbing fast, lots of gains are from better hardware support for 8bit/4bit precision, I believe yields are getting harder to achieve as things get much smaller.

        I'm no expert, but my understanding is that as feature sizes shrink, semiconductors become more prone to failure over time. Those GPUs probably aren't going to all fry themselves in two years, but even if GPUs stagnate, chip longevity may limit the medium/long-term value of the (massive) investment.

      • spiderice 10 hours ago

        Unfortunately changing 2027 to 2030 doesn't make the math much better

        • JumpCrisscross 5 hours ago

          > changing 2027 to 2030 doesn't make the math much better

          Could you show me?

          Early turbines didn't last that long. Even modern ones are only rated for a few decades.

      • [removed] an hour ago
        [deleted]
      • skywhopper 10 hours ago

        Unfortunately the chips themselves probably won’t physically last much longer than that under the workloads they are being put to. So, yes, they won’t be totally obsolete as technology in 2028, but they may still have to be replaced.

    • mcswell 6 hours ago

      "...all the best compute in 2025 will be lacklustre in 2027": How does the compute (I assume you mean on PCs) of 2025 compare with the compute of 2023?

      Oh wait, the computer I'm typing this on was manufactured in 2020...

      • brianwawok 6 hours ago

        Neato. How’s that 1999 era laptop? Because 25 year old trains are still running and 25 year old train track is still almost new. It’s not the same and you know it.

        • 1oooqooq 4 hours ago

          Last month HN was talking about a Win95 machine with floppy drives handling rail in Germany, no less

    • potatolicious 11 hours ago

      Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending are ridiculously long.

      Effectively every single H100 in existence now will be e-waste in 5 years or less. Not exactly railroad infrastructure here, or even dark fiber.

      • hyperbovine 7 hours ago

        > Effectively every single H100 in existence now will be e-waste in 5 years or less.

        This is definitely not true; the A100 came out just over 5 years ago and still goes for low five figures used on eBay.

      • fooker 10 hours ago

        > Effectively every single H100 in existence now will be e-waste in 5 years or less.

        This remains to be seen. The H100 is 3 years old now and is still the workhorse of all the major AI shops. When there's something that is obviously better for training, these will still be used for inference.

        If what you say is true, you could find an A100 for cheap/free right now. But check out the prices.

      • 9rx 10 hours ago

        > Yep, we are (unfortunately) still running on railroad infrastructure built a century ago.

        That which survived, at least. A whole lot of rail infrastructure was not viable and soon became waste of its own. There was, at one time, ten rail lines around my parts, operated by six different railway companies. Only one of them remains fully intact to this day. One other line retained a short section that is still standing, which is now being used for car storage, but was mostly dismantled. The rest are completely gone.

        When we look back in 100 years, the total amortization cost for the "winner" won't look so bad. The “picks and axes” (i.e. H100s) that soon wore down, but were needed to build the grander vision won't even be a second thought in hindsight.

      • Spooky23 8 hours ago

        How was your trip down the third Avenue El? Did your goods arrive via boxcar to 111 8th Ave?

        • selimthegrim 5 hours ago

          At the rate they are throwing obstacles at the promised subway (the one they got rid of the 3rd Ave El for), maybe his/her grandkids will finish the trip.

      • SJC_Hacker 10 hours ago

        > Yep, we are (unfortunately) still running on railroad infrastructure built a century ago. The amortization periods on that spending is ridiculously long.

        Are we? I was under the impression that the tracks degraded due to stresses like heat/rain/etc. and had to be replaced periodically.

    • Spooky23 8 hours ago

      Unlike 1875, we have Saudi and other trillionaires/billionaires willing to commit almost any amount to own the future of business.

      • rchaud 7 hours ago

        Except they behave less like shrewd investors and more like bandwagon jumpers looking to buy influence or get rich quick. Crypto, Twitter, ridesharing, office sharing and now AI. None of these have been the future of business.

        Business looks a lot like what it has throughout history. Building physical transport infrastructure, trade links, improving agricultural and manufacturing productivity and investing in military advancements. In the latter respect, countries like Turkey and Iran are decades ahead of Saudi in terms of building internal security capacity with drone tech for example.

        • Spooky23 7 hours ago

          Agreed - I don’t think they are particularly brilliant as a category. Hereditary kleptocracy has limits.

          But… I don’t think there’s an example in modern history of the this much capital moving around based on whim.

          The “bet on red” mentality has produced some odd leaders with absolute authority in their domain. One of the most influential figures on the US government claims to believe that he is saving society from the antichrist. Another thinks he’s the protagonist in a sci-fi novel.

          We have the madness of monarchy with modern weapons and power. Yikes.

    • Analemma_ 11 hours ago

      Exactly: when was the last time you used ChatGPT-3.5? Its value depreciated to zero after, what, two and a half years? (And the Nvidia chips used to train it have barely retained any value either)

      The financials here are so ugly: you have to light truckloads of money on fire forever just to jog in place.

      • falcor84 10 hours ago

        I would think it's more like a general codebase: even if after 2.5 years 95% of the lines were rewritten, and even if the whole thing was rewritten in a different language, there is no point in time at which its value diminished, as you arguably couldn't have built the new version without all the knowledge (and institutional knowledge) from the older version.

        • spwa4 10 hours ago

          I rejoined a previous employer of mine, someone everyone here knows ... and I found that half their networking equipment is still being maintained by code I wrote in 2012-2014. It has not been rewritten. Hell, I rewrote a few parts that badly needed it despite joining another part of the company.

      • tim333 8 hours ago

        OpenAI is now valued at $500bn though. I doubt the investors are too wrecked yet.

        It may be like looking at the early Google and saying they are spending loads on compute and haven't even figured how to monetize search, the investors are doomed.

      • CompoundEyes 6 hours ago

        I did, actually, a few days ago: gpt-3.5-fast is a great model for certain tasks, and cost-wise via the API. Lots of solutions being built on today's latest models are built for tomorrow's legacy model; if it works, just pin the version.

      • fooker 10 hours ago

        > And the Nvidia chips used to train it have barely retained any value either

        Oh, I'd love to get a cheap H100! Where can I find one? You'll find it costs almost as much used as it does new.

      • cj 10 hours ago

        > money on fire forever just to jog in place.

        Why?

        I don't see why these companies can't just stop training at some point. Unless you're saying the cost of inference is unsustainable?

        I can envision a future where ChatGPT stops getting new SOTA models, and all future models are built for enterprise or people willing to pay a lot of money for high ROI use cases.

        We don't need better models for the vast majority of chats taking place today E.g. kids using it for help with homework - are today's models really not good enough?

      • mattmanser 11 hours ago

        But is it a bit like a game of musical chairs?

        At some point the AI becomes good enough, and if you're not sitting in a chair at the time, you're not going to be the next Google.

        • potatolicious 10 hours ago

          Not necessarily? That assumes that the first "good enough" model is a defensible moat - i.e., the first ones to get there becomes the sole purveyors of the Good AI.

          In practice that hasn't borne out. You can download and run open weight models now that are spitting distance to state-of-the-art, and open weight models are at best a few months behind the proprietary stuff.

          And even within the realm of proprietary models no player can maintain a lead. Any advances are rapidly matched by the other players.

          More likely at some point the AI becomes "good enough"... and every single player will also get a "good enough" AI shortly thereafter. There doesn't seem like there's a scenario where any player can afford to stop setting cash on fire and start making money.

  • conartist6 7 hours ago

    It's not just that the investments won't pay off; it's that the global markets are likely to crash like they did in the subprime mortgage crisis.

    • vitaflo 7 hours ago

      This is much closer to the dotcom boom than the subprime stuff. The dotcom boom/bust affected tech more than anything else. It didn’t involve consumers like the housing crash did.

      • bobxmax 7 hours ago

        The dot com boom involved silly things like Pets.com IPOing pre-revenue. Claude code hit $500m in ARR in 3 months.

        The fact people don't see the difference between the two is unreal. Hacker news has gone full r* around this topic, you find better nuance even on Reddit than here.

      • digdugdirk 7 hours ago

        But it does involve a ton of commercial real estate investment, as well as a huge shakeup in the energy market. People may not lose their homes, but we'll all be paying for this one way or another.

    • mothballed 7 hours ago

      The Fed could still push the value of stocks quite a bit by destroying the USD, if they want, by pinning interest rates near 0 and forcing a rush out of cash into stocks and other asset classes.

      • mcny 7 hours ago

        The point still stands though. All these other companies can pivot to something else if AI fails, but what will OpenAI do?

  • JCM9 11 hours ago

    Businesses are different but the fundamentals of business and finance stay consistent. In every bubble that reality is unavoidable, no matter how much people say/wish “but this time is different.”

  • redwood 7 hours ago

    I'm reminded of the quote "If you owe the bank $100 that's your problem. If you owe the bank $100 million, that's the bank's problem." - J. Paul Getty

    Nvidia may well be at the mercy of them! Hence the recent circular dealing

  • 01100011 3 hours ago

    The one thing smaller companies might have is allocated power budgets from power companies. Part of the mad dash to build datacenters right now is just to claim the power so your competitors can't. I do think the established players hold an edge here, but I don't think OpenAI/Anthropic/etc. are without some bargaining power (hah).

  • bee_rider 6 hours ago

    The past/present company they remind me of the most is semiconductor fabs. Significant generation-to-generation R&D investment, significant hardware and infrastructure investment, quite winner-takes-all on the high end, obsoleted in a couple years at most.

    The main differences are these models are early in their development curve so the jumps are much bigger, and they are entirely digital so they get “shipped” much faster, and open weights seem to be possible. None of those factors seem to make it a more attractive business to be in.

  • EasyMark 4 hours ago

    In the end, revenue > costs or you have an issue. That "startup" money will eventually be gone, and you're back to MIMO (money in vs. money out); if money in isn't greater, you will go bankrupt.

  • LarsDu88 8 hours ago

    If you build the actual datacenter, less than half the cost is the compute itself. The other half is the datacenter infrastructure: buildings, power, and cooling.

    So in that sense it's not that much different from Meta and Google which also used server infrastructure that depreciated over time. The difference is that I believe Meta and Google made money hand over fist even in their earliest days.

    • Lalo-ATX 6 hours ago

      Last time i ran the numbers -

      Data center facilities are ~$10k per kW

      IT gear is like $20k-$50k per kW

      Data center gear is good for 15-30 years. IT is like 2-6ish.

      Would love to see updated numbers. Got any?
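
Annualizing those figures shows why the short-lived IT gear dominates the economics (a sketch using the midpoints of the ranges above; all inputs are the commenter's estimates, not authoritative numbers):

```python
# Amortize capex over useful life to get a rough annual cost per kW.
# All figures are midpoints of the commenter's estimates above.
facility_capex_per_kw = 10_000   # USD/kW: shell, power, cooling
it_capex_per_kw = 35_000         # USD/kW: midpoint of the $20k-$50k range
facility_life_years = 20         # midpoint of 15-30 years
it_life_years = 4                # midpoint of 2-6 years

facility_annual = facility_capex_per_kw / facility_life_years  # 500.0 USD/kW/yr
it_annual = it_capex_per_kw / it_life_years                    # 8750.0 USD/kW/yr
print(facility_annual, it_annual, it_annual / facility_annual)
```

On these assumptions the IT gear costs roughly 17x more per kW-year than the facility, even though its upfront cost is only about 3.5x higher.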

  • lossolo 10 hours ago

    The funniest thing about all this is that the biggest difference between the LLMs from Anthropic, Google, OpenAI, and Alibaba is not model architecture or training objectives, which are broadly similar, but the dataset. What people don't realize is how much of that data comes from massive undisclosed scrapes, synthetic data, and countless hours of expert feedback shaping the models. As methodologies converge, the performance gap between these systems is already narrowing and will continue to diminish over time.

  • yieldcrv 11 hours ago

    Just because they have ongoing costs after purchasing the hardware doesn't mean this is different from anything we've seen before. What are you trying to articulate, exactly? Either this is a simple business that can get its costs under control eventually, or it isn't.

simonw 11 hours ago

I think the most interesting numbers in this piece (ignoring the stock compensation part) are:

$4.3 billion in revenue - presumably from ChatGPT customers and API fees

$6.7 billion spent on R&D

$2 billion on sales and marketing - anyone got any idea what this is? I don't remember seeing many ads for ChatGPT but clearly I've not been paying attention in the right places.

Open question for me: where does the cost of running the servers used for inference go? Is that part of R&D, or does the R&D number only cover servers used to train new models (and presumably their engineering staff costs)?

  • bfirsh 11 hours ago

    Free usage usually goes in sales and marketing. It's effectively a cost of acquiring a customer. This also means it is considered an operating expense rather than a cost of goods sold and doesn't impact your gross margin.

    Compute in R&D will be only training and development. Compute for inference will go under COGS. COGS is not reported here but can probably be, um, inferred by filling in the gaps on the income statement.

    (Source: I run an inference company.)
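
As an illustration of that gap-filling, here is a hypothetical sketch that backs out implied cost of revenue from the rough numbers quoted in this thread ($4.3B revenue, $6.7B R&D, $2B sales and marketing, and the ~$0.5B loss that another comment says remains once R&D and S&M are excluded); none of these are audited figures:

```python
# Hypothetical income-statement gap-filling with the thread's rough numbers ($B).
revenue = 4.3
rnd = 6.7
sales_and_marketing = 2.0
loss_excluding_rnd_and_sm = 0.5  # per another comment in this thread

# revenue - (COGS + other operating costs) = -loss_excluding_rnd_and_sm
implied_cogs_and_other = revenue + loss_excluding_rnd_and_sm
total_operating_loss = loss_excluding_rnd_and_sm + rnd + sales_and_marketing
print(round(implied_cogs_and_other, 1), round(total_operating_loss, 1))  # 4.8 9.2
```

So on these thread-sourced inputs, cost of revenue plus other operating costs would be roughly $4.8B, and the all-in operating loss roughly $9.2B.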

    • singron 3 hours ago

      I think it makes the most sense this way, but I've seen it accounted for in other ways. E.g. if free users produce usage data that's valuable for R&D, then they could allocate a portion of the costs there.

      Also, if the costs are split, there usually has to be an estimation of how to allocate expenses. E.g. if you lease a datacenter that's used for training as well as paid and free inference, then you have to decide a percentage to put in COGS, S&M, and R&D, and there is room to juice the numbers a little. Public companies are usually much more particular about tracking this, but private companies might use a proxy like % of users that are paid.

      OpenAI has not been forthcoming about their financials, so I'd look at any ambiguity with skepticism. If it looked good, they would say it.

  • adamhartenz 11 hours ago

    Marketing != advertising. Although this budget probably does include some traditional advertising. It is most likely about building the brand and brand awareness, as well as partnerships etc. I would imagine the sales team is probably quite big, and host all kinds of events. But I would say a big chunk of this "sales and marketing" budget goes into lobbying and government relations. And they are winning big time on that front. So it is money well spent from their perspective (although not from ours). This is all just an educated guess from my experience with budgets from much smaller companies.

    • echelon 11 hours ago

      I agree - they're winning big and booking big revenue.

      If you discount R&D and "sales and marketing", they've got a net loss of "only" $500 million.

      They're trying to land grab as much surface area as they can. They're trying to magic themselves into a trillion dollar FAANG and kill their peers. At some point, you won't be able to train a model to compete with their core products, and they'll have a thousand times the distribution advantage.

      ChatGPT is already a new default "pane of glass" for normal people.

      Is this all really so unreasonable?

      I certainly want exposure to their stock.

      • runako 10 hours ago

        > If you discount R&D and "sales and marketing"

        If you discount sales & marketing, they will start losing enterprise deals (like the US government). The lack of a free tier will impact consumer/prosumer uptake (free usage usually comes out of the sales & marketing budget).

        If you discount R&D, there will be no point to the business in 12 months or so. Other foundation models will eclipse them and some open source models will likely reach parity.

        Both of these costs are likely to increase rather than decrease over time.

        > ChatGPT is already a new default "pane of glass" for normal people.

        OpenAI should certainly hope this is not true, because then the only way to scale the business is to get all those "normal" people to spend a lot more.

  • delaminator 10 hours ago

    We have ChatGPT advertising on bus stops here in the UK.

    Two people in a cafe having a meet-up, they are both happy, one is holding a phone and they are both looking at it.

    And it has a big ChatGPT logo in the top right corner of the advertisement: transparent background, just the black logo with ChatGPT written underneath.

    That's it. No text or anything telling you what the product is or does. Just it will make you happy during conversations with friends somehow.

  • rkharsan64 an hour ago

    I see multiple banner ads promoting ChatGPT on my way to work. (India)

  • diggan 11 hours ago

    > $2 billion on sales and marketing - anyone got any idea what this is?

    Not sure where/how I read it, but remember coming across articles stating OpenAI has some agreements with schools, universities and even the US government. The cost of making those happen would probably go into "sales & marketing".

    • JCM9 11 hours ago

      Most folks who are not engineers building the product are likely classified as "sales and marketing": "developer advocates," "solutions architects," and all that stuff included.

    • chermi 10 hours ago

      So probably just write-offs of tokens they give away?

    • infecto 11 hours ago

      This will include the people cost of sales and marketing teams.

  • gmerc 10 hours ago

    Stop R&D and the competition is at parity with 10x cheaper models in 3-6 months.

    Stop training and your code model generates tech debt after 3-6 months

    • chermi 9 hours ago

      It's pretty well accepted now that for pre-training LLMs the curve is S not an exponential, right? Maybe it's all in RL post-training now, but my understanding(?) is that it's not nearly as expensive as pre-training. I don't think 3-6 months is the time to 10X improvement anymore (however that's measured), it seems closer to a year and growing assuming the plateau is real. I'd love to know if there are solid estimates on "doubling times" these days.

      With the marginal gains diminishing, do we really think they (all of them) are going to continue spending that much more for each generation? Even the big guys with the money like Google can't justify increasing spending forever given this. The models are good enough for a lot of useful tasks for a lot of people. With all due respect to the amazing science and engineering, OpenAI (and probably the rest) have arrived at their performance with at least half of the credit going to brute-force compute, hence the cost. I don't think they'll continue that in the face of diminishing returns. Someone will ramp down and get much closer to making money, focusing on maximizing token cost efficiency to serve and utility to users with a fixed model(s). GPT-5 with its auto-routing between different performance models seems like a clear move in this direction. I bet their cost to serve the same performance as, say, Gemini 2.5 is much lower.

      Naively, my view is that there's some threshold raw performance that's good enough for 80% of users, and we're near it. There's always going to be demand for bleeding edge, but money is in mass market. So if you hit that threshold, you ramp down training costs and focus on tooling + ease of use and token generation efficiency to match 80% of use cases. Those 80% of users will be happy with slowly increasing performance past the threshold, like iphone updates. Except they probably won't charge that much more since the competition is still there. But anyway, now they're spending way less on R&D and training, and the cost to serve tokens @ the same performance continues to drop.

      All of this is to say, I don't think they're in that dreadful of a position. I can't even remember why I chose you to reply to, I think the "10x cheaper models in 3-6 months" caught me. I'm not saying they can drop R&D/training to 0. You wouldn't want to miss out on the efficiency of distillation, or whatever the latest innovations I don't know about are. Oh and also, I am confident that whatever the real number N is for NX cheaper in 3-6 months, a large fraction of that will come from hardware gains that are common to all of the labs.

      • necovek 2 hours ago

        Someone brought up an interesting point: to get the latest data (news, scientific breakthroughs...) into the model, you need to constantly retrain it.

      • Spooky23 7 hours ago

        Google has the best story imo. Gemini > Azure - it will accelerate GCP growth.

    • Spivak 5 hours ago

      Also R&D, for tax purposes, likely includes everyone at the company who touches code so there's probably a lot of operational cost being hidden in that number.

  • hedayet 11 hours ago

    > $2 billion on sales and marketing - anyone got any idea what this is?

    Enterprise sales are expensive. And selling to the US government is on a whole different level.

  • lemonlearnings 3 hours ago

    I have seen tonnes of ChatGPT ads on Reddit, usually featuring a generated image of a dog in a Japanese cartoon style.

    • necovek 2 hours ago

      The dog sitting in a house on fire proclaiming "this is fine" is an old meme, not an OpenAI generated image.

      Oh, not that dog? :)

  • lanthissa 10 hours ago

    You see content about OpenAI everywhere; they spent $2B on marketing. You're in the right places, you're just used to seeing things labeled as ads.

    Remember everyone freaking out about GPT-5 when it came out, only for it to be a bust once people got their hands on it? That's what paid media looks like in the new world.

  • abaymado 11 hours ago

    > $2 billion on sales and marketing - anyone got any idea what this is?

    I used to follow OpenAI on Instagram; all their posts were reposts from paid influencers making videos on "How to X with ChatGPT." Most videos were redundant, but I guess there are still billions of people the product has yet to reach.

    • gizajob 10 hours ago

      Seems like it’ll take billions more down the drain to serve them.

    • what 3 hours ago

      There’s a bunch of users here that are probably paid by them too.

  • epolanski 7 hours ago

    I've seen some OpenAI ads on Italian TV and they made no sense to me; they tried hard to be Apple-like, but realistically nobody knew what they were about.

    • joering2 7 hours ago

      Italian advertising is weird in general. A month ago, leaving Venice, we pulled over at a gas station and I started flipping through some magazine. At some point I saw an ad for what looked like old-fashioned shoes: the owner of the company holding his son, with the sign "from generation to generation". Only thing: the ~3-year-old boy is completely naked, wearing only shoes, with his little pee pee sticking out. It shocked me, and I was unsure whether it was just my American domestication or there was really something wrong with it. I took a picture and wanted to send it to my friends in the USA to show them what Italian advertising looks like, before breaking into a sweat realizing that if I were caught with that picture in the US, I could get in some deep trouble. I quickly deleted it, just in case. Crazy story.

      • necovek 2 hours ago

        Not crazy, it's just a cultural thing.

        The US (and maybe the whole Anglo-Saxon world) is a bit mired in worst-case-scenario thinking: no, having a photo in your messenger app of a friend's naked kid being funny at the beach or in the garden is not child pornography. The fact that there are extremely few people who might see it as sexual should not influence the overall population as much as it does.

        For me, I wouldn't blink an eye at such an ad, but due to my exposure to US culture, I do feel uneasy about having photos like that on my devices (to the point that a thought passes my mind even when it's my own kids mucking about).

        I resist it because I believe it's the wrong cultural standard to adhere to: nakedness is not sexual by default, especially with small kids before they develop any significant sexual characteristics.

      • epolanski 7 hours ago

        Nudity in general is not weird in Europe, let alone children's.

  • xmprt 10 hours ago

    Free users typically fall under sales and marketing. The idea is that if they had cut off the entire free tier, they would still have made the same revenue from paying customers by spending $X on inference, not counting the inference spend on free users.

  • eterm 11 hours ago

    > ? I don't remember seeing many ads for ChatGPT

    FWIW I got spammed non-stop with ChatGPT adverts on Reddit.

  • hu3 6 hours ago

    OpenAI keeps spamming me with ads on instagram and reddit.

    Pretty sure I'm not a cheap audience to target ads at, for multiple reasons.

  • actuallyalys 7 hours ago

    I’ve seen some on electronic street-level signs in Atlanta when I visited. So there is some genuine advertising.

  • Jallal 11 hours ago

    I'm pretty sure I saw some ChatGPT ads on Duolingo. Also, never forget that regular users do not use ad blockers. The tech community often doesn't realize how polluted the internet and mobile apps are.

  • patrickhogan1 6 hours ago

    Sales people out in the field selling to enterprises + free credits to get people hooked.

  • [removed] 11 hours ago
    [deleted]
  • wood_spirit 11 hours ago

    Speculating, but maybe they pay to be integrated as the default AI in various places, the same way Google has paid to be the default search engine on things like the iPhone?

  • zurfer 11 hours ago

    Inference etc should go in this bucket: "Operating losses reached US$7.8 billion"

    That also includes their office and their lawyers etc., so it's hard to estimate without more info.

  • infecto 11 hours ago

    Hard to know where it is in this breakdown, but I would expect them to have the proper breakdowns. We know the inference side is profitable, but not to what scale.

  • Our_Benefactors 11 hours ago

    > $2 billion on sales and marketing

    Probably an accounting trick to account for non-paying customers, or the week of "free" GPT-5 use in Cursor.

  • [removed] 11 hours ago
    [deleted]
t4TLLLSZ185x 14 minutes ago

People in this comment section focus on brand ads too much.

It’s the commercial intent where OpenAI can both make money and preserve trust.

I already don’t Google anymore. I just ask ChatGPT "give me an overview of the best Meshtastic devices to buy" and then eventually end with "give me links to where I can buy these in Europe".

OpenAI inserting ads into that last result, clearly marked as ads and with the UX kept clean, would not bother me at all.

And commercial queries are what, 40-50% of all Google revenue?