Comment by gradus_ad 2 days ago

54 replies

How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models? What hurt open source in the past was its inability to keep up with the quality and feature depth of closed source competitors, but models seem to be reaching a performance plateau; the top open weight models are generally indistinguishable from the top private models.

Infrastructure owners with access to the cheapest energy will be the long run winners in AI.

teleforce 2 days ago

>How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

According to Google (or someone at Google), no organization has a moat in AI/LLMs [1]. But that doesn't mean it can't be hugely profitable to provide it as SaaS, or Model as a Service (MaaS), even if you don't own the model. The extreme example is Amazon offering a MongoDB-compatible API and services. Sure, they have their own proprietary DynamoDB, but for most people a scaled-up MongoDB is more than sufficient. Regardless of the brand or type of database being used, you pay tons of money to Amazon anyway to run at scale.

Not everyone has the resources to host a SOTA AI model. On top of the tangible data-intensive resources, there are other intangible considerations. Just think how many companies or people host their own email server now, even though the resources needed are far less than for hosting an AI/LLM model.

Google came up with the game-changing transformer in its own backyard, and OpenAI temporarily stole the show with the well-executed RLHF-based system of ChatGPT. Now paid users are swinging back to Google with its arguably superior offering. Google now even puts an AI summary at the top of its search results, free to all, above its paying advertisement clients.

[1] Google, “We have no moat, and neither does OpenAI”:

https://news.ycombinator.com/item?id=35813322

  • Tepix a day ago

    Hosting a SOTA AI model is something that can be separated well from the rest of your cloud deployments. So you can pretty much choose between lots of vendors and that means margins will probably not be that great.

  • istjohn a day ago

    That quote from Google is 2.5 years old.

    • KeplerBoy a day ago

      I also cringed a bit about seeing a statement that old being cited, but all the events since then only proved google right, I'd say.

      Improvements seem incremental and smaller. For all I care, I could still happily use sonnet 3.5.

    • mistrial9 a day ago

      undergrads at UC Berkeley are wearing vLLM t-shirts

bashtoni 2 days ago

This is exactly why the CEO of Anthropic has been talking up "risks" from AI models and asking for legislation to regulate the industry.

alexandre_m a day ago

> What hurt open source in the past was its inability to keep up with the quality and feature depth of closed source competitors

Quality was rarely the reason open source lagged in certain domains. Most of the time, open source solutions were technically superior. What actually hurt open source were structural forces, distribution advantages, and enterprise biases.

One could make an argument that open source solutions often lacked good UX historically, although that has changed drastically over the past 20 years.

  • zarzavat a day ago

    For most professional software, the open source options are toys. Is there anything like an open source DAW, for example? It's not because music producers are biased against open source, it's because the economics of open source are shitty unless you can figure out how to get a company to fund development.

    • throwup238 a day ago

      > Is there anything like an open source DAW, for example?

      Yes, Ardour. It’s no more a toy than KiCad or Blender.

dotancohen 2 days ago

People and companies trust OpenAI and Anthropic, rightly or wrongly, with hosting the models and keeping their company data secure. Don't underestimate the value of a scapegoat to point a finger at when things go wrong.

  • reed1234 2 days ago

    But they also trust cloud platforms like GCP to host models and store company data.

    Why would a company use an expensive proprietary model on Vertex AI, for example, when they could use an open-source one on Vertex AI that is just as reliable for a fraction of the cost?

    I think you are getting at the idea of branding, but branding is different from security or reliability.

    • verdverm 2 days ago

      Looking at and evaluating kimi-2/deepseek vs the gemini family (both through Vertex AI), it's not clear open source is always cheaper for the same quality.

      And then we have to look at responsiveness: if the two models are qualitatively in the same ballpark, which one runs faster?

  • ehnto a day ago

    > Don't underestimate the value of a scapegoat to point a finger at when things go wrong.

    Which is an interesting point in favour of the human employee, as you can only consolidate scapegoats so far up the chain before saying "it was the AI's fault" just looks like negligence.

jonplackett 2 days ago

Either...

Better (UX / ease of use)

Lock in (walled garden type thing)

Trust (If an AI is gonna have the level of insight into your personal data and control over your life, a lot of people will prefer to use a household name)

  • niek_pas 2 days ago

    > Trust (If an AI is gonna have the level of insight into your personal data and control over your life, a lot of people will prefer to use a household name)

    Not Google, and not Amazon. Microsoft is a maybe.

    • reed1234 2 days ago

      People trust Google with their data in Search, Gmail, Docs, and Android. That is quite a lot of personal info, and trust, already.

      All they have to do is completely switch the Google homepage to Gemini one day.

    • polyomino 2 days ago

      The success of Facebook basically proves that public brand perception does not matter at all

      • acephal 2 days ago

        Facebook itself still has a big problem with its lack of a youth audience, though. Zuck captured the boomers and older Gen X, who are, however, the biggest demographics of living people.

        • eru a day ago

          > Zuck captured the boomers and older Gen X, who are, however, the biggest demographics of living people.

          In the developed world. I'm not sure about globally.

  • poszlem 2 days ago

    Or lobbying for regulations. You know, the "only American models are safe" kind of regulation.

WhyOhWhyQ a day ago

I don't see what OpenAI's niche is supposed to be, other than role playing? Google seems like they'll be the AI utility company, and Anthropic seems like the go-to for the AI developer platform of the future.

  • linkage a day ago

    Anthropic has RLed the shit out of their models to the extent that they give sub-par answers to general purpose questions. Google has great models but is institutionally incapable of building a cohesive product experience. They are literally shipping their org chart with Gemini (mediocre product), AI Overview (trash), AI Mode (outstanding but limited modality), Gemini for Google Workspace (steaming pile), Gemini on Android (meh), etc.

    ChatGPT feels better to use, has the best implementation of memory, and is the best at learning your preferences for the style and detail of answers.

adam_patarino a day ago

It’s convenience - it’s far easier to call an API than to deploy a model to a VPC, configure networking, etc.

Given how often new models come out, it’s also easier to update an API call than to constantly redeploy model upgrades.

But in the long run, I hope open source wins out.

delichon 2 days ago

> Infrastructure owners with access to the cheapest energy will be the long run winners in AI.

For a sufficiently low cost to orbit that may well be found in space, giving Musk a rather large lead. By his posts he's currently obsessed with building AI satellite factories on the moon, the better to climb the Kardashev scale.

  • kridsdale1 2 days ago

    The performance bottleneck for space-based computers is heat dissipation.

    Earth-based computers benefit from the existence of an atmosphere to pull cold air in from and dump hot air into.

    A space data center would need to rely entirely on city-sized heat-sink fins.

    • delichon 2 days ago

      For radiative cooling using aluminum, per 1000 watts at 300 kelvin: ~2.4 m^2 of area, ~4.8 liters of volume, ~13 kg of mass. So a Starship (150k kg to LEO, reusable) could carry on the order of ten megawatts of radiators per launch.

      And aluminum is abundant in the lunar crust.
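
      These figures can be sanity-checked with the Stefan-Boltzmann law. A rough sketch: the 2 mm sheet thickness and emissivity of 0.9 are my assumptions, chosen to reproduce the quoted area/volume/mass, not values from the comment itself:

```python
# Sanity check of the radiator figures via the Stefan-Boltzmann law.
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9       # assumed, roughly typical for anodized aluminum
T_RADIATOR = 300.0     # radiator temperature, K
THICKNESS = 0.002      # sheet thickness in m (2 mm, assumed)
AL_DENSITY = 2700.0    # density of aluminum, kg / m^3

flux = EMISSIVITY * SIGMA * T_RADIATOR**4   # W/m^2 radiated from one side
area = 1000.0 / flux                        # m^2 needed to shed 1 kW
volume_liters = area * THICKNESS * 1000.0   # sheet volume in liters
mass = area * THICKNESS * AL_DENSITY        # kg of aluminum per kW rejected

payload_kg = 150_000                        # claimed Starship payload to LEO
megawatts = payload_kg / mass / 1000.0      # radiator capacity per launch

print(f"{area:.1f} m^2, {volume_liters:.1f} L, {mass:.1f} kg per kW")
print(f"~{megawatts:.1f} MW of radiators per Starship launch")
```

      This reproduces the ~2.4 m^2, ~4.8 L, and ~13 kg per kW figures; note that at 13 kg/kW a full 150 t payload works out to roughly ten megawatts of radiators per launch, not one.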

      • ehnto a day ago

        We are jumping pretty far ahead for a planet that can barely put two humans up there, but it's a great deal of my sci-fi dreams in one technology tree, so I'll happily watch them try.

        • eru a day ago

          The grandfather comment is perhaps mixing up two things:

          If launch costs are cheap enough, you can bring aluminum up from earth.

          But once your in-space economy is developed enough, you might want to tap the moon or asteroids for resources.

    • ehnto a day ago

      And the presence of humans. Like with a lot of robotics, the devil is probably in the details. Very difficult to debug your robot factory while it's in orbit.

      That was fun to write but also I am generally on board with humanity pushing robotics further into space.

      I don't think an orbital AI datacentre makes much sense, as your chips will be obsolete so quickly that the capex of getting it all up there would be better spent buying the next generation of chips to deploy on earth.

      • eru a day ago

        Well, _if_ they can get launch costs down to 100 dollar / kg or so, the economics might make sense.

        Radiative cooling is really annoying, but it's also an engineering problem with a straightforward solution, if mass-in-orbit becomes cheap enough.

        The main reason I see for having datacentres in orbit would be if power in orbit becomes a lot cheaper than power on earth. Cheap enough to make up for the more expensive cooling and cheap enough to make up for the launch costs.

        Otherwise, manufacturing in orbit might make sense for certain products. I heard there are some optical fibres with superior properties that you can only make in near zero g.

        I don't see a sane way to beam power from space to earth directly.

tsunamifury 2 days ago

Pure models clearly aren’t the monetization strategy; using them on existing monetized surfaces is the core value.

Google would love a cheap, high-quality model on its surfaces. That just helps Google.

  • gradus_ad 2 days ago

    Hmmm, but external models can easily operate on any "surface". For instance, Claude Code simply reads and edits files and runs in a terminal. Photo editing apps just need a photo supplied to them. I don't think there's much juice to squeeze out of deeply integrated AI, as AI by its nature exists above the application layer, in the same way that we as users exist above the application layer.

    • tsunamifury a day ago

      Gemini is the most heavily used model on the planet per request.

      All the facts here say otherwise.

empath75 21 hours ago

> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

So, a couple of things. There are going to be a handful of companies in the world with the infrastructure footprint and engineering org capable of running LLMs efficiently and at scale. You are never going to be able to run open models on your own infra in a way that is cost-competitive with using their API.

Competition _between_ the largest AI companies _will_ drive API prices to essentially zero profit margin, but none of those companies will care, because they aren't primarily going to make money by selling the LLM API -- your usage of their API just subsidizes their infrastructure costs, and they'll use that infra to build products like ChatGPT and Claude. Those products are their moat and will be where 90% of their profit comes from.

I am not sure why everyone is so obsessed with "moats" anyway. Why does Gmail have so many users? Anybody can build an email app. For the same reason that people stick with Gmail, people are going to stick with ChatGPT. It's being integrated into every aspect of their lives. The switching costs for people are going to be immense.

iLoveOncall 2 days ago

> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

They won't. Actually, even if open models aren't competitive, they still won't. Hasn't this been clear for a while already?

There's no moat in models. Investment in pure models has only been to chase AGI; all other investment (the majority, from Google, Amazon, etc.) has been in products using LLMs, not the models themselves.

This is not like the gold rush where the ones who made good money were the ones selling shovels; it's another kind of gold rush, where you make money selling shovels but the gold itself is actually worthless.

pembrook 2 days ago

I call this the "Karl Marx Fallacy": it assumes a static basket of human wants and needs over time, leading to the conclusion that competition will inevitably erode all profit and lead to market collapse.

It ignores the reality that humans have memetic emotions, habits, affinities, differentiated use cases, and social-signaling needs, plus the desire to always do more... constantly adding more layers of abstraction in fractal ways that evolve into bigger or more niche things.

5 years ago, humans didn't know that a desire for gaming GPUs would turn into AI. Now it's the fastest-growing market.

Ask yourself: how did Google Search continue to make money after Bing's search results started benchmarking just as well?

Or: how did Apple continue to make money after Android opened up the market to commoditize mobile computing?

Etc. Etc.

  • chinesedessert a day ago

    this name is illogical as karl marx did not commit this fallacy

    • pembrook a day ago

      Yes, he did, and it was fundamental to his entire economic philosophy: https://en.wikipedia.org/wiki/Tendency_of_the_rate_of_profit...

      • deadfoxygrandpa a day ago

        no, he didn't, and your link has nothing to do with the fallacy you were talking about

      • Balinares a day ago

        I'm not seeing anywhere in that page anything about an assumed static basket of human wants and needs. Maybe I missed it -- can you point out where you saw that?

        Interesting, though, that per the very same article someone like Adam Smith concurred empirically with Marx's observation on the titular tendency of rates of profit to fall. This suggests to me it likely had some meat to it.

blibble a day ago

> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

hopefully they won't

and their titanic off-balance-sheet investments will bankrupt them, as they won't be able to produce any revenue