Comment by spwa4 20 hours ago

We are in a pretty amazing situation. If you're willing to go down 10% in benchmark scores, you can easily cut 25% off your costs. And now Deepseek 3.2 is another shot across the bow.

But if SOTA intelligence basically becomes a price war, won't that mean that Google (and OpenAI and Microsoft and every other big-model provider) loses big? Especially Google, since the margin that even Google Cloud (famously much lower-margin than Google's other businesses) needs to survive has got to be sizeable.

golfer 18 hours ago

Google trains its own AI on TPUs, which are designed in-house. Google doesn't have to pay retail rates for Nvidia GPUs like the other hyperscalers in the AI rat race, and therefore trains its AI more cheaply than everyone else. I think everyone else "loses big" other than Google.

  • tvshtr 13 hours ago

    Well, those who are aware of this definitely know where it is leading. But most will surely act shocked.

  • spwa4 3 hours ago

    But ... I don't understand why this is supposedly such a big deal. Look into it, run the numbers, and a very different picture emerges: nVidia reportedly makes about a 70% gross margin on its sales (that is, margin over COGS; in other words, nVidia still pays about $1,400 for the chips and memory to produce a $4,500 RTX 5090 card, and that cost is rising fast).

    When you include research for current and future cards, that margin drops to 55-60%.

    When you include everything on their cash-flow statement, it drops to about 50%.

    And this is before what Michael Burry pointed out: you really should also subtract their stock dilution from stock-based compensation, roughly 0.2% of nVidia's ~4.6 trillion dollar market cap per year. Burry's point, of course, is that this leaves slightly negative shareholders' equity, i.e. it brings the margin to just under zero, which is mathematically true. But for this argument let's very generously say it eats only about another 10 points out of that margin, as opposed to the 50 points it mathematically eats.
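    A quick back-of-envelope sketch of that chain in Python (variable names are just my shorthand; every input is one of the rough estimates above, not an audited figure):

      # Rough check of the margin chain above; every input is an estimate
      # quoted in this comment, not an audited figure.
      card_price = 4500                        # RTX 5090 retail price, USD
      gross_margin = 0.70                      # reported gross margin over COGS
      cogs = card_price * (1 - gross_margin)
      print(f"implied COGS per card: ${cogs:,.0f}")        # ~$1,350 -- the ~$1,400 above

      # Margin after wider cost baskets, per the figures above:
      #   55-60% once R&D for current and future cards is included,
      #   ~50% once everything on the cash-flow statement is included.
      margin_incl_cash_flow = 0.50

      market_cap = 4.6e12                      # nVidia market cap, USD
      dilution_rate = 0.002                    # ~0.2% per year of stock-based comp dilution
      print(f"annual dilution: ${market_cap * dilution_rate / 1e9:,.0f}B")  # ~$9B

      # "Very generously" treat that dilution as eating ~10 points of margin
      # (Burry would say it eats the whole ~50):
      effective_margin = margin_incl_cash_flow - 0.10
      print(f"effective nVidia margin: ~{effective_margin:.0%}")            # ~40%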

    Google and Amazon will have to be less efficient at this than nVidia, because they're making up ground. Let's very generously say that costs another 10%, maybe 20%.

    So really, making their own chips saves Google at best 30% to 40% on the price, generously. And let's again ignore Google's own claim that the TPUs are 30% to 50% less efficient than nVidia chips, which for large training runs translates directly into dollars.
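    To make the savings math explicit, here's the same thing stacked up as a range (a minimal sketch that just combines the generous assumptions above; the endpoints move depending on whether you start from the ~40% or ~50% margin figure):

      # Savings from building your own chips, under the assumptions above:
      # you pocket nVidia's effective margin, minus your own catch-up inefficiency.
      effective_nvidia_margin = (0.40, 0.50)   # post-SBC and pre-SBC figures from above
      catch_up_penalty = (0.10, 0.20)          # Google/Amazon being less efficient than nVidia

      best_case = max(effective_nvidia_margin) - min(catch_up_penalty)    # 0.40
      worst_case = min(effective_nvidia_margin) - max(catch_up_penalty)   # 0.20
      print(f"savings on chip cost: roughly {worst_case:.0%} to {best_case:.0%}")
      # i.e. "at best 30-40%" is the optimistic end of this range, and that's before
      # any per-chip efficiency gap on large training runs is counted.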

    So for Google, TPUs are just about revenue neutral. They probably let Google have more chips, more compute, than it would otherwise have, but they don't save money over buying nVidia chips. Frankly, this conclusion sounds "very Google" to me.

    It's exactly the sort of thing I'd expect Google to do: a VERY impressive technical accomplishment ... but one that can be criticized for being beside the point. It doesn't actually matter. As an engineer I applaud it, and please keep doing it, but it's not building a moat, not building revenue or profit, so the finance guy in me is screaming "WHY????????"

    At best, for Google, TPUs mean certainty of supply relative to nVidia (whereas supplier contracts could build certainty of supply down the chain).