enopod_ 4 hours ago

That's exactly the thing. It's only about bookkeeping.

The big AI corps keep pushing GPU depreciation further into the future, no matter how long the hardware is actually useful. Some of them are now at 6 years. But GPUs are advancing fast, and new hardware brings more flops per watt, so there's a strong incentive to switch to the latest chips. They also run 24/7 at 100% capacity, so after only 1.5 years a fair share of the chips is already toast. How much hardware do they have on their books that's actually no longer useful? No one knows!

Slower depreciation means more profit right now (for the companies that actually make a profit, like MS or Meta), but it's just kicking the can down the road. Eventually all these investments have to come off the books, and that's when it will eat into their profits. In 2024, the big AI corps invested about $1 trillion in AI hardware; next year is expected to be $2 trillion. The interest payments alone are crazy. And all of this comes on top of the fact that none of these companies actually makes any profit at all with AI (except Nvidia, of course). There's just no way this will pan out.
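A back-of-envelope sketch of the bookkeeping effect, with all figures invented for illustration (nothing here is from any company's actual filings):

```python
# Back-of-envelope: straight-line depreciation of a GPU fleet.
# All numbers are invented for illustration.

capex = 100_000_000_000  # $100B of GPUs bought in year 0 (assumed)

def annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Straight-line depreciation: equal expense in each year of useful life."""
    return capex / useful_life_years

for life in (3, 6):
    expense = annual_depreciation(capex, life)
    print(f"{life}-year schedule: ${expense / 1e9:.1f}B expense per year")

# 3-year schedule: $33.3B expense per year
# 6-year schedule: $16.7B expense per year
# Stretching the schedule from 3 to 6 years adds ~$16.7B/year to reported
# profit today, but the remaining book value still has to be written off
# eventually, which is especially painful if the chips are toast after 1.5 years.
```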

  • avisser 9 minutes ago

    > Also, they run 24/7 at 100% capacity, so after only 1.5 years

    How does OpenAI keep this load? I would expect the load at 2pm Eastern to be WAY bigger than the load after California goes to bed.

  • gizmo 2 hours ago

    Flops per watt is relevant for a new data center build-out where you're bottlenecked on electricity, but I'm not sure it matters so much for existing data centers. Electricity is such a small percentage of total cost of ownership. The marginal cost of running a 5-year-old GPU for 2 more years is small. The husk of a data center is cheap; it's the cooling, power delivery equipment, networking, GPUs etc. that cost money, and when you retrofit a data center for the latest and greatest GPUs you have to throw away a lot of good equipment. It makes more sense to build new data centers as long as inference demand doesn't level off.
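    A rough sketch of that trade-off, with all costs invented for illustration: once the old GPU's capital cost is sunk, only electricity and overhead compete against the full sticker price of a replacement.

    ```python
    # Illustrative only: is it worth keeping a paid-off 5-year-old GPU running?
    # Every number here is an assumption; real TCO varies enormously by site.

    hours = 24 * 365 * 2        # two more years of 24/7 operation
    old_gpu_kw = 0.7            # assumed draw of the old card, ~700 W
    price_per_kwh = 0.08        # assumed industrial electricity rate, $/kWh
    overhead = 1.5              # assumed cooling/power-delivery overhead factor

    marginal_cost = old_gpu_kw * hours * price_per_kwh * overhead
    print(f"2 more years of power for the old card: ~${marginal_cost:,.0f}")  # ~$1,472

    new_gpu_price = 30_000      # assumed price of a replacement card
    print(f"Replacement card capex: ${new_gpu_price:,}")

    # Even if the new card were 3x more efficient, the energy it saves over
    # 2 years is tiny next to its purchase price. Flops/watt dominates only
    # when electricity (not capex) is the binding constraint, i.e. new builds.
    ```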

duped 9 hours ago

How different is this from rental car companies changing over their fleets? I don't know, this is a genuine question. The cars cost 3-4x as much and last about 2x as long, as far as I know, and the secondary market is still alive.

  • logifail 6 hours ago

    > How different is this from rental car companies changing over their fleets?

    New generations of GPUs leapfrog in efficiency (performance per watt) and vehicles don't? Cars don't get exponentially better every 2–3 years, meaning the second-hand market is alive and well. Some of us are quite happy driving older cars (two parked outside our home right now, both well over 100,000km driven).

    If you have a datacentre with older hardware, and your competitor has the latest hardware, you face the same physical space constraints, same cooling and power bills as they do? Except they are "doing more" than you are...

    Maybe we could call it "revenue per watt"?

    • wongarsu 4 hours ago

      The traditional framing would be cost per flop. At some point your total cost per flop over the next 5 years will be lower if you throw out the old hardware and replace it with newer, more efficient models. With traditional servers that's typically after 3-5 years; with GPUs, 2-3 years sounds about right (toy comparison below).

      The major reason companies now keep their old GPUs around much longer is supply constraints.
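
      A toy version of that keep-vs-replace comparison (all figures invented):

      ```python
      # Toy keep-vs-replace comparison on cost per unit of compute.
      # Every number is an assumption chosen only to show the structure.

      YEARS = 5
      HOURS = YEARS * 365 * 24

      def cost_per_unit(capex, power_kw, relative_speed, kwh_price=0.08):
          """Total cash outlay over 5 years divided by compute delivered."""
          energy = power_kw * HOURS * kwh_price
          delivered = relative_speed * HOURS  # speed x time = work done
          return (capex + energy) / delivered

      # Option A: keep the old card (its capex is already sunk).
      keep = cost_per_unit(capex=0, power_kw=0.7, relative_speed=1.0)

      # Option B: buy a new card, assumed 3x faster at similar power.
      replace = cost_per_unit(capex=30_000, power_kw=0.7, relative_speed=3.0)

      print(f"keep:    ${keep:.3f} per unit of compute")    # ~$0.056
      print(f"replace: ${replace:.3f} per unit of compute")  # ~$0.247

      # With these numbers keeping wins; raise the efficiency gap or the power
      # price and replacing eventually wins. That crossover is what sets the
      # 2-3 year replacement cadence above.
      ```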

    • bbarnett 4 hours ago

      The used market is going to be absolutely flooded with millions of old cards. I imagine shipping will be their biggest cost. The supply side will be insane.

      Think a ratio of 100 cards to 1 buyer. Profit for eBay sellers will be in "handling", i.e. inflated shipping-and-handling costs.

      • 3form 3 hours ago

        I assume NVIDIA and co. already protect themselves in some way, either because these cards aren't very useful after resale, or by requiring them to go to the grinder at end of life.

        • bbarnett 3 hours ago

          Cards don't "expire". There are alternatives to selling cards, but if NVIDIA doesn't sell the cards, there's no transfer of ownership, and it's effectively entering some form of leasing model.

          If NVIDIA is leasing, then you can't use those cards as collateral, and you can't write off depreciation either. Part of what we're discussing is that terms of credit are being extended too generously, with depreciation in the mix.

          They could require some form of contractual arrangement, perhaps volume discounts in exchange for agreeing to destroy the cards at a fixed time. That's very weird though, and I've never heard of such a thing for datacenter gear.

          They may protect themselves on the driver side, but someone could still write open-source drivers.

  • afavour 9 hours ago

    Rental car companies aren’t offering rentals at deep discount to try to kickstart a market.

    It would be much less of an issue if these companies were profitable and could cover the cost of renewing their hardware, the way car rental companies can.

  • cjonas 9 hours ago

    I think it's a bit different because a rental car generates direct revenue that covers its cost. These GPU data centers are being used to train models (which themselves quickly become obsolete) and provide inference at a loss. Nothing in the current chain is profitable except selling the GPUs.

    • sho 7 hours ago

      > and provide inference at a loss

      You say this like it's some sort of established fact. My understanding is the exact opposite: that inference is plenty profitable - the reason the companies are perpetually in the red is that they're always heavily investing in the next, larger generation.

      I'm not Anthropic's CFO so I can't really prove who's right one way or the other, but I will note that your version relies on everyone involved being really, really stupid.

      • darkwater 6 hours ago

        The current generation of today was the next generation of yesterday. So unless the services sold on inference can cover the cost of inference plus training AND make money, they are still operating at a loss.
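
        A toy unit-economics sketch of that argument, with all numbers invented:

        ```python
        # Inference-only margin vs fully-loaded margin. Invented figures;
        # the structure of the calculation is the point, not the values.

        inference_revenue = 10e9   # $/yr from selling inference (assumed)
        inference_cost = 4e9       # $/yr cost of serving it (assumed)

        training_capex = 30e9      # assumed cost of training the current model
        model_lifetime = 2         # assumed years before the model is obsolete
        training_per_year = training_capex / model_lifetime

        gross = inference_revenue - inference_cost
        fully_loaded = gross - training_per_year

        print(f"Inference-only margin:       ${gross / 1e9:+.1f}B/yr")         # +6.0
        print(f"After training amortization: ${fully_loaded / 1e9:+.1f}B/yr")  # -9.0

        # "Inference is profitable" only holds if you ignore the training run
        # that produced the model being served.
        ```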

      • elktown 6 hours ago

        “like it's some sort of established fact” -> “My understanding”?! A.k.a. pure speculation. Some of you AI fans really need to read your posts out loud before posting them.

      • rvba 4 hours ago

        Or just "everyone" being greedy

  • chii 9 hours ago

    > the secondary market is still alive.

    This is the crux: if a newer model comes out with better efficiency, will these data center cards have a secondary market to sell into?

    It could be that second-hand AI hardware going into consumers' hands is how they offload it without huge losses.

    • vesrah 8 hours ago

      The GPUs going into data centers aren't the kind that can just be reused by dropping them into a consumer PC to play some video games; most don't even have video output ports, and they put out FPS comparable to cheap integrated GPUs.

      • geerlingguy 7 hours ago

        And the big ones don't even have typical PCIe sockets, they are useless outside of behemoth rackmount servers requiring massive power and cooling capacity that even well-equipped homelabs would have trouble providing!

    • physicsguy 8 hours ago

      Data centre cards don't have fans and don't have video out these days.

      • chii 8 hours ago

        I don't mean the consumer market for video cards - I mean consumers buying AI chips to run models themselves, locally.

        If I could buy a $10k AI card for less than $5,000, I probably would, if I could use it to run an open model myself.
