Comment by fooker 17 hours ago

> Effectively every single H100 in existence now will be e-waste in 5 years or less.

This remains to be seen. H100 is 3 years old now, and is still the workhorse of all the major AI shops. When there's something that is obviously better for training, these are still going to be used for inference.

If what you say were true, you could find an A100 for cheap or free right now. But check out the prices.

fxtentacle 16 hours ago

Yeah, I can rent an A100 server for roughly the same price as what the electricity would cost me.

  • fennecbutt 5 hours ago

    Because they buy the electricity in bulk, those two prices are not the same thing.

  • fooker 12 hours ago

    That is true for almost any cloud hardware.

  • typpilol 14 hours ago

    Where?

    • Cheer2171 12 hours ago

      ~$1.25-1.75/hr at Runpod or vast.ai for an A100

      Edit: https://getdeploying.com/reference/cloud-gpu/nvidia-a100

      • diziet 2 hours ago

        The A100 SXM4 has a TDP of 400 watts; call it roughly 800 watts with cooling and other overhead.

        Industrial bulk pricing is about 8-9 cents per kWh, so 0.8 kWh comes to roughly 7 cents per hour of electricity. We're over an order of magnitude off here.

        At $20k per card all-in (MSRP + datacenter costs) for the 80GB version, on a 4-year payoff schedule the card alone costs 57 cents per hour (20,000 / (24 × 365 × 4)), assuming 100% utilization.
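
The arithmetic in the comment above, sketched in Python. The dollar figures are the thread's own rough estimates (TDP, overhead, industrial power rate, card price), not measured prices:

```python
# Rough A100 rental-vs-cost arithmetic, using the thread's assumed figures.
TOTAL_WATTS = 800        # ~400 W TDP, doubled for cooling and other overhead
KWH_PRICE = 0.085        # ~8-9 cents/kWh industrial bulk rate (midpoint)

# Electricity: 0.8 kWh per hour at the bulk rate.
electricity_per_hour = TOTAL_WATTS / 1000 * KWH_PRICE
print(f"electricity: ${electricity_per_hour:.3f}/hr")

# Capital: $20k all-in card price amortized over 4 years at 100% utilization.
CARD_COST = 20_000
PAYOFF_HOURS = 24 * 365 * 4
capital_per_hour = CARD_COST / PAYOFF_HOURS
print(f"capital: ${capital_per_hour:.2f}/hr")

# Quoted rentals were $1.25-1.75/hr; compare the midpoint to electricity alone.
rental_per_hour = 1.50
print(f"rental is ~{rental_per_hour / electricity_per_hour:.0f}x electricity cost")
```

This reproduces the comment's numbers: about 7 cents/hour of electricity, 57 cents/hour of amortized capital, and a rental price roughly 20x the electricity cost alone.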