YetAnotherNick 4 days ago

Using TPUs has the same opportunity cost as using GPUs. Just because they built something doesn't mean it's cheaper. If it were, they could rent TPUs out more cheaply to shift demand and save the billions of dollars they're paying Nvidia.

A big segment of the market just uses GPUs/TPUs to train LLMs, so they don't exactly need flexibility as long as some tool is well supported.

querez 3 days ago

I assume TPU TCO is significantly lower than GPU TCO. At the same time, I also assume that market demand for GPUs is higher than for TPUs (external tooling is just more suited to GPUs -- e.g. I'm not sure what the PyTorch-on-TPU story is these days, but I'd be astounded if it's on par with PyTorch's GPU support). So moving all your internal teams to TPUs means that all the GPUs can be allocated to GCP.

  • YetAnotherNick 3 days ago

    That just doesn't make sense. If they make significantly more money renting out TPUs, why not rent them cheaper to shift customers over (and save the billions they're giving to Nvidia)? Right now TPUs aren't significantly cheaper for external customers.

    Again, I'm talking about LLM training/inference, which I'd guess is more than half of the workload currently, and for which the switching cost is close to zero.