Comment by hyperbovine a day ago
I'm willing to bet almost nobody you know calls the CUDA API directly. What AMD needs to focus on is getting the ROCm backend going for XLA and PyTorch. That would unlock a big slice of the market right there.
They should also be dropping free AMD GPUs off helicopters, as Nvidia did a decade or so ago, in order to build up an academic userbase. Academia is getting totally squeezed by industry when it comes to AI compute; we're mostly running on hardware that's two or three generations out of date. If AMD came out with a well-supported GPU that cost half what an A100 sells for, voila: you'd have cohort after cohort of grad students training models on AMD and then taking that know-how into industry.
Indeed. The user-facing software stack components - PyTorch and JAX/XLA - are owned by Meta and Google and are open sourced. Further, the leading open-source models (Llama, DeepSeek) are largely hardware-agnostic. There is really no user or ecosystem lock-in. Also, clouds are highly incentivized to have multiple hardware alternatives.