Comment by aseipp
There already is ROCm support for PyTorch. Then there's stuff like this: https://semianalysis.com/2024/12/22/mi300x-vs-h100-vs-h200-b...
They have improved since that article, by a decent amount from my understanding. But by now it isn't enough to have "a backend". The historical efforts have soured that narrative so badly that just shipping a pytorch-rocm pypi package won't cut it; some of that flak is unfair, though not entirely unfounded. Frankly, they need to deliver better software, across all their offerings, for multiple successive generations before the bad optics around their software stack start to fade. Meanwhile, their competitors have already moved to their next-generation architecture since that article was written.
You are correct that people don't really invoke the CUDA APIs directly much, but that's partly because those APIs actually work and deliver good performance, so things can be built on top of them.
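To illustrate that last point: most users write against a framework rather than CUDA itself. PyTorch's ROCm builds map HIP onto the torch.cuda namespace, so the same device-agnostic code runs on either vendor's hardware. A minimal sketch (exact behavior depends on your build):

    import torch

    # On ROCm builds, torch.cuda is backed by HIP, so "cuda" here
    # selects an AMD GPU; on CUDA builds it selects an NVIDIA GPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # dispatched to cuBLAS on NVIDIA, rocBLAS/hipBLAS on AMD

    # torch.version.hip is set on ROCm builds, torch.version.cuda on CUDA builds
    print(torch.version.hip or torch.version.cuda, y.device)

But that abstraction only pays off if the backend underneath it is fast and reliable.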