Comment by fancyfredbot 3 days ago
There is more than one way to answer this.
They have made HIP, an alternative to the CUDA language, which can do most of the things CUDA can.
You could say that they haven't released supporting libraries like cuDNN, but they are making progress on this, with AiTer for example.
You could say that they have fragmented their efforts across too many different paradigms, but I don't think this is it, because Nvidia also supports a lot of different programming models.
I think the real reason is that they have not prioritised ROCm support across all of their products. There are too many different architectures with varying levels of support, and this isn't just historical: there is no ROCm support for their latest AI Max 395 APU. There is no nice cross-architecture ISA like PTX. The drivers are buggy. It's all a pain to use, and for that reason "the community" doesn't really want to use it, so it stays a second-class citizen.
This is a management and leadership problem. They need to make using their hardware easy. They need to support all of their hardware. They need to fix their driver bugs.
This ticket, finally closed after being open for two years, is a pretty good microcosm of the problem:
https://github.com/ROCm/ROCm/issues/1714
Users were complaining that the docs don't even specify which cards work.
But it goes deeper. A valid complaint is "this only supports one or two consumer cards!" A common rebuttal is that it works fine on lots of AMD cards if you set an environment flag to force the GPU architecture selection. The fact that it is so close to working on a wide variety of hardware, and yet doesn't, is exactly the vibe you get from the whole ecosystem.
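For what it's worth, the flag people usually mean is `HSA_OVERRIDE_GFX_VERSION`, which tells the ROCm runtime to treat your card as a different gfx target. A minimal sketch of the workaround (the value `10.3.0`, i.e. the gfx1030/RDNA2 target, is just an illustration; the right value depends on your actual card):

```shell
# Pretend the installed GPU is a gfx1030 (RDNA2) part so ROCm's
# prebuilt kernels will load on an officially unsupported card.
# 10.3.0 is illustrative -- pick the target closest to your hardware.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Sanity-check the override before launching a workload.
echo "HSA_OVERRIDE_GFX_VERSION=${HSA_OVERRIDE_GFX_VERSION}"
```

That this unsupported-by-default override is the standard community answer, rather than official support, is the point of the complaint.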