Comment by let_me_post_0 4 days ago
Nvidia isn't very big on open source either: most CUDA libraries are still closed-source. I think this might eventually be their downfall, because people want to know what they are working with. For example, with PyTorch I can profile the library against my use case and then modify the official library to get some bespoke optimization. With CUDA, if I need to do that, I have to start from scratch and guess whether the library behind the API already has such optimizations.
NVIDIA does have a bunch of FOSS libraries, like CUB and Thrust (now part of CCCL). But they tend to suffer from "not invented here" syndrome [1], so they seem to avoid collaborating on FOSS they don't manage/control themselves.
I have a bit of a chip on my shoulder here: I've been trying to pitch my Modern C++ API wrappers to them for years, and even though I've gotten some appreciative comments from individuals, they have shown zero interest.
https://github.com/eyalroz/cuda-api-wrappers/
There is also their kernel driver, which is nominally "open source", yet exposes almost none of the actual logic. Their runtime library is closed too, as are their management utility (nvidia-smi), their LLVM-based compiler, their profilers, and their OpenCL stack :-(
I must say they do have relatively extensive documentation, even if it doesn't cover everything.
[1] - https://en.wikipedia.org/wiki/Not_invented_here