Comment by imtringued a day ago
The average consumer uses llama.cpp. So here is your list of kernels: https://github.com/ggml-org/llama.cpp/tree/master/ggml/src/g...
And here is pretty damning evidence that you're full of shit: https://github.com/ggml-org/llama.cpp/blob/master/ggml/src/g...
The ggml-hip backend references the ggml-cuda kernels. The software is "the same" (as in, it is literally the CUDA source, compiled for AMD via HIP), and yet AMD is still behind.
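The mechanism behind that reuse is worth spelling out: the HIP backend compiles the same ggml-cuda kernel sources, with a thin compatibility header that aliases CUDA runtime names onto their HIP equivalents. A minimal sketch of that pattern (illustrative, not the actual llama.cpp header):

```cuda
// Compatibility shim: when building for AMD via hipcc, alias the
// CUDA runtime API onto HIP so the same kernel source compiles
// unchanged. llama.cpp's HIP backend uses this style of mapping.
#if defined(__HIP_PLATFORM_AMD__)
  #include <hip/hip_runtime.h>
  #define cudaMalloc            hipMalloc
  #define cudaMemcpy            hipMemcpy
  #define cudaFree              hipFree
  #define cudaDeviceSynchronize hipDeviceSynchronize
#else
  #include <cuda_runtime.h>
#endif

// One kernel source: nvcc builds it for NVIDIA, hipcc for AMD.
__global__ void scale(float * x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}
```

So the kernels themselves are shared; any remaining performance gap comes from the hardware and the HIP/ROCm stack underneath, not from AMD getting different software.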