Comment by moralestapia a day ago
The software is the same, AMD is not doing its own LLMs.
Wrong.
Show me a single CUDA kernel in Llama's source code.
(And that's a really easy one to find, if you know a bit about the codebase.)
The average consumer uses llama.cpp. So here is your list of kernels: https://github.com/ggml-org/llama.cpp/tree/master/ggml/src/g...
And here is pretty damning evidence that you're full of shit: https://github.com/ggml-org/llama.cpp/blob/master/ggml/src/g...
The ggml-hip backend references the ggml-cuda kernels. The "software is the same" (in the sense that it is CUDA code), and yet AMD is still behind.
I think the software they were referring to is CUDA and the developer experience around the Nvidia stack.