Comment by pjmlp 9 hours ago

19 replies

CUDA's success owes much to Intel and AMD, who never provided anything with OpenCL that was a proper alternative in developer experience, graphical debugging, libraries, or stable drivers.

Even the OpenCL 2.x C++ standard was largely ignored or badly supported by their toolchains.

winwang 8 hours ago

Isn't the point of OpenCL to be... open? Not only did Intel and AMD not provide enough value, but neither did the community.

CUDA... is kind of annoying. And yet, it's the best experience (for GPGPU), as far as I can tell.

I feel like it says something that CUDA sets a standard for GPGPU (i.e. its visible runtime API) but others still fail to catch up.

  • cogman10 7 hours ago

    The problem is the OpenCL development model is just garbage.

    Compare the hello world of OpenCL [1] vs CUDA [2]. There is so much boilerplate and low-level complexity in the OpenCL version, whereas the CUDA example is just a few simple lines compiled with the CUDA compiler.

    And what really sucks is that it's pretty hard to get away from that complexity, given the way OpenCL is structured. You simply have to know WAY too much about the hardware of the machine you are running on, which means carrying Intel/AMD/Nvidia-specific code paths in your application logic when trying to make an OpenCL app.

    Meanwhile, CUDA, because it's unapologetically just for nVidia cards, completely does away with that complexity in the happy path.

    For something to be competitive with CUDA, the standard needs something like a platform-agnostic bytecode to target, so that a common accelerator runtime can scoop up the bytecode and run it on whatever hardware is present.

    [1] https://github.com/intel/compute-samples/blob/master/compute...

    [2] https://github.com/premprakashp/cuda-hello-world
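To make the gap concrete, here is roughly what a complete CUDA vector-add "hello world" looks like (a sketch in the spirit of the linked examples, not taken from either repo; assumes a CUDA-capable GPU and nvcc):

```cuda
#include <cstdio>

// Each thread adds one element; the grid covers the whole array.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory: no explicit host<->device copies needed.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
}
```

The equivalent OpenCL program additionally needs platform and device enumeration, context and command-queue creation, runtime compilation of the kernel source string, and explicit buffer creation and argument binding before the kernel ever runs; that is the boilerplate being described above.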

    • winwang 4 hours ago

      Yeah, not just OpenCL, but even "newer" standards like WebGPU. I considered making a blog post where I just put the two hello worlds side-by-side and say nothing else.

      I was severely disappointed after seeing people praise WebGPU (I believe for being better than OpenGL).

      As for the platform-agnostic bytecode, that's where something like MLIR would work too (kind of). But we could also simply start by transpiling that bytecode into CUDA/PTX.

      Better UX with wider platform compatibility: CuPy, Triton.
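As an illustration of that UX gap, the CuPy version of a vector add collapses to a few NumPy-style lines (a sketch; assumes the cupy package is installed and an Nvidia GPU is present):

```python
import cupy as cp  # assumes cupy is installed and a CUDA device is available

a = cp.full(1_000_000, 1.0, dtype=cp.float32)
b = cp.full(1_000_000, 2.0, dtype=cp.float32)
c = a + b  # kernel launch, memory management, and synchronization handled for you

print(float(c[0]))  # 3.0
```

All the device setup that OpenCL forces onto the application author is handled by the library, which is exactly the "better UX" trade being pointed at here.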

dragontamer 8 hours ago

OpenCL 2.x was a major failure across the board.

OpenGL and Vulkan were good though. Gotta take the wins where they exist.

  • pjmlp 7 hours ago

    Thanks to Intel and AMD.

    • dragontamer 2 hours ago

      NVidia never even implemented OpenCL 2.0.

      AMD had a buggy version. Intel had no dGPUs, so no one cared how well an iGPU ran OpenCL (be it 1.2 or 2.0).

      --------

      AMD was clearly pushing C++ AMP with Microsoft at the time. And IMO, it worked great! Alas, no one used it, so it died.

talldayo 8 hours ago

cough cough

Remind me who owns the OpenCL trademark, again?

Intel and AMD weren't the ones that abandoned it. Speaking in no uncertain terms, there is a sole stakeholder that can be held responsible for letting the project die and preventing the proliferation of open GPGPU standards. A company that has everything to gain from killing open standards in the cradle and replacing them with proprietary alternatives. Someone with a well-known grudge against Khronos who's willing to throw an oversized wrench into the plans as long as it hurts their opponents.

  • pjmlp 7 hours ago

    Don't blame Apple for what Khronos, Intel and AMD have done with OpenCL after version 1.0.

    It isn't Apple's fault that Intel and AMD didn't deliver.

    • talldayo 7 hours ago

      It is entirely Apple's fault that they rejected OpenCL to replace it with a proprietary library. If this was an implementation or specification problem, Apple had every opportunity to shape the project in their own image. They cannot possibly argue that this was done for any reason other than greed, considering they themselves laid the framework for such a project. Without Apple's cooperation, open-source GPGPU libraries cannot reasonably target every client. Apple knows they wield this power, and considering their history it's both illogical and willfully ignorant to assume they're not doing this as part of a broader trend of monopolistic abuse.

      Having shut out Nvidia as part of a petty feud, Apple realized they could force any inferior or nonfree CUDA alternative onto their developers no matter how unfinished, slow or bad it is. They turned away from the righteous and immediately obvious path to complicate things for developers that wanted to ship cross-platform apps instead of Mac-only ones.

  • google234123 8 hours ago

    Would you be willing to share what the deal is with Apple/Khronos relations?

    • troupo 8 hours ago

      Apple didn't like OpenGL, rightfully, and came up with their own Metal, which they released two years before the first version of Vulkan.

      Now people pretend that Apple is bad because it never adopted Vulkan and never implemented the "good modern OpenGL" (which never really existed).

      • jsheard 8 hours ago

        It runs deeper than that: during the development of WebGPU it came to light that Apple was vetoing the use of any Khronos IP whatsoever, due to a private legal dispute between them. That led to WebGPU having to reinvent the wheel with a brand-new shader language, because Apple's lawyers wouldn't sign off on using GLSL or SPIR-V under any circumstances.

        The actual details of the dispute never came out, so we don't know if it has been resolved or not.