Comment by fooker
When Intel did it, the pitchforks came out.
Nvidia seems to get a pass. Why's that?
Intel disabled optimisations when they detected they were running on a competitor's hardware. The motivation was to make competitors compare badly in benchmarks.
Nvidia are disabling optimisations on their own hardware. The motivation appears to be that these optimisations are unsafe to apply to general code.
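The Intel behaviour described above amounts to a vendor-string dispatch. A minimal sketch of that pattern, assuming a simulated CPUID query (the function names and values are illustrative, not actual compiler internals):

```python
# Hypothetical sketch of vendor-string dispatch, in the spirit of the
# Intel compiler behaviour described above. The simulated CPUID result
# and function names are illustrative only.

def cpu_vendor_string(simulated="AuthenticAMD"):
    """Stand-in for reading the CPUID leaf-0 vendor ID string."""
    return simulated

def pick_code_path(vendor):
    # The controversial check: the fast vectorised path is selected only
    # when the vendor string matches, regardless of whether the CPU
    # actually supports the required instructions.
    if vendor == "GenuineIntel":
        return "fast_sse_path"
    return "generic_baseline_path"

print(pick_code_path(cpu_vendor_string("GenuineIntel")))   # fast_sse_path
print(pick_code_path(cpu_vendor_string("AuthenticAMD")))   # generic_baseline_path
```

The point of the sketch is that the dispatch keys on *who made the CPU*, not on *what the CPU can do*, which is why it drew pitchforks.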
Nvidia got the pitchforks back in 2003: https://web.archive.org/web/20051218120547/http://techreport...
And again in 2010, although as far as I'm aware this was just based on speculation, and it was never proved that it was intentional or that the optimisation would have netted the gains the author claimed: https://web.archive.org/web/20250325144612/https://www.realw...
It really depends on the details.
If they're intentionally slowing non-CUTLASS shaders, sure, pitchfork time.
If it's an option that /technically/ breaks the CUDA shader compatibility contract, then enabling it in specific "known good" situations is just business as usual for GPU drivers.
That can be for all kinds of reasons - straightforward bugs or incomplete paths in the optimisation implementation, an app not actually needing the stricter parts of the contract so it can take a faster path, or even bugs in apps that need workarounds.
Though piggybacking on these without understanding them can be extremely fragile - you don't know why they've limited it, and you run the risk of tripping over some situation that simply fails, either with incorrect results or something like a crash. And possibly in rather unexpected, unpredictable situations.
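The "known good" gating described above can be sketched as a driver-side per-app profile table. This is entirely hypothetical (the names and structure are mine, not any real driver's internals); real drivers maintain similar per-application profiles:

```python
# Hypothetical sketch of a driver enabling an optimisation that
# technically breaks the general compatibility contract, but only for
# applications that have been validated not to rely on the stricter
# guarantees. All names are illustrative.

APP_PROFILES = {
    # apps validated as safe under the relaxed contract
    "cutlass_bench": {"aggressive_reorder": True},
    "validated_game": {"aggressive_reorder": True},
}

def optimisation_enabled(app_name, opt_name):
    """Unsafe optimisations default to off; only allowlisted apps opt in."""
    return APP_PROFILES.get(app_name, {}).get(opt_name, False)

# A validated app gets the fast path; anything unknown gets the safe default.
print(optimisation_enabled("cutlass_bench", "aggressive_reorder"))  # True
print(optimisation_enabled("unknown_app", "aggressive_reorder"))    # False
```

This is also why piggybacking is fragile: spoofing your way onto the allowlist gets you the fast path without the validation that made it safe for those apps.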