Comment by dragontamer 10 hours ago
Cloth physics data in Blender is stored in RAM (scenes and models can grow very large, too large for GPU memory).
Figuring out which vertices of a physics simulation need to be sent to the GPU would cost time, effort, and PCIe traffic that is _NOT_ spent running the cloth physics.
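Some back-of-envelope numbers (my assumptions, not measurements of Blender), assuming CPU-side scene data has to be consulted every substep:

    # Rough per-frame cost of shipping cloth state over PCIe.
    # All sizes and bandwidths below are assumptions, not Blender measurements.
    verts = 100_000                    # a fairly dense cloth mesh
    bytes_per_vert = 3 * 4 * 3         # position + velocity + normal, float32 xyz each
    payload = verts * bytes_per_vert   # ~3.6 MB per direction

    pcie_bw = 16e9                     # ~16 GB/s effective, PCIe 3.0 x16
    latency = 10e-6                    # rough per-transfer launch/sync overhead

    round_trip = 2 * (payload / pcie_bw + latency)
    substeps = 10                      # cloth solvers typically take several substeps per frame
    print(f"{payload / 1e6:.1f} MB each way, ~{round_trip * 1e3:.2f} ms round trip")
    print(f"~{round_trip * substeps * 1e3:.1f} ms/frame on traffic alone at {substeps} substeps")

That's a few milliseconds per frame spent purely on moving data, before any simulation work happens.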
Furthermore, once all the data is figured out and blocked out in cache, it's... cached. Cloth physics only interacts with a small number of close, nearby objects. Yeah, you _could_ calculate this small portion and send it to the GPU, but the CPU is really good at automatically traversing trees and keeping the most recently used stuff in L1, L2, and L3 caches (without any need for special code).
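To illustrate the access pattern I mean (this is a toy sketch, not Blender's actual data structure): lots of small, irregular neighbor lookups that the CPU cache hierarchy absorbs for free.

    # Toy spatial hash answering "which vertices are near this point?"
    from collections import defaultdict

    def build_grid(verts, cell=0.05):
        grid = defaultdict(list)
        for i, (x, y, z) in enumerate(verts):
            grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
        return grid

    def neighbors(grid, p, cell=0.05):
        cx, cy, cz = int(p[0] // cell), int(p[1] // cell), int(p[2] // cell)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    out.extend(grid.get((cx + dx, cy + dy, cz + dz), ()))
        return out

Each query touches a handful of buckets; once those are hot in L1/L2, repeated queries during a solver substep are nearly free. On a GPU you'd have to rebuild that locality by hand.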
All in all, I expect something like cloth physics (a calculation Blender currently does CPU-only) is best done on the CPU. Not because GPUs are bad at the algorithm, but because PCIe transfers are too slow and cloth physics is too easily cached / too well served by various CPU features.
It'd be a lot of effort to translate all that code to the GPU, and you likely wouldn't get large gains (unlike raytracing/Cycles/rendering, which benefit hugely from GPU compute).
NVIDIA's PhysX has its own cloth physics abstractions: https://docs.nvidia.com/gameworks/content/gameworkslibrary/p..., so I'm sure it is a thing we do on GPUs already, if only for games. These are old demos anyway:
https://www.youtube.com/watch?v=80vKqJSAmIc
I wonder what the difference is between the cloth physics you are talking about and what NVIDIA has been doing for, I think, more than a decade now. Is it scale? At the least, it sounds like there are alternatives that do it on the GPU, and there are open questions about whether Blender will:
https://blenderartists.org/t/any-plans-to-make-cloth-simulat...