Comment by Reason077 14 hours ago

> "I assume you have to do a full memory copy to the GPU to display the image in the end."

On a unified memory architecture (e.g. Apple Silicon), that's not an expensive operation. No copy required.

raphlinus 14 hours ago

Unfortunately graphics APIs suck pretty hard when it comes to actually sharing memory between CPU and GPU. A copy is definitely required when using WebGPU, and also on discrete cards (which is what these APIs were originally designed for). It's possible that using native APIs directly would let us avoid copies, but we haven't done that.
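To make the copy concrete, here's a minimal sketch of uploading CPU-side pixel data through WebGPU's browser API. The function name `uploadImage` and the texture settings are illustrative assumptions, not code from this thread; the point is that `queue.writeTexture` snapshots and copies the CPU bytes into GPU-owned storage, and the API offers no way to alias existing CPU memory, even on unified-memory hardware.

```typescript
// Sketch only: assumes a browser with WebGPU enabled and @webgpu/types installed.
async function uploadImage(pixels: Uint8Array, width: number, height: number) {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available");
  const device = await adapter.requestDevice();

  // Destination texture, owned by the GPU/driver.
  const texture = device.createTexture({
    size: { width, height },
    format: "rgba8unorm",
    usage: GPUTextureUsage.COPY_DST | GPUTextureUsage.TEXTURE_BINDING,
  });

  // writeTexture copies `pixels` into the texture. There is no zero-copy path
  // in the WebGPU API, so this copy happens even on unified-memory systems.
  device.queue.writeTexture(
    { texture },
    pixels,
    { bytesPerRow: width * 4, rowsPerImage: height },
    { width, height },
  );

  return { device, texture };
}
```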