Comment by mmastrac
QSound was magic at the time. In a DSP class during my EE degree we implemented a very minor transform that shifted the apparent position of audio, and it was wild.
It's impossible to make 3D audio as flawless as the real world: human ears all vary slightly, and your 3D spatial perception of sound is literally tuned to your own ears. But QSound's transfer functions come about as close as you can get.
The algorithm also falls apart a bit outside of the sweet spot, so it's really only useful with headphones, or in specific setups where the listener is known to be at a fixed position relative to the speakers.
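The kind of "minor transform" that shifts apparent position can be sketched with a toy interaural time/level difference model. This is a crude stand-in for real HRTF filtering, nothing like QSound's actual (proprietary) transfer functions; the function name, gain curve, and constants here are illustrative assumptions:

```python
import numpy as np

def pan_binaural(mono, azimuth_deg, sr=44100, head_radius=0.0875, c=343.0):
    """Toy ITD/ILD panner: delay and attenuate the far ear so a mono
    source appears to come from the given azimuth (0 = front, +90 = right)."""
    az = np.radians(azimuth_deg)
    # Woodworth-style interaural time difference approximation
    itd = head_radius / c * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * sr))
    # Simple level difference: attenuate the far ear as the source moves aside
    far_gain = 0.5 + 0.5 * np.cos(az / 2) ** 2
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    if azimuth_deg >= 0:   # source on the right: left ear is the far ear
        left, right = far_gain * delayed, mono
    else:                  # source on the left: right ear is the far ear
        left, right = mono, far_gain * delayed
    return np.stack([left, right], axis=-1)
```

Over headphones even this crude version produces a noticeable sense of direction; the real trick (and where the hand-tuning comes in) is shaping the frequency response per ear so the brain stops hearing "stereo" and starts hearing "location".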
The original model was developed using a simulated human head and a lot of hand-tuning. I'm curious whether the tech has advanced far enough that a more modern set of transfer-function parameters could be developed.
Nothing beats N speakers for positional audio, but this is a pretty decent replacement if the conditions are ideal.
OpenAL was designed as an open library to bring 3D audio to the masses in the same way OpenGL did for graphics (basically exposing QSound-equivalent hardware on sound cards through an API), but I'm not sure what happened to it [1].
[1] https://www.openal.org/documentation/openal-1.1-specificatio...
Isn’t this the same fundamental technique as Spatial Audio and binaural Atmos rendering? AirPods can even measure your personal ear transfer functions.