Comment by mmastrac 10 days ago

QSound was magic at the time. We had a DSP class in my EE degree where we implemented a very minor transform that would shift position of audio and it was wild.

It's impossible to get 3D audio to sound exactly like the real world, because human ears all vary slightly and your 3D spatial perception of sound is literally tuned to your own ears, but QSound's transfer functions come as close as you can get.

The algorithm also falls apart a bit outside the sweet spot, so it's really only useful on headphones, or in specific cases where the listener is known to be in a certain location relative to the speakers.

The original model was developed using a simulated human head and lots of hand-tuning. I am curious if we've advanced far enough with tech that a more modern set of transfer function parameters could be developed.

Nothing beats N speakers for positional audio, but this is a pretty decent replacement if the conditions are ideal.
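
For a rough idea of what such a transform does, here is a minimal sketch of the two simplest binaural cues an HRTF encodes (interaural time difference and interaural level difference) applied to a mono signal. The Woodworth-style ITD formula, the equal-power level law, and the head-radius constant are illustrative assumptions, not QSound's actual model.

  // Minimal binaural panning sketch: derive a per-ear delay (ITD) and gain
  // (ILD) from a source azimuth and apply them to a mono signal. Real HRTF
  // rendering instead convolves each ear with a measured impulse response.
  const SAMPLE_RATE = 48000;     // samples per second (assumed)
  const HEAD_RADIUS = 0.0875;    // meters, a typical head radius (assumed)
  const SPEED_OF_SOUND = 343;    // meters per second

  // Woodworth-style interaural time difference for an azimuth in radians
  // (0 = straight ahead, positive = to the listener's right).
  function itdSeconds(azimuth: number): number {
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth + Math.sin(azimuth));
  }

  // Simple equal-power level law for the interaural level difference.
  function earGains(azimuth: number): { left: number; right: number } {
    return {
      left: Math.sqrt(0.5 * (1 - Math.sin(azimuth))),
      right: Math.sqrt(0.5 * (1 + Math.sin(azimuth))),
    };
  }

  // Render a mono buffer to stereo by delaying and attenuating the far ear.
  function panMono(input: Float32Array, azimuth: number): [Float32Array, Float32Array] {
    const delay = Math.round(Math.abs(itdSeconds(azimuth)) * SAMPLE_RATE);
    const { left, right } = earGains(azimuth);
    const outL = new Float32Array(input.length + delay);
    const outR = new Float32Array(input.length + delay);
    const delayL = azimuth > 0 ? delay : 0;  // source on the right: left ear is far
    const delayR = azimuth > 0 ? 0 : delay;
    for (let i = 0; i < input.length; i++) {
      outL[i + delayL] += input[i] * left;
      outR[i + delayR] += input[i] * right;
    }
    return [outL, outR];
  }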

OpenAL was designed as an open API to bring 3D audio to the masses in the same way that OpenGL did for graphics (basically exposing QSound-equivalent hardware on sound cards through an API), but I'm not sure what happened to it [1].

[1] https://www.openal.org/documentation/openal-1.1-specificatio...

wrs 10 days ago

Isn’t this the same fundamental technique as Spatial Audio and binaural Atmos rendering? AirPods can even measure your personal ear transfer functions.

  • brudgers 9 days ago

    Yes and no. Contemporary spatial audio renders the image in real time. Older systems rendered the image during mixing.

    On the other hand, the psychoacoustic techniques have not changed.

    • conradev 9 days ago

      While Apple’s spatial audio engine renders in real-time, Apple Music’s spatial audio tracks are rendered binaurally during mixing.

      Whereas Sony’s 360 Reality Audio is object-based and rendered in real-time.

  • [removed] 9 days ago
    [deleted]
  • mmastrac 10 days ago

    Looks like it is. Apple's HRTF should be much more accurate than QSound's -- QSound was designed to work without any listener-specific analysis.

StilesCrisis 10 days ago

I experimented with OpenAL when Apple developed an implementation, and it was unfortunately quite buggy: there were obvious threading hazards visible in the code. It was fine for toy/demo usage, but it wasn't fit for production.

It looks like OpenAL was used in various games on other platforms, though.

lynx23 9 days ago

Well, the OpenAL model lives on in WebAudio. Listener position, buffers, sources... you name it, the WebAudio API has it.
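
As a rough sketch of that mapping (assuming a browser environment and an already decoded AudioBuffer; the positions and distance model here are arbitrary illustrative values):

  // Positional audio with the Web Audio API: a listener, a buffer, and a
  // panned source, roughly mirroring OpenAL's listener/buffer/source model.
  function playPositional(ctx: AudioContext, buffer: AudioBuffer): void {
    // Listener position (the "ears") at the origin; the default orientation
    // faces down the -Z axis.
    ctx.listener.positionX.value = 0;
    ctx.listener.positionY.value = 0;
    ctx.listener.positionZ.value = 0;

    // The panner does the spatialization; 'HRTF' selects binaural rendering
    // via head-related transfer functions (vs. the simpler 'equalpower').
    const panner = new PannerNode(ctx, {
      panningModel: 'HRTF',
      distanceModel: 'inverse',
      positionX: 2,   // two meters to the listener's right
      positionY: 0,
      positionZ: -1,  // one meter in front
    });

    // The source plays an AudioBuffer, analogous to an OpenAL source bound
    // to a buffer.
    const source = new AudioBufferSourceNode(ctx, { buffer });
    source.connect(panner).connect(ctx.destination);
    source.start();
  }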

PittleyDunkin 9 days ago

It's much easier to replace OpenAL with other engines; FMOD, notably, is better in almost every way.