Comment by cwillu 10 months ago

Theoretically possible, but show me a sound server that automatically drops resampling quality instead of just increasing the buffer size.

ssl-3 10 months ago

Perhaps that's a theory.

In reality, my desktop does-everything Linux rig literally does everything. It's my ZFS file server/NAS, and VM host, and web-browsing machine, and gaming box, and it does everything else I do with a computer at home (except for routing, directly controlling 3D printers, and playing movies on the BFT).

Sometimes, especially when gaming, sound glitches. It's annoying to me when this happens. (It'd be far worse than annoying if I were doing serious audio work, but I am not.)

An RT kernel may help with that. Not by automagically adjusting buffers (or whatevers) for a glitch after it happens, but by preventing it from ever happening to begin with.

(And I intend to find out for sure if I ever get far enough into moving into this new place that I can plug my desktop back in, now that it is a mainlined feature instead of a potential rabbit hole.)

snvzz 10 months ago

That's a different knob that can be used; increasing the buffer size is simply a different compromise for meeting audio deadlines.

Quality vs latency, pick one.

Or just use PREEMPT_RT to tighten the timing of the critical audio worker getting the CPU ;)
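
Aside, since that's the actual mechanism: PREEMPT_RT doesn't add a new API, it just makes the usual SCHED_FIFO request hold up under load. Here's a minimal sketch of the "critical audio worker" side, assuming the process has been granted rtprio rights (limits.conf or RealtimeKit); the priority value and the function name are made up for illustration. For scale on the other knob, buffer latency is roughly buffer_frames / sample_rate, so 256 frames at 48 kHz is ~5.3 ms and 1024 frames is ~21.3 ms.

    /* Sketch: an audio worker thread asking for real-time scheduling. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void request_rt(void)
    {
        struct sched_param p = { .sched_priority = 70 };  /* arbitrary RT priority */

        /* Promote the calling (audio) thread to SCHED_FIFO; this fails
         * without CAP_SYS_NICE or an rtprio rlimit. */
        int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &p);
        if (err != 0)
            fprintf(stderr, "no RT scheduling: %d\n", err);

        /* Under PREEMPT_RT a SCHED_FIFO thread like this can preempt almost
         * all other work, including most of the kernel, which is what
         * tightens the timings. */
    }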

  • cwillu 10 months ago

        JERRY: We didn't bet on if you wanted to. We bet on if it would be done.

        KRAMER: And it could be done.

        JERRY: Well, of course it could be done! Anything could be done! But it only
               is done if it's done. Show me the levels! The bet is the levels.
    
    Again, the point isn't that there is a possible tradeoff to be made, nor that the configuration option isn't available, nor even that some people tweak that setting for this very reason. It was stated that better RT performance will automatically improve audio quality because the audio system may automatically switch resampling methods on xrun, and that is specifically what I'm doubting.

    The bet isn't that it could be done. Anything could be done! Show me that it is being done!

    • snvzz 10 months ago

      A true audiophile can tell.

      Never mind switching approaches to interpolation; the microjitter is blatant, and the plankton is lost.

      • bmicraft 10 months ago

        Wow, we got a No True Scotsman right here. On a more serious note, why would there be (more) microjitter? Isn't the default reaction to jitter to automatically increase the buffer size, as stated above?

        • snvzz 10 months ago

          >On a more serious note, why would there be (more) microjitter?

          This was audiophile bull for the sake of entertainment, if not clear enough. There wouldn't be any more or less jitter with or without RT.

          It is the same samples, and these samples are not played by Linux, but by the DAC, which has its own clock.

          >Isn't the default reaction to jitter to automatically increase the buffer size, as stated above?

          I suspect you mean buffer underruns. A non-toy audio system will continue to try its best to deliver the requested latency, even when these have already happened.

          In the same manner an orchestra won't stop just because a performer played the wrong note, or was off by half a second.
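
          To make "continue to try its best" concrete, here's a sketch at the ALSA level (assumptions: an already-opened playback handle and a made-up write_period() helper). On an underrun the stream is recovered and playback resumes with the same period size; nothing silently grows the buffer behind your back.

              /* Sketch (ALSA): keep delivering at the requested latency after an xrun. */
              #include <alsa/asoundlib.h>
              #include <stdint.h>

              /* pcm is an open playback handle; buf/frames hold one period. */
              static void write_period(snd_pcm_t *pcm, const int16_t *buf,
                                       snd_pcm_uframes_t frames)
              {
                  snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf, frames);
                  if (n == -EPIPE) {                    /* underrun: deadline missed  */
                      snd_pcm_recover(pcm, (int)n, 1);  /* re-prepare the device      */
                      snd_pcm_writei(pcm, buf, frames); /* carry on, same period size */
                  }
              }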

      • LtdJorge 10 months ago

        Does a true audiophile need cables made with 99.9% pure silver to tell, tho?