Comment by bmicraft
Wow, we got a No True Scotsman right here. On a more serious note, why would there be (more) microjitter? Isn't the default reaction to jitter to automatically increase the buffer size, as stated above?
>On a more serious note, why would there be (more) microjitter?
That was audiophile bull for the sake of entertainment, in case it wasn't clear. There wouldn't be any more or less jitter with or without RT.
They are the same samples either way, and they aren't played by Linux but by the DAC, which has its own clock.
>Isn't the default reaction to jitter to automatically increase the buffer size, as stated above?
I suspect you mean buffer underruns. A non-toy audio system will keep doing its best to deliver the requested latency even after those have already happened.
In the same way, an orchestra won't stop just because a performer played a wrong note or came in half a second late.
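For what it's worth, here's a toy sketch of what "keep trying" looks like in practice (plain Python, no real audio API; all names are made up): on an underrun the render callback pads the period with silence, reports it, and keeps delivering the same period size at the same latency, instead of stopping or quietly growing the buffer.

```python
# Toy model of an underrun-tolerant render loop. Hypothetical names;
# not any real audio stack's API.
from collections import deque

PERIOD = 64      # frames the "DAC" demands per callback
ring = deque()   # producer -> consumer ring buffer

def produce(frames):
    ring.extend(range(frames))   # stand-in for real samples

def render_period():
    """Deliver exactly PERIOD frames; zero-fill any shortfall."""
    out = [ring.popleft() for _ in range(min(PERIOD, len(ring)))]
    missing = PERIOD - len(out)
    if missing:
        # Underrun: report it, pad with silence, carry on at the
        # same period size rather than growing the buffer.
        print(f"underrun: {missing} frames")
        out.extend([0] * missing)
    return out

produce(100)                     # producer falls behind after period 1
for _ in range(3):
    render_period()              # periods 2 and 3 underrun but still play
```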