Comment by aa-jv a day ago

19 replies

>Windows Energy-saving timer heuristics

Another example of Windows' technical debt sitting there as low-hanging fruit, waiting to be cashed in by performance-oriented developers. It's interesting that YouTube changing the timer resolution propagates to other threads .. does this hark back to darker days in the MS-DOS era? YouTube, another Turbo button?

fragmede a day ago

The best one, of course, is the one where Windows would install faster if you jiggled the mouse.

  • fhdkweig 20 hours ago

    I was told a story by some hackers from the old multi-user mainframe days. They said a good speed booster was to have the program open a terminal, because that made the mainframe OS think it was an interactive, real-time user program and give it more resources.

    • aa-jv 2 hours ago

      I still have the brainwash/muscle-memory to type:

      $ gunzip -c somefile.tar.gz | tar xvf -

      .. because there was, once, a day when the pipe buffer available was bigger than the memory offered to a process by default, meaning the thing would unpack faster if I did that versus:

      $ tar zxvf somefile.tar.gz

      Admittedly, this discrepancy in available memory usually existed because the BOFH hadn't realized there was also a per-user allocation for pipes, so it was a neat trick for getting around the hard heap-allocation limits that BOFH had imposed on some of my processes ..

  • jaffa2 a day ago

    Is this related to the fact that when you are scrolling and selecting within a document, wiggling the mouse makes it scroll faster? I always thought it was just a nice UI optimisation, but I could believe it's actually some accidental side-effect at play.

    (Like, make a 20-page Word doc and start selecting from the first page and drag through - it will go faster if you jiggle. Same in Excel and nearly every Windows app, even Windows Explorer.)

    • ninkendo a day ago

      No, it has to do with the fact that every time you move the mouse over a window, a hover event is sent to the application, which runs its main event loop. Either the installer only updated its progress bar when an event happened (in which case it would only appear to be going faster, because the progress bar would move more smoothly), or there was some really terribly written code that literally only made progress when an (unrelated) event happened. My guess is the former.

    • joshuaissac 19 hours ago

      There must be so many subtle features like these that people use subconsciously; when they try to move to another operating system, they try the same tricks, nothing happens, and they get frustrated.

  • giancarlostoro a day ago

    Makes me wonder how much of Windows is like Pokemon glitch-hunting: a community that never stops finding quirks, even several generations later.

antonvs a day ago

A performance issue related to this is more likely a shortcoming in the software experiencing this issue.

The setting in question is the minimum timer resolution. Changing this will only have an impact on applications that depend heavily on that resolution, i.e. it's not some sort of turbo button for general execution speed. In fact according to the docs, a higher resolution can "reduce overall system performance, because the thread scheduler switches tasks more often."

An application whose performance depends on the timer resolution should be setting that resolution itself, using the Win32 API function mentioned in the thread, timeBeginPeriod, which includes the following in its documentation:

> For processes which call this function, Windows uses the lowest value (that is, highest resolution) requested by any process. For processes which have not called this function, Windows does not guarantee a higher resolution than the default system resolution.

> Starting with Windows 11, if a window-owning process becomes fully occluded, minimized, or otherwise invisible or inaudible to the end user, Windows does not guarantee a higher resolution than the default system resolution. See SetProcessInformation for more information on this behavior.

> Setting a higher resolution can improve the accuracy of time-out intervals in wait functions. However, it can also reduce overall system performance, because the thread scheduler switches tasks more often. High resolutions can also prevent the CPU power management system from entering power-saving modes.

https://learn.microsoft.com/en-us/windows/win32/api/timeapi/...

  • aa-jv a day ago

    That's all well and good, but this part seems a bit .. uninformative, or at the very least easily misunderstood by the harried developer:

    >For processes which have not called this function, Windows does not guarantee a higher resolution than the default system resolution.

    There should at least be a mention that changing this resolution can affect other processes.

    Is this a bug? It's hard to see it as a feature.

    • whizzter 19 hours ago

      Think of it this way: the global timer resolution of the system is minOf(allProcessesTimerResolution). If no process needs higher-accuracy timing, then nothing hinders the system from sleeping for longer periods to save power and/or incurring less interrupt overhead (a feature, I'd say).

      These APIs are from the early 90s, when having a global system interrupt firing 1000 times per second could very well have cost a percent or two (or more) of overall CPU performance (people already complained about the "overhead" of having a "real OS").

      On the other hand, writing audio players on DOS, you had the luxury of receiving your own interrupt within a few samples' worth of audio, which meant you could run very tight audio buffers with less latency and quicker response to user triggers.

      Not having that timing fidelity would have made Windows a no-go platform for audio software, so giving developers the freedom to enable it when needed was necessary. Removing it within the next ten years would probably have risked bad regressions.

      Like a sibling comment noted, they finally removed the global behavior during Windows 10's lifespan; with modern multicore CPUs they probably felt safe enough, performance-wise, to confine high-accuracy threads/processes to separate cores, let the other cores sleep more, and actually win back some battery life.

      It might not be "perfect engineering", but considering the number of schedulers written for Linux over the years to address desktop (audio) vs. server loads, it was a fairly practical and usable design.

      • boznz 18 hours ago

        DOS was basically bare-metal programming with a few hardware and software calls thrown in. With 50-cent ARM processors these days having the power of an 80's mainframe, bare-metal on a $5 dev board is still my preferred way to go for simple projects that boot instantly and never need updates. I'm currently listening to music on a DOS MP3 player on a throwaway industrial x86 motherboard I built into an amplifier case 23 years ago.

    • ninkendo a day ago

      > There should at least be mention that changing this resolution can affect other processes.

      That sorta is what it’s saying. If you don’t set it yourself, you won’t get any better than the “default system resolution”. But if the default system resolution changes (say, by entering a sort of “performance mode” when playing games or watching videos), then it would imply it will affect all processes that are using the default, right?

      • Someone a day ago

        Sorta, on Windows < 10. From the same Microsoft page:

        “Prior to Windows 10, version 2004, this function affects a global Windows setting. For all processes Windows uses the lowest value (that is, highest resolution) requested by any process. Starting with Windows 10, version 2004, this function no longer affects global timer resolution.”

      • aa-jv 2 hours ago

        I mean, sure, it implies things. But we all know that devs have a hard time reading between the lines when the compiler is boiling away.

        You get it, I get it, but I guarantee you there are a thousand developers for each one of us who won't get it and wonder why the heck things change now and then, without realizing they also need to test their timer-dependent code under less than hygienic conditions in order to validate the results ..

        I think this is, technically, a distasteful situation, and whoever wrote those technical docs kind of wanted to avoid admitting the painful truth and just flat-out stating that changing the timer resolution will have a system-wide impact, because .. really .. why should it? There is no good reason for it. Only bad reasons, imho. Ancient, technical-debt'ish kinda reasons.