Comment by defrost
Having a number of running processes take CPU usage to 100% is one thing; having an under-utilised CPU with almost no processes running report 100% usage is another, and that's the subject of the article here.
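For context, that "100%" figure is derived, not measured directly: tools sample cumulative tick counters twice and compute a busy fraction from the deltas, so bad counters from a hardware quirk make an idle machine read as pegged. A minimal sketch of the usual calculation, assuming a Linux box and the conventional /proc/stat field layout (the cpu_ticks helper is just an illustrative name, not from the article):

    import time

    def cpu_ticks():
        # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq steal ..."
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]   # idle + iowait ticks
        total = sum(fields[:8])        # skip guest counters, already folded into user/nice
        return idle, total

    idle1, total1 = cpu_ticks()
    time.sleep(1)
    idle2, total2 = cpu_ticks()

    # Busy fraction over the sampling window; garbage counters in, garbage percentage out.
    print(f"CPU usage: {1 - (idle2 - idle1) / (total2 - total1):.1%}")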
Some mainframes can lock their clock speed and run at exactly 100% at all times, so you can often get hard guarantees about program latency and performance.
I didn't intend this as an example of the issue the article mentions (usage being misreported because of a hardware design issue). It was just a fun example of how different hardware behaves.
One could also say Omegamon (or whatever tool) was misreporting, because it didn't account for the processor time of the various supporting systems that handled peripheral operations. After all, they also paid for the disk controllers, disks, tape drives, terminal controllers and so on, so they might well want to drive those close to 100% as well.