Comment by alain94040 2 days ago
This is interesting if quite incomplete (as noted in the conclusion). CPU re-order buffers turn what you think of as mostly sequential execution into a massively parallel engine: data memory access, prefetching, speculative execution, etc. But if you are running a micro-benchmark with a tight loop of millions of iterations, then understanding the pipeline dependencies and dispatch can provide good insights.
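For example, a loop-carried dependency chain is exactly the kind of thing such a micro-benchmark exposes. Here is a minimal C sketch (function names, array size, and repeat count are just illustrative, not from the article) comparing a single accumulator, where every add waits on the previous one, against four independent accumulators the out-of-order core can keep in flight at once:

```c
#include <stdio.h>
#include <time.h>

#define LEN  4096      /* small enough to stay resident in L1 cache */
#define REPS 200000    /* repeat passes so the runtime is measurable */

/* Single accumulator: each add depends on the previous result, so
   throughput is bounded by add latency, not by how many ALUs exist. */
static double sum_dependent(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent accumulators: the re-order buffer can dispatch
   several adds per cycle, so the same work usually finishes faster. */
static double sum_independent(const double *a, int n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}

int main(void) {
    static double a[LEN];
    for (int i = 0; i < LEN; i++)
        a[i] = 1.0;

    double r1 = 0, r2 = 0;

    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)
        r1 += sum_dependent(a, LEN);
    clock_t t1 = clock();
    for (int r = 0; r < REPS; r++)
        r2 += sum_independent(a, LEN);
    clock_t t2 = clock();

    /* Print results so the compiler cannot discard the loops. */
    printf("dependent:   %.0f in %.3fs\n", r1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("independent: %.0f in %.3fs\n", r2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```

Compile with something like `gcc -O2 bench.c` (without `-ffast-math`, so the compiler cannot re-associate the floating-point adds and break the dependency chain on purpose). The array is kept small on purpose so memory effects stay out of the picture; once the working set spills past the caches, the memory system dominates instead.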
Yep. Cache is always the wildcard.