Comment by wtallis 9 hours ago

> Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around the year 2020 this was suddenly limited to two channels starting with the 12th generation (AMD's consumer processors have always had two channels, with the exception of Threadripper?).

You need to re-check your sources. When AMD started doing integrated memory controllers in 2003, they had Socket 754 (single channel / 64-bit wide) for low-end consumer CPUs and Socket 940 (dual channel / 128-bit wide) for server and enthusiast desktop CPUs, but less than a year later they introduced Socket 939 (128-bit), and since then their mainstream desktop CPU sockets have all had a 128-bit wide memory interface. When Intel later moved their memory controller from the motherboard to the CPU, their mainstream socket likewise used a 128-bit wide memory bus (starting with LGA 1156 in 2009).

There's never been a desktop CPU socket with a memory bus wider than 128 bits that wasn't a high-end/workstation/server counterpart to a mainstream consumer platform using only a 128-bit wide memory bus. As far as I can tell, the CPU sockets supporting integrated graphics have all used a 128-bit wide memory bus. Pretty much all of the growth in desktop CPU core counts, from dual core up to today's 16+ core parts, has happened on that same bus width, and the increased DRAM bandwidth needed to feed those extra cores has come entirely from running at higher speeds over the same number of wires.
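
To put rough numbers on that "same wires, higher speeds" point: peak theoretical bandwidth is just bus width times transfer rate. A quick sketch of the arithmetic, using illustrative DDR data rates rather than any specific platform mentioned above:

    # Back-of-the-envelope peak bandwidth for a 128-bit (dual-channel) memory bus.
    # The DDR transfer rates below are illustrative examples only.
    BUS_WIDTH_BITS = 128

    for name, transfers_per_sec in [("DDR3-1600", 1600e6),
                                    ("DDR4-3200", 3200e6),
                                    ("DDR5-6400", 6400e6)]:
        # peak bytes/s = (bus width in bytes) * (transfers per second)
        gb_per_s = (BUS_WIDTH_BITS / 8) * transfers_per_sec / 1e9
        print(f"{name}: {gb_per_s:.1f} GB/s over the same 128 wires")

The same 128 data lines go from roughly 25.6 GB/s at DDR3-1600 to over 100 GB/s at DDR5-6400.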

What has regressed is that the enthusiast-oriented high-end desktop CPUs derived from server/workstation parts are much more expensive and less frequently updated than they used to be. Intel hasn't done a consumer-branded variant of their workstation CPUs in several generations; they've only been selling those parts under the Xeon branding. AMD's Threadripper line got split into Threadripper and Threadripper PRO, but the non-PRO parts have a higher starting price than early Threadripper generations, and the Zen 3 generation didn't get non-PRO Threadrippers.

zozbot234 9 hours ago

At some point the best "enthusiast-oriented HEDT" CPUs will be older-gen Xeon and EPYC parts, competing fairly in price, performance, and overall feature set with top-of-the-line consumer setups.

  • wtallis 5 hours ago

    Based on historical trends, that's never going to happen for any workloads where single-thread performance or power efficiency matter. If you're doing something where latency doesn't matter but throughput does, then old server processors with high core counts are often a reasonable option, if you can tolerate them being hot and loud. But once we reached the point where HEDT processors could no longer offer any benefits for gaming, the HEDT market shrank drastically and there isn't much left to distinguish the HEDT customer base from the traditional workstation customers.

    • zozbot234 4 hours ago

      I'm not going to disagree outright, but you're going to pay quite a bit for that combination of peak single-thread performance and high power efficiency. It's not clear why we should treat that combination as the default, given that practical workloads increasingly benefit from good multicore performance. Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts) than on the CPU.

      • wtallis 3 hours ago

        I said "single-thread performance or power efficiency", not "single-thread performance and power efficiency". Though at the moment, the best single-thread performance does happen to go along with the best power efficiency. Old server CPUs offer neither.

        > Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts)

        A gaming GPU doesn't need all of the bandwidth available from a single PCIe x16 slot. Mid-range GPUs and lower don't even have x16 connectivity, because it's not worth the die space to put down more than 8 lanes of PHYs for that level of performance. The extra PCIe connectivity on server platforms could only matter for workloads that can effectively use several GPUs. Gaming isn't that kind of workload; attempts to use two GPUs for gaming proved futile and unsustainable.
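
        To put rough numbers on that: effective PCIe bandwidth per direction is roughly the per-lane rate times the lane count. A quick sketch using the commonly cited per-lane approximations (after encoding overhead):

            # Rough effective PCIe bandwidth per direction, per generation.
            # Per-lane figures are the commonly cited approximations in GB/s.
            PER_LANE_GB_S = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}

            for gen, per_lane in PER_LANE_GB_S.items():
                for lanes in (8, 16):
                    print(f"{gen} x{lanes}: ~{per_lane * lanes:.1f} GB/s")

        Even an x8 link on a PCIe 4.0 or 5.0 platform provides roughly 16-32 GB/s each way, which lines up with the point that a single gaming GPU doesn't need a full x16 slot, let alone a server platform's extra lanes.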