senkora 4 days ago

+1. An important non-obvious detail for AMD is that they (at least in the past, I assume for this as well) have kept the instruction timings similar from generation to generation of consoles.

Different x86 micro-architectures benefit from writing the machine code in slightly different ways. Games are highly optimized to the specific micro-architecture of the console, so keeping that stable helps game developers optimize for the console. If you suddenly changed the micro-architecture (if switching to Intel), then old games could suddenly become janky and slow even though both systems are x86.

(This would only matter if you were pushing performance to the edge, which is why it rarely matters for general software development, but console game dev pushes to the edge)

So it isn't just the graphics APIs that would change going from AMD to Intel, but the CPU performance as well.
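
To make that concrete, here's a minimal sketch of the kind of code that is sensitive to instruction timings (the 4-way unroll and the 4-cycle FP add latency are illustrative assumptions, not any console's actual numbers):

    #include <cstddef>

    // One accumulator: every add waits on the previous one, so throughput
    // is bounded by this core's floating-point add latency.
    float sum_serial(const float* a, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i) s += a[i];
        return s;
    }

    // Four independent chains hide a hypothetical 4-cycle add latency. A
    // core with a different latency or issue width wants a different unroll
    // factor -- exactly the tuning that breaks across micro-architectures.
    float sum_unrolled(const float* a, std::size_t n) {
        float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < n; ++i) s0 += a[i];
        return (s0 + s1) + (s2 + s3);
    }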

deaddodo 4 days ago

> Different x86 micro-architectures benefit from writing the machine code in slightly different ways. Games are highly optimized to the specific micro-architecture of the console, so keeping that stable helps game developers optimize for the console.

While that can be true, very few gamedev companies these days optimize to that degree. They almost all use off-the-shelf middleware and game engines that are built to support all of the platforms. The companies that do go through that effort tend to have very notable releases.

Nobody is hand-tuning assembly these days to fit into tight instruction windows. At least, not outside of some very specific logic fragments. Instead they're all writing generic intrinsics-based logic, which is fine, as that's what newer CPUs expect and are internally optimized for.
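
For illustration, "generic intrinsics-based logic" looks something like this (a sketch; dot4 is a made-up helper, not from any particular engine):

    #include <immintrin.h>

    // SSE intrinsics instead of hand-written assembly: the compiler picks
    // the actual instruction selection and scheduling for the target core.
    float dot4(const float* a, const float* b) {
        __m128 va = _mm_loadu_ps(a);             // load 4 floats, unaligned OK
        __m128 vb = _mm_loadu_ps(b);
        __m128 m  = _mm_mul_ps(va, vb);          // element-wise products
        __m128 hi = _mm_movehl_ps(m, m);         // upper pair moved down
        __m128 s  = _mm_add_ps(m, hi);           // [m0+m2, m1+m3, ..]
        __m128 sh = _mm_shuffle_ps(s, s, 0x55);  // broadcast lane 1
        return _mm_cvtss_f32(_mm_add_ss(s, sh)); // (m0+m2) + (m1+m3)
    }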

In addition, the gap between Zen generations is about as large as a switch to Intel would be. We're talking fairly different cache coherency, memory hierarchies, CCX topologies, micro-op and instruction timings, iGPU configurations, etc.

That all being said, AMD was going to beat Intel regardless: AMD has the established business relationships, and Intel's current internal struggles (both business-wise and in R&D) make it fairly difficult for Intel to offer an equivalent alternative.

  • soganess 3 days ago

    Asking this as an open-ended (if leading) question: I assume enough people are doing it, otherwise the PS5 Pro makes no sense... Right?

    They (AMD/Sony) shoehorned the RDNA 3/3.5 GPU architecture onto an older Zen 2 core, with a different process node, because... they felt like making a frankenAPU? Especially since the APUs are usually monolithic (vs chiplet) in design and share a memory controller. Surely it would have been easier/cheaper to put in 8 zen 4c/5c cores and call it a day.

    I'm pretty sure I'm just missing something obvious...

    • wmf 3 days ago

      For PlayStation APUs, it's likely that AMD presents a menu of options and Sony chooses which components they want. For PS5 Pro, the CPU is unchanged from PS5 because Sony doesn't feel the need for anything faster. A newer CPU would take more area. But Sony really wanted better raytracing and AI so they chose RDNA 3.9 or whatever for the GPU. I suspect the cores are all mostly synthesized so they can support any process and Infinity Fabric is compatible enough that you can mix and match new and old cores.

    • deaddodo 2 days ago

      > They (AMD/Sony) shoehorned the RDNA 3/3.5 GPU architecture onto an older Zen 2 core

      The original core was already a custom configuration. I don't see why it seems odd that the new version would be a custom configuration based on the previous one.

      > with a different process node

      This doesn't apply to the PS5 SoC, but is general to AMD's methodology.

      AMD has been using an off-chip interposer setup for multiple generations now. They did this specifically to allow for different process nodes for different chips.

      It's cheaper (and there are more fab options) to produce chips on an older, larger process node. If there's no reason to update the CPU, it makes sense to keep it on the cheaper option.

      In regards to the PS5 and Xbox SoCs specifically: the entirety of the SoC is fabbed on the same process node. A core designed for a 14nm process and then fabbed at 7nm (assuming drastic changes aren't needed to make it function at the smaller node) is going to be much smaller and run cooler. That's cheaper, and it leaves more space in the total footprint for the GPU-specific and auxiliary logic cores. The same rule applies as above: why use more if it's not needed?

      > they felt like making a frankenAPU

      All of the game console chips are "frankenAPUs".

      > Especially since the APUs are usually monolithic (vs chiplet) in design and share a memory controller.

      "Monolithic" vs "chiplet" is an arbitrary distinction, in this case. The individual logic cores are still independent and joined together with interposers and glue logic. This is clear from the die shots:

      https://videocardz.com/newz/sony-playstation-5-soc-die-pictu...

      To return to the previous point, look at the space dedicated to the CCXs. The Zen 2 has ~1.9bln transistors, the Zen 3 ~4.1bln, the Zen 4 ~6.6bln, etc. Using a newer core would double or triple that budget, increasing the total die size, making each chip more expensive, and raising the defect rate.
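
      As a rough illustration of why that extra area matters, here's the textbook Poisson yield model with made-up numbers (the defect density and die areas are assumptions, not PS5 figures):

          #include <cmath>
          #include <cstdio>

          int main() {
              // Poisson yield model: yield = exp(-defect_density * area).
              // All numbers are illustrative assumptions, not PS5 figures.
              const double d0        = 0.1;  // defects per cm^2
              const double small_die = 3.0;  // cm^2, older/smaller CPU cores
              const double big_die   = 3.6;  // cm^2, newer/bigger CPU cores

              std::printf("small die yield: %.1f%%\n",
                          100.0 * std::exp(-d0 * small_die));
              std::printf("big die yield:   %.1f%%\n",
                          100.0 * std::exp(-d0 * big_die));
              // Bigger dies also mean fewer candidates per wafer, so cost
              // per good chip rises faster than area alone suggests.
          }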

      > Surely it would have been easier/cheaper to put in 8 zen 4c/5c cores and call it a day.

      Definitely not.

      > I'm pretty sure I'm just missing something obvious...

      Nothing about chip design is obvious.

    • pjmlp 3 days ago

      PS5 Pro makes no sense, yes.

      Most studios aren't even able to push the current PS5 to its limits, given current development schedules and budgets.

      PS5 Pro is for the same target audience as the PS4 Pro: hardcore console fans who will buy whatever the console vendor puts out. And Sony needs to improve their margins.

  • MichaelZuo 4 days ago

    How would you explain cross PS5/PC releases being much more efficient on the PS5?

    e.g. Horizon Forbidden West needing a much better GPU on PC to run at the same level of fidelity as the PS5.

    If not for special tuning specific to the PS5’s differences.

    (I can imagine Windows bloat and other junk requiring an additional 10% to 20%, but not 30% to 50%.)

    • jitl 4 days ago

      The comment above is elaborating on x86 micro-architecture, the differences between how the CPU handles x86 instructions specifically.

      The overall system architecture is different between PC, which has discrete memory systems for the CPU and GPU, and a very long pathway between GPU memory and system/CPU memory, versus today's consoles which have unified memory for CPU+GPU, and optimized pathways for loading from persistent storage too.
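
      As a toy sketch of that difference (GpuBuffer and both upload paths are hypothetical stand-ins, not real Vulkan/D3D12 calls):

          #include <cstring>
          #include <vector>

          // Hypothetical stand-in for a device allocation; a real engine
          // would use Vulkan/D3D12 heaps here.
          struct GpuBuffer { std::vector<char> mem; };

          GpuBuffer gpuAlloc(std::size_t n) {
              return GpuBuffer{std::vector<char>(n)};
          }

          // Discrete-GPU PC path: fill a CPU-visible staging buffer, then a
          // second hop over PCIe into device-local VRAM.
          void uploadDiscrete(const std::vector<char>& asset) {
              GpuBuffer staging = gpuAlloc(asset.size());  // CPU-visible
              std::memcpy(staging.mem.data(), asset.data(), asset.size());
              GpuBuffer vram = gpuAlloc(asset.size());     // device-local
              std::memcpy(vram.mem.data(), staging.mem.data(), asset.size());
          }

          // Unified-memory console path: one pool visible to both CPU and
          // GPU, so the second allocation and the bus transfer disappear.
          void uploadUnified(const std::vector<char>& asset) {
              GpuBuffer shared = gpuAlloc(asset.size());
              std::memcpy(shared.mem.data(), asset.data(), asset.size());
          }

          int main() {
              std::vector<char> asset(1024, 'x');
              uploadDiscrete(asset);
              uploadUnified(asset);
          }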

      Consoles use their own graphics APIs, but any vendor you contract with would be required to support your native graphics API, so that part would be "fine". PS5 games use GNM/GNMX, Sony's proprietary graphics APIs. PC ports of console-native games usually re-implement the rendering engine on top of PC graphics APIs like DirectX or Vulkan, and the re-implementation is probably less efficient and less tuned.

      • TimeBearingDown 3 days ago

        Great answer. Denuvo and other heavy anti-piracy tools are also sometimes used for PC releases, which can seriously impact performance.

      • ac29 3 days ago

        > Usually PC ports of console native games re-implement the rendering engine using the PC graphics APIs like DirectX or Vulkan. The re-implementation is probably less efficient and less tuned.

        This was true 25 years ago, when in-house bespoke game engines were more common and consoles weren't basically PCs. In 2024, I highly doubt many cross-platform games are ported at all - it's just a different target in Unreal/Unity/etc.

        • kuschku 3 days ago

          > I highly doubt many cross-platform games are ported at all - its just a different target in Unreal/Unity/etc.

          Horizon runs on Guerrilla Games' in-house Decima Engine, which is PS5-only for production builds. Ports are handled by Nixxes.

          Kojima's games previously used Konami's in-house Fox Engine, again designed primarily for PlayStation. Since Kojima left Konami, his games have used the Decima Engine as well.

    • yangff 3 days ago

      Horizon Forbidden West was ported from PS to PC. Decima is an engine from one of Sony's first-party studios, so it's understandable that its development would lean more towards the PS's internal architecture than towards the more common GPUs on the market. Of course, even general-purpose engines can perform better on PS5, AMD, or NV. But these engines have less information about how customers will use them, so there's less to optimize with. On the other hand, customers using these engines often don't have enough experience to optimize sufficiently for each platform. None of this is absolute, but I think the logic is reasonable.

      For game developers using these engines, if they take optimization seriously, they typically make adjustments to lighting, LOD, loading, and model details or shaders on console platforms to achieve a similar visual effect while meeting the targeted performance goals. This is why you usually get better performance on a console at the same price point compared to a PC, aside from the subsidies provided by Sony.
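
      Those per-platform adjustments typically end up as data-driven quality tiers, along these lines (a hypothetical sketch; the fields and values are illustrative, not from any real engine):

          // Hypothetical per-platform quality tiers; real engines expose
          // similar knobs, usually via data files rather than code.
          struct QualitySettings {
              float lodBias;         // > 0 drops to lower-detail models sooner
              int   shadowMapSize;   // lighting cost scales with resolution
              bool  halfRateShading; // cheaper shading on distant surfaces
          };

          // Tuned per platform to hit a fixed frame-time budget while
          // keeping the visuals close.
          constexpr QualitySettings kConsole{0.5f, 2048, true};
          constexpr QualitySettings kHighEndPC{0.0f, 4096, false};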

    • tacticus 3 days ago

      > Horizon Forbidden West needing a much better GPU on PC to run at the same level of fidelity as the PS5.

      not being expected to run with variable refresh rate/interleaving and accepting 30/60 fps in best-case situations?

    • deaddodo 3 days ago

      > They almost all use off-the-shelf middleware and game engines that are built to support all of the platforms. The companies that do go through that effort tend to have very notable releases.

  • HelloNurse 3 days ago

    And, more simply, Moore's Law should ensure that in a next-generation console with a new microprocessor architecture, slowdowns in some instructions and memory access patterns are compensated by a general speedup, limiting performance regressions to terribly unfortunate cases (which should be unlikely, and so obvious that they get mitigated).

mikepavone 4 days ago

> An important non-obvious detail for AMD is that they (at least in the past, I assume for this as well) have kept the instruction timings similar from generation to generation of consoles.

What? The Jaguar-based CPU in the PS4 has both a much lower clock and substantially lower IPC than the Zen 2 based one in the PS5. The timings are not remotely the same and the micro-architectures are quite different. Jaguar was an evolution of the Bobcat core which was AMD's answer to the Intel Atom at the time (i.e. low cost and low-power, though it was at least an out-of-order core unlike contemporary Atoms).

Going from GCN to RDNA on the GPU side is also a pretty significant architectural change, though definitely much less than going from AMD to Intel would be.

  • senkora 4 days ago

    I did some more research and I was wrong.

    My source was an AMD tech talk from years ago where they mentioned keeping instruction timings the same for backwards compatibility reasons.

    I believe they were talking about this for the XBox One X: https://en.wikichip.org/wiki/microsoft/scorpio_engine#Overvi... (and a similar chip for the PS4 Pro)

    So basically, they upgraded and lightly enhanced the Jaguar architecture, shrunk the process (28nm -> 16nm), but otherwise kept it the same. AMD Zen was released around this time and was far superior, but they decided to stick with Jaguar in order to keep the instruction timings the same.

    I guess that they didn't want two hardware revisions of the same console generation running on different micro-architectures, but they were okay switching the micro-architecture for the next console generation.

jheriko 4 days ago

you clearly haven't played a modern game :P

developers worrying about cpu timings is 10-15 years out of date. most of them these days don't even know what a dot product is, or how to find the distance to a point or to a straight line in-between two... and the people they rely on to do this for them make horrendous meals of it.

but yeah, sure, cpu instruction timings matter.
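
for reference, the kind of math being lamented above - a minimal point-to-line distance via dot products (a sketch; all names are illustrative):

    #include <cmath>
    #include <cstdio>

    struct Vec2 { float x, y; };

    float dot(Vec2 u, Vec2 v) { return u.x * v.x + u.y * v.y; }

    // distance from p to the infinite line through a and b: project p - a
    // onto b - a, subtract the projection, measure what's left.
    float distPointToLine(Vec2 p, Vec2 a, Vec2 b) {
        Vec2 ab{b.x - a.x, b.y - a.y};
        Vec2 ap{p.x - a.x, p.y - a.y};
        float t = dot(ap, ab) / dot(ab, ab);
        Vec2 d{ap.x - t * ab.x, ap.y - t * ab.y};
        return std::sqrt(dot(d, d));
    }

    int main() {
        // distance from (0, 1) to the x-axis is 1
        std::printf("%f\n", distPointToLine({0, 1}, {0, 0}, {1, 0}));
    }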

  • DaoVeles 4 days ago

    I was about to say. I bailed out of the industry just as the Xbox One/PS4 was coming in. Even with the 360/PS3, it was considered wise to steer clear of that kind of low-level stuff, just for one's sanity. When the X1/PS4 came in, it was completely abandoned; turns out x86 compilers combined with OoO execution made that kind of tinkering not only nearly pointless but sometimes actively harmful to performance.

    Nowadays, I suspect it is almost entirely in the hands of the compilers, the APIs and the base OS to figure out the gritty details.

    • xgkickt 3 days ago

      There are still manual optimizations that can be done (non-temporal writes where appropriate for example), but nothing like the painstaking removal of Load-Hit-Stores and cache control of the 360/PS3 era.
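
      For example, a streaming fill might look like this (a sketch; it assumes dst is 16-byte aligned, as _mm_stream_ps requires):

          #include <immintrin.h>
          #include <cstddef>

          // Non-temporal stores bypass the cache hierarchy: useful when
          // writing large buffers the CPU won't read back soon. Assumes dst
          // is 16-byte aligned, as _mm_stream_ps requires.
          void fill_streaming(float* dst, float value, std::size_t count) {
              __m128 v = _mm_set1_ps(value);
              std::size_t i = 0;
              for (; i + 4 <= count; i += 4)
                  _mm_stream_ps(dst + i, v); // write-combining, no cache fill
              for (; i < count; ++i)
                  dst[i] = value;            // scalar tail
              _mm_sfence();                  // order the streaming stores
          }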

  • Meganet 3 days ago

    The new chip will be considerably faster regardless. I would bet that bandwidth between certain components is a lot more critical, or NUMA effects and bandwidth between cores.

    I'm surprised that CPU instruction latency is mentioned before those other factors.

lxgr 3 days ago

Given the size of such a contract, wouldn't it be reasonable for Sony to just require instruction latencies equal to or better than the old CPU's for everything relevant?