Comment by ksec 7 hours ago

33 replies

If Intel decides to focus on its foundry business, I just wish AMD and Intel could work together and make a cleaned-up subset of the x86 ISA open source, or at least available for licensing. I don't want it to end up like the MIPS or POWER ISA, where everything was too little, too late.

holowoodman 7 hours ago

A subset of an ISA will be incompatible with the full ISA and will therefore effectively be a new ISA. No existing software will run on it, so this won't really help anyone.

And x86 isn't that nice to begin with; if you're going to break compatibility anyway, you might as well start from scratch and create a new, homogeneous, well-designed, modern ISA.

  • ksec 3 hours ago

    There is software compiled today that doesn't use MMX at all. What I had in mind for something open, or available for licensing, is an x86 subset ISA that is forward compatible. Customers that require strict backward compatibility could still source chips from AMD and Intel.

    I.e., software compiled for the subset should still work on full x86. The value of backward compatibility stays with Intel and AMD, and if the market wants something in between, it now has an option.

    I know this isn't a sexy idea, because HN and most tech people like something shiny and new. But I have always liked the idea of extracting value from the "old and tried" solutions.
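
    To make that concrete, here is a minimal sketch of "compiled for the subset should still work on full x86", assuming GCC or Clang (the -march microarchitecture levels need GCC 11+ / Clang 12+); what exact baseline a hypothetical open subset would pin down is of course not defined anywhere yet:

        /* baseline_demo.c -- illustrative only.
         *
         * Build against the lowest common denominator (original x86-64, SSE2):
         *   cc -O2 -march=x86-64 baseline_demo.c -o demo
         * Build assuming a richer machine (roughly "x86-64-v3", includes AVX2):
         *   cc -O2 -march=x86-64-v3 baseline_demo.c -o demo
         *
         * The compiler's predefined macros let one source tree adapt at build
         * time, so a binary built for the baseline would run unchanged on a
         * hypothetical subset-only part as well as on any full Intel/AMD CPU.
         */
        #include <stdio.h>

        int main(void) {
        #if defined(__AVX2__)
            puts("built with AVX2 enabled");
        #elif defined(__SSE2__)
            puts("built for the plain x86-64 baseline (SSE2 only)");
        #else
            puts("built without SSE2 -- not a standard x86-64 target");
        #endif
            return 0;
        }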

    • Scoundreller an hour ago

      Sadly, over the past year Spotify builds have started requiring AVX. I had an issue updating my semi-upgraded 2008 Dell bench PC, which has a Q9300 in it (no AVX).

      But thankfully I could install an old bin and lock it out from updating.

      Intel's Software Development Emulator might run the newest bin, but it's anyone's guess how slow it would be.

      In other circumstances AVX isn't actually required, but the app is compiled to fail if it isn't present: https://www.reddit.com/r/pcgaming/comments/pix02j/hotfix_for...
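
      For what it's worth, the graceful fallback those builds skip is only a few lines, assuming GCC or Clang on x86; the do_work_* helpers here are just placeholders:

          /* avx_fallback.c -- illustrative sketch, GCC/Clang on x86 only. */
          #include <stdio.h>

          static void do_work_avx(void)      { puts("taking the AVX code path"); }
          static void do_work_baseline(void) { puts("taking the SSE2 baseline path"); }

          int main(void) {
              /* __builtin_cpu_supports() checks CPUID at runtime, so the same
               * binary runs on a Q9300-era Core 2 (no AVX) and on current CPUs,
               * instead of dying with an illegal-instruction fault. */
              if (__builtin_cpu_supports("avx"))
                  do_work_avx();
              else
                  do_work_baseline();
              return 0;
          }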

  • fooker 5 hours ago

    Software or microcode emulation works pretty well.

    So it would be faster and more efficient when sticking to the new subset, and Nx slower when using the emulation path.

    • kimixa 5 hours ago

      You could argue that microcode emulation is what they do now.

tester756 4 hours ago

>I just wish AMD and Intel could work together and make a cleaned-up subset of the x86 ISA

AMD and Intel Celebrate First Anniversary of x86 Ecosystem Advisory Group Driving the Future of x86 Computing

Standardizing x86 features

Key technical milestones include:

    FRED (Flexible Return and Event Delivery): Finalized as a standard feature, FRED introduces a modernized interrupt model designed to reduce latency and improve system software reliability.
    AVX10: Established as the next-generation vector and general-purpose instruction set extension, AVX10 boosts throughput while ensuring portability across client, workstation, and server CPUs.
    ChkTag: x86 Memory Tagging: To combat longstanding memory safety vulnerabilities such as buffer overflows and use-after-free errors, the EAG introduced ChkTag, a unified memory tagging specification. ChkTag adds hardware instructions to detect violations, helping secure applications, operating systems, hypervisors, and firmware. With compiler and tooling support, developers gain fine-grained control without compromising performance. Notably, ChkTag-enabled software remains compatible with processors lacking hardware support, simplifying deployment and complementing existing security features like shadow stack and confidential computing. The full ChkTag specification is expected later this year – and for further feature details, please visit the ChkTag Blog.
    ACE (Advanced Matrix Extensions for Matrix Multiplication): Accepted and implemented across the stack, ACE standardizes matrix multiplication capabilities, enabling seamless developer experiences across devices ranging from laptops to data center servers.
fulafel 6 hours ago

90s x86, from an ISA point of view, is already free to use, no? The original patents must have expired, and there's no copyright protection for ISAs. The thing keeping the symbiotic cross-licensed duopoly going is mutating the ISA all the time so they can mix in more recently patented stuff.

  • tracker1 6 hours ago

    AFAIK, most of even the x86_64 patents have expired, or will within the next 6 years. That said, efforts toward a more open platform are probably more likely to center on RISC-V or another ARM alternative than on x86, though I could see standardization of x86-compatible shortcuts for use with emulation platforms on ARM/RISC-V processors. Transmeta was an idea too far ahead of its time.

    • fulafel 5 hours ago

      Remembering the Mac ARM transition pain wrt Docker and Node/Python/Lambda cross builds targeting servers, there's a lot to be said for binary compatibility.

      • tracker1 3 hours ago

        You're doing builds for Docker on your desktop for direct deployment instead of through a CI/CD service?

        My biggest issue was the number of broken apps in Docker on ARM-based Macs, and even then I was mostly able to work around it without much trouble.

      • cmrdporcupine 4 hours ago

        90% of those problems affect people like you and me, developers and power users, not "regular" users of machines, who are mostly mobile-device and occasional laptop/desktop application users.

        I suspect we'll see somebody, a phone manufacturer or similar device maker, make a major transition from ARM to RISC-V in the next 10 years that we won't even notice.

        • fulafel 3 hours ago

          I agree, some will, but it may not be a more open platform from the developer's POV.

  • fweimer 2 hours ago

    I don't think it works that way in practice.

    Some distributions like Debian or Fedora will make newer features (such as AVX/VEX) mandatory only after the patents expire, if ever. So a new entrant could implement the original x86-64 ISA (maybe with some obvious extensions like 128-bit atomics) in that time frame and preempt the patent-based lockout driven by ISA evolution. If there were a viable AMD/Intel alternative that only implemented the baseline ISA, those distributions would never switch away from it.

    It's just not easy to build high-performance CPUs, regardless of ISA.
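
    For what it's worth, the way distributions and libraries keep newer extensions optional today is per-function dispatch on top of that old baseline rather than raising the baseline itself. A rough sketch, assuming GCC or Clang with glibc's ifunc support (the dot() function is just a stand-in example):

        /* multiversion.c -- illustrative sketch, GCC/Clang + glibc (ifunc). */
        #include <stddef.h>
        #include <stdio.h>

        /* The compiler emits a baseline clone and an AVX2 clone plus a
         * resolver that picks one via CPUID at load time, so the binary still
         * runs on a plain x86-64 CPU and merely runs faster where AVX2 exists. */
        __attribute__((target_clones("default", "avx2")))
        double dot(const double *a, const double *b, size_t n) {
            double s = 0.0;
            for (size_t i = 0; i < n; i++)
                s += a[i] * b[i];
            return s;
        }

        int main(void) {
            double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
            printf("dot = %f\n", dot(a, b, 4));
            return 0;
        }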

lloydatkinson 7 hours ago

They recently killed off their attempt at exactly that: X86S.

  • userbinator 18 minutes ago

    The "s" stands for "stupid".

    But it's fortunate that they realised the main attraction of x86 is backwards compatibility, so attempting to do away with that would only lead to even less market share.

  • tester756 4 hours ago

    Wut?

    >AMD and Intel Celebrate First Anniversary of x86 Ecosystem Advisory Group Driving the Future of x86 Computing

    Oct 13, 2025

    Standardizing x86 features

    Key technical milestones include:

        FRED (Flexible Return and Event Delivery): Finalized as a standard feature, FRED introduces a modernized interrupt model designed to reduce latency and improve system software reliability.
    
        AVX10: Established as the next-generation vector and general-purpose instruction set extension, AVX10 boosts throughput while ensuring portability across client, workstation, and server CPUs.
    
        ChkTag: x86 Memory Tagging: To combat longstanding memory safety vulnerabilities such as buffer overflows and use-after-free errors, the EAG introduced ChkTag, a unified memory tagging specification. ChkTag adds hardware instructions to detect violations, helping secure applications, operating systems, hypervisors, and firmware. With compiler and tooling support, developers gain fine-grained control without compromising performance. Notably, ChkTag-enabled software remains compatible with processors lacking hardware support, simplifying deployment and complementing existing security features like shadow stack and confidential computing. The full ChkTag specification is expected later this year – and for further feature details, please visit the ChkTag Blog.
    
        ACE (Advanced Matrix Extensions for Matrix Multiplication): Accepted and implemented across the stack, ACE standardizes matrix multiplication capabilities, enabling seamless developer experiences across devices ranging from laptops to data center servers.
    • wtallis 4 hours ago

      Copying and pasting a press release does not make for a good comment. Especially because you don't seem to have understood what you pasted in, or the context of this discussion. What you're demonstrating is several more new features added to the pile. Intel's retracted X86S proposal was actually about removing legacy features, creating a cleaner subset for the modern era.

      • tester756 4 hours ago

        Both X86S and the advisory group are about the same thing: improving x86.

        As of today it has resulted in more features, but who knows what changes it will bring tomorrow?

        Calling the x86 clean-up initiative dead/cancelled isn't quite fair, since this group is still working.

      • ksec 4 hours ago

        Hi wtallis, any insight into why they abandoned the idea? I was looking forward to X86S, and maybe even to some reshaping of the x86-64 instructions. But it looks like that is largely gone as well.

IshKebab 6 hours ago

Far too late for that. Does anyone seriously think ARM isn't going to obliterate x86 in the next 10-20 years?

  • Keyframe 5 hours ago

    In which space? Desktop and high performance servers? Why would it?

    The mature body of software that would have to be ported from TSO to a weak memory model is a soft moat. So is the mature dominance of AVX/SIMD vs NEON/SVE. x86-64 is a duopoly and a stable target vs the fragmented landscape of ARM. ARM's whole spiel is performance per watt, a scale-out kind of thing vs scale-up, and in that sense the market has already moved. But with ARM, once you start pushing for sustained high throughput and high performance at a 5GHz+ envelope, all those advantages have so far disappeared in favor of x86.

    What might be interesting is if, say, AMD added an ARM frontend decoder to Zen. In one of Jim Keller's interviews that was shared here, he said it wouldn't be that big of a deal to make such a CPU decode ARM instead. That'd be interesting to see.
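
    On the TSO point, here is a minimal sketch of the kind of latent ordering bug that porting tends to surface, assuming C11 atomics and pthreads; on x86 the producer's two stores stay ordered more or less for free, while on ARM's weaker model the release/acquire pair is what actually carries the guarantee:

        /* tso_vs_weak.c -- illustrative sketch of the porting hazard.
         * Build with: cc -O2 -pthread tso_vs_weak.c */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        static int payload;              /* plain, non-atomic data        */
        static atomic_int ready;         /* publication flag, initially 0 */

        static void *producer(void *arg) {
            (void)arg;
            payload = 42;
            /* x86's TSO happens to keep these two stores in order even with a
             * relaxed flag store; ARM may reorder them, so release ordering is
             * what makes the code correct once it is ported. */
            atomic_store_explicit(&ready, 1, memory_order_release);
            return NULL;
        }

        static void *consumer(void *arg) {
            (void)arg;
            while (!atomic_load_explicit(&ready, memory_order_acquire))
                ;                                   /* spin until published */
            printf("payload = %d\n", payload);      /* always prints 42 */
            return NULL;
        }

        int main(void) {
            pthread_t p, c;
            pthread_create(&c, NULL, consumer, NULL);
            pthread_create(&p, NULL, producer, NULL);
            pthread_join(p, NULL);
            pthread_join(c, NULL);
            return 0;
        }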

    • philistine 4 hours ago

      > In which space? Desktop and high performance servers? Why would it?

      Laptops. Apple already owned the high-margin laptop market before they switched to ARM. With phones, tablets, laptops above $1k, and all the other doodads running ARM, it's not that x86 will simply disappear. Of course not. But the investments simply aren't comparable anymore, with ARM being an order of magnitude more common. x86 is very slowly losing steam, with its chips generally behind in performance per watt. And it's not because of any specific problem or mistake; it just no longer makes economic sense.

  • tracker1 6 hours ago

    Well, given some of the political/legal gamesmanship around the company itself over the past few years, ARM could very well self-destruct in favor of RISC-V or something else entirely in the next decade; who knows.

  • fulafel 5 hours ago

    Look how long SPARC, z/Architecture, PowerPC, etc. have kept going even after they lost their strong market positions (a development that is nowhere in sight for x86), and they had a tiny fraction of the inertia of the x86 software base.

    Obliterating x86 in that time frame would take quite a lot more than ARM's current trajectory. ARM has had 40 years to try by now, and its technical advantage window (the power-efficiency advantage) has closed.

  • fweimer 2 hours ago

    It seems to me that interest in AArch64 for on-premises general-purpose compute workloads has largely waned. Are Dell/HPE/Lenovo currently selling AArch64 servers? Maybe there is a rack-mounted Nvidia DGX variant, but that's more focused on GPU compute for sure.

  • tester756 4 hours ago

    >Does anyone seriously think ARM isn't going to obliterate x86 in the next 10-20 years?

    Lunar Lake shows that x86 is capable of that kind of energy efficiency.

    Panther Lake, due for release in around 30 days, is expected to show a significant improvement over Lunar Lake.

    So... why switch to ARM if you get similar performance and energy efficiency?

  • izacus 5 hours ago

    20 years is half of x86's lifetime and less than half of the lifetime of home computing as we know it.

    So this is kind of a useless question, because in such a timespan anything can happen. 20 years ago, computers had somewhere around 512MB of RAM and a single core, with a CRT on the desk.

  • [removed] 2 hours ago
    [deleted]
  • zzzoom 4 hours ago

    Why would the market jump from one proprietary ISA to another proprietary ISA?