ksec 14 hours ago

This is not really about ARMv9 the ISA, but much more about their new CEO Rene Haas. Arm has always priced its designs on the lower end, bundling GPU and other design IP. I have long argued that since they entered the 64-bit era, their performance profile and their profits have not aligned well, especially when compared to AMD and Intel.

Even with the increased pricing, the Cortex X5 / X925 and upcoming X6 / X930 are still pretty good value. Unless Apple has something big with the A19 / M5, the X6 / X930 should already be competitive with the M4. I just wish they spent a little more money on R&D for the GPU IP side of things.

Hopefully we get some more news from Nvidia at Computex 2025.

  • ryao 11 hours ago

    AMD and Intel actually fabricate chips for sale to others (outsourced to TSMC in AMD’s case) and take the risks associated with that. ARM on the other hand is just an IP provider. They are not comparable. ARM should have kept its original strategy of aiming to profit from volume that enabled its rise in the first place. Its course change likely looks great to SoftBank’s investors for now, but it will inevitably kill the goose that lays the golden eggs as people look elsewhere for what ARM was.

    That said, ARM’s increased license fees are a fantastic advocate for RISC-V. Some of the more interesting RISC-V cores are Tenstorrent’s Ascalon and Ventana’s Veyron V2. I am looking forward to them being in competition with ARM’s X925 and X930 designs.

    • tails4e 10 hours ago

      RISC-V is not immune to license fees, unless you want to design a high-performance core from the ground up. If you want something as capable as an M4, there are years of R&D needed to get to that level. I'm sure a big player could do just that in house, but many would license from SiFive or similar. It will be interesting to see if Qualcomm and the like make a move towards RISC-V, given their ARM legal issues.

      • ryao 7 hours ago

        There are an incredible number of companies designing their own RISC-V cores right now. Some of them are even making some of their designs entirely open source so that they are royalty free. The highest-end designs are not, but it is hard to imagine their creators not undercutting ARM’s license fees, since that is money they would not have otherwise.

        As for Qualcomm, they won the lawsuit ARM filed against them. Changing from ARM to RISC-V would delay their ambition to take market share from Intel and AMD, so they are likely content to continue paying ARM royalties because they have their eyes on a much bigger prize. It also came out during the lawsuit that Qualcomm considers their in-house design team to be saving them billions of dollars in ARM royalty fees, since they only need to pay royalties for the ISA and nothing else when they use their own in-house designs.

      • xbmcuser 8 hours ago

        China will likely be the country taking RISC-V forward and ditching Arm and x86 completely. With the USA trying to stop other countries from using the latest Chinese tech, China has even more reason to ditch any and all proprietary US tech. So over the next decade I expect the RISC-V architecture to enter and flood all Chinese tech devices, from TVs to cars and everything else that needs a CPU.

        I personally hope China gets competitive on node size as well, since I want GPUs and CPUs to start getting cheaper every generation again. Once TSMC got a big lead over Intel/Samsung and Nvidia got a big lead over AMD, prices stopped coming down generation to generation for CPUs and GPUs.

      • rollcat 5 hours ago

        Correct me if I am wrong, but in RISC-V's case, you would be licensing the core design alone, not the ISA plus the core on top.

        Right now, AFAIK only Apple is serious about designing their own ARM cores, while there are multiple competing implementations for RISC-V (which are still way behind both ARM and x86, but slooowly making their way).

        VERY long-term, I expect RISC-V to become more competitive, unless whoever-owns-ARM-at-the-time adjusts strategy.

        Either way, I'm glad to see competition after decades of Intel/x86 dominance.

        • ryao 5 hours ago

          Qualcomm has a serious development effort in their Oryon CPU cores. Marvell had ThunderX from the Cavium acquisition, but they seem to have discontinued development.

      • solarkraft 3 hours ago

        Yes, but the playing field is different. Anyone can become a RISC-V IP provider, and many such companies have already been created.

    • ksec 4 hours ago

      MediaTek and others are using ARMv9 designs at ARM's pricing; heck, even Qualcomm is selling its SoCs for Windows PCs at a cheaper price than Intel or AMD.

      Even at a higher IP price, their final products are cheaper, faster and competitive. There may be a strategy in leaving money on the table, but it is another thing to leave TOO much money on the table. If Intel and AMD's pricing is so far above ARM's, there is nothing wrong with increasing the pricing of your highest-performance core.

      I would not be surprised if in 2-3 years' time the highest-performance PC CPU / SoC comes from Nvidia with ARM CPU cores rather than x86. But knowing Nvidia, they will charge pricing similar to Intel's :D

      • ryao 3 hours ago

        So far, Qualcomm is not paying the royalty rate hikes since they are selling ARM hardware using cores covered under the ARMv8 architectural license that they obtained before SoftBank started pushing ARM to improve profitability.

        It is interesting that you should mention MediaTek. They joined the RISC-V Software Ecosystem in May 2023:

        https://riseproject.dev/

        It seems reasonable to think that they are considering jumping ship. If they are designing their own in-house CPU cores, it will likely be a while before we see them as part of a MediaTek SoC.

        In any case, people do not like added fees. They had previously tolerated ARM’s fees since they were low, but now that ARM is raising them, people are interested in alternatives. At least some of ARM’s partners are paying the higher fees for now, but it is an incentive to move to RISC-V, which has no fee for the ISA and either no fee or a low fee for IP cores. For example, the Hazard3 cores that the Raspberry Pi Foundation adopted in the RP2350 did not require them to pay royalty fees to anyone.

margorczynski 9 hours ago

Doesn't ARM have a problem with RISC-V and Chinese CPUs? Long term, it seems they're bound to lose most of the market by simply being priced out.

  • chvid 8 hours ago

    Most of the high end Chinese chips are based on ARM as of now.

    • surajrmal 2 hours ago

      What about in 5 years? RISC-V isn't competitive at the top end, but it's closing in fast.

      • chvid an hour ago

        Huawei is probably the one most likely to move away from ARM, because they have their own operating system. The US could tighten its controls more and ban them from ARM altogether; so far they are prohibited from ARMv9.

        I am not sure Huawei would go for RISC-V - they could easily go for their own ISA or an ARM fork.

moshegramovsky 13 hours ago

Timothy Prickett Morgan is a fantastic writer and analyst. Love reading his stuff.

Neywiny 16 hours ago

As an almost exclusively microcontroller user of Arm's products, a big meh from me. v8 is still slowly rolling out. M33 is making headway but I was really hoping for M55 to be the bigger driver.

  • jauntywundrkind 15 hours ago

    That folks are still making new Cortex-A7 (2011) designs is wild. The A35 doesn't seem to be very popular, or much better.

    Cortex-M33 (2016) derives, as you allude to, from ARMv8-M (2015). But yeah, it seems only barely popular, even now.

    Having witnessed some of the 90s' and aughts' computing, I never in a million years would have guessed microcontrollers and power-efficient small chips would see so little change across a decade of time!!

    • Neywiny 8 minutes ago

      At a trade show I saw a chip coming out with DDR3L. Imagine a 2025 chip with RAM from 15? years ago. They said it's all that they needed. Probably have a perpetual license or something.

    • conradev 14 hours ago

      Isn’t there some dynamic at play where STM will put one of these on a board, that board becomes a “standard” and then it’s cloned by other manufacturers, lowering cost? (legality aside)

      STM32H5 in 2023 (M33): https://newsroom.st.com/media-center/press-item.html/p4519.h...

      GD32F5 in 2024: https://www.gigadevice.com/about/news-and-event/news/gigadev...

      STM32N6 in 2025 (M55): https://blog.st.com/stm32n6/

      i.e. it takes some time for new chips to hit cost targets, and most applications don’t need the latest chips?

      I don’t know a lot about the market, though, and am interested to learn more.

      • jauntywundrkind 3 hours ago

        Some chips that have come out in the past 3 years with Cortex A7:

        Microchip SAMA7D65 and SAMA7G54. Allwinner V853 and T113-S3.

        It's not like there's a massive stream of A7s. But even pretty big players don't really seem to have any competitive options to try. The A35 has some adoption. There are an A34 and an A32 that I don't see much of; I don't know what they'd bring above the A7. All over a decade old now and barely seen.

        To be fair, just this year ARM announced the Cortex-A320, which I don't know much about, but it might perhaps be a viable new low-power core.

    • duskwuff 14 hours ago

      You can get a lot of mileage out of a Cortex-M7. NXP has some which run up to 1 GHz - that's a ridiculous amount of power for a "microcontroller". It'd easily outperform an early-to-mid-2000s desktop PC.

      • adrian_b 9 hours ago

        There are no similarities between Cortex-M7 and Cortex-A7 from the POV of obsolescence.

        Cortex-M7 belongs to the biggest-size class of ARM-based microcontrollers. There is one newer replacement for it, Cortex-M85, but for now Cortex-M7 is not completely obsolete, because it is available in various configurations from much more vendors and at lower prices than Cortex-M85.

        Cortex-M7 and its successor Cortex-M85 have similar die sizes and instructions-per-clock performance with the Cortex-R8x and Cortex-A5xx cores (Cortex-M5x, Cortex-R5x and Cortex-A3x are smaller and slower cores), but while the Cortex-M8x and Cortex-R8x cores have short instruction pipelines, suitable for maximum clock frequencies around 1 GHz, the Cortex-A5xx cores have longer instruction pipelines, suitable for maximum clock frequencies around 2 GHz (allowing greater throughput, but also greater worst-case latency).

        Unlike Cortex-M7, Cortex-A7 is really completely obsolete. It has been succeeded by Cortex-A53, then by Cortex-A55, then by Cortex-A510, then by Cortex-A520.

        For now, Cortex-A55 is the most frequently used among this class of cores and both Cortex-A7 and Cortex-A53 are truly completely obsolete.

        Even Cortex-A55 should have been obsolete by now, but the inertia in embedded computers is great, so it will remain for some time the choice for cheap embedded computers where the price of the complete computer must be well under $50 (above that price Cortex-A7x or Intel Atom cores become preferable).

    • bsder 14 hours ago

      > I never in a million years would have guessed microcontrollers & power efficient small chips would see so little change across a decade of time

      It's because the software ecosystem around them is so incredibly lousy and painful.

      Once you get something embedded to work, you never want to touch it again if you can avoid it.

      I was really, really, really hoping that the RISC-V folks were going to do better. Alas, the RISC-V ecosystem seems doomed to repeat the same levels of idiocy.

      • 01100011 13 hours ago

        Switching microcontrollers means you have a lot of work to do to redo the HW design, re-run all of your pre-production testing, update mfg/QA with new processes and tests, possibly rewrite some of your application.. and you need to price in a new part to your BOM, figure out a secure supply for some number of years... And that just assumes you don't want to do even more work to take advantage of the new chip's capabilities by rewriting even more of your code. All while your original CPU probably still does fine because this is embedded we're talking about and your product already does what it needs to do.

      • ryao 6 hours ago

        The RP2040 and RP2350 are fairly big changes from the status quo, although they are not very energy efficient compared to other MCUs. Coincidentally, the RP2350 is part of the RISC-V ecosystem. It has both RISC-V and ARM cores and lets you pick which to use.

      • danhor 9 hours ago

        RISC-V is even worse: the Cortex-M series has standardized interrupt handling and is built so you can avoid writing any assembly for the startup code.
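
        As a small illustration of that point (a rough bare-metal sketch; symbol names like _estack and UART0_IRQHandler are made up rather than taken from any vendor SDK): on a Cortex-M part the vector table is just an array of function pointers at a known location, and an interrupt handler can be an ordinary C function because the hardware stacks and unstacks the caller-saved registers itself.

          /* Hypothetical bare-metal Cortex-M startup in plain C; names are illustrative. */
          #include <stdint.h>

          extern uint32_t _estack;        /* top of stack, defined in the linker script   */
          void Reset_Handler(void);
          void UART0_IRQHandler(void);    /* plain C function, no assembly wrapper needed */

          /* Vector table the core reads at reset: entry 0 is the initial SP,
             entry 1 the reset vector, then faults and device IRQs.           */
          __attribute__((section(".isr_vector"), used))
          void (* const vector_table[])(void) = {
              (void (*)(void))&_estack,   /* initial stack pointer */
              Reset_Handler,              /* reset handler         */
              /* ... fault handlers and device IRQs follow ... */
          };

          void UART0_IRQHandler(void) {
              /* The NVIC stacks r0-r3, r12, lr, pc and xPSR on entry and restores
                 them on return, so nothing special is needed here. */
          }

          void Reset_Handler(void) {
              extern int main(void);
              /* .data copy and .bss zeroing would go here; both are expressible in C. */
              main();
              for (;;) { }
          }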

        Meanwhile the RISC-V spec only defines very basic interrupt functionality, with most MCU vendors adding different external interrupt controllers or changing their cores to follow the faster Cortex-M style more closely, where the core itself handles stacking/unstacking registers, exiting the interrupt handler on a normal return, vectoring for external interrupts, and so on.

        The low awareness of, and priority given to, embedded within RISC-V can be seen in how long it took to specify an extension that only includes multiplication, not division.

        Especially for smaller MCUs, the debug situation is unfortunate: in the ARM world you can use any CMSIS-DAP debug probe to debug different MCUs over SWD. RISC-V MCUs either have JTAG or a custom pin-reduced variant (as 4 pins for debugging is quite a lot), which is usually only supported by very few debug probes.

        RISC-V just standardizes a whole lot less (and not sensibly for small embedded) than ARM.

      • fidotron 6 hours ago

        > It's because the software ecosystem around them is so incredibly lousy and painful.

        This is reaching a breaking point entirely because of how powerful modern MCUs are too. You simply cannot develop and maintain software of the scale and complexity needed to exploit those machines using the mainstream practices of the embedded industry.

  • tails4e 10 hours ago

    I am surprised more uC use cases have not moved to RISC-V. What do you see keeping you on ARM for what you work on?

    • Neywiny 12 minutes ago

      Depends on the task. My favorite example is a chip that has a lot more than a microcontroller onboard, but it's an old v7m. I need the rest and have to struggle with what they give. If it were RISC-V, PowerPC, MIPS, whatever, I'd have to use it.

    • ryao 2 hours ago

      The RP2350 lets you choose between 2 RISC-V cores and 2 ARM cores. I believe it even supports 1 RISC-V core and 1 ARM core for those who like the idea of their microcontrollers using two different ISAs simultaneously.
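
      As a rough sketch of what that looks like in practice (generic C, not tied to any particular SDK; the preprocessor macros are the ones GCC/Clang predefine for their targets): the same firmware source can be built for whichever pair of cores you select, with only the ISA-specific corners needing to be gated.

        /* Sketch: one firmware source, built either for the RP2350's Cortex-M33s
           or for its Hazard3 RISC-V cores, depending on which toolchain/platform
           the build is pointed at. */
        #include <stdio.h>

        static const char *isa_name(void) {
        #if defined(__riscv)
            return "RISC-V";        /* predefined by RISC-V GCC/Clang */
        #elif defined(__arm__)
            return "ARM (Thumb)";   /* predefined by ARM GCC/Clang    */
        #else
            return "unknown";
        #endif
        }

        int main(void) {
            printf("running on the %s cores\n", isa_name());
            return 0;
        }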

      Microchip Technology has a number of RISC-V options.

  • the__alchemist 14 hours ago

    Same. On v7 still for most things, even on newer MCUs. The v8 ones, for the use cases I've encountered, primarily add IoT features like secured flash.

SillyUsername 5 hours ago

ARM used to be UK owned, until the Conservative government's lack of foresight allowed it to be sold to SoftBank and to leave AIM (the UK's NASDAQ, part of the LSE), despite it being in the national interest, and in the interest of national security, to keep it British. Thanks, Mrs May (ex-PM), for approving that one (the finding that it was not against national security interests was the last regulatory hurdle, so it had to go past her).

Of course Boris Johnson (the next PM) __tried to woo ARM back to the LSE__ because they realised they had fucked up, and of course what huge foreign company would refloat on the LSE when you have NASDAQ, or bother floating on both?

Can you imagine if America had decided to allow Intel or Apple to be sold to a company in another country? Same sentiment.

- Yep I'm a pissed off ex-ARM shareholder forced out by the board's buyout decision and Mrs May waving it through.

  • surajrmal an hour ago

    They did the world a favor by indirectly helping RISC-V. So arguably it's a net positive move.

AtlasBarfed 13 hours ago

Before reading article: I would like to know if this architecture will help Linux close to Apple architecture efficiencies....

After reading article: I suddenly realize that CPUs will probably no longer pursue making "traditional computing" any faster or more efficient. Instead, everything will be focused on AI processing. There are absolutely no market/hype forces that will prompt investment in "traditional" computing optimization anymore.

I mean, yeah, there's probably three years of planning and execution inertia, but any push to close the gap with Apple by ARM / AMD / Intel is probably dead, and Apple will probably stop innovating the M series.

  • surajrmal 17 minutes ago

    It makes sense to focus. Efficiencies in CPU design are not going to have as large an impact on user workloads as focused improvements on inference workloads. The average phone user will be happier for the longer battery life, as the onslaught of AI workloads from software companies is not likely to slow, and battery life will be wrecked if nothing changes.

  • tlb 8 hours ago

    The 128- and 256-core ARM server chips (like from Ampere) are pushing server performance in interesting ways. They're economically viable now for trivially parallelizable things like web servers, but possibly game-changing if your problem can put that many general-purpose cores to work.

    The thing is, there aren't that many HPC applications for that level of parallelism that aren't better served by GPUs.

  • maz1b 12 hours ago

    You think so? I posit that the delivery of AI/ML (LLM/genAI) services and experiences is predicated upon "traditional computing" - so there will be some level of improvement in this domain for at least quite some time longer.

  • Calwestjobs 13 hours ago

    Apple M4 vs Intel Core Ultra 9 285K. Apple M4 vs AMD Ryzen AI 9 365.

    Apple has to do something.

    I'm not sure whether Intel CPUs can have 196GB of RAM, or whether it is some mobile RAM manufacturing limit, but I really want to have at least 96GB in a notebook or tablet.

    • wqaatwt 12 hours ago

      The M4 still has >2x better performance per watt than either of those chips. Of course, they are pretty much ignoring desktop, so they can't really compete with AMD/Intel when power is not an issue, but that's not exactly new.

      • adrian_b 8 hours ago

        M4 has ">2x better performance per watt" than either Intel or AMD only in single-threaded applications or applications with only a small number of active threads, where the advantage of M4 is that it can reach the same or a higher speed at a lower clock frequency (i.e. the Apple cores have a higher IPC).

        For multithreaded applications, where all available threads are active, the advantage in performance per watt of Apple becomes much lower than "2x" and actually much lower than 1.5x, because it is determined mostly by the superior CMOS manufacturing process used by Apple and the influence of the CPU microarchitecture is small.

        While the big Apple cores have a much better IPC than the competition, i.e. they do more work per clock cycle so they can use lower clock frequencies, therefore lower supply voltages, when at most a few cores are active, the performance per die area of such big cores is modest. For a complete chip, the die area is limited, so the best multithreaded performance is obtained with cores that have maximum performance per area, so that more cores can be crammed in a given die area. The cores with maximum performance per area are cores with intermediate IPC, neither too low, nor too high, like ARM Cortex-X4, Intel Skymont or AMD Zen 5 compact. The latter core from AMD has a higher IPC, which would have led to a lower performance per area, but that is compensated by its wider vector execution units. Bigger cores like ARM Cortex-X925 and Intel Lion Cove have very poor performance per area.
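
        To make the performance-per-area argument concrete, here is a back-of-the-envelope sketch; every number in it is invented purely for illustration and is not a measurement of any real core. For a fixed die-area budget, the cheaper intermediate-IPC core wins on aggregate throughput simply because more of them fit.

          /* Back-of-the-envelope sketch of the performance-per-area argument.
             All figures below are made up for illustration only. */
          #include <stdio.h>

          struct core {
              const char *name;
              double ipc;        /* average instructions per clock      */
              double freq_ghz;   /* sustained all-core clock            */
              double area_mm2;   /* die area per core, including its L2 */
          };

          int main(void) {
              const double budget_mm2 = 36.0;  /* hypothetical area budget for CPU cores */
              const struct core cores[] = {
                  { "big, high-IPC core", 6.0, 3.2, 3.0 },  /* fast per core, costly per mm^2  */
                  { "mid-IPC core",       4.0, 3.4, 1.2 },  /* slower per core, cheap per mm^2 */
              };

              for (unsigned i = 0; i < sizeof cores / sizeof cores[0]; i++) {
                  const struct core *c = &cores[i];
                  int n = (int)(budget_mm2 / c->area_mm2);   /* how many cores fit        */
                  double per_core = c->ipc * c->freq_ghz;    /* Ginstructions/s, per core */
                  printf("%-20s: %2d cores, %5.1f per core, %6.1f aggregate\n",
                         c->name, n, per_core, n * per_core);
              }
              return 0;
          }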

      • guiriduro 9 hours ago

        Apple is ignoring desktop?

        • joshstrange 8 hours ago

          I guess that depends on your definition of “desktop”.

          What that really means (I think) is they aren’t using the power and cooling available to them in traditional desktop setups. The iMac and the Studio/Mini and yes, even the Mac Pro, are essentially just laptop designs in different cases.

          Yes, they (Studio/Pro) can run an Ultra variant (vs Max being the highest on the laptop lines), but the 2x Ultra chip so far has not materialized. Rumors say Apple has tried it but either couldn't get efficiencies to where they needed to be or ran into other problems connecting 2 Ultras to make a ???.

          The current Mac Pro would be hilarious if it wasn't so sad; it's just a "Mac Studio with expansion slots". One would expect/hope that the Mac Pro would take advantage of the space in some way (other than just expansion slots, which most people have no use for aside from GPUs, which the OS can't/won't leverage IIRC).

    • znpy 9 hours ago

      > but i really want to have atleast 96GB in notebook, tablet.

      In notebooks it's been possible for years. A friend of mine had 128GB (4x32GB DDR4) in his laptop about 4-6 years ago already. It was a Dell Precision workstation (2100 euros for the laptop alone, Core i9 CPU, nothing fancy).

      Nowadays you can get individual 64GB DDR5 laptop RAM sticks. As long as you can find a laptop with two RAM sockets, you can easily get 128GB of memory on a laptop.

      Regarding tablets... it's unlikely to be seen in the near future. Tablet OEMs cater to the general consumer market, where <=16GB of RAM is more than enough (and 96GB of memory would cost more than the rest of the hardware, for no real user/market/sales advantage).

    • Etheryte 9 hours ago

      I think this largely misses the point. Power users, so most of the users on HN, are a niche market. Most people don't need a hundred gigs of RAM, they need their laptop to run Powerpoint and a browser smoothly and for the battery to last a long time. No other manufacturer is anywhere close to Apple in that segment as far as I'm concerned.

    • junon 8 hours ago

      Some Intel chips have a max of 192GiB. Others 4TiB. It depends on the chip, but there are definitely machines running terabytes of memory.