Comment by bsder 18 hours ago

> I never in a million years would have guessed microcontrollers & power efficient small chips would see so little change across a decade of time

It's because the software ecosystem around them is so incredibly lousy and painful.

Once you get something embedded to work, you never want to touch it again if you can avoid it.

I was really, really, really hoping that the RISC-V folks were going to do better. Alas, the RISC-V ecosystem seems doomed to repeat the same levels of idiocy.

01100011 16 hours ago

Switching microcontrollers means you have a lot of work to do: redo the HW design, re-run all of your pre-production testing, update mfg/QA with new processes and tests, possibly rewrite some of your application... and you need to price a new part into your BOM and figure out a secure supply for some number of years. And that assumes you don't want to do even more work to take advantage of the new chip's capabilities by rewriting even more of your code. All while your original CPU probably still does fine, because this is embedded we're talking about and your product already does what it needs to do.

ryao 10 hours ago

The RP2040 and RP2350 are fairly big changes from the status quo, although they are not very energy efficient compared to other MCUs. Coincidentally, the RP2350 is part of the RISC-V ecosystem. It has both RISC-V and ARM cores and lets you pick which to use.

danhor 12 hours ago

RISC-V is even worse: the Cortex-M series has standardized interrupt handling and is built so you can avoid writing any assembly for the startup code.

Meanwhile, the RISC-V spec only defines very basic interrupt functionality, with most MCU vendors adding different external interrupt controllers or changing their cores to follow the faster Cortex-M style more closely, where the core itself handles stacking/unstacking registers, exiting the interrupt handler on return, vectoring for external interrupts, and so on.
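
To make the contrast concrete, here is a hedged sketch (handler names are illustrative, not from any particular SDK): on Cortex-M any ordinary C function can sit in the vector table because the hardware stacks the caller-saved registers, while on a bare RISC-V core you typically need a compiler attribute, or hand-written assembly, to get the register save/restore and the mret return:

    extern void handle_uart_byte(void);

    /* Cortex-M: a plain C function works as a handler. The NVIC
       stacks r0-r3, r12, lr, pc and xPSR on entry and unstacks
       them on return, so no special prologue is needed. */
    void UART0_IRQHandler(void)
    {
        handle_uart_byte();
    }

    /* Bare RISC-V: the core does no stacking for you. GCC/Clang's
       interrupt attribute makes the compiler emit the full register
       save/restore and return with mret instead of ret; the handler
       address is installed via the mtvec CSR. */
    void __attribute__((interrupt("machine"))) machine_trap_handler(void)
    {
        handle_uart_byte();
    }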

The low knowledge/priority of embedded within RISC-V can be seen in how long it took to specify an extension (Zmmul) that only includes multiplication, not division.

Especially for smaller MCUs, the debug situation is unfortunate: in the ARM world you can use any CMSIS-DAP debug probe to debug different MCUs over SWD. RISC-V MCUs have either JTAG or a custom pin-reduced variant (since even 4 pins for debugging is quite a lot), which is usually supported by only a few debug probes.

RISC-V just standardizes a whole lot less (and not sensibly for small embedded) than ARM.

  • ryao 10 hours ago

    Being customizable is one of RISC-V’s strengths. Multiplication can be easily done in software by doing bit shifts and addition in a loop. If an embedded application does not make heavy use of multiplication, you can omit multiplication from the silicon for cost savings.
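
    A minimal sketch of that software fallback (a textbook shift-and-add loop; the function name is illustrative):

        #include <stdint.h>

        /* Shift-and-add multiply: one conditional add per bit of b.
           This is the classic software fallback when no hardware
           multiplier (M or Zmmul) is present in the silicon. */
        static uint32_t mul_soft(uint32_t a, uint32_t b)
        {
            uint32_t result = 0;
            while (b != 0) {
                if (b & 1)
                    result += a;    /* this bit of b contributes a */
                a <<= 1;            /* next bit weighs twice as much */
                b >>= 1;
            }
            return result;
        }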

    That said, ARM’s SWD is certainly nice. It appears to be possible to debug the Hazard3 cores in the RP2350 in the same way as the ARM cores:

    https://gigazine.net/gsc_news/en/20241004-raspberry-pi-pico-...

    • magicalhippo 9 hours ago

      > If an embedded application does not make heavy use of multiplication, you can omit multiplication from the silicon for cost savings.

      The problem was that the original extension that included multiplication (the M extension) also included division[1]. A lot of small microcontrollers have multiplication hardware but not division hardware.

      Thus it would make sense to have a multiplication-only extension.

      IIRC the argument was that the CPU should just trap the division instructions and emulate them, but in the embedded world you want to know your performance envelope, so it is better to know explicitly whether hardware division is available.
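
      As a sketch of what that emulation amounts to (a textbook restoring shift-subtract divide; a real trap handler would also decode the faulting instruction and write the quotient back to the register file):

          #include <stdint.h>

          /* Software unsigned divide: one compare/subtract per bit
             (32 iterations), far slower than a hardwired divider --
             hence the performance concern. */
          static uint32_t divu_soft(uint32_t n, uint32_t d)
          {
              uint32_t q = 0, r = 0;
              if (d == 0)
                  return UINT32_MAX;  /* RISC-V defines DIVU by 0 as all-ones */
              for (int i = 31; i >= 0; i--) {
                  r = (r << 1) | ((n >> i) & 1);  /* bring down bit i */
                  if (r >= d) {
                      r -= d;         /* subtract fits: quotient bit is 1 */
                      q |= 1u << i;
                  }
              }
              return q;
          }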

      [1]: https://docs.openhwgroup.org/projects/cva6-user-manual/01_cv...

      • ryao 9 hours ago

        Software division is often faster than hardware division, so your performance remark seems to be a moot point:

        https://libdivide.com/
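
        For context, libdivide's trick is precomputing a multiplicative inverse for a divisor that gets reused across many dividends; a minimal sketch of its C API:

            #include <stdint.h>
            #include "libdivide.h"

            /* Precompute once, then each divide becomes a multiply
               plus shifts. Pays off when the same (nonzero) divisor
               is applied to many numerators. */
            int32_t sum_of_quotients(const int32_t *vals, int n, int32_t d)
            {
                struct libdivide_s32_t fast_d = libdivide_s32_gen(d);
                int32_t acc = 0;
                for (int i = 0; i < n; i++)
                    acc += libdivide_s32_do(vals[i], &fast_d);  /* vals[i] / d */
                return acc;
            }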

fidotron 9 hours ago

> It's because the software ecosystem around them is so incredibly lousy and painful.

This is reaching a breaking point precisely because of how powerful modern MCUs have become. You simply cannot develop and maintain software of the scale and complexity needed to exploit those machines using the mainstream practices of the embedded industry.