Comment by chubot 15 hours ago

That looks more like a SIMD problem than a multi-core problem

You want bigger units of work for multiple cores; otherwise the coordination overhead will outweigh the work the application is doing
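
Not from the thread — a minimal Rust sketch of the point, assuming the workload is something embarrassingly parallel like a vector sum: hand each core one coarse chunk instead of spawning a task per element, so each thread does enough work to amortize the coordination cost.

```rust
// Sketch: split a sum into one coarse chunk per thread, rather than
// one task per element. The names here are illustrative, not from
// the thread.
use std::thread;

fn parallel_sum(data: &[u64], n_threads: usize) -> u64 {
    // Assumes data is non-empty; ceil-divide so every element lands
    // in exactly one chunk.
    let chunk_len = (data.len() + n_threads - 1) / n_threads;
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk_len)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=1000).collect();
    println!("{}", parallel_sum(&data, 4)); // 500500
}
```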

I think the Erlang runtime is probably the best use of functional programming and multiple cores. Since Erlang processes are shared-nothing, I think they will scale to 64 or 128 cores just fine

Whereas the GC will be a bottleneck in most languages with shared memory ... you will stop scaling before using all your cores
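
To illustrate the shared-nothing style being described (sketched in Rust rather than Erlang, since no Erlang appears in the thread): each worker owns its data outright, and only small result messages cross thread boundaries, so there is no shared heap for a collector or allocator to synchronize on. The function name and numbers are made up for the example.

```rust
use std::sync::mpsc;
use std::thread;

// Erlang-style shared-nothing workers, approximated with OS threads
// and channels. Only result messages cross thread boundaries.
fn shared_nothing_sum(n_workers: u64) -> u64 {
    let (tx, rx) = mpsc::channel();
    for id in 0..n_workers {
        let tx = tx.clone();
        thread::spawn(move || {
            // Worker-local heap data: allocated, used, and dropped
            // entirely within this thread -- no cross-thread
            // references, so no shared memory to coordinate over.
            let local: Vec<u64> = (id * 100..(id + 1) * 100).collect();
            let sum: u64 = local.iter().sum();
            tx.send(sum).unwrap();
        });
    }
    drop(tx); // close the channel so rx.iter() terminates
    rx.iter().sum()
}

fn main() {
    println!("{}", shared_nothing_sum(4)); // 0 + 1 + ... + 399 = 79800
}
```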

But I don't think Erlang is as fine-grained as your example ...

Some related threads:

https://news.ycombinator.com/item?id=40130079

https://news.ycombinator.com/item?id=31176264

AFAIU Erlang is not that fast an interpreter; I thought the Pony language was doing something similar (shared nothing?) with compiled code, but I haven't heard about it in a while

fmajid 8 hours ago

Yes, Erlang's zero-sharing model is what I think Rust should have gone for in its concurrency model. Sadly too few people have even heard of it.

  • chubot an hour ago

    That would be an odd choice for a low-level language ... languages like C, C++, and Rust let you use whatever the OS has, and the OS has threads

    A higher-level language can be more opinionated, but a low-level one shouldn't straitjacket you.

    i.e. Rust can be used to IMPLEMENT an Erlang runtime

    If you couldn't use threads, then you could not implement an Erlang runtime.
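
A minimal sketch of that point (my own illustration, not anything from an actual Erlang runtime): the kernel of a user-space scheduler is a run queue of "processes" drained by a pool of OS threads, which is exactly the kind of thing you can only build if the language gives you OS threads in the first place.

```rust
// Sketch: tiny fixed pool of OS threads draining a shared run queue
// of "processes" (plain closures here). All names are illustrative.
use std::collections::VecDeque;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

type Task = Box<dyn FnOnce() + Send>;

fn run_on_pool(tasks: Vec<Task>, n_threads: usize) {
    let queue = Arc::new(Mutex::new(VecDeque::from(tasks)));
    let workers: Vec<_> = (0..n_threads)
        .map(|_| {
            let queue = Arc::clone(&queue);
            // Each worker is a real OS thread pulling tasks off the
            // queue; the lock is released before the task runs.
            thread::spawn(move || loop {
                let task = queue.lock().unwrap().pop_front();
                match task {
                    Some(t) => t(),
                    None => break,
                }
            })
        })
        .collect();
    for w in workers {
        w.join().unwrap();
    }
}

fn count_tasks_run(n_tasks: usize, n_threads: usize) -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let tasks: Vec<Task> = (0..n_tasks)
        .map(|_| {
            let c = Arc::clone(&counter);
            Box::new(move || {
                c.fetch_add(1, Ordering::SeqCst);
            }) as Task
        })
        .collect();
    run_on_pool(tasks, n_threads);
    counter.load(Ordering::SeqCst)
}

fn main() {
    println!("{}", count_tasks_run(8, 4)); // 8
}
```

A real Erlang-style runtime would add per-process heaps, preemption, and mailboxes on top of this skeleton, but the foundation is still OS threads.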

juped 9 hours ago

There's some sharing used to avoid heavy copies, though GC runs at the process level. The implementation is tilted towards copying between isolated heaps over sharing, but it's also had performance work done over the years. (In fact, if I really want to cause a global GC pause bottleneck in Erlang, I can abuse persistent_term to do this.)