Comment by hansvm
Making it dead simple to have different runtimes is exactly the goal. A smattering of examples that have been on my mind recently:
As background, you might ask why you would ever need different runtimes at all. Why not just make everything async and be done with it, especially if the language is able to hide that complexity?
1. In the context of a systems language, that's not an option. You might be writing an OS, embedded code, a game with atypical performance demands that requires more care with IO, some kernel-bypass shenanigans, etc. Even just selecting between a few built-in choices (like single-threaded async vs multi-threaded async vs single-threaded sync) doesn't provide enough flexibility for the range of programs you're trying to let users write.
2. Similarly, even initializing a truly arbitrary IO effect once at compile time doesn't always suffice. Maybe you normally want a multi-threaded solution but need more care with concurrency in some critical section and want to swap in a different IO there. Maybe you normally talk to the ordinary internet but have a mode/section/interface/etc where messages must travel through much stranger networking conditions (20s ping, 99% packet loss, 0.1kbps upload on the far side, custom hardware, etc). Maybe some part of your application needs bounded latency and is fine dropping packets, while another part needs high throughput and no dropped packets at any latency cost. Maybe your disk hardware is such that it makes sense for networking to be async and disk access to be sync. And so on. In a world with a single IO implementation you can potentially hack around this with different compilation units or something, but it gets complicated.
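To make "swap in a different IO" concrete, here's a minimal sketch of what per-call swapping can look like, written in the style of Zig's existing `std.mem.Allocator` ptr + vtable pattern. None of the names here (`Io`, `BufferIo`, `DropEverything`, `sendHeartbeat`) come from the actual proposal; they're invented purely for illustration.

```zig
const std = @import("std");

// Hypothetical Io interface, mirroring std.mem.Allocator's ptr + vtable shape.
const Io = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    const VTable = struct {
        write: *const fn (ptr: *anyopaque, bytes: []const u8) anyerror!usize,
    };

    fn write(self: Io, bytes: []const u8) anyerror!usize {
        return self.vtable.write(self.ptr, bytes);
    }
};

// One backing implementation: synchronous copies into a fixed buffer,
// standing in for ordinary blocking IO.
const BufferIo = struct {
    buf: [64]u8 = undefined,
    len: usize = 0,

    fn write(ptr: *anyopaque, bytes: []const u8) anyerror!usize {
        const self: *BufferIo = @ptrCast(@alignCast(ptr));
        const n = @min(bytes.len, self.buf.len - self.len);
        @memcpy(self.buf[self.len..][0..n], bytes[0..n]);
        self.len += n;
        return n;
    }

    fn io(self: *BufferIo) Io {
        return .{ .ptr = self, .vtable = &.{ .write = write } };
    }
};

// Another: the bounded-latency transport that's happy to drop data.
const DropEverything = struct {
    dropped: usize = 0,

    fn write(ptr: *anyopaque, bytes: []const u8) anyerror!usize {
        const self: *DropEverything = @ptrCast(@alignCast(ptr));
        self.dropped += bytes.len;
        return bytes.len; // report success without sending anything
    }

    fn io(self: *DropEverything) Io {
        return .{ .ptr = self, .vtable = &.{ .write = write } };
    }
};

// Library code is written once against the interface...
fn sendHeartbeat(io: Io) !void {
    _ = try io.write("ping\n");
}

pub fn main() !void {
    var reliable = BufferIo{};
    var lossy = DropEverything{};
    // ...and each call site picks whichever implementation it needs.
    try sendHeartbeat(reliable.io());
    try sendHeartbeat(lossy.io());
    std.debug.print("buffered {d}, dropped {d}\n", .{ reliable.len, lossy.dropped });
}
```

The library function is written once against the interface, and two call sites in the same scope can hand it completely different runtimes.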
Part of the answer, then, is that you need (or really want) something equivalent to different IO runtimes, hot-swappable for each function call. I gave some high-level ideas as to why that might be the case, but high-level observations often don't resonate, so let's look at a concrete case where `await` is less ergonomic:
1. Take something like TLS as an example (stdlib or 3rd-party, it doesn't really matter). The handshake code is complicated, so a normal implementation calls into an IO abstraction layer and physically does reads and writes (as opposed to, e.g., a pure state-machine implementation that returns metadata about which action to perform next -- I hacked together a terrible version of that at one point [0] if you want to see what I mean; a rough sketch of the style follows below). What if you want to run it on an embedded device? If it were written with async, it would likely carry enough other baggage that it wouldn't fit or otherwise wouldn't work. What if you want to hide your transmission inside other data to sneak it past prying eyes? (Interestingly, steganography is relatively easy to do via LLMs nowadays: you can embed arbitrary data in messages that are human-readable and purport to discuss completely different things, without exposing hi/lo-bit patterns or other artifacts that normally break steganography.) Then the kernel socket abstraction doesn't work at all, and "just using await" doesn't fix the problem. Basically, in any place you want to use that library (and, arguably, that's exactly the sort of code where you should use a library rather than rolling it yourself), if the implementer had a "just use await" mentality then you're SOL if you need it in literally any other context.
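For contrast, here's roughly what the "pure state-machine" shape mentioned above can look like. The states, messages, and names are invented for illustration; a real TLS handshake has far more of all three.

```zig
const std = @import("std");

// Sketch of the "pure state machine" style: the handshake performs no IO
// itself, it only reports what should happen next.
const HandshakeAction = union(enum) {
    /// Transmit these bytes over whatever transport the caller controls
    /// (kernel socket, embedded radio, steganographic channel, ...).
    send: []const u8,
    /// Obtain up to this many bytes from the peer and pass them to `step`.
    recv: usize,
    /// Handshake complete; switch to the record layer.
    done,
};

const Handshake = struct {
    state: enum { start, wait_server_hello, finished } = .start,

    fn step(self: *Handshake, incoming: []const u8) HandshakeAction {
        switch (self.state) {
            .start => {
                self.state = .wait_server_hello;
                return .{ .send = "client-hello" };
            },
            .wait_server_hello => {
                if (incoming.len == 0) return .{ .recv = 1024 };
                self.state = .finished;
                return .{ .send = "finished" };
            },
            .finished => return .done,
        }
    }
};

test "drive the handshake without any real IO" {
    var hs = Handshake{};
    try std.testing.expect(hs.step("") == .send);
}
```

The caller drives the loop: call `step`, perform whatever `send`/`recv` means in its own environment (blocking, async, or a covert channel), and feed the result back in. Because the library never touches a socket or an event loop, the same handshake code can run on a server, an embedded device, or behind the steganographic transport described above.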
I was going to write more concrete cases, but this comment is getting too long. The general observation is that "just use await" hinders code reuse. If you're writing code for your own consumption and never need those other uses, then it's a non-issue. But with a clever choice of abstraction it _might_ be possible to make the IO code people naturally write appropriately generic by default, and thus empower future developers with a more composable set of primitives (old Zig had a solution that didn't quite hit the mark IMO; time will tell if this one is good enough, but I'm optimistic).
They really nailed that with the allocator interface, and if this works then my only real concern is a generic "what next": it's pushing toward an effect system, but integrating one with a systems language is mostly an unsolved problem, and adding a 3rd, 4th, etc. explicit parameter to nearly every function is going to get unwieldy in a hurry. A back-of-the-envelope idea I've had stewing (if I ever write a whole "major" language) is to basically do what Zig currently does but pack all those "effects" into a single effect parameter that you pass into each function -- still letting you customize each function call and inspect which functions require allocators or whatever, but making the experience more pleasant with a little syntactic sugar around sub-effects when the parent type class is comptime-known. A sketch of that bundling is below.
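As a very rough sketch of that "single effect parameter" idea -- all names here (`Fx`, `logAndKeep`, the bare-function-pointer `Io`) are hypothetical, and real sub-effect sugar would need language support:

```zig
const std = @import("std");

// Simplified stand-in for an IO handle (a bare function pointer rather than
// the vtable-style interface sketched earlier).
const Io = struct {
    write: *const fn (bytes: []const u8) anyerror!usize,
};

// One bundle of "effects" threaded through calls instead of N separate parameters.
const Fx = struct {
    allocator: std.mem.Allocator,
    io: Io,

    // Per-call overrides stay cheap: copy the bundle, swap one field.
    fn withIo(self: Fx, io: Io) Fx {
        var out = self;
        out.io = io;
        return out;
    }
};

// Library code declares one context parameter and pulls both effects from it.
fn logAndKeep(fx: Fx, msg: []const u8) ![]u8 {
    _ = try fx.io.write(msg);
    return fx.allocator.dupe(u8, msg);
}

fn debugWrite(bytes: []const u8) anyerror!usize {
    std.debug.print("{s}", .{bytes});
    return bytes.len;
}

fn discardWrite(bytes: []const u8) anyerror!usize {
    return bytes.len; // swallow output, e.g. for a latency-critical section
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    const base = Fx{
        .allocator = gpa.allocator(),
        .io = .{ .write = debugWrite },
    };
    const kept = try logAndKeep(base, "hello\n");
    defer base.allocator.free(kept);

    // A section that needs different IO just passes a tweaked bundle.
    const quiet = base.withIo(.{ .write = discardWrite });
    const silent = try logAndKeep(quiet, "not printed\n");
    defer quiet.allocator.free(silent);
}
```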
[0] https://github.com/hmusgrave/rayloop/blob/d5e797967c42b9c891...
The case I'm making isn't about whether different Io contexts are good. The point I'm making is that mixing them is almost never what's needed. I have seen valid cases that do it, but it's not on the "used all the time" path. So I'm more than happy with the better ergonomics of traditional async/await in the style of Rust, which sacrifices super-easy runtime switching, because the former is used thousands of times more often.