Comment by audunw 20 hours ago

Your preference to have them be compile-time generic shouldn't come at the cost of those who would want runtime virtualisation.

As the article concludes, you get the best of both worlds here, where the result is effectively compile-time generic if you only use one io implementation in your program. In theory it'd also be partially compile-time generic if you exclusively use one io for one set of libraries/functions and a different io for another set of libraries/functions.

I see this as the objectively correct design based on the existing design decisions in Zig. It follows from the allocator interface decision.
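
For readers who haven't seen the pattern: below is a minimal sketch of the Allocator-style interface being described, a context pointer plus a vtable of function pointers. This is not the actual std.Io API, just the general shape; the "best of both worlds" claim rests on the optimizer devirtualizing the indirect calls when only one concrete implementation exists in the program.

    const Io = struct {
        ptr: *anyopaque,
        vtable: *const VTable,

        pub const VTable = struct {
            read: *const fn (ptr: *anyopaque, buf: []u8) anyerror!usize,
        };

        pub fn read(self: Io, buf: []u8) anyerror!usize {
            // Indirect call through the vtable; with only one implementation
            // in the binary this is routinely devirtualized by the optimizer.
            return self.vtable.read(self.ptr, buf);
        }
    };

    // Library code is written once against the runtime interface...
    fn readSome(io: Io, buf: []u8) anyerror!usize {
        return io.read(buf);
    }
    // ...and stays effectively compile-time generic as long as a single
    // concrete implementation ends up behind `io`.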

Comment by anonymoushn 18 hours ago

Yes, I understand that the designers prefer the Allocator situation and that Reader and Writer being anytype was downstream of the difficulty of using async readers and writers otherwise. So the intention was always to go with the design that I do not prefer. One reason I do not prefer it is that the Reader and Writer interfaces were already staggeringly inefficient, despite the lack of virtualization. We have avoided the issue by reimplementing a bunch of their API in some specific readers and writers and modifying the stdlib Reader and Writer to dispatch to these methods if they are present.
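
The "dispatch to these methods if they are present" trick reads roughly like the sketch below, written against a hypothetical generic Reader rather than the stdlib's actual code: @hasDecl checks at compile time whether the concrete stream provides a specialized method and forwards to it, falling back to a generic loop otherwise.

    fn Reader(comptime Context: type) type {
        // Context is assumed to be a struct type exposing
        // `read(buf: []u8) !usize`, like stdlib streams do.
        return struct {
            context: Context,

            const Self = @This();

            pub fn skipBytes(self: Self, n: u64) anyerror!void {
                // If the concrete reader has its own skipBytes (e.g. a file
                // that can just seek), forward to it instead of running the
                // generic byte-at-a-time loop below.
                if (comptime @hasDecl(Context, "skipBytes")) {
                    return self.context.skipBytes(n);
                }
                var remaining = n;
                var byte: [1]u8 = undefined;
                while (remaining > 0) : (remaining -= 1) {
                    if (try self.context.read(&byte) == 0) return error.EndOfStream;
                }
            }
        };
    }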

To be honest, I just do not have much faith in the commitment to optimality when it seems like the team has not spent time doing things like profiling a program that spends a lot of time formatting integers as decimal strings, and noticing that the vast majority of that formatting runtime is UTF-8 validation. I am happy to continue using the language, because it makes it easy enough to fix these issues oneself.
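
The kind of end-user fix being alluded to is typically along these lines: convert the digits into a small stack buffer by hand and hand raw bytes to the sink, bypassing the generic formatting path. A sketch only; writeDecimal is a hypothetical helper, and the writer is assumed to expose writeAll like stdlib writers do.

    fn writeDecimal(writer: anytype, value: u64) !void {
        var buf: [20]u8 = undefined; // enough for the 20 digits of maxInt(u64)
        var i: usize = buf.len;
        var v = value;
        while (true) {
            // Emit digits from least to most significant, filling the
            // buffer from the back.
            i -= 1;
            buf[i] = '0' + @as(u8, @intCast(v % 10));
            v /= 10;
            if (v == 0) break;
        }
        try writer.writeAll(buf[i..]);
    }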

The only aspect that may not be recoverable by the end user is the "am I async/is this async" reflection issue, though a core team member has clarified in this comment section that the code in the article is a sketch and the design of stackless coroutines is far from done, so we may yet get this.

Some other philosophical point is, like, Lua's coroutine.create/resume/yield/clone are control flow primitives for use within a single thread of execution. It's fine to ship an async runtime for doing I/O, which embodies the view that they are not control flow primitives for use within a single thread of execution. But focusing the primitives for creating and switching between execution contexts too narrowly on the async runtime use case is liable to be harmful to other use cases for these operations.

Ideally, we would be able to write things like a prominent SNES emulator that uses stack switching to ensure the simulation of different components proceeds in an order known to be more correct than other orders, and we would be able to do it using native language features, which would compile down to something a bit cheaper than dumping all of our registers onto the stack. Ideally, when we do this, we would not be asked by the language to consider what it would mean to "cancel" the execution context managing one of the components, in the same way that we do not need to consider what it means to cancel an arbitrary struct, or the function which is calling the function currently executing.