Comment by vlovich123 2 days ago

I really don’t understand this argument. If you force the user to transfer ownership of the buffer into the I/O subsystem, the system can make sure to transfer ownership of the buffer into the async runtime, not leaving it held within the cancellable future and the future returns that buffer which is given back when the completion is received from the kernel. What am I missing?

Inufu a day ago

Requiring ownership transfer gives up one of the main selling points of Rust: being able to verify reference lifetimes and safety at compile time. If we have to give up on references, then a lot of Rust's complexity no longer buys us anything.

  • vlovich123 a day ago

    I'm not sure what you're trying to say, but the compile-time safety requirement isn't given up. It would look something like:

        self.buffer = io_read(self.buffer).await?;

    This isn't much different from

        io_read(&mut self.buffer).await?;

    since Rust doesn't permit simultaneous access while a mutable reference is outstanding.
    • Inufu a day ago

      It means you can no longer, for example, get multiple disjoint references into the same buffer for parallel reads/writes of independent chunks.

      Or rather, you can, using unsafe, Arc, and Mutex - but at that point the safety guarantees aren't much better than what I get in well-designed C++.

      Don’t get me wrong, I still much prefer Rust, but I wish async and references worked together better.

      Source: I recently wrote a high-throughput RPC library in Rust (saturating > 100 Gbit NICs)

newpavlov 2 days ago

The goal of the async system is to allow users to write synchronous-looking code which is executed asynchronously, with all the associated benefits. "Forcing" users to do things like this shows a clear failure to achieve that goal. Additionally, passing ownership like this (instead of passing a mutable borrow) arguably goes against the zero-cost principle.

  • vlovich123 2 days ago

    I don’t follow the zero-cost argument. You pass in an owned buffer and get an owned buffer back out; there’s no copying happening here. As for the claim that async is supposed to look like synchronous code, I don’t buy it. I don’t see why that’s a goal. Synchronous code is an anachronistic software paradigm for a computer hardware architecture that never really existed (electronics are concurrent and asynchronous by nature), and trying to make software work that way causes a lot of performance problems.

    Indeed, one thing I’ve always wondered is whether you can submit a read request for a page-aligned buffer and have the kernel arrange for data to be written directly into it without any additional copies. That’s probably not possible, since there’s routing happening in the kernel and it accumulates everything into sk_buffs.

    But maybe the kernel could arrange for the framing part of the packet and the data to be decoupled, so that it can just give you a mapping into the data region (maybe instead of you even providing a buffer, it gives you back an address mapped into your address space). I’m not sure whether that TLB update might be more expensive than a single copy.

    • newpavlov a day ago

      You have an inevitable overhead of managing the owned buffer when compared against simply passing a mutable borrow of an already existing buffer. Imagine if `io::Read` APIs were constructed as `fn read(&mut self, buf: Vec<u8>) -> io::Result<Vec<u8>>`.

      Parity with synchronous programming is an explicitly declared goal of Rust async (e.g. see https://github.com/rust-lang/rust-project-goals/issues/105). I agree with your rant about the illusion of synchronicity, but it does not matter: the synchronous abstraction is immensely useful in practice, and the less leaky it is, the better.

      • vlovich123 a day ago

        The problem is that "parity" can be interpreted in different ways, and you're choosing to interpret it in a way that doesn't seem to be communicated in the issue you referenced. In fact, the common definition of parity is something like "feature parity", meaning that you can accomplish all the things you did before, even if not in the same way (e.g. MacOS has copy-paste feature parity even though it uses a different shortcut and might work slightly differently from other operating systems). It rarely means "drop-in" parity where you don't have to change anything.

        To me it's pretty clear that parity in the referenced issue means equivalence parity - that is, you can accomplish the same tasks in some way, not that it's a drop-in replacement. I haven't seen it suggested anywhere that async lets you write synchronous code without any changes, nor that integrating completion-style APIs with async Rust will yield code that looks synchronous. For one, completion-style APIs exist for performance, and performance APIs are rarely structured for simplicity; they're structured to avoid the implicit costs hidden in common leaky (but simpler) abstractions. For another, completion-style APIs in synchronous programming ALSO look different from epoll/select-like APIs, so I really don't understand the argument you're trying to make.

        EDIT:

        > You have an inevitable overhead of managing the owned buffer when compared against simply passing mutable borrow to an already existing buffer. Imagine if `io::Read` APIs were constructed as `fn read(&mut self, buf: Vec<u8>) -> io::Result<Vec<u8>>`.

        I'm imagining it, and I don't see a huge problem in the overhead this implies. And you'd probably not take in a Vec directly but some I/O-specific type, since such an API would exist for performance.

      • namibj a day ago

        The problem is that the ring requires suitably locked memory which won't be subject to swapping, thus inherently forcing a different memory type if you want the extra-low-overhead/extra-scalable operation.

        It makes sense to ask the ring wrapper for memory that you can emplace your payload into before submitting the IO if you want to use zero-copy.

        • surajrmal a day ago

          newpavlov seems to operate in a theoretical bubble. Until something like what he describes is published publicly and we can talk more concretely, it's not worth engaging. Requiring special buffers is very typical for any hardware offload with zero copy; it isn't a leaky abstraction. Async Rust is well designed and I've not seen anything rival it. If there are problems, they're in the libraries built on top, like tokio.

    • namibj a day ago

      Such reads are in principle supported if you have sufficient hardware offloading of your stream. AFAIK io_uring got an update a while back specifically to make this practical for non-stream reads, where you basically provide a slab-allocator region to the ring and get to tell reads to pick a free slot/slab in that region _only when they actually get the data_, instead of tying up DMA-capable memory for as long as the remote takes to send you the data.