Comment by jcranmer 2 months ago

There are, I think, two separate but related issues here:

The first is that file I/O is essentially broken with respect to error reporting. Most application code treats a write as a synchronous operation that waits until it succeeds or fails, yet to the kernel it's a request that may only fail later. And those errors can only be reported in circumstances where doing anything about them tends to be impossible. Worse yet, there's a tendency to throw these errors away on the kernel side, so you have to be really, really diligent to make sure you catch them (this prompted the PostgreSQL folks to rage at the Linux kernel for a while over the braindead behavior of fsync).
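As a minimal sketch of the problem (mine, not the comment's, and assuming POSIX file I/O): a successful write() usually just means the data reached the page cache; the real media error, if any, may only surface at fsync() time, and on older Linux kernels even that error could be reported once and then effectively cleared.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char *buf = "hello\n";
    /* Usually "succeeds": it only queues the data in the page cache.
       (Partial writes are ignored here for brevity.) */
    if (write(fd, buf, strlen(buf)) < 0)
        perror("write");

    /* The deferred I/O error, if any, tends to show up here -- and
       retrying fsync() is not guaranteed to re-report it. */
    if (fsync(fd) < 0)
        fprintf(stderr, "fsync: %s\n", strerror(errno));

    close(fd);
    return 0;
}
```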

At the end of the day, most clients would probably be happy with the notion of I/O being asynchronous and eventually succeeding, so long as a) I/O happens-before edges could be specified and b) there were a delayed success/failure callback that was actually truthful, and not an endless game of layers lying about completion to win on benchmarks, forcing you to the next level of "no really, this is the real final callback mechanism".
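Purely for illustration, here's a hypothetical C interface sketching that model; none of these functions exist in any real kernel or libc, and the names (aio_submit_write, aio_happens_before, aio_on_complete) are made up just to show the shape of the idea.

```c
#include <stddef.h>

/* Opaque handle for one asynchronous write request. */
typedef struct aio_req aio_req;

/* Submit an asynchronous write; returns a handle immediately
   instead of pretending the data is already durable. */
aio_req *aio_submit_write(int fd, const void *buf, size_t len);

/* Declare an ordering edge: 'later' must not become durable
   before 'earlier' has. */
void aio_happens_before(aio_req *earlier, aio_req *later);

/* Invoke cb exactly once, only when the request is truly durable
   (error == 0) or has definitively failed (error != 0) -- no
   intermediate "it's probably fine" signals. */
void aio_on_complete(aio_req *req, void (*cb)(aio_req *req, int error));
```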

The other issue is that there are some cases where error handling just ... doesn't make sense. And errors for a basic print to stdout or stderr are quite frankly in that boat. If printing a message to stdout failed, what action are you going to be able to take? What action is even reasonable? (Note that this analysis isn't the same for programs that are effectively meant to be part of a Unix shell pipeline, processing stdin into stdout; this is more about a bare printf, rather than writing to a stream that just happens to be stdout instead of a file.)
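A tiny example of that dead end (my sketch, assuming a POSIX system where printf sets errno on failure): you can detect the failure easily enough, but there's nowhere sensible to send the report.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Checking the return value is easy; doing something useful with it isn't. */
    if (printf("hello, world\n") < 0) {
        /* Where does this report go? stderr may be closed or just as broken,
           and aborting the program over a failed diagnostic print is rarely
           what anyone wants. */
        fprintf(stderr, "printf: %s\n", strerror(errno));
        return 1;
    }
    return 0;
}
```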