Comment by nh2
Why not a simple solution:
1. Programs should call close() on stdout and report errors.
2. It's the job of whoever creates the open file description to fsync() it afterwards if desired.
3. If somebody runs a file system or hardware that ignores fsync() or hides close() errors, it's their own fault.
If you run `hello > out.txt`, then it's not `hello` that creates and opens `out.txt`; the calling shell does. So if you use `>` redirection, the fsync() belongs in the calling shell.
Is there a drawback to this approach?
> LLVM tools were made to close stdout [..] and it caused so many problems that this code was eventually reverted
It would be good to know what those problems were.
> Programs should call close() on stdout and report errors.
Programs never call open() to obtain stdin, stdout, and stderr; they are inherited from the shell. What would be a meaningful way to report errors if the basic output streams are unreliable? If close(stdout) fails, we would need to write to stderr, and then we have exactly the same error-handling problem when closing stderr.
It's a flaw in the design of Unix where polymorphic behaviour is achieved through file descriptors. Worse is better...