amiga386 15 hours ago

Windows won't let you overwrite files "in use" and "file" is determined by the full pathname.

Linux will let you overwrite files "in use" (though the program(s) using them may not notice), and "file" is really identified by a number, the inode. When you delete a file from a directory, you're removing the directory entry that points to _that inode_; put a new file in place with the same name and it's a _new_ inode. Programs that still have the file open are referring to the _old_ inode, which only goes away once everyone stops using it.
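
A minimal sketch of that behaviour using POSIX calls (the filename and contents are made up, error handling omitted):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Create a throwaway file to stand in for the "old" version. */
        int fd = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        write(fd, "old contents\n", 13);
        close(fd);

        fd = open("demo.txt", O_RDONLY);      /* this handle pins the old inode */

        unlink("demo.txt");                   /* removes only the directory entry */
        int newfd = open("demo.txt", O_WRONLY | O_CREAT | O_EXCL, 0644);
        write(newfd, "new contents\n", 13);   /* same name, brand-new inode */
        close(newfd);

        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);            /* still the old data */
        printf("old handle sees: %.*s", (int)n, buf);

        close(fd);    /* last reference gone; now the old inode is freed */
        return 0;
    }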

So actually you need to go round restarting your programs/services on Linux to get them to pick up changes (most package managers do that automatically), but at least it's _possible_ to make those changes without a reboot. Windows has to go into a special mode where nothing else runs, to be sure that it can update files.

  • throwaway106382 15 hours ago

    This is why I love OpenSUSE: when you update your system, it tells you which running processes are still using files that were replaced, and you can then decide whether to restart them.

    SUSE systems in general are just so much nicer to administer than Red Hat or Debian/Ubuntu ones (imo of course).

    • aaronmdjones 14 hours ago

      htop(1) can also highlight running processes whose on-disk executable has been replaced (highlighted in red) or one of whose shared libraries has been replaced (highlighted in yellow). I find this very useful.
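
      The kernel exposes that directly: once a mapped executable or shared library has been unlinked or replaced, its entry in /proc/<pid>/maps gets " (deleted)" appended. A rough way to check for that from C (not necessarily how htop itself implements it):

          #include <stdio.h>
          #include <string.h>

          /* Usage: ./stale <pid> - prints mappings whose backing file is gone/replaced. */
          int main(int argc, char **argv) {
              if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }

              char path[64], line[1024];
              snprintf(path, sizeof path, "/proc/%s/maps", argv[1]);

              FILE *f = fopen(path, "r");
              if (!f) { perror("fopen"); return 1; }

              while (fgets(line, sizeof line, f))
                  if (strstr(line, "(deleted)"))   /* mapping outlived its directory entry */
                      fputs(line, stdout);

              fclose(f);
              return 0;
          }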

    • mschuster91 12 hours ago

      Debian can do much the same: at least for a libc upgrade and a few others, you get asked whether you wish to restart the affected services now or later.

magicalhippo 16 hours ago

KDE Neon used to do this. It almost always caused issues after an update, with things crashing due to mismatched versions talking to each other over D-Bus and such.

So they moved to something more like the Windows style, where it downloads the update, reboots to apply it, and then boots again freshly updated.

  • bayindirh 14 hours ago

    KDE sometimes borks after big Qt or KDE updates. I just log out and log back in.

    This is what it has said on the tin since forever for Linux systems, and it doesn't hurt.

    • magicalhippo 12 hours ago

      Right, but a reboot is just as quick, and then you get to load everything from scratch, so I just ended up doing that.

      • bayindirh 12 hours ago

        My systems run a couple of services, too, so I prefer not to reboot unless I'm upgrading the kernel or something in its close vicinity.

        Also, it surfaces long-running bugs so I can report them.

anonymars 15 hours ago

Briefly: it can (see e.g. https://devblogs.microsoft.com/oldnewthing/20130102-00/?p=56...)

Note that just replacing files on disk is not sufficient because all the running software would still have the old version.

In the first place, it means the security issue could still be present in the currently running software; in the second place, exciting things can happen when two (or more?!) different versions try to talk to each other. Oh, and who's to say the whole file was fully loaded into memory (or wasn't partially paged out) - imagine the fun if you later page in data from a different version of the binary!

So you need to hot patch the running binaries. I don't really remember why it's not done in practice even though it's technically possible; I seem to remember the conclusion was that clustering (in whatever form) was the solution for high availability, rather than trying to keep a single machine running.

  • toast0 13 hours ago

    > So you need to hot patch the running binaries. I don't really remember why it's not done in practice even though it's technically possible; I seem to remember the conclusion was that clustering (in whatever form) was the solution for high availability, rather than trying to keep a single machine running.

    Most systems are technically capable of hot patching (if your exe file is mmapped and you change the backing file, Bob's your uncle, unless your OS is no fun; which is why the Unix install pattern is unlink-and-replace rather than in-place updates). But most executables are not built to be hot patched, especially not without coordination.

    Hot patching lets you make changes to your live environment with tremendous speed, but it also has risk of changing your live environment to an offline environment with tremendous speed. I'm a proponent of hot patching, and would love to be able to hot load all the things, but it has requirements and tradeoffs and most software isn't built for it, and that's probably the right decision for most things.

    • anonymars 13 hours ago

      Yep. In fact rename/replace is conceptually the same as unlink/replace, but another potential issue is in-process DLL hell: if a patch replaces multiple libraries and they're not all loaded into a process yet, then even if each replacement is atomic, you might load version 1 of the first library but version 2 of the second.
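
      For comparison, the rename/replace half of that on the POSIX side looks roughly like this (filenames made up):

          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>

          /* Write the new version under a temporary name, then rename() it over the
             old one: readers see either the old file or the new one, never a
             half-written mix. */
          int main(void) {
              int fd = open("libfoo.so.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0755);
              if (fd < 0) { perror("open"); return 1; }

              const char payload[] = "new library contents\n";  /* stand-in data */
              write(fd, payload, sizeof payload - 1);
              fsync(fd);               /* make sure the new contents hit disk first */
              close(fd);

              if (rename("libfoo.so.tmp", "libfoo.so") != 0) {  /* atomic name swap */
                  perror("rename");
                  return 1;
              }
              return 0;
          }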

  • Brian_K_White 11 hours ago

    There is no such partial or mixed exe problem from paging.

    It doesn't matter if it was paged out; virtual memory is still just memory, and the process gets the same contents back.

    Paging memory out and restoring it doesn't change what the process sees. Executable pages are a special case in that the kernel can drop them and later re-read them from the backing file instead of writing them to swap - but the file it re-reads is the original one the executable was mapped from, which is kept alive even if it's been unlinked, not whatever new file now sits at the same pathname.

    For files that a program opens, an open handle is an open handle. The entire file is available in whatever state it was at the time the handle was opened, modulo whatever changes this specific handle has made.

    If a program closes and re-opens handles, then it always knew the entire world could have changed between those two opens; same if it opens the file non-exclusively. In either case it has to be prepared for the data to change between accesses.

    There are problems during updates, but they are much higher-level and safer than that. Open file handles are open file handles, and currently loaded exes stay consistent and sane until they close. All the problems are in the higher-level domain of processes interacting with each other.

AndrewDucker 16 hours ago

Windows locks files when they're in use, so that you cannot overwrite them. Linux doesn't do this.

So if you want to update a running service, you can either stop it, update it, and restart it (tricky to manage if it has dependencies, or is necessary for using the PC), or shut down everything, update the files while the OS isn't (or is barely) running, and then restart the OS.

  • aaronmdjones 16 hours ago

    > Windows locks files when they're in use, so that you cannot overwrite them. Linux doesn't do this.

    Linux does do this (try overwriting or truncating a binary executable while it's running and you'll get -ETXTBSY).

    The difference is that Linux allows you to delete (unlink) a running executable. This will not free the on-disk space occupied by that executable (so anything you write to disk in the immediate future will not overwrite the executable, and it can continue executing even if not all of the executable has been paged in) until all references to its inode are freed (e.g. the program exits and there are no other hardlinks to it).

    Then you can install a new version of the executable with the same name (since a file by that name no longer exists). This is what install(1) does.
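
    A quick way to see the ETXTBSY part is to have a program try to open its own binary for writing, via /proc/self/exe:

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>

        int main(void) {
            /* /proc/self/exe points at the binary currently executing us. */
            int fd = open("/proc/self/exe", O_WRONLY);
            if (fd < 0)
                printf("open for write: %s\n", strerror(errno)); /* "Text file busy" */
            else
                printf("unexpectedly succeeded\n");

            /* unlink()ing that same binary and creating a fresh file under the old
               name would succeed, though - the install(1) pattern described above. */
            return 0;
        }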

  • formerly_proven 16 hours ago

    MOVEFILE_DELAY_UNTIL_REBOOT is sort of the real trick, because it's processed by the Windows equivalent of init (smss.exe, the Session Manager), which handles these pending rename operations before starting any other userspace stuff (which would invariably load things like kernel32.dll etc.).
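
    In C it's just a flag to MoveFileEx. A sketch (the paths are made up; this needs administrator rights, since the pending rename is recorded in the registry for smss.exe to act on at boot):

        #include <stdio.h>
        #include <windows.h>

        int main(void) {
            /* Stage a replacement to be carried out at the next boot. Passing NULL as
               the second argument instead would schedule a delete-on-reboot. */
            BOOL ok = MoveFileExW(L"C:\\staging\\mylib.dll",
                                  L"C:\\Program Files\\MyApp\\mylib.dll",
                                  MOVEFILE_DELAY_UNTIL_REBOOT | MOVEFILE_REPLACE_EXISTING);
            if (!ok)
                fprintf(stderr, "MoveFileEx failed: %lu\n", GetLastError());
            return ok ? 0 : 1;
        }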

1718627440 8 hours ago

You can even update the kernel while keeping userspace running by hibernating.

cassonmars 15 hours ago

With things like kpatch you can even update the kernel without a reboot.

davidhyde 16 hours ago

The file system won’t allow you to overwrite an open file in Windows.

luxuryballs 16 hours ago

thinking it has something to do with this https://unix.stackexchange.com/a/49306

  • mananaysiempre 15 hours ago

    Windows NT was always capable of what Microsoft calls “POSIX delete semantics” (the POSIX compatibility layer was in the design doc since before the name change from “NT OS/2”), but some years ago the default for the Win32 call DeleteFile actually changed to take advantage of that (causing some breakage in applications, but apparently not a lot of it).
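
    Per file, you can also opt into it explicitly with SetFileInformationByHandle and FILE_DISPOSITION_INFO_EX (recent Windows 10 SDKs only; the path is made up, and a mapped image may still refuse to be opened with DELETE access). A rough sketch:

        #include <stdio.h>
        #include <windows.h>

        int main(void) {
            HANDLE h = CreateFileW(L"C:\\example\\inuse.dll", DELETE,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                   NULL, OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE) {
                fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            /* POSIX semantics: the name disappears immediately; the file's storage
               sticks around until the last handle to it is closed. */
            FILE_DISPOSITION_INFO_EX info = { FILE_DISPOSITION_FLAG_DELETE |
                                              FILE_DISPOSITION_FLAG_POSIX_SEMANTICS };
            if (!SetFileInformationByHandle(h, FileDispositionInfoEx, &info, sizeof info))
                fprintf(stderr, "SetFileInformationByHandle failed: %lu\n", GetLastError());

            CloseHandle(h);
            return 0;
        }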