Comment by kkfx 14 hours ago
I don't consider myself a "believer" in anything, but as a sysadmin, if I see a deploy with ext4, I classify it as a newbie's choice or someone stuck in the 80s. It's not a matter of conviction; it's simply about managing your data:
- Transferable snapshots (zfs send) mean very low-cost backups and restores, and serious desktop users don't want to be down for half a day because a disk failed.
- A pool means effective low-cost RAID, and anyone in 2026 who isn't looking for at least a mirror for their desktop either doesn't care about their data or lacks the expertise to understand its purpose.
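Both points above can be sketched in a few commands. Everything here is hypothetical (pool name `tank`, disk ids, backup host `backuphost`), and `DRY_RUN=1` (the default) only prints the commands, so the sketch is safe to run; `date -d yesterday` assumes GNU date.

```shell
#!/bin/sh
# Dry-run guard: echo commands instead of executing them by default.
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

# A two-disk mirror: either disk can fail without losing the pool.
run zpool create tank mirror /dev/disk/by-id/ata-diskA /dev/disk/by-id/ata-diskB

# Low-cost backup: a recursive snapshot plus an incremental send, so only
# blocks changed since the previous snapshot cross the wire.
TODAY="nightly-$(date +%F)"
PREV="nightly-$(date -d yesterday +%F)"   # GNU date syntax
run zfs snapshot -r "tank@${TODAY}"
run sh -c "zfs send -R -i @${PREV} tank@${TODAY} | ssh backuphost zfs receive -Fdu backup/tank"
```

Restoring is the same mechanism in reverse: `zfs send` from the backup pool back to a fresh disk.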
ZFS is the first real progress in storage since the 80s. It's the most natural choice for anyone who wants to manage their digital information. Unfortunately, many in the GNU/Linux world are stuck in another era and don't understand it. They are mostly developers whose data is on someone else's cloud, not on their own hardware. If they do personal backups, they do them halfway, without a proven restore strategy. They are average users, even if more skilled than average, who don't believe in disk failures or bit rot because they haven't experienced it personally, or if they have, they haven't stopped to think about the incident.
If you want to try out services and keep your desktop clean, you need a small, backup-able volume that can be sent to other machines, e.g. a home server, and discarded once testing is done. If you want to manage storage efficiently, because when something breaks you don't want to spend a day manually reinstalling the OS and copying files by hand, you'll want ZFS with appropriate snapshots; whether they're managed with ZnapZend or something else doesn't really matter.
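The throwaway test-volume workflow described above might look roughly like this. Names are placeholders (pool `tank`, home server `homeserver`), and `DRY_RUN=1` (the default) only prints the commands, so the sketch is safe to run.

```shell
#!/bin/sh
# Dry-run guard: echo commands instead of executing them by default.
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

# A dedicated dataset keeps the experiment out of the main system.
run zfs create -o mountpoint=/srv/experiment tank/experiment

# (hack on the service here)

# Snapshot and push the whole dataset to the home server for safekeeping.
run zfs snapshot tank/experiment@done
run sh -c "zfs send tank/experiment@done | ssh homeserver zfs receive tank/archive/experiment"

# Once testing is finished, discard it locally; the archived copy remains.
run zfs destroy -r tank/experiment
```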
Unfortunately, those without operations experience don't care and don't understand. The possibility of their computer breaking isn't something they consider, because in their experience it hasn't happened yet, or it's an event so exceptional that it doesn't need automation. The idea of having an OS installed for 10 years, always clean, because every rebuild is a fresh install and storage is managed alongside it, is alien to them. But the reality is that it's possible, and those who still understand operations really value it.
Those who don't understand this will hardly choose Guix or NixOS; they're people who play with Docker and stick to "mainstream" distros like Fedora, Ubuntu, Mint, or Arch. Those who choose declarative distros truly want to configure their infrastructure in text, with IaC built into the OS, and to have real resilience: their infrastructure must be able to resurrect from its configuration plus backups quickly and with minimal effort, because when something goes wrong they have other things to think about than playing with the FLOSS toy of the moment.
I'll bite. I use NixOS as a daily driver, and IMO it makes the underlying FS type even less important. If my main drive goes, I can bootstrap a new one by cloning my repo and running some commands. For my data, I just have some rsync scripts that sling the bits to various locations.
I suppose if I really wanted to I could put the data on different partitions and disks and use the native FS tools, but it's a level of detail that doesn't seem to matter that much relative to what I currently have. I could see thinking about FS details much more for a dedicated storage server.
FS-level backups for an OS sound more relevant when the OS setup is not reproducible and would be a pain to recreate.