poisonborz 5 days ago

Way better data security and resilience against bit rot. This goes for both HDDs and SSDs. Copy-on-write, snapshots, end-to-end integrity. Pools also make it easier to extend storage and to survive drive failures (and SSDs corrupt in sneakier ways).
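
A minimal sketch of what that extension looks like, assuming a pool named tank and illustrative device names:

  # turn a single-disk pool into a mirror by attaching a second drive
  zpool attach tank /dev/sda /dev/sdb

  # or grow capacity by adding another mirrored vdev
  zpool add tank mirror /dev/sdc /dev/sdd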

  • wil421 5 days ago

    How many of us are using single disks on our laptops? I have a NAS and use all of the above but that doesn’t help people with single drive systems. Or help me understand why I would want it on my laptop.

    • ryao 5 days ago

      My thinkpad from college uses ZFS as its rootfs. The benefits are:

        * If the hard drive / SSD corrupted blocks, the corruption would be identified.
        * Ditto blocks allow for self healing. Usually this only applies to metadata, but if you set copies=2, you get this for data too (see the sketch after this list). It is a poor man’s RAID.
        * ARC made the desktop environment very responsive since, unlike a plain LRU cache, ARC resists cold-cache effects from transient I/O workloads.
        * Transparent compression allowed me to store more on the laptop than otherwise possible.
        * Snapshots and rollback allowed me to do risky experiments and undo them as if nothing happened.
        * Backups were easy via send/receive of snapshots.
        * If the battery dies while you are working, the machine boots afterward without any damage to the filesystem.
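
      A minimal sketch of a few of those knobs in practice (rpool/ROOT, the snapshot names and backup-host are illustrative):

        # transparent compression and ditto blocks for data
        zfs set compression=lz4 rpool/ROOT
        zfs set copies=2 rpool/ROOT

        # snapshot before a risky experiment, roll back if it goes wrong
        zfs snapshot rpool/ROOT@pre-experiment
        zfs rollback rpool/ROOT@pre-experiment

        # incremental backup of a snapshot to another machine
        zfs send -i @monday rpool/ROOT@tuesday | ssh backup-host zfs recv -u tank/laptop-backup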
      
      That said, I use a MacBook these days when I need to go outside. While I miss ZFS on it, I have not felt motivated to try to get a ZFS rootfs on it since, last I checked, Apple hardcoded the assumption that the rootfs is one of its own filesystems into the XNU kernel and other parts of the system.
      • rabf 5 days ago

        Never having to deal with partitions, and instead using datasets, each of which can have its own properties such as compression, size quota, encryption, etc., is another benefit. Also, using zfsbootmenu instead of GRUB enables booting from different datasets or snapshots, as well as mounting and fixing datasets, all from the bootloader!
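
        For example (dataset names and property values are made up):

          # each dataset gets its own policy, no partitioning required
          zfs create -o compression=zstd -o quota=50G rpool/home/projects    # zstd needs OpenZFS 2.0+
          zfs create -o encryption=on -o keyformat=passphrase rpool/home/private
          zfs set compression=off rpool/scratch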

      • CoolCold 5 days ago

        NTFS has had compression since... not even sure when.

        For other stuff, let that nerdy CorpIT handle your system.

    • yjftsjthsd-h 5 days ago

      If the single drive in your laptop corrupts data, you won't know. ZFS can't fix corruption without extra copies, but it's still useful to catch the problem and notify the user.

      Also snapshots are great regardless.

      • Polizeiposaune 5 days ago

        In some circumstances it can.

        Every ZFS block pointer has room for 3 disk addresses; by default, the extras are used only for redundant metadata, but they can also be used for user data.

        When you turn on ditto blocks for data (zfs set copies=2 rpool/foo), ZFS can fix corruption even on single-drive systems, at the cost of using double or triple the space. Note that (like compression) this only affects blocks written after the setting is in place, but if you can pause writes to the filesystem, you can use zfs send|zfs recv to rewrite everything so that all blocks are redundant.
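
        Roughly (the snapshot and target dataset names are illustrative):

          zfs set copies=2 rpool/foo        # new writes now get two copies
          zfs snapshot rpool/foo@rewrite
          zfs send rpool/foo@rewrite | zfs recv -o copies=2 rpool/foo-new   # old blocks get rewritten, and duplicated, in the copy

        Then swap the new dataset into place once you are happy with it.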

    • ekianjo 5 days ago

      It provides native encryption without having to deal with LUKS. And no need to ever run fsck again.
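
      For example (the dataset name is illustrative):

        zfs create -o encryption=on -o keyformat=passphrase rpool/home
        zfs load-key rpool/home    # re-enter the passphrase after a reboot
        zfs mount rpool/home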

      • Twey 5 days ago

        Except that swap on OpenZFS still deadlocks 7 years later (https://github.com/openzfs/zfs/issues/7734) so you're still going to need LUKS for your swap anyway.

        • ryao 5 days ago

          Another option is to go without swap. I avoid swap on my machines unless I want hibernation support.

  • jeroenhd 5 days ago

    The data security and rot resilience only goes for systems with ECC memory. Correct data with a faulty checksum will be treated the same as incorrect data with a correct checksum.

    Windows has its own take on this through Storage Spaces, with many ZFS features available as lesser-used Storage Spaces options, especially when combined with ReFS.

    • _factor 5 days ago

      This has nothing to do with ZFS as a filesystem. It has integrity verification on duplicated RAID configurations. If the system memory flips a bit, the bad data will get written to disk, as with any filesystem. If a bit flips on a disk, however, it can be detected and repaired. Without ECC, your source of truth can corrupt, but this is true of any system.

    • abrookewood 5 days ago

      Please stop repeating this, it is incorrect. ECC helps with any system, but it isn't necessary for ZFS checksums to work.

    • BSDobelix 5 days ago

      On ZFS there is the ARC (adaptive replacement cache); on non-ZFS systems this read cache is the page/buffer cache. Both reside in memory, so ECC is equally important for both systems.

      Rot means bits changing without those bits being accessed, and that's ~not possible with ZFS. Additionally, you can enable check-summing IN the ARC (disabled by default), and with that you could say that ECC and "enterprise" quality hardware are even more important for non-ZFS systems.

      >Correct data with a faulty checksum will be treated the same as incorrect data with a correct checksum.

      There is no such thing as "correct" data, only a block with a correct checksum; if the checksum does not match, the block is not OK.

    • mrb 5 days ago

      "data security and rot resilience only goes for systems with ECC memory."

      No. Bad HDDs/SSDs or bad SATA cables/ports cause a lot more data corruption than bad RAM. And ZFS will correct these cases even without ECC memory. It's a myth that the data healing properties of ZFS are useless without ECC memory.

      • elseless 5 days ago

        Precisely this. And don’t forget about bugs in virtualization layers/drivers — ZFS can very often save your data in those cases, too.

        • ryao 5 days ago

          I once managed to use ZFS to detect a bit flip on a machine that did not have ECC RAM. All python programs started crashing in libpython.so on my old desktop one day. I thought it was a bug in ZFS, so I started debugging. I compared the in-memory buffer from ARC with the on-disk buffer for libpython.so and found a bit flip. At the time, accessing a snapshot through .zfs would duplicate the buffer in ARC, which made it really easy to compare the in-memory buffer against the on-disk buffer. I was in shock as I did not expect to ever see one in person. Since then, I always insist on my computers having ECC.

johannes1234321 5 days ago

For a while I ran Open Solaris with ZFS as root filesystem.

The key feature for me, which I miss, is the snapshotting integrated into the package manager.

ZFS allows snapshots more or less for free (due to copy-on-write), including cron-based snapshotting every 15 minutes. So if I made a mistake anywhere, there was a way to recover.
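
A cron-based setup looks roughly like this (the dataset name is illustrative; tools like zfs-auto-snapshot or sanoid add retention policies on top):

  # /etc/cron.d/zfs-snap -- snapshot every 15 minutes (% must be escaped in cron)
  */15 * * * * root zfs snapshot rpool/home@auto-$(date +\%Y-\%m-\%d_\%H\%M)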

And that, integrated with the update manager and boot manager, means that a snapshot is created on every update, and during boot one can switch between states. I never had a broken update, but it gave a good feeling.

On my home server I like the RAID features, and on Solaris it was nicely integrated with NFS etc., so one can easily create volumes, export them, and set restrictions (max size etc.) on them.
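
That integration is mostly just dataset properties (names invented):

  # create a size-limited dataset and export it over NFS in one go
  zfs create -o quota=100G -o sharenfs=on tank/export/projects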

  • attentive 4 days ago

    > is the snapshotting integrated into the package manager.

    Some Linux distros have that by default with btrfs. And usually it's a package install away if you're already on btrfs.

chillfox 5 days ago

Much faster launches of applications/files you use regularly. The ability to roll back updates in seconds if they cause issues, thanks to snapshots. Fast backups with snapshots + zfs send/receive to a remote machine. Compressed disks, which both let you store more on a drive and make accessing files faster. Easy encryption. The ability to mirror two large USB disks so you never have your data corrupted or lose it to drive failure. You can move your data or an entire OS install to a new computer easily by using a live disk and just doing a send/receive to the new PC.
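
The mirrored USB disks are a one-liner (pool name and device paths are illustrative):

  zpool create backup mirror /dev/disk/by-id/usb-DRIVE_A /dev/disk/by-id/usb-DRIVE_B
  zpool status backup    # shows both sides of the mirror and any checksum errors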

(I have never used dedup, but it's there if you want I guess)

hoherd 5 days ago

Online filesystem checking and repair.
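
That is, a scrub runs while the pool stays mounted and in use (pool name is illustrative):

  zpool scrub tank
  zpool status -v tank    # progress, plus any files hit by unrecoverable errors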

Reading any file will tell you with 100% guarantee if it is corrupt or not.

Snapshots that you can `cd` into, so you can compare any prior version of your FS with the live version of your FS.
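
For example (the paths and snapshot name are illustrative):

  cd /tank/home/.zfs/snapshot/before-upgrade
  diff -r projects /tank/home/projects    # compare an old tree against the live one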

Block level compression.

  • snvzz 4 days ago

    >Reading any file will tell you with 100% guarantee if it is corrupt or not.

    Only possible if it was not corrupted in RAM before it was written to disk.

    Using ECC memory is important, irrespective of ZFS.

e12e 5 days ago

Cross platform native encryption with sane fs for removable media.

  • lazide 5 days ago

    Who would that help?

    MacOS also defaults to a non-portable FS for likely similar reasons, if one was being cynical.

    • e12e 4 days ago

      It would help users with USB sticks and external drives?

      Couple it with encrypted zfs send/receive for cross platform secure backups.
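
      A raw send keeps the data encrypted in transit and at rest on the target, without the key ever being loaded there (names are illustrative):

        zfs send --raw pool/data@snap | ssh backup-host zfs recv tank/backups/data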

      • lazide 3 days ago

        I meant, why would they prioritize cross platform when it doesn’t help them?

wkat4242 5 days ago

Snapshots (note: NTFS does have this in the form of Volume Shadow Copy, but it's not as easily accessible to the end user as it is in ZFS). Copy-on-write for reliability under crashes. Block checksumming for data protection (bitrot).