Comment by ryao 5 days ago

ZFS will outscale ext4 in parallel workloads with ease. XFS will often scale better than ext4, but if you use L2ARC and SLOG devices, it is no contest. On top of that, you can use compression for an additional boost.
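
Roughly the kind of setup I mean, as a sketch: attach a cache (L2ARC) and log (SLOG) device and turn on compression. The pool name and device paths below are made up, and the commands are just the stock zpool/zfs ones driven from Python.

  # Sketch: add an L2ARC and SLOG device to an existing pool and enable
  # compression. "tank" and the /dev paths are hypothetical placeholders.
  import subprocess

  def run(*cmd):
      # Echo and execute a command, raising on failure.
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  run("zpool", "add", "tank", "cache", "/dev/nvme0n1")  # L2ARC (read cache)
  run("zpool", "add", "tank", "log", "/dev/nvme1n1")    # SLOG (sync write log)
  run("zfs", "set", "compression=lz4", "tank")          # transparent compression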

You might also find ZFS outperforms both of them in read workloads on single disks where ARC minimizes cold cache effects. When I began using ZFS for my rootfs, I noticed my desktop environment became more responsive and I attributed that to ARC.

jeltz 4 days ago

Not on most database workloads. There, ZFS does not scale very well.

  • ryao 4 days ago

    Percona and many others who benchmarked this properly would disagree with you. Percona found that ext4 and ZFS performed similarly when given identical hardware (with proper tuning of ZFS):

    https://www.percona.com/blog/mysql-zfs-performance-update/

    In this older comparison, where they did not initially tune ZFS properly for the database, they found XFS to perform better, only for ZFS to outperform it once tuning was done and an L2ARC was added:

    https://www.percona.com/blog/about-zfs-performance/

    This is roughly what others find when they take the time to do proper tuning and benchmarks. ZFS outscales both ext4 and XFS, since it is a multiple block device filesystem that supports tiered storage while ext4 and XFS are single block device filesystems (with the exception of supporting journals on external drives). They need other things to provide them with scaling to multiple block devices and there is no block device level substitute for supporting tiered storage at the filesystem level.

    That said, ZFS has a killer feature that ext4 and XFS do not have, which is low-cost replication. You can snapshot and send/recv without affecting system performance very much, so even in situations where ZFS does not top every benchmark on equal hardware, it still wins, since the performance penalty of database backups on ext4 and XFS is huge.
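
    As a rough sketch of what that replication looks like (not Percona's exact procedure; the dataset, snapshot names, and backup host below are made up):

      # Sketch: snapshot a dataset and ship an incremental stream to a backup
      # host with zfs send/recv. Names and the "backup" host are hypothetical.
      import subprocess

      def snapshot_and_send(dataset, prev_snap, new_snap, host, target):
          subprocess.run(["zfs", "snapshot", f"{dataset}@{new_snap}"], check=True)
          # Pipe the incremental stream straight into zfs recv on the backup host.
          send = subprocess.Popen(
              ["zfs", "send", "-i", f"{dataset}@{prev_snap}", f"{dataset}@{new_snap}"],
              stdout=subprocess.PIPE)
          subprocess.run(["ssh", host, "zfs", "recv", "-F", target],
                         stdin=send.stdout, check=True)
          send.stdout.close()
          if send.wait() != 0:
              raise RuntimeError("zfs send failed")

      snapshot_and_send("tank/db", "daily-01", "daily-02", "backup", "backuppool/db")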

    • LtdJorge 4 days ago

      There is no way that a CoW filesystem with parity calculations or striping is gonna beat XFS on multiple disks, especially on high-speed NVMe.

      The article provides great insight into optimizing ZFS, but using an EBS volume as the store (with pretty poor IOPS) and then giving the NVMe to ZFS as a metadata cache only feels like cheating. At the very least, metadata for XFS could have been offloaded to the NVMe too. I bet if we set up XFS with its metadata and log on a RAMFS it will beat ZFS :)

      • ryao 4 days ago

        L2ARC is a cache. Cache is actually part of its full name, which is Level 2 Adaptive Replacement Cache. It is intended to make fast storage devices into extensions of the in-memory Adaptive Replacement Cache. L2ARC functions as a victim cache. While L2ARC does cache metadata, it caches data too. You can disable the data caching, but performance typically suffers when you do. While you can put ZFS metadata on a special device if you want, that was not the configuration that Percona evaluated.
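
        (If you do want metadata-only caching in L2ARC, it is just a per-dataset property; the dataset name in this sketch is hypothetical.)

          # Sketch: restrict L2ARC to metadata for one dataset, or restore the
          # default. "tank/db" is a hypothetical dataset name.
          import subprocess

          subprocess.run(["zfs", "set", "secondarycache=metadata", "tank/db"], check=True)
          # The default caches both data and metadata:
          # subprocess.run(["zfs", "set", "secondarycache=all", "tank/db"], check=True)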

        If you do proper testing, you will find ZFS does beat XFS if you scale it. Its L2ARC devices are able to improve the IOPS of storage cheaply, which XFS cannot do. Using a feature ZFS has to improve performance at a price point that XFS cannot match is competition, not cheating.

        ZFS cleverly uses CoW in a way that eliminates the need for a journal, which is overhead for XFS. CoW also enables ZFS' best advantage over XFS, which is that database backups on ZFS via snapshots and (incremental) send/recv affect system performance minimally, whereas backups on XFS are extremely disruptive to performance. Percona had high praise for database backups on ZFS:

        https://www.percona.com/blog/zfs-for-mongodb-backups/

        Finally, there were no parity calculations in the configurations that Percona tested. Did you post a preformed opinion without taking the time to actually understand the configurations used in Percona's benchmarks?

    • menaerus 4 days ago

      Refuting the "it doesn't scale" argument with data from a blog that showcases a single workload (TPC-C), a 200 GB / 10-table dataset (small to medium), a 2 vCPU machine (wtf), and 16 connections (no thread pool, so overprovisioned) is not quite a demonstration of scale at all. It's a lost experiment if anything.

      • ryao 4 days ago

        The guy did not have any data to justify his claims of not scaling. Percona’s data says otherwise. If you don’t like how they got their data, then I advise you to do your own benchmarks.

bayindirh 5 days ago

No doubt. I want to reiterate my point. Citing myself:

> "I personally won't use either on a single disk system as root FS, regardless of how fast my storage subsystem is." (emphasis mine)

We are no strangers to filesystems. I personally benchmarked a ZFS7320 extensively, writing a characterization report, plus we have had a ZFS7420 for a very long time, complete with separate log SSDs for reads and writes on every box.

However, ZFS is not saturation-proof, and it is nowhere near a Lustre cluster performance-wise when scaled.

What kills ZFS and BTRFS on desktop systems is write performance, esp. on heavy workloads like system updates. If I need a desktop server (performance-wise), I'd configure it accordingly and use these, but I'd never use BTRFS or ZFS on a single root disk due to their overhead, to reiterate myself thrice.

  • ryao 4 days ago

    I am generally happy with the write performance of ZFS. I have not noticed slow system updates on ZFS (although I run Gentoo, so slow is relative here). In what ways is the write performance bad?

    I am one of the OpenZFS contributors (although I have been less active of late). If you bring some deficiency to my attention, there is a chance I might spend the time needed to improve upon it.

    By the way, ZFS limits the outstanding IO queue depth to try to keep latencies down as a type of QoS, but you can tune it to allow larger IO queue depths, which should improve write performance. If your issue is related to that, it is an area known to need improvement in certain situations (a rough sketch of the relevant tunables follows the links):

    https://openzfs.github.io/openzfs-docs/Performance%20and%20T...

    https://openzfs.github.io/openzfs-docs/Performance%20and%20T...

    https://openzfs.github.io/openzfs-docs/Performance%20and%20T...
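
    As a rough sketch of the tunables I mean: on Linux they are module parameters under /sys/module/zfs/parameters. The values below are arbitrary examples, not recommendations, and defaults vary between OpenZFS versions.

      # Sketch: raise the per-vdev queue depth limits via ZFS module parameters.
      # Values are arbitrary examples, not recommendations. Requires root and does
      # not persist across reboots unless also set in /etc/modprobe.d/zfs.conf.
      from pathlib import Path

      PARAMS = Path("/sys/module/zfs/parameters")

      def set_tunable(name, value):
          (PARAMS / name).write_text(str(value))

      set_tunable("zfs_vdev_max_active", 2000)            # total queued IOs per vdev
      set_tunable("zfs_vdev_async_write_max_active", 30)  # async write queue depth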

    • bayindirh 4 days ago

      What I see with CoW filesystems is that when you force the FS to sync a lot (like apt does to keep immunity against power loss at a maximum), write performance drops visibly. This also means that when you're writing a lot of small files from a lot of processes and flooding the FS with syncs, you get the same slowdown, making everything slower in the process. This effect is better controlled in simpler filesystems, namely XFS and ext4. This is why I keep backups elsewhere and keep my single-disk rootfs on "simple" filesystems.
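
      A crude way to see what I mean (a sketch, with made-up paths): write a few thousand small files with an fsync after each one and compare the timing with the same writes without the per-file fsync, on different filesystems.

        # Crude illustration of the sync-flood pattern: many small files, one
        # fsync each. Paths, counts, and sizes are arbitrary.
        import os, time

        def write_small_files(directory, count=2000, size=4096, do_fsync=True):
            os.makedirs(directory, exist_ok=True)
            payload = b"x" * size
            start = time.monotonic()
            for i in range(count):
                with open(os.path.join(directory, f"f{i}"), "wb") as f:
                    f.write(payload)
                    if do_fsync:
                        f.flush()
                        os.fsync(f.fileno())
            return time.monotonic() - start

        print("with fsync:   ", write_small_files("/tmp/fsync_test", do_fsync=True))
        print("without fsync:", write_small_files("/tmp/plain_test", do_fsync=False))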

      I'll be installing a 2-disk OpenZFS RAID1 volume on an SBC for high-value files soon-ish, and I might be doing some tests on that when it's up. Honestly, I don't expect stellar performance since I'll already be putting it on constrained hardware, but I'll let you know if I experience anything that doesn't feel right.

      Thanks for the doc links, I'll be devouring them when my volume is up and running.

      Where do you prefer your (bug and other) reports? GitHub? E-mail? IP over Avian Carriers?

      • ryao 4 days ago

        Heavy synchronous IO from incredibly frequent fsync is a weak point. You can make it better using SLOG devices. I realize what I am about to say is not what you want to hear, but any application doing excessive fsync operations is probably doing things wrong. This is a view that you will find prevalent among all filesystem developers (i.e. the ext4 and XFS guys will have this view too). That is because all filesystems run significantly faster when fsync() is used sparingly.

        In the case of APT, it should install all of the files and then call sync() once. This is equivalent to calling fsync() on every file as APT currently does, but aggregates the work for efficiency. The reason APT does not use sync() is probably a portability thing, because the standard does not require sync() to be blocking, but on Linux it is:

        https://www.man7.org/linux/man-pages/man2/sync.2.html
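
        Roughly the pattern I mean, as a sketch with made-up paths and contents:

          # Sketch: write every file first, then flush everything once at the end.
          # Paths and contents are made-up placeholders.
          import os

          def install_files(target_dir, files):  # files: dict of relative path -> bytes
              for rel, data in files.items():
                  path = os.path.join(target_dir, rel)
                  os.makedirs(os.path.dirname(path), exist_ok=True)
                  with open(path, "wb") as f:
                      f.write(data)  # no per-file fsync
              os.sync()              # one blocking flush at the end (Linux semantics)

          install_files("/tmp/pkg", {"bin/tool": b"...", "share/doc/README": b"..."})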

        From a power loss perspective, if power is lost when installing a package into the filesystem, you need to repair the package. Thus it does not really matter for power loss protection if you are using fsync() on all files or sync() once for all files, since what must happen next to fix it is the same. However, from a performance perspective, it really does matter.

        That said, slow fsync performance generally is not an issue for desktop workloads because they rarely ever use fsync. APT is the main exception. You are the first to complain about APT performance in years as far as I know (there were fixes to improve APT performance 10 years ago, when its performance was truly horrendous).

        You can file bug reports against ZFS here:

        https://github.com/openzfs/zfs

        I suggest filing a bug report against APT. There is no reason for it to be doing fsync calls on every file it installs in the filesystem. It is inefficient.

      • gf000 4 days ago

        Hi! I am quite a beginner when it comes to file systems. Would this sync effect not be helped by direct IO in ZFS's case?

        Also, given that you seem quite knowledgeable of the topic, what is your go-to backup solution?

        I initially thought about storing `zfs send` files in Backblaze (as a backup at a different location), but without recv-ing these, I don't think the usual checksumming works properly. I can checksum the whole stream before and after uploading, but I'm not convinced this is the best solution.
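
        What I had in mind is roughly this sketch: hash the `zfs send` stream while writing it out, keep the digest, and compare it after a later download. The dataset, snapshot, and output path are made up, and the upload step is left out.

          # Sketch: checksum a `zfs send` stream while dumping it to a file.
          # Dataset/snapshot and output path are hypothetical; upload is omitted.
          import hashlib, subprocess

          def dump_stream(snapshot, out_path, chunk=1 << 20):
              digest = hashlib.sha256()
              send = subprocess.Popen(["zfs", "send", snapshot], stdout=subprocess.PIPE)
              with open(out_path, "wb") as out:
                  while True:
                      block = send.stdout.read(chunk)
                      if not block:
                          break
                      digest.update(block)
                      out.write(block)
              if send.wait() != 0:
                  raise RuntimeError("zfs send failed")
              return digest.hexdigest()

          print(dump_stream("tank/data@daily-02", "/backups/daily-02.zfs"))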