We know who won the Linux/ZFS argument

Keywords: #linux #zfs

Or why you should use ZFS

Even if it is on Linux…

In February 2009 I was pretty frustrated with Linux filesystems in And Linux in General, F-U. Well…Sun lost entirely. But we all got ZFS as OpenZFS for the masses! I have a ~40T home NAS now running TrueNAS SCALE, which is OpenZFS on Linux. It was migrated from FreeNAS just a few weeks back, along with a major hardware refresh. Along the way I've been bitten by reiserfs, bitten by btrfs, and even tried out AFS. But ever since the early work I did with ZFS on Solaris and its descendants (Nexenta, OpenIndiana, illumos, OpenSolaris), it has never let me down. So I guess ZoL (ZFS on Linux) is an acceptable state of affairs. Or ZoF (ZFS on FreeBSD) too. Really, just use ZFS. You'll thank me. Honest.

I don't recall ever entirely losing a ZFS pool. I've lost a few pieces of files due to very badly misbehaving hardware (like in Dynatron-o-mite?). But in each instance ZFS has known about the corruption, corrected what it could, and offered a "no warranties on these specific parts" option to recover the rest. I've yet to have a ZFS based storage system offline itself, require hours of offline repairs, or do anything other than what it says on the tin, unlike ext3, ext4, btrfs, or reiserfs.
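Not that you need a script for it, but here's roughly what that looks like in practice: a minimal sketch, assuming a hypothetical pool named "tank" and the stock `zpool` CLI. A scrub walks every block and checks it against its checksum, and `zpool status -v` lists any files with permanent errors, so you know exactly which parts fell outside the "warranty".

```python
#!/usr/bin/env python3
"""Minimal sketch: kick off a scrub and report pool health.

Assumes a pool named "tank" (substitute your own) and the standard
`zpool` CLI on the PATH; run with appropriate privileges.
"""
import subprocess

POOL = "tank"  # assumption: replace with your pool name

# Start a scrub; ZFS verifies every block against its checksum
# and repairs anything it can from redundant copies.
subprocess.run(["zpool", "scrub", POOL], check=True)

# `zpool status -v` reports checksum errors and, if any damage is
# unrecoverable, lists the affected files by name.
status = subprocess.run(
    ["zpool", "status", "-v", POOL],
    capture_output=True, text=True, check=True,
)
print(status.stdout)
```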

These days I'm part of a team that oversees a number of decent-sized monolithic ZFS based storage systems, some with hundreds of drives, even though I don't have much to do with them directly anymore. For newer stuff the day job leans towards Ceph, which sports an S3 API gateway, block devices via RBD, and CephFS. At the current day job we use only the S3 and RBD portions (RBD for OpenStack clusters, S3 for a lot of things); I've played around with CephFS a little in the past and it's quite good now. I can't tell you how many petabytes and hundreds of servers are in our Ceph clusters (both because I don't know and because I couldn't tell you even if I did), but just like it says on the tin, it works a treat at petabyte scale.
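Since the S3 side is the part we lean on hardest, here's a minimal sketch of what that looks like from a client's point of view. The endpoint, bucket name, and credentials are invented for illustration; boto3 is just one of many ordinary S3 clients that work unchanged against the RADOS Gateway.

```python
#!/usr/bin/env python3
"""Minimal sketch: talking to a Ceph RADOS Gateway over plain S3.

The endpoint, credentials, and bucket below are hypothetical;
any standard S3 client works the same way against RGW.
"""
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",  # hypothetical RGW endpoint
    aws_access_key_id="RGW_ACCESS_KEY",      # placeholder credentials
    aws_secret_access_key="RGW_SECRET_KEY",
)

# The whole point of RGW: ordinary S3 calls, backed by Ceph.
s3.create_bucket(Bucket="demo-bucket")       # hypothetical bucket name
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from ceph")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```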