Not by a long shot. I just assembled two servers with 168 12 TB drives each, giving a bit over 1.5 PB of available space per server. And I'm pretty confident that this is also not the largest ZFS-on-Linux deployment either.
I don’t see why anyone would ever want to use hardware RAID. It invariably leads to the day when your hardware is busted, there’s no replacement parts, and you can’t read your volumes from any other machine. Use the kernel RAID and you can always rip out disks, replace them, or just boot off a USB stick.
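The "rip out disks, replace them, or just boot off a USB stick" workflow looks roughly like this with the kernel's md driver. A sketch only: the device names (`/dev/sdb` etc.) are hypothetical, and these commands must run as root against disks you can wipe.

```shell
# Create a two-disk mirror with the kernel's md (software RAID) driver.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Replace a failing member: mark it failed, remove it, add the new disk.
mdadm /dev/md0 --fail /dev/sdb
mdadm /dev/md0 --remove /dev/sdb
mdadm /dev/md0 --add /dev/sdd

# On any other machine (e.g. booted from a USB stick), the array is
# reassembled from the on-disk metadata alone, no special controller needed:
mdadm --assemble --scan
```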
Because of performance, especially being able to use a battery-backed write-back cache on the controller to give a "safe in the event of a power failure" confirmation to the application before the write actually hits disk/flash.
The "can't read from any other machine" is handled by making sure (this includes testing) that the volumes are readable with dmraid. At least that's for SAS/SATA applications. I'm not sure about NVMe, as it uses different paths in the IO subsystem.
> Because of performance, especially regarding being able to use a battery-backed write-back cache on the controller to give a "safe in the event of a power failure" confirmation to the application before it actually hits disk/flash.
Is this not easily mitigated with a smart UPS? (i.e., one that will notify the host when the battery is low so it can shut down cleanly)
Totally agree, and I'll go one further: I don't want to use RAID at all in a non-professional context. Maybe I'm too simplistic, but for my personal stuff I don't use RAID, LVM, or anything beyond plain ext4 file systems on whole disks. For redundancy I use rsync, at whatever frequency makes sense, to another disk of the same size. I've run like this for 10 years and replaced many disks without losing data. The one time I ran software RAID, I lost the whole array because one disk failed and a SATA error happened at the same time.
LVM is very nice because it eliminates that problem where you've got an almost full 2TB disk and you bought another 2TB disk and now you need to figure out what moves where. With LVM you just say nah, that's just 2TB more space for my data, let the machine figure it out.
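Concretely, "that's just 2TB more space, let the machine figure it out" looks like this with the standard LVM tools. A sketch only: device names and the volume group name `data` are hypothetical, and this needs root on blank disks.

```shell
# Turn both 2 TB disks into LVM physical volumes.
pvcreate /dev/sdb /dev/sdc

# Pool them into one volume group: a single ~4 TB bucket of space.
vgcreate data /dev/sdb /dev/sdc

# One logical volume spanning the whole pool, with an ext4 filesystem on it.
lvcreate -n stuff -l 100%FREE data
mkfs.ext4 /dev/data/stuff

# Later, a newly bought disk is just more space in the same pool:
pvcreate /dev/sdd
vgextend data /dev/sdd
lvextend -l +100%FREE /dev/data/stuff
resize2fs /dev/data/stuff   # grow the filesystem into the new space
```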
I mean, if you enjoy sorting through your files going "OK, that's photos of my kids, that goes in pile A, but these are materials from the re-mortgage application and go in pile B", then knock yourself out, but I have other things I want to do with my life. Leave it to the machine to store stuff.
If you lost everything, that's because you lacked backups, and (repeat after me) RAID is not a backup. Everybody should get into the habit of doing backups. As any survivalist learns: two is one and one is none.
This is Linux, right? Would this be the largest deployment of ZFS-on-Linux then?