
>There is no viable hardware RAID for NVME, so we’ve switched to ZFS to provide the data protection we need.

This is Linux, right? Would this be the largest deployment of ZFS-on-Linux then?



Not by a long shot. I just assembled two servers with 168 12TB drives each, giving a bit over 1.5PB available space on each server. And I'm pretty confident that this is also not the largest ZFS-on-Linux deployment either.


How do you fit 168 hard drives into a single computer?


A couple of 84-drive 5U rack-mount enclosures, attached to each server with multi-link SAS. It's a fairly off-the-shelf system.


Damn, what is the use case for that?


Porn.


I don’t see why anyone would ever want to use hardware RAID. It invariably leads to the day when your hardware is busted, there are no replacement parts, and you can’t read your volumes from any other machine. Use the kernel RAID and you can always rip out disks, replace them, or just boot off a USB stick.


Because of performance, especially the ability to use a battery-backed write-back cache on the controller to give the application a "safe in the event of power failure" confirmation before the write actually hits disk/flash.

The "can't read from any other machine" is handled by making sure (this includes testing) that the volumes are readable with dmraid. At least that's for SAS/SATA applications. I'm not sure about NVMe, as it uses different paths in the IO subsystem.


> Because of performance, especially the ability to use a battery-backed write-back cache on the controller to give the application a "safe in the event of power failure" confirmation before the write actually hits disk/flash.

Is this not easily mitigated with a smart UPS? (i.e., one that will notify the host when the battery is low so it can shut down cleanly)


Totally agree, and I'll go one further: I don't want to use RAID at all in a non-professional context. Maybe I'm too simplistic, but for my personal stuff I don't use RAID, LVM, or anything beyond plain ext4 file systems on whole disks. For redundancy I rsync to another disk of the same size, at whatever frequency makes sense. I've run like this for 10 years and replaced many disks without losing data. The one time I ran soft RAID, I lost the whole array because one disk failed and a SATA error happened at the same time.
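Roughly, a sketch of that kind of periodic mirror pass, assuming both disks are mounted and the /mnt/primary and /mnt/mirror paths are hypothetical placeholders (not the commenter's actual setup):

```python
import subprocess
from datetime import date

# Hypothetical mount points for the live disk and the same-size spare.
SRC = "/mnt/primary/"   # trailing slash: copy the contents, not the directory
DST = "/mnt/mirror/"

def mirror() -> None:
    """One rsync pass: -a preserves permissions/times/links, --delete keeps
    the mirror an exact copy rather than an ever-growing union."""
    subprocess.run(
        ["rsync", "-a", "--delete",
         f"--log-file=/var/log/mirror-{date.today()}.log",
         SRC, DST],
        check=True,
    )

if __name__ == "__main__":
    mirror()  # run from cron or a systemd timer at whatever frequency makes sense
```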


LVM is very nice because it eliminates the problem where you've got an almost-full 2TB disk, you've bought another 2TB disk, and now you need to figure out what moves where. With LVM you just say: nah, that's just 2TB more space for my data, let the machine figure it out.

I mean, if you enjoy sorting through things going "OK, that's photos of my kids, those go in pile A, but these are materials from the re-mortgage application and go in pile B", then knock yourself out, but I have other things I want to do with my life; leave it to the machine to store stuff.
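As a rough sketch of what "just 2TB more space" looks like in practice, assuming a volume group named vg0, a logical volume named data carrying ext4, and the new disk showing up as /dev/sdb (all hypothetical names):

```python
import subprocess

NEW_DISK = "/dev/sdb"       # hypothetical new 2TB disk
VG, LV = "vg0", "data"      # hypothetical volume group / logical volume

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Initialise the disk as a physical volume and hand it to the volume group.
run("pvcreate", NEW_DISK)
run("vgextend", VG, NEW_DISK)

# Grow the logical volume into the new free space, then grow the filesystem.
run("lvextend", "-l", "+100%FREE", f"/dev/{VG}/{LV}")
run("resize2fs", f"/dev/{VG}/{LV}")
```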

If you lost everything, that's because you lacked backups, and (repeat after me) RAID is not a backup. Everybody should get into the habit of doing backups. As any survivalist learns, two is one and one is none.


I doubt it? 150TB of NVMe storage is big, but I've walked past racks with many orders of magnitude more in them.

(edit: units)


> 150GB of NVMe storage is big

Your age is showing :-)

Every few years, I gotta get used to "100s of MBs is big!" -> "100s of GBs is big!" -> "100s of TBs is big!"

Seems like we're entering the age of PBs, and then we'll stop caring about capacity and care more about the speed of our TB+ sized archives.


It's TB though, per unit.


For Linux ZFS? I'm specifically asking about ZFS-on-Linux.


A decade ago my backup cluster had >100TB of ZFS on Linux. I mean, that predated ZoL, so it was using ZFS-fuse, but...



