Hacker News

This all seems unbelievably more complicated and prone to failure than just doing luks over mdadm. You could just skip this weird, arcane process by imaging the disks, walking them to where they needed to be, then slapping them into the other machine and mounting them as normal.

I do not understand making RAID and encryption so very hard, and then treating some NAS-in-a-box distribution like an admission that you don't have the skills to handle it. A lot of people are using ZFS and "native encryption" on Arch Linux (not in this case) when they should just be using mdadm and luks on Debian stable. It's like they're overcomplicating things in order to be able to drop trendy brand names around other nerds, then often dramatically denouncing those brand names when everything goes wrong for them.

If you don't have any special needs, and you don't know what you're doing, just do it the simple way. This all just seems horrific. I've got >15 year old mdadm+luks arrays that have none of their original disks, are 5x their original disk size, have survived plenty of failures, and aren't in their original machines. It's not hard, and dealing with them is not constantly evolving.
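To make "the simple way" concrete, here's a rough sketch of the mdadm+luks setup being described. The device names, array level, and mount point are placeholders, not from the thread:

```shell
# Mirror two disks with mdadm (placeholder device names).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Encrypt the array with LUKS, then open, format, and mount it.
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptdata
mkfs.ext4 /dev/mapper/cryptdata
mount /dev/mapper/cryptdata /mnt/data
```

Moving the array to another machine is then just `mdadm --assemble --scan` followed by `cryptsetup open`, since all the metadata lives on the disks themselves.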

Reading this gives me childhood anxiety from when I compressed by dad's PC with a BBS pirated copy of Stacker so I would have more space for pirated Sierra games, it errored out before finishing, and everything was inaccessible. I spent from dusk to dawn trying to figure out how to fix it (before the internet, but I was pretty good at DOS) and I still don't know how I managed it. I thought I was doomed. Ran like a dream afterwards and he never found out.



There are very real reasons to use ZFS instead of the oldschool Linux block device sandwich. mdadm+luks+lvm still do not quite provide the same set of features that ZFS alone does even without encryption. Namely in-line compression, and data checksumming, not to mention free snapshots.
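As a rough illustration of those features (pool name and devices are hypothetical):

```shell
# Create a mirrored pool, then enable the features the
# mdadm+luks+lvm stack doesn't provide on its own:
zpool create tank mirror /dev/sdb /dev/sdc
zfs set compression=lz4 tank        # in-line compression
zfs snapshot tank@before-upgrade    # copy-on-write snapshot, essentially free
zpool scrub tank                    # read every block and verify its checksum
```

Data checksumming is on by default; a scrub walks the whole pool and reports (and, with redundancy, repairs) any block that fails verification.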

ZFS is quite mature; the feature discussed in the article is not. As others have pointed out, this could have been avoided by running ZFS on top of luks, and would have sacrificed hardly any functionality.


> mdadm+luks+lvm still do not quite provide the same set of features that ZFS alone does even without encryption. Namely in-line compression, and data checksumming, not to mention free snapshots.

Sure, but LUKS+ZFS provides all that too, and also encrypts everything (ZFS encryption, surprisingly, does not encrypt metadata).

As this article demonstrates, encryption really is an afterthought with ZFS. Just as ZFS rethought from first principles what storage requires and ended up making some great decisions, someone needs to rethink from first principles what secure storage requires.
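For reference, the LUKS+ZFS layering mentioned above looks roughly like this (device and mapper names are placeholders):

```shell
# Encrypt each disk first, then build the pool on the cleartext mappers.
cryptsetup luksFormat /dev/sdb
cryptsetup luksFormat /dev/sdc
cryptsetup open /dev/sdb luks-sdb
cryptsetup open /dev/sdc luks-sdc
zpool create tank mirror /dev/mapper/luks-sdb /dev/mapper/luks-sdc
```

Because LUKS sits below the pool, everything ZFS writes, metadata included, ends up encrypted on disk.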


> Namely in-line compression, and data checksumming, not to mention free snapshots.

You get these for free with btrfs


It's a little weird to denounce the "block device sandwich" and then say that they should have used... a variation of the block device sandwich.

> There are very real reasons to use ZFS

I feel like, for the types of person GP is talking about, they likely don't really need to use ZFS, and luks+md+lvm would be just fine for them.

Like the GP, I have such a setup that's been in operation for 15-20 years now, with none of the original disks, probably 4 or 5 full disk swaps, starting out as a 4x 500GB array, which is now a 5x 8TB array. It's worked perfectly fine, and the only times I've come close to losing data is when I have done something truly stupid (that is, directly and intentionally ignored the advice of many online tutorials)... and even then, I still have all my data.

Honestly the only thing missing that I wish I had was data checksumming, and even then... eh.


Run enough disks long enough and you'll find one that starts returning garbage while telling the OS everything is ok.

The first time I had it happen was on a hardware RAID device, and a company lost two and a half days' worth of data, since any backups taken after the failure started contained bad data.

The next time I had it happen was with ZFS: we saw a flood of checksum errors and replaced the disk. Even after that, SMART thought the disk was perfectly fine and you could still send commands to it; you just got garbage back.


How do you know you’ve lost no data? Do you checksum all your files? Bits gonna rot.
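For anyone who wants to answer that question on a plain mdadm+ext4 setup, here's a minimal sketch of manual file checksumming. Everything here (function names, hash choice) is my own assumption, not something from the thread:

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large files don't fill memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Map relative path -> digest for every regular file under root."""
    return {str(p.relative_to(root)): sha256_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def find_rot(old: dict[str, str], new: dict[str, str]) -> list[str]:
    """Files present in both manifests whose contents changed."""
    return [name for name, digest in old.items()
            if name in new and new[name] != digest]
```

Save a manifest periodically, rebuild it later, and diff: any file that changed without you touching it is rot (or, more likely, something you forgot you edited). It's exactly what ZFS does for you automatically, which is the parent's point.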


> I do not understand making RAID and encryption so very hard,

I don't use ZFS-native encryption, so I won't speak to that, but in what way is RAID hard? You just `zpool create` with the topology and devices and it works. In fact:

> If you don't have any special needs, and you don't know what you're doing, just do it the simple way. This all just seems horrific. I've got >15 year old mdadm+luks arrays that have none of their original disks, are 5x their original disk size, have survived plenty of failures, and aren't in their original machines. It's not hard, and dealing with them is not constantly evolving.

I would write almost this exact thing, but with ZFS. It's simple, it's easy, it just keeps going through disk replacements and migrations.
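The disk replacements and migrations mentioned above are each a one-liner (pool and device names are hypothetical):

```shell
zpool create tank raidz2 /dev/sd[b-g]   # create with topology and devices
zpool replace tank /dev/sdd /dev/sdh    # swap a failing disk; resilvers automatically
zpool export tank                       # detach cleanly before moving the disks...
zpool import tank                       # ...and pick the pool up on the new machine
```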



