
> ZFS is notorious for corrupting itself when bit flips hit it and break the checksum on disk

I don't think it is. I've never heard of that happening, or seen any evidence ZFS is more likely to break than any random filesystem. I've only seen people spreading paranoid rumors based on a couple pages saying ECC memory is important to fully get the benefits of ZFS.



They also insist that you need about 10 TB RAM per TB disk space or something like that.


There is a rule of thumb that you should have at least 1 GB of RAM per TB of disk when using deduplication. That's... different.
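Back-of-the-envelope, that rule tracks the size of the dedup table (DDT), which you want to keep in RAM. A rough sketch of the arithmetic (the ~320 bytes per DDT entry and 128 KiB average block size are ballpark assumptions, not official figures):

    # Rough estimate of classic (pre-fast-dedup) DDT memory use.
    # ~320 bytes/entry and 128 KiB average blocks are approximations;
    # the real number depends on recordsize and how unique your data is.
    def ddt_ram_estimate(pool_bytes, avg_block=128 * 1024, per_entry=320):
        unique_blocks = pool_bytes / avg_block  # worst case: every block unique
        return unique_blocks * per_entry

    TB = 1000 ** 4
    for size_tb in (1, 10, 64):
        gib = ddt_ram_estimate(size_tb * TB) / 2 ** 30
        print(f"{size_tb} TB of unique data -> ~{gib:.1f} GiB of DDT")

With 128 KiB blocks that works out to roughly 2 GiB per TB, so "at least 1 GB per TB" is a floor, not a target, and smaller record sizes push it much higher.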


So you've never seen the people saying you should steer clear of ZFS unless you're going to have an enormous ARC even when talking about personal media servers?


People, especially those on the Internet, say a lot of things.

Some of the things they say aren't credible, even if they're said often.

You don't need an enormous amount of RAM to run ZFS unless you have dedup enabled. A lot of people thought they wanted dedup enabled, though. (2024's fast dedup may help, but for most people the right answer is probably not to use dedup at all.)

It's the same thing with the "need" for ECC. If your RAM is bad, you're going to end up with bad data in your filesystem no matter what. With ZFS, you're likely to find out your filesystem is corrupt (although if the data is corrupted before the checksum is calculated, the checksum can't help); with a non-checksumming filesystem, you may get lucky: the metadata stays intact, the OS keeps going, and just some of your files are silently wrong. Having ECC would be better, but there are tradeoffs, so it never made sense for me at home. ZFS still works and still protects me from the disk contents changing after the write, even if what was written was already wrong.
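To spell that out with a toy model (plain Python, with SHA-256 standing in for ZFS's block checksums; in the real thing the checksum lives in the parent block pointer, this just shows the idea):

    import hashlib

    def write_block(data: bytes):
        # Checksum is computed at write time and stored apart from the data.
        return data, hashlib.sha256(data).digest()

    def read_block(data: bytes, stored_checksum: bytes) -> bytes:
        if hashlib.sha256(data).digest() != stored_checksum:
            raise IOError("checksum mismatch: on-disk corruption detected")
        return data

    block, csum = write_block(b"important file contents")

    # Bit flip on disk AFTER the checksum was written: caught on read.
    corrupted = bytes([block[0] ^ 0x01]) + block[1:]
    try:
        read_block(corrupted, csum)
    except IOError as e:
        print(e)

    # Bit flip in RAM BEFORE the checksum was computed: the bad data
    # checksums "correctly", so no filesystem can catch it. Only ECC helps here.
    block2, csum2 = write_block(b"importent file contents")
    print(read_block(block2, csum2))  # reads back "fine", but it's wrong

The first case is what scrubs and normal reads catch; the second is the window that only ECC closes.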


I have seen people saying that yeah. I've also completely ignored them.

I have a 64 TB ZFS pool at home (12x8 TB drives: an 11-wide RAID-Z3 vdev plus one hot spare) on a personal media server. The machine has been up for months. It's using 3 GiB of RAM (including the ARC) out of the 32 GiB I put in it.
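For anyone who wants to check their own box, a quick sketch for ZFS on Linux (it assumes the /proc/spl/kstat/zfs/arcstats file that OpenZFS exposes there; FreeBSD/illumos expose the same counters via sysctl instead):

    # Print current ARC usage from OpenZFS's kstat file on Linux.
    def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
        stats = {}
        with open(path) as f:
            for line in f.readlines()[2:]:   # skip the two header lines
                name, _type, value = line.split()
                stats[name] = int(value)
        return stats

    s = arcstats()
    print(f"ARC size:   {s['size'] / 2**30:.2f} GiB")
    print(f"ARC target: {s['c'] / 2**30:.2f} GiB")
    print(f"ARC max:    {s['c_max'] / 2**30:.2f} GiB")

The ARC also gives memory back under pressure, which is part of why the big-RAM advice is overblown for a media server.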


Not that I recall? And it's worked fine for me...


I have seen people say such things, and none of it was based on reality. They just took deduplication's performance cliff to mean you must have absurd amounts of memory, even though deduplication is off by default. I suspect few of the people peddling this nonsense ever used ZFS, and the few who did had not looked very deeply into it.


Even then you obviously need L2ARC as well!! /s


But on Optane. Because obviously you need an all-flash main array for streaming a movie.


Fortunately, this has significantly improved since dedup was rewritten as part of the new ZFS 2.3 release. Search for ZFS "fast dedup".


It’s unfortunate some folks are missing the tongue-in-cheek nature of your comment.



