
I really love ZFS native encryption, but this is the big problem with it. I use ZFS raw sends to store my backups incrementally in a cloud I trust, but not enough to have raw access to my files. ZFS has great properties there, in theory: I can send delta updates of my filesystems, and the receiver never has the keys to decrypt them.
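For the curious, the shape of it is roughly this (pool, dataset, and host names here are made up):

    # initial full raw send - ciphertext goes over the wire, keys stay at home
    zfs snapshot tank/data@backup-2024-01-01
    zfs send --raw tank/data@backup-2024-01-01 | ssh backup-host zfs receive -u remotepool/data

    # later, send just the delta between two snapshots
    zfs snapshot tank/data@backup-2024-02-01
    zfs send --raw -i tank/data@backup-2024-01-01 tank/data@backup-2024-02-01 \
        | ssh backup-host zfs receive -u remotepool/data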

I've used this in practice for many years (since 2020), and aside from running into exactly this issue (though thankfully I already had a bookmark in place), it's worked great. I've tested restores from these snapshots fairly regularly (~quarterly), and only once had an issue, related to a migration: I moved the source from one disk to another. That can have some negative effects on encryptionroots, which I was able to solve... But I really, really wish the ZFS tooling had better answers here, such as being able to explicitly create and break these associations.
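The bookmark part, roughly (again, names are made up): a bookmark records just enough to act as the incremental source, so the local snapshot can be destroyed without breaking the chain.

    # bookmark the last snapshot the backup host has received
    zfs bookmark tank/data@backup-2024-02-01 tank/data#last-sent

    # even after destroying tank/data@backup-2024-02-01 locally,
    # the next incremental can still be sent from the bookmark
    zfs send --raw -i tank/data#last-sent tank/data@backup-2024-03-01 \
        | ssh backup-host zfs receive -u remotepool/data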



Yeah, I use different methods for that. I considered using zfs send/receive for backups, but there's one big issue with it: every time you need one or two files from the backup, you have to restore the whole filesystem. There's no official way to retrieve a single file from a zfs send stream.

For backup purposes I also greatly prefer file-by-file encryption, because one corruption will only break one file and not the whole backup.

What I do now is encrypt with encfs and store the result on an S3 Glacier-style service.
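Roughly like this (the bucket name is a placeholder, and the sync tool could just as well be rclone or similar):

    # encfs --reverse presents an encrypted view of the plaintext tree,
    # so only ciphertext ever leaves the machine
    encfs --reverse ~/data /mnt/encrypted-view

    # push the ciphertext to a Glacier-class storage tier
    aws s3 sync /mnt/encrypted-view s3://my-backup-bucket/data --storage-class DEEP_ARCHIVE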


I can only think of one situation where it's hard to retrieve a remote backup of an individual file: where ZFS native encryption is in use and the remote backup system is not trusted to load the key for the dataset.

For myself, I don't trust remote systems to always have keys loaded, but in an emergency I would feel relatively safe temporarily loading the key, mounting the snapshot read-only, and scp-ing the files out, then unmounting and unloading the key (and rebooting for good measure).
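In command terms, that emergency path looks roughly like this (dataset, snapshot, and host names are placeholders):

    # on the backup host, only for the duration of the restore
    zfs load-key remotepool/data        # enter the passphrase
    zfs mount remotepool/data

    # from the trusted machine, copy straight out of the read-only snapshot directory
    scp backup-host:/remotepool/data/.zfs/snapshot/backup-2024-02-01/path/to/file ~/restore/

    # back on the backup host, clean up immediately
    zfs umount remotepool/data
    zfs unload-key remotepool/data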

There's also a viable slow option: export the raw storage of the backup ZFS pool over the network to a trusted machine, import the pool read-only there, load the key, mount the filesystem, and make a copy. Much slower, but practical. I've used s3backer fairly successfully as a backup method for a pool with native encryption; it takes a minute or so to import the pool and can write backup snapshots at a few MB/s, so there shouldn't be any fundamental reason iSCSI or similar wouldn't work.
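The s3backer variant looks roughly like this (bucket, pool, size, and snapshot names are placeholders, and the s3backer options have to match whatever the backing file was created with):

    # expose the bucket as a single backing file over FUSE
    s3backer --blockSize=1M --size=1T my-backup-bucket /mnt/s3backer

    # import the pool read-only from that file, then restore normally
    zpool import -d /mnt/s3backer -o readonly=on backuppool
    zfs load-key backuppool/data
    zfs mount backuppool/data
    # mountpoint shown assumes the default /poolname/dataset layout
    cp /backuppool/data/.zfs/snapshot/backup-2024-02-01/path/to/file ~/restore/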


I've never had to restore a single file that's older than my local snapshots; the restores where I've needed older data have involved a large enough subset that 4-5xing the data size on restore was not really an issue.

I kinda agree with your point on file-by-file encryption, but ZFS's general integrity features are such that I'm not really worried - except for this article's specific failure mode, which is pretty easy to deal with or avoid once you know about it, but is a substantial deficiency.


You don't actually have to restore the entire snapshot if you just want a single file! ZFS exposes snapshots read-only under a hidden .zfs/snapshot directory, which doesn't even show up in ls -a unless you set snapdir=visible on the dataset, but you can copy files straight out of it.

For example, cp /path/to/dataset/mountpoint/.zfs/snapshot/<snapshot_name>/path/to/file ~/path/to/file
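If you want to browse what's there first (dataset path is a placeholder; snapdir only controls visibility, the directory is always reachable by name):

    # make the .zfs directory show up in directory listings
    zfs set snapdir=visible pool/dataset

    # list the snapshots you can copy from
    ls /path/to/dataset/mountpoint/.zfs/snapshot/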



