
i thought this uses coqui, which is not really open source?


1. i wish syncthing would also implement this
2. is it already post-quantum secure?

(to all the quantum-computers-will-never-come people: i like to be prepared in CASE it comes; otherwise no one will prepare, and users are left in the dust once it is there)


I am not a cryptographer, but I can explain that Magic Wormhole uses SPAKE2 to negotiate a shared secret (RFC 9382 claims security equivalent to the gap Diffie-Hellman problem), and then uses NaCl SecretBox to symmetrically encrypt all data between the peers.

(If using the newer Dilation protocol -- which is true for many of the non-file-transfer tools like ShWiM, Git-WithMe or Fowl -- peer traffic uses this shared secret with Noise, specifically "Noise_NNpsk0_25519_ChaChaPoly_BLAKE2s")
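
For the curious, here is a minimal sketch of those two building blocks using the python 'spake2' and PyNaCl libraries. This is illustrative only -- the real wormhole protocol adds key confirmation, phases, and per-phase key derivation on top:

    from spake2 import SPAKE2_A, SPAKE2_B   # pip install spake2
    from nacl.secret import SecretBox       # pip install pynacl

    # both sides start from the same low-entropy code
    alice = SPAKE2_A(b"4-purple-sausages")
    bob = SPAKE2_B(b"4-purple-sausages")

    # one message in each direction...
    msg_a, msg_b = alice.start(), bob.start()

    # ...and both sides derive the same 32-byte high-entropy key
    key = alice.finish(msg_b)
    assert key == bob.finish(msg_a)

    # 32 bytes is exactly a SecretBox key
    box = SecretBox(key)
    ciphertext = box.encrypt(b"hello through the wormhole")
    print(SecretBox(key).decrypt(ciphertext))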

One tool that now uses Magic Wormhole for "introduction" like this is EtherSync: https://ethersync.github.io/ethersync/


as long as we can convert the OCI image to a bootable VM image, i am fine with that. But i also think there's a size limit


There are still growing pains, but https://github.com/osbuild/bootc-image-builder exists and is likely to become exactly that in the general case (as it already is for the Red Hat family).
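
(For reference, the invocation is a one-shot container run, roughly like this per its README -- image names and flags may have drifted since:

    sudo podman run --rm -it --privileged \
        -v ./output:/output \
        quay.io/centos-bootc/bootc-image-builder:latest \
        --type qcow2 \
        quay.io/centos-bootc/centos-bootc:stream9

and the qcow2 lands in ./output.)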


Oh, those size limits are pushed plenty by AI images, no worries. I recently had a good laugh when I found a Docker image that was 2-3 times as big as the OS partition of a lot of our smaller servers.

And our OS image build ordering would reuse layers better than those images do.


No doubt, I've regularly encountered ~2TB container images with enough layers to make one weep. SISO, slop in/slop out (sorry).


Time to find out if one can make a dockerbomb image >:-)


I believe in you; 'fallocate' can be put into the entrypoint :P That way the size is a surprise, not a constant.
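
Something like this, presumably (untested; assumes fallocate is available in the base image and honored by the storage driver):

    FROM alpine
    # the image stays a few MB; the disk only fills once a container starts
    ENTRYPOINT ["sh", "-c", "fallocate -l 100G /boom && tail -f /dev/null"]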


i think it's already been done with bootable containers.

redhat has recently GA'd bootable containers as well.


how complex is this for auditors to understand? i fear the ever-increasing complexity of security-relevant protocols...


i am still of the opinion that if they extended sieve quite a bit and standardized markdown/reST/asciidoc rendering in email readers, we could probably get much more usage of mail again

(sieve would need additional features for sending/processing mails and re-encrypting, imho; see the sketch below)
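
sieve already has a toe in that water: "vacation" (RFC 5230) can generate replies and "enotify" (RFC 5435) can push notifications; what's missing is general-purpose sending and re-encryption. what a conforming script can do today:

    require ["fileinto", "vacation"];

    # filing is well covered already
    if header :contains "list-id" "lists.example.org" {
        fileinto "Lists";
    }

    # "vacation" is the closest sieve gets to sending mail
    vacation :days 7 "i am away; i will read your mail later.";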

but mail is still less broken than mobile phone networks.


Yes. And we'd also need people to stop demanding non-semantic hard wrapping at 79/80 chars.


You know what's much more important?

* a performant and safe standard library
* batteries included
* a good way to actually manage dependencies, both at build time and at runtime

Okay, you got your stuff; now please, everyone, let's care about the standard library and make it really good.


i would prefer it if they improved and extended the java standard library and the tooling for libraries

many people will tell you that the standard library is not as performant as it could be and does not have as many batteries as python. and just try managing your dependencies...

that would be far more important than the next super duper feature IMHO.


> many people will tell you that the standard library is not as performant

such as?


i run btrfs on servers and desktops. it's usable.


So do I, and BTRFS is extremely good these days. It's also much faster than ZFS at mounting a disk with a large number of filesystems (= subvolumes), which is critical for building certain types of fileservers at scale. In contrast, ZFS scales horribly as the number of filesystems increases, whereas btrfs seems to be O(1). btrfs's quota functionality is also much better than it used to be (and very flexible), after all the work Meta put into it. Finally, having the option of easy writable snapshots is nice. BTRFS is fantastic!


> It's also much faster than ZFS at mounting a disk with a large number of filesystems (=subvolumes), which is critical for building certain types of fileservers at scale.

Now you've piqued my curiosity; what uses that many filesystems/subvolumes? (Not an attack; I believe you, I'm just trying to figure out where it comes up)


It can be useful to create a file server with one filesystem/subvolume per user, because each user has their own isolated snapshots, backups via send/recv are user-specific, quotas are easier, etc. If you only have a few hundred users, ZFS is fine. But what if you have 100,000 users? Then just doing "zpool import" would take hours, whereas mounting a btrfs filesystem with 100,000 subvolumes takes seconds. This complexity difference was a showstopper for architecting a certain solution on top of ZFS, despite me personally loving ZFS and having used it for a long time. The btrfs commands and UX are really awkward (for me) compared to ZFS, but btrfs is extremely efficient at some things where ZFS just falls down.

The main criticism in this thread about btrfs involves multidisk setups, which aren't relevant for me, since I'm working on cloud systems and disk storage is abstracted away as a single block device.


Incidentally, the application I'm reworking to use btrfs is cocalc.com. One of our main use cases is distributed assignments to students in classes, as part of the course management functionality. Imagine a class with 1500 students all getting an exact copy of a 50 MB folder, which they'll edit a little bit, and then it will be collected. The copy-on-write functionality of btrfs is fantastic for this use case (both in speed and disk usage).
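
A minimal sketch of that distribution step (paths and layout are made up; assumes the master copy lives in its own subvolume and each student's directory already exists):

    import subprocess

    MASTER = "/tank/courses/math101/assignment1"  # a btrfs subvolume

    for i in range(1500):
        dest = f"/tank/users/student{i:04d}/assignment1"
        # CoW snapshot: near-instant, and all 50 MB stays shared
        # on disk until the student actually edits a file
        subprocess.run(
            ["btrfs", "subvolume", "snapshot", MASTER, dest],
            check=True,
        )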

Also, the out-of-band deduplication for btrfs using https://github.com/Zygo/bees is very impressive and flexible, in a way that ZFS just doesn't match.


I seem to recall some discussion in one of the OpenZFS leadership meetings about slow pool imports when you have many datasets. Sadly I can't recall the details, but at least it seems to be on their radar.


As far as I understand, a core use case at Meta was build system workers starting with prepopulated state and being able to quickly discard the working tree at the end of the build. CoW is pretty sweet for that.


> But the reality is how can someone small protect their blog or content from AI training bots? E.g.: They just blindly trust someone is sending Agent vs Training bots and super duper respecting robots.txt? Get real...

baking hashcash into http (1.0/1.1/2/3), smtp, imap, pop3, tls, and ssh. then this would all be too expensive for spammers and training bots. but the IETF is infiltrated by government and corporate interests...
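
the idea is simple enough to sketch (a toy version of the hashcash idea, not the real X-Hashcash stamp format): the client burns CPU to find a nonce whose hash has n leading zero bits, and the server verifies with a single hash:

    import hashlib
    from itertools import count

    def mint(resource: str, bits: int = 20) -> str:
        """find a nonce so sha256(resource:nonce) has `bits` leading zero bits."""
        for nonce in count():
            stamp = f"{resource}:{nonce}"
            digest = hashlib.sha256(stamp.encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - bits) == 0:
                return stamp  # expensive to find...

    def verify(stamp: str, resource: str, bits: int = 20) -> bool:
        """...but a single hash to check."""
        if not stamp.startswith(resource + ":"):
            return False
        digest = hashlib.sha256(stamp.encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - bits) == 0

    stamp = mint("GET example.com/feed")   # ~2^20 hashes, around a second
    assert verify(stamp, "GET example.com/feed")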


Spammers will buy ASICs and get a huge advantage over consumer CPUs


Wouldn't DNSSEC solve stuff like this?


How?


No.

