That definitely helps, but I don't think it solves the compromised machine scenario.
If the attacker has shell access to the dev's laptop, they are likely just running commands directly from that machine (or proxying through it). So to GitHub, the traffic still looks like it's coming from the allowed IP.
Allowlists are mostly for stopping usage of a token that got stolen and taken off-device.
I've been using mailbox.org for 5 years and like it very much. It costs about 3 EUR per month (and there's actually a 50% discount this week).
Dead simple email that just works. Their webUI is fine, but I almost exclusively use it on iOS or macOS with the default mail app. They also have some other features (calendar, office suite, video calls) that I don't use. I really like the option to create up to 25 email aliases.
First, congrats on the release. I’ve looked at flox and devenv for nixifying our container builds. Our distribution of languages is about 40/30/20/10 of Python, F#, R and nodejs.
A dilemma I’m facing is that the win from Nix in terms of faster builds and smaller images would come largely from the Python and R images (where the average size is often 1 GiB or larger). However, the developers who use Python or R are less likely to “get” the point of Nix and might face a steeper learning curve than the F# developers (whose builds are already quite efficient).
That was the context; my question is: how’s the integration between Flox and R/RStudio? I know there’s Rix [1] for managing R packages with Nix.
We use this for internal base images at $DAYJOB in order to get SIGTERM properly passed in Kubernetes, mostly a problem with Python, R and Elixir. Works wonders with only a default ENTRYPOINT in the base image, so it’s completely “hidden” from the developers (most of them don’t care, and rightly so).
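For anyone wondering why that matters: if the ENTRYPOINT doesn't forward signals (for example because the app ends up wrapped in a shell as PID 1), a graceful-shutdown handler like the minimal sketch below never fires, and Kubernetes just SIGKILLs the pod once terminationGracePeriodSeconds runs out. This is only the generic app-side half, not the init binary itself.

```python
# Minimal sketch of the app side: a SIGTERM handler that only ever runs if the
# container's ENTRYPOINT actually forwards the signal to the Python process.
import signal
import sys
import time

def handle_sigterm(signum, frame):
    # Close connections, flush buffers, etc., then exit before Kubernetes'
    # terminationGracePeriodSeconds expires and the pod gets SIGKILLed.
    print("got SIGTERM, shutting down cleanly", flush=True)
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

while True:
    time.sleep(1)  # stand-in for real work
```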
I wouldn't use this if I were you. The last release was over 5 years ago, and there are multiple issues with the code - for instance, you can't even build it anymore if you're on CMake v4+. Basically, this project has been abandoned, and since you can no longer build it from source, you'd have to use the 5+ year-old binaries - which isn't advisable, especially in a production environment.
IMO they look similar at a glance, but actually serve very different use cases.
SeaweedFS is more about amazing small-object read performance, because there's effectively no metadata to query in order to read an object. You just distribute the volume id and file id (+ cookie) to clients.
3FS is less extreme in this, supports actual POSIX interface, and isn't particularly good at how fast you can open() files. On the other hand, it shards files into smaller (e.g. 512KiB) chunks, demands RDMA NICs and makes reading randomly from large files scary fast [0]. If your dataset is immutable you can emulate what SeaweedFS does, but if it isn't then SeaweedFS is better.
[0] By scary fast I mean being able to completely saturate 12 PCIe Gen 4 NVMe SSDs with 4K random reads on a single storage server, and you can scale that horizontally.
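To make the SeaweedFS read path concrete, here's a rough sketch of its documented HTTP flow (assign a fid at the master, upload to the returned volume server, then read straight from it); the addresses are just the defaults and will differ per setup.

```python
# Rough sketch of the SeaweedFS write/read path over its HTTP API.
import requests

MASTER = "http://localhost:9333"  # SeaweedFS master (default port)

# 1. Ask the master to assign a file id and a volume server to write to.
assign = requests.get(f"{MASTER}/dir/assign").json()
fid, volume = assign["fid"], assign["url"]  # e.g. "3,01637037d6", "127.0.0.1:8080"

# 2. Upload the blob directly to that volume server.
requests.post(f"http://{volume}/{fid}", files={"file": ("hello.txt", b"hello, weed")})

# 3. Anyone holding the fid (volume id + file id + cookie) can read the object
#    straight from the volume server -- no metadata lookup on the hot path.
print(requests.get(f"http://{volume}/{fid}").content)

# A client that only knows the fid can resolve the volume's location once via
# GET {MASTER}/dir/lookup?volumeId=3 and cache it.
```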
My guess is that performance is pretty comparable, but it looks like SeaweedFS has a lot more management features (such as tiered storage), which you may or may not be using.
It’d be neat to use subtrace in an ephemeral pod for debugging purposes, that just runs alongside the regular pod.
For monitoring the network traffic of the whole cluster, the CNI and/or whatever eBPF-based runtime security tooling you’re using (Falco, Tetragon, Tracee) is usually enough, but I can definitely see the usefulness of Subtrace for more targeted debugging. If you run it as a DaemonSet, make sure to add some pod filtering such as namespace and label selectors (but I’m sure you’ve already thought about that).
> use subtrace in an ephemeral pod for debugging purposes
That's a great suggestion. It'd be like kubectl exec-ing into a shell inside the pod, but for network activity. I think I'm going to prototype this tonight :)
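For the ephemeral-container route, something like the sketch below is probably the rough shape of it (using the official Kubernetes Python client; the image name, target container, and invocation are placeholders, not Subtrace's actual packaging):

```python
# Hedged sketch: attach an ephemeral debug container to a running pod via the
# official Kubernetes Python client. Image and container names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

patch = {
    "spec": {
        "ephemeralContainers": [
            {
                "name": "subtrace-debug",
                "image": "example.com/subtrace:latest",  # hypothetical image
                "targetContainerName": "app",  # attach to the app container's namespaces
                "stdin": True,
                "tty": True,
            }
        ]
    }
}

# Ephemeral containers are a pod subresource: they can be added to a running
# pod, but never removed or restarted.
core.patch_namespaced_pod_ephemeralcontainers(
    name="my-pod", namespace="default", body=patch
)
```

This is roughly what `kubectl debug --image=... --target=...` does under the hood, so a first prototype could probably just shell out to that.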
> pod filtering such as namespace and label selectors
Yep, Subtrace already tags each request with a bunch of metadata about the place where it originated so that you can filter on those in the dashboard :) Things like the hostname, pod, cluster, AWS/GCP location are automatically populated, but you can also set custom tags in the config [1].