Hacker News | hinshun's comments

No worries. I think Make vs Ninja is a whole other discussion, but as it relates to this project: Ninja has fewer features than Make (e.g. no globs), which makes it attractive as a simple build-graph format.
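For context, a minimal Ninja file (hypothetical rules, just to illustrate) shows why it reads as a plain build graph: every edge is explicit, with no globs or pattern rules to expand.

```ninja
# Every input/output edge is spelled out explicitly -- no globs,
# no pattern rules -- so the file is easy to treat as a build graph.
rule cc
  command = gcc -c $in -o $out
rule link
  command = gcc $in -o $out
build main.o: cc main.c
build app: link main.o
```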

Nix provides the toolchain and dependency management (like where boost comes from), as well as the ability to execute builds remotely, which makes it attractive for scaling out large builds, like a `-j 999`. Nix also lets you do things like patching boost and then recompiling both boost and its downstreams (incrementally, and with early cut-off if using nix-ninja) all in one build graph.
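As a sketch of that boost-patching workflow (the patch file name and overlay are hypothetical), a standard nixpkgs overlay is enough to trigger a rebuild of boost plus everything downstream of it in one graph:

```nix
# Hypothetical overlay: apply a local patch to boost. Nix then rebuilds
# boost and every package that depends on it, all in one build graph.
final: prev: {
  boost = prev.boost.overrideAttrs (old: {
    patches = (old.patches or [ ]) ++ [ ./fix-boost.patch ];
  });
}
```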

All in all, probably not useful if you're not already needing features from Nix. But if you are, this should speed up your builds by a significant amount.


Not doing Nix; I tried a few times. Maybe I'll adapt my Docker-based tooling at some point.

Fewer features is probably good, because Make originally was just the dependency tracker, but got abused into a full build system.

Any performance improvements such as incremental compilations / incremental builds / interpreters / copy-on-write / hot-code reloading are always welcome.


We're still missing this feature to fully incrementally compile Nix: https://github.com/pdtpartners/nix-ninja/issues/19

There's a target that depends on a generated source file for Nix's bison parser. Other targets, though, compile incrementally on par with regular ninja. So far we've observed that Nix's sandboxing overhead is negligible.


I'm not sure, but we're in contact with some core Nix folks to see what we can do to help move it along.


It made sense internally, and we also thought it'd be compelling for people interested in Nix, since Nix itself uses Meson and outputs ninja.

What other formats do you think are worth targeting?


Not particularly, I was just curious. Thanks!


Author of nix-snapshotter here.

Yes, one of the main downsides of Docker images built with Nix is the 128-layer limit. It means we have to use a heuristic to combine packages into the same layer, losing Nix's package granularity. When building containers from Nix packages already on a Nix binary cache, you also have to transform the Nix packages into layer tarballs, effectively doubling the storage requirements.
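To illustrate the kind of heuristic involved (a simplified toy, not the actual algorithm used for Nix-built images), once there are more store paths than available layers, some packages have to share a layer:

```python
def assign_layers(store_paths, max_layers=127):
    """Toy heuristic: give each store path its own layer until we run
    out of layers, then merge everything remaining into one final layer.
    Real implementations pick which paths to merge more cleverly."""
    if len(store_paths) <= max_layers:
        return [[p] for p in store_paths]
    singles = [[p] for p in store_paths[:max_layers - 1]]
    return singles + [list(store_paths[max_layers - 1:])]
```

Anything that lands in the merged layer loses per-package cache granularity, which is exactly the cost being described.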

Nix-snapshotter brings native understanding of Nix packages to the container ecosystem, so the runtime prepares the container root filesystem directly from the Nix store. This means docker pull == Nix substitution, at Nix package granularity. It goes a bit further with Kubernetes integration, which you can read about in the repo.

Let me know if you have any other questions!


What's the state of deployment for something like nix-snapshotter nowadays (with the realization that the answer depends on which of N k8s install methods might be in use)?

I assume it's mostly in the field of ... "you're making a semi-large investment on this enough that you're doing semi-custom kubernetes deployments with custom containerd?"

Or maybe the thought is that nix-snapshotter users are running k8s/kubelet on NixOS anyway, so it's not a big deal to swap out or add containerd config?


Yes, it's going to depend on which k8s distribution you're using. We have work in progress for k3s to natively support nix-snapshotter: https://github.com/k3s-io/k3s/pull/9319

For other distributions, nix-snapshotter works with official containerd releases, so it's just a matter of TOML configuration and a systemd unit for nix-snapshotter.
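As a sketch of the containerd side (the socket path and plugin name here are assumptions; check the nix-snapshotter docs for exact values), it uses containerd's standard proxy-plugin mechanism:

```toml
# /etc/containerd/config.toml -- register nix-snapshotter as a proxy
# snapshotter. The systemd unit for nix-snapshotter would own this socket.
[proxy_plugins.nix]
  type = "snapshot"
  address = "/run/nix-snapshotter/nix-snapshotter.sock"

# Tell the CRI plugin (what kubelet talks to) to use it.
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "nix"
```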

We run Kubernetes outside of NixOS, but yes, the NixOS modules provided by nix-snapshotter certainly make it simple.


Sorry to bug you with more questions, but I literally dreamed of nix-snapshotter for years, so I'm excited. Do you know how this (installation burden) translates in the real world (GKE? AKS? etc.)?

Can one even get away with abusing DaemonSets/hostDir/privileged on hosted clusters to modify their own installations? Or is `nix-snapshotter` just sort of out of the question on those provided solutions?


Happy to help. For EKS there's a blog post (https://blog.realvarez.com/using-estargz-to-reduce-container...) that goes into using stargz-snapshotter, which has the same initial setup, but you'll also need to install Nix in the "managed node group".

I'm not sure what you mean by modifying your own installation. Like running k8s on NixOS and then using a nix-snapshotter-based DaemonSet to modify k8s on the host? At first glance it seems like vanilla k8s can do this already; nix-snapshotter just makes it more efficient / binary matching.


I mean, I guess to put it simply "SSH to the worker node to fix it" isn't really viable. At all.

Some folks used to use DaemonSets + hostDir to do that node configuration instead of SSH. Which is weird, but less weird than "you can't autoscale nodes anymore because you have a manual bootstrap step".

Or am I just absolutely missing something?


Yes! Our talk wasn’t accepted but Robbie and I, and a few others from my team will be there. Come say Hi!


We will be running it on a non-NixOS Kubernetes cluster; all you need is a nix daemon running as a systemd service.

I have tested with EKS, and you should be able to use nix-snapshotter with GKE too. I may be able to put together some docs for that later.

In the meantime, see this for running a different snapshotter on EKS: https://blog.realvarez.com/using-estargz-to-reduce-container...


I assume that if the nix daemon is being started with each pod, then you're downloading a fresh copy of everything every time, so for large closures you give up a lot of the benefit of the cluster being able to cache layers (as in, say, a nix2container- or nixery-style image) and achieve very fast subsequent startups.

I'm a k8s novice, but would there be a way to run the nix daemon/cache in a semi-persistent pod on each node, and then "attach" it to the actual worker pods?


> I assume that if the nix daemon is being started with each pod, then you're downloading a fresh copy of everything every time

I got the impression that it uses the node's nix daemon.

From the project readme's FAQ https://github.com/pdtpartners/nix-snapshotter#faq

> What's the difference between this and a nix-in-docker?

> If you run nix inside a container (e.g. nixos/nix or nixpkgs/nix-flake) then you are indeed fetching packages using the Nix store. However, each container will have its own Nix store instead of de-duplicating at the host level.

> nix-snapshotter is intended to live on the host system (sibling to containerd and/or kubelet) so that multiple containers running different images can share the underlying packages from the same Nix store.


Yeah, I think that's the "main" idea of it, but it was also mentioned that this can work in GKE/EKS if you start a new daemon each time.


The EKS/GKE integration involves modifying the host that the kubelet lives on, and adding nix-snapshotter as a sibling host service.

I didn't mean running nix-snapshotter as a Kubernetes resource, because then there's a chicken-and-egg problem: Kubernetes needs nix-snapshotter's image service to resolve the nix-snapshotter image.


I would definitely appreciate seeing some docs with quick pointers on setting that up for EKS at least, if not also GKE!


Thank you! We appreciate it.


If you are comparing non-Nix image build performance, Nix is comparable to BuildKit in that it's able to parallelize a build graph and cache intermediate outputs. Nix doesn't require a Dockerfile to build, and it constructs the layers directly, as in the article.

In terms of image size, since its dependencies are explicit (all the way down to glibc and lower), the Nix image is comparable to a scratch image with a single statically compiled binary.

Compared to existing Nix-built images, we are moving image build performance from O(n) (where n is the number of Nix packages) to ~O(1), because the build process becomes just constructing JSON referring to already-built Nix packages.
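To make the O(1) claim concrete (an illustrative sketch only; this is not nix-snapshotter's actual manifest format), the "build" amounts to serializing references to store paths that already exist, with no tarball creation:

```python
import json

def build_image_config(store_paths, entrypoint):
    """Sketch: the image 'build' is just JSON referencing prebuilt Nix
    store paths. No layer tarballs are produced, so the cost does not
    depend on the size or number of the packages' contents."""
    return json.dumps(
        {"entrypoint": entrypoint, "nixStorePaths": sorted(store_paths)},
        indent=2,
    )

config = build_image_config(
    ["/nix/store/aaa-glibc-2.38", "/nix/store/bbb-hello-2.12"],
    ["/nix/store/bbb-hello-2.12/bin/hello"],
)
```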


I’m guessing it also makes building variations of images much quicker because you only need to build the “difference” between the images?


Yup. If you also deploy the Nix packages to bare metal, then creating containers comes at almost zero cost, because the container image component is just JSON.


That's fair; adding another service is definitely a downside. If you are running Nix-built images, though, the underlying data still needs to land somewhere on disk. Nix-snapshotter lets you store and download less data due to its package granularity.

We already have rootless containerd and nix-snapshotter, and I believe there is work underway to run rootless nix (and rootless k3s too). You may be able to run the whole thing unprivileged one day.

