Hacker News | moochmooch's comments

Unfortunately, make is better-written software. I think Dockerfile was ultimately a failed iteration on the Makefile. YAML & Dockerfile are poor interfaces for these kinds of applications.

The code-first options are quite good these days, but you can get quite far with make & other legacy tooling. Docker feels like a company looking to sell enterprise software first and foremost, not to move the industry standard forward.

great article tho!


Make is timestamp based. That is a thoroughly out-of-date approach only suitable for a single computer. You want distributed hash-based caching in the modern world.
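
To make the contrast concrete, here's a minimal sketch (Python, with made-up helper names) of the two staleness models: make compares timestamps, while hash-based tools like Bazel key the cache on content digests, so a `touch` that changes no bytes triggers a rebuild only under the first model.

```python
import hashlib, os, tempfile, time

def stale_by_mtime(src, out):
    # make's model: rebuild iff the source is newer than the output
    return not os.path.exists(out) or os.path.getmtime(src) > os.path.getmtime(out)

def cache_key(src):
    # Bazel/Buck2-style model: key the cache on a content digest
    with open(src, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

d = tempfile.mkdtemp()
src, out = os.path.join(d, "main.c"), os.path.join(d, "main.o")
open(src, "w").write("int main(void) { return 0; }\n")
open(out, "w").write("object code")
key = cache_key(src)

time.sleep(0.05)
os.utime(src)  # touch: new mtime, identical bytes

print(stale_by_mtime(src, out))   # True: timestamps say "rebuild"
print(cache_key(src) == key)      # True: content hash says "cache hit"
```

The content-digest key is also what makes distributed caching possible: any machine that computes the same digest can reuse the artifact.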

So use Bazel or Buck2 if you need an iteration on make's handling of changed files. Bazel is a much more serious project than BuildKit. I'm not saying make is more functional than BuildKit (to some it might be), I'm saying it's better-written software than BuildKit. Two separate things.

Bazel just seems so... Academic. I can't make heads or tails of it.

Compared to a Dockerfile it's just too hard to follow


Oh I love Bazel. The problem is that it’s harder to adopt for teams used to just using make. For a particular project at work, I argued unsuccessfully for switching from plain make to bazel, and it ended up switching to cmake.

Now with AI, Bazel maintenance is an almost entirely painless experience. I have fewer issues with it than with the standard Go toolchain, and the C++ experience was always quite smooth.

Along similar lines, when I was reading the article I was thinking "this just sounds like a slightly worse version of Nix". Nix has the whole content-addressed build DAG with caching, the intermediate language, and the ability to produce arbitrary outputs, but it is functional: 100% of the inputs must be accounted for in the hashes/lockfile. Docker, by contrast, lets you run commands like `apk add firefox` that pull data from outside sources that can change from day to day, so two Docker builds can end up with the same hash but different output, making it _not_ reproducible like the article falsely claims.

Edit: The claim about the hash being the same is incorrect, but an identical Dockerfile can produce different outputs on different machines/days whereas nix will always produce the same output for a given input.
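
A toy model of that difference (Python, all names made up): an unpinned fetch makes the output a function of whatever the network returned that day, whereas the Nix model declares each input's content hash up front, so a drifted input fails the build loudly instead of silently changing the result.

```python
import hashlib

def digest(*parts):
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

instructions = "RUN apk add firefox"   # no version or content hash pinned

# The upstream repo serves different bytes on different days.
upstream_day1 = "firefox-120 package bytes"
upstream_day2 = "firefox-121 package bytes"

def docker_style_build(upstream):
    # Output depends on whatever the network returned at build time.
    return digest(instructions, upstream)

def nix_style_build(upstream, pinned_hash):
    # Every input must match a pre-declared content hash, or the build fails.
    if digest(upstream) != pinned_hash:
        raise ValueError("hash mismatch: refusing to build")
    return digest(instructions, upstream)

# Same instructions, different days, different outputs:
print(docker_style_build(upstream_day1) == docker_style_build(upstream_day2))  # False

# Nix: the pinned hash either reproduces day 1 exactly or fails loudly.
pin = digest(upstream_day1)
print(nix_style_build(upstream_day1, pin) == docker_style_build(upstream_day1))  # True
try:
    nix_style_build(upstream_day2, pin)
except ValueError as e:
    print(e)   # hash mismatch: refusing to build
```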


> so two docker builds can end up with the same hash but different output

The cache key includes the state of the filesystem so I don’t think that would ever be true.

Regardless, the purpose of the tool is to generate [layer] images to be reused, exactly to avoid the pitfalls of reproducible builds, isn’t it? In the context of the article, what makes builds reproducible is the shared cache.


It's not reproducible then; it's simply cached. It's a valid approach, but there are tradeoffs of course.

It's not an either/or; it can be both reproducible and cached.

Similarly, Nix cannot guarantee reproducibility if the user does things that break it.


The difference is that you can blow the Nix cache away and reproduce it entirely. The same cannot be said for Docker.

That's not true

Docker has a `--no-cache` flag, which is even easier than blowing the cache away; you can also do that with several built-in commands or an `rm -rf /var/lib/docker`.

Perhaps worth revisiting: https://docs.docker.com/build/cache/


That will rebuild the cache from upstream but not reproducibly.

Ah, you're right: the hash wouldn't be the same, but a Dockerfile could produce different outputs on different machines, whereas Nix will produce identical output on different machines.

Producing different outputs isn't the Dockerfile's fault. Dockerfile doesn't enforce reproducibility, but reproducibility can be achieved with it.

Nix isn't some magical thing that makes things reproducible either: it simply pins build inputs and relies on caches. Nixpkgs is entirely git-based, so you end up pinning the entire package tree.


If you are building a binary on different arches, it will not be the same. But I have many container builds that I can run with the cache disabled and get the same hash/bytes in the end, i.e. reproducible across machines. That also requires that whatever you build inside be byte-reproducible (as Go binaries can be).
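
A sketch of that pattern (image tags, digests, and paths are illustrative): the compiled binary can come out byte-identical across machines because Go supports reproducible compilation when you strip build paths and pin the toolchain; byte-identical image layers additionally require controlling timestamps.

```dockerfile
# Pin the toolchain image by digest, not by floating tag (digest is a placeholder).
FROM golang:1.22@sha256:<toolchain-digest> AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download        # module inputs pinned by go.sum hashes
COPY . .
# -trimpath removes machine-specific paths; clearing the build ID and
# disabling cgo avoid host-dependent bytes leaking into the binary.
RUN CGO_ENABLED=0 go build -trimpath -ldflags='-buildid=' -o /out/app ./cmd/app

FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```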

> whereas nix will always produce the same output for a given input.

Only if they didn't take shortcuts. I don't know if it's been fixed, but at one point the Vuze package in Nix pulled in an arbitrary jar file from a URL. I had to dig through it because the jar had been updated at some point but the Nix config had not, and it was failing in an odd place.


This should result in a hash-mismatch error rather than an output different from the previous one. If the original jar file can still be located (by hash match), it will still produce the same output as before.
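
That's the fixed-output-derivation mechanism: the fetch declares the expected content hash up front, and Nix fails the build if what comes back doesn't match. A fragment of what that looks like (URL and hash are illustrative):

```nix
# fetchurl is a fixed-output derivation: the URL may die or serve new bytes,
# but the build can only succeed with content matching this sha256.
src = fetchurl {
  url = "https://example.com/vuze.jar";
  sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
};
```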

Flakes fix this for Nix: they ensure builds are truly reproducible by capturing all the inputs (or blocking them).

Apparently I made a note of this in my laptop setup script (but not of when it happened, so I don't know how long ago this was). In case anyone was curious: the jar file was compiled with Java 16, but the Nix config was running it with Java 8. I assume both were Java 8 when it was set up and the jar file was upgraded later, but I don't really know what happened.

No, it doesn't. If the content of a URL changes, then the only way to have reproducibility is caching. You tell Nix the content hash is some value and it looks up that value in the Nix store. Note that it will match anything with that content hash, so it is absolutely possible to tell it the wrong hash.

Not having a required input, say when you try to reproduce a previous build of a package, is a separate issue from an input silently changing when you go to rebuild it. No build system can ensure a link stays up, only that what's fetched hasn't changed. The latter is what the hash in Nix is for. If it tries to fetch a file from a link and the hash doesn't match, the build fails.

Flakes, then, run in a pure evaluation mode, meaning you don't have access to things like the system triple, the current time, or env vars, and all fetching functions require a hash.


Buildkit has the same caching model. That's what I'm saying. It doesn't force you to give it digests like nix functions often do but you can (and should).

You can network-jail your builds to prevent pulling from external repos and force the build environment to define/capture its inputs.

Just watch out for built-at timestamps.
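
For instance, a sketch of per-step network isolation (the syntax directive opts into BuildKit's Dockerfile frontend; file names are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19
# Dependencies are fetched in a step that is allowed network access...
COPY packages.lock .
RUN apk add --no-cache $(cat packages.lock)
# ...while the build itself runs with no network, so any unpinned
# fetch fails loudly instead of silently varying between builds.
COPY . /src
RUN --network=none make -C /src
```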

SRE here. I feel like both are just instructions for getting from source code to an executable, with Docker/containers providing a "deployable package" even if the language doesn't compile into a self-contained binary (Python, Ruby, JS, Java, .NET).

Also, there is nothing stopping you from creating a container image that has make plus the tools required to compile your source code, then writing a Dockerfile that uses those tools to produce the output and leave it on the filesystem. Why that approach? Less friction for compiling: I find most make users have more pet build servers than cattle, and making modifications can carry a lot of friction due to conflicts.
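
A sketch of that pattern (tags and paths are illustrative): the image carries the toolchain, the Dockerfile just drives make, and `docker build --output` drops the artifact onto the host filesystem instead of shipping an image.

```dockerfile
FROM alpine:3.19 AS build
RUN apk add --no-cache make gcc musl-dev   # the "pet build server", containerized
WORKDIR /src
COPY . .
RUN make

# Export the build output to the host:
#   docker build --output=./dist --target=artifact .
FROM scratch AS artifact
COPY --from=build /src/bin/ /
```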


This is cool; is there a docs website available?


Not yet, as it's a work in progress, but we want to push initial documentation on how to set up the current version this week.


That is a downside. However, it's better to look at Tauri the way you might look at React Native. If complete consistency between platforms is required, Tauri still saves you time by letting you write a single codebase and then do a low-effort port to each platform. For many applications, Tauri apps will look the same between platforms out of the box.

That's the pitch, anyway. People can spend a lot of time on iOS vs Android in React Native applications, so YMMV.


It stops low-effort grifting of network-enabled open source for cash.


That should not be the litmus test for writing software. Lots of software can have one user: yourself. Stability and the ability to distribute be damned.


ts-node introduces its own bugs and ergonomics inconsistencies on top of node. I've used it on multiple projects in the past and ended up removing it as a dep when I've been able to.

Not saying anyone who is using ts-node should immediately jump to Deno. Their use cases don't fully overlap.


whinging about supply chain security is the bikeshedding topic of 2022


It wouldn't be better, it would be arbitrary


JPL is such an outlier it's not even worth talking about in this discussion


> thoroughly vetting a candidate

Either you work at startups or you thoroughly vet candidates, not both.

