
I made no mention of Docker, VMs or any virtualization system. Those would be an implementation detail and would obviously change over time.

A container can be a .tar.gz, a zip, or a disk image of artifacts, code, data, and downstream deps. The generic word has been co-opted to mean a specific thing, which is very unfortunate.
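To make the generic sense concrete, here is a minimal sketch (Python, with hypothetical paths and file names) of producing such an archive: one .tar.gz carrying the code, the data, the vendored deps, and a record of the toolchain versions.

    import tarfile

    # Hypothetical layout; anything the results depend on goes into one archive.
    members = [
        "src/",             # the analysis code itself
        "data/",            # input data, or a manifest of where to fetch it
        "deps/",            # vendored third-party libraries, pinned versions
        "environment.txt",  # exact interpreter/toolchain versions used
    ]

    with tarfile.open("project-archive.tar.gz", "w:gz") as tar:
        for path in members:
            tar.add(path)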



My point, which I guess I did not make clearly enough, is that container systems don't necessarily exist or remain supported over the ten-year period being discussed. The idea of ironing over long-term compatibility issues using a container environment seems like a great one! (For the record, .tgz, the "standard" format for scientific code releases in 2010, does not solve these problems at all.)

But the "implementation detail" of which container format you use, and whether it will still be supported in 10 years, is not an implementation detail at all -- since this will determine whether containerization actually solves the problem of helping your code run a decade later. This gets worse as the number, complexity and of container formats expands.

Of course, if what you mean is that researchers should provide perpetual maintenance for their older code packages, moving them from one obsolete platform to a more recent one, then you're making a totally different and very expensive suggestion.


Of course, of course. I am not trying to boil the ocean here, or we would have a VM like wasm and an execution env like wasi and run all our 1000-year code inside of that.

The first step is just having your code, data, and deps in an archive. Depending on the project and its age, more stuff makes it into the archive. I have been on projects where the source to the compiler toolchain was checked into the source repo and the first step was to bootstrap the tooling (from a compiler binary checked into the repo).
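As a sketch of that bootstrap step (hypothetical file names, Python standing in for whatever build script such a project actually used): unpack the compiler binary that lives in the repo, then use it to build the rest.

    import subprocess
    import tarfile

    # Hypothetical: the repo carries a known-good compiler binary,
    # so step one is to unpack it and build everything else with it.
    with tarfile.open("toolchain/cc-binary.tar.gz") as tar:
        tar.extractall("build/toolchain")  # archive comes from our own repo

    subprocess.run(
        ["build/toolchain/bin/cc", "-O2", "-o", "build/analysis", "src/analysis.c"],
        check=True,
    )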

We aren't even to the .tar.gz stage yet.



