Your claims here are inaccurate. You can pass flags or define environment variables to get the behavior you want. Please spend some more time hitting the man pages and the guide.
> It indeed does not enforce (or even permit) robust isolation between the containers and the host, leaving large portions exposed. … In more detail, directories such as /home, /tmp, /proc, /sys, and /dev are all shared with the host, environment variables are exported as they are set on the host, the PID namespace is not created from scratch, and the network and sockets are shared with the host as well. Moreover, Singularity maps the user outside the container to the same user inside it, meaning that every time a container is run the user UID (and name) can change inside it, making it very hard to handle permissions.
I actually went into every single line of the manuals and even discussed the matter on the official Singularity Slack.
In that blog post I wrote that it does not enforce isolation. It is true that you can achieve some level of isolation by setting certain flags and environment variables explicitly, but this is (was?) quite hard to get working. Moreover, the user mapping inside the container is always host-dependent, and there is just no network isolation.
To achieve something close to the behaviour "I wanted", I had to use a combination of the command line flags you mentioned (in particular --cleanenv, --containall and --pid) together with custom-made, ad-hoc runtime sandboxing for directories that required write access (such as /tmp and /home).
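For reference, this is a sketch of the kind of invocation I ended up with; the sandbox paths are illustrative, not the exact ones I used:

    # Host-side sandbox directories for paths that need write access
    SANDBOX=/scratch/$USER/sandbox
    mkdir -p "$SANDBOX/tmp" "$SANDBOX/home"

    # --cleanenv:   do not pass host environment variables into the container
    # --containall: contain file systems plus PID, IPC and environment
    # --pid:        run the container in its own PID namespace
    singularity run --cleanenv --containall --pid \
        --bind "$SANDBOX/tmp":/tmp \
        --bind "$SANDBOX/home":/home/"$USER" \
        my_container.sif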
However, this is not the default behaviour, and it is not how Singularity is used in practice. But yes, I was able to achieve something close to the behaviour I wanted [1].
That said, if I am missing something, or if the project has evolved to allow a better level of isolation by default, please let me know. That blog post is dated 2022, after all.
I agree to a certain extent. However, it's hard to ensure that dependencies work the right way without isolation. These two support tickets showcase the essence of the problem: "Same container, different results" [1] and "python3 script fails in singularity container on one machine, but works in same container on another" [2]. In my experience with Singularity, there were many issues like these.
I am not sure why they had to call it a "containerization" solution. It gets a bit philosophical, but IMO containers are meant to "contain", not just to package. To me, Singularity is more a "virtual environment on steroids", and it works great in that sense. But it doesn't "contain".
The hard truth is that Singularity was designed more to address a cultural problem in the HPC space (adoption friction and pushback against new, "foreign" technologies) than to engineer a proper solution to the dependency hell problem.
HPC clusters still use Linux users and shell access, meaning that it is up to the user to run the container: there is just no container orchestration. The user has to issue a command like "singularity run" or "docker run". And until not long ago, letting users do a "docker run" meant adding them to the docker group, which is a near-root access group. Just not doable.
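To see why docker group membership is effectively root, here is the classic demonstration (any image with a shell would do, alpine is just an example):

    # Any member of the docker group can bind-mount the host root
    # filesystem into a container and get a root shell on the host:
    docker run --rm -it -v /:/host alpine chroot /host /bin/sh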
Singularity also works more or less out of the box with MPI in order to run parallel workloads, either locally or on multiple nodes. However, this comes at a huge price: it relies on doing an "mpirun singularity run", and it requires having the same MPI version inside and outside the container. To me, this is more a hacky shortcut than a reliable solution.
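This is the so-called "hybrid" model; schematically (the binary name is just an example):

    # The host mpirun launches one container per rank; the MPI stack
    # inside the image must match the host one for the wire-up to work.
    mpirun -n 4 singularity exec my_container.sif ./my_mpi_app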
I believe that the definitive solution in the HPC world will be to let HPC queuing systems run and orchestrate containers on behalf of the users (including for MPI workloads), thus making it possible to use any container engine or runtime, including Docker. I did some trials and it works well, almost completely solving the dependency hell problem and greatly improving scientific reproducibility. A solution like the one presented in the OP contributes to the discussion towards this goal, and I personally welcome it.
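As an illustration of the direction I mean (not what the OP does): on a Slurm cluster with NVIDIA's pyxis plugin installed, the scheduler itself pulls the image and starts one container per task, MPI wire-up included. Image reference and binary name here are illustrative:

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=4
    # Slurm, not the user, runs the container engine here.
    srun --container-image=docker://my_registry/my_image:latest ./my_mpi_app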
With respect to Singularity, I think they just had to name the project "singularity environments" rather than "singularity containers" and everything would have been much clearer.
[1] https://sarusso.github.io/blog/container-engines-runtimes-or...