It is insane to me that in 2025 there is no easy way for me to run a program that, say, "can't touch the filesystem or network". As you say, even a few simple, very coarse grained categories of capabilities would be sufficient for 95% of cases.
sure, the command line gets a bit verbose, but nothing that an alias or small wrapper couldn't solve
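to be concrete, on Linux the "no network" half is already one alias away, assuming your distro allows unprivileged user namespaces (the alias name and commands below are placeholders):

    # drop into a new user+network namespace; the only interface
    # inside is a downed loopback, so nothing can phone home
    alias nonet='unshare -r -n --'

    nonet curl https://example.com   # fails: network unreachable
    nonet ./some-computation         # runs fine, just offline

the filesystem half is where it gets verbose, hence the wrapper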
the big problem is that modern operating systems have huge surface area and applications tend to expect all sorts of things, so figuring out what you need to allow is often non-trivial
I'm curious what all you want to run that you don't want accessing the filesystem? Or the network?
Like, I get it for a few things. But it is a short path to wanting access to files I have created for other things. Pictures and videos are obvious examples of files I likely want to access from applications that didn't create them.
Similarly, it is a short hop from "I don't want any other application to be able to control my browser" to "except for this accessibility application, and this password management application, and ..." As we push more and more to happen in the browser, it should be little surprise that we need more and more things to interact with said browser.
I think you misunderstood. Among the coarse-grained capabilities I mentioned would be "access to folder X and its subfolders" (read or write).
But to answer your question there are, eg, tons of programming packages in any language that I want purely for their computational abilities, and I know this for certain when using them. In fact for the vast majority of GUI programs I use, or programming packages I use, I know exactly what kind of permissions they need, and yet I cannot easily restrict them in those ways.
It is specifically running applications that always trips me up here. As a user/operator of the computer, I have been bitten in the past by applications being too locked down to be useful. I /think/ operating systems have gotten better at surfacing it when they restrict an application. But sandboxing by default specifically has been a source of terrible application behavior for me. It is a lot like using a shadow-banned account: everything looks correct, but nothing actually shows up. Very confusing.
Now, I think your point on restricting the libraries that are imported into a program makes a ton of sense. I'm not entirely clear where I would want the "breaker box" of what an application is allowed to do to be located, but it is crazy how much just importing some library will do in many programs.
Well you are ofc free to give applications free rein if you want. But you should at least be able to say, "No, desktop calculator I just downloaded, you can't do anything but compute and draw things in your application window".
More broadly, creating a good UI around granting capabilities is non-trivial. But that's a separate problem from simply not being able to make even the most basic kinds of restrictions that you want in most cases.
Totally fair. I just don't know of that many (any?) "desktop calculator" applications that people download. I'm far more expecting that people are downloading and running social applications than they are isolated things.
Mostly fair that it would be good if we could say "on site foo.com, flag any request to not-foo.whatever that happens." The last time I looked at the sheer number of third-party network accesses that happen on far too many sites, it was sobering.
Gaming, I'm willing to largely get behind as something that should be more locked down. Networked games, of course, are a thing. Single player games should be a lot more isolated, though.
Any sort of editing software, though, gets tough. That is precisely the area where I have had bad experiences in the past. I would try to edit raw photos and export them to a place I could draw on or publish them from. Using a shadow-banned application is the only way I know to describe how that felt.
Now there is a more or less sophisticated permission system, which users then bypass by accepting any prompt so long as you promise them something shiny...
I actually am less against these ideas on the phone. Quite the contrary: I largely agree that more effort needs to go into letting people control those.
I am also sadly skeptical that this works there. My family is all too eager to just click "ok" on whatever an app says it needs. :(
I think the ideas in Qubes OS (https://www.qubes-os.org/) are reasonable in implementation given today's applications and the need for backwards compatibility.
Unfortunately, performance is what suffers, and Moore's law hasn't kept up enough for a VM-based OS to be usable by the regular layman.
I think this sort of stuff implies a switch to new APIs that understand that the app is inside a sandbox, instead of "just trying to do things". For example, XDG Portals for opening/saving documents, instead of open(2) syscall.
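Roughly, from memory (the portal is just a D-Bus service, so you can even poke it from a shell; treat the exact arguments as a sketch):

    # ask the FileChooser portal to show an open dialog; the app gets
    # back only the file the user picked, not blanket fs access
    gdbus call --session \
      --dest org.freedesktop.portal.Desktop \
      --object-path /org/freedesktop/portal/desktop \
      --method org.freedesktop.portal.FileChooser.OpenFile \
      "" "Open" "{}"
    # this returns a request handle; the chosen file arrives
    # asynchronously via a Response signal on that handle

The point being: the dialog runs outside the sandbox, and the app only ever sees what the user explicitly handed it.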
Anything, really. I tried to print from a sandboxed application the other day and it turns out it can't be done - or at least it can't be done by a "common user". As an educated person who has used unixes of all sorts, I could probably figure it out, but it isn't something I should have to figure out in 2025. (It was a pain in 1998, but it worked better than the snap sandboxes of today, despite snap being pushed by an organization that claims ease of use is important.)
> Right, but this is largely to my point? I said in another thread that sandboxing often feels like being shadow banned on your own computer.
> I get wanting "safe" computers. I'm not clear that we can technically define what legally "safe" means, though. :(
You are currently using a web browser. When you go to ycombinator, the site cannot read the contents of your email in the next tab. That isn't a shadow ban on your own machine; it's just a reasonable restriction.
Imagine you just installed a new web browser (or pdf reader, tax software, video game, ...). It should not be able to read and send all the pictures in your camera roll to a third party.
> Imagine you just installed a new web browser (or pdf reader, tax software, video game, ...). It should not be able to read and send all the pictures in your camera roll to a third party.
But I use my web browser to upload my photos to the cloud, so it absolutely should.
(I do somewhat agree with the general point, but I find it very funny that your very first example would break my workflow, and I do think that highlights the problem with trying to sandbox general-purpose programs)
Cell phones show this can be done: you can pick individual files or sets of files using the system file picker, and that one file (and only that file!) is opened for the browser.
If it needs more, there is always an "access all photos" permission, and "access all files" too... but these are explicit and require a user prompt. And the last part is very important: if a freshly installed browser demands full file access without explanation, it is likely spyware, so uninstall it and leave a bad review.
Moving out of the world of "applications" into shell commands, we're gonna need a new shell that understands that `wget -O myfile https://example.com` needs to be handed a capability to write data, or we need to change our habits a lot and always shuffle everything over pipes or such. In either scenario, if you want that level of granularity, I don't think UNIX will survive as we remember it.
(More likely path for now: start a new sandbox, run things in it, put result files in an "outbox", quit sandbox, consume files from outbox. Also not very convenient with current tools.)
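With bubblewrap that pattern looks something like this (the flags are real bwrap flags, but the tool name and paths are placeholders):

    # read-only view of the whole system, no network, and exactly one
    # writable directory for results
    mkdir -p ~/outbox
    bwrap --ro-bind / / \
          --dev /dev --proc /proc \
          --tmpfs /tmp \
          --bind ~/outbox ~/outbox \
          --unshare-net \
          some-untrusted-tool --output ~/outbox/result.dat

Workable, but you can see why nobody does this for every `wget`.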
Most things you run in a pipeline don't need access to the filesystem or the network.
Something dangerous like ffmpeg would be better if the codecs were running without access to files or the network, although you'd need a not fully sandboxed process to load the media in the first place.
Many things do need file access, but could work well with an already opened fd, rather than having to open things themselves (although forcing that results in terrible UX).
Of course, filesystem access gets tricky because of dynamic loading, but let's pretend that away for now.
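To make the fd idea concrete with ffmpeg: the shell (or a wrapper) opens the files, and the ffmpeg process itself only ever touches stdin and stdout, so it would have nothing to leak even inside a no-filesystem sandbox:

    # ffmpeg never opens a path; it reads fd 0 and writes fd 1
    # (-f wav is needed since a pipe has no extension to infer from)
    ffmpeg -i pipe:0 -f wav pipe:1 < input.mp3 > output.wav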
ping(8) has no particular access to the filesystem, and can only do inet and stdio. At least on OpenBSD. I have a modified version of vi(1) that cannot write outside of /tmp or ~/tmp, nor access the internet, nor can it run programs. Other text editors could easily access ~/.ssh keys and cloud them. Whoops? sshd(8) and other daemons use privsep so that the likely to be exploited bits have no particular access to the system, only pipes off to other portions of the OpenSSH complex.
Maybe if rsync were better designed exploits could be better contained; alas, there was a recent whoopsiedoodle—an error, as Dijkstra would call them—and rsync can read from and write to a lot of files, do internet things, execute whatever programs. A great gift to attackers.
It may help if the tool does one thing and one thing well (e.g. the unix model, as opposed to the "I can't believe it's not bloat!"™ model common elsewhere) as then you can restrict, say, ping to only what it needs to do, and if some dark patterner wants to shove ads, metrics, and tracking into ls(1) how about a big fat Greek "no" for those network requests. It may also help if the tool is designed (like, say, OpenSSH) to be well partitioned, and not (like, say, rsync) to need the entire unix kitchen.
Image libraries have had quite a few CVE or whoopsiedoodles over the years, so there could be good arguments made to not allow those portions of the code access to the network and filesystem. Or how about a big heap of slow and expensive formal verification… what's that, someone with crap security stole all your market share? Oh, well. Maybe some other decade.
A non-zero number of people feel that "active content" e.g. the modern web is one of the worst security missteps made in the last few decades. At least flash was gotten rid of. So many CVE.
P.S. web browsers have always sucked at text editing, so this was typed up in vi yielding a file for w3m to read. No, w3m can't do much of anything besides internet and access a few narrow bits of the filesystem. So, for me, web browsers are very much in the "don't want to access the filesystem" category. I can also see arguments for them not having (direct) access to the network, to avoid mixing the "parse the bodge that is HTML and pray there are no exploits" with the "has access to the network" bits of the code, but I've been too lazy to write that as a replacement for w3m.
Simple example: Third party SW in a corporate context. Maybe you want to extend some permissions to some internal sites/parts of the FS, but fundamentally, there's limited trust.
This is an odd one. At face value, I want to agree. At the same time, if you don't trust the operator of the computer with access to data, why are we also worried about programs they run? If you don't trust them with access, then just don't give them access?
I'm open to the idea that some people are locked down such that they can't install things. And, that makes a lot of sense. You can have a relationship that is basically, "I trust them with access to data running this closed set of applications." Managing system configurations makes a ton of sense.
But, as soon as you place full trust for system management in a group, you start getting into odd worlds where you want to allow them full access but stop unauthorized use. And we don't have a way to distinguish use from access for most data.
Trusting the user does not transitively extend to the software they use. You might be OK with them e.g. looking at company financials, but you'd really like to be sure that e.g. the syntax highlighter they use doesn't go and exfil that data. You still want them to be able to use the syntax highlighter. (Yes, it's an obviously made-up example.)
You _can_ fully vet apps, each and every one. Or you can choose a zero-trust approach and only vet the apps where it's necessary to extend trust.
The key requirement to solve this problem is ensuring that third-party libraries get a subset of the permissions of the code calling them. E.g. my photo editor might need read and write access to my photo folder, but the third-party code that parses jpegs to get their tags needs only read access, and shouldn't have the ability to encrypt my photos and make ransom demands.
Deno took a step in a good direction, but there was an opportunity to go much further and meaningfully address the problem, so I was a bit disappointed that it only controls restrictions at the process level.
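For reference, what Deno gives you is per-process, not per-import (the flags are real; the paths and host are made up):

    # the whole process gets read-only access to one folder and may
    # talk to one host; every import inside shares those permissions
    deno run --allow-read=./photos --allow-net=api.example.com main.ts

So the jpeg-parsing dependency still inherits everything the photo editor itself was granted.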
Kind of two different things being addressed here. The article is talking about doing this at the granularity of preventing imported library code from having the same capabilities as the caller, which requires support from the language runtime, but the comment being responded to was saying there is no way in 2025 to run a program and keep it from accessing the network or the filesystem.
That is simply not true. There are many ways to do that, several of which have already been mentioned: SELinux. Seccomp profiles. AppArmor. Linux containers (whether that be OCI, bubblewrap, snap, AppImage, or systemd-run). Pledge and jails.
These are different concerns. One is software developers wanting to code into their programs upper limits on what imported dependencies can do. That is poorly supported and mostly not possible outside of research systems. The other is end users and system administrators setting limits on what resources running processes can access and what system calls they can make. That is widely supported, with hundreds of ways to do it, and the main reason it is perceived as complicated is that software usually assumes it can do anything, doesn't tell you what it needs, and figuring it out as an end user is an endless game of whack-a-mole with broken installs.
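To pick one concrete instance from that list, a systemd-run one-liner (the property names are real; run via sudo since transient services need privileges for these, and the program path is a placeholder):

    # no network, read-only system, no access to home, private /tmp
    sudo systemd-run --pty \
      -p PrivateNetwork=yes \
      -p ProtectSystem=strict \
      -p ProtectHome=yes \
      -p PrivateTmp=yes \
      /path/to/untrusted-program

And then you discover the program falls over because it assumed it could write somewhere, which is the whack-a-mole part.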
Deno controls access at the process level, so it's better than nothing but it doesn't really help with this specific problem. Also it delegates setting the permissions up to the user, and we know that in practice everyone is just going to --allow-all.
OpenBSD has had “pledge” for quite a while. I think it’s a good idea, I wish it was supported by Linux because, as you note, a few basic patterns could help immensely.
Doing this at the program level is implemented on Linux by SELinux, which defines mandatory access controls (i.e. limits on capabilities). It was difficult to get the default policies right while keeping a distro functioning smoothly, but it is enabled by default in Fedora.
To enable this at the language level would require an enforcement mechanism in the language VM or the OS. Enforcement at that level carries more overhead, but the safety benefits within a language may be worth it.
Windows Pro also has a sandbox (Windows Sandbox) that can disable filesystem and network access, but they wasted the opportunity by allowing only one sandbox instance at a time.