
> There's no way to take a random executable piece of code and run it safely.

Right, because there is actual computer science theory that explains why this is difficult.



What CS theory is that?

The temptation is to cite the halting problem, but that's not it.

The Bell-LaPadula model[0] has been proven to provide security, but due to historical accident, you generally can't implement it on Linux, macOS, et al.
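To make the model concrete: Bell-LaPadula enforces two rules over ordered classification levels — the simple security property ("no read up") and the *-property ("no write down"). A minimal sketch in Python, with hypothetical level names chosen for illustration:

```python
# Classification levels ordered from lowest to highest.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    # Simple security property: a subject may read only at or
    # below its own level ("no read up").
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property: a subject may write only at or above its own
    # level ("no write down"), so secrets can't leak downward.
    return LEVELS[subject_level] <= LEVELS[object_level]

# A "secret" subject can read "confidential" data but not leak
# anything back down to it.
assert can_read("secret", "confidential")
assert not can_write("secret", "confidential")
```

The point of the proof is that if every access in the system passes these two checks, information provably cannot flow from a higher level to a lower one.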

The necessary CS theory to make things safe was developed in the 1970s, as I've stated over and over in various threads here[1-4] and elsewhere on the internet[5], ad nauseam. But we, the programmers, "software engineers", hackers, whatever you want to call us, don't apply those systems, and continue to pile layer upon layer of band-aids onto systems that are insecure by design.

It's like building all of your bunkers out of crates of TNT, wondering why they keep blowing up at the first rifle shot, then concluding you need thicker walls.

[0] https://en.wikipedia.org/wiki/Bell%E2%80%93LaPadula_model

[1] https://news.ycombinator.com/item?id=36717861

[2] https://news.ycombinator.com/item?id=36652789

[3] https://news.ycombinator.com/item?id=36623992

[4] https://news.ycombinator.com/item?id=36442874

[5] https://twitter.com/mikewarot/status/1607769510542544899


Rice's theorem. BTW, capabilities are cool, but it's very difficult to actually deploy them to real-world software without accidentally giving things more access than intended.


Rice's theorem can quickly prove that antivirus software is always a waste of time, so that's a plus. It cannot, however, do anything to help code jailbreak out of a process that has no access to the outside, no matter how evil or clever that code is.
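The antivirus point follows from the standard diagonalization argument: any total decider for a non-trivial semantic property (like "is this program malicious?") can be defeated by a program that consults the decider on itself and does the opposite. A toy sketch, with programs modeled as Python callables and a hypothetical `toy_decider` standing in for any claimed classifier:

```python
def diagonalize(decider):
    # Given any total decider that claims to classify programs as
    # malicious (True) or benign (False), construct a program the
    # decider must misclassify: it asks the decider about itself
    # and behaves the opposite way.
    def program():
        if decider(program):
            return "benign behavior"     # decider said malicious
        else:
            return "malicious behavior"  # decider said benign
    return program

def toy_decider(p):
    # Stand-in for any "antivirus" — this one calls everything benign.
    return False

p = diagonalize(toy_decider)
# toy_decider(p) -> False ("benign"), yet p() behaves maliciously.
```

The same construction works against any decider you substitute for `toy_decider`, which is why the theorem rules out a perfect malware classifier — but says nothing about confinement, which restricts what a process can *do* rather than predicting what it *is*.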

Think of capabilities as cash in your wallet... fairly easy to manage, as long as you can secure the wallet.
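In the object-capability style, "securing the wallet" means authority is carried only by unforgeable references: a component can touch exactly what it was handed, nothing more. A minimal sketch, with the names `WriteCap` and `untrusted_component` invented for illustration:

```python
class WriteCap:
    # An unforgeable reference granting write access to one sink.
    # Holding the object *is* the authority — like cash in a wallet.
    def __init__(self, sink):
        self._sink = sink

    def write(self, data):
        self._sink.append(data)

def untrusted_component(cap):
    # The component receives only this capability; there is no
    # ambient authority letting it open arbitrary files or sockets.
    cap.write("hello")

log = []
untrusted_component(WriteCap(log))  # can write to log, and only to log
```

The management burden the parent comment mentions shows up in exactly this handoff: every capability you pass is spending from the wallet, so wiring a large real-world program means auditing each grant.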



