
I think we should seriously consider something like a TS clearance as mandatory for work on core technologies. Many other projects, both open and closed, are probably compromised by foreign agents.


> I think we should seriously consider something like a TS clearance as mandatory for work on core technologies.

Was xz/lzma a core technology when it was created? Is my tiny "constant time equality" Rust crate a core technology? Even though it's used by the BLAKE3 crate? By the way, is the BLAKE3 crate a core technology? Will it ever become a core technology?

With free software in general, things do not start as a "core technology"; they become a "core technology" over time due to usage. At which point would a maintainer have to get a TS clearance? Would the equivalent of a TS clearance from my Latin American country be acceptable? And how would I obtain it? Is it even available to people outside the military and government (legit question, I never looked)?
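For reference, the whole idea behind that crate fits in a few lines. Here's a minimal sketch of the technique, not the crate's actual code (a real implementation also has to keep the compiler from optimizing the loop away):

    /// Compare two byte slices in constant time: accumulate the XOR
    /// of every byte pair so the loop never exits early, which avoids
    /// leaking the position of the first mismatch through timing.
    pub fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false;
        }
        let mut diff: u8 = 0;
        for (x, y) in a.iter().zip(b.iter()) {
            diff |= x ^ y;
        }
        diff == 0
    }

Does a dozen lines like that become a "core technology" the moment a popular hash crate picks it up?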


We probably shouldn't use your code at all, is the real answer. You can get TS, it just costs a lot of money.


In the United States, you cannot apply for a clearance. You must get a job that requires a clearance, then start the application process and wait.


Who is "we"? Are you from the US by any chance? Do you mean that the US government should rewrite every piece of core architecture (including Linux, Ssh, Nginx...) from scratch? Because they are all "contaminated" and actually were created by non-americans.

If that's the case, you do you. Do you also think that all other countries should do the same, and rewrite everything from scratch for their government use (without foreign, for example American, influence)? And what about companies? Should they be forced to switch to their government's "safe" software, or can they keep using Linux and SSH? What about multinational companies? And what even counts as "core" software?

So yeah, I don't think it's a good idea.


We can keep it between NATO plus friends.


Wow, I can't decide which is the bigger act of sabotage to open source, your ideas or the actual backdoor.


The Linux kernel, one of the highest-visibility projects out there, is complaining about a lack of funding for CI. Where will the money come from for this?

Corps? Aside from Intel, most of them barely pay to upstream their drivers.

The govt? The US federal government has cut so much of its support since the 70s and 80s.


You're right, but accepting code from random Gmail accounts can't be the solution. Honestly, the Linux kernel is a bloated mess and will probably never be secured.


Accepting code from any source without properly reviewing it is surely the actual problem, no? This person only infiltrated this project because there was no proper oversight.

Maintainers need to be more stringent and vigilant about the code they ship, and core projects that many other projects depend upon should receive better support, financial and otherwise, from users, open source funds and companies alike. This is a fragile ecosystem that this person managed to exploit, and they likely weren't the only one.


Maintainers can't fully review all code that comes in. They don't have the resources. Even if they could give it a good review, a good programmer could probably still sneak stuff in. That's assuming a maintainer wasn't compromised, like in this case. We need a certain level of trust that the contributors are not malicious.


Definitely this.

I’ve been a package maintainer for a decade. I make it a habit to spot-check the source code of every update of every upstream package, hoping that if many others do the same, it might make a difference.

But this backdoor? I wouldn’t have been able to spot it to save my life.
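One check that would at least have flagged the delivery mechanism, in hindsight: diffing release tarballs against the corresponding git tag, since the malicious build script shipped only in the xz tarballs and never appeared in the repo. A rough sketch of that check (hypothetical tag and paths; assumes git, tar and diff are on PATH):

    use std::process::Command;

    fn main() {
        // Hypothetical release under review; substitute as needed.
        let tag = "v5.6.0";
        let tarball = "xz-5.6.0.tar.gz";

        // Extract the tagged tree from a local clone of upstream.
        std::fs::create_dir_all("from-git").unwrap();
        let status = Command::new("sh")
            .arg("-c")
            .arg(format!("git archive --format=tar {tag} | tar -x -C from-git"))
            .status()
            .expect("git archive failed to start");
        assert!(status.success(), "git archive | tar failed");

        // Unpack the release tarball next to it.
        std::fs::create_dir_all("from-tarball").unwrap();
        let status = Command::new("tar")
            .args(["-xzf", tarball, "--strip-components=1", "-C", "from-tarball"])
            .status()
            .expect("tar failed to start");
        assert!(status.success(), "tar extraction failed");

        // Tarball-only files (like a doctored .m4 script) show up here.
        let status = Command::new("diff")
            .args(["-r", "from-git", "from-tarball"])
            .status()
            .expect("diff failed to start");
        if !status.success() {
            eprintln!("tarball differs from tag {tag}; review the diff above");
        }
    }

Even then, autotools tarballs legitimately differ from the repo (generated configure scripts and so on), so the diff is noisy and easy to wave through.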


This wasn't caused by not reviewing the code of a dependency. This was a core maintainer of xz, who gradually gained trust and control of the project, and was then able to merge changes with little oversight. The failure was in the maintenance of xz, which would of course be much more difficult to catch in dependent projects. Which is why it's so impressive that it was spotted by an OpenSSH user. Not even OpenSSH maintainers noticed this, which points to a failure in their processes as well, to a lesser degree.

I do agree that it's unreasonable to review the code of the entire dependency tree, but reviewing one's own code thoroughly, and direct dependencies at least casually, should be the bare minimum we expect of maintainers.


> Not even OpenSSH maintainers noticed this, which points to a failure in their processes as well, to a lesser degree.

The OpenSSH project has nothing to do with xz. The transitive dependency on liblzma was introduced by a patch written by a third party. [1] You can't hold OpenSSH project members accountable for something like this.

[1]: https://bugs.debian.org/778913


Alright, that's fair. But I mentioned them as an example. Surely liblzma is a dependency in many projects, and _none_ of them noticed anything strange, until an end user did?

This is a tragedy of the commons, and we can't place the blame on any single project besides xz itself, yet we can all share part of the blame and collectively do better in the future.


One of the primary responsibilities of a maintainer is to ensure the security of the software. If they can't keep up with the pace of development in order to ensure this for their users, then this should be made clear to the community, and a decision should be made about how to proceed. Open source maintenance is an often stressful and thankless role, but that is part of the problem that allowed this to happen. Sure, a sophisticated attacker would be able to fool the eyes of a single tired maintainer, but the chances of that happening are much smaller if there's a stringent minimum bar of quality, and at least one maintainer understands the code that is being merged in. Change proposals should never be blindly approved, regardless of who they come from.

At the end of the day we have to be able to answer why this happened, and how we can prevent it from happening again. It's not about pointing fingers, but about improving the process.

BTW, there have been several attempts at introducing backdoors in the Linux kernel. Some managed to get through, and perhaps we don't know about others, but many were thwarted due to the extreme vigilance of maintainers. Thankfully so, as everyone is well aware of how critical the project is. I'm not saying that all projects have the resources and visibility of Linux, but clearly vigilance is a requirement for lowering the chances of this happening.

> That's assuming a maintainer wasn't compromised, like in this case.

What makes you say that? Everything I've read about this (e.g. [1]) suggests that this was done by someone who also made valid contributions and gained gradual control of the project, where they were allowed to bypass any checks, if they existed at all. The misplaced trust in external contributions, and the lack of a proper peer review process are precisely what allowed this to happen.

[1]: https://boehs.org/node/everything-i-know-about-the-xz-backdo...


My understanding is that the attacker was the only maintainer of xz, and was trusted by upstream maintainers. They couldn't realistically check his work. The defence against this can't be "do better, volunteer maintainers". Maybe we could have better automated testing and analysis, but OSS is allergic to those.


Sure, I'm not saying this is the only solution, or that it's foolproof. But this should be a wake up call for everyone in the OSS community to do better.

Projects that end up with a single maintainer should raise some flags, and depending on their importance, help and resources should be made available. We've all seen that xkcd, and found it more amusing than scary.

One idea to raise awareness: a service that scans projects on GitHub and elsewhere, and assigns maintenance scores based on various factors. The bus factor should be a primary one. Make a scoreboard, badges, integrate it into package managers and IDEs, etc. GitHub itself would be the ideal company to implement this, if they cared about OSS as much as they claim to.
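To make that concrete, here's a toy version of one such signal, run against a local clone. The metric is hypothetical and deliberately crude; a real scoring service would also weigh commit recency, review activity, organizational backing, and so on:

    use std::process::Command;

    fn main() {
        // `git shortlog -sn HEAD` prints one "<count>\t<author>" line
        // per author, sorted by commit count.
        let out = Command::new("git")
            .args(["shortlog", "-sn", "HEAD"])
            .output()
            .expect("failed to run git shortlog");
        let text = String::from_utf8_lossy(&out.stdout);

        // Keep just the per-author commit counts.
        let counts: Vec<u64> = text
            .lines()
            .filter_map(|line| line.split_whitespace().next())
            .filter_map(|n| n.parse().ok())
            .collect();

        let total: u64 = counts.iter().sum();
        let top = counts.iter().copied().max().unwrap_or(0);
        if total > 0 {
            // One author with ~100% of commits is that xkcd dependency:
            // a bus factor of one.
            println!(
                "top author: {:.0}% of {} commits ({} authors total)",
                100.0 * top as f64 / total as f64,
                total,
                counts.len()
            );
        }
    }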


Okay, so instead of one random Gmail account taking over a critical project, we need two or three?


That's a very US-centric view, and it would practically split the open source community along the Atlantic at best and fracture it globally at worst. Be careful what you wish for.


I trust NATO members.


Oh, how generous.


Even Turkey?


That's hard to do when the development of these libraries is so international. Not to mention that it's already so hard to find maintainers for some of these projects. Given that getting a TS clearance is such a long and difficult process, it would almost guarantee more difficulty in finding people to do this thankless job.


It doesn't need to be TS for open source (but for closed, I'm leaning yes). But all code for these core technologies needs to be tied to a real person who can be charged in Western nations. Yes, it will make it harder to get people, but with how important these technologies are, we really should not be using some random guy's code in the kernel.


Don't forget that the NSA bribed RSA (the company) to insert a backdoor into their RNG. Being in a Western jurisdiction doesn't mean you won't insert backdoors into code. It just changes whom you will target with those backdoors. But they all equally make our technology less trustworthy, so they are all equally despicable.


It will significantly cut down on Russian and Chinese backdoors, which is still an improvement, Mr. Just Made an Account.


That just means the bad actors will all have clearance while putting in a bunch of hurdles for amateur contributors. The only answer is the hard one, constant improvement in methods to detect and mitigate bugs.


"Constant improvement" sounds like "constantly playing catch-up". Besides that, someone with TS can be arrested and charged, and I don't want amateur contributors.


>and I don't want amateur contributors.

And you're free to not accept amateur contributions to the OS projects you maintain. Hell, you can require security clearance for your contributors right now, if you want.


Software like that already exists. I'm saying open source should do better.


This only ensures the backdoors are coming from governments that issued the clearances, nothing more. I prefer more competition, at least there is incentive to detect those issues.


It will ensure that my OS doesn't have code from random Gmail accounts. If someone with U.S. clearance submits a backdoor, they should either be charged in the U.S. or extradited to somewhere that will charge them. We have no idea who this person is, and even if we did, we probably could not hold them accountable.


How many people in PRISM had such clearance? And how many of them would I trust? Precisely zero.


Killing your pipeline for innovation and talent development doesn't make you secure, it makes you fall behind. The Soviet Union found this out the hard way when they made a policy decision to steal chip technology instead of investing in their own people. They were outpaced and the world came to use chips, networks, and software designed by Americans.


That's the exact opposite of what I'm saying we do. We need to invest in engineers we can trust, and cut off those we can't.


Who's "we"? Americans? Sure, that's fine for you, but Americans aren't exactly trustworthy outside of the US either, and I say that as someone who's usually pro-US. This sort of mentality just shows a lack of understanding of how most of the world sees the US. Even in places like, say, France, the US is seen as an ally, but a very untrustworthy one. Especially since, out of all the confirmed backdoors up until now, most of them were actually US-made.

If this backdoor turns out to be linked to the US, what would your proposal even solve?


"We" doesn't have to be the U.S. This is a false dichotomy that I see people in this thread keep pushing. I suspect in bad faith, by the people that want to insert backdoors. As a baseline, we could keep the contributors to NATO and friends. If a programmer is caught backdooring, they can be charged and extradited to and from whatever country.


If it's just an extradition issue, the US has extradition treaties with 116 countries. You'd still have to 1) ensure that the user is who they say they are (an ID?), 2) that they are reliable, and 3) that no one has compromised their accounts.

1) and 3) (and, to an extent, 2)) are routinely done, to some degree, by your average security-conscious employer. Your employer knows who you are and probably put some thought into how to avoid your accounts getting hacked.

But what is reliability? Could be anything from "this dude has no outstanding warrants" to "this dude has been extensively investigated by a law enforcement agency with enough resources to dig into their life, finances, friends and family, habits, and so on".

I might be willing to go through these hoops for an actual, "real world" job, but submitting myself to months of investigation just to be able to commit to a GitHub repository seems excessive.

Also, people change, and you would have to keep track of everyone all the time, in case someone gets blackmailed or otherwise persuaded to do bad things. And what happens if you find out someone is a double agent? Rolling back years of commits can be incredibly hard.


Getting a TS equivalent is exactly what helps minimize the chances that someone is compromised. Ideally, such an investigation would be transferable between jobs/projects, like a normal TS clearance is. If someone is caught, then yes, rolling back years isn't practical, but we probably ought to look very closely at what they've done, as is probably being done with xz.


I guess it depends on the ultimate goal.

If the ultimate goal is to avoid backdoors in critical infrastructure (think government systems, the financial sector, transportation,...) you could force those organizations to use forks managed by an entity like CISA, NIST or whatever.

If the ultimate goal is to avoid backdoors in random systems (i.e. for "opportunistic attacks"), you have to keep in mind random people and non-critical companies can and will install unknown OSS projects as well as unknown proprietary stuff, known but unmaintained proprietary stuff (think Windows XP), self-maintained code, and so on. Enforcing TS clearances on OSS projects would not significantly mitigate that risk, IMHO.

Not to mention that, as we now know, allies spy and backdoor allies (or at least they try)... so an international alliance doesn't mean intelligence agencies won't try to backdoor systems owned by other countries, even if they are "allies".


The core systems of Linux should be secured, regardless of who is using it. We don't need every single open source project to be secured. It's not okay to me that SSH is potentially vulnerable, just because it's my personal machine. As for allies spying on each other, that certainly happens, but is a lot harder to do without significant consequences. It will be even harder if we make sure that every commit is tied to a real person that can face real consequences.


The "core systems of Linux" include the Linux kernel, openssh, xz and similar libraries, coreutils, openssl, systemd, dns and ntp clients, possibly curl and wget (what if a GET on a remote system leaks data?),... which are usually separate projects.

The most practical way to establish some uniform governance over how people use those tools would involve a new OS distribution, kinda like Debian, Fedora, Slackware,... but managed by NIST or an equivalent, which takes whatever it wants from upstream and enriches it with other features.

But it doesn't stop here. What about browsers (think about how browsers protect us from XSS)? What about glibc, major interpreters and compilers? How do you deal with random Chrome or VS Code extensions? Not to mention "smart devices"...

Cybersecurity is not just about backdoors, it is also about patching software, avoiding data leaks or misconfigurations, proper password management, network security and much more.

Relying on trusted, TS-cleared personnel for OS development doesn't prevent companies from using 5-year-old distros, choosing predictable passwords, or exposing critical servers to the Internet.

As the saying goes, security is not a product, it's a mindset.


We wouldn't have to change the structure of the project to ensure that everyone is trustworthy.

As for applications beyond the core system, that would fall on the individual organizations to weigh the risks. Most places already have a fairly limited stack and do not let you install whatever you want. But given that the core system isn't optional in most cases, it needs extra care. That's putting aside the fact that most projects are worked on by big corps that do go after rogue employees. Still, I would prefer if some of the bigger projects were more secure as well.

Your "mindset" is basically allowing bad code into the Kernel and hoping that it gets caught.


>Your "mindset" is basically allowing bad code into the Kernel and hoping that it gets caught.

Not at all. I'm talking about running more and more rigorous security tests because you have to catch vulnerabilities, 99% of which are probably introduced accidentally by an otherwise good, reliable developer.

This can be done in multiple ways. A downstream distribution which adds its own layers of security tests and doesn't blindly accept upstream commits. An informal standard for open source projects, kinda like all those GitHub projects with coverage badges shown on the main repo page. A more formal standard, forcing some critical companies to only adopt projects with a standardized set of security tests and a sufficiently high score. All these approaches focus on the content, not the authors, since a well-intentioned developer can still introduce critical vulnerabilities (not the case here, apparently, but it happens all the time).

On top of that, however, you should also invest in training, awareness, and other "soft" issues that are crucial to actually improving cybersecurity. Using the most battle-tested operating systems and kernels is not enough if someone puts sensitive data in an open S3 bucket, or only patches their systems once a decade, or uses admin/admin on an Internet-facing website.


This seems infeasible for projects like LLVM that depend on international collaboration.



