Hacker News

I think a problem is that there doesn't seem any way to automatically check this. If we assume that anything that is used during build time can be malicious then figuring out those dependencies is already hard enough. Mapping that to organizational stability is one step further.
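Enumerating everything that can touch a build is at least mechanically sketchable, even if mapping it to organizational stability is not. Below is a minimal transitive walk over a hypothetical build-dependency map; the package names and graph are illustrative stand-ins, not real distro metadata, which would come from a package manager and be far larger and messier:

```python
from collections import deque

# Hypothetical build-dependency graph: package -> packages pulled in at build time.
# Real data would come from distro/build-system metadata, not a hand-written dict.
BUILD_DEPS = {
    "openssh": ["libsystemd"],
    "libsystemd": ["liblzma"],
    "liblzma": [],
}

def transitive_build_deps(package: str) -> set[str]:
    """Return every package reachable through build-time dependencies (BFS)."""
    seen: set[str] = set()
    queue = deque(BUILD_DEPS.get(package, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(BUILD_DEPS.get(dep, []))
    return seen

print(sorted(transitive_build_deps("openssh")))  # ['liblzma', 'libsystemd']
```

Even this toy graph shows the problem: "openssh" never names liblzma directly, yet liblzma ends up in its trust boundary two hops away. Deciding which of those transitive nodes is a one-maintainer hobby project is the part no tool currently answers.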


This is where the "but FOSS is reviewable, so it is trusted" argument falls down. This situation is a prime example of how that idea gets misconstrued. Being FOSS didn't make xz trustworthy; it just meant that people had a fighting chance to find out why after something happened. That's closing the barn door after the horses have already left.

I'm not knocking FOSS at all. I just think some people have the concept twisted, just like the meme that being written in Rust makes code fast/safe from the mere fact it was written in Rust. I don't write Rust, but if I did, I guarantee that sheer not-knowing-WTF-I'm-doing would result in bad code. The language will not protect me from myself. FOSS will not protect the world from itself, but it does at least allow for decent investigations and after-action reports.


We should not think in absolutes, but in terms of tools: what risks come with using a certain tool?

In your Rust example, using C is like using a power tool without any safety measures. That doesn't mean that you are going to get hurt, but there is an expectation that a sizable fraction of users of such tools will get hurt.

Rust is then the same tool with safety measures. Of course it is still a power tool, and you can get hurt. But the chances of that happening during normal operation are a lot lower.
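The safety-measures analogy can be made concrete with a small Rust sketch (illustrative, not from the thread): the same out-of-bounds read that is undefined behavior on a raw C array is either an explicit `None` or a controlled panic in Rust.

```rust
fn main() {
    let readings = vec![10, 20, 30];

    // `get` returns Option<&T>: an out-of-range index yields None
    // instead of silently reading whatever happens to be in memory.
    match readings.get(5) {
        Some(v) => println!("value: {v}"),
        None => println!("index out of range"),
    }

    // Direct indexing is still possible; out of bounds it panics at
    // runtime rather than corrupting memory.
    println!("first: {}", readings[0]);
}
```

The guard rail doesn't make the code correct (a logic bug still returns the wrong number), it just turns memory corruption into a visible, local failure.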

I think xz is a good example where open source happened to work as intended. Somebody noticed something weird, raised the alarm, and other people could quickly identify what other software might have the same problem.


You don't think every nation state has people inside private software shops? Especially big tech?

Look at stuff getting signed with MS keys, hardware vendors with possible backdoors.

Social engineering is social engineering, and it can happen anywhere, no matter the profit motivation or lack thereof.

Money interest in software won't save you any more than FOSS will.


It'd be naive to assume that nation state actors are not trying to penetrate the supply chain at all levels, as it just takes a single weak link in the chain. That weak link could be behind corporate doors or in the open.

The main issue is that this attack shows how a relatively unknown component, as part of a much larger and more critical infrastructure, is susceptible to pressure as a result of "this is a hobby project, lend a hand".

At what point do these components become seen as a utility and in some way adopted into a more mainline, secure, well-funded approach to maintenance? That maintenance can, and probably should, happen in the open, but with the requisite level of scrutiny and oversight worthy of a critical component.

We got very lucky, _this time_.


I think Google's program to hire security researchers was a minor step in the right direction, but it would behoove big tech and/or various governments to do the same thing these state intelligence actors are doing: take a look at all of these projects that touch core infra and investigate the maintainers and their vulnerability.

I would bet that some of these projects like xz would show enormous benefits from one paid person working on it 1/4 time, leaving room for a couple more projects per dev. Additionally, a couple of places providing relatively minor grants would probably help a dev buy back some of their time so they can work on their project at some time other than 'after the kids are in bed'.


> this attack shows how a relatively unknown component

why just this one? do we collectively have the memory of a goldfish? just recently, log4j had a similar blast radius. is it because one was seemingly malicious that the other doesn't count?


While the blast radius of both is large, there are major differences between them. Log4J was a largely app-level vulnerability affecting Java-based systems.

This vulnerability, had all gone to the attacker's plan, would have been present in the major distros' next major releases through a key infrastructure component, which would have been installed far more widely, IMO.

Another major difference is that Log4J is already part of the Apache Foundation, which means it should have greater oversight/security maintenance anyway, while this is an attack against a solo developer.

This is definitely not to downplay the severity of the Log4J incident, by any means. But they are decidedly different.


> That's closing the barn door after the horses already left.

I don't think that's quite true - maybe a couple horses got out, but this was caught early and didn't infect very many machines, because someone completely unaffiliated could review it and find it.



