
I wonder if we're going to end up in an arms race between AIs masquerading as contributors (and security researchers) trying to introduce vulnerabilities into popular libraries, and AIs trying to detect and fix them.


Why would it be like that instead of the way we already handle low-trust environments?

Projects that get a lot of attention already put up barriers to new contributions, and the ones that get less attention will continue to get less attention.

The review process cannot be left to AI because it will introduce uncertainty nobody wants to be held responsible for.

If anything, the people who have always seen code as a mere means to an end will finally come to a forced decision: either stop fucking around or get out of the way.

An adversarial web is ultimately good for software quality, but it will be less open than it used to be. I'm not even sure that's a bad thing.


What I'm suggesting is: what if AIs get so good at crafting vulnerable (but apparently innocent) code that human review cannot reliably catch it?

And saying "ones that get less attention will continue to get less attention" is like imagining that only popular email addresses get spammed. Once malice is automated, everyone gets attention.


Isn't it significantly easier to detect than to create? It's not quite an NP-style verification asymmetry, but intuitively an AI that can create such an exploit could also detect it.

The economics is more about how much the defender is willing to spend on advance protection vs. the expected value of a security failure.
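
A toy expected-value comparison makes that trade-off concrete. Here's a minimal sketch in Python; every number in it is invented purely for illustration:

    # Toy model: spend on advance protection only when it costs less than
    # the expected loss from a security failure. All numbers hypothetical.

    def expected_loss(breach_probability: float, breach_cost: float) -> float:
        """Expected value of a security failure over the period considered."""
        return breach_probability * breach_cost

    defense_budget = 50_000  # annual spend on review, audits, tooling
    p_breach = 0.05          # chance of a successful supply-chain attack
    breach_cost = 2_000_000  # incident response, downtime, reputation

    if defense_budget < expected_loss(p_breach, breach_cost):
        print("advance protection is the rational spend")
    else:
        print("the defender may rationally accept the risk")

With those made-up numbers the expected loss is 100,000, so spending 50,000 on defense wins; push the breach probability low enough and it doesn't.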


I think the issue I have with this argument is that the conclusion doesn't follow from any technological choice.

It's an argument about affordability and the economics behind it, which puts more burden on the (open source) supply chain, which is already stressed to its limit. Maintainers simply don't have the money to keep up with foreign state actors. Heck, they don't even have money for food at this point, and have to work another job to be able to do open source in their free time.

I know there are exceptions, but they are veeeery marginal. The norm is: open source is unpaid, tedious, hard work, and judging by the sheer volume of slopcode pull requests already plaguing many projects, it will only get harder.

The trend is likely going to be projects blocking pull requests by default rather than reading and evaluating each one.


If you are doing security, of all things, why wouldn't you verify the provenance of your tooling and libs?
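
One way to do that, as a minimal sketch: pin a known-good SHA-256 digest for each artifact and refuse anything that doesn't match. The filename and digest below are placeholders, not real values:

    import hashlib
    import sys

    # Pinned known-good digests, recorded when each dependency was audited.
    # Placeholder entry: the digest shown is just the SHA-256 of empty input.
    PINNED_DIGESTS = {
        "somelib-1.2.3.tar.gz":
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def sha256_of(path: str) -> str:
        """Stream the file so large artifacts need not fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(path: str) -> bool:
        """True only if the artifact's digest matches its pinned value."""
        expected = PINNED_DIGESTS.get(path.rsplit("/", 1)[-1])
        return expected is not None and sha256_of(path) == expected

    if __name__ == "__main__":
        artifact = sys.argv[1]
        if not verify(artifact):
            sys.exit(f"provenance check FAILED for {artifact}")
        print(f"{artifact}: digest matches pinned value")

In practice you'd lean on existing machinery rather than rolling your own, e.g. pip's --require-hashes mode or signature verification via sigstore.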



