> people who get attacked, which is something that e.g. civil engineers certainly don't get blamed for (e.g. if a terrorist blows up their bridge).
There's a really big difference though. In the physical world, an "attack" is always possible with enough physical force -- no matter how good a lock you design, someone can still kick down the door, or cut through it, or blow it up. But with computer systems, assuming you don't have physical access, an attack is only possible as a result of a mistake on the part of the programmers. Practically speaking, there's no difference between writing an out-of-bounds array access that BSoDs millions of computers and writing an out-of-bounds array access that opens millions of computers to a zero-day RCE, and the company should not be shielded from blame for its mistake only in the latter case because there's an "attacker" to point fingers at.
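To make that concrete, here's a minimal, entirely hypothetical C sketch (not drawn from any real incident) of the kind of bug I mean -- the mistake is the missing length check, and whether it ends up as a crash or as remote code execution depends only on who crafts the input:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical packet handler: the bug is the unchecked copy, not the outcome. */
    void handle_packet(const char *input) {
        char buf[16];
        strcpy(buf, input);   /* out-of-bounds write whenever input is 16+ bytes */
        printf("processing %s\n", buf);
    }

    int main(void) {
        /* A random oversized input typically just smashes the stack and crashes;
           a carefully crafted one can overwrite the return address and run
           attacker-chosen code. Same mistake, same one-line fix (check the length). */
        handle_packet("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
        return 0;
    }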
Over the past few years of seeing constant security breaches, always as the result of gross negligence on the part of a company -- and seeing those companies get away scot-free because they were just innocent "victims of a cyberattack" -- I've become convinced that the only way executives will care to invest in security is if vulnerabilities come with bankrupt-your-company levels of liability.
Right now, the costs of a catastrophic mistake are borne by the true victims -- the innocent customers who had their data leaked or their computers crashed. Those costs should be borne by the entity that made the mistake and had the power to avoid it by investing in code quality, validating their inputs, using memory-safe languages, testing and reviewing their code, etc.
Yes, we can't all just write bug-free code, and holding companies accountable won't stop security vulnerabilities overnight. But there's a ton of room for improvement, and with how much we rely on computers in our daily lives now, I'd rather live in a world where corporate executives tell their teams "you need to write this software in Rust because we'll get a huge discount on our liability insurance." It won't be a perfect world, but it'd be a huge improvement over the insane wild-west status quo we have right now.
> In the physical world, an "attack" is always possible with enough physical force -- no matter how good a lock you design, someone can still kick down the door, or cut through it, or blow it up. But with computer systems, assuming you don't have physical access, an attack is only possible as a result of a mistake on the part of the programmers.
It's exactly the opposite.
In the physical world, you mostly only have to defend against small-time attackers. No bank in the world is safe from, say, an enemy army invading. The way that kind of safety gets handled is by the state itself - that's what the army is for.
In the digital world, you are constantly being attacked by the equivalent of a hundred armies, all the time. Hackers around the world, whether criminals or actual state-actors, are constantly trying to break into any system they can.
So yes, many breaches involve some kind of software issue, but it is impossible to never make any mistake. Just like no physical bank in the world would survive thousands of teams trying to break in every single day.
> state-actors, are constantly trying to break into any system they can.
I thought state actors prefer to buy over build. Do they really need to build a botnet out of personal computers rather than just expanding their own datacenters?
> In the digital world, you are constantly being attacked by the equivalent of a hundred armies, all the time. Hackers around the world, whether criminals or actual state-actors, are constantly trying to break into any system they can.
This is why I think cyberattacks should be seen from the "victim"'s perspective as something more like a force of nature rather than a crime -- they're ubiquitous and constant, they come from all over the world, and no amount of law enforcement will completely prevent them. If you build a building that can't stand up to the rain or the wind, you're not an innocent victim of the weather, you failed to design a building for the conditions you knew would be there.
(I'm not saying that we shouldn't prosecute cyber crime, but that companies shouldn't be able to get out of liability by saying "it's the criminals' fault").
> So yes, many breaches involve some kind of software issue, but it is impossible to never make any mistake.
It's not possible to never make a mistake, no. But there's a huge spectrum between writing a SQL injection vulnerability and a complicated kernel use-after-free that becomes a zero-click RCE with an NSO-style exploit chain, and I'm much more sympathetic to the latter kind of mistake than the former.
The fact is that most exploits aren't very sophisticated -- someone used string interpolation to build an SQL query, or didn't do any bounds checking at all in their C program, or didn't update 3rd-party software on an internal server for 5 years. And for as long as these kinds of mistakes don't have consequences, there's no incentive for a company to adopt the kind of structural and procedural changes that minimize these risks.
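As a rough sketch of the first kind of mistake (hypothetical code -- the table and column names are made up for illustration), compare string-interpolated SQL with a parameterized query using the SQLite C API:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Vulnerable: the query text is built from user input, so an input like
       x' OR '1'='1  rewrites the query itself. */
    void find_user_unsafe(sqlite3 *db, const char *name) {
        char sql[256];
        snprintf(sql, sizeof(sql),
                 "SELECT id FROM users WHERE name = '%s';", name);
        sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* Safer: the value is bound as data and can never change the query's structure. */
    void find_user_safe(sqlite3 *db, const char *name) {
        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW) { /* read columns here */ }
        sqlite3_finalize(stmt);
    }

The fix is a few characters of API usage, which is exactly why I'm far less sympathetic to this class of bug than to a subtle use-after-free.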
In my ideal world, companies that follow good engineering practices, build systems that are secure by design, and still get breached by a nation-state actor in a "this could have happened to anyone" attack should be protected from liability, whether through legislation or insurance. But when a company cheaps out on software and develops code in a rush, without attention to security, then they shouldn't get to socialize the costs of the inevitable breach.
> If you build a building that can't stand up to the rain or the wind, you're not an innocent victim of the weather, you failed to design a building for the conditions you knew would be there.
I genuinely have no idea how liability for civil engineering works, but the evidence of my eyes is that entire Oklahoma towns built by civil engineers get wiped off the map by tornadoes all the time. Therefore I assume either we can't design a tornado-proof building, or civil engineering gets the same cost-benefit analysis as security engineering. The acceptable cost-benefit balance is just different. But we can't be selling $10 million tornado-proof shacks, and we can't be selling $10 million bug-proof small business applications, if either is even possible.
> If you build a building that can't stand up to the rain or the wind, you're not an innocent victim of the weather, you failed to design a building for the conditions you knew would be there.
This is why I liken it to protecting from an army. Wanting to protect a building from rain is fine - rain is a constant that isn't adapting and "fighting back".
Find me a building that is able to keep its occupants safe from an invading army, and then we'll talk. It's impossible. That's what we built armies for.
> But there's a huge spectrum between writing a SQL injection vulnerability and a complicated kernel use-after-free that becomes a zero-click RCE with an NSO-style exploit chain, and I'm much more sympathetic to the latter kind of mistake than the former.
To be clear, I agree that there's a spectrum, and I wouldn't want to make it so that companies can get away with everything. But I'm not sure we have a good solution for "my company has 10k engineers, one of them five years ago set up a server and everyone forgot it exists, now it's exploitable". Not in the general case of having so many employees.
> The fact is that most exploits aren't very sophisticated -- someone used string interpolation to build an SQL query, or didn't do any bounds checking at all in their C program, or didn't update 3rd-party software on an internal server for 5 years. And for as long as these kinds of mistakes don't have consequences, there's no incentive for a company to adopt the kind of structural and procedural changes that minimize these risks.
I'm not a security researcher, but I'd guess that most attacks are even simpler - they don't necessarily rely on software exploits at all; they rely on phishing, social engineering, etc.
I've seen plenty of demos of people being able to "hack" many companies by just knowing the lingo and calling a few employees while pretending to be from IT.
This doesn't even include "exploits" like getting spies into a company, or just flat-out blackmailing employees. Do you think the systems you've worked on are secure from a criminal organization applying physical intimidation on IT personnel? (I won't go into details but I'm sure you can imagine worst-case scenarios here yourself.)
> But when a company cheaps out on software and develops code in a rush, without attention to security, then they shouldn't get to socialize the costs of the inevitable breach.
I agree, but there's a huge range between "builds software cheaply" and "builds software which is secure by default" (the second being basically impossible - find me a company that has never been breached if you think it's doable).
We want to make companies pay the cost when doing so incentivizes good behavior. That's often true, which is why I agree with you in many cases.
But security is a game of weakest links, and given thousands of adversaries of various levels of strength, from script-kiddies to state actors, every company is vulnerable on some level. Which is why, in addition to making companies liable for real negligence, we have to recognize that no company is safe, even given enormous levels of effort, and the only way to truly protect them is via some state action.
The reason your bank isn't broken into isn't just that they are amazing at security - it's that if someone breaks into your bank, the state will investigate, hunt them down, arrest them and imprison them.
Show me a company that claims it's never been breached in some way, and I'll show you a company that has no clue about security, including their prior breaches.
Every country in the world would see this as their big chance to overtake the US. Russia, China, you name it.
You would have to be an idiot to start a software company in the US. High regulation, high cost of living, high taxes, high salaries, personal liability, and a market controlled by monopolies who have the resources to comply.
They’ll leave. The entire world will be offering every incentive to leave. China would offer $50K bonuses to every engineer that emigrated the next day.