
Does that apply here? The BSD guys chose security over speed, as is their mantra, but companies that run Linux for profit prioritize speed and cost per computing unit over security. I think 'disable hyperthreading' would be a difficult sell even for Steve Jobs.
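
For what it's worth, the technical side of that sell is trivial; the hard part is purely the performance cost. A minimal sketch (Python, assuming a reasonably recent Linux kernel that exposes the sysfs SMT control file; needs root):

    # Toggle SMT (hyperthreading) via the Linux sysfs control interface.
    # Assumes the kernel exposes /sys/devices/system/cpu/smt/control (~4.19+).
    SMT_CONTROL = "/sys/devices/system/cpu/smt/control"

    def smt_state():
        with open(SMT_CONTROL) as f:
            return f.read().strip()   # "on", "off", "forceoff", or "notsupported"

    def disable_smt():
        # Requires root. "off" can be re-enabled later; "forceoff" sticks until reboot.
        with open(SMT_CONTROL, "w") as f:
            f.write("off")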


This is why we need enormous fines for security breaches, and smaller fines just for not following best practices.

Right now, the only people who get listened to are the ones worried about paying more for performance, dev time, or security engineers. We need the legal teams inside companies to have something more substantial than possible negative publicity with which to motivate the CEO and CTO, as a countervailing balance.

Just as in the majority of industries, we need real negative consequences for dumping incompetent code out into the world. We've tried the "no consequences at all" plan for a long time and it's gotten us, well, continual data breaches via the easiest possible things to control: S3 buckets and databases open to the world. An inability to patch known CVEs in under 3 months (hi Equifax!).
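
The open S3 buckets really are the easiest case. As a rough illustration (Python/boto3, assuming read access to the account's buckets; a sketch, not a real audit), flagging buckets whose public-access block isn't fully enabled takes a few lines:

    # Flag buckets where S3 "Block Public Access" isn't fully enabled.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            blocked = all(cfg.values())   # all four block settings turned on?
        except ClientError:
            blocked = False               # no public-access block configured at all
        if not blocked:
            print(f"review {name}: public access is not fully blocked")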


The problem with fines is that they happen after the fact and only if the worst actually happens. Tons of companies have totally abominable security and never get breached only out of dumb luck. So you'll still get lots of companies playing Russian Roulette where they make higher profits for ten years before they may or may not suffer a breach and get fined into oblivion, at which point they file for bankruptcy and start over.

You also end up creating a lot of really perverse incentives, like nefarious companies not disclosing data breaches because disclosing them would result in liability even though that's necessary for the victims to take steps to mitigate the damage. There's a reason the NTSB does no-fault investigations.

And a lot of mediocre but still harmful incentives like cargo culting decades-old security checklists to satisfy compliance requirements even though they don't actually result in improved security, but do create a false sense of security.

More than that, the problem is that humans are fallible, so even if you do 99.9% of everything right you can still make a mistake. A company with one security vulnerability can get just as compromised as a company with ten thousand. Does it really make sense to destroy OpenBSD with fines as soon as they have one security vulnerability? Or every random company that uses OpenSSH on a day that a not publicly known 0-day is being exploited in the wild? Or a company that updates to the latest version of some software that claims to have fixed a CVE even though it didn't?

The real problem here is architectural. It shouldn't be possible for someone to breach Equifax and get all your information because they shouldn't have that information to begin with. They shouldn't exist. Your data should be yours, on your device, so that it isn't possible for someone to get it by breaching a third party because the third party doesn't have it.


If you make the fine large enough that it may cause the company to go under, you can bet they'll buy some insurance. And you can bet the insurance companies will have some standards to reduce the risk of a company getting breached, such as doing audits regularly.

For example, if Equifax faced a fine of $5B (more than 1/4 of their market cap) instead of $500M, you can bet they'd be more serious about audits in the future. However, we've conditioned businesses to expect minor consequences for breaches, so security becomes an afterthought. Likewise, the $5B fine against Facebook is unlikely to change anything, though a $200-300B (20-30% market cap) fine would be much more convincing.
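
(Back-of-the-envelope, using only the figures in this comment rather than verified market data:)

    # Market caps implied by the fine-to-cap ratios quoted above (illustrative only)
    def implied_market_cap(fine, fraction_of_cap):
        return fine / fraction_of_cap

    print(implied_market_cap(5e9, 0.25))     # Equifax: $5B at ~1/4 of cap -> ~$20B cap
    print(implied_market_cap(250e9, 0.25))   # Facebook: ~$250B at ~25% of cap -> ~$1T cap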

The point isn't necessarily to ruin companies, but to set a precedent that says these types of issues will not be tolerated. It'll force companies to get insurance, and the insurers will have an incentive to avoid paying out on the policy.


Using fines that large is how you get them to not buy insurance, because it would cause the insurance to be prohibitively expensive, assuming you could even find someone to sell you a policy that large.

It also doesn't make any sense to base fines on market cap because the two things have nothing to do with one another. All that would really do is cause corporations to restructure their operations to separate the entity that does all the dirty work from the one that owns all the assets, so that the entity that exists in your jurisdiction and is susceptible to being fined is renting/leasing everything and has only a nominal market cap, whereas the one with all the assets is a totally independent company that isn't even in your jurisdiction and never does anything "wrong" because all it ever does is lease and license things to a different entity.

It also seems kind of obvious that even if you could try to impose a fine equal to 20-30% of a company's global market cap, all that would do is cause the local entity to declare bankruptcy, dissolve and abandon your jurisdiction without actually paying the fine, because that large of a fine would exceed the long-term value of operating there. Especially when there isn't any guarantee it won't happen again if they stay. For that matter, it would tend to make companies not want to operate there to begin with, because it's possible to do your best and still fail, and that kind of uncertainty is precisely how you drive businesses away.

But most importantly, it still generally isn't the large tech companies who are the ones with poor security. It's the other industries, especially finance and government, that are collecting just as much data but then doing a much worse job of securing it. What does a fine mean to the DMV or OPM?


These large rich tech companies are really responsive to 'compliance' with the letter and spirit of laws that otherwise might cause severe losses. Look at, e.g., the GDPR, and Google suddenly getting religion about you being able to mass-download your data. Yes, you can legislate solutions to corporate behaviours.


"These large rich tech companies" are not the ones getting breached. The likes of Google and Microsoft take security seriously already. The problem is the likes of Equifax and Capital One and government databases with poor security that nonetheless contain all kinds of sensitive information that they shouldn't be aggregating and retaining to begin with and they certainly shouldn't be required by law to collect and store, even though they frequently are right now.

Also:

> and Google suddenly getting religion about you being able to mass-download your data.

They had that even before the GDPR.


Letter, yes. Spirit, I'm not so sure; it feels like Google and FB want to keep doing what they're already doing and comply where they have to, instead of reconsidering whether they actually need all that data and need these dark patterns for consent (which would be the spirit of the GDPR).

And the smaller-than-FAANG companies... too many checklists, contracts and theater ("GDPR requires us to disable autofill on this form") and not enough actual rethinking of what they're doing and whether they should change their approach to data... so we'll still be seeing plenty of breaches where they shouldn't even have the breached data in the first place.

It'll probably be a decade before we see the real effects of the GDPR...


Whatever incentive you create and analogy you make, the hardware / software / internet stack has no parallel. I've been thinking about the place of software in the context of other disciplines, and here is the thing. If you were thrown back to prehistory with a 50-man dream team of engineers and told to recreate ... something. Let's say the train station I was just in. A rudimentary train network could be created in maybe 50 years? (Starting with: how to make steel.) Whatever estimate you have, the work required before they recreate the LCD screens showing the time of arrival is easily double that.

With that as a barrier to entry, the only solution I could see working for security is: public domain hardware and software.

It's the only solution I believe in.


Don't know why your comment is grayed; we absolutely need heavy monetary penalties for the worst kinds of data breaches. The abstract idea of a class action lawsuit isn't enough, even after the Equifax breach.


Is there anything about how breaches are currently remediated that might contribute to better outcomes than if we adopted a higher and harsher penalty system?

It seems like it might create some perverse incentives as the risk escalates.


That's true. I'm sure that the perverse incentive could be resolved with some system for self-reporting and fixing.


Do you have a similar opinion in regards to crimes? Do you think that there will be less crime if there are harsher prison sentences? Are you in favor of mandatory minimum sentences?

If not, why do you think harsher punishments are needed here but not for crimes?


White-collar crimes (which this should be) are all about making value calculations. Take the famous Ford Pinto memo: they decided the risk to their customers' lives was smaller (in terms of pure dollar amount, after potential litigation) than the cost of fixing the gas tank issue. If you penalize reckless security practices that lead to data breaches, companies will be far more inclined to look after their customers. We already issue fines like this with COPPA, so it's not a new concept.
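
That memo is literally an expected-value calculation, and the same one applies to security spending. A toy sketch, with every number invented purely for illustration:

    # Fix the problem only if the expected penalty outweighs the cost of the fix.
    # All figures are made up to illustrate the incentive, not taken from any real case.
    def will_fix(fix_cost, p_incident, penalty):
        return p_incident * penalty > fix_cost

    print(will_fix(fix_cost=50e6, p_incident=0.05, penalty=500e6))  # False: cheaper to eat the fine
    print(will_fix(fix_cost=50e6, p_incident=0.05, penalty=5e9))    # True: the fix now pays for itself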

Street crimes have a far different cause and should be treated differently. I'm surprised I even have to type that, it seems obvious.


Do you think there would be fewer murders or more if there were no punishment at all for murdering people?

That's where we are atm with security breaches.


If street crime had lesser penalties than the profit from said crime, then yes, I'd be pushing for harsher sentences.

That's pretty exclusively the purview of white collar crime behind a corporation though.


Compared to effectively zero penalty, probably.


In all honesty, security is just really hard and we're really bad at it. Perhaps an alternative would be to establish standards when it comes to security team headcount and salary in an organization? That way they're incentivized to follow the rules and you have more leeway to punish them if they don't follow the baseline.


The solution to being bad at security isn't to establish quotas (that's a great way to make sure DevOps engineers get rebranded as Dev-Ops-Sec engineers, and not much else), but to get better at security.

Imagine if any other field said that. "Not burning people's houses down with electrical wiring is just really hard and we're really bad at it." "Keeping bridges standing is just really hard and we're really bad at it." "Flying across the country without killing any passengers is just really hard and we're really bad at it."


Isn't GDPR supposed to be an attempt at this kind of thing, treating privacy issues as a punishable negative externality similar to pollution?

I only ask because that all makes perfect sense to me, but I see a lot of negativity about GDPR on here, that all it ever does is stifle innovation and produce ever more cookie-agreement popups.


The EU has seen poor results with fines. The big tech companies (Google, Amazon, etc) pay them with the change they find in their couch cushions. Then, they continue doing whatever they want to do. It doesn't dissuade them.


We haven't really seen the "end game" deployed by the EU yet (4% of global annual turnover fines).

I suspect when that happens the companies will launch a massive PR campaign and fight it in court but eventually lose. Whether they'd then pull out of the EU or just pay, I have no idea.

Edit: seems like 4% of Alphabet's 2018 global revenue [1] is "only" 5.44 billion dollars. Wonder if it can be applied multiple times.

[1] https://www.statista.com/statistics/266206/googles-annual-gl...
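
For the arithmetic (taking roughly the revenue figure that the $5.44B number implies; illustrative, not audited):

    # 4% of global annual turnover, with ~$136B as Alphabet's 2018 revenue
    gdpr_max_fine_rate = 0.04
    alphabet_2018_revenue = 136e9
    print(gdpr_max_fine_rate * alphabet_2018_revenue)   # ~5.44e9, i.e. ~$5.44B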


Not taking anything away from your point, I think we should also have real negative consequences for the people who commit security breaches.

There is a real social stigma with regard to committing robbery, burglary, breaking and entering, etc. I feel like there isn't so much with online crime. As a community we really pile the blame on the victim for not being prepared and seem to give the perpetrators a pass for taking advantage of the situation.

Also, there is a real tension between anonymity on the Internet and the ability to identify perpetrators. It is a difficult tradeoff.


> It is a difficult tradeoff.

It's not a tradeoff we can make because the nature of computer security is that unless you fix the software and networks, you can't even identify the criminals, let alone catch them, presuming they're even in your legal jurisdiction. There's a tremendous asymmetry between attacker and defender in terms of cost+benefit, and it heavily favors the attacker.

In any event, computer crimes are punished with an iron fist in the U.S. What's not criminally prosecuted and punished very well is harassment. Yes, if social media platforms offered less anonymity, we could deal with harassment more easily. But organized criminal groups don't need the anonymity of Twitter to pilfer and fence credit card numbers; they have the anonymity of zombie networks and stolen accounts. And you can't address that with harsher penalties. If you penalized that activity with summary execution, the problem would substantially remain. And in fact, in some respects it could get worse by deterring security research.

We have no choice but to fix the vulnerabilities. We have to make it more difficult to execute these attacks from a technical perspective, dramatically increasing the likelihood of identification and capture, before we can even hope of using criminal penalties as a substantial deterrent. We're a long way off from that day.


I agree with you about the asymmetry, which I was alluding to but didn't really spell out. I also agree with you that we are limited by our current software/network infrastructure and fundamental changes in that area may be necessary to get to a better security "story".



