There are different kinds of regulation. They don't necessarily need
to be prescriptive or to test static quality in the same way that
drugs are regulated. (But I think that some security people would like
them even less than flat-out being told what code to put in their
sites.)
Instead of regulating the product, you regulate the processes around
it. Why?
The assumption is that users are stupid but security people are
clever. A corollary of this axiom is that security exists for control
rather than safety; safety, the thinking goes, emerges from control.
Security becomes power.
But as we see, frequently mother does not know best. The fact is that
some security people are stupid (it's difficult and not the highest
paying job out there) and a few users are actually very clever. Now,
if the clever ones are malicious (bad hackers) you've got
problems. But in reality far more clever users are benevolent and
would choose to participate in a "security culture" if it were
encouraged rather than imposed on them like children. It's their data.
As it stands, our security culture leads not just to dismissive
authoritarianism but to unassailable systems that may not be questioned.
Regulation that puts much more power into the hands of all
stakeholders can be a great alternative to ever more compliance and
auditing imposed top-down (which is really a weak solution to a
dynamic problem: security changes almost daily).
Consider a regulatory mechanism like the GDPR that allows users not
just to know what data is held, but also how it is protected, and to
request (with some force) changes to that protection.
Taken to the limit (let's call it "User Side Security", or USS), we
build interfaces so that each user gets to decide their own security
solutions (obviously compartmentalised so as not to affect any other
user's assets or choices).
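The compartmentalisation caveat above is the crux, so here is a minimal sketch of what it might mean in code. Everything in this example is hypothetical (the class, the field names, the baseline values are all invented for illustration): each user holds their own policy, and a merge rule guarantees that a user's choices can only *tighten* the firm's baseline, never loosen protections that affect anyone else.

```python
# Hypothetical sketch of a "User Side Security" (USS) policy object.
# All names and defaults here are invented for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class UserSecurityPolicy:
    user_id: str
    retention_days: int = 365             # how long the firm may keep the data
    allow_third_party_sharing: bool = False
    breach_notification_hours: int = 24   # how fast the user must be told


# The firm's baseline: the loosest settings it is willing to operate with.
BASELINE = UserSecurityPolicy(
    user_id="baseline",
    retention_days=730,
    allow_third_party_sharing=True,
    breach_notification_hours=72,
)


def effective_policy(user: UserSecurityPolicy) -> UserSecurityPolicy:
    # Compartmentalisation rule: user choices may only tighten the baseline,
    # never loosen it, so one user's preferences cannot weaken the
    # protection of anyone else's data.
    return UserSecurityPolicy(
        user_id=user.user_id,
        retention_days=min(user.retention_days, BASELINE.retention_days),
        allow_third_party_sharing=(user.allow_third_party_sharing
                                   and BASELINE.allow_third_party_sharing),
        breach_notification_hours=min(user.breach_notification_hours,
                                      BASELINE.breach_notification_hours),
    )


alice = UserSecurityPolicy("alice", retention_days=30,
                           allow_third_party_sharing=False,
                           breach_notification_hours=12)
p = effective_policy(alice)
print(p.retention_days, p.allow_third_party_sharing)  # 30 False
```

The design choice doing the work is the monotone merge: users express preferences, but the firm's security team still defines the envelope those preferences live in.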
(I feel a tremor in the Force, as if a million security people
suddenly cried out in horror and then fell silent.)
But this would provide the bottom-up incentive for firms to get their
PII-security systems back on track without Byzantine top-down regs,
which I suspect the industry fears even more.
> The fact is that some security people are stupid
Bold of you to assume that these companies have any security staff at all.
> Taken to the limit, let's call it "User Side Security" (USS), we build interfaces so that the user gets to decide their chosen security solutions (obviously compartmentalised so as not to affect any other users assets or choices).
> (I feel a tremor in the Force, as if a million security people suddenly cried out in horror then suddenly fell silent.)
And rightly so. There are a lot of things broken in the security industry, but letting users pick "Hmm, I want AES-256-ECB instead of AES-128-GCM, because 256 > 128" is not the answer.
> And rightly so. There are a lot of things broken in the security
> industry, but letting users pick "Hmm, I want AES-256-ECB instead
> of AES-128-GCM, because 256 > 128" is not the answer.
Not quite the granularity I had in mind. But please say more. What I'm
interested in specifically is whether you believe the owners of data
have no stake in its protection and no say in how that's done.
I wouldn't say they should have no stake, just that it's impractical to ask them.
First of all, security is context dependent. Even a security expert will have trouble making good choices if they don't have the full picture of how the business operates. A non-expert has basically no chance. Just look at how many B2B security companies are basically preying on ignorance to sell useless security solutions. They sell them to businesses which should in principle be able to rationally evaluate the offering, and yet still manage to swindle them. What hope does the average person have?
Second, if you give users real choice, that means you have to implement all the choices, which means you have to spread your focus. Complexity is the enemy of security. The more complexity the more likely you will miss some unintended interaction.
Then there are the other trade-offs. Some security controls have very real productivity and business costs. For example, if one of your controls is that all staff have to get manager sign-off before accessing any machine with user data on it, that is going to slow down work. Often that is worth it, but the productivity loss can be significant depending on how the business is set up. I'm not sure it makes sense for users to control something like that, except in the sense that they should be informed of the protections in place and can freely decide whether to continue doing business. Not to mention: how can you do something like that for half your users?
My general view is that companies should be more transparent in what they do so that people can vote with their feet. Companies should also be liable for breaches, especially ones that would have been prevented by best practices. This punishes companies who play fast and loose, and might also put pressure on them via insurance requirements. A big part of the problem right now is that it is generally more profitable not to invest in security. Breaches have very minor impacts; even major ones usually just mean a small temporary dip in the stock price. Companies aren't going to care about security unless it affects the bottom line.
Thanks for this thoughtful response bawolff. I need to digest it but
you make good points that all tally with my experience. Yet I remain
convinced that a regulatory approach needs to include the end user as
a first-class stakeholder. How to do this without making the life of
security professionals an untenable misery is where I want to focus.
After all people look after their own money, their own homes and their
own health. Why do we carve out an exception for their data?
Is it weird to say that these issues are too stupid for software engineer licensing to be a good answer? It's like buying a fleet of cherrypickers to pick lettuce.
And if we can't hold people accountable for their actual products being laughably insecure, I don't see how licensing enforcement is going to go better. For starters, the question of who should be required to hire licensed engineers is the same as who should be regulated/sued into compliance/oblivion regardless of licensing, and we clearly can't do that.
Everyone operates at the limit of their knowledge of the world (and their available resources).
It's just that some people's knowledge is waaay more limited than others'. And all we have is some form of self-regulation - from a science viva to an engineering degree, we have no option other than to say "we think we have a measure of all human knowledge in this subject, and so we can judge if anyone else has the same knowledge".
Just look at any "building disasters" TV show where unsafe extensions were added to houses etc. At some point someone says "that meets a standard"
Do we do it before the guy leaves college? Do we do it during the build, using independent inspectors a la building codes? Or do we do it in court, after it's all fallen over?
I am not convinced that software regulation is correct - I prefer to see software as a form of literacy, and as such I am really reluctant to rein in "speech". I think software is so open to composability that best practices can come almost for free. Security is just one of those areas where you need a good understanding of the fundamentals.
> It's just that some peoples knowledge is waaay more limited than others.
This excuse ends when an expert reaches out to you and explains the exploit. At that point you've chosen the way of pain, one way or another.
> I am really reluctant to rein in "speech".
OTOH this question is pretty easy to answer: Regulation (of whatever type) applies to deployments, not code. Deployments are where the harm happens. I think this approach would even align the incentives correctly w.r.t. maintenance funding.
Can you give an example of any other domain where the entire domain is changed every decade?
Space exploration comes to mind, but again, still based on physics and chemistry.
The problem with the digital domain is we're literally just dreaming this stuff up and then being surprised when everyone has a hard time securing those dreams.
I’m very new to this concept.
What would that actually look like? What kind of rules and regulations would be put into place and how would that affect e.g. building websites with logins?
Well, I am not 100% convinced, but the mid-19C move to better steam boiler design is instructive - boiler explosions were so common that there was an economic effect as well as a human cost. In the US only one insurance company would take the risk, and they would only insure when their standards were met. The industry as a whole improved.