For a long time, the US classified cryptography algorithms as munitions. You needed an arms-export license to ship them abroad.
Also, the US tried to convince the world that 56 bits of encryption was sufficient. As SSL (I don’t think TLS was a thing back then) was becoming more mainstream, the US government only permitted banks and other entities to use DES [1] to “secure” their communications. Using anything stronger than 56 bits was considered illegal.
Even now, if you join a discussion on crypto and say something like "Why don't we double the key length?" or "Why not stack two encryption algorithms on top of one another, so that if either is broken the data is still secure?", you'll immediately get a bunch of negative replies from anonymous accounts saying it's unnecessary and that current crypto is plenty secure.
The head of security for Go, a Google employee, was also part of the TLS 1.3 standardization effort, and in Go it's impossible by design to disable specific cipher suites in TLS 1.3.
The prick actually had the nerve to assert that TLS 1.3's security is so good this should never be necessary, and that even if it were, they'll just patch it and everyone can upgrade.
So someone releases a 0-day exploit for a specific TLS cipher. Now you have to wait until a patch is released and upgrade your production environment to fix it - all the while your pants are down. That's assuming you're running a current version in production and you don't have to test for bugs or performance issues upgrading to a current release.
Heaven fucking forbid you hear a cipher is exploitable and be able to push a config change within minutes of your team hearing about it.
I'd place 50/50 odds on it being a bribe by the NSA vs sheer ego.
Seems like a stupid design, if only because in some uses of TLS, where one very specific client is connecting, you might want to enable precisely the one cipher suite you expect that client to use.
Then all your performance tests can rely on the encryption and key exchange always using the same amount of CPU time, etc.
Well, I think that would severely inhibit future development. Scaling Bitcoin has been a delicate game of optimizing every bit that gets recorded while also supporting future developments that don't even exist yet, and there is no undo button either. New signature schemes and clever cryptography tricks can do quite a bit, but when you slap another layer of cryptography on top you inevitably make things worse in the long run.
History's biggest bug bounty is sitting on the Bitcoin blockchain; if it were even theoretically plausible to crack SHA-256 like that then we would probably know, and many have tried.
If you reveal you have broken SHA-256, then your bug bounty becomes worthless. The smart move is to quietly steal from and drain a few wallets slowly.
And that's exactly what we see, and every time it happens the Bitcoin community just laughs that someone must have been bad at key management or used a weak random number generator.
> management or used a weak random number generator.
Except that has been the case in every instance thus far. The dev that lost his bitcoin last year was using arcane software; after a post-mortem they found the library being used only had about 64 bits of entropy.
The real security of Bitcoin is the choice of secp256k1. It was basically unused before Bitcoin, but chosen specifically because Satoshi was more confident it wasn't backdoored.
And ed25519 was out of the question, since -- being brand new -- its use would have given away the fact that DJB was among the group of people who presented themselves as Satoshi Nakamoto.
The best is the claim that multiple encryption makes things weaker, or that the combination is only as strong as the weaker of the two. If that were true, we could break any encryption just by encrypting once more with a weaker algorithm.
The invalidity of that claim is a bit more nuanced. Having an inner, less secure algorithm may expose timing attacks and the like. There are feasible scenarios where layered encryption (with a weak inner algorithm and a strong outer one) can be less secure than the strong outer algorithm on its own.
Two encryption algorithms will mean needing two completely unrelated, unique passwords. This can be impractical and increases the odds of being locked out forever.
Do you have more on the legality aspect? I knew the NSA pressured for a weaker key, but what aspect could be made illegal? I had to write an undergrad paper on the original DES and I never saw an outright illegality aspect, but I wouldn't be surprised. They also put in their own substitution boxes, and I surprisingly never found much info on how exactly the NSA could use them. So much speculation, but why no detailed post-mortems in the modern age?
In the US, since the 1950s, you have needed a permit to export any product that includes encryption. There are fines if you don't file the right paperwork. In the 1970s and 80s they would only approve keys of 40 bits or less.
It seems that they changed the S-boxes to make them more resistant to differential cryptanalysis (which they knew about but the public didn't). So this is actually a case of them secretly strengthening the crypto.
Presumably this is because they didn't want adversaries being able to decrypt stuff due to a fundamental flaw. I guess it's possible they also weakened it in another way, but if so nobody has managed to find it.
https://en.m.wikipedia.org/wiki/Data_Encryption_Standard