Decentralized SSL cert verification (perspectives-project.org)
49 points by gnosis on July 14, 2013 | 22 comments


Sounds very similar to Convergence. In fact, it turns out that "Convergence is based on previous work from the Perspectives Project at Carnegie Mellon University".[0] I don't think this (or Convergence) will see widespread adoption though. For now, public key pinning (e.g. Tack.io) seems like the best way forward.

For what it's worth, many CAs are trying to raise the bar by way of the CA/Browser Forum.[1] The public mailing list is quite interesting at times.

[0] https://en.wikipedia.org/wiki/Convergence_(SSL)#Background

[1] https://www.cabforum.org


I tried to figure out the difference between Convergence and Perspectives but I ended up getting frustrated and gave up. If anyone knows the differences please speak up.


The detailed explanation is in this BlackHat talk where I introduced Convergence: http://www.youtube.com/watch?v=Z7Wl2FW2TcA


Is it still maintained? I used to use it, but it didn't seem to track well with FF updates. Just tried it on Firefox 22 and it doesn't seem to work correctly anymore. Bummer. :(


Thanks! YouTube videos are my research blind spot.


Ugh, Tack looks interesting but provides no quick, high-level overview. Do you mind explaining how the initial key bootstrapping happens?


The relevant bit of information you're looking for is this part of the standard:

"Sharing pin information:

It is possible for a client to maintain a pin store based entirely on its own TLS connections. However, such a client runs the risk of creating incorrect pins, failing to keep its pins active, or failing to receive min_generation updates. Clients are advised to make use of 3rd-party trust infrastructure so that pin data can be aggregated and shared. This will require additional protocols outside the scope of this document."

All that TACK does on its own without an external "trusted source" for pinned keys is provide a mechanism for a client to ask a site "Do you have a pinnable key?" and then compare / track that over time to determine if the TLS connection has changed vs. what was expected.

As a hypothetical, if your client starts with zero knowledge of a site, the best TACK can do is tell you, over time, "Yes, this is the same TLS key." It cannot, on its own, tell you "You should trust this key." It simply tells you "You can trust this connection has/hasn't changed" and "This site would like you to consider this key pinned".

The part where it removes the dependency on a CA is where the standard says that a client may validate the signature chain of a pinned key but that it is not required to do so.

Chrome pins certain keys from Google and a few other third parties in the client itself. The TACK proposal basically adds a mechanism for dynamic pinning with an optional (and recommended) additional mechanism for gathering trusted pinned certificate lists (without specifying how to do this part).
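
To make the "compare / track that over time" part concrete, here's a rough sketch of a client keeping its own pin record and noticing when a site's key changes. The store layout and function names are my own invention for illustration, not the TACK wire format:

    import json, time

    PIN_STORE = "pins.json"   # hypothetical local pin store; not part of the TACK spec

    def load_pins():
        try:
            with open(PIN_STORE) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def check_pin(hostname, observed_fingerprint):
        """Return 'new', 'match', or 'MISMATCH' for this connection."""
        pins = load_pins()
        entry = pins.get(hostname)
        now = time.time()
        if entry is None:
            # First contact: nothing to compare against, so TACK alone can't
            # tell us whether to trust this key -- only remember it.
            pins[hostname] = {"fingerprint": observed_fingerprint,
                              "first_seen": now, "last_seen": now}
            result = "new"
        elif entry["fingerprint"] == observed_fingerprint:
            # Same key as on every previous connection: "this hasn't changed".
            entry["last_seen"] = now
            result = "match"
        else:
            # The pinned key changed: a legitimate rollover or an attack.
            result = "MISMATCH"
        with open(PIN_STORE, "w") as f:
            json.dump(pins, f)
        return result

The "3rd-party trust infrastructure" the standard mentions is what would let such a client do better than "new" on first contact.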


Ah, thanks. Tack isn't a replacement for (or comparable to) Convergence, as the latter also tells you that you can trust a site, since its key has been the same for X months, even if you've never seen it before.

Unfortunately, Convergence has the disadvantage of relying on central servers, which have to be maintained by someone.


Where Convergence is about changing the way the TLS trust infrastructure works, TACK is about reducing the exposure that a client has to that trust infrastructure.

Currently, your browser is dependent on the CA system for every TLS connection that it makes to a server. The hope for TACK is to reduce your browser's dependence on the CA system to only the first TLS connection that it makes to a server.

At that point, not only do breaches in your trust infrastructure do less damage, but it hopefully becomes easier to solve the problem of transforming the way that trust infrastructure works. This is why most development on Convergence is blocking on TACK.


Oh, I didn't realise TACK was your proposal. Now I'm wondering why it wasn't included in TLS in the first place. It sounds very reasonable, but the more valuable (to me, at least) functionality is the ability to have even self-signed certificates be trusted (i.e. what Convergence does). Why is Convergence blocking on TACK, by the way? It sounds like the two would be complementary. Is it just the notary load that is generated when looking up the certificate on every request? Can't that just be cached?


I'm sorry, but doesn't the DANE RFC (DNS-based verification of cert validity) solve the same problem, and isn't it already implemented in Chrome and (via a plugin) Firefox?

With DANE, one doesn't even need a CA to issue the cert -- a self-signed one will work just fine.

https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...
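
For reference, DANE publishes a TLSA record in DNS (protected by DNSSEC) that commits to the server's certificate. A rough sketch of computing the record data for a "3 0 1" style record (the file name is just an example):

    import hashlib

    # Assumes the server certificate is available in DER form.
    with open("server-cert.der", "rb") as f:
        der = f.read()

    # TLSA "3 0 1": usage 3 (DANE-EE), selector 0 (full certificate),
    # matching type 1 (SHA-256 of the DER-encoded certificate).
    association_data = hashlib.sha256(der).hexdigest()

    # The published record would look roughly like:
    #   _443._tcp.example.com. IN TLSA 3 0 1 <association_data>
    print(association_data)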


DANE isn't without its own problems. Moxie Marlinspike wrote an excellent blog post about it.

http://www.thoughtcrime.org/blog/ssl-and-the-future-of-authe...


I can't find the link, but I remember reading an answer explaining why it won't work:

• Chrome would have to ship with some default Perspectives servers that are able to handle massive traffic.

• Therefore those servers would have to be run by a major player, probably Google or Verisign.

• We're back to square one, where by default most users "trust" the same few large companies.


Thanks to public-key cryptography, we could let Google mirror all the small parties' responses and still be sure the information comes from them and is up to date. I don't see this as a valid argument.
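
For example, a rough sketch (I'm assuming the notary signs its records with an Ed25519 key; the function name is made up) of how a client can verify data fetched from any mirror, Google included, against the original notary's own public key:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_mirrored_record(notary_pubkey_bytes, record_bytes, signature):
        # Accept a record fetched from ANY mirror only if the original
        # notary's signature over it checks out. The mirror can replay or
        # withhold data, but it can't forge it.
        notary_key = Ed25519PublicKey.from_public_bytes(notary_pubkey_bytes)
        try:
            notary_key.verify(signature, record_bytes)
            return True
        except InvalidSignature:
            return False

The signed record would also need to carry a timestamp so the client can check freshness, but the point stands: the mirror never has to be trusted.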


Some similarities to the EFF's proposed 'Sovereign Keys' system:

https://www.eff.org/sovereign-keys


Kudos to the author for pointing out that the SSL PKI infrastructure is a terrible model in _practice_. In _theory_, it may be otherwise, but that does nothing for the user.

Usage of the internet will always require trust. But the idea of a mandatory "chain of trust" system to which _every_[1] computer user on the internet must submit is neither realistic nor reasonable, IMO. Nerds might be comfortable with these mechanisms, but nerds know how the mechanisms work. Most users do not.

1. Certificates might still be a great fit for _internal use_ within an organization.

The idea of "chain of trust" is fine in theory. In the real world, you see it everywhere. People trust middlemen all the time. From movie reviews to background checks. We routinely build trust of a certain person indirectly through others.

But I think people are good enough at building their own systems of trust. And they may not all follow the same steps to arrive at trust. Maybe they prefer one middleman in the chain, or maybe they don't mind several of them, or maybe they prefer to cut out the middlemen and go directly to the source. And maybe they don't agree on the same middlemen. IMO, no one should be forced into using a system where they must accept a predetermined chain of "trustworthy" parties, whether in the real world or in the world of computers.

For some reason (maybe because OpenSSL is a mix of too many things at once?), people seem to have a difficult time getting their head around the SSL PKI system. "Certificates" are anything but user friendly (even if you are technically minded). Collecting them, verifying them, generating them, installing them: none of it is easily understood. Not to mention protecting the private keys behind them from theft. (I'll bet the NSA's collection of stolen private keys is a beauty.)

You can't even _compile_ OpenSSL without Perl. What does that suggest about the authors' threshold for complexity? Maybe nothing. But to me it signals that simplification is not a top priority. OpenSSL is a complex conglomeration of software from disparate sources; considering the trust we place in it, maybe that's relevant. Anyway...

Contrast the hassle of certificates with the concept of authentication in SSH, specifically the "known_hosts" file. Not perfect, but much simpler, IMO. Your computer has a unique string of characters called a "host key". Anyone who wants to connect to your computer keeps a copy of that string in a "known_hosts" file on their own machine and verifies it's really your computer by comparing the key the other end presents with the copy they saved. If it matches, they know it's you.

Bob met Alice, and she gave him a card with her host key printed on it. Bob goes home and copies Alice's host key into his "known_hosts" file. If she runs an SSH daemon, Bob can now use an SSH client to identify Alice's computer on a network.

We might call this "SSH authentication."
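
As a rough sketch of that check (simplified; real SSH clients hash hostnames, support several key types, and do the cryptographic proof-of-possession underneath), it boils down to something like this:

    import os

    def load_known_hosts(path="~/.ssh/known_hosts"):
        # Simplified parse: "hostname key-type base64-key" per line.
        hosts = {}
        with open(os.path.expanduser(path)) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 3:
                    hosts[parts[0]] = (parts[1], parts[2])
        return hosts

    def check_host(hosts, hostname, offered_key_type, offered_key):
        entry = hosts.get(hostname)
        if entry is None:
            return "unknown host: verify the fingerprint out of band (Alice's card)"
        if entry == (offered_key_type, offered_key):
            return "key matches known_hosts: this is Alice's machine"
        return "WARNING: host key has changed"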

SSH allows the use of SSL-style certificates as well. But my understanding is they are rarely used. My guess is that, to users, they're not needed: SSH authentication is good enough.

The concept is sufficiently simple and the system is sufficiently manageable in practice. IMO.


What if the UX of root certificate management gets better? In the OS X Keychain you can just click on, e.g., the DoD CA root certificates and permanently un-trust them globally.

We need an easy and good UX for every new thing, so why not improve the usability of the existing X.509 world instead of re-inventing the wheel?

E.g. S/MIME support in recent versions of Mail.app and iOS Mail got really nice. There is improvement; it's currently just enterprise-driven.

We need to talk to industry leaders like Google and Mozilla to fix their horrible X.509 browser experience and make it available and usable to consumers, too.


If users can't configure their Facebook privacy settings I'm not too optimistic about their ability to manage root certificate trust bits. It's a good start though.

Ideally OS/browser vendors would drop compromised CAs. That happened with DigiNotar. It's not always an option, though: the three largest CAs have an 83.97%[0] market share. Dropping any one of them breaks the internet.

Unfortunately, a lot of people seem hell-bent on destroying the existing system rather than improving on top of it.

[0] https://ssl.netcraft.com/ssl-sample-report/CMatch/certs


It's sometimes really hard to find DoD CA certs. They could be hidden in some chain starting with a proper non-DoD CA, for example.


Here is an idea: one can timestamp a certificate in the Bitcoin blockchain and renew it with the same private key to prove continuity of ownership.

More than that: you can build a name protocol by timestamping names the same way, so you'd get uncensorable DNS and blockchain-as-certificate-authority in one protocol, without worrying that some VeriSign gives keys to the NSA or that your domain name is taken away.

Namecoin tries to do that, but being a separate chain to mine slows it down. I'd support a name system on top of Bitcoin to avoid the extra mining.
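
For the timestamping part, a back-of-the-envelope sketch: hash the certificate and commit the hash in an OP_RETURN output, then make later renewals from the same key to tie the updates to one owner. (The "CERT" marker and the helper are invented for illustration; a real protocol would need much more.)

    import hashlib

    OP_RETURN = 0x6a   # Bitcoin opcode for provably unspendable "data" outputs

    def op_return_script(cert_der, marker=b"CERT"):
        # Commit to the certificate by embedding its SHA-256 hash.
        # The 4-byte marker is invented here so these commitments can be
        # found when scanning the chain; it's not any existing standard.
        data = marker + hashlib.sha256(cert_der).digest()   # 36 bytes
        assert len(data) <= 75                               # single-byte push
        return bytes([OP_RETURN, len(data)]) + data

    # Renewal: sign/spend with the same key used for the original commitment
    # transaction, proving continuity of ownership over time.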


What happens when you lose the private key? Lost DNS?


The protocol may require "pinging the blockchain" by creating another transaction every N days. E.g. every month or year. If the name's config was not updated in a long time, you or someone else can claim it again.



