> But what alternative is there for securing the dns chain?
You're asking the wrong question. The better question would be: What do I want to achieve by securing the DNS chain, and can I do that in a different way?
What you ultimately want is to make sure you're communicating with the correct other party. And the way to achieve this is TLS with certificates, validated via the WebPKI.
I'm not saying TLS+WebPKI is perfect. But it works certainly a lot better than DNSSEC (which, let's be honest, does not work in practice).
Yeah, ok, but then you need to ask if DANE achieves that. And the answer to that is clearly "no".
DANE never made it past an experimental browser plugin, and as far as I know that isn't even developed any more. In the time where DANE achieved nothing, we got Let's Encrypt, we got CT, and a whole bunch of other improvements to the WebPKI.
DANE technically achieved that; I played around with an OpenSSL plugin for DANE back in 2010 (when the root (.) was signed). But yeah, I get your point.
Right now, the path ahead is stapled DANE TLS extension.
Basically, the idea is that you use simple, unauthenticated DNS to resolve the domain name, just like now. Then you get the complete DNSSEC-authenticated DNS chain for the DANE record as part of the TLS handshake: https://datatracker.ietf.org/doc/rfc9102/
This seems to be the best of both worlds:
1. We keep DNS as a simple and nimble UDP-based service, without trying to cram the signature in each packet.
2. Since we HAVE to use TLS anyway to achieve any meaningful security, we can just transmit the full DNS chain (up to the root zone!) with signatures easily as a part of the TLS handshake.
3. The client then just needs to validate this chain, and it only needs to have the root zone's key as the root of trust.
4. The root zone's key changes fairly infrequently (roughly once a decade), so IoT devices can use it to bootstrap themselves.
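The chain walk in points 2-4 can be sketched with a toy model. HMAC stands in for real DNSSEC signatures here, and nothing below matches the actual RFC 9102 wire format; it only illustrates "validate the stapled chain down from the one pre-installed root key":

```python
import hmac, hashlib

def sign(key: bytes, data: bytes) -> bytes:
    # Toy "signature": an HMAC stands in for a real DNSSEC RRSIG.
    return hmac.new(key, data, hashlib.sha256).digest()

# Hypothetical chain as it might arrive stapled in the handshake:
# each link is (zone, zone key, signature over that key by the PARENT's key).
root_key = b"root-ksk"          # the only key the client ships with
net_key = b"net-zsk"
example_key = b"example-zsk"

chain = [
    (".", root_key, None),                           # trust anchor
    ("net.", net_key, sign(root_key, net_key)),
    ("example.net.", example_key, sign(net_key, example_key)),
]
tlsa_record = b"TLSA 3 1 1 <cert-hash>"
tlsa_sig = sign(example_key, tlsa_record)

def validate(chain, tlsa_record, tlsa_sig, trusted_root_key):
    # The client only trusts the root key; everything else is stapled.
    assert chain[0][1] == trusted_root_key
    parent_key = trusted_root_key
    for zone, key, sig in chain[1:]:
        if not hmac.compare_digest(sign(parent_key, key), sig):
            raise ValueError(f"bad signature on {zone}")
        parent_key = key
    if not hmac.compare_digest(sign(parent_key, tlsa_record), tlsa_sig):
        raise ValueError("bad TLSA signature")
    return tlsa_record
```

The structural point is that the server does all the fetching; the client performs only offline verification from a single, rarely-changing trust anchor.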
The stapling extension for TLS is a dead letter. Work on it was abandoned a couple years ago, when it finally occurred to the working group that control of a CA would allow you to strip the stapled DANE proof off the handshake, which defeats the entire purpose of the extension. DANE advocates counterproposed a pinning mechanism and got laughed out of the discussion by the browsers, for whom "pinning" is a cursed word.
How would that work if you only trust DANE? And if you have control over a CA, then you can just issue yourself a certificate for any host anyway, so it's a moot point.
What exactly did DANE achieve? Replacing CAs? I must've missed something, last time I checked every webpage on the planet was still using CA-signed certificates.
> Right now, the path ahead is stapled DANE TLS extension.
This thing I find really funny. It basically boils down to "we secured DNS. ok, we figured out it doesn't really work. Ok, how about let's stuff our DNS security system into TLS, because over DNS it does not work? Then we have secure DNS!"
Though I've heard this idea many years ago, and it appears deployment is nonexistent, so... probably not gonna happen either.
The RFC provided a standard, but people choose to ignore it.
> This thing I find really funny. It basically boils down to "we secured DNS. ok, we figured out it doesn't really work. Ok, how about let's stuff our DNS security system into TLS, because over DNS it does not work? Then we have secure DNS!"
Not quite. DNSSEC secured DNS, but it turns out that this by itself is not important in today's world.
However, we do have a very fragile situation where half the Internet now depends on the goodwill and competency of just one organization for their security (I'm talking about Let's Encrypt).
> Though I've heard this idea many years ago, and it appears deployment is nonexistent, so... probably not gonna happen either.
The TLS extension is a fairly new idea, and it's only now starting to get some traction.
This is the second time on this thread you've claimed that a DANE stapling extension is in the works. But it's not: it was proposed and shot down, years ago.
That's factually untrue. RFC 9102 was published two years ago in order to gather experimental data: https://datatracker.ietf.org/doc/rfc9102/ The relevant RFC for DANE is still active, and it hasn't been obsoleted ("shot down").
The major vulnerability of DANE is a MITM attacker's ability to "downgrade" connections to regular PKI in mixed PKI+DANE deployments. So DANE is, at worst, no better than PKI.
The other problems are the lack of something like Certificate Transparency for DNS updates, the inability to quickly invalidate erroneous records in caching resolvers, and the general complexity of DNSSEC setup.
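The downgrade problem can be shown in a few lines. This is a toy model of client behavior, not any real API: a client that silently falls back to WebPKI whenever no DANE/TLSA data arrives can never catch a MITM who strips that data and holds a (mis-issued) CA cert:

```python
# Toy model of a mixed PKI+DANE verifier. All names are illustrative.
def client_verifies(cert_ok_via_pki: bool,
                    dane_record_seen: bool,
                    cert_matches_dane: bool) -> bool:
    if dane_record_seen:
        return cert_matches_dane   # strict DANE check when data is present
    return cert_ok_via_pki         # silent fallback to plain WebPKI

# Honest connection: DANE data present and matching -> accepted.
assert client_verifies(True, dane_record_seen=True, cert_matches_dane=True)

# MITM with a mis-issued CA cert simply strips the DANE data -> also
# accepted, so effective security degrades to plain WebPKI.
assert client_verifies(True, dane_record_seen=False, cert_matches_dane=False)
```

The only way out of this fallback is some form of pinning ("this domain always has DANE data"), which is exactly the mechanism the browsers rejected.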
Geoff Huston basically says that _browsers_ bailed on the stapled DANE TLS extension, which is quite true. It doesn't mean that work on DANE and DNSSEC has stopped. Yes, it's taking time, but it's still ongoing.
As for CT, we don't need _everyone_ to adopt it. Just the most relevant TLDs would be enough. And there is work going on to add something like it for DANE. It's also slow, because there's a desire to obscure the DNS names from the CT logs (current CT logs expose ALL the DNS names, making it easy to map the internal infrastructure of any company depending on PKI).
Meanwhile: there will never be anything like CT for DANE and DNSSEC, for at least two big reasons:
1. Browser vendors have no leverage with DNS registrars to force them to deploy it, and that kind of leverage was required to get CT as far as it has been.
2. The DNS top level domains that are required to implement something like CT are controlled by sovereign states, which aren't going to participate in a single global transparency log; in particular, you'd be surprised to see any of the US or UK TLDs conceding to anything like this.
You have links here to DNSSEC automation work, but nothing to any kind of "DANE Transparency", because, so far as I know, nothing like it has even been meaningfully proposed.
> What you ultimately want is to make sure you're communicating with the correct other party. And the way to achieve this is TLS with certificates, validated via the WebPKI.
That really only works well for web servers. If your service isn't the one found at the A/AAAA lookup for a given name, it's much more difficult to obtain a cert for that name, and without that, clients are reliant on what's in DNS alone to make the association.
I'm not entirely sure what scenario you're talking about, I can't think of one. If your issue is that you can't get a cert without an A record, well, set an A record. I don't see how that's a problem, except if you make it one.
WebPKI, while called that, is not just used for the web. It is used to secure e-mail servers (for IMAP+POP3 trivially, for SMTP it needs a bit more work -> MTA-STS). It also works for all kinds of more obscure things like IRC.
Most modern services are HTTPS under the hood anyway, but there's really nothing stopping you from using TLS+WebPKI for other services, too.
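To illustrate the protocol-agnostic point: in Python, the exact same WebPKI validation a browser performs (system CA store plus hostname check) can wrap any TCP socket, whatever runs on top of it. The host/port in the comment is hypothetical:

```python
import socket, ssl

# create_default_context() loads the platform's trusted CA store and
# turns on certificate and hostname verification; nothing in it is
# HTTP-specific.
ctx = ssl.create_default_context()
assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED

def open_verified(host: str, port: int) -> ssl.SSLSocket:
    # Works identically for IMAPS (993), IRC over TLS (6697), HTTPS (443)...
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)

# e.g. open_verified("imap.example.org", 993)   # hypothetical server
```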
I don't know how to put this in a more palatable way, but you're looking at the world from the confines of a webdev's point of view. Can you really not imagine that there's any other protocol on the internet than HTTP? That there might already be a web server listening at a given name? That the person in control of that web server isn't you?
Regardless of protocol, yes only one person/org should be in control of (and able to obtain certs for) a particular domain.
Unless you're suggesting Person A should have port 443 on www.something.com and Person B should have port 444, and each gets their own (valid) www.something.com cert? Because that has some very clear problems.
Generally services not found directly at A/AAAA records for a name are found via another record that contains a hostname (HTTPS/SVCB, SRV, etc) at a leaf node below the name. So `_xmpps-server._tcp.example.net` might contain the hostname `hosted-provider.example.com` in which case `hosted-provider.example.com` will need to respond with a certificate for `example.net`, unless you trust the DNS, in which case it can respond as `hosted-provider.example.com`.
Well, it looks like they are in this instance, and neither they nor you have done anything to suggest otherwise. TLS and WebPKI have great usability for web servers, but for non-webservers it cannot even approach the smoothness of, for example, Caddy's "Automatic HTTPS" configuration.
> What you ultimately want is to make sure you're communicating with the correct other party. And the way to achieve this is TLS with certificates, validated via the WebPKI.
The thing is, TLS with certificates doesn't always achieve this, and we know this because there have been real-world successful attacks: e.g. intermediate CA certs being issued to bad actors, certificates for major domains being issued to people who don't control them, etc. The issue with the CA ecosystem is not that they're evil; it's that they don't validate what regular users expect them to validate, and that they are also subject to human error and to governmental and commercial pressure.
The more common issue with the validation CAs can perform for DV certs is that it just tells me that you controlled a domain at some moment in the past (pretty much up to a year ago). To mitigate this, we introduced things like watching CT logs for your domains, HSTS, and CAA record restrictions. Unfortunately, if you do secure your infrastructure in these ways, you have added complexity and will likely discover how your infrastructure can break in new and interesting ways, too.
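As a concrete example of one of those mitigations: CAA records let a domain owner restrict which CAs may issue for the domain, and CAs are required to check them before issuance. A toy check over already-fetched record strings (a real CAA lookup also walks up the DNS tree and handles more tags; omitted here):

```python
# Toy CAA 'issue' check. Input records look like: 0 issue "letsencrypt.org"
def issuance_allowed(caa_records: list[str], ca_identifier: str) -> bool:
    issuers = [r.split("issue ")[1].strip('"')
               for r in caa_records if " issue " in r]
    if not issuers:
        # No 'issue' restriction present: any CA may issue.
        return True
    return ca_identifier in issuers

records = ['0 issue "letsencrypt.org"']
assert issuance_allowed(records, "letsencrypt.org")
assert not issuance_allowed(records, "some-other-ca.example")
```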
We could skip all the issues we have with DNSSEC by using the same technology and security model that TLS+WebPKI has: DNS over TLS, talking directly with the authoritative server rather than using a caching resolver. With no (non-local) caching, you don't have any cache invalidation for keys, which is one of the major problems in deploying DNSSEC.
Servers talking to servers do handle errors a bit differently than browsers. A browser can ask the user whether to ignore the error, but for a server the answer needs to be to close the connection. If the certificate for the DoT server is broken, then the domain goes down, just as a TLS+WebPKI SaaS service breaks with a broken certificate. The impact, however, will be just as major as with DNSSEC errors.
I have personally tried to advocate a bit that people should give up public/shared resolvers. The added response time on modern machines is quite minor, and in terms of loading a web page or sending an email, a few milliseconds here and there doesn't really have a large impact. My argument for that has primarily been about privacy and security (no insecure path between stub and resolver), but I hadn't previously considered the major benefit of removing caching in terms of stability.
This is false! The point isn't that TLS 1.3 is a better protocol than DNSSEC (though it extremely is). It's that the entire WebPKI ecosystem has features DNSSEC does not have, has no timeline towards having, and probably never will have, because DNSSEC is de jure controlled by sovereign countries who will not agree to a global transparency log.
I think you are missing the whole point of what is being suggested. There is no DNSSEC if clients are talking directly with authoritative servers using DNS over TLS. No signatures, no sovereign countries controlling keys, nothing that is defined by DNSSEC. Once a client is securely connected to an authoritative server using TLS, everything that the server transmits is by definition authenticated. The client needs to traverse the tree of authoritative servers by itself, but that is the only drawback.
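That traversal is the same referral-following a recursive resolver does today, just performed by the client over authenticated transports. A toy sketch, with an in-memory dict standing in for the network (no real DoT connections are made; all names are illustrative):

```python
# Each "server" either answers for names in its zone or refers the client
# one level down. In the real proposal, each hop would be a DNS-over-TLS
# session whose certificate authenticates that server's answers.
AUTH = {
    ".":            {"referral": {"net.": "net-auth"}},
    "net-auth":     {"referral": {"example.net.": "example-auth"}},
    "example-auth": {"answer": {"www.example.net.": "192.0.2.10"}},
}

def resolve(name: str) -> str:
    server = "."                       # start at the root server
    while True:
        zone = AUTH[server]
        if name in zone.get("answer", {}):
            return zone["answer"][name]   # authenticated by the TLS session
        for suffix, next_server in zone.get("referral", {}).items():
            if name.endswith(suffix):
                server = next_server      # follow the referral downward
                break
        else:
            raise LookupError(name)

assert resolve("www.example.net.") == "192.0.2.10"
```

The trade-off the comment mentions is visible here: every lookup costs one TLS handshake per level of the tree unless the client keeps those sessions (or their results) around locally.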