Let's Encrypt's old root certificate is expiring (scotthelme.co.uk)
195 points by zdw on Sept 20, 2021 | 77 comments


I can see this being an issue for embedded systems that were set up before Let’s Encrypt added their new root authority. Some of these embedded systems won’t have an automatic way to update their trust anchors like browsers and operating systems typically do, and so 30 September 2021 will be their Y2K date. RIP, oh ye IoT toaster!


If those devices require LE and still haven't received (automatic) updates already, I'd consider them IoT trash regardless. It's probably part of some IoT botnet already.

Embedded devices that aren't updated regularly for long periods of time are the reason we can't have nice things on the internet.


>Embedded devices that aren't updated regularly for long periods of time are the reason we can't have nice things on the internet.

No. Embedded devices should generally not be on the Internet at all. Rather, they should have local management/update points and zero access to the Internet.

We still wouldn't be able to have nice things on the Internet, due to all the general purpose devices that have been compromised, but there's no reason a light bulb, a refrigerator, a TV or (especially) a camera or microphone should have any contact with the external world.

But that breaks the "we're going to display ads on stuff you paid for, and if we spy on you, we can charge more for those ads" business model, and as such, I won't hold my breath and will continue to aggressively manage my DNS and network egress rules.


> No. Embedded devices should generally not be on the Internet at all. Rather, they should have local management/update points and zero access to the Internet.

This! So much this. There should exist a gateway between these devices and the Internet. IoT devices should not have to send data across the Internet in the first place unless the user wants to access their home remotely.


I would consider IoT trash any IoT device that demands/requires an internet connection to be functional. The same would apply to any smartphone that requires a SIM on first start (or after factory reset). Embedded devices should not even be allowed online.

I have mixed feelings about automatic updates because they (sometimes) have a bad habit of breaking things and giving me a ton of hassle.


Honestly, if the internet isn't required for the device to function, I think it shouldn't even be IoT in the first place.

I think automatic updates should be part of the device's default lifecycle because nobody will download a ZIP and stick it onto a flash drive to update their fridge. If an automatic update bricks a device, it should be exchanged for free by either the point of sale or the manufacturer because it broke during normal operation. IoT vendors have been given far too much leeway and it's time they're held responsible for the crap they put on the internet.


You can have IoT devices with LAN only. IoT doesn't mean that it needs to understand DNS and access WAN. For that you can have IoT gateways with proper security in place.

The problem with "part of the device lifecycle" is planned obsolescence. I would be up for it though if manufacturers are forced to release the source code and all service information they have the same day they drop support/updates for a product.

Edit: I don't think you realise how many applications a lot of "IoT" things have where they are only on a LAN with zero internet access. Internet access should be optional, not mandatory.


IoT without Internet is just oT. I agree that not all embedded devices with networking capabilities necessarily access the internet, but I don't think you can call something that doesn't have WAN access "IoT", just by definition.

I think many devices without WAN access would fall under the "ICS" or even SCADA umbrella.

Personally, I'd support a law that forces companies to expose their source code when they stop supporting IoT devices. Forcing companies to balance their trade secrets with customer updates should help the internet get more secure, even if it raises the prices at first.


Thread is the future of IoT. Finally a local mesh network that works.


Not with that idiotic name. The authors deserve a strong paddling.


I imagine Botnets may even have an interest in updating the devices themselves..


Botnets definitely tend to patch or secure devices they've taken over. Plenty of them kill the ssh or telnet daemon after getting access, and some even patch known vulnerabilities in the web interface. It's a weird battle for control over someone else's hardware.


The Invisible Hand keeping your IoT devices chugging along... interesting thought!


It's not hard to imagine someone patching firmware out of sheer annoyance/efficiency. In fact, I bet someone can provide examples of product improvement along the journey of nefarious takeover.


>[moved] to their own root, ISRG Root X1, that expires on 4th June 2035, giving us quite a number of years.

This seems like it makes the problem even worse. If these certs expired every year or so, embedded/IoT vendors would not get away with baking in something that eventually expires, since it would break too quickly.


Do you have a solution for these certificates expiring which:

(a) Doesn't involve creating a different certificate which never expires, and

(b) Still works if a user leaves a device powered off in storage for a few years

If so I'd love to hear it!


Have a certificate updater system which will respect an expired certificate for the sole purpose of fetching the latest certificates. What difference is there between having a cert that lasts 15 years and one that lasts a single year but can fetch a new list even if expired?

Everything else can still be confined to the regular expiry rules, just the certificate renewal job is exempt.
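
A rough sketch of what that exempt renewal channel could look like (everything here is hypothetical: the endpoint, the pinned hash, and the choice to pin the update server's certificate hash instead of doing normal, expiry-enforcing chain validation):

    # Hypothetical sketch: a trust-bundle updater that keeps working after the
    # normal roots have expired, by pinning the update server's certificate
    # hash instead of doing ordinary (expiry-enforcing) chain validation.
    import hashlib
    import socket
    import ssl

    UPDATE_HOST = "updates.example.com"        # hypothetical update endpoint
    PINNED_CERT_SHA256 = "0000...replace-me"   # hash baked into the firmware

    def fetch_trust_bundle() -> bytes:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE        # this one channel skips expiry/chain checks
        with socket.create_connection((UPDATE_HOST, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=UPDATE_HOST) as tls:
                der = tls.getpeercert(binary_form=True)
                if hashlib.sha256(der).hexdigest() != PINNED_CERT_SHA256:
                    raise RuntimeError("update server does not match pinned certificate")
                request = (b"GET /ca-bundle.pem HTTP/1.1\r\n"
                           b"Host: " + UPDATE_HOST.encode() + b"\r\n"
                           b"Connection: close\r\n\r\n")
                tls.sendall(request)
                data = b""
                while chunk := tls.recv(4096):
                    data += chunk
                # Sketch only: the response still includes HTTP headers, and the
                # caller should verify a detached signature on the bundle before
                # installing it as the new trust store.
                return data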


Good idea, let's change that slightly.

Require a new cert to be available at a standard path, not less than 10 days before the previous expiration date for ordinary websites, and not less than 1 year before the previous expiration date for long-lived certs.

Then part of the standard utilities in any SSL library should be the autoupdater, which recognizes that a cert should have an update available and takes care of that, emitting loud warnings if the replacement is not available in the appropriate window.
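
A small sketch of what the client side of that convention might look like (the well-known path, the window sizes, and the helper names are just illustrations of the proposal, using Python's 'cryptography' package):

    # Illustrative only: poll a well-known path for a successor certificate and
    # warn when none is published inside the proposed window.
    import datetime
    import urllib.request
    from cryptography import x509

    SUCCESSOR_PATH = "/.well-known/next-cert.pem"   # hypothetical standard path

    def successor_window(long_lived: bool) -> datetime.timedelta:
        # 1 year for long-lived certs, 10 days for ordinary websites (per the proposal).
        return datetime.timedelta(days=365 if long_lived else 10)

    def check_successor(host: str, current: x509.Certificate, long_lived: bool) -> None:
        deadline = current.not_valid_after - successor_window(long_lived)
        if datetime.datetime.utcnow() < deadline:
            return  # not yet inside the window where a successor must be published
        try:
            pem = urllib.request.urlopen(f"https://{host}{SUCCESSOR_PATH}", timeout=10).read()
            x509.load_pem_x509_certificate(pem)     # parse to make sure it's a real cert
        except Exception:
            print(f"WARNING: no successor certificate published for {host}")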


We should move to never expires. What good does a cert that expires in 15 years do other than set up a ticking time bomb?

On the other hand, leaf certs should expire every day. Or at most a week. Move to new root certs when the tech calls for it (like SHA1->SHA256)


The old Let's Encrypt root certificate (DST Root CA X3 [1]) was from 2001, was signed with SHA-1, and is 2048-bit RSA.

What criteria would you use to decide expiration dates for 20 years in the future?

[1]: https://crt.sh/?id=8395


To be clear, the self-signature on a root certificate is meaningless (it only exists so that root certificates can be parsed with the same code as intermediate and leaf certificates); using SHA-1 there is not a risk.

https://www.entrust.com/blog/2014/04/need-sha-2-signed-root-...


Let's Encrypt was only founded in 2014, and left beta in April 2016; they just used that cert to start with IIRC. So it's been about 5 years for LE users, not 20.


The only practical benefit I can imagine is that it puts a deadline on the damage from a root compromise. Not sure how much that's actually worth though.

If you're designing a device that cannot be updated (or won't be), you probably shouldn't use Web PKI. Even if the roots didn't expire, I'm sure the BRs would eventually evolve to preclude issuing leaf certs compatible with the devices.


Why not just use self-signed certificates for those sorts of devices with a 50-year expiry or something like that? Surely setting up your own CA or even embedding a single long-term certificate (or chain) within your own device isn't too much of a hassle, right?

Realistically, your toaster, fridge or lightbulbs will only talk to a particular set of hosts within the vendor's walled garden, so there's no need to have a trust chain for the entirety of the Internet anyways. Provided that your IoT devices aren't doing anything shady otherwise. Assuming that you even need or want IoT devices.
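
For what it's worth, minting a long-lived self-signed root like that takes only a few lines with the 'cryptography' package (the name and the ~50-year lifetime below are placeholders, not a recommendation of parameters):

    # Sketch: generate a ~50-year self-signed CA certificate for a vendor's own
    # device ecosystem.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Vendor Device CA")])
    now = datetime.datetime.utcnow()

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                                   # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365 * 50))   # ~50 years
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    with open("device-ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))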


I am deploying a system where IoT devices first contact a bootstrap endpoint, authenticated by a private CA whose root cert is hardcoded in the device firmware. The bootstrap endpoint communicates the URL and CA root cert of the actual web services. This allows us to have the services themselves implemented as serverless functions using the cloud provider HTTPS/MQTT facilities.
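
For concreteness, a rough sketch of that first hop as described (the URLs, file paths, and field names are made up, not the actual implementation):

    # Hypothetical sketch of the bootstrap step described above.
    import requests  # assumes the device has a requests-like HTTP client available

    PRIVATE_ROOT = "/firmware/private-root-ca.pem"          # hardcoded in the firmware image
    BOOTSTRAP_URL = "https://bootstrap.example.com/v1/config"

    def bootstrap() -> str:
        # Step 1: contact the bootstrap endpoint, trusting only the private root CA.
        resp = requests.get(BOOTSTRAP_URL, verify=PRIVATE_ROOT, timeout=10)
        resp.raise_for_status()
        cfg = resp.json()   # e.g. {"service_url": "...", "service_ca_pem": "..."}

        # Step 2: persist the CA root and URL of the actual (cloud-hosted) services,
        # which may sit behind the provider's HTTPS/MQTT endpoints.
        with open("/data/service-ca.pem", "w") as f:
            f.write(cfg["service_ca_pem"])
        return cfg["service_url"]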

Does anyone see potential issues with that?


The only issue is if the private key of your root Cert gets leaked. Then you won't have a way to issue a new one to your device.


Even if the root cert is hardcoded, the root cert might still have an expiration date, and the firmware code might still check that.


Good point. When this happens I'll be dead though, and so will those devices. If we both turn out to be immortal I'll need to provide a firmware upgrade with a new CA root, possibly 10 years or so ahead of the expiration so every device has a chance to auto-upgrade.


It could quite possibly be easy to simply disable or remove the expiration check from the firmware code. My point was only that it’s an issue that one might not immediately think of.


Let's hope for can't-connect-to-anything failure modes.

The only thing worse than bricked electronic trash is functional, malicious electronic trash.


This reminds me of how forests actually benefit from the occasional wildfire.


It's integral to many ecosystems.

I am also reminded of ChaosMonkey, the tool that would periodically take parts of your infrastructure down so you can make sure your failsafes actually work in the real world.


Yeah, such systems will be problematic. Those types of systems also have similar problems in cases where a certificate has been revoked. Thankfully that isn't a very big problem (at least at the moment).


Is there any point in a root certificate that expires after 20 years? Any compromised root certificate will almost certainly be revoked before it naturally expires.

Why not just make root certificates indefinite and revoke them as part of the system trust anchor update process, at the same time as downloading new root certs? That way, you know that a revocation is only going to happen when it also pulls down replacement certificates.


> Any compromised root certificate will almost certainly be revoked before it naturally expires.

How long will revocation lists have to be in the year 2100 if nothing expired?


I'm guessing root cert revocation data growth is much slower than storage density improvements. And of course you don't need to revoke a certificate that was never on the device in the first place.


This is why no one should ever have adopted things like Let's Encrypt for their blogs and random portfolio websites in the first place.

Especially not with disabling regular HTTP access.

Will this be a wake-up call for everyone? Highly doubt it. The presumed "correct" solution is deprecation of all of those perfectly capable devices with gigabytes of memory and storage as outdated and broken, even though Google Search still works on all of them, as does any other website that's HTTPS-free, as would any other website that's using Let's Encrypt for no particularly good reason had they simply not adopted Let's Encrypt in the first place. HTTP still works and doesn't suffer from any of these compatibility problems.

Anyone care to calculate the environmental impact of this planned-obsolescence action?

The thing that's most annoying, however, is the loud insistence by a minority group that it's the devices that are at fault, not their insistence on using Let's Encrypt to secure things that are public information and aren't meant to be secured.


I tried my site from both Debian and iOS. (“openssl s_client” 1.1.1k and the HE.net app, respectively.)

Debian shows a chain ending with the ISRG X1 cert that was signed by DST X3.

iOS shows a chain ending with the self-signed ISRG X1.

This happened even though Debian has the self-signed ISRG X1 installed as a root, and even if I delete the DST X3 from my list of roots.

I don’t really understand how this can happen.


I noticed that some clients/parsers, like OpenSSL are sensitive to the order of the certs in the chain (according to spec) whereas others try to “reconstruct” the tree visually.


So does this mean anything for clients worth supporting? I don’t care if some museum piece like an iPhone 4 no longer works.


If you're targeting general consumers (i.e., not the HN crowd or people who upgrade their phones every year or two), there's still a double-digit percentage of Android devices in the wild running versions older than Android 8.

https://gs.statcounter.com/android-version-market-share/mobi...

Some of the inexpensive phones you can buy new right now (where I am) come with Android 7.


Honestly, as a society we should not be putting any effort into supporting manufacturers who are producing hardware that is immediately e-waste.

Right to repair needs to cover this issue. Either keep devices in your ecosystem up to date, or open it up for third party support.


> Honestly, as a society we should not be putting any effort into supporting manufacturers who are producing hardware that is immediately e-waste.

I am staring right now at two mobile dev test devices I have on my desk, a Samsung Galaxy S6 and an iPhone 6. iPhone is running iOS 15 released this week. Galaxy is stuck on Android 7 because Samsung don't bother providing updates to devices beyond 2 years.

It's no coincidence that my latest personal phone purchase was from Apple.

(Well, that's not _quite_ true. I have a PinePhone on the way. But I'm expecting everything I deserve for buying a beta version of a Linux phone from a niche manufacturer with _that_ one...)


I have a Samsung Galaxy J1 (2016) that runs Android 5 and was (I later realised) already out of manufacturer support when I bought it new at retail in April 2018. That’s insane.


It’s as if Samsung was selling dishwashers and Apple was selling entry into an ecosystem.


Glad that this behavior from Samsung is changing. I have an S10 that was on Android 9; now it's updated to 11. I guess there will be no more major releases, but I still do (and will) get security updates for a few years. At least there is now a predictable pattern of device support.


(iPhone 6s, presumably.) The latest release for the iPhone 6 is iOS 12, due to it having less RAM (1 GB vs 2 GB).

Also didn't see it mentioned in your comment, but both were released in the same year (2015). Additionally, although the iPhone 5s (2013) and 6 (2014) are stuck on iOS 12, they're still getting security updates! [0] That's 8 years of support so far.

I'm still using a 6s, but planning on upgrading in the next few months. It's nice to have the option to keep using the phone though without losing support.

[0] https://support.apple.com/en-us/HT212548


I know this ship has sailed, but I think building on a trust system that by design has an expiration date and no built-in method for rotating certs goes a long way toward creating the e-waste problem.


> Either keep devices in your ecosystem up to date, or open it up for third party support.

Infinite support for old devices is not viable, so you are just trying to force open source.


The brand new Garmin GPS I purchased in June is running Android 6. Completely absurd. Granted, it does its job of navigating fine, but it still has wifi connectivity.


At least, one can hope that whatever SSL certs it relies on to do the job you bought it for are under the control of Garmin, who can pay for non-Let's Encrypt certs with root CA trust chains that will work with their devices. How much non-map-based access to Android does it expose? It'd suck if it loses Google Traffic API connectivity, for example.


It doesn't directly expose anything more or less, every app you have access to by default is Garmin controlled (except perhaps iOverlander). There's no play store or services, as far as I can tell. You can sideload whatever you want though.


My "museum piece" (Galaxy Note4) still works and I want to keep it that way. Why should I throw away perfectly good hardware?

It is rooted, so I can fix the problem, but still, I find the idea that smartphones should only last 3 or so years when their hardware is good for 10 to be... annoying. Especially considering that modern smartphones tend to be more expensive, without a removable battery or headphone jack, and not fundamentally better than smartphones from 5 years ago. And the custom ROM community that kept old phones running is not what it used to be, especially considering that more and more apps refuse to run if you are not running the official (outdated) ROM. Planned obsolescence replaced real obsolescence.


If you care mostly about web browsers or equivalent technology, because you make an actual web site for visiting with web browsers, then probably not; everything is fine. Anybody who would be broken by this is probably already annoyed by something else your modern site does.

If your only TLS clients are under your control (e.g. you own an app, the app uses HTTPS or some other TLS protocol to talk to your servers) then you should be able to easily work out if there's any problem [edited to add: Hint, use a tool to wind their clock forward past the deadline by a day or so and see what happens] and decide whether and how to mitigate such problems.

But, if you allow third parties (customers, suppliers, or worst of all, the General Public) to use some protocol (whether that's a RESTful HTTPS server or some custom text protocol from the 1980s) over TLS, you are potentially in for a lot of trouble when DST Root CA X3 expires and you should be on top of that by now.

* If you need everybody and their dog to connect using rusty garbage from last century, too bad, figure out which certificates you need to pay $$$ for to make that happen and authorise the purchases today. In this case though I'm astonished Let's Encrypt has ever worked for you. DST Root X3 was widely accepted, but it was hardly the most popular.

* If you can live with some clients dropping away e.g. because it's a free service anyway, or your SLA or other Ts&Cs say that you use Let's Encrypt and so it's actually in some sense "their problem" when this breaks, or you just hate money and love being sued, you can do nothing.

* Otherwise you really need to figure out what fraction of your users will blow up. You should be capturing User Agent info (if HTTPS) and metrics on the types of cipher suite used and other fingerprints that can help you figure out what's connecting and thus what to expect. You need to be contacting "at risk" customers if possible to warn them what's about to happen and point them to resources for what they should do:

1. Software upgrades. Archaic OpenSSL is the biggest problem; if they can go to a much newer OpenSSL it's much less likely they're suffering.

2. Trust store updates. By now, just telling them to rip out the (expiring) DST Root CA X3 cert if they have it might be easier than trying to figure out whether it's safe to leave it. But whether they keep it or remove it, they need to make sure they trust ISRG Root X1.
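
As a complement to the clock-winding test suggested above, a quick first pass on exposure is to scan the chain you actually serve (or the bundles your clients ship) for anything expiring before the cut-off. A sketch using Python's 'cryptography' package, with the deadline and file name assumed:

    # Sketch: list certificates in a PEM bundle that expire before a date of interest.
    from datetime import datetime
    from cryptography import x509

    DEADLINE = datetime(2021, 10, 1)   # the day after DST Root CA X3 expires

    def expiring_before_deadline(bundle_pem: bytes):
        marker = b"-----END CERTIFICATE-----"
        for block in bundle_pem.split(marker):
            block = block.strip()
            if b"-----BEGIN CERTIFICATE-----" not in block:
                continue
            cert = x509.load_pem_x509_certificate(block + b"\n" + marker + b"\n")
            if cert.not_valid_after < DEADLINE:
                yield cert.subject.rfc4514_string(), cert.not_valid_after

    # Example: scan the chain file your web server serves.
    if __name__ == "__main__":
        with open("fullchain.pem", "rb") as f:
            for subject, not_after in expiring_before_deadline(f.read()):
                print(f"{subject} expires {not_after}")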


I'm concerned that my smart tv might start to fail, or that older embedded devices that use http like my irrigation controller that gathers weather data from the internet might run into problems.


Does this even matter? If LE only issues 90 day certs there shouldn’t be any certs they’re signing with this CA Cert that are even valid now either way.

If so then it’s yet another testament to their opinionated decision to only issue 90 day certs. (Which I strongly support)


Nobody uses roots to sign End Entity certificates. That's been prohibited for years, and it was always a stupid practice even before it was specifically prohibited.

But this is about root trust. Ultimately, why should your software trust this certificate, but not trust some random certificate I just made? If your system only trusts a Let's Encrypt certificate because of the DST Root CA X3 which is from like 2000, and then DST Root CA X3 expires, does your system no longer trust those certificates?

If you make actual trust decisions, presumably you've made a few new ones since before the war in Afghanistan started and hopefully you trusted ISRG Root X1, from the organisation running Let's Encrypt. It's been around for a few years, you had time.

More likely, you let your OS vendor make such decisions, don't worry all major OS vendors trusted ISRG Root X1 years ago. Apple, Microsoft, and (on behalf of the Free Unixes) Mozilla all OK'd this years and years back.

But, if your software is a bit long in the tooth and was poorly made back then, that might not help. OpenSSL 1.0.0 was exciting when it happened, but that was a long time ago too and it didn't know how to handle this correctly.

Old enough GnuTLS likewise.


Right but my point still stands.

Even if you look at the trust chain:

They would’ve signed intermediate certs along the way.

The intermediate certs would’ve had Expirations before this month

Certs signed with the intermediates would’ve expired sooner too.

They would’ve stopped signing Intermediate certs for this a few years ago.

So what’s the actual risk here? Are there certs out there that have this Cert as a root, with expirations past the lifetime of this root cert? How would that have even been allowed?


> The intermediate certs would’ve had Expirations before this month

No. Nobody does that. Why would you do that?

I think you're most likely getting muddled because, in common with many people, you've been thinking about the Web PKI as a graph in which certificates are nodes.

It isn't. It's a graph in which the nodes are key pairs, and the certificates are labelled, directional edges.

You're suggesting that the labels on any one edge ought to be constrained based on all other labels on edges leading toward the node it's from, and that makes no sense, unsurprisingly we don't do that.

> They would’ve stopped signing Intermediate certs for this a few years ago.

Again, nobody does this.

> So what’s the actual risk here? Are there certs out there that have this Cert as a root, with expirations past the lifetime of this root cert? How would that have even been allowed?

Imagine somebody loudly insisting that negative integers don't exist. That they can't exist, they just don't make any sense, obviously there can't be a "minus six", what would that even mean? Frustrating, right? That's what this is like.


But doesn't atonse's comment still apply? If all the certificates issued by Let's Encrypt's old CA have expired (since they stopped issuing certs from it a year ago, and their certificates only last 3 months), wouldn't it stand to reason that any certificate issued by the old CA has already been replaced with a new CA certificate? And it would then stand to reason that any devices that aren't updated would have slowly watched the web degrade already? In other words, with Let's Encrypt there's no cliff everyone walks off when the root CA expires; it's more a 90-day descending hill.


You're failing to think about it as a graph. Here's a graph to look at:

https://letsencrypt.org/certificates/

Let's try a concrete example, https://letest.tlrmx.org/ is a simple test site, it has a normal Let's Encrypt certificate. This certificate was issued by R3, the same intermediate issuing most (but not all) Let's Encrypt end-entity certificates today and this certificate doesn't expire until November.

On the graph that certificate is a labelled edge from what we'll call the R3 keypair to my keypair for letest.tlrmx.org, saying that R3 certifies this is really letest.tlrmx.org (and www.letest.tlrmx.org)

But, because it's a graph there are two certificates (actually three but we'll ignore that) for that R3 keypair. One of them is signed by DST Root CA X3, this one says R3 is certified as a CA by DST Root CA X3, and since DST Root CA X3 expires later this month, that certificate too expires later this month. However, there is also a certificate for R3 saying it's a CA, signed by ISRG Root X1 and that doesn't expire until 2025.

If we trust DST Root CA X3 this is all fine... for about a week. But if we trust ISRG Root X1 it's still fine until 2025, which is ages.

Immediately one bug rears its head. Software such as older OpenSSL versions may trust DST Root CA X3, notice it's expiring and then falsely conclude this is untrustworthy, even though it also trusts ISRG Root X1. That's a pretty bad idea, but you can bet some people are still running this.

We can continue following the graph to ISRG Root X1 but is this enough that you now see why atonse's simple view of the world is wrong?


> Imagine somebody loudly insisting that negative integers don't exist. That they can't exist, they just don't make any sense, obviously there can't be a "minus six", what would that even mean? Frustrating, right? That's what this is like.

Context matters. If your task is to count the number of gumballs in a jar, the person loudly insisting upon the non-existence of negative integers would be correct. Within this context, "minus six" doesn't exist and doesn't mean anything.


> > The intermediate certs would’ve had Expirations before this month
>
> No. Nobody does that. Why would you do that?

Ok, you can issue an intermediate cert that outlives your root cert, but why would you? Most cert validation libraries will reject a cert whose parent has expired even if it's not expired itself.

Source: have accidentally done it, nearly caused major outage.


You say "parent" but as I keep explaining it's a graph. It's not a tree. It's not even a DAG, it's a full blown "Oh dear that looks like a hard problem" graph.

There actually is a good reason to issue certificates from an expired root in particular, and in fact Let's Encrypt has one, a certificate for ISRG Root X1 signed by the expiring DST Root CA X3 for some few extra years. Why do this? Android clients don't care about expiry in root certificates. In the absence of actual root store updates they will continue to trust this root, which makes some sense.

But that's not what this is about (although it's interesting). The key thing to understand is that it's a graph.


Each cert was begotten from a specific signer. That’s all that is meant by “parent”. And the distinction between a tree and a graph is irrelevant for purposes of TLS cert validation. Each “Certificates” message should contain a linear chain, and each thing in the chain has another thing in the chain that signed it (which I am referring to as its parent). Walk that chain and verify that all of the certs in it are valid and eventually end in a valid cert that you trust a priori (and in many stacks, this MUST be a self-signed cert). If a particular platform chooses to continue trusting expired roots, that’s peculiar to that platform. I get it in the case of embedded stuff that may or may not ever get updated, but it’s strange nonetheless.


> Each cert was begotten from a specific signer. That’s all that is meant by “parent”.

Then this parent is a key pair, and can't expire. Certificates expire and certificates don't sign anything.

> That’s all that is meant by “parent”. And the distinction between a tree and a graph is irrelevant for purposes of TLS cert validation.

And this, ladies and gentlemen, is how you get bugs of the sort which will trip people up about a week or so. "I can't see how to solve this hard problem, but I wrote a good solution to a different easier problem, I hope that's enough".

> Each “Certificates” message should contain a linear chain, and each thing in the chain has another thing in the chain that signed it (which I am referring to as its parent).

This is truly how TLS 1.2 and earlier described the message. It hasn't been a good idea for years, and TLS 1.3 finally explains that no, you probably don't want to do this with the Certificates message (a "chain" in the usual parlance).

> Walk that chain and verify that all of the certs in it are valid and eventually end in a valid cert that you trust a priori (and in many stacks, this MUST be a self-signed cert).

Yup. This bad algorithm is exactly what's going to blow up for some people in just over a week. Don't do this.


I had to dash off to play D&D, so I stopped once I had written a bare explanation, but let's take an extra moment for why exactly this is going to blow up for some people.

> Walk that chain and verify that all of the certs in it are valid and eventually end in a valid cert that you trust a priori (and in many stacks, this MUST be a self-signed cert).

Imagine we're following this naive algorithm, we trust both ISRG Root X1 (which is many years from expiring) and DST Root CA X3 (which expires next week), and imagine that it's October 1st 2021 and we are seeing a pretty typical HTTPS server with a Let's Encrypt certificate.

Walking the chain we see several certificates, our algorithm tells us to check all of them:

1. An end entity certificate, for example.com signed by R3, this certificate seems fine, hasn't expired, valid, tick, moving on

2. An intermediate certificate, for R3 signed by ISRG Root X1, this certificate seems fine, it has CA:TRUE as necessary, hasn't expired, valid, tick, moving on

3. Another intermediate certificate, for ISRG Root X1, signed by DST Root CA X3, this certificate seems fine, it has CA:TRUE as necessary, hasn't expired, valid, tick, moving on

4. DST Root CA X3 self-signed certificate, which has expired. Invalid, fail, this chain is untrustworthy.

But wait, after step 2 we reached ISRG Root X1, which we know is trustworthy. We were actually done! Why are we looking at these other certificates at all, much less failing the whole chain?

And this is the bug in older OpenSSL versions (and older Libressl, and older GnuTLS) which is why if your clients can upgrade to OpenSSL 1.1.0 or newer that's key.
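
A toy illustration of the difference (not a real X.509 validator; the Cert fields and the dates are made up to mirror the example chain above):

    # Toy model: why "check every cert the server sent" breaks on 2021-10-01
    # while "stop at the first locally trusted issuer" does not.
    from collections import namedtuple
    from datetime import datetime

    Cert = namedtuple("Cert", "subject issuer not_after")

    def naive_walk(chain, trusted_names, now):
        # Validity-check everything presented, then require the last cert to be
        # a trusted self-signed root. The expired DST Root CA X3 fails step one.
        return (all(c.not_after > now for c in chain)
                and chain[-1].subject in trusted_names)

    def anchor_aware_walk(chain, trusted_names, now):
        # Stop as soon as some cert is issued by a name we already trust;
        # anything the server sent beyond that point is never examined.
        for cert in chain:
            if cert.not_after <= now:
                return False
            if cert.issuer in trusted_names:
                return True
        return False

    now = datetime(2021, 10, 1)
    chain = [
        Cert("example.com", "R3", datetime(2021, 11, 15)),              # leaf
        Cert("R3", "ISRG Root X1", datetime(2025, 9, 15)),              # intermediate
        Cert("ISRG Root X1", "DST Root CA X3", datetime(2024, 9, 30)),  # cross-sign
        Cert("DST Root CA X3", "DST Root CA X3", datetime(2021, 9, 30)),# expired root
    ]
    trusted = {"ISRG Root X1", "DST Root CA X3"}

    print(naive_walk(chain, trusted, now))         # False: rejects the whole chain
    print(anchor_aware_walk(chain, trusted, now))  # True: done at ISRG Root X1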


In this scenario, most (not all) stacks will stop when they reach ISRG Root X1, which is trusted because it's in your trusted roots store. You're correct, there is no reason to continue validating because we have run into a cert that is explicitly trusted. That should be fine. The chain only needs to continue until it hits an explicitly-trusted cert. My recollection of the TLS RFC is that it will stop there. In fact, the Certificates message doesn't even need to have an ordered chain, and the entire chain doesn't even have to be relevant. You can have your end entity cert, some random irrelevant intermediate, the root cert, and then the intermediate and (again, this is from memory, it's been a while since I actually read it carefully) RFC says that you just must be able to build and verify the cert path up to a trusted root by finding all the intermediates in that list of certs.

The fact that some software stacks also happen to insist that that final step lands on a self-signed cert is the problem that will cause this cert validation to continue up the chain past ISRG Root X1 and blow up at the expired DST Root CA X3. It's not wrong because it's validating the expiration date on the root cert, it's wrong that it's ever reaching that root cert at all.

If you have references that say I'm wrong about this, I'm honestly interested. I just don't think the solution to this type of problem is ignoring the expiration dates, I think it's pushing out a trusted cert that's further down the chain, a la ISRG Root X1.


> In this scenario, most (not all) stacks will stop when they reach ISRG Root X1, which is trusted because it's in your trusted roots store.

That isn't what you wrote. Machines don't do "What I really meant", only what you actually wrote. Unfortunately the people who programmed libraries like OpenSSL had the same attitude as you (until e.g. OpenSSL 1.1), and so in about a week a bunch of people are going to regret that.

> My recollection of the TLS RFC is that it will stop there.

The TLS RFCs don't offer any opinion about how you should make trust decisions. Accordingly even before RFC 8446 was finished, the most important clients (the web browsers) treat Certificates as one end-entity certificate plus some number of other documents which might be useful or might be irrelevant.

This also makes them robust against the most common misconfiguration which is failure to provide the intermediates. Since they don't care they will soldier on anyway, either by AIA chasing or, in the case of Firefox, by including the entire set of trusted unconstrained intermediates in every install.

> The fact that some software stacks also happen to insist that that final step lands on a self-signed cert is the problem that will cause this cert validation to continue up the chain past ISRG Root X1

That's not really key. They just don't stop, the self-signed certificates are a coincidence not a necessary element.

> If you have references that say I'm wrong about this, I'm honestly interested.

What does it mean to be "wrong"? I'm sure you will continue to believe you were correct, and yet the exact steps you wrote down are going to blow up for people in just a few days. As a programmer, I call that "wrong" but I'm quite sure you feel you weren't wrong at all, just misunderstood.


> That isn't what you wrote. Machines don't do "What I really meant", only what you actually wrote.

This isn't code for a machine, it's a conversation. If you interpreted it in a way that sounds broken to you, please tell me exactly how and I'll either clarify or learn something new. All I'm doing is casually describing the RFC, and yes, I may have stated something ambiguously or incorrectly. Help me out here. Are you saying the RFC describes a bad algorithm or that I've described it badly?

> The TLS RFCs don't offer any opinion about how you should make trust decisions

The TLS RFCs refer you to RFC 5280 for validation, which does explicitly specify the algorithm.

> I'm sure you will continue to believe you were correct, and yet the exact steps you wrote down are going to blow up for people in just a few days

I'm not seeing how my steps will blow up, although the phrase "eventually end in a valid cert that you trust a priori" could be interpreted differently than I meant it. "Eventually end in" here is referring to the validation algorithm ending, not the cert chain. Can you elaborate on how you think this algorithm fails?

> As a programmer, I call that "wrong" but I'm quite sure you feel you weren't wrong at all, just misunderstood.

Could I ask you to be a little less aggressive in this conversation? We aren't having a fight here.

Edit: I think we're actually in violent agreement here. Your statement "But wait, after step 2 we reached ISRG Root X1, which we know is trustworthy. We were actually done!" is exactly my point. You don't need to ignore the expiration date on the old root if you have issued a new trusted authority below that root and the validation can stop there. You aren't ignoring the expiration on the old root, you're ignoring the entire old root cert because you found a new trusted cert that supersedes it. If you did still have an end entity cert that was issued via a path to the old root that did not include the newer, explicitly trusted cert lower down the chain, THAT end entity validation would fail.


It still blows my mind that we’ve focused so much on writing robust software that can handle so many edge/failure cases… yet, an expired certificate blows everything up. As though the simple passage of time should dictate what I can and cannot trust or choose to do. Just ask me, the developer, if I should trust something or not, so I can pass that question on to the user, if it makes sense.


I think that's largely because "blow everything up" is arguably the correct thing to do on an expired certificate, at least compared to "silently do an insecure thing". If you visit a website with an expired certificate, most browsers do what you suggest and ask the user if they want to proceed at their own risk. All the software that doesn't have a direct user to ask, like basically anything running on a server, has to either fail with an error when something is wrong with the certificate, or open up a security hole that will probably eventually be exposed and exploited.


All software has a user, otherwise, what is the point of having the software in the first place? A server’s user is an operator, devops, or whatever the org calls it.


Not all software has a user present when it runs that can make decisions like that. Probably the vast majority doesn't. If an automated update process that runs on thousands of servers encounters an expired certificate, it's not like it can pause and pop up a prompt to ask me if it should continue or not. I do a lot of immutable infrastructure and don't even have shell access on those machines, so even if I wanted to, it wouldn't be possible.


This seems pretty easy to mitigate if someone created an IGNORE_EXPIRATION environment var where any matching cert fingerprint would ignore the expiration date. I have a feeling the more paranoid people would hate it, but it's better than what we have now, where it's all-or-none ignoring cert validity.


The risk is, older devices. Many devices aren't updated anymore, so they won't have newer root certs. You could argue that those users shouldn't be using those devices with the web, but they do.


I think they still support a cross-signed root, so hopefully those users will still be OK.



