lub's comments | Hacker News


> One of the big advantages of DDG's bangs is that you don't have to re-type/paste the whole search query. Instead, you click the search bar in DDG, type `<space>!g<enter>`. That's vastly simpler and faster than re-typing the query, then `<space>!g<enter>`.

At least in Firefox you can also re-enable the search box and just use that. It's even accessible via its own shortcut (Ctrl+K).


Didn't actually think of that. I'd forgotten that was an option; however, I've never liked the separate search box. It just feels like a waste of space. I don't want to think about whether I should press Ctrl+L or Ctrl+K. Half of the time I don't yet know whether I want to search or go to a site.


> Moving the feature into a plug-in was a really dumb move though. It totally killed discoverability of this unique selling point of Firefox.

It's actually usable now without extra addons. Just right-click the "+"/new tab button or go to about:preferences#containers directly.


Oh did they put it back in the mainline? I still have the addon installed. I'll give it a try, thanks for the heads up.


Did you try this?

  networks:
    - host
https://docs.docker.com/network/host/
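For what it's worth, a rough sketch of the swarm-side equivalents as I understand them (service name and image are placeholders, and I haven't verified the stack-file form myself):

  # attach a swarm service directly to the host network
  docker service create --name web --network host nginx

  # or, in a stack file, reference the pre-existing "host" network as external
  services:
    web:
      image: nginx
      networks:
        - hostnet
  networks:
    hostnet:
      external: true
      name: host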


Maybe it has changed since I built it, but I wasn't able to get this working with Swarm services. I had to convert them to docker-compose to make it work. The docs suggest it should work with Swarm mode though, so maybe I need to try again.

Ideally I'd like to give each service its own IP on the network, which was possible with how I had k8s set up.
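For the per-service IP part, the closest thing I know of in plain Docker is a macvlan network, where each container gets its own address on the LAN (the subnet, gateway, parent interface and address below are placeholders; I haven't checked how well this plays with swarm services):

  docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 lan

  # run a container with its own IP on that network
  docker run --rm --network lan --ip 192.168.1.50 nginx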


That's one of the reasons I like matrix. Real multi-client support within the same account, and each client is independent of the others.


Well sure, that's a strong selling point of both matrix and XMPP. What matrix clients are you using and which do you recommend for desktop use? I stopped using matrix because Element was really slow/buggy, but decent CLI/desktop clients with Tor integration do interest me (there just weren't any at the time).


Sounds like the customers' data itself is still available?

> All customers should immediately download backups of their websites and databases through cPanel.


I noticed that too and it's really weird. So they don't have backups of their part of the data (or they don't want to / aren't able to restore it), but they still have the customers' data?


Perhaps things were running in maintenance mode already, and there is no longer the desire to run this part of the business, so they took this unfortunate opportunity to wind things down.


Maybe they're compromised but the data seems intact, as in it would be irresponsible to keep serving it on the Internet, but most of it probably wasn't maliciously altered?


Yeah, it's pretty common for hackers to upload backdoors to random web sites when they can and exploit them at a later date. If we're talking about a full server compromise then I wouldn't use that downloaded data for anything except analysis/archival purposes, unless it's been thoroughly cleaned first.


I currently use sway on a 35" WQHD and a 14" notebook and IMO tiling window managers are actually more useful on large monitors.

On small screens I often run programs basically fullscreen because of the limited space. But the bigger the screen, the better you can actually tile it beyond simple half/quarter splits.


TLS doesn't necessarily mean everything is encrypted with FTP, because the data and control channels have to be secured independently.

See for example https://security.stackexchange.com/a/115565 or https://en.wikipedia.org/wiki/FTPS#Secure_data_channel for more detailed explanations.

Most of the time both are encrypted when configuring TLS, but it's not as easy as with IMAP or SMTP where you basically disallow all commands except STARTTLS.
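As a concrete example, with vsftpd you have to force TLS on logins and on data transfers separately; a minimal sketch (certificate paths are placeholders):

  # /etc/vsftpd.conf (excerpt)
  ssl_enable=YES
  rsa_cert_file=/etc/ssl/certs/ftp.pem
  rsa_private_key_file=/etc/ssl/private/ftp.key
  # refuse plaintext logins on the control channel
  force_local_logins_ssl=YES
  # refuse plaintext transfers/directory listings on the data channel
  force_local_data_ssl=YES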


If you use implicit TLS/SSL instead of STARTTLS, the data channel should be using TLS/SSL too.


I think the author is confusing three things:

* Let's Encrypt - the actual CA

* certbot - probably the Python software being referred to

* ACME - the protocol, which had a change in the supported challenges

Also, there are alternatives for all three: there are other CAs implementing ACME, there are other ACME clients, and there are still CAs supporting the old 'send an email and click a link' domain validation.
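For instance, the client and the CA can be swapped independently; the same certbot invocation can be pointed at a different ACME CA just by changing the directory URL (domain, webroot path and CA URL below are placeholders):

  # default CA (Let's Encrypt)
  certbot certonly --webroot -w /var/www/html -d example.com

  # any other ACME-capable CA, via its directory URL
  certbot certonly --webroot -w /var/www/html -d example.com \
    --server https://acme.example-ca.invalid/directory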


Historically the Certbot software was named "letsencrypt" which certainly didn't make this easier to understand.

The not-for-profit is named ISRG (Internet Security Research Group) and so that's the entity trusted to actually run the Certificate Authority. Let's Encrypt is in some sense branding for their main (only?) activity, the same way you can still buy certificates with Thawte branding even though Thawte hasn't really existed for many years.

ACME is an IETF standardized protocol and so one of the things which has changed at Let's Encrypt is they gradually migrated from the ACME prototype they'd built and shipped to something that's (more or less) compliant to the IETF standard.

This is akin to how today Google's own web sites can talk Google's QUIC protocol (sometimes referred to as "gQUIC"), but Google intends to rip that out once the IETF QUIC standard is published and have their sites just speak the standard QUIC instead (there may be a brief overlap where they speak both, but it's likely to be very short because maintaining two protocols is far from free).


> Historically the Certbot software was named "letsencrypt" which certainly didn't make this easier to understand.

And to this day strongly promoted (recommended first option) right from the letsencrypt.org starter page:

https://letsencrypt.org/getting-started/


That doesn't seem to be the case:

> Let’s Encrypt is a CA.

> We recommend that most people with shell access use the Certbot ACME client.


I don't understand you. You say it doesn't seem to be the case and then quote the exact part that proves that it does.


I (native English) first interpreted your comment as saying that "letsencrypt" is promoted. The other person probably read it the same way.


Huge stumbling block and source of confusion for me. During the steep learning curve setting it up, I'd often search online for help/tutorials and they'd all reference "certbot" which I couldn't find anywhere on my system. As a newbie, I frequently said "WTF is this certbot thing, I'm using letsencrypt!" Wasn't clear at all. At some point during my system updates, /usr/bin/letsencrypt became a symbolic link to /usr/bin/certbot and it became obvious.


Sounds like you were running a distro that didn't keep its packages updated. By the time everyone was calling it 'certbot', well, it was called certbot.


If you run something that issues 100 million or more certificates per year then backwards compatibility is not something that you toss out just because you can. Forcing that many web properties to upgrade their software (regardless of which party produced what) discounts the combined effort it will take on the part of the users/sysadmins of those systems, for something that could just as easily have been avoided.

You deprecate interfaces like these but you don't just shut them down, especially not when they are still seeing major use.

Just imagine IPv4 being shut down tomorrow because we've all had enough time to switch by now.


The reason the old interface was deprecated was that a security hole was found in the protocol. That is one of the few cases where it is reasonable to break backward compatibility in this manner.

Especially when dealing with certificates, where security is one of the top reasons to want to go there.


> that a security hole was found in the protocol

Is there any supporting evidence for that? Because the only thing I have been able to find so far is that it was simply superseded by a newer version, mostly to support wildcard certs. What holes there were in V1 were closed within a day or two at most.


The article says "then the challenge protocol was changed" so that's why people are talking about the protocol.

The only challenge which changed was tls-sni-01, which was removed and eventually replaced with tls-alpn-01.

The tls-sni-01 challenge is safe unless there are bulk hosting sites whose web server for some crazy reason accepts SNI for names that are nonsensical, and then serves up answers chosen by an attacker who is also one of the customers on that server instead of from a victim on the same server.

Unfortunately somebody actually did ship software which is crazy in that specific way, and it's named Apache HTTPD server. You might have heard of it. So that's a problem.

So, Let's Encrypt deprecated this challenge and you can no longer use it. They did tell everybody affected, by email to the address they provided for contact. Since they are not psychic they don't have a way to reach out to people who felt they didn't need to be contacted.

I suspect given you mention wildcards you're thinking of ACMEv2, which isn't a challenge protocol. But again there were plenty of email notifications about the ACMEv2 upgrade, and you've in fact encountered exactly the anticipated scenario: you decided to build out a new thing using the old service and it told you not to do that. Your old things are still working, for almost another year, after already two years' notice that this was going away; it's just that new things can't be launched against this already deprecated service.


You know this, but for the benefit of the thread: to say "tls-sni-01 is safe unless there are bulk hosting sites that break it" is to say that tls-sni-01 is unsafe. The "crazy" sites you're referring to included AWS and Heroku.

This all happened 2 years ago, so it's a bit odd to see it litigated today.


We briefly describe this history on page 6 of

https://jhalderm.com/pub/papers/letsencrypt-ccs19.pdf

in case anyone is more interested (there are also references there for further details). Twice, methods that seemed plausible for proving control over domain names turned out to make assumptions that were potentially violated by shared hosting environments.

Jacques, I'm really sorry for the hassle that these changes caused you.


Thanks for the link Seth. I wasn't aware this existed and it's sometimes nice to have something specific to cite as well as convenient that it's all in one place like this.

Edited to add: Wow the Sankey diagram (showing changes in which CA if any is used by a site) is something I hadn't seen anywhere else and is especially useful. Thanks again.


Heroku and (so far as I can tell) Cloudfront independently re-invented this stupidity. But if it was "just", say, Heroku and Cloudfront, you can imagine plausibly notifying those two providers to fix their broken infrastructure and then you're good.

Apache makes it unsalvageable by sheer numbers the same way it had already for HTTPS in http-01, so that's why I focused on Apache.

It's entirely possible for some fool to ship an exciting new cloud service that lets people bind to arbitrary ALPN values on a shared service and thereby re-introduce this problem for tls-alpn-01 - but unlike with tls-sni-01 that's not a bug common to hundreds of small bulk hosts using out-of-the-box Apache, so I assume we'd tell the exciting upstart to knock it off and warn their customers what they're doing is inherently unsafe, rather than requiring Let's Encrypt to stop offering tls-alpn-01.

In fact we're already on the other side of this for the ordinary version of http-01, for a different reason. Apache really does potentially let an attacker who controls aaa-aardvark.example at some bulk host perform http-01 challenges for www.some-custom-site.example, a site that has created A records pointing to the bulk host but isn't currently actually being served there, maybe due to a typo or unpaid bill.

But most bulk hosts have specifically configured Apache to show a default "Did you pay? / Have you configured your hosting properly?" type site, which is harmless in this case, and for the few that haven't, users can understand that um, if they visit www.some-custom-site.example in their browser they get to the attacker's site, so like yeah, that's where the problem is, nothing new with http-01.
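For anyone unfamiliar with what http-01 looks like in practice: the CA fetches a token over plain HTTP from a well-known path on port 80, so the usual setup is to point just that path at a directory the ACME client can write to. A rough nginx sketch, paths being placeholders:

  # answer ACME http-01 challenges from a directory the ACME client writes to;
  # files end up under /var/www/acme/.well-known/acme-challenge/<token>
  location /.well-known/acme-challenge/ {
      root /var/www/acme;
  }

The Apache problem above is about which virtual host ends up answering that request on a shared host, not about the mechanism itself.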


I did provide an email address but never got any mail (I did actually check that).

> it's just that new things can't be launched against this already deprecated service.

Yes, I noticed. So, I now have the entirely unforced option of re-imaging a machine that is working just fine besides this little detail, which is in fact just one very small thing in a whole pile of much bigger things that run on that particular box. Not to mention migrating twenty-nine years of email to a new mail server.

I'm sure there is a lesson in there somewhere, but I'm not sure I'm overly receptive today, I had a lot of other stuff on my agenda.


If you let a server lag in OS version, at some point in time you're going to hit this kind of problem. If not with Let's Encrypt, then with some other dependency. I know, I've been in the exact same spot. I just don't blame the dependency, and included server OS updates as part of a yearly maintenance cycle.


I find that really ridiculous. Not you, but the fact that an OS needs to be upgraded because of some application-level stuff that has to do with a protocol that is being run on some other server.

That's the kind of dependency snowball that we should work hard to avoid, not accept as some kind of new normality.

Servers should be able to live for years without re-imaging.


Is there a reason you can't just upgrade that one component on the server? Why do you have to re-image it from scratch?

If you have external dependencies they are going to move around from time to time throughout their lifetimes, especially if they are beta. LetsEncrypt may not have signaled beta with v1, but I've been a cert-manager user for years in pre-1.0 and I've known that meant I might need to come up for air and read the docs for a specific upgrade instruction from one pre-1.0 minor version to another at any time.

Now cert-manager is 1.0+ and my expectations can change. It should remain backwards compatible until the next major version (hopefully for a while! And they will provide a migration path when that comes, with clear instructions and a fairly long sunset, God willing).

But cert-manager depends on letsencrypt, and I depend on cert-manager, all of which depends on a protocol called ACME, and this is the arrangement. We made this deal because it was going to turn out less complicated than managing the certificates by hand, and they made that deal because it was going to turn out better than rolling their own protocol from scratch, similarly. Eyes on the prize.

If you didn't want LetsEncrypt as a dependency there are other ways to connect cert-manager or another tool like it, including other ACME providers... they all depend on the ACME protocol (or there might be some other protocol that you can use, with its own characteristics of change or stability, or you can roll your own); at some point you have to roll the dice and bet on something.

Occasionally these things happen. You suggest that servers should be able to go for years (but they have allowed years for this transition! What more can be expected, realistically?).


> Is there a reason you can't just upgrade that one component on the server, why do you have to re-image it from scratch?

Yes, I did this now and I have it working. But it leaves things in a messed up state and I don't like that so I will go back to this in a short while and fix it properly.

What I still wonder about is why their warning email never reached me; that I really need to figure out, because then at least I would have dealt with this under a lot less time pressure.

> If you didn't want LetsEncrypt as a dependency there are other ways to connect cert-manager or another tool like it, including other acme providers...

There are some very good suggestions in this thread, I will probably adopt one of them.

> You suggest that servers should be able to go for years (but they have allowed years for this!)

And somehow I missed that memo. Even so, I am still not convinced of the necessity; it is possible that it exists, but I have yet to see a valid reason for shutting down the old protocol for new registrations like this. There also seems to be some confusion, with people saying it should have worked for the same account, which I can prove did not work.


> But it leaves things in a messed up state and I don't like that so I will go back to this in a short while and fix it properly.

You say this with confidence; I wish my own situation provided me with the confidence to say this and mean it. We do not have reproducible systems and depend in many ways wholly on backup images of live production systems. Someone is going to say this makes my life simpler than yours by some twisted math, but I have my doubts about that myself.

We are still talking about migrating from Amazon Linux v1 to Amazon Linux v2, and with a recent announcement from AWS, the pressure is off! We'll be able to continue talking about this transition for a good long time to come. Again, a mixed blessing: is it better to have an operating system that can crawl along on life support? For those that can't upgrade, sure, it is better to get security maintenance than to have zombie servers which are not upgradeable, but who is to say what opportunity costs will arise because we are not on a formally supported leading-edge version of the platform.


Agreed, reproducible systems are an absolute must and it is a shame that we are still not even close to having a solid foundation under all this mission-critical stuff we build.

It feels like we are building these huge castles on quicksand.

At the same time I think the whole 'treat your servers like cattle, not like pets' thing exists exactly because we don't know how to do this properly. It is the cloud equivalent of hitting Ctrl-Alt-Delete to solve issues.


I know you solved your issue; for others in the same boat, look into acme.sh. It's a shell-only implementation: no Python, no loads of dependencies. I used that to keep Let's Encrypt running on an ancient server (firewalled) that I cannot upgrade for reasons.
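A minimal sketch of what that looks like (domain, webroot, key/cert paths and reload command are all placeholders):

  # issue a cert using the webroot method
  acme.sh --issue -d example.com -w /var/www/html

  # install it where the web server expects it and reload
  acme.sh --install-cert -d example.com \
    --key-file       /etc/nginx/ssl/example.com.key \
    --fullchain-file /etc/nginx/ssl/example.com.pem \
    --reloadcmd      "systemctl reload nginx"

Renewals are then handled by the cron job acme.sh sets up if you installed it the standard way.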


I decided to go with acme.sh instead of certbot on some servers because I am hoping that upgrading acme.sh will cause fewer headaches. But who knows...


Why TLS-SNI-01 was disabled: https://community.letsencrypt.org/t/2018-01-09-issue-with-tl...

Explanation that renewals will be disallowed after 1 year deprecation period: https://community.letsencrypt.org/t/march-13-2019-end-of-lif...

And as you seem to be talking about ACMEv1/v2 instead of TLS-SNI-01 (which is what I originally thought), it will be supported as late as June 2021 in some cases: https://community.letsencrypt.org/t/end-of-life-plan-for-acm...

ACMEv2 was introduced because it is much closer to the actual spec. Enforcing this ensures that there are actually ACME implementations out there, instead of proprietary "Let's Encrypt ACME" implementations. https://tools.ietf.org/html/rfc8555 https://github.com/letsencrypt/boulder/blob/master/docs/acme...

To me this seems like a sensible compromise between backwards compatibility and their mission for standardized automated renewals.


Yes, but that particular hole was fixed, wasn't it?


You can't "fix" the tls-sni-01 hole except by going back in a time machine to when Apache implements SNI and spraying all the involved developers with water. "No, bad developer, no biscuit. Do what the protocol specification actually says not whatever half-arsed nonsense you thought would work".

If there were like six web servers in the whole world that got this wrong, we could say "Fix those servers, fools" and sleep soundly knowing that those six servers are all that's affected. But Apache makes the scope too big to do that reasonably. It's a judgement call, but in this case the call was very easy.


I don't see what that has to do with me because there is no Apache on that server (just Nginx).


But Let's Encrypt is part of the Web PKI, and the Web PKI is for all names on the public Internet, not just any operated by Jacques Mattheij. You sought certificates from the Web PKI, probably because you wanted somebody else other than Jacques Mattheij to trust them.

A large fraction of public Internet HTTPS servers run Apache, which means tls-sni-01 is unsafe for a non-trivial fraction of names, which means we need to tell Certificate Authorities not to use this method or those like it. Specifically 3.2.2.4.10. TLS Using a Random Number has to be approached differently if it's to be attempted. The tls-alpn-01 challenge implements 3.2.2.4.10 using ALPN instead of SNI and appears to be safe in practice.


There was this joke when I was a fledgling programmer 35 years ago: if engineers built bridges the way programmers build software, the first woodpecker to come along would destroy civilization as we know it.

I think your comment is a nice illustration of that.

To me if a piece of software has a problem then it is that piece of software that should be fixed, not to push the burden onto everybody else as well. That's just so wrong.

But that does not mean I don't follow your reasoning and understand why this decision was made; still, the amount of waste here is incredible.


My interest in cryptography and the Web PKI is part of my interest in how to improve things (and the intersection with my interest in the Network), so that means, as well as reading about stuff like Bleichenbacher, I am interested in bridges.

Contrary to what your joke suggests, bridges do fall down because engineers don't always do a great job or learn from previous mistakes, they are after all only human.

England (where I live) has many railway bridges constructed by Victorian engineers using cast iron. The Victorians were not necessarily careful to properly document everything about these structures. And so a modern engineering team, responsible for the safety of hundreds of these structures, has to make some assumptions or else it would need a tremendous amount of investment to either replace every bridge or take it to pieces to figure out how it was built.

One obvious assumption is that, if you can get at the two cast iron beams on the outer edges of a bridge that's clearly supported on four beams, the two inner beams you can't reach without demolishing the bridge are presumably identical.

So on one particular bridge, over a road, the routine inspection team climbed into the guts of the bridge, measured the accessible outer beams and concluded these were of a sufficient thickness that, allowing for the inevitable corrosion by the elements and the permitted loading of trains, the bridge should be "good" for another 20-25 years before needing replacement.

The bridge fell down before the next routine inspection. Fortunately nobody was killed, but the collapse exposed that the Victorian engineers, as well as not documenting their design work, saw an opportunity in the fact that the inner beams aren't accessible. They'd used cheaper, thinner metal for those beams, and really the only problem with doing that is that the bridge will fall down sooner than otherwise expected...


This wasn't Let's Encrypt's choice. The Web is secure due to a series of rules agreed upon between the browsers and the CAs. If vulnerabilities are found, there are deadlines for fixing them; in fact, the deadline for revoking misissued certificates is 24 hours, and Let's Encrypt couldn't prove existing certificates weren't misissued, but they were able to get away without revocations, which is a huge benefit for their subscribers.

The point of these rules is to keep the web safe. The choice here is between inconveniencing Let's Encrypt users (forcing some of them to upgrade or switch validation methods) but keeping the web safe, or making the web unsafe, period, forever (because there is no way to force users of broken web servers and web hosts to upgrade to fix the approaches to certificate management that caused this problem). The only reasonable choice was the first.

I had to change all my servers from TLS-SNI-01 to another mechanism, and I absolutely do not blame Let's Encrypt for this. They did the right thing.


>Just imagine that tomorrow IPV4 would be shut down because we've all had enough time to switch by now.

Honestly? I would absolutely love to watch that shitshow.


The anticipated order of events goes something like this:

Firstly the islands of IPv6 grow until they begin to dwarf the supposed generally interoperable ocean of IPv4. Big home ISPs, major CDNs, bulk hosts, AWS, and so on.

Somewhere around this time you'd start to see events reported where "the Internet" was down for lots of people but it was the IPv4 Internet, which they are increasingly not using so they didn't actually notice. "Your Internet was down" "No it wasn't, I was on Facebook all afternoon" "Right yeah, but other than Facebook" "I watched a movie on Netflix" "OK, other than Facebook and Netflix" "I got a mail from Jeremy on GMail" "OK, other than Facebook and Netflix and GMail" "Not much of an Internet". Happy Eyeballs, the algorithm that allowed IPv6 to be deployed in dual stack environment successfully, now allows IPv4 to ramp down imperceptibly.

Now, with the "ocean" so small, increasingly medium-sized operators ignore it entirely, opting just to maintain translators at the edge of the IPv4 Internet. Maybe your ISP does this and you can't get "real" IPv4 addresses, although many of you already can't, so this wouldn't be a change.

The last major steps taken by "the Internet" look like this:

The tier one providers, who by that point are also more or less the global telecommunications companies, begin to deprecate IPv4 service, seeing it as a niche product that can better be serviced by specialists in your locale. Increasingly the only practical route from one IPv4 address to another is via two translators and IPv6.

The RIRs discontinue management of the namespace/numberspace for IPv4 and so the allocation of IPv4 addresses ceases to be globally co-ordinated. The IPv4 Internet no longer formally exists, just many islands of legacy IPv4 in an IPv6 ocean which happen to have mostly discontiguous addressing.


Can you please wait until I'm past my 'best before' date when you pull that particular plug?


Why should the author care? The point is the automated process isn't so automated.

Incidentally, my IMAPS certificate isn't renewing automatically any more. Can't bring myself to debug it. Which just proves TFA's point.


The author should care because he's writing an article about it, and taking the time to make sure you have the terminology correct is a good thing to do.

But your main point is good: why should the average end user care? I've had the same frustrations with certbot - it tries to be too smart and too magic, and I want to better understand what it's doing. Thankfully, the ansible letsencrypt module exists, and behaves in a much more understandable way to me, and that's why it's important to point out the distinction between letsencrypt and certbot - you don't have to use certbot, and everybody who uses letsencrypt should be aware of that. Because the author is correct, certbot is kind of a turd, but letsencrypt is awesome and it would suck for people to stop using letsencrypt just because they don't like certbot.


Ok what's ansible? :)

Some script bunch that adds layers upon layers of VMs?

My website is all static (and private use) so I don't really have a reason to run SSL on it. Make it hard, and I'll give up on the security theater.


Can we stop with this nonsense about static sites not needing HTTPS? It's not just there to protect secrecy. Integrity is vital, given how many parties are happy to inject content into any unencrypted HTTP connection these days. Browsers should be able to know that they're receiving the same bytes the server is sending.


I don't know, maybe the solution is to get a pro consumer FCC in the US :)


That would be great for a lot of reasons, but it's not a satisfying answer here.

For one, it'd only help web traffic that was entirely within the US, and only traversed networks within the scope of the FCC's enforcement operations. Do you really trust that coffee shop wifi? I don't, and I don't think it'd ever be big enough for the FCC to care.

Second, such an FCC could always revert back to current behavior. It's not any more reliable to depend on politics to fix a technical problem than it is to depend on technology to fix a political problem.


And sid (unstable) is the neighbor who breaks the toys, which is why the name never changes :)


Story I heard is that it started with "sid" being an acronym for "still in development", but once that caught on, it blazed the trail for the rest of the Toy Story characters...


I heard it was because when Bruce Perens was Debian Project Leader, he was working for Pixar around the time of the Toy Story release, and chose the naming scheme. The descriptive name for unstable has always been unstable, not "still in development".


Looking back, Sid was an artist being smothered by mundane suburbia, while Andy was a milquetoast who couldn't look after his stuff.



