
It still astounds me that the big dogs do not phase config rollouts. Code is data, configs are data; they are one and the same. It was the same issue with the giant CrowdStrike outage last year: they were rawdogging configs globally, a bad config made it out there, and everything went kaboom.

You NEED to phase config rollouts like you phase code rollouts.
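A minimal sketch of what that looks like, where deploy_config and health_ok are stand-ins for whatever your fleet tooling provides:

    import time

    STAGES = [0.01, 0.10, 0.50, 1.00]   # fraction of the fleet per phase
    BAKE_SECONDS = 600                  # let each phase soak before widening

    def rollout(config, fleet, deploy_config, health_ok):
        done = 0
        for stage in STAGES:
            target = int(len(fleet) * stage)
            for host in fleet[done:target]:
                deploy_config(host, config)
            done = target
            time.sleep(BAKE_SECONDS)
            # any unhealthy host in the cohort so far halts the rollout
            if not all(health_ok(host) for host in fleet[:done]):
                raise RuntimeError(f"halting rollout at {stage:.0%}: health check failed")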



The big dogs absolutely do phase config rollouts as a general rule.

There are still two weaknesses:

1) Some configs are inherently global and cannot be phased; there's only one place to set them. E.g. if you run a webapp, this would be the load balancer's config as opposed to each webserver's config.

2) Some configs have a cascading effect: even though a config is applied to 1% of servers, it affects the other servers they interact with, and a bad change spreads across the entire network (a toy sketch of this follows below).
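The second weakness is the nastier one. A toy illustration, assuming servers consume some shared state (a gossip mesh, a database, a distributed feature file):

    shared_state = {}                    # stands in for gossip / DB / a distributed feature file

    def canary_applies_bad_config():
        # the 1% canary publishes a malformed entry into the shared state
        shared_state["routing_table"] = None

    def untouched_server_handles_request():
        # a server the rollout never touched still consumes that entry
        return shared_state["routing_table"]["example.com"]   # TypeError: the cascade

    canary_applies_bad_config()
    untouched_server_handles_request()   # 1% rollout, 100% outage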


> Some configs are inherently global and cannot be phased

This is also why "it is always DNS". It's not that DNS itself is particularly unreliable, but rather that it is the one area where you can really screw up a whole system by running a single command, even if everything else is insanely redundant.


I don't believe there is anything that necessarily requires DNS configs to be global.

You can shard your service behind multiple names:

my-service-1.example.com

my-service-2.example.com

my-service-3.example.com …

Then you can create smoke tests that hit each phase of the DNS, and if you start getting errors you stop the rollout of the service.
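A sketch of that gate, using the shard names above (the /health path and the timeout are assumptions):

    import urllib.request

    SHARDS = [
        "https://my-service-1.example.com",
        "https://my-service-2.example.com",
        "https://my-service-3.example.com",
    ]

    def smoke_test(base_url):
        try:
            with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    for shard in SHARDS:                 # one DNS phase at a time, in order
        if not smoke_test(shard):
            print(f"errors from {shard}, stopping rollout")
            break
        print(f"{shard} healthy, continuing")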


Sure, but that doesn't really help for user-facing services where people expect to either type a domain name in their browser or click on a search result, and end up on your website every time.

And the access controls of DNS services are often (but not always) not fine-grained enough to actually prevent someone from ignoring the procedure and changing every single subdomain at once.


> Sure, but that doesn't really help for user-facing services where people expect to either type a domain name in their browser or click on a search result, and end up on your website every time.

It does help. For example, at my company we have two public endpoints:

company-staging.com

company.com

We roll out changes to company-staging.com first and have smoke tests which hit that endpoint. If the smoke tests fail, we stop the rollout to company.com.

Users hit company.com
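The gate itself can be a one-screen script; run_smoke_tests.sh and deploy.sh here are hypothetical stand-ins for whatever runner and deploy tool you use:

    import subprocess, sys

    # hypothetical commands; substitute your own test runner and deploy tool
    smoke = subprocess.run(["./run_smoke_tests.sh", "https://company-staging.com"])
    if smoke.returncode != 0:
        sys.exit("smoke tests failed against company-staging.com; not touching company.com")
    subprocess.run(["./deploy.sh", "https://company.com"], check=True)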


That doesn't help with rolling out updates to the DNS for company.com, which is the point here. It's always DNS because your pre-production smoke tests can't test your production DNS configuration.


If I'm understanding it right, the idea is that the DNS configuration for company-staging.com is identical to that for company.com: same IPs and servers, DNS provider, domain registrar. Literally the only difference is s/company/company-staging/; all accesses should hit the same server with the same request other than the Host header.

Then you can update the DNS configuration for company-staging.com, and if that doesn't break there's very little scope for the update to company.com to go differently.
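You can even check that equivalence mechanically. A minimal sketch that compares what the two names resolve to (this only sees resolved addresses, not TTLs or CNAME chains, and the hostnames stand in for your own):

    import socket

    def addrs(name):
        # the set of addresses a resolver currently returns for the name
        return {info[4][0] for info in socket.getaddrinfo(name, 443, proto=socket.IPPROTO_TCP)}

    staging, prod = addrs("company-staging.com"), addrs("company.com")
    assert staging == prod, f"zones have drifted: {staging} vs {prod}"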


The purpose of a staged rollout is to test things with some percentage of actual real-world production traffic, after having already thoroughly tested things in a private staging environment. Your staging URL doesn't have that. Unless the public happens to know about it.

The scope for it to go wrong is the difference between the real world and the simulation.

It's a good thing to have, but not a replacement for the concept of staged rollout.
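For context, the usual way to expose a fixed percentage of real production traffic is deterministic bucketing at the edge; a sketch, where the 1% threshold and the hashing scheme are arbitrary choices:

    import hashlib

    NEW_CONFIG_PCT = 1   # expose 1% of real users to the new config first

    def uses_new_config(user_id: str) -> bool:
        # stable hash keeps the same user in the same bucket across requests
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < NEW_CONFIG_PCT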


But users are going to example.com. Not my-service-33.example.com.

So if you've got some configuration that has a problem that only appears at the root-level domain, no amount of subdomain testing is going to catch it.


I think it's uncharitable to jump to the conclusion that just because there was a config-based outage they don't do phased config rollouts. And even more uncharitable to compare them to CrowdStrike.


I have read several Cloudflare postmortems and my confidence in their systems is pretty low. They used to run their entire control plane out of a single datacenter, which is amateur hour for a tech company with over $60 billion in market cap.

I also don't understand how it is uncharitable to compare them to CrowdStrike: both companies run critical systems that affect a large number of people's lives, and both seem to have outages at a similar rate (if anything, Cloudflare breaks more often than CrowdStrike).


https://blog.cloudflare.com/18-november-2025-outage/

> The larger-than-expected feature file was then propagated to all the machines that make up our network

> As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.

I was right: global config rollout with bad data, basically the same failure mode as CrowdStrike.
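The fix the postmortem points toward is validating generated artifacts before they fan out. A sketch of such a guard; the cap, generate_features(), and propagate() are stand-ins, not Cloudflare's actual pipeline:

    MAX_FEATURES = 200   # hypothetical cap: whatever the consumers preallocate for

    def generate_features():
        return ["feature"] * 150         # stand-in for the upstream query output

    def propagate(features):
        print(f"shipping {len(features)} features to the fleet")

    def validate_feature_file(features):
        # fail closed at generation time instead of crashing every machine
        if len(features) > MAX_FEATURES:
            raise ValueError(f"{len(features)} entries exceeds cap of {MAX_FEATURES}; not propagating")
        return features

    propagate(validate_feature_file(generate_features()))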


It seems fairly logical to me? If a config change causes services to crash, then the rollout stops … at least in every phased rollout system I've ever built…


At a company I am no longer with, I argued much the same when we rolled out "global CI/CD" on IaC. You made one change, committed and pushed, and wham, it's on 40+ server clusters globally. I hated it. The principal was enamored with it, "cattle not pets" and all that, but the result was that things slowed down considerably because anyone working with it became so terrified of making big changes.


Then you get customer-visible delays.


Because adversaries adapt quickly, they have a system that deploys their counter-adversary bits quickly, without phasing, no matter whether they call them code or configs. See also: CrowdStrike.


You can't protect against _latent bugs_ with phased rollouts.



