"My architecture depends upon a single point of failure" is a great way to get laughed out of a design meeting. Outsourcing that single point of failure doesn't cure my design of that flaw, especially when that architecture's intended use-case is to provide redundancy and fault-tolerance.
The problem with pursuing efficiency as the primary value prop is that you will necessarily end up with a brittle result.
> "My architecture depends upon a single point of failure" is a great way to get laughed out of a design meeting.
This is a simplistic opinion. Claiming services like Cloudflare are modeled as single points of failure is like complaining that your use of electricity to power servers is a single point of failure. Cloudflare sells a global network of highly reliable edge servers running services like caching, firewall, image processing, etc. More importantly, it acts as a global firewall that protects services against distributed attacks. Until a couple of months ago, it was unthinkable to casual observers that Cloudflare could be such an utterly unreliable mess.
Your electricity to servers IS a single point of failure, if all you do is depend upon the power company to reliably feed power. There is a reason that co-location centers have UPS and generator backups for power.
It may have been unthinkable to some casual observers that creating a giant single point of failure for the internet was a bad idea, but it was entirely thinkable to others.
> Your electricity to servers IS a single point of failure, if all you do is depend upon the power company to reliably feed power.
I think you didn't quite get the point. The whole point is that designing a system architecture that treats Cloudflare as a single point of failure is like designing one that treats a power supplier as a single point of failure. Technically they can be considered that if you really want to, but when those failure modes trigger, things are irredeemably broken anyway, and those providers are, by design and by SLA, expected to be by far the most reliable components of your system. It is pointless to waste time and resources mitigating such a scenario.
You're arguing from an end-user perspective. I'm pointing out that the Internet wasn't designed to solve easy but fragile problems; it was intended to be a resilient network capable of surviving failures and routing around them.
"I want to use a power tool and simply plug it into a wall" is not the same class of problem as "we're using a heart-lung machine during this bypass operation and power loss results in dead patients."
The widespread dependence upon Cloudflare has resulted in the "heart-lung machine" problem of DNS, among other things, being "solved" by a "power tool" class of solution.
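The "route around failures" design the parent describes can be sketched concretely: a client that falls back across independent providers instead of depending on a single one. A minimal sketch in Python (the provider names and the simulated lookups are hypothetical, not any real API):

```python
# Minimal sketch of "routing around failure": try independent providers
# in order and fall back on error, rather than depending on one.
# Provider names and lookup behavior below are made up for illustration.

def resolve_with_fallback(providers, query):
    """Try each (name, lookup) pair in turn; return the first answer."""
    errors = []
    for name, lookup in providers:
        try:
            return name, lookup(query)
        except Exception as exc:  # a real client would narrow this
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Simulated providers: the primary is down, the secondary answers.
def primary(query):
    raise TimeoutError("edge network unreachable")

def secondary(query):
    return "192.0.2.10"  # address from the documentation range

name, answer = resolve_with_fallback(
    [("primary", primary), ("secondary", secondary)], "example.com"
)
print(name, answer)  # secondary 192.0.2.10
```

The point isn't the code itself but the shape: redundancy has to be designed in at the architecture level, with genuinely independent providers, or the fallback shares the same failure mode as the primary.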
No. I am arguing from a software engineer's perspective, tackling a systems design problem.
> I'm pointing out that the Internet wasn't designed to solve easy but fragile problems; it was intended to be a resilient network capable of surviving failures and routing around them.
Irrelevant. Engineers design systems that keep functioning in spite of their failure modes, but some failure modes are irredeemable. Even structural engineers don't design structures to withstand every conceivable earthquake, because they understand that mitigating those failure modes is unrealistic.
The same goes for software. You do not build your web apps to keep working when half of the internet dies, and that includes scenarios such as AWS, GCP, or Cloudflare being out.
You do know that data centers use backup generators because electricity is a single point of failure, right? They even have multiple power supplies plugged into different circuits.
> You do know that data centers use backup generators because electricity is a single point of failure, right?
How many times do you account for the existence of backup generators in a data center when you design, say, a SPA? How do you expect to be treated if you even suggest such a thing in a design requirements meeting? Do you understand what I am saying?