This issue goes beyond CrowdStrike and points to a general approach to security: buying products off the shelf to satisfy regulators and insurers without actually caring what they do or how they work.
I'm not saying tech shouldn't be regulated, but our current model of "buy this thing to shed liability" doesn't work. The worst part is, the people who saw this coming (i.e. your IT department) probably can't do a damn thing about it because it's mandated at high levels in the company either for "cyber insurance" requirements or some other regulation. Madness.
> The worst part is, the people who saw this coming (i.e. your IT department) probably can't do a damn thing about it because it's mandated at high levels in the company either for "cyber insurance" requirements or some other regulation.
I've worked with many excellent IT people who feel this way, but the vast majority of my experience with IT departments has been that as long as the contract covers what it needs to, they don't actually care whether it solves the problem or not. At a previous job, software similar to CrowdStrike was installed on my workstation over a weekend, and I came back to 20% slower compile times (I was working on compile times at the time, so I had dozens of measurements). I had ETL traces showing the software was the problem, but IT refused to acknowledge it because the vendor contract said no performance impact was expected for our workload.
That is my experience, too. I attribute it to IT / sysadmin jobs having a lower barrier to entry and becoming a "watered down" business unit that just follows orders, without much say in, or care for, anything.
Most IT departments wouldn’t have seen this coming, and certainly would’ve been right to not base their entire security strategy around it. I’m not sure where this narrative is coming from. Falcon delivered and still delivers real, genuine security benefit to its customers. That does not mean that it eliminates all risk, and does not mean that it doesn’t introduce risk of its own.
It’s literally a game of tradeoffs, like all engineering problems. This shouldn’t be that foreign to anyone here. Suddenly HN is full of security experts, fuelled by 20/20 hindsight and recency bias, explaining how companies could’ve dodged this bullet without considering what very real bullets were being dodged by using Falcon in the first place.
> Suddenly HN is full of security experts that are fuelled with 20/20 hindsight and recency bias
That is incorrect. Many in tech saw these blanket IT policies being implemented, didn't like the prospects, but couldn't change anything. At my workplace, policies like 90-day password rotations (which NIST recommends against), resource-heavy machine scans, and nonsensical firewall rules are all a result of the company buying "cyber insurance".
> It’s literally a game of tradeoffs like all engineering problems
Adding a single point of failure to all of your systems is a pretty big tradeoff to make for questionable gains.
> Falcon delivered and still delivers real, genuine security benefit
Rhetorical question, but I'll ask it anyway: why did some of the machines affected in the CrowdStrike outage even need EDR software installed in the first place? Examples include flight status displays, critical 911 and healthcare machines, warehouse cranes, etc.: things that don't immediately pass the smell test for needing an internet connection.
To your final question: those machines were likely to have a connection to the internet at some point, directly or indirectly through something else, which may have left them vulnerable.
It speaks to more than just EDR solutions; it speaks to appropriate segmentation of critical endpoints on the network. Flight status displays may well have had an internet connection.
To your middle point, I don't think people understood how, or whether, CrowdStrike could become a single point of failure on their systems. We now know it was a single point of failure that caused systems to shut down completely, but until that point I don't think that potential was well understood or seriously considered.