This isn't a fair or useful comparison. The USSR was a command-and-control political and economic system, which creates a kind of capture in decision making that is meaningfully different from what we encounter in the U.S. To put it simply: they got Chernobyl, and the U.S. got Three Mile Island, which is a very different kind of nuclear (and governmental) failure.
This is an interesting example but I have to wonder if it may inadvertently highlight a disconnect between the parties involved.
I'm no expert in this area, but many stories have surfaced over the years about high levels of stress and (in extreme cases) meaningful declines in the mental health of social media content moderators.
I recently encountered a Twitter thread from an early LiveJournal employee (1), @rahaeli, "about Trust & Safety work, the toll it takes on you, the things you see, and the human misery, suffering, and death that happens when you fuck it up, including murder and child sex abuse." I'm inclined to believe that, while relatively simple and unglamorous, that work is something managers and end-users likely would have found important.
Hmmm... It's hard to gauge without more detailed information, but this kind of problem may be caused by the payment processor (I used to work at Google's processor, but only had limited interactions with the Google account). The fraud filters at the processing end aren't the most sophisticated and can/will flag a lot of transactions as suspicious. It's hard to get a meaningful manual review because big G's payment volume is enormous, and they (the processor) don't have a dedicated team to handle requests for Google, so individuals with small accounts don't get addressed unless someone from Google makes it a priority.
I vehemently disagree with the big G edicts here, but putting forth the argument that automatic updates to a well-known piece of user software are THE problem, in an era where automatic updates have reduced the attack surface of a wide variety of applications, is... confusing.
As 'anti-choice' as it may appear (AND IS), automatic updates for net-connected software are how we have improved the general baseline level of security for the net.
Unfortunately, that has been completely dependent on providers of widely used software prioritizing security and brand image over self-serving interests. Nothing has ever prevented these companies from doing what could be considered the "wrong thing" instead of the right thing, and, in general, blocking updates would be the wrong thing.
With that said (written)... a project like this, with a (mostly) open code base, should have a veto mechanism to push the developers toward a different solution when something like this comes up.
If you're interested, Adam Savage was allowed to use a 'Spot' robot for several months and shares observations and information on actual usage on his YouTube channel.
Reading between the lines, as of Q4 last year they're still _extremely_ early in any meaningful deployment. Lease-based alpha access for select customers, "hey, we don't have to send a 12-man team out every month anymore", and carefully PR'd information is certainly some kind of progress, but it's not anything like normal use.
Definitely not normal use, probably something akin to controlled alpha or beta testing. The only recent coverage I saw about the fall 2019 trials/leases was one article that didn't provide any meaningful details on usage.
> This work was supported in part by the NSF (CCF-0954024, CCF-1116289, CDI-1124931, EF-1124931); Air Force (FA8750-15-2-0075); Virginia Commonwealth Fellowship; Jefferson Scholars Foundation; the Virginia CIT CRCF program under grant no. MF14S-021-IT; by C-FAR, one of the six SRC STARnet Centers, sponsored by MARCO and DARPA; a grant from Micron Technology.
According to a 2011 CNET article (1), it was months away from release, but "Courier was cancelled because the product didn't clearly align with the company's Windows and Office franchises".