Many people believe most of us will die within the next 10 years, and a rational discussion of these subjects starts from the fact that for the last three generations we have faced numerous existential threats that, instead of being solved, have all had the can kicked down the road.
Eventually, these deferred problems converge in time, at a point where you simply do not have the resources to address them all; with today's risk factors, that convergence may cause societal failure.
Superintelligent AI alone probably is not a threat, because it is so highly (astronomically) unlikely. But socio-economic collapse leading to starvation is a very real possibility when you create something that destroys an individual's ability to form capital, or breaks other underlying structures that have underpinned societal organization for hundreds of years.
Now, these things won't happen overnight, but that's not the danger either. The danger is the hysteresis: by the time you can detect the failure and objectively show it is happening, it is already impossible to change the outcome. Your goose is cooked as a species, and the cycle of doom circles until no one is left.
Few realize that food production today is wholly dependent on Haber-Bosch nitrogen fixation. Without it, yields drop roughly fourfold, and following Catton, in a post-extraction phase the sustainable population may be a fraction of last century's (when the population was around 4 billion). People break quite easily under certain circumstances, so leaders following MAD doctrine may well actually use it once they realize everything is failing and see what lies ahead.
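To make the scale of that dependence concrete, here is a back-of-the-envelope sketch using only the figures above (the fourfold yield claim and a rough present-day population); the linear scaling of supportable population with food output is an illustrative assumption, not precise agronomy:

```python
# Rough arithmetic on the Haber-Bosch dependence claimed above.
# Assumptions (illustrative only):
#   - losing synthetic nitrogen cuts crop yields roughly 4x
#   - supportable population scales linearly with food output
current_population_bn = 8.0   # approximate present-day world population, billions
yield_factor = 4.0            # claimed yield reduction without Haber-Bosch

supported_bn = current_population_bn / yield_factor
print(f"Population supportable at 1/4 yield: ~{supported_bn:.1f} bn")
```

Under these crude assumptions the supportable population lands around 2 billion, well below even the 4 billion of last century, which is the point the paragraph is making.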
These are the things that naturally happen when the long-forgotten mechanics that underpin how everything works fall into ruin. The loss of a shared objective reality is a warning sign of such things on the horizon.