Thanks for sharing your thoughts. I have a somewhat more optimistic take:
We will create some form of artificial intelligence (AI) in the next decade. However, the recent (i.e., over the last five years) advances in natural language processing are not the ones that automatically lead to that. I think we need more fundamental breakthroughs than attention-based neural networks and all the tricks that have gone into, say, a Llama 3. [1]
Even if we did develop AI more aggressively, say by 2030, I am not sure that would automatically lead to extinction. Here, my optimism is guided by the last few major technological innovations (e.g., the steam engine, the rise of semiconductor-based circuits).
To be sure, these technologies were disruptive: probably net-negative in terms of pure climate impact, a cause of near-term job losses, and generally a boost to the power of people at the top of the capitalist food chain. However, they also helped accelerate us into a world where we can produce more food in a given year than we need to feed everybody on the planet, and they helped us see more of the world by connecting it both physically and virtually.
I am not a blind optimist, though. I think we need regulation for harm prevention. The problem with that approach is that we first need to understand what harms are being caused, and in some instances that understanding might come too late. But I don't think we should start by taking a blanket position (as a society/polity) on how technological innovation should occur -- because a) we are not there yet, and b) that's just a poor way to do science.