You’ve hit on a giant thing that bothers me about this discourse: endless rationalistic speculation about systems and phenomena that we have no experience with at all, not even analogous experience.
This is not like the atomic bomb. We had tons of experience with big bombs. We just knew that atom bombs, if they worked, could make orders of magnitude larger booms. The implications of really big bombs could be reasoned about with some basis in reality.
It wasn’t reasoning about wholly unknown types of things that no human being has ever encountered or interacted with.
This is like a panel on protocols for extraterrestrial contact. It’d be fine to do that kind of exercise academically, but these people are talking about passing actual laws and regulations on the basis of reasoning in a vacuum.
We are going to end up with laws and regulations that are simultaneously too restrictive of human endeavor and ineffective at preventing negative outcomes if this stuff ever manifests for real.
> This is not like the atomic bomb. We had tons of experience with big bombs. We just knew that atom bombs, if they worked, could make orders of magnitude larger booms. The implications of really big bombs could be reasoned about with some basis in reality.
Well, we thought we did.
We really didn't fully appreciate the impact of the fallout until we saw it; Castle Bravo was much bigger than expected because we didn't know what we were doing; and then there's the demon core, and the Cold War arms race…
But yeah, my mental framing for this is a rerun of the first stage of the industrial revolution. It took quite a lot of harm before we arrived at what is now basic workplace health and safety, such as "don't use children to remove things from heavy machinery while it's running", and we're likely to see something equally dumb happen even in the relatively good possible futures that don't have paperclip maximisers or malicious humans using AI for evil.
There are so many differences between this and organisms shaped by evolution, embedded in a food web with one another. This is much closer to space aliens or beings from another dimension.
If there are huge risks here, they are probably not the ones we are worried about.
Personally, one of my biggest worries with both sentient AI and aliens is how humans might react, and what we might do to each other or ourselves out of fear and paranoia.
It seems fine to me. When there is evidence for a certain type of current or future harm, they present it, and when there is not, they express uncertainty.
Can AI enable phishing? "Research has found that between January to February 2023, there was a 135% increase in ‘novel social engineering attacks’ in a sample of email accounts (343*), which is thought to correspond to the widespread adoption of ChatGPT."
Can AIs make bioweapons? "General-purpose AI systems for biological uses do not present a clear current threat, and future threats are hard to assess and rule out."
From looking at the summary, I think it's a bit more measured than this statement implies. They talk about concrete risks of spam, scams, and deepfakes. They then go into possible future harms, but couched in language like "experts are uncertain if this is possible or likely", etc.
This makes more sense, thank you. I hadn't picked up on the distinction, but I agree that's more reasonable.
I still think we don't really know; it's a developing technology, and it's changing so fast that it's probably too early for experts on its practical applications to exist, let alone to claim they know the impact it will have.
Which is one of those cases where I briefly want to reject linguistic descriptivism, because to me the "G" in "AGI" is precisely "general".
But then I laugh at myself, because words shift and you have to roll with the changes.
But do be aware that this shift of meanings is not universally acknowledged, let alone accepted: there are at least half a dozen different meanings of the term "AGI", at least one of which requires "consciousness", and there are loads of different meanings of that too.
I think it's an important topic to discuss and consider, but this seems to be speaking with more knowledge and authority than is warranted.