A lot of human-driven car accident victims have done nothing wrong at all.
Almost every driver thinks they're better than average.
Even when it's a stupid person dying from their stupidity, it's still a tragedy.
I really think data-driven analysis is the way to go. If we can get US car fatalities from 30,000 a year to 29,000 a year by adopting self-driving cars, that's 1,000 lives saved a year.
Agree with your point #3. If Tesla Autopilot only works in some conditions, its numbers are only comparable to human drivers' numbers under those same conditions.
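To make that condition-matching point concrete, here's a toy sketch (all numbers invented): if Autopilot logs most of its miles on clear highways, its fatality rate should be compared with the human rate under those same conditions, not with the all-conditions human rate.

```python
# Toy illustration of condition-matched comparison (all figures invented).
miles = {"highway_clear": 4_000_000_000, "city_or_bad_weather": 1_000_000_000}
human_fatalities = {"highway_clear": 20, "city_or_bad_weather": 40}

# Human fatality rate per 100 million miles:
overall = sum(human_fatalities.values()) * 1e8 / sum(miles.values())
matched = human_fatalities["highway_clear"] * 1e8 / miles["highway_clear"]

print(overall)  # prints 1.2
print(matched)  # prints 0.5
```

A system that merely beats the 1.2 all-conditions rate could still be worse than the 0.5 humans achieve in the same easy conditions.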
>I really think data-driven analysis is the way to go. If we can get US car fatalities from 30,000 a year to 29,000 a year by adopting self-driving cars, that's 1,000 lives saved a year.
What this ignores is that self-driving cars will massively reduce the number of 'stupid' drivers dying (the ones texting and driving, drinking and driving, or simply driving badly), but may cause the deaths of more 'good' drivers and innocent pedestrians.
So the total number could go down, but the people who are dying instead didn't necessarily do anything deserving of an accident or death.
I say this as someone who believes self-driving cars will eventually take over, and that we'll need to pass laws allowing a certain percentage of deaths (so that one case of the software being at fault doesn't cause a company to go under). Still, undeserved deaths are something people will likely have to deal with somewhere down the line with self-driving cars. At the very least, since they're run by software, they should never make the same mistake twice, while with humans you see the same deadly mistakes being made every day.
OK, but if saving 1000 lives a year required, as a side effect, that you personally be among the fatalities, would that be OK with you? I hope not. Think of this as a technical corner case: the question is the soundness of the analysis (for example, the distribution of deaths and what that means for safety), and not letting facile logic get in the way of that work.
Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.
I think that’s a win, even if I now have the same statistical chance as anyone of being among the 1001, and no chance of being among the 2001.
Requiring that I be in the 1001 is not OK, any more than requiring that I donate all my organs tomorrow would be. Allowing that I might be in the 1001 is OK, just as registering for organ donation is.
>> Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.
You're saying that auto-driving would save the lives of 1000 people who would have died without it, by causing the death of another 1001 that wouldn't have died if it wasn't for auto-driving?
So you're basically exchanging the lives of the 1001 for the 1000? That looks like a lot less of a no-brainer than your comment makes it sound.
Not to mention, the 1001 people who wouldn't have died if it wasn't for auto-driving would most probably prefer to not have to die. How is it that their opinion doesn't matter?
It saves 2001 (not 1000 as you said); or, put differently, I'm exchanging the lives of the 1001 to preserve the lives of the 2001.
It kills 1001.
Net lives saved = 1000.
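The arithmetic above, spelled out (the figures are the thread's hypotheticals, not real data):

```python
# Hypothetical figures from the comment above, not real data.
saved_by_autodriving = 2001   # would have died in an all-human-driving world
killed_by_autodriving = 1001  # would have lived in an all-human-driving world

net_lives_saved = saved_by_autodriving - killed_by_autodriving
print(net_lives_saved)  # prints 1000
```

The two counts are independent populations; only their difference is "net".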
> How is it that their opinion doesn't matter?
The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?
It's a trolley problem[1]. Individual people have been killed by seatbelts, yet you probably think it's OK that we have seatbelts because many more people have been saved and/or had their injuries reduced. Individual people have been killed by airbags, yet you probably think it's OK that we have them. Many people have died of obesity-related causes because cars shifted walkers and bikers into seats, yet you probably think it's OK that we have cars.
Right. And lives lost = 1001. So, we've killed 1001 people to let another 1000 live. We exchanged their lives.
>> The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?
Of course it matters, but they were dying already, until we intervened and killed another 1001 people with our self-driving technology.
Besides, some of the people who would be dying without self-driving technology had control of their destiny, much unlike in the (btw, very theoretical) trolley problem. Some of them probably made mistakes that cost them their lives. Some were obviously the victims of others' mistakes. But the people killed because of self-driving cars were all victims of the self-driving cars' mistakes (they were never the driver).
>> Individual people have been killed by airbags, yet you probably think it's OK that we have them.
An airbag or a seatbelt can't drive out onto the road and run someone over. The class of accident that airbags cause is the same kind you get when you fall off a ladder, etc. But the kind of accident that auto-cars cause is one where some intelligent agent takes an action and that action causes someone else harm. An airbag is not an intelligent agent, and neither is a seatbelt, but an AI car is.
Let's change tack slightly. Say that we had a vaccine for a deadly disease and 1 million people were vaccinated with it. And let's say that out of that 1 million people, 1000 died as a side effect of the vaccine, while 2001 people avoided certain death (and let's say that we are in a position to know that with absolute certainty).
Do you think such a vaccine would be considered successful?
I guess I should clarify that when I say "considered successful" I mean: a) by the general population and b) by the medical profession.
That's not really a very good argument. If you change parameters in a complex system then the odds are that you are going to find pathologies in new places.
People claim seatbelts have caused lots of deaths, and I'm sure at least a percentage of these claims are fair ([0]). I still think it's safer to drive a car with a seatbelt rather than without.
The downside of a laissez-faire policy towards self-driving cars could fall on anyone, but so does the upside. Again, a lot of human-driven car crash victims have done nothing wrong and were victimized pretty much at random: run over, rear-ended, T-boned at an intersection, and could not reasonably have done anything to prevent it.
>> Again, a lot of human-driven car crash victims have done nothing wrong and were victimized pretty much at random.
But all self-driving car victims (will) have done nothing wrong. Whether they were riding in the car that killed them or not, they were not in control of it, so they're not responsible for the decisions that led to their deaths.
Unless the decision to go for a walk or a bike ride, or to ride in a car, makes you responsible for dying in a car accident?