No, it is not. But it might be someday. Will that be good enough? My dad would spend 100k on a car that was 10 times less safe, if it let him keep using a car. I'm sure others would as well.
The proper way to prove "it's 100 times more safe" isn't to let it cause some number of deaths and then go "welp, we tried our best but we were wrong, turns out it's less safe. Shucks". But that's exactly what Tesla, Uber, etc. all seem to be doing. "We'll compare our statistics once the death tallies are in and we'll see which is safer".
The most we have to go on for a rough approximation of safety is the nebulous and ill-defined "disengagements" in the public CA reports. From what I can tell, there's no strong algorithmic or safety analysis of these self-driving systems at all.
The climate around these things is sour because the self-driving car companies seem to want to spin the narrative and blame anybody but themselves for the deaths they cause, while just praying they'll cause fewer of them once this tech goes global.
For clarity on this point, "disengagement" has a specific meaning to the California DMV[0]:
> For the purposes of this section, “disengagement” means a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.
However, some self-driving car manufacturers have been testing the limits of these rules by choosing which disengagements to report[1]. Waymo reportedly "runs a simulation" to decide whether to include a disengagement in its report, but there's no mention of what that simulation is or how it might fail in the same ways the technology inside the car did! Thus, the numbers in the reports are likely deflated relative to the true count of disengagements.
And even this pathetic and toothless regulation was enough to drive Uber from California for a while.
With all the games they play with the numbers, Waymo still reports 63 safety-related disengagements for a mere 352,000 miles. This doesn't sound like an acceptable level of safety.
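To put those two figures in perspective, a back-of-envelope calculation (using only the numbers quoted above; the human-driver benchmark of roughly one police-reported crash per ~500,000 miles is an approximate figure, not from the reports themselves):

```python
# Figures quoted from Waymo's CA DMV report (see above)
disengagements = 63
miles = 352_000

miles_per_disengagement = miles / disengagements
print(round(miles_per_disengagement))  # ~5587 miles per safety-related disengagement

# Rough human benchmark (approximate, for scale only): one
# police-reported crash per ~500,000 miles driven.
human_miles_per_crash = 500_000
print(round(human_miles_per_crash / miles_per_disengagement))  # ~89x gap
```

Even granting that most disengagements would not have become crashes, the raw rates are separated by roughly two orders of magnitude, which is why the reported numbers don't obviously support a safety claim.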
The surprising part is that Waymo appears to be planning to start deploying their system in 2018. How can they even consider it with this many disengagements?
Inexperienced drivers cannot be avoided, but many states try to mitigate the risk by restricting teenage drivers.
But my point is that whether to allow a vehicle that is not safer than a reasonable human driver should not be left to the car owner alone - there are other stakeholders whose interests must be taken into account.
Is it? Can it be proved controlling for all variables?