
Amen. Uber's fatal accident rate is now 50x that of sober human drivers. At their current rate of driving, they can't have a _lower_ rate until 2028! Inexcusable failure of their technology to prevent a human death in the most basic collision avoidance scenario.


Before the accident they were infinitely better than a sober human driver.

Understand statistics before employing them please. We don't have enough data and a single data point doesn't change that.


It is not 1 data point though. Say you shot at a target 100 times and hit only 1 time; is this only 1 data point, from which I can't draw any conclusion?

Similarly, if you drive 1 million km and kill 1 person, and I drive 10 km and kill 1 person, is that still 1 data point from which I can draw no conclusion? I think it would have been 1 data point if this were the first km an Uber self-driving car had driven.


>> Similarly, if you drive 1 million km and kill 1 person, and I drive 10 km and kill 1 person, is that still 1 data point from which I can draw no conclusion?

Yep. Because I still have another 9 m km to go before I've driven as long as you have and there is no way to know whether I'm going to kill another 9 people, or 0 more, until I've actually driven them all.


You are wrong: there is a conclusion we can draw. The conclusion is not absolute but fuzzy, so maybe fuzzy logic is not your thing.

Also, you have a mistake in your comment: I would still have to do 999,990 km of driving. If I killed a person in my first 10 km, what is the probability that I won't kill anyone in my next 999,990 km?

Your point is that I can't be 100% sure, and that is true, but we can compute the probability, and the probability that I merely had bad luck is very small. If the probability of killing 1 person in 1 million km is 1, i.e. 100%, what is the probability of killing this person in my first 10 km? (You are correct that it is not 0.)
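
For concreteness, that arithmetic in R (a toy version, assuming the single death is equally likely to land in any km of the million):

  # If exactly one death occurs somewhere in 1,000,000 km, and every km
  # is equally risky (an assumption), the chance it lands in the very
  # first 10 km is
  10 / 1e6
  # [1] 1e-05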


I misread the numbers in the original post. But what you say in your comment- well, that's not how it works.

To assess the risk posed by a driver, you wouldn't follow them around, constantly logging the miles they drive, counting the deaths they cause and continuously updating the probability they will kill someone, at least not in the real world (in a simulation, maybe). Instead, what you'd do is wait for enough people to have driven some reasonably significant (and completely arbitrarily chosen) distance, then count the fatal accidents per person per that distance and thereby calculate the probability of causing an accident per person per that distance. That's a far more convenient way to gather statistics, not least because if you take 1000 people who have caused an accident while driving, they'll each have driven a different distance before the accident.

So you might come up with a figure that says "Americans kill 1.18 people every 100 million miles driven" (it's something like that, actually, if memory serves).

Given that sort of metric, you can't then use it for comparison with the performance of someone who has only driven, say, 1000 miles. Because if you did, you would be comparing apples and oranges: 1 accident per 1000 miles is not on the same scale as ~1 accident per 100 million miles. There's still another 99,999,000 miles to go before you're in the same ballpark.

And on that scale, no, you can't know whether an accident in the first 1000 miles will be followed by another in the next 1000 miles. Your expectation is set for 100 million miles.

It's a question of granularity of the metric.


Do you have any math to back up the claim that what I said is wrong? I can try to explain my point better, but I see you are ignoring the math, so maybe I should not waste my time. (We can reduce the problem to balls in a jar and make things easy.)

But think about this: if I killed a person in my first 10 km of driving, what is the chance that I will kill 0 in the next 999,990 km? Would you bet that I will kill 0, or 1, or more than 10?


I think what you mean by "maths" is "formulae" or "equations". Before you get to the formulae, you have to figure out what you're trying to do. If the formulae are irrelevant to the problem at hand you will not find a solution.

As to your question- I wouldn't bet at all. There is no way to know.

Here's a problem from me: I give you the number 345.

What are the chances that the next number I give you is going to be within 1000 numbers of 345?


Your problem is not equivalent to what we were discussing; you need to change it a bit, like this:

I draw random numbers from 0 to Max and I get 345; what is the probability P that the next number is within 100 of 345?

P = 200/Max, on the assumption that Max > 445.
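
A quick simulation sanity check of that formula in R (Max = 1000 is an assumed value, just for illustration):

  # Draw uniformly from 0..Max and count how often we land within 100 of 345
  Max <- 1000
  draws <- sample(0:Max, 1e5, replace = TRUE)
  mean(abs(draws - 345) <= 100)   # ~0.2, i.e. about 200/Max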

For self-driving cars, the probability that a car kills a person per 1 km of road driven is unknown, so you can call it X.

Then my self-driving car killed a person in the first 10 km. What is the probability that a single random event lands in the first 10 km out of 10^9 km? It is 10/10^9 = 10^(-8).

Say the self-driving car is expected to kill N people per 10^9 km, and these are random, independent events. Then the probability that a kill happens in the first 10 km is N*10^(-8).

I hope you notice my point: we can measure something; we do not need to wait for 10 or 100 people to be killed.

We are not sure, but we can say there is a very small chance that I will not kill another person in my next 999,990 km.
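
To make that concrete, here is a sketch in R comparing how likely the observation (a death within the first 10 km) is under two hypothetical per-km rates; both rate values are invented for illustration, and treating deaths as a Poisson process is itself an assumption:

  # P(at least one death within the first 10 km) for a given per-km rate
  p_death_by_10km <- function(rate_per_km) 1 - exp(-rate_per_km * 10)
  p_death_by_10km(1e-6)   # human-like rate: ~1e-05
  p_death_by_10km(1e-3)   # much worse driver: ~0.00995
  # The observation is ~1000x more likely under the worse rate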

Let me know if my logic is not correct; in statistics it is easy to make mistakes.


> Understand statistics before employing them please

Do you understand statistics?

https://news.ycombinator.com/item?id=16685929


I wonder if these types of events are modeled by Poisson processes and measured by an MTBF-style metric (mean time between accidents?).


reminds me of this xkcd https://xkcd.com/605/


A fatal accident at 50x the rate of a sober human driver with a study size where N = 1.


I'll have a go at seeing what we can conclude from the data. Others, check my thinking please. Now we have 1 death in 3m miles for Uber, versus 1.18 deaths in 100m miles for sober drivers.

The expected rate per 100m miles for Uber is therefore 100/3 = 33.333...

But how confident can we be? To answer that, let's compute a Poisson confidence interval around that rate, as in https://stats.stackexchange.com/questions/10926/how-to-calcu....

Let's see what a 95% confidence interval for 1 death in 3m miles looks like:

  > poisson.test(1,conf.level = 0.95)$conf.int
  [1] 0.02531781 5.57164339
  attr(,"conf.level")
  [1] 0.95
Multiply that by 33.333 to convert to deaths per 100m miles:

  > 33.333333*0.02531781
  [1] 0.843927
  > 33.333333*5.57164339
  [1] 185.7214
  
So 95% confidence that the rate per 100m miles is from 0.84 to 185.72. That's pretty wide! And since the interval contains 1.18, the difference is not significant at the .05 level (if we must make that particular comparison). However, let's look at a 90% CI:

  > poisson.test(1,conf.level = 0.9)$conf.int
  [1] 0.05129329 4.74386452
  attr(,"conf.level")
  [1] 0.9
  
Which gives a CI of 1.71 to 158.13. So with 90% confidence we can say Uber is less safe than sober drivers. Ok.

Now let's look at 93% CI:

  > poisson.test(1,conf.level = 0.93)$conf.int
  [1] 0.03562718 5.17251332
  attr(,"conf.level")
  [1] 0.93
  
That gives a CI of 1.188 to 172.417, with the lower bound just a bit worse than the sober-driver rate.

So we can conclude with 93% certainty from this data that Uber is less safe than sober drivers. Probably a LOT less safe. Although the CI is really wide, this is shocking data for Uber, in my opinion.


> 1.18 deaths in 100m miles for sober drivers

> Uber is less safe than sober drivers

But the 1.18 deaths in 100m miles is for all drivers, not just the subset of sober drivers. Not quite sure why you are claiming it is only sober drivers.


Erm... I don't think statistics work like this. You can't go and pick a confidence level that "confirms" your desired outcome.

People with more knowledge about statistics than me might be able to explain why.


Statistics works exactly like this. What doesn't work is saying "Okay, we have one death in 3 million miles, that extrapolates to 33 deaths in 100 million miles", because it implies a silent addition of "with nearly 100% certainty", which is the part that's wrong here.

But the poster did something different. He took it one level further and attempted to calculate this confidence number for different spans into which the actual "deaths per 100 million miles" number of Uber's current cars would fall, given an ideal world (from a data perspective) in which they would have driven an infinite number of miles. But he actually did it the other way round - he varied the confidence level and calculated the spans, and then adjusted the confidence until he arrived at a span that would put Uber's cars just on par with human driving in the best case.

The fact that a fatal incident happened that early (at 3 million miles, nowhere near the ~86 million that a statistical human drives on average before a fatal incident occurs) does not allow us to extrapolate a sound number per 100 million miles. But it does tell us something about the probability with which the actual fatalities-per-100-million-miles figure - the one we'd get if Uber continued testing just as it did and racked up enough miles (and deaths) for a statistically sound calculation - would fall into different margins. Sure, Uber could have been just very, very unlucky - but that's pretty unlikely, and the unlikeliness of Uber's bad luck (and conversely the likelihood that Uber's tech is just systematically deadly) is precisely what can be calculated from this single incident.


The statement "with 95% confidence" is a classic misinterpretation of what a CI is. The assumption of a Poisson process is dubious, but there's no obvious plausible alternative. Overall it seems reasonable.


Hello! I'd be interested to hear what you think the correct interpretation of these CIs is in this case. Failing that, can you explain what is wrong with saying something like "with xx% confidence we can conclude that the rate is within these bounds"?

The Poisson assumption seems pretty solid to me, given we are talking about x events in some continuum (miles traveled, in this case), but I'm always happy to hear any cogent objections.


The Poisson distribution assumes equal probability of events occurring. That seems to me to be an oversimplification, given that AV performance varies over time as changes are made, and also given that terrain / environment plays a huge factor here, whether looking at one particular vehicle or comparing to vehicles across companies (and drivers in general). Since AV performance will hopefully be improved when an accident occurs, we also cannot meet the assumption of independence between events. Although if AVs are simply temporarily stopped after an accident, that also breaks the independence assumption as we'd have a time period of zero accidents.

The bigger problem, though, is what you are doing with your confidence interval. A CI is a statement about replication. A 95% confidence level means that across 100 replications of the experiment using similar data, about 5 of the generated CIs -- which will all have different endpoints -- will _not_ contain the population parameter (although IIRC the math is more complicated in practice, meaning the error rate is actually higher). As such, if you generate a CI and multiply the endpoints by some constant, that's a complete violation of what is being expressed: there is vastly more data in 100m driving miles than in 3m miles, which will cause the CI to shrink and the estimate of the parameter to become more accurate. There is absolutely no basis for multiplying the endpoints of a CI!
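
As an illustration of that replication property, here is a small R simulation (the true rate of 2 is arbitrary):

  # Repeatedly draw Poisson data with a known true rate and check how
  # often the exact 95% CI from poisson.test() covers that rate
  set.seed(42)
  true_rate <- 2
  covered <- replicate(10000, {
    x <- rpois(1, true_rate)
    ci <- poisson.test(x, conf.level = 0.95)$conf.int
    ci[1] <= true_rate && true_rate <= ci[2]
  })
  mean(covered)   # >= 0.95, since exact Poisson intervals are conservative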

Ultimately, given that the size of the sample has an effect on CI width, you need to conduct an appropriate statistical test to compare the estimated parameters between the 1 in 3m deaths for Uber and whatever data generated the 1.18 in 100m deaths for sober drivers. There's a lot more that needs to be taken into account here than what a simple Poisson test can do.

For an analysis of how AVs with various safety levels perform in terms of lives saved over time, I recommend https://www.rand.org/blog/articles/2017/11/why-waiting-for-p...

Edit: Note the default values of the T and r parameters when you run poisson.test(1, conf.level = 0.95), and also that the p-value of the one-sample exact test you performed is 1. Also, since this is an exact test, the rate of rejecting true null hypotheses at 0.95 is 0.05, but given my reservations about the use of a Poisson distribution here, I don't think that using an exact Poisson test is appropriate.


To be more clear, when you run poisson.test(1, conf.level = 0.95) with the default values of T and r (which are both 1) you are performing the following two-sided hypothesis test:

Null hypothesis: The true rate of events is 1 (r) with a time base of 1 (T).

Alternative hypothesis: The true rate is not equal to 1.

The reason that you end up with a p-value of 1 is that you've said you've observed 1 event in a time base of 1 with a hypothesized rate of 1. So given this data, of course the probability of observing a rate equal to or more extreme than 1 is 1! As such, you're not actually testing anything about the data that you claim you are testing.
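
For reference, the same call with its defaults spelled out:

  # Observed 1 event over time base T = 1, hypothesized rate r = 1
  poisson.test(1, T = 1, r = 1)$p.value
  # [1] 1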

I'm not trying to be harsh here, but please be careful when using statistics!


OK, I re-ran it, setting T properly for both cases. The results were similar:

  > poisson.test(c(1, 11800), c(3, 1000000), alternative = c("two.sided"), conf.level = .93)

      Comparison of Poisson rates

  data:  c(1, 11800) time base: c(3, 1e+06)
  count1 = 1, expected count1 = 0.035403, p-value = 0.03478
  alternative hypothesis: true rate ratio is not equal to 1
  93 percent confidence interval:
     1.006334 146.142032
  sample estimates:
  rate ratio 
    28.24859
The lower bound of the CI approaches a rate ratio = 1 for a 93% confidence interval.

Interestingly, if you multiply the CI I claimed before by the rate ratio instead of the expected rate, you get almost exactly the same CI as here.

  > ci <- c(0.03562718, 5.17251332)
  > 28.24859 * ci
  [1]   1.006418 146.116208
 
* Note: 11800 is about two years of US pedestrian deaths, and time units are in millions of miles. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...


Fascinating, thank you. Particularly the part about multiplying the CI. I wonder if the analysis could be rescued to some extent? I feel there must be a way to use the information we have to draw some conclusions, at least relative to some explicit assumptions.


No. 3 million miles of observation. You can get a pretty exact and conservative estimate with a Bayesian Poisson process model. I don't have the time to run the numbers right now, but my guess is the posterior estimate that Uber's fatal accident rate is higher than a human's is >90%, even if taking the human accident rate as a starting prior.
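
As a minimal sketch of the kind of conjugate Gamma-Poisson update this suggests (the prior below is one possible choice, not the only one, and the answer is quite sensitive to it):

  # Rates in deaths per million miles; human baseline ~1.18 per 100m miles
  human_rate <- 0.0118
  deaths <- 1; miles <- 3                  # Uber: 1 death in 3m miles
  # Assumed prior: exponential, i.e. Gamma(1, 1/human_rate), mean = human rate
  a0 <- 1; b0 <- 1 / human_rate
  # Conjugacy: posterior is Gamma(a0 + deaths, b0 + miles)
  1 - pgamma(human_rate, shape = a0 + deaths, rate = b0 + miles)
  # ~0.72 posterior P(Uber's rate > human rate) with this prior;
  # flatter priors push it well above 0.9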


I thought Uber had to have a human take over every 13 miles.

It’s more like 10 miles of observation 300,000 times. Or rather an attentive human can be 50x better than average.


I'd be very interested in seeing the math if you have the time later.


95% - erring on the side of assuming Uber has driven more miles than they probably have.

https://news.ycombinator.com/item?id=16621118


Hmm; if I understand correctly, in that link you show that if Uber’s AI has the same risk of killing people as a human driver, then the prior probability of an accident occurring when it did or earlier was 5%. That’s significant, but it’s not the same measure as the probability that the AI has a higher risk (which would require a prior distribution).


It's a reasonable gut feeling to not generalize from n=1, but the numerical evidence - with either a Bayesian or frequentist approach - is actually quite strong and statistically significant. Math here: https://news.ycombinator.com/item?id=16655081


That's not right. You're setting your expectation for N = 100m miles, then updating it for N = 3 million miles?

That's like saying: "I rolled this red d20 twenty times before I rolled a 1, whereas I rolled a 1 the first time on this blue d20, so the red d20 is obviously better and I'm rolling all my saves on it".

Or, I don't know- "I rolled three 1s on this d20 in twenty rolls so it's obviously not a fair d20".
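
Amusingly, that last one is checkable with the same machinery (assuming a fair d20, so P(rolling a 1) = 1/20):

  # P(three or more 1s in twenty rolls of a fair d20)
  1 - pbinom(2, 20, 1/20)
  # [1] ~0.075 - uncommon, but hardly proof of an unfair die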


Can you clarify? What do you believe to be wrong and why?

If you have a strong prior the dice are equivalent, then yes, the rolls shouldn't change your mind.

If you have a prior that the dice are weighted in an unknown way, then yes, the rolls really should change your mind.


What is it compared to human drivers in general? That seems to be the fair comparison as "computer will never be drunk/tired/distracted" is usually cited as one of the benefits of computer-driven vehicles.


If you want to be legally allowed on the road, I think your benchmark has to be other drivers that are legally allowed on the road.


Haha, as I joked in another article comment thread, their current crash rate ironically means a somewhat drunk but otherwise defensive driver is probably much safer! What a world.



