Humans do dumb stuff like drive their cars into flowing floodwaters, and they show no signs of stopping. The Waymo Driver (the name for the hardware and software stack) is getting smarter all the time.
Humans do indeed drive into floodwaters like fools, but here's a critical point that's often missed when talking about how self-driving cars will make the roads safer: you don't. Self-driving cars can potentially be safer in general, but not necessarily safer for you in particular.
Imagine I created a magic bracelet that could reduce bicycling-related deaths and injuries from 130,000 a year to 70,000. A great win for humans! The catch is that everyone would need to wear it, even people who don't ride bikes, and those 70,000 deaths and injuries would be randomly distributed among the entire population. Would you wear it?
I don't understand the analogy. No one is being forced to stop driving and take autonomous rides. If I'm a better-than-average driver (debatable), I'm glad to have below-average drivers use autonomous vehicles instead.
If you're on the road with one, you're wearing the bracelet. If you're driving one, you're wearing two. I don't mean to sound so sour; I was hoping the analogy would alias that into the background a bit. It's just that the hoopla around self-driving cars is causing people to skip reading the footnotes.
Safe vs. unsafe isn't as simple as who gets a 10/10 on the closed-course test. Humans are more predictable than random chance, often even when drunk or distracted. I can't count how many times I've seen someone wobbling on the road and knew to stay back. You can also often tell when someone is about to yank over into your lane: they fly up in the other lane, pull just ahead of you, then wiggle a bit, clearly waiting for the first chance to cut in front of you and take off. There are lots of other little 'tells' that, if you're a defensive driver, have helped you avoid countless accidents.
Being a prudent defensive driver goes out the window when the perfectly speed-limit-adhering driver next to you goes straight to -NaN because someone drives past with Christmas lights on their car, or the sun glares off oversized chrome rims, or an odd-shaped vehicle doesn't match "vehicle" in the database, or, or, or.
* I'm very much not saying the example above is reason enough on its own; I'm saying I'm not sure enough thought is being put into how many more pages of examples I could go on listing. And I'm just some schmuck who worked for some number of years on the underlying technology, not the guy watching it fail in imaginative ways on the road.
Something said earlier really overestimates what's happening: it doesn't get smarter; it gets another "if" statement.
That's just what getting smarter is, though. I mean, we want to see the human "if" as somehow better than the machine "if" due to an obvious bias, but mechanically, what's the difference?
Comparing the two as "ifs" is a really fun way to start getting drunk on this flavor of philosophy, and I highly encourage it, but the short answer is that 'if' is deterministic, whereas intelligence isn't. That's actually the first step in determining whether something has intelligence: if every time you do Action[A,B,C] it does Response[A,B,C], you can end your inquiry. Things that respond that way include tuning forks, calculators, toasters, Furby, Tickle Me Elmo, Roomba, doorbells, glitter, magnets, literally any pile of garbage, and Teslas.
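To make that test concrete, here's a toy sketch (all names hypothetical, nothing to do with any real driving stack): a rule table always maps the same action to the same response, while even a trivially stochastic policy fails the "end your inquiry" check. Non-determinism alone obviously isn't intelligence; the point is just how cheap determinism is to detect.

```python
import random

# Deterministic responder: the same action always produces the same response.
# This is the "pile of garbage" category from the comment above.
def rule_based_responder(action: str) -> str:
    responses = {"A": "Response A", "B": "Response B", "C": "Response C"}
    return responses.get(action, "no matching rule")  # anything else falls off the table

# Trivially stochastic policy: identical inputs can produce different outputs.
# Failing the determinism test doesn't prove intelligence; passing it rules it out.
def stochastic_policy(action: str) -> str:
    if random.random() < 0.1:
        return "improvise"
    return f"Response {action}"

# The determinism test: feed the same action repeatedly and compare responses.
trials = [rule_based_responder("A") for _ in range(1000)]
print(len(set(trials)) == 1)  # True: one distinct response -> end your inquiry

trials = [stochastic_policy("A") for _ in range(1000)]
print(len(set(trials)) == 1)  # almost certainly False -> keep investigating
```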