> I think a car is going to need a Theory-of-Mind to navigate complex social driving environments
As someone with a lot of experience in self-driving cars, my opinion has changed over the course of the last decade from "we can create smart enough models of these separate problems to create a statistically safer product" to "first you need to invent general AI."
It becomes immediately obvious as you encounter more and more edge cases: you would never have thought of them in advance, and you have no idea how to handle them (hard-coding individual cases doesn't scale; there are far too many). So you realize the car actually has to be able to think. What's worse, it has to be able to think far enough into the future to anticipate anything that could end badly.
The most interesting part of self-driving cars is definitely on the prediction teams - their job is to predict the world however many seconds into the future and incorporate that into the path planning. As you can guess, the car often predicts the future incorrectly. It's just a ridiculously hard problem. I think the current toolbox of ML is just woefully, completely, entirely inadequate to tackle this monster.
I happened to run across statistics (IIHS, I think) for fatalities to (human) drivers per registered vehicle-year, and the number is about 30 per million. You may or may not accept that this is the right metric, but it's interesting to think about what the number implies. I read that Uber had a fleet of 250 cars in testing. If an average human driver has a fatal accident once in ~33,000 vehicle-years, then to demonstrate parity, Uber would have to operate its 250 vehicles for an average of ~130 years between fatalities. Granted, this is driver fatalities, so hitting pedestrians would not count. Even so, it's a high bar: you would need hundreds of years of testing at the current rate to be confident you are actually better than human drivers rather than riding a statistical fluke.
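The arithmetic above can be checked with a quick sketch. The rate (30 per million vehicle-years) and the 250-car fleet size are the figures cited in the comment; everything else follows from division:

```python
# Back-of-envelope check of the fatality-rate arithmetic above.
# Inputs are the figures cited in the comment (IIHS-style rate, 250-car fleet).
fatalities_per_million_vehicle_years = 30
fleet_size = 250

# Mean vehicle-years between driver fatalities for an average human driver.
years_per_fatality = 1_000_000 / fatalities_per_million_vehicle_years  # ~33,333

# Spread across the fleet: calendar years of fleet operation per expected
# fatality, if the self-driving system exactly matches human performance.
fleet_years_between_fatalities = years_per_fatality / fleet_size  # ~133

print(f"{years_per_fatality:,.0f} vehicle-years per fatality")
print(f"~{fleet_years_between_fatalities:.0f} fleet-years per expected fatality")
```

Observing zero fatalities over one expected-fatality interval is weak evidence of parity, which is why the comment says you'd need several multiples of that (hundreds of years at this fleet size) for statistical confidence.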