Click-bait headline by Reuters (and Guardian). The Delphi spokesperson repudiated[1] the "near-miss" description.
"I was there for the discussion with Reuters about automated vehicles," she told Ars by e-mail. "The story was taken completely out of context when describing a type of complex driving scenario that can occur in the real world. Our expert provided an example of a lane change scenario that our car recently experienced which, coincidentally, was with one of the Google cars also on the road at that time. It wasn’t a 'near miss' as described in the Reuters story."
"Our car did exactly what it was supposed to," she wrote. "Our car saw the Google car move into the same lane as our car was planning to move into, but upon detecting that the lane was no longer open it decided to terminate the move and wait until it was clear again."
I wouldn't mind living in a world where all news stories are like this - "Something bad almost happened, but it didn't, because the algorithms took care of it".
Back to the story, the article doesn't mention whether the human drivers had to intervene to avoid a collision. It would be quite pointless to have a self-driving car you cannot trust, where you always have to be on the lookout to take over if it does something stupid.
No, Minority Report had psychics (the title comes from having 3 of them, so when one of them sees something different than the other 2, there is a "Minority Report").
Are 3 psychics really different from 3 algorithms? Safety-critical software does sometimes get implemented in two different ways to cross-check sanity at runtime, which of course raises issues like which side to trust and what to do when they disagree.
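To make that concrete, here's a rough, purely illustrative sketch of two-out-of-three voting between independently implemented channels (the function name, tolerance, and numbers are made up for this comment, not how any real vehicle stack works):

    # Hypothetical 2-out-of-3 cross-check over independently implemented
    # estimators; everything here is illustrative.
    def vote(readings, tolerance=0.5):
        """Return a value at least two of the three channels agree on,
        or None if there is no majority (caller must fall back to a safe state)."""
        a, b, c = readings
        if abs(a - b) <= tolerance:
            return (a + b) / 2
        if abs(a - c) <= tolerance:
            return (a + c) / 2
        if abs(b - c) <= tolerance:
            return (b + c) / 2
        return None  # channels disagree: e.g. hand back control or brake gently

    # Example: three independent distance estimates to the car ahead (metres).
    gap = vote([12.1, 12.3, 4.0])  # the outlier channel is outvoted -> ~12.2

The interesting failure mode is exactly the one you mention: when no two channels agree, the only safe answer is "don't trust any of them" and drop to a fallback.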
Not to be cynical, but this is perfect publicity for Delphi Automotive. There appears to be no third-party confirmation of the incident, solely the say-so of the Delphi executive.
It would seem to me (given the low density of self-driving cars and their excellent track record) that the odds of Delphi wanting to get in the news and bash a competitor at the same time are greater than the odds of two self-driving cars happening to cross paths and causing a "near-miss".
Both vehicles would be doing a ton of data collection, so it's hard to imagine a situation where false claims wouldn't easily be debunked with data/footage from the vehicles themselves.
It isn't possible for Google to provide footage showing it _not_ happening, and releasing footage would be entirely at Delphi's own discretion.
So the safety tech worked as expected. That is great news. There are way more "near misses" every day from human drivers.
Also, seeing the "near miss" in the editorialized title, this Carlin quote is mandatory:
“Here's a phrase that apparently the airlines simply made up: near miss. They say that if 2 planes almost collide, it's a near miss. Bullshit, my friend. It's a near hit! A collision is a near miss.
[WHAM! CRUNCH!]
"Look, they nearly missed!"
"Yes, but not quite.”
Carlin is ignorant. The OED lists usage of 'near miss' going back to 1940 in a maritime context. It was not made up by the airlines. Even if it was, the phrase still makes sense. It's not 'nearly missed', it's 'a miss where the objects were near'.
Have you ever heard the phrase 'it's funny because it's true'? Well this isn't true, so I don't see Carlin as funny. He's just ignorant and angry.
The real question is how they got into a collision situation in the first place.
Even assuming their evasive action was perfect, if instead of two self-driving cars there had been a normal car on the other side, or some other obstacle (a building, etc.), getting onto a collision course wouldn't have been that safe...
EDIT: downvote? If someone disagrees they could also state why. Or am I shattering the perfect-technology dream?
A self-driving Lexus operated by Google apparently cut off a self-driving Audi run by Delphi Automotive as it was trying to change lanes, causing it to take “appropriate action” to avoid a collision, said a Delphi executive.
John Absmeier, who was travelling in his company’s car at the time, said the Audi was forced to abort its lane change in the incident, which happened earlier this week.
I'm glad it was a robot, and not me driving that car. Not sure if I would have been able to change lanes that fast without causing an accident.
That's the issue: one of the robots forced another car to react quickly to avoid an accident. Had this been a human, it could have ended very differently.
It wasn’t a 'near miss' as described in the Reuters story."
Instead, she explained how this was a normal scenario, and the Delphi car performed admirably.
"Our car did exactly what it was supposed to," she wrote. "Our car saw the Google car move into the same lane as our car was planning to move into, but upon detecting that the lane was no longer open it decided to terminate the move and wait until it was clear again."
Sounds like an excellent idea; otherwise there's the risk of companies not reporting any accident by just tagging them all as "manual driving", whether they were manually driven at the time or not.
“No collision took place.”
“In all cases, the self-driving prototype was not at fault, according to the California Department of Motor Vehicles and the companies.”
You're piecing together two unrelated sentences from the article to tell a different story...
"No collision took place" - refers to the two cars in this case.
"In all cases, the self-driving prototype was not at fault, according to the California Department of Motor Vehicles and the companies." - refers to previous crashes excluding the one being discussed in the article.
The road this happened on frequently has three lanes on a carriageway. I can easily imagine a situation where the cars in question are in the left-most and right-most lanes, and both want to move into the middle lane. Both "see" that the lane is empty and both make their move at the same time. I wonder if these self-driving cars can "see" a turn signal two lanes away.
Of course, we don't know if the Google car was in self-driving mode or was being operated by a person. The article only tells one side of the story.
So I guess the real issue is that we need a federal standard for self-driving cars requiring them to talk to similar cars within a set radius.
Should be far easier than just driving down a crowded road. Effectively each car has the equivalent of a MAC address and simply broadcasts its actions, and any car listening within range responds appropriately (a rough sketch of what such a broadcast could look like follows below).
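Purely as an illustration of the "broadcast your actions" idea (the field names, port, and message format below are invented for this comment, not any real V2V standard such as DSRC basic safety messages):

    # Hypothetical V2V broadcast sketch -- field names, port, and radius
    # handling are made up for illustration; real standards look different.
    import json, socket, time, uuid

    CAR_ID = uuid.uuid4().hex   # stands in for the "MAC address" idea
    PORT = 47000                # arbitrary port chosen for this example

    def broadcast_intent(action, lane, position, speed_mps):
        """Broadcast this car's intended action to anything listening nearby."""
        msg = {
            "car_id": CAR_ID,
            "time": time.time(),
            "action": action,          # e.g. "lane_change_left"
            "lane": lane,
            "position": position,      # (lat, lon)
            "speed_mps": speed_mps,
        }
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(msg).encode(), ("255.255.255.255", PORT))

    # A listening car would filter messages by distance from its own position
    # (the "set radius") and yield or re-plan if two broadcast intents conflict.

That still leaves the hard part: agreeing on who yields when two cars announce conflicting moves at the same moment.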
1. a narrowly avoided collision or other accident.
"she had a near miss when her horse was nearly sucked into a dyke"
2. a bomb or shot that just misses its target.
"he had escaped more than twenty near misses"
In no definition is "near miss" a hit. Even taking it "literally", near is an adjective for the noun miss, so it is a miss that is near, i.e. a miss where two bodies are close but don't hit.
"could care less" only exists out of misappropriation of the original idiom though, which is "couldn't care less" - a phrase which makes far more sense.
But it's used in the sense "too close and not within acceptable margins". It's not a collision, but one might just as well think of it as a "hit" - someone messed up.
"near miss" just means "that was close", or "we almost didn't miss" -- in other words, "we missed, but we were pretty damn close and could have collided".
It doesn't mean "nearly missed, but actually hit" -- check any dictionary.
What if one of the automated cars has to pick between swerving into another obstacle or keeping the collision course?
(not even considering the moral implications of what that second obstacle might be...)
This brings up an interesting topic - who is liable when two self driving cars crash (without outside influence)?
The car owners? The car manufacturer?
It's going to be great when these cars are around a bit more. You could have some pretty good fun trying to get one of the self driving cars to crash into you. Won't be that hard to do I'm sure.
Ugh, this article sounds like the Lexus brand trying to latch onto the Google self-driving hype train. Hey guys, we are making a cool car too!
On a more important note: do we really want all the different tech companies and car manufacturers competing to build separate driverless software and standards? Looking at how well that worked out for online maps, doesn't make me feel safer.
Do you mean that Lexus developed this on their own? Because they didn't; Google's initial autonomous car prototype was a Lexus. The dingy small ones are their new fleet, using different hardware but the same software.
I think they were more implying that the software that makes the decisions should be common to all of them. That way we don't allow individual car manufacturers to make potentially fatal mistakes when cutting corners with their software development. I think at the very least there should be standardisation, so that there can be some communication between cars to aid in resolving traffic jams and other uses.
Then you will have people writing software with the aim of passing the tests, not real-world safety. I don't think automated testing can catch the type of bugs that arise sporadically, which could be fatal in the case of self-driving cars.
The industry isn't going to just hand Google a monopoly. Of course they are going to develop their own as well.
Even if Google's software was the absolute bee's knees, what if Google deprioritized it? Left the industry? Made unreasonable licensing demands, or made exclusivity deals with competing automotive manufacturers? What if they used it as leverage to push manufacturers around?
Even if you do decide to work with Google for now, not having a backup plan is just poor strategic planning. Having a backup plan means developing these sort of systems yourself.
All I'm saying is that I trust Google to produce this software to a high standard, much much more than I trust the auto manufacturers to do so. I'm sure Google isn't developing their own brake pads in these cars, and I wouldn't trust those brake pads if they were.
I am not arguing against competition, I am saying it would be nice if all these cars interoperated seamlessly, coordinating with each other or centrally, rather than each car having a different set of parameters and trying to figure every other car out on the fly.
If the goal is safety, then a wild west with every company setting standards for its own projects isn't going to be the best approach. If the goal is profit, then yee-haw, let the gold rush begin!
There are some real problems with centralized systems. The obvious one is that if the centralized system goes down, things would become very, very bad in a very short time (thousands of cars effectively driverless, all at once...yikes).
Making each car responsible for its own collision avoidance is a lot more robust. If the sensors fail in one car, the others can take corrective action. For maximum safety, you'd also want the cars to be running different (but equally good) software, so you don't run into the type of situation where (e.g.) they all go nuts because the programmer didn't account for leap years or whatever.
do we really want all the different tech companies and car manufacturers competing to build separate driverless software and standards?
Diversity is good. If everyone used the same software, it would all have the same flaws. Competition also provides an incentive to make the software better.
"I was there for the discussion with Reuters about automated vehicles," she told Ars by e-mail. "The story was taken completely out of context when describing a type of complex driving scenario that can occur in the real world. Our expert provided an example of a lane change scenario that our car recently experienced which, coincidentally, was with one of the Google cars also on the road at that time. It wasn’t a 'near miss' as described in the Reuters story."
1. http://arstechnica.com/cars/2015/06/no-2-self-driving-cars-d...