> Only rigorous, continual, third party validation that the system is effective and safe would be relevant. It should be evaluated more like a medical treatment.

100% agreed, and I'll take it one step further - level 3 should be outright banned/illegal.

The reason is that it enables blame shifting, exactly as is happening right now. Drivers mentally expect level 4, but legally the company will position the fault, as much as it can get away with, on the driver, treating the system as effectively level 2.



They're building on the false premise that human-equivalent performance using cameras is acceptable. That's the whole point of AI: when you can think really fast, the world is really slow. You simulate things. Even with lifetimes of data, the cars will still fail in visual scenarios where the error bars on ground truth shoot through the roof. Elon seems to believe his cars will fail in similar ways to humans because they use cameras. False premise. As Waymo scales, human-level performance just isn't good enough, except for humans.


So, I agree with what you're saying, but that doesn't matter.

The legal standing doesn't care what tech is behind it; it could be 1,000 monkeys for all it matters. The point is that level 3 is the most dangerous level, because neither the public nor the manufacturer properly operates in this space.


Yeah - Tesla is in a weird level 2-4 space. They've managed to shunt liability onto their customers until now.



