AI system manufacturers want to sell their products by advertising superhuman reliability, but they don't want to take responsibility for any mistakes. I don't own a Tesla myself, but a friend of mine claimed back in 2017 that his car was already capable of fully autonomous, safe driving without human intervention. It's interesting how advertising can distort a technology's real capabilities in people's eyes.


My wife and I have two Teslas, one HW3 and one HW4. Even late last year, FSD 12.5 was nowhere near able to drive safely. Any non-straightforward situation (like merging during rush hour) would throw it off, so critical interventions were required at least daily.

Starting with FSD 13 on HW4, which came out last December, it has improved dramatically, and since then it hasn't needed a single critical intervention in my experience. I think 12.6 on HW3 is also quite good.

The caveat is that we live in the Bay Area, which has an abundance of Tesla training data. Elsewhere, I've heard the experience isn't as good. And of course, even in the Bay Area, reliability needs to improve by a few orders of magnitude to be suitable for fully unsupervised self-driving.



