
No, they don't. Look at what happened when a Tesla mistook a nearby motorcycle with two small rear lights for a more distant car with the same lighting configuration. It did not end well for the motorcyclist.
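The ambiguity described above is the classic scale/distance problem of monocular vision: two closely spaced lights near the camera subtend the same angle as two widely spaced lights far away. A minimal sketch, using hypothetical but plausible numbers (motorcycle taillights ~0.3 m apart, car taillights ~1.5 m apart) and the small-angle approximation:

```python
def angular_separation(light_spacing_m: float, distance_m: float) -> float:
    """Angle (radians) subtended by two taillights at a given distance,
    using the small-angle approximation angle ~= spacing / distance."""
    return light_spacing_m / distance_m

# Hypothetical geometry: a motorcycle 10 m ahead vs a car 50 m ahead.
moto = angular_separation(0.3, 10.0)   # motorcycle, lights 0.3 m apart
car = angular_separation(1.5, 50.0)    # car, lights 1.5 m apart

# Both pairs of lights subtend the same angle in the image, so spacing
# alone cannot tell a single camera which scene it is looking at.
print(moto, car)
```

A camera-only system has to break this tie with learned priors or motion cues; a lidar return gives the range directly, which is the crux of the disagreement in this thread.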

He's just wrong about this.



I don't think it's wrong, but I do think the models available right now lack the inductive bias required to solve the task appropriately, and have architectural misalignments with the task at hand, so a properly reliable output would need impossibly large models and impossibly large, varied datasets. The same goes for transformers for language modelling: an extremely adaptable model, but ultimately not aligned with the task of understanding and learning language, which is why we need enormous piles of data and huge models to get decent output.




With respect, I disagree. Musk is obsessed with "the best part is no part", which only works if you don't actually need the part. Combine that with an obsession with cost cutting and you get tunnel vision: insisting on a course of action that cannot know with 100% certainty what is in the world it is trying to navigate. And this has led directly to people dying.

Being obsessed only works when you turn out to be right, and Tesla's system does not work as well as lidar.




