LLM hallucination? I want to give posters the benefit of the doubt, but I didn't mention a Reddit thread.
If you're just getting me mixed up with another poster, I got my stats from an electrek article supplemented by Waymo's releases: https://waymo.com/safety/impact/
Tesla's tech is also marketed as Full Self-Driving and Autopilot, not just basic driver assistance like adaptive cruise control.
That's how they're doing the autonomous robotaxis and the cross country drives without anyone touching the steering wheel.
Sure. And Tesla doesn't have robotaxis at all, they're still playing in the kindergarten league.
So Tesla is in a weird state right now. Tesla's highway assist is shit; it's worse than Mercedes' previous-generation assist, ever since Tesla switched to the end-to-end neural networks. The new MB.Drive Assist Pro is apparently even better.
FSD attempts to work in cities, but it's ridiculously bad, worse than useless even in simple city conditions. If I try to turn it on, it attempts to kill me at least once on my route from my office to my home. So other carmakers quite sensibly avoid it until they've perfected the technology.
Girl get real. Mercedes fooled quite a few people with their PR stunt but they have NOTHING like fsd. Drive assist pro is vaporware, as their “L3” has been for the past 2 years. You can’t order that shit but half of hackernews is glazing mercedes for it
They canceled the Drive Pilot L3, which is fully autonomous with zero driver intervention (approved by the government), because the software isn't there yet due to the handoff problem. They are still working on making it work at 130 km/h on highways. The problem with a zero-driver-intervention system is that the driver isn't guaranteed to be paying attention when the mode is no longer applicable; the mode switch is only obvious on the highway when exiting, but the L3 system doesn't support highway driving speeds yet.
I'm not talking about some Tesla style last second bullshit where you're supposed to compensate for the deficiencies of the system that supposedly can do the full journey. I mean a route like L2->L3->L2 where L2 is human supervised autonomous driving and L3 is autonomous driving with zero intervention. You can't tell people they're allowed to drink a coffee and then one minute later tell them to supervise the driving.
> I'm not talking about some Tesla style last second bullshit where you're supposed to compensate for the deficiencies of the system that supposedly can do the full journey.
Interesting, because that's just not my experience at all, nor that of a lot of other users.
This goes against my daily fsd usage and my friends fsd usage. We all use fsd daily, zero issues, through hard city and highway environments. It’s near perfect outside of the occasional weird routing issues (but that’s not a safety issue). We all have the latest fsd on hw4. No other consumer car on the market in the US can do this (go from point a to b with zero interventions through city and highway). If there was something better then I’d buy it, but there’s not.
The issue here is that "zero issues" is something that must be based on a very large sample size. In the US the death rate for cars is a bit over 1 per 100 million miles. So you really need billions of miles of data. FSD could be 10x as dangerous as the average driver and still it would most likely be "zero issues" for you and all your friends.
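The sample-size point can be put in rough numbers with a simple Poisson model. The figures below are illustrative assumptions (the ~1-per-100M-miles baseline from the post, plus a made-up 100,000 miles for a group of friends), not anyone's real mileage:

```python
import math

# Assumed figures: US fatal-crash rate ~1 per 100M miles (from the post),
# and a hypothetical group of drivers logging 100,000 FSD miles combined.
baseline_rate = 1 / 100_000_000   # deaths per mile
group_miles = 100_000

# Even if FSD were 10x as dangerous as the average driver, the chance
# this group observes zero fatal events under a Poisson model is:
expected_events = 10 * baseline_rate * group_miles   # 0.01 expected events
p_zero = math.exp(-expected_events)
print(f"{p_zero:.3f}")  # ~0.99: "zero issues" is the expected observation
```

So anecdotal "zero issues" is fully consistent with a system 10x worse than average; only billions of miles can distinguish the two.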
I'll post the 7 billion miles of stats here (https://www.tesla.com/fsd/safety) but then the objections will be "it's Tesla of course they lie" and the debunked "they turn FSD off right before an accident".
Sigh. FSD is OK on freeways, but it constantly changes lanes for no discernible reason, sometimes unsafely or unnaturally, forcing me to take over. The previous stack had a setting to disable that, but the new end-to-end NN-based system doesn't.
In cities, it's just shit. If you're using it without paying attention, your driving license has to be revoked and you should never be allowed to drive.
For anyone who has or has experienced the latest gen FSD from Tesla this comes across as a complete lie. Why would you spend energy lying on HN of all places?
> anyone who has or has experienced the latest gen FSD from Tesla this comes across as a complete lie
I used the latest FSD and Waymo in December. FSD still needs to be supervised. It’s impressive and better than what my Subaru’s lane-keeping software can do. But I can confidently nap in a Waymo. These are totally different products and technology stacks.
It also misinterprets this signal: https://maps.app.goo.gl/fhZsQtN5LKy59Mpv6 It doesn't have enough resolution to resolve the red left arrow, especially when it's even mildly rainy.
Are you talking hw3 or 4? Also, the e2e FSD is recent. And FSD has gotten really good since 13, and with 14 it's really, really good. Not sure what 2015 has to do with anything. Red hands of death would be sunglare due to your windshield not being clean. I haven't had red hands since 14 came out.
These are influencers who have a stake in Tesla. The general consensus from regular users is that it is really good starting at FSD 14. It's the first version that finally feels complete. I have 5,000 miles on FSD 14 with no disengagements. 99% of my driving is FSD. I couldn't say that for any other version. Even my wife has 85% of her driving on FSD, and she hated it before. She just tends to drive herself on short drives and in parking lots, whereas I don't. So your take just doesn't line up with what people are saying on social media or with my personal experience.
> My windshield is completely normal
If it's never been cleaned from the inside, there's a good chance it's not. The off-gassing from new cars causes fog on the inside of the windshield in front of the camera. It might behave OK (or weird), but when sun hits it you get the red hands of death.
You need to clean it yourself or have Tesla do it. They offer it for free. I did mine following this video and it wasn't bad if you have the right tool. After I did this things were completely fine in low direct sun.
> These are influencers who have a stake in Tesla.
I've seen it on multiple forums. Just like a broken record.
> If it's never been cleaned from the inside, it's a good chance it's not.
The camera is clean. I can see that on the dashcam records. And if the system is so fragile that a bit of dust kills it, then it's not good.
The issue with the red-hands-of-death is caused by the forward collision warning: the road there curves and slopes up, so the car gets confused and interprets the car in front as if it's on a collision course. This happens even during manual driving, btw. False FCWs are a common problem if you check the forums, and people are annoyed because they affect the safety score used for Tesla Insurance.
FSD got better than it was 4 years ago. But it's still _nowhere_ near Waymo. You absolutely can not just sit back and snooze while it's driving, you constantly have to be on guard.
> The camera is clean. I can see that on the dashcam records.
You won't see it unless you shine light into it.
> And if the system is so fragile that a bit of dust kills it, then it's not good.
It's not dust, it's fog on the inside of the windshield from offgassing.
> The issue with the red-hands-of-death is caused by the forward collision warning, the road there curves and slopes up, so the car gets confused and interprets the car in front as if it's on a collision course
Oh, fair enough. I've never seen this, and I used FSD (14) all through the Appalachian mountains.
> FSD got better than it was 4 years ago. But it's still _nowhere_ near Waymo
Fair enough, but FSD is still years ahead of any other system you can buy as a consumer.
I recently went on vacation and rented a 7 year old Model X and the FSD on it (v12) was better than nothing but not great, especially after having v14 on my truck drive 99% of my miles. It truly is a life-changer for people fortunate enough to have it, so it's always jarring to see the misinformed/dishonest comments online. It's still not perfect but at this point I would trust it more than the average human and certainly more than a new/old/exhausted/inebriated/distracted driver.
Is it really comparable, though? Which is better, a Ferrari or a Ford Ranger? That depends on whether you're trying to go fast or haul 500 lbs of stuff across town. Waymo is a much better completely autonomous robotaxi in limited areas mapped to the mm, but if I want an autonomous driving system for my personal car to go wherever I want, Tesla FSD is the better option.
We being who? What is your evidence it's better? The fact all the cars stopped moving when the power went out? The fact they cost WayMore? Show the evidence for your claims. And they have remote operators as proven by the power outage.
Apologies, I was unclear with the "i.e." bit I assume. To spell it out: I think after struggling with it for years, it's time to call it, because Waymo has had a scaled paid service, with no drivers, in multiple cities, for 1 year+.
It’s because you spam this thread so much with such aggressive language that it honestly is scary to deal with you.
You’re smart Darren, and so are other people, you should assume I knew the cars have remote backup operators. Again, you’re smart, you also know why that doesn’t mitigate having a scaled robotaxi service vs. nothing
I doubt you’ll chill out but here’s a little behind the scenes peek for you that also directly address what you’re saying: a big turning point for me was getting a job at Google and realizing Elon was talking big game but there’s 100,000 things you gotta do to run a robot taxi service and Tesla was doing none of them. The epiphany came when I saw the windshield wipers for cameras and lidar.
You might note even a platonically ideal robotaxi service would always have capacity for remote operation.
I can't tell if this was supposed to be for me; I am not Darren. The reply was on my thread...
My replies are at the same level as that which I respond to, never aggressive IMO.
And if you "knew" something about the relevant topic and leave it out, that in itself is part of the dishonesty.
So once you got a job at Google then you felt Waymo was better, hmmm.
Tesla has a robot taxi service that in some cases has nobody in the car. Also everyone that owns a Tesla has experienced FSD in which it goes from A to B without being touched which is the same as it driving by itself. A person just went cross country and back with this. So to say Tesla is doing none of the 100,000 things you think are required, I think that says more about what someone at Google thinks is needed vs what is happening on the ground.
I am not against remote operation in some cases, but those suggesting Waymo has solved this need to admit that it relies heavily on remote operators for basic decisions, like what to do when the power goes out at intersections.
This is such a weird take when Elon Musk is still letting his Optimus robots be teleoperated for basically every live demo. If you're lenient with him, it's completely unreasonable to be strict with Waymo, which works autonomously the vast majority of time.
Optimus is early days, and my take isn't against remote operators but pointing out that Waymo relies on them much more than Tesla. You can go from one side of the country to the other without ever touching the wheel, something Waymo could not do. And the SF power outage incident showed us that it is actually only autonomous until it isn't: then you have a bunch of cars that can't move and don't.
Because I use them both and I can tell Teslas are really, really good at driving, and more naturally than Waymo at that. Obviously there’s a reason they’re still supervised but if they manage to climb that mountain it’s game over for waymo
What's lacking here? Waymos are driving driverless in multiple cities and Teslas are not. Tesla's robotaxis have a person with a hand on the button at all times for emergencies.
They might get better, but how is that not evidence enough that Tesla's robotaxis are currently behind Waymo in self-driving capabilities?
This was your chance to provide the evidence to your claims. It is conjecture what you have provided. Waymo requires the remote operator make decisions often, such as at uncontrolled intersections when the lights go out, as shown in SF. Just because you don't see the strings doesn't mean they aren't there.
I was just thinking about this on my 60-mile FSD drive I just finished. Basically inevitable that I would shortly go on HN or Reddit and read how FSD doesn't work.
FSD is here, it wasn't 3 or 4 years ago when I first bought a Tesla, but today it's incredible.
That isn’t the point. The point is that it’s not enough to figure out that there is an obstacle. You also have to figure out what that obstacle is, and you have to predict its movement. In the case of a pedestrian, for example, the car needs to know whether the pedestrian has seen you. Things like that you just cannot do with LiDAR. Hence you’re gonna need cameras anyway. Hence the “anyone relying on LiDAR is doomed” prediction.
The long-term view of LIDAR was not so much that it was expensive, though it was at the time. The issue is that it is susceptible to interference if everyone is using LIDAR for everything all the time and it is vulnerable to spoofing/jamming by bad actors.
For better or worse, passive optical is much more robust against these types of risks. This doesn't matter much when LIDAR is relatively rare but that can't be assumed to remain the case forever.
Doesn’t mean they’re failing because of interfering lidar though. If it’s something like them failing due to the road being blocked or something, it makes sense they’d fail together. Assuming they’re on the same OS, why would one know how to handle that situation and another not?
I am just some schmoe, but optics alone can be easily spoofed, as any fan of Wile E. Coyote has known for decades. [0]
What's crazy to me is that anyone would think that anything short of ASI could take image based world understanding to true FSD. Tesla tried to replicate human response, ~"because humans only have eyes" but largely without even stereoscopic vision, ffs.
But optical illusions are much less of an issue because humans understand them and also suffer from them. That makes them easier to detect, easier to debug, and much less scary to the average driver.
Sure, someone can put up a wall painted to look like a road, but we have about a century of experience that people will generally not do that. And if they do it's easy to understand why that was an issue, and both fixing the issue (removing the mural) and punishing any malicious attempt at doing this would be swift
> and punishing any malicious attempt at doing this would be swift
Is this a joke? Graffiti is now punishable and enforced by whom exactly? Who decides what constitutes an illegal image? How do you catch them? What if vision-only FSD sees a city-sanctioned brick building's mural as an actual sunset?
So you agree that all we need is AGI and human-equal sensors for Tesla-style FSD, but wait... plus some "swift" enforcement force for illegal murals? I love this. I have had health issues recently, and I have not laughed this hard in a while. Thank you.
Hell, at the last "Tesla AI Day," Musk himself said ~"FSD basically requires AGI" - so he is well aware.
Intentionally trying to create traffic accidents is illegal. This isn't an FSD-thing. If you try to intentionally get humans to crash their cars you are going to get into trouble. I don't see how this suddenly becomes OK when done to competent FSD (not that I'd count Tesla among them)
If I understand your argument correctly, then posting a sign that is incorrect... like a wrong-way highway on-ramp sign, would be illegal? That sounds correct.
But what if your city hired you to paint a sunset mural on a wall, and then a vision-only system killed a family of four by driving into it, during some "edge case" lighting situation?
I would like to think that we would apply "security is an onion" to our physical safety as well. Stereo vision + lidar + radar + ultrasonic? Would that not be the least that we could do as technologists?
That was autopilot not FSD. Autopilot is a simple ADAS system similar to Toyota Safety sense or all the other garbage ADAS systems from Honda, Kia, Toyota, GM etc. FSD passed this test with flying colors
Everyone uses cellphones that transmit on the same frequency, and they don't seem to cause interference. Once enough lidar enters real-world use, there will be regulation to make them work with each other.
Completely different problem domains. A mobile phone is interacting with a fixed point (i.e. cell tower) that coordinates and manages traffic across cell phones to minimize interference. LIDAR is like wifi, a commons that can be polluted at will by arbitrary actors.
LIDAR has much more in common with ordinary radar (it is in the name, after all) and is similarly susceptible to interference.
No, LIDAR is relatively trivial to render immune to interference from other LIDARs. Look at how dozens of GPS satellites share the same frequency without stepping on each others' toes, for instance: https://en.wikipedia.org/wiki/Gold_code
Like GPS, LIDAR can be jammed or spoofed by intentional actors, of course. That part's not so easy to hand-wave away, but someone who wants to screw with road traffic will certainly have easier ways to do it.
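To make the Gold-code reference concrete, here's a toy construction in Python using a standard length-31 preferred pair of LFSR taps (GPS C/A codes use the length-1023 equivalent). This illustrates the coding idea only, not anything lidar-specific:

```python
def m_sequence(taps, n):
    """Maximal-length sequence from a Fibonacci LFSR (all-ones seed)."""
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def correlate(a, b, shift):
    """Periodic correlation with bits mapped to +/-1."""
    n = len(a)
    return sum((1 - 2 * a[i]) * (1 - 2 * b[(i + shift) % n]) for i in range(n))

u = m_sequence([5, 2], 5)        # one primitive polynomial of degree 5
v = m_sequence([5, 4, 3, 2], 5)  # its preferred-pair partner
# Gold family: u, v, and u XOR (each of the 31 shifts of v) -> 33 codes
# whose pairwise cross-correlations stay small, so each receiver can pick
# out "its" signal even with every other emitter transmitting at once.
gold = [u, v] + [[u[i] ^ v[(i + s) % 31] for i in range(31)] for s in range(31)]
print(len(gold), max(abs(correlate(u, v, s)) for s in range(31)))  # 33 9
```

The point is the ratio: a peak of 31 against a worst-case cross-correlation of 9, and the margin grows with code length.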
> No, LIDAR is relatively trivial to render immune to interference from other LIDARs.
For rotating pulsed lidar, this really isn't the case. It's possible, but certainly not trivial. The challenge is that eye safety is determined by the energy in a pulse, but detection range is determined by the power of a pulse, driving towards minimum pulse width for a given lens size. This width is under 10 ns, and leaning closer to 2-4 ns for more modern systems. With laser diode currents in the tens of amps range, producing a gaussian pulse this width is already a challenging inductance-minimization problem -- think GaN, thin PCBs, wire-bonded LDs etc to get loop area down. And an inductance-limited pulse is inherently gaussian. To play any anti-interference games means being able to modulate the pulse more finely than that, without increasing the effective pulse width enough to make you uncompetitive on range. This is hard.
I think we may have had this discussion before, but from an engineering perspective, I don't buy it. For coding, the number of pulses per second is what matters, not power.
Large numbers of bits per unit of time are what it takes to make two sequences correlate (or not), and large numbers of bits per unit of time are not a problem in this business. Signal power limits imposed by eye safety requirements will kick in long after noise limits imposed by Shannon-Hartley.
> For coding, the number of pulses per second is what matters, not power.
I haven't seen a system that does anti-interference across multiple pulses, as opposed to by shaping individual pulses. (I've seen systems that introduce random jitter across multiple pulses to de-correlate interference, but that's a bit different.) The issue is you really do get a hell of a lot of data out of a single pulse, and for interesting objects (thin poles, power lines) there's not a lot of correlation between adjacent pulses -- you can't always assume properties across multiple pulses without having to throw away data from single data-carrying pulses.
Edit: Another way of saying this -- your revisit rate to a specific point of interference is around 20 Hz. That's just not a lot of bits per unit time.
> Signal power limits imposed by eye safety requirements will kick in long after noise limits imposed by Shannon-Hartley.
I can believe this is true for FMCW lidar, but I know it to be untrue for pulsed lidar. Perhaps we're discussing different systems?
> I haven't seen a system that does anti-interference across multiple pulses...
My naive assumption would be that they would do exactly that. In fact, offhand, I don't know how else I'd go about it. When emitting pulses every X ns, I might envision using a long LFSR whose low-order bit specifies whether to skip the next X-ns time slot or not. Every car gets its own lidar seed, just like it gets its own key fob seed now.
Then, when listening for returned pulses, the receiver would correlate against the same sequence. Echoes from fixed objects would be represented by a constant lag, while those from moving ones would be "Doppler-shifted" in time and show up at varying lags.
So yes, you'd lose some energy due to dead time that you'd otherwise fill with a constant pulse train, but the processing gain from the correlator would presumably make up for that and then some. Why wouldn't existing systems do something like this?
I've never designed a lidar, but I can't believe there's anything to the multiple-access problem that wasn't already well-known in the 1970s. What else needs to be invented, other than implementation and integration details?
Edit re: the 20 Hz constraint, that's one area where our assumptions probably diverge. The output might be 20 Hz but internally, why wouldn't you be working with millions of individual pulses per frame? Lasers are freaking fast and so are photodiodes, given synchronous detection.
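The scheme above can be checked numerically. This is a toy model with made-up parameters (slot count, lag, a single interferer, no photon noise), just to show that correlating against one's own pseudorandom firing pattern recovers the echo's lag despite another emitter:

```python
import random

# Toy sketch of pseudorandom pulse gating + receiver correlation.
# All parameters are illustrative; real systems face the pulse-shaping
# and revisit-rate limits discussed in the thread.
random.seed(0)
N = 4096
own = [random.randint(0, 1) for _ in range(N)]       # our firing pattern
intruder = [random.randint(0, 1) for _ in range(N)]  # another car's pattern

true_lag = 37  # round-trip delay to the target, in time slots
# Received photon counts: our own echo at a fixed lag, plus interference.
received = [own[(i - true_lag) % N] + intruder[i] for i in range(N)]

def score(lag):
    # Correlate the received counts against our own firing pattern.
    return sum(received[(i + lag) % N] * own[i] for i in range(N))

best = max(range(100), key=score)
print(best)  # the correlator peaks at the true round-trip lag
```

The interferer's pattern is uncorrelated with ours, so it raises the noise floor by roughly sqrt(N) while the true echo contributes a peak of order N/2.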
I suggest looking at a rotating lidar with an infrared scope... it's super, super informative and a lot of fun. Worth just camping out in SF or Mountain View and looking at all the different patterns on the wall as different lidar-equipped cars drive by.
A typical long-range rotating pulsed lidar rotates at ~20 Hz, has 32 - 64 vertical channels (with spacing not necessarily uniform), and fires each channel's laser at around 20 kHz. This gives vertical channel spacing on the order of 1°, and horizontal channel spacing on the order of 0.3°. The perception folks assure me that having horizontal data orders of magnitude denser than vertical data doesn't really add value to them; and going to a higher pulse rate runs into the issue of self-interference between channels, which is much more annoying to deal with than interference from other lidars.
If you want to take that 20 kHz to 200 kHz, you first run into the fact that there can now be 10 pulses in flight at the same time... and that you're trying to detect low-photon-count events with an APD or SPAD outputting nanoamps within a few inches of a laser driver generating nanosecond pulses at tens of amps. That's a lot of additional noise! And even then, you have a 0.03° spacing between pulses, which means that successive pulses don't even overlap at max range with a typical spot diameter of 1" - 2" -- so depending on the surfaces you're hitting and their continuity as seen by you, you still can't really say anything about the expected time alignment of adjacent pulses. Taking this to 2 MHz would let you guarantee some overlap for a handful of pulses, but only some... and that's still not a lot of samples to correlate. And of course your laser power usage and thermal challenges just went up two orders of magnitude...
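The beam-spacing geometry above works out as back-of-the-envelope arithmetic. This Python sketch uses the post's illustrative figures; the 200 m range is my assumption, not from any datasheet:

```python
import math

# Rough numbers from the discussion above (illustrative only).
rotation_hz = 20  # revolutions per second
for fire_rate_hz in (20_000, 200_000):  # pulses per second, per channel
    pulses_per_rev = fire_rate_hz / rotation_hz
    spacing_deg = 360 / pulses_per_rev
    # Separation between adjacent beam centers at 200 m range, in cm:
    sep_cm = 100 * 200 * math.tan(math.radians(spacing_deg))
    print(f"{spacing_deg:.3f} deg -> {sep_cm:.0f} cm at 200 m")
```

At 20 kHz that's 0.36° and roughly 126 cm between adjacent beam centers at 200 m; even at 200 kHz it's still about 13 cm, well beyond a 1" - 2" spot diameter, which is the "successive pulses don't even overlap" point.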