Camera-Only Autopilot Requires Automatic High Beams Enabled (reddit.com)
88 points by BoorishBears on May 29, 2021 | hide | past | favorite | 99 comments


Disclaimer: I work in the autonomous vehicle industry, but the opinions are my own. I roll my eyes (heh) when people say that since human drivers only use their eyes, AVs should too. What a load of bull. Cameras are getting really good, but their HDR is still subpar compared to the human eye's range. What about stereo vision and how we use two eyes? What about how your eyeballs can articulate while maintaining stereo vision? What about how your neck also extends your vision range? What about how your eyes instinctively track moving objects? There's so much more to "human vision" than just the eyeball.


Everything you mention seems possible to replicate with 2 cameras, a few sensors (and X years of AI/software dev). Which is the point of people saying cameras are good enough if human eyes are good enough.

Or am I missing something?


Human eyes are far better sensors in many ways than the CMOS sensors used in cars. They have variable-focus lenses, finely controlled irises, and are on gimbaled mounts in the head. They have integrated shades/shutters and cleaning mechanisms. The head itself is on a gimbaled mount which itself can be moved. Attached to the eye mount are inertial sensors, audio sensors, and an accelerometer.

A camera, or even a stereo pair of cameras, mounted in or on the car will provide inferior imagery to the control system compared to what eyes provide to the brain. They have less dynamic range and no articulation. If you wanted to replicate human-style vision you'd need a bunch of fixed cameras and inertial and acceleration sensors, all on top of AI that's better than what Tesla's been demonstrating.

LIDAR is the most straightforward augmentation for fixed cameras because it can build very accurate depth maps and aid image segmentation. You need fewer fixed cameras if your spatial model is built with LIDAR. You're in even better shape if those systems are augmented with radar.

While humans don't have LIDAR and such, our visual systems are highly developed and augmented with highly developed proprioception. Trying to replicate it with just cameras and tons of processing power is a fool's errand.


I first worked on autonomous cars in 2007 for the DARPA Grand Challenge, and even back then fused sensing was where it was at. Modern cameras are better, but they're no replacement for the eye. The best thing we can do right now is take high-quality cameras and augment them with things like radar and LIDAR to get close to human-eye-level perception. It makes the AI's job more about the macro driving problems and less about vision. Look at the string of crashes of Teslas into white box trucks on bright days.

I remember the first time we had problems with our LIDAR on a matte black surface that would have been easily spotted by our camera, and vice versa with a shiny white surface in direct sunlight relative to the car that was easily picked up by LIDAR but nearly invisible to the cameras.


It seems insane that Tesla continues to shun LIDAR. Is it just pride at this point? Apple sells an $800 tablet with it built in, so while obviously the Tesla unit would have to be a lot larger and more powerful, I can’t see the cost argument making sense when the cheapest car they sell costs 50x as much (and the most expensive will be 250x).


I think at this point Tesla's camera-only system is an issue of pride for Elon Musk more than anything. He makes declarations and then refuses to backtrack on them ever unless he is literally forced to do so.

In theory a camera-only autonomous system can work effectively, but in practice there are innumerable edge cases where it doesn't work well, and edge cases are where catastrophic failures happen. If you had infinite processing power and error-free AI you might be able to cover many edge cases, but Teslas have neither.


Worked on autonomous X for the US government. For any value of X, it's a fool's errand to try to reduce the amount of information going into your GNC (guidance, navigation, and control). Multi-spectral, multi-viewpoint, inertial-calibrated, and lots and lots of onboard models and processing.


Two cameras, plus as-of-yet unknown human vision to synthesize depth info correctly for arbitrary scenes, plus the ability to articulate side by side by 8-10 inches, plus a very efficient liquid coating plus cleaning mechanism (blinking), plus the ability to deploy anti-glare shades proactively (hands).


Don’t forget the self maintaining general intelligence with a minimum of 14-16 years of real world training (minimum, varies by state) to understand context and environmental factors, go to highly trained specialists when sensors seem out of calibration, etc.


So that’s the good side. Now add in the bad sides. Sometimes drunk / high. Often tired. Night blindness as you get old. Maybe cataracts.

Beating a human driver in their prime with cameras may be hard. Beating tired / drunk / old drivers is a different story.


Depends on how buggy the firmware is and how poorly maintained the sensors are I’m guessing?


>plus as-of-yet unknown human vision to synthesize depth info correctly for arbitrary scenes

I have an adversarial example.

https://wallpapercave.com/wp/InAcQKW.jpg


I think he's saying that the "X years of AI/software dev" are really the important part, much more than the mechanics of the visual sensing. And that the human brain and optical system doing that is what's not so easy to replicate in a machine.


The best cameras are about 80% there, compared to human eyes. At least, in some respects. There are no cameras that are 80% there in all respects. And that’s when compared to low-quality human vision that is attached to low-attention/highly distracted humans.

Now, there’s this little thing called The Pareto Principle.

That last 20% is going to take a long time to achieve, and is going to be very, very expensive.

Are you willing to roll a ten-sided die every time you get in the car, where only if you roll a three or higher do you get to arrive at your destination unhurt, on time, and without major incident?


2 cameras with 3-axis rotation and xyz movement to look around obstructions and see things out of reach from a fixed perspective.

Systems wise, that's going to be a lot more complex than having more cameras. Of course, if you've got enough cameras you can probably get better visibility than a human, and continuously process all viewpoints. But 2 cameras is definitely insufficient.


Plus, you don't only use your eyes. Your ears sense sound, the balance-sensing thingies in your inner ears sense motion, and even the vibration sensed by your butt feeds into your brain's awareness of what's going on.


The human eye has incredible dynamic range -- it's something like 25-30 stops total (although you can't see them all at once). Compare that to the 10-12 stops for a high quality DSLR and the 5-7 for the types of cameras going into cars.
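
A quick back-of-envelope illustration (figures are the rough stop counts quoted above, not measurements): each stop doubles the representable brightness ratio, so the gap compounds fast.

    # Rough sketch: dynamic range in stops -> contrast ratio (2 ** stops).
    # Stop counts are the approximate figures quoted above, not measured values.
    for name, stops in [("human eye (total)", 27),
                        ("high-quality DSLR", 11),
                        ("automotive camera", 6)]:
        print(f"{name}: ~{stops} stops = ~{2 ** stops:,}:1 contrast ratio")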


The dynamic range is the kicker. Automotive cameras are already grayscale to capture as much light as possible and not compromise vision at night, but it's still far inferior to the human eye (and even that is bad! but it's the benchmark Elon chooses).

No surprise then that the vision-only "level 5 autonomous driving" system would want the high beams on...


That's quite a one-sided list which, imo, doesn't really prove anything about whether human eyes or cameras have the overall advantage. We could make a similarly one-sided list in favor of cameras: e.g., one could cite cameras' faster adaptation rates, 360° vision, the ability to capture infrared photons, the possibility of wider stereo separation, etc.


We don't really use stereo vision for the distances required for driving. Our eyes are simply too close to each other for that.

Stereo vision is mostly usable below 15 ft/5m range, if even that.


While I think self-driving will be just fine with cameras, human stereo vision works at much longer distances than 15 ft: you can easily test it by looking at a few distant trees (150 ft), and then closing one eye with a hand and blinking. You'll see that there is good Z-layer separation with two eyes and none with one eye.
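
A rough sketch of why that works (illustrative numbers only, assuming an average ~65 mm interpupillary distance and a stereoacuity threshold of roughly 20 arcseconds): the disparity angle is about IPD / distance, and it stays above that threshold well beyond 15 ft.

    import math

    IPD = 0.065                              # interpupillary distance in metres (assumed average)
    threshold_rad = math.radians(20 / 3600)  # ~20 arcsec stereoacuity threshold (assumed)

    for d in (5, 15, 45, 150):               # distances in metres
        disparity_arcsec = math.degrees(IPD / d) * 3600
        print(f"{d:>4} m: disparity ~{disparity_arcsec:6.0f} arcsec")

    print(f"stereo depth signal fades around ~{IPD / threshold_rad:.0f} m")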


A question I always wanted to ask: why can't stereo vision replace lidar?


It's just less accurate, more error prone, and more limited. Especially in bad conditions and when there is low contrast.

Stereo vision can also be fooled by things like reflections.

Stereo vision at the human level also requires head movements and so on.
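
A hedged sketch of the accuracy gap (all parameters assumed for illustration): stereo depth error grows roughly with the square of distance, dz ≈ z² · disparity_error / (f · B), while a lidar's time-of-flight error stays roughly constant.

    # Illustrative only: assumed focal length, baseline, matching error, lidar error.
    f_px = 1400.0      # focal length in pixels
    baseline = 0.30    # stereo baseline in metres
    d_err = 0.25       # disparity matching error in pixels
    lidar_err = 0.03   # typical lidar range error in metres

    for z in (10, 30, 60, 100):  # metres
        stereo_err = (z ** 2) * d_err / (f_px * baseline)
        print(f"{z:>4} m: stereo ±{stereo_err:5.2f} m   lidar ±{lidar_err:.2f} m")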


Also people don't realize how important blinking is. Any camera system that is not thoroughly self cleaning is going to have significant problems after a relatively short period of time. We don't have cameras built to withstand the elements day-after-day like our eyeballs.


Doesn't lidar have the same problem then?


What about how birds use wings to fly?

And what about how fish use fins to swim?


That seems very reasonable? If the auto high beams aren't accurate, that's a problem. But if I'm driving I want control of the lights, and you can't fault the self-driving system for wanting the same? As long as it doesn't want to control the playlist on the stereo…


Very reasonable. Cameras for front collision avoidance need as much visual distance as possible, hence the auto high beam requirement. High beams = more night distance.
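
A back-of-envelope way to see it (figures assumed, illustrative only): the distance needed to stop from highway speed already exceeds what low beams typically illuminate.

    # Stopping distance = reaction distance + braking distance. All figures assumed.
    def stopping_distance(speed_ms, reaction_s=0.5, decel_ms2=7.0):
        return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

    v = 70 * 0.44704  # 70 mph in m/s
    print(f"stop from 70 mph: ~{stopping_distance(v):.0f} m")
    print("low beams reach roughly 50-60 m; high beams roughly 100-150 m")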

Tesla is hiring a DSP engineer for electronically scanned radar, so the rumor is that this (camera-only safety systems) is temporary until they're ready to deploy their own, more advanced radar sensor (trained off of the visual neural net) to replace the crude Continental radar sensor they've deprecated.


The unreasonable part has to do with the reality of auto high beams.

Even the best implementations dazzle other drivers.

Auto high beams are only realistic when used where normal high beams are. That is, mostly on single-lane roads with no dividers, little to no street lighting, and where opposing traffic doesn't appear often.

-

It seems like such a small thing compared to driving a whole car... but this is one of the few vision features that directly affects other drivers on a regular basis.

The comments explain how, for example, vehicles with tall ride heights like commercial trucks get dazzled because their taillights sit much lower than their mirrors.

It's just not realistic to leave your high beams "potentially on at all times" unless you're willing to dazzle other drivers.


Modern auto high beams work fine, but unfortunately modern (adaptive matrix) headlights are illegal in the US.

https://www.autoweek.com/news/technology/a34417809/nhtsa-sti...


The "auto" in "auto high beams" is that they switch off for oncoming traffic.


If only naming things based on their intended function made them work like that?


Tesla should put lidar on the car and be done with it. That's what Lucid has done:

https://electrek.co/2021/05/26/lucid-reveals-ux-intuitive-us...


The point is, adding lidar to the car does not mean you are "done with it". If that were the case, Tesla would have done it long ago. Lidar is great for precisely detecting 3D surfaces, but it helps very little in characterizing those surfaces. Lidar can tell you precisely how far away a sheet of metal is; it won't tell you whether that sheet of metal is a traffic sign or what it says. Lidar doesn't detect lane markings and many other things. Also, there is the challenge of merging the lidar data with your camera images.


LIDAR absolutely detects lane markings, along with street signs, if you’re close enough.

Edit: the reason here is that road paint (and the marking on a sign) is typically retroreflective, while roads (and regular painted sheet metal) are not. Retroreflective materials are very easy to detect with LIDAR. Even if you are on older roads that don't have retroreflective paint, white objects and black objects also tend to show up differently.
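
A minimal sketch of that idea (point-cloud layout, ground height, and thresholds are all assumed): retroreflective paint comes back at much higher intensity than asphalt, so a crude lane-marking mask can be just an intensity threshold on near-ground points.

    import numpy as np

    def lane_marking_points(points, intensity, ground_z=-1.8, z_tol=0.15, min_intensity=0.7):
        """points: (N, 3) xyz in the sensor frame; intensity: (N,) normalized 0-1."""
        near_ground = np.abs(points[:, 2] - ground_z) < z_tol   # roughly at road height
        retroreflective = intensity > min_intensity             # bright returns = paint/signs
        return points[near_ground & retroreflective]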

Edit2: there's also no reason you can't use detection algorithms to characterize pointclouds. You can also fuse the returned objects from the lidar sensor(s) with the objects from the camera(s). You have the latter problem even with just multiple cameras. Inevitably, whatever you are using for segmentation/classification will return different results on different sensors, for sensors that share the same FOV - you do need a strategy for handling this.

Tesla has not added LIDAR because of cost (and because of Elon's ego...) - they are one of the few companies going directly for consumer sales - more parts that cost $$$ and require calibration aren't good for that.


LIDAR is used in addition to 2D camera images. LIDAR is used to build a depth map and do image segmentation and works in low light and low contrast environments.

Single cameras (even stereo cameras) require a lot more assumptions about contrast and environment lighting to do image segmentation and depth maps. They're rarely as accurate as LIDAR and often far worse.

Depth maps and image segmentation are the building blocks of object identification. You can try to AI your way into those with a single camera but it'll be less accurate and more error prone.


> adding lidar to the car does not mean you are "done with it"

It will mean Tesla is done with trying to avoid adding it. They test with lidar so they might as well start using it:

https://insideevs.com/news/508669/tesla-model-y-luminar-lida...


Too expensive for mass market. Lucid’s vehicle is almost $190k.


Compare the $190k with a Tesla Plaid fully loaded. Lucid also has a $69k model.


There are multiple trim levels. Pick a cheaper one:

https://www.lucidmotors.com/air


None of these vehicles can be bought today. It's vaporware at the moment.



One question, then, is why it requires you to enable "Auto High Beam" manually.

If it wants auto high beams, it could just enable the setting itself and restore your own preference when you disable Autopilot (and even better: it could control the beams whenever Autopilot is enabled, irrespective of any user settings; after all, if Autopilot can steer, why couldn't it also change the lights?).
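
Something like this sketch of the suggested behavior (all names hypothetical, not Tesla's actual firmware):

    # Hypothetical sketch: Autopilot forces auto high beams on while engaged and
    # restores the driver's saved preference when it disengages.
    class HeadlightSettings:
        def __init__(self, auto_high_beam=False):
            self.auto_high_beam = auto_high_beam
            self._saved_preference = None

        def on_autopilot_engaged(self):
            self._saved_preference = self.auto_high_beam
            self.auto_high_beam = True

        def on_autopilot_disengaged(self):
            if self._saved_preference is not None:
                self.auto_high_beam = self._saved_preference
                self._saved_preference = None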


It actually does enable them automatically; it just doesn't re-enable them after the driver manually turns them off during Autopilot.


I guess less reasonable in the context of “they’re very unreliable, which is why I always keep auto high beams off.”

It’s not an irrelevant bit of context.


In 6 years of driving Teslas, I've never even thought to turn off auto high beams. I'm not even sure how I would!


As someone who frequently drives winding roads at night (I live on one), please learn how. Auto high beams are rude: they only dim once my headlights are already on you, and if we're going through a corner, that's close enough to put after-images on my retinas. Frankly, I think they are a safety hazard that should be illegal.

A human driver will see the headlights coming and dip their high beams preemptively.

It's doubly shitty because cars outfitted with auto high beams typically have those high-intensity lamps too. Ugh.


> Frankly, I think they are a safety hazard that should be illegal.

They are a safety hazard, and unless they consistently operate headlamps in accord with the applicable local law [0], using them already is illegal.

[0] e.g., https://leginfo.legislature.ca.gov/faces/codes_displaySectio...


This is incredibly rude. Always turn off high beams.


They’ve never failed to turn off automatically when a car is coming in my experience. In fact, they turn off more reliably than if I was doing it myself.


LOL. Let the car drive itself, but give up music control? Over my dead body!

It's funny to me because you'd expect music to be the lower-stakes problem, but it just highlights that driving a car is actually a much more well-defined problem than picking the music I like.


Do Teslas have matrix high beams?


No, because the government doesn't allow them.


Hey! The rule is and always has been: Driver chooses the music!


This is the only realistic way forward for attempting to do autonomous cars. Humans use visible-range light to make their decisions about where to drive. If automated vehicles use anything else, their decisions will be based on different information and will likely lead to different outcomes.

This is especially highlighted in the parts of the USA that have a real winter with permanent snow cover. In those areas the road surface is sometimes not visible for months at a time, and the lanes that form have little to do with the existing road markings. Instead humans sort of flock and form new emergent lanes and behaviors, and all the other humans (mostly) follow these visually in a low-contrast (white snow) environment. Any system that uses absolute positioning or non-visual cues will not be able to follow the emergent lanes and will cause danger.


I agree following fixed lanes with stored maps and GPS won't work on snow-covered roads. But you lost me in two places.

First, does following others' tracks necessarily mean visual-range light? I'd expect you could see tracks on infrared. Probably also LIDAR—this isn't my field of expertise but obviously tracks have depth, so I'd think it'd work if the resolution/precision is sufficient. (Not sure how well LIDAR works while snow is falling but that's a different concern than the one you mentioned.)

Second, does using visual-range light (for some or all of the input) mean having automatic high beams on? I very rarely use high beams myself. (Then again, maybe automatic high beams very rarely turn on too.)

And certainly if other vehicles/pedestrians are actually present at the moment—when it's most important to be behaving like them—infrared, LIDAR, and RADAR are options for seeing them.


It depends. Be aware, though, that the more you diverge from the sensor input others use, the more you tend to diverge from others' behaviors. You see different data.

It's important (often more important) while driving that you do what others expect, not just what the rules say must be done, especially in edge-case behavior.

Each sensor suite has its own pros and cons. LiDAR can have real challenges with reflective surfaces or highly absorptive ones (wet and slick, oily, snow); its range is based on return signal strength. There are also problems with many LiDAR sensors being drowned out by daylight.

Time-of-flight sensors (really a type of 'broadcast' LiDAR) have similar issues, combined with some weird edge cases around reflective geometries or certain surfaces.

Passive visible-light sensors have issues with contrast (high signal strength drowns out low signal strength in nearby areas) and lack the time-of-flight information LiDAR provides. They are generally cheap, though, and give us a signal we tend to think of as 'obvious'.

Active radar sensors (including phased array) also provide very useful signals, with their own pros and cons.

Sonar, same.

Ideally you'd have 360° coverage from enough different sensors that you can do sensor fusion and detect and exclude a sensor in situations where you're hitting a known problem for that sensor suite. Looking into the sun? Well, visual and potentially LiDAR/ToF data is iffy, so switch to sonar and radar. In a high-EMF environment, or surrounded by metal? Switch off radar, perhaps.
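
As a toy sketch of that "detect and exclude" idea (condition flags and weights invented purely for illustration):

    # Toy example: zero out or down-weight sensors known to be unreliable in the
    # current conditions before fusing. Flags and weights are made up.
    BASE_WEIGHTS = {"camera": 1.0, "lidar": 1.0, "radar": 1.0, "sonar": 0.5}

    def sensor_weights(conditions):
        w = dict(BASE_WEIGHTS)
        if conditions.get("sun_in_fov"):          # looking into the sun
            w["camera"] = 0.0
            w["lidar"] *= 0.5
        if conditions.get("wet_or_snowy_road"):   # absorptive/slick surfaces
            w["lidar"] *= 0.3
        if conditions.get("high_emf") or conditions.get("surrounded_by_metal"):
            w["radar"] = 0.0
        return w

    print(sensor_weights({"sun_in_fov": True}))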

Those cost money - equipment and development - however.


> It’s important (often more important) while driving that you’re doing what others expect, less what the rules say must be done - especially on edge case behavior

The first rule of the road is to avoid accidents. From there flow rules like driving in a predictable manner and giving up right of way when necessary.


Unfortunately, as self-driving cars make apparent, that rule is about as clear-cut and practically useful from an engineering perspective as Asimov's three laws of robotics.

Say you are in a climate with lots of snow and slush, on a road with zero lane visibility, and everyone is moving along too fast for conditions (unable to stop within the distance they can see).

Do you 1) go slow in the right-hand lane, potentially leading to a pileup? 2) keep up with traffic (maybe trying to stay on the slow side), and then maybe get in a wreck if a deer jumps out in front of you? 3) know this is a common situation for that road in the current weather and avoid it entirely? 4) some variant of these?

In many cases it can only be judged retroactively, or, as Waymo and others are finding out, it can only really be respected by being so paranoid and conservative that you can't function in the real-world environment that presents itself, and/or by requiring what are essentially walled gardens so the environment is controlled enough to give you the certainty you need. The Waymo answer here is 'don't drive at all'. Which is great, unless you're running out of food and a bigger storm is coming in, or you'll lose your job if you don't show up in the next 39 minutes.

Humans manage to be somewhat functional in these kinds of environments - we’ve had to be, evolutionary pressure wise. We’ve got a ways to go before self driving cars will be a help instead of a hindrance here.


Humans make a lot of mistakes. Full self-driving cars cannot afford to make those mistakes. They have to be better than humans by a significant amount or you can forget adoption.


Arguably self-driving cars can unlock a lot of utility, which means they don't really have to be better; they could even be worse than the average human driver and still be a utilitarian improvement. Consider autonomously dropping off children (freeing up parents), providing more mobility to the disabled and elderly, and perhaps even reducing the overall number of cars that have to be owned (and thus produced). If we consider every minute spent in front of a steering wheel a minute wasted, then it would even save many QALYs.


For a lot of people, those minutes behind the wheel are among the few minutes in the day when they feel like free agents. Not every minute spent in front of a steering wheel is wasted.


Perhaps, but that doesn't fundamentally change my argument. Those people can keep driving if they like it but for everyone else it can still free up time and thus be counted towards the benefits.


The issue with human drivers isn't that we can't see well enough. It's that we're easily distracted and have slow reaction times. A self-driving car can be way better than human drivers without needing special cameras or other sensors that would give it superhuman vision.


We could also just continue legislation and enforcement of distracted driving.

The touch interfaces in cars have really been a step backwards in this regard.


In Switzerland we have very strict laws regarding just using navigation equipment or eating while driving. Yet for whatever reason cars with touch interfaces hiding important features are permitted.


Revisit this when we have actual AI to consider.


> If automated vehicles use anything else their decisions will be based off different information and likely lead to different outcomes.

"Leading to different outcomes" is exactly why autonomous driving is attractive... Why is that a cons for you ...?


At least in the beginning, self-driving cars need to be compatible with human drivers, who are also on the road.


Not to mention our entire driving infrastructure, which is wholly built around visual driving.


Because while humans and self-driving cars are sharing the same roads, it's important that they agree on where the lanes are.


By this logic we should not use radar for driving in adverse conditions just because it sees more clearly than a human driver.


It’s funny you say that. In heavy rain my car barfs at me that radar doesn’t work anymore.


Tesla vision - we can't source a crucial part and don't want to idle our factory.


Tesla has been saying they wanted to go vision-only for a while. The semiconductor shortage may have accelerated that plan slightly, but it's not the only reason for it.


With safety features deprecated until they figure out how to get it working without the radar sensor. You said there was a "transition plan"???


Tesla vehicles without radar still have Autopilot safety features despite what the media is saying: https://electrek.co/2021/05/28/tesla-autopilot-safety-featur...


How come it's still shipping model S and model X with radar then?


I think it's just a timing thing. Within a few months, I expect them to not ship with radar anymore either.


Maybe some at Tesla know that FSD will not happen before these models end up in the scrap yard. After ~7 years, most of these are either totaled or need costly repairs.



This link has some kind of broken geoip redirect, so here is the outline:

https://outline.com/PAbsD8


I can’t imagine ordering a vehicle and having parts and features removed from it before delivery, with a promise that they’ll fix it in a later sprint.

I’ll wait for other automakers to start making 300 mile+ range vehicles.


I have a Long Range Y ordered before the change was made. I received an email asking if I wanted to continue with my order due to the radar->vision change; one click approved it and I received my VIN. The bleeding edge has a cost, I suppose, and I'm unwilling to wait. It could be years before another automaker produces a similar CUV/SUV with a robust charging network [1] for ~$50k.

Not a Tesla apologist, just someone comfortable with a Devil's Bargain.

[1] https://supercharge.info/map


Here you go:

- https://rivian.com/

- https://www.lucidmotors.com/air

- https://www.ford.com/trucks/f150/f150-lightning/2022/

- https://electrek.co/2021/04/05/mercedes-benz-eqs-specs-range...

Rivian deliveries will start in July, Lucid later in the year, and Ford in Spring next year. The F-150's range figures are with 1,000 pounds of weight onboard so the unweighted range will be greater.

The Audi e-tron GT and the Porsche Taycan can also achieve 300 miles range. If you put thinner, smaller tyres on you can achieve more range:

https://www.youtube.com/watch?v=LZ4dypyQokY


I am shopping for a car currently, and I want an electric car, but I have to regularly make a 300 mile drive on short notice. The only vehicles that I can purchase today which are EPA rated to make this trip without stopping have a Tesla badge on the front.

And with the chip shortage, there are a lot of other vehicles that are hard to find too.


Tesla's EPA numbers don't correlate well with real world results.

On paper, the Tesla Model Y AWD Long Range (EPA range 326 miles) should have more range than a Ford Mustang Mach-E AWD Extended Range (EPA range 270 miles). But in a real world test the Mach-E has more range:

https://www.youtube.com/watch?v=lSmSiOo-v8s

Another test:

https://insideevs.com/reviews/433010/tesla-model-y-70mph-hig...

https://insideevs.com/reviews/502506/mustang-mach-e-70mph-ra...

The Mustang Mach-E RWD California Route 1 trim should get more range again because AWD vehicles typically have lower range than FWD or RWD vehicles.

And you should also try the route you want to drive with A Better Routeplanner. It should give you a pretty good idea about which vehicles will be suitable (both in range and charging options):

https://abetterrouteplanner.com/


You're never more than 100 miles from a Supercharger (it's a fairly dense nationwide network), so even if you're not getting the full rated range during a drive cycle, your charge will be quick and uneventful (on Tesla's charger network).


You want to go where the infrastructure is going. Tesla's proprietary plug is being out-invested by car manufacturers and charging networks using CCS plugs. Additionally, more state and federal subsidies will be going into public charging, and these will only apply to chargers that serve all brands (as you'd expect). Examples:

https://www.whitehouse.gov/briefing-room/statements-releases...

https://www.nypa.gov/innovation/initiatives/evolve-ny

The smartest thing Tesla can do in North America at this point is to switch to CCS (like they did in Europe two years ago) and to open their chargers to all brands of EV.


One might prefer to buy and own an EV that charges anywhere today, not 3-5 years from now when public charging infra might have caught up to Tesla's network (for example, EVgo's recent SEC filing indicates they will be in breach of their contract with GM at the end of June, being ~80 charging stations short of their contractual obligations, allowing GM to break the charger deployment agreement with EVgo and seek ~$15 million in liquidated damages; not good!).

Tesla will open their network to other brands when other brands contribute to the $600 million+ capital investment Tesla made (as Tesla has previously stated publicly, and no automaker has taken them up on).


> One might prefer to buy and own an EV that charges anywhere today

Too bad you're not in Europe then. Sorting out the charging infrastructure standards is one of the reasons Europe is the biggest EV market at the moment (stricter emissions standards being the other):

https://insideevs.com/news/482202/europe-world-biggest-plugi...

In the meantime, watch North American CCS locations grow here:

https://afdc.energy.gov/fuels/electricity_locations.html

New locations are being added every week. And try A Better Routeplanner to see what CCS charging infrastructure is like in your area. In some North American locations there are more CCS charging locations than Tesla charging locations.

> Tesla will open their network to other brands when other brands contribute

The way you "contribute" is by simply paying for the electricity when you use it. You know, just like how Tesla can use other charging networks.


I keep track of CCS chargers versus Superchargers in the US. There’s a reason we still own Teslas. Even when aggressively comparing other EVs to Tesla’s experience, they came up short.

Tesla, in my opinion, is the least worst option based on all available information if you want an EV that isn’t tied to your home location (and you intend to travel far from home).


Well, it's going to cost you money to convert those plugs over to CCS (or alternatively purchasing a CCS adapter that won't give you full charging performance).

Tesla should also hurry up and get to an 800 volt platform for better compatibility with CCS.


It's all about light sensor sensitivity, right?

The human eye is incredibly dynamic, especially at low light levels once adjusted. A camera can get close to that sensitivity with a longer exposure and a big SLR lens, but a long exposure isn't an option at speed, and the sensors on my car are pretty tiny.
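
Rough arithmetic for why a long exposure is off the table at speed (illustrative figures): the scene smears by roughly speed × exposure time.

    # Motion blur ≈ vehicle speed * exposure time. Figures are illustrative.
    speed = 30.0  # m/s, roughly 65 mph
    for exposure in (1/1000, 1/250, 1/60, 1/30):
        blur_cm = speed * exposure * 100
        print(f"1/{round(1 / exposure)} s exposure -> ~{blur_cm:.0f} cm of blur")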

Is Tesla actually developing new CCD or CMOS tech? That seems unlikely. More speculatively, perhaps they are supply-constrained on radar and decided it was better to ship now and retrofit later. I hope that's not what's going on.


To be clear, the issue isn't requiring high beams; the issue is what the comments on the post describe:

that auto high beams, as implemented, dazzle commercial vehicles, can't handle reflective road signs correctly, etc.

Anyone who's owned a car with auto high beams is probably aware: you can't leave them on 24/7 unless you're willing to dazzle other drivers in many situations.


Why wouldn't they do IR highbeams?


What about when you have two Teslas on Autopilot coming at each other?

Does anyone really believe the cheap VGA webcams Tesla is slapping on their cars for their "autopilot solution" won't be blinded by even the slightest hint of a high beam?





