Autonomous DeLorean drives sideways to move forward (stanford.edu)
580 points by 80mph on Dec 22, 2019 | 188 comments


They did most of that back in 2015.[1] This is version 2.

It's interesting what's happening as the control theory people get into machine learning. The controls people don't typically run a neural net as a controller. They use the trained net as a tool for building a controller with known continuity properties. The trouble with pure neural net controllers is that they sometimes do something totally bogus for some data point within the normal input space. That's not OK in control systems.

This has all the math of machine learning plus the math of control theory and I don't understand it, although I sometimes look at the papers.

[1] https://news.stanford.edu/2015/10/20/marty-autonomous-delore...


>They use the trained net as a tool for building a controller with known continuity properties.

This sentence and comment are a revelation to me, so thanks!

I was very into control theory from 2001-2006 before jumping into Bayes nets and eventually ML. Having not really returned to CT since, it's always been a question in my mind how to use ML for SCADA & PLCs without running into the problems that you describe. As with most problems like this, it seems obvious now that it's described as such - though not actually obvious to those like me outside of it!


I would highly recommend https://www.youtube.com/watch?v=PYylPRX6z4Q as a way of framing the problem.

It seems like our options for ML are currently: self-play, dataset fitting, or human feedback. It's possible to solve any problem that can fit into one of those molds.

(Unfortunately the converse also seems to hold: if a problem doesn't fit into those categories, it seems very hard to solve with any form of ML.)

But, now that I think about it a bit more, the sentence "use a trained net as a tool for building a controller with known continuity properties" seems neither obvious nor straightforward to do. I'm decently well-versed in AI techniques at this point, and I can't really think how I'd sit down and write a program that would generate such a system. Anyone have any leads?

Basically, the goal is to make a neural net that helps you design a system that can be predictable. One way would be to write a loss function that penalizes unpredictability, and then let it run until it converges on a predictable control system. But that's a bit like saying "Just draw an owl." Not so easy to think of the actual code to do that.
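For what it's worth, the "loss that penalizes unpredictability" part can at least be sketched: a task term plus a crude smoothness penalty on nearby inputs. The names and weights below are made up; it's one way to encode the idea, not a known recipe.

    import torch

    def control_loss(net, x, u_target, lam=0.1, eps=1e-2):
        # task term: track the desired control action on the training data
        u = net(x)
        task = torch.mean((u - u_target) ** 2)
        # "predictability" term: the output shouldn't jump for nearby inputs
        x_nearby = x + eps * torch.randn_like(x)
        smooth = torch.mean((net(x_nearby) - u) ** 2) / eps ** 2
        return task + lam * smooth

The "draw an owl" part is everything around this: choosing the penalty so that minimizing it actually buys you a guarantee rather than just a vague sense of smoothness.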


One approach is that you have a semi-traditional control system, which has integrators, filters, adders, etc. All these components have tuning parameters, and tuning complex controllers is hard. But once you get them tuned, they behave in a reasonably predictable way.

The machine learning part is used to get a predictor. You operate the thing and train a model of what will happen for various inputs. The machine learning system is just an observer in this training phase; it doesn't control anything. You train from recorded data.

Then you use the trained model to tune the controller. The model lets you get an output for any set of inputs, so you can now choose input test sets which are suitable for tuning the controller, like changing one input at a time and noting the output change.
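A bare-bones sketch of that two-stage recipe. Everything here (the log files, the state layout, the PID structure) is an illustrative assumption, not necessarily what the linked group does:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # 1) Fit a one-step predictor x[t+1] ~ f(x[t], u[t]) from recorded logs
    #    (states.npy / inputs.npy are hypothetical log files).
    X = np.load("states.npy")   # shape (T, n_states)
    U = np.load("inputs.npy")   # shape (T, n_inputs)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    model.fit(np.hstack([X[:-1], U[:-1]]), X[1:])

    # 2) Tune a conventional PID against the learned model instead of the plant.
    def closed_loop_cost(gains, x0, setpoint, steps=200):
        kp, ki, kd = gains
        x, integ, prev_err, cost = np.array(x0, dtype=float), 0.0, 0.0, 0.0
        for _ in range(steps):
            err = setpoint - x[0]
            u = kp * err + ki * integ + kd * (err - prev_err)
            integ, prev_err = integ + err, err
            x = np.atleast_1d(model.predict(np.hstack([x, [u]]).reshape(1, -1))[0])
            cost += err ** 2
        return cost  # grid-search or optimize the gains over this cost

The learned part never touches the car during tuning; it just stands in for the plant so you can run as many "what if" experiments as you like.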

Automatically generating the structure of the controller (how those blocks are connected) is a separate problem. That's called "system identification".

See [1].

[1] http://www.mpc.berkeley.edu/research/adaptive-and-learning-p...


So what is the difference between using this approach and exploring the set of inputs to find the optimal one using a search algorithm like A* or any of the many others?

I’ve been struggling to find where machine learning starts and what is just good old statistics (used in data mining for ages) and search algorithms like A*.


They serve the same purpose (optimisation). The advantage of ML is that you don't need to be able to define an objective function; instead, you just have a data set.


Neural Networks are good at curve fitting which makes them useful for inference +- noise.

In control systems you need to do inference to predict feedback signals +- noise.

So you can use a neural network for that. Train it on input and feedback data from your samples.

Your control system still needs to account for predictability, smooth behaviour, etc., so you have the rest of your machinery to handle that.
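Roughly, the split being described: the net only supplies the inferred feedback signal, and an ordinary, tunable controller acts on it. A toy sketch (the predictor and gains are placeholders, nothing from the article):

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integ = 0.0
            self.prev_err = 0.0

        def update(self, err, dt):
            self.integ += err * dt
            deriv = (err - self.prev_err) / dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integ + self.kd * deriv

    def control_step(pid, predictor, sensor_window, setpoint, dt):
        # the net does the curve fitting: infer the feedback signal (+- noise)
        y_hat = predictor.predict([sensor_window])[0]
        # the conventional, predictable machinery does the actual control
        return pid.update(setpoint - y_hat, dt)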

This is not an end-to-end system though.

Anyone know if there are end-to-end deep learning controllers with guarantees on the output space?


There's https://journals.sagepub.com/doi/abs/10.1177/027836491985942.... I think Aaron Ames' group is working on theoretical guarantees of neural net policies as well.


Perhaps it could just try to learn the parameters of some kind of pre-existing control theory algorithm? Perhaps with a simulator it could fall into the first category you mentioned.


> "The trouble with pure neural net controllers is that they sometimes do something totally bogus for some data point within the normal input space."

A friend's theory on Google Maps is that it sometimes intentionally sends you down a non-optimal path in order to gather data on alternate routes. This friend isn't in the ML space, so this is just their anecdotal observation. Are they right, then?


That is my theory as well.

It sometimes suggests a route I don't normally take, then when I continue on the route I normally take, the ETA updates to say I will now arrive sooner.

Google Maps intentionally suggests longer routes.


It could be that Google Maps is saving everyone time by doing this. The Braess Paradox is really interesting, and this could be an attempt to prevent it: http://vcp.med.harvard.edu/braess-paradox.html


I would imagine introducing cars to a road itself causes a Braess paradox.


Introducing cars does cause more traffic.

However, the interesting thing about the Braess paradox is that even if you keep the number of cars constant, introducing a new road can make travel times worse.


It usually suggests three routes to me, each with a time estimate, and the highlighted one doesn't always have the least expected time. I figured it might be respecting my other preferences, but it could equally be trying to balance traffic across the whole network. Though this also happens when cycling, which has a minimal impact on congestion.


Unlikely. Usually a suboptimal route is chosen when there's traffic somewhere else. Sometimes traffic might clear up after the path is chosen. It's an appropriate theory though, given that this is Google we're talking about.


There is a class of routes I take, which involve going through a cross street between two east-west roads. Google Maps sometimes directs me to take one that does not end in a traffic light, which would leave me at a stop sign trying to make a left onto a major road across traffic. The last time I was stuck in that position, I basically made a right and then a U-turn.


"Alternate routes" are still normal roads people travel on. Since the travel time of a path is just the sum of the travel time of all path segments you only need people traveling every road to estimate the total time of any path. Path finding is well understood with many algorithmic solutions, so there is no reason to use machine learning.

Google's implementation is impressive, but I see no reason why it would benefit from people traveling suboptimal paths.


It doesn't have to be doing gradient descent on a neural network to be machine learning. You could easily generate small perturbations to generated routes if they didn't add much time and took the driver down an under-monitored road.


Some path times are destination-dependent in ways Google doesn't seem to handle well, like certain turns or freeway onramps vs the regular lanes next to them. You would need people "testing" those parts, not just the regular path parts.


Google Maps is not just path finding; it also takes real-time traffic info into account.


Which is still just path finding: estimate travel time for each segment, use travel time instead of distance as edge cost.
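To make the "same algorithm, different edge cost" point concrete, a toy version (made-up graph, times in seconds):

    import heapq

    def fastest_route(graph, start, goal):
        # graph: {node: [(neighbor, estimated_travel_time_s), ...]}
        best = {start: 0.0}
        prev = {}
        frontier = [(0.0, start)]
        while frontier:
            t, node = heapq.heappop(frontier)
            if node == goal:
                break
            if t > best.get(node, float("inf")):
                continue
            for nbr, seg_time in graph.get(node, []):
                cand = t + seg_time  # time, not distance, is the cost
                if cand < best.get(nbr, float("inf")):
                    best[nbr] = cand
                    prev[nbr] = node
                    heapq.heappush(frontier, (cand, nbr))
        path, node = [goal], goal
        while node != start:
            node = prev[node]
            path.append(node)
        return list(reversed(path)), best[goal]

All the hard work is in producing good per-segment time estimates from real-time data, not in the search itself.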

Don't take that as a dismissal of Google maps: it's the most impressive path finder I know, working with high performance over long distances, taking into account real time information, and calculating meaningful alternative routes to choose from. It's an impressive piece of technology that actually makes the world a better place by saving humanity a lot of travel time that can now be spent on better activities.

Google Maps would certainly be high on my list of best inventions of the last few decades. At the same time the routing is just a good path finder with a good travel time estimator informed by real time data.


Compared to distance, travel time is a more complex and nuanced variable. There are more things that a route planner could do, and I suspect that Google maps (and others) are doing some of them. They include taking into account the variance in the time to transit each route segment (which might be, for example, weather-dependent), preferring simpler routes (including taking account of statistics on missed turns), and taking into account the effect of its -- and others' -- recommendations on the flow of traffic (or even on public safety.)


Gathering data is required to do travel estimates. If Google were using handset data to update congestion information, it would want to send a percentage of users down suboptimal routes to verify they are suboptimal.
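No idea whether any real routing system works this way, but the exploration idea here is easy to sketch: occasionally pick a near-optimal alternative whose segment data is stale, instead of the current best guess. Entirely hypothetical names and thresholds:

    import random

    def pick_route(candidates, staleness_s, epsilon=0.02, max_penalty_s=60):
        # candidates: list of (route_id, eta_s); staleness_s: route_id -> seconds
        best_id, best_eta = min(candidates, key=lambda c: c[1])
        if random.random() < epsilon:
            stale = [(rid, eta) for rid, eta in candidates
                     if eta - best_eta <= max_penalty_s and staleness_s[rid] > 3600]
            if stale:
                # most under-observed acceptable alternative
                return max(stale, key=lambda c: staleness_s[c[0]])[0]
        return best_id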


I would be very upset if they did this, though. I'm not Google's test subject and it would be pretty f%$#@! dehumanizing if they treated me that way.

I'm not a bloody ant or part of their hyper-organism.


>The trouble with pure neural net controllers is that they sometimes do something totally bogus for some data point within the normal input space. That's not OK in control systems.

Is that actually true? Everything we do at work with neural nets seems to indicate continuity. We do stuff with mathematical simulations and imagelike processing. Latent spaces for autoencoders tend to be smoothly varying throughout the range of outputs.

I thought the problem is just that it isn't proven that neural nets are sufficiently smooth.


It's not a continuity problem. It's that "bogus" is defined with respect to what it should do, and that criterion requires a working understanding of the environment which isn't present in the model.

If you had all possible images of all possible roads in all possible (lighting, weather,..) conditions, etc. then the ideal behaviour is just a function from all inputs to all desired actions. In the absence of this infinity of data you need a model. Any model which has no explicit understanding of the causal behaviour of an environment is going to misbehave for one of those unseen data points (eg. an unseen road in a weather/lighting/etc. condition).

"Correct"(/Safe) behaviour is not a function of pixels. No statistical model which associates pixel patterns with action can be correct.


>Any model which has no explicit understanding of the causal behaviour of an environment is going to misbehave for one of those unseen data points (eg. an unseen road in a weather/lighting/etc. condition

I disagree. That sounds like an older interpretation of the behavior of simple deep nets like the Inception family. My experience with autoencoders and GANs is such that the nets can and unquestionably do interpolate between training data points. What's more, the latent spaces display order reflecting logic - much in the way that you can perform arithmetic on BERT encodings in what amounts to a form of logic. This is where SOTA is now and I think it's a strong step towards AI, though we're still far from it.

The trick is in understanding the boundaries in high dimensional space that your training data represents. So there is some degree of covering all your bases, so to speak, and that requires a new kind of intuition. These are very exciting times in tech/ML.


Again you're treating this as if it's a problem of finding a sufficiently smooth function over data points.

The problem is that the right behaviour is not a model fitted to that data. It isn't "fitted" at all.

When I turn up the thermostat the temperature in the room increases. If the thermostat is broken, the temperature does not. Predicting the effect of the thermostat on the room requires intervening in the room to find out if it is broken; it requires having a model of the room, of the thermostat, etc.

No system that is not in direct causal contact with its environment can adapt to it. The system -- as it is in contact -- needs to be explicitly modelling the causally relevant features of that environment.

This isn't a statistical problem. You cannot learn a function over images to actions because the environment is absent from those images.

An infinite number of 2D images contain no 3D information. An infinite number of 3D images contain no skeletons (ie., inner-structure of objects). An infinite number of images of object pieces contain no information on object behaviour. An infinite number of videos of an object behaving contains no information of its behaviour when broken. An infinite number of videos of all possible breakages in all existing environments contains no information about behaviour in new environments.

Animals solve this problem by playing with objects, building models of those objects (their causal properties, ie., how they interact with other objects). That requires being-in an environment and explicitly modelling it.

There cannot be a "non-bogus" system arrived at via ML. Statistics itself is deficient in providing tools to design systems that "do what they should".

Any paradigm which makes the cartesian assumption that "behaviour is a function of data" is necessarily incapable of intelligent adaptation. Environments "as data" are infinities.

ie., For statistics to work you need all the relevant variables of an environment to make the "correct decision", and all the data needed to train against those. That's infinite.

The "child on road" column isn't going to be fed into the machine. The machine does not, and cannot, even model what a "child" is. Patterns among fractions of infinites is not a basis for saftey (, nor for intelligence).


I think most of the systems do try to model the objects around them. Of course, that is also subject to its own statistical anomalies. See report of the Uber/pedestrian collision for what happens when that goes wrong.

I don't mean to dismiss your points, though. I wholly agree that if we really want to be moving towards an AI that is anything like animal intelligence, things like play are crucial. I'm a big fan of the attention schema theory. And if that is right, then building systems that can generate such models for their own mental state and others' is the way to "consciousness".

https://en.wikipedia.org/wiki/Attention_schema_theory

It's all well and good to build systems that can look at faces and statistically say, "oh they're mad". And I think people have this idea that maybe if we build enough of these systems and jam them together, we'll make an SI. But until we can build systems that can model why that person is mad, based on the system's own experience and models, we're just making parlor tricks.


I mean a very specific sort of model: a causal model.

A model of that kind has to be able to generate predictions given the agents interaction with the object. Ie., how will the thermostat behave if I turn on the air conditioning.

The problem is that we acquire these models during an entire lifetime of being embedded in an environment, esp. a social one.

A self-driving car is never going to predict human behaviour given its own behaviour without having lived a human life. The question of what it should do given how another driver/pedestrian/etc. behaves isn't something it can solve with "mere statistics".

It might be that human behaviour around roads ends up being extremely predictable, but I'm doubtful that's the case. I think a car is going to need a Theory-of-Mind to navigate complex social driving environments, ie., a causal model of the behaviour of other animals.


> I think a car is going to need a Theory-of-Mind to navigate complex social driving environments

As someone with a lot of experience in self-driving cars, my opinion has changed over the course of the last decade from "we can create smart enough models of these separate problems to create a statistically safer product" to "first you need to invent general AI."

It becomes immediately obvious as you encounter more and more edge cases: you would never even begin to think of these edge cases ahead of time, and you have no idea how to handle them (even when hard-coding them case by case - and you couldn't anyway, as there are too many), so you realize the car actually has to be able to think. What's worse, it has to be able to think far enough into the future to anticipate anything that could end badly.

The most interesting part of self-driving cars is definitely on the prediction teams - their job is to predict the world however many seconds into the future and incorporate that into the path planning. As you can guess, the car often predicts the future incorrectly. It's just a ridiculously hard problem. I think the current toolbox of ML is just woefully, completely, entirely inadequate to tackle this monster.


I happened to run across statistics (IIHS, I think) for fatalities to (human) drivers per registered vehicle-year, and the number is about 30 per million. You may or may not accept that this is the right type of metric, but it's interesting to think about what the number implies. I read that Uber had a fleet of 250 cars in testing. If an average human driver has a fatal accident once in ~33,000 vehicle-years, then to demonstrate parity in self-driving, Uber would have to operate their 250 vehicles for an average of 130 years between fatalities. Granted, this is driver fatalities, so hitting pedestrians would not count. But even so, it seems a high bar, and you would need hundreds of years of testing at the current rate to be confident you are better than human drivers rather than having a statistical fluke.
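Back-of-the-envelope, assuming fatalities are roughly Poisson and taking the 30-per-million-vehicle-years figure at face value:

    import math

    human_rate = 30 / 1_000_000   # fatal crashes per vehicle-year
    fleet = 250

    # expected interval between fatalities if the fleet merely matches humans
    print(1 / human_rate / fleet)     # ~133 years

    # vehicle-years of zero-fatality driving needed to reject "no better than
    # human" at ~95% confidence: exp(-rate * T) < 0.05  =>  T > ln(20) / rate
    T = math.log(20) / human_rate     # ~100,000 vehicle-years
    print(T / fleet)                  # ~400 years for a 250-car fleet

So "hundreds of years at the current rate" checks out under those assumptions.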


To simplify, if you have a map of Ankh-Morpork on your wall and you put your finger on Sator Square, your finger is not in Sator Square, no matter how detailed the map is. If you moved your finger around on the map, you would not know what would happen if you moved your finger around in Ankh-Morpork, nor whether it's safe to do so, no matter how detailed the map is.


Yes but the control-theoretical approach can't interpret pixels at all.

But translated to the problem at hand, your argument is correct.


The Uber incident investigation showed how a continuous input generated wildly varying output in practice. In theory, the existence of adversarial images and adversarial training has been studied for a while, but the state of the art can still produce singularities around continuous inputs.

start here: https://arxiv.org/pdf/1312.6199.pdf


> I thought the problem is just that it isn't proven that neural nets are sufficiently smooth.

I think this is exactly what the GP was trying to say (at least, this is what I understand from it). The continuity bit is about the (non ML) controller, not about the neural network.

Nearly all neural networks generate continuous output. They do interpolate between their learned points. The problem is that it is very hard to verify (and impossible to predict) the interpolation function, so it may cross outside of the safety area at any point.
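A tiny toy illustration, with made-up data: the fitted net below is continuous and matches its training points, but nothing constrains what it does between them, and checking the whole input space is exactly the hard part.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    x_train = np.linspace(-1, 1, 8).reshape(-1, 1)
    y_train = np.clip(x_train.ravel(), -0.5, 0.5)   # "safe" outputs live in [-0.5, 0.5]
    net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=5000).fit(x_train, y_train)

    x_dense = np.linspace(-1, 1, 2001).reshape(-1, 1)
    print(np.abs(net.predict(x_dense)).max())
    # nothing guarantees this stays <= 0.5 between the training points,
    # and proving a bound for every possible input is the unsolved part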


That was roughly the thesis of Fuzzy Logic (Kosko 1993); complex control systems that used neural nets during design to extract "rules" from the system under control.


What papers are you talking about? Can you please link to them? I don't see anything about neural networks in the papers about this experiment.


> They did most of that back in 2015.[1]

Just in time for the 30th anniversary of Back to the Future. Nice.


Anyone have more technical information on this? This sounds really cool.


Can we just stop for a minute and admire the fact that there are people out there who have found a way to get paid to program a DeLorean to drift?


More like, if you can program a DeLorean to drift, someone will pay you for it.


That's definitely not true. Making something cool and useful doesn't necessarily translate into making money. In this case, these people are getting funding because they convinced someone that this was useful research, probably through a grant application of some kind.


My thoughts exactly


Back in the late 70s, my high school driving instructor said "don't tell your parents, but the best way to learn how to control a car is to go out to the school parking lot on a snowy night and spin donuts." Funny that they are finding that artificial intelligence can use the same method.


He's not wrong. When there's solid snow, I love to find an empty parking lot and just do some stupid maneuvers. It really lets you get a feel for what works and what doesn't. Like, if I turn my steering all the way and slam on the brakes, how far do I slide? Where do I end up? It's really good for getting comfortable with all kinds of scenarios.


I've never tried it with a RWD car, but as a teenager, I thought the behavior of FWD in the snow was really fun. Go around a turn as fast as possible, and push on the gas pedal, it slides towards the outside, front first. Let off on the gas, and the front snaps back to the inside of the turn. I also buried my car in a snowbank more than once and had to dig it out.


I always thought RWD was more intuitive, and in this video the car appears to be RWD. It probably depends on what you grew up with.


This group has done good work, and at talks about their work I was able to come away with an interesting bit of trivia.

When they try to compete with the best racers in the world, they do lose by a consistent amount. It turns out the human racers are constantly pushing the vehicle+tires to the limit of control to understand the sharpest possible turns they can make without losing time. As the tires wear, the human is in constant learning mode, adjusting turn approaches as the coefficient of friction changes.

Work on the DeLorean will hopefully feed into their racing controls and then they might be able to beat the best human racers in the world on arbitrary tracks.


Somewhat reminds me of the progress airplanes made in the 1970s with fly-by-wire technology. Specifically, the way it was utilised in the F-16.

Moving from directly pilot-controlled surfaces (rudder, elevator, flaps, ailerons, etc.) to using a computer to convert the pilot's command input into servo actuator commands. Flight surfaces had much more authority, much better performance could be achieved, but the flight computer was in charge of safety and keeping the aircraft within limits.

https://en.wikipedia.org/wiki/Fly-by-wire


Steer and brake by wire is a safety thing; all manufacturers so far have kept a mechanical link in place despite electric assistance because of the potential for lawsuits. Infiniti has gone the closest to full steer-by-wire so far that I'm aware of.


Well, a very key factor of steering is that it connects your fingers to the interface between tyres and road surface and imparts critical information about what’s going on there. If you decouple the steering completely from the mechanical bits you have to recreate that channel of information. Which isn’t necessarily easy to get right.

Even servo assisted steering systems can easily become a bit numb. If you have ever driven a tractor it is very different from a light sports car without power steering for instance.


I drove an old redneck work truck once that someone installed assisted or power steering into. It also had a giant steering wheel from before that stuff was installed so it was comically easy to drive.


I’ve heard steer by wire can have faster feedback than pure mechanical systems. Even a mechanical steering has lag in cogs and there is never 100% stiffness in the steering column, etc.


I think the poster above wasn't talking about input lag, but about the fact that the steering wheel acts as both input and output - you control the car through it, but you also get back information on the state of the car on the road. If your motion was simply interpreted by a computer, you would lose that information; so you in fact need a two-way system, one that reads your inputs and controls the car, but also reads the state of the car on the road and conveys that information to you.

Sort of like how gaming driving peripherals added force feedback to make the controls more intuitive.


This is untrue. I have also tested throttle by wire systems and found more lag. The auto industry needs it for emissions and markets it as better, it is not.



So is the general idea that by removing the automated controls, which normally limit drift so human drivers don't lose control, they're able to gather data at the extremes of car handling?

I'm curious how this would be applied in the real world. It almost seemed to suggest that a self-driving car would operate with those stability controls turned off, allowing it to perform evasive maneuvers that would be impossible in a human-driven car.


Drifting is objectively less effective and less safe than normal maneuvering and braking.

Drifting as a sport exists to look cool and stylish. It's like figure skating with cars.

(There is one exception- off road racing on loose surfaces requires drifting. But driving that way off road is only useful for speed at the expense of safety and reliability.)


What a grandiose statement with no backing whatsoever. "Objectively"?

Drifting, as most things, has its place. If you've ever gone karting on a wet track, you know that you can't really place first against competent drivers unless you use drifting to a significant degree.

Same thing with off-road driving, or driving in general that requires rapid direction changes with low surface traction.


It seems the longer term objective is to learn how to steer the car while in a skid, not intentionally drift. This is a useful defensive driving skill. You may not have time to recover before a collision, and getting the car comfortable with avoiding obstacles while sideways seems a good way to help.


Another exception of a sort, and I believe the one this research is meant to address, is when the drift is initiated unintentionally. Say you hit a patch of black ice mid-corner, for example. This kind of work can expand the envelope of the control system to encompass these scenarios.


I don't think this is true. If you hit black ice and the tire skids, the problem is you just have no friction. Modern traction control systems already account for this as best they can - I don't see how this model would help any more except for trying to go faster through patches of black ice. And even then, drifting inherently requires traction in order to modulate the throttle and control the vehicle.

If you can't get the wheels to connect with the road, the first thing you'd want to fix are your tires - not your traction control system. Most people don't recognize how much better modern winter tires are compared to all season tires for driving on snow and ice.

https://youtu.be/atayHQYqA3g


The thing is: by the time the car has traction again it might have rotated around the up-axis. Getting back from that configuration to a less unstable configuration can very well improve if the car knows how to drift, especially how to go in the desired direction even before the drift has ended. Doing so would make the (result from the) initial loss of control on the black ice more limited and more controlled, hence less risky.

tl;dr: nothing you can do on the black ice, but once you have traction again there might be faster and safer ways than "first stop drifting, second start driving in the correct direction".


Yes, this is exactly what I meant by a drift that was unintentionally initiated, such as by hitting a patch of black ice. Thanks for elaborating.


If you hit black ice you are screwed anyway. The most useful thing your car could do is to detect black ice before you hit it.

I’ve seen some technologies for doing just that. Black ice isn’t very easy to distinguish from other surface conditions cheaply, but the technology exists. (It just has to reach volume production).


The very best way to deal with black ice is to avoid it entirely, simply do not drive if the conditions susceptible to generating it are present.


For a lot of people, that's equivalent to telling them not to leave their homes for six months.

It's possible to drive safely in black ice conditions with the right tires and appropriate levels of caution.


I live in a coastal city in Norway. You might as well tell me to hibernate :-)

But on a more serious note: it isn’t that hard to cope with if you are prepared. In general by choosing proper tyres and learning to drive in those conditions. In particular by learning to read the conditions so you can anticipate where there’s localized black ice.


Yeah, but even then the autopilot needs to adapt to the different conditions; it can't just throw its hands up when the conditions are hard and let the driver crash on their own (at least for the ideal fully autonomous system).

So you need a "friction" (lateral and longitudinal) input to the model, so that the model's forecasted car position matches the car's actual future, to allow the autopilot to plan the correct avoidance maneuver.


You're assuming Autopilot can't currently account for losing traction (this is probably true). But unexpectedly hitting a patch of black ice won't favor one model over the other - in either case you'll use the same methods to slow down. The only difference would be if you also had to avoid an object some distance ahead, and the drift model helped you maneuver around it by applying the throttle.

It's extremely difficult for me to contrive a scenario where this would help avoid an accident over a standard traction control model found in any modern automobile. But technically I guess it's possible it could help.


> contrive a scenario

Pretty easy actually. The car ahead swerves to avoid an obstacle. The autopilot doesn't see the obstacle until the car ahead swerves. The autopilot has to change lanes to avoid the obstacle.

If it has recently snowed, some lanes will have different traction than others.

This is what happens in most of those pileup videos you see on YouTube, so while not common (there are a handful a year, after all) they aren't just hypotheticals.


You're right - I guess it's not that hard to imagine that scenario. But how would the car know which lanes have traction? If the car can reliably detect black ice it can avoid it without ever having to slip/skid.


Yeah, that's the hard part. Humans can't identify it visually, so the car will probably be able to identify it only after it loses traction, reducing the margins significantly.


Why wouldn't a model that knows how to drive in winter-conditions be better than one that doesn't? There are more effective actions than hitting the brakes and hoping for the best available, one of which is to continue driving the vehicle in a manner that is compatible with the conditions. Don't make abrupt changes to the configuration of the car and search for traction and positive outcomes. Of course this is greatly simplified by not going 90mph on all-year tires.


How quickly can a car measure the available traction and respond to brief loss events?

Can on/off detection response times and recovery strategies be improved for black ice (and hydroplaning)?

If a car hits black ice, should it power itself down or should it max out the CPU looking for the instant traction returns on any wheel and do whatever it takes to try to slow down?


Modern traction control systems available on production cars and motorcycles have a polling rate of up to 1000Hz. You don't need self driving tech to handle most low traction situations.

Black ice is an exception because once you start sliding on it there is not much you can do to regain control except ride it out and hope you don't hit anything before you get out of the ice.


Can't do anything while you're on the ice, but once you're past it you may still need to react to avoid a collision, and you may find yourself in a sideways slide at that point. So a control system that knows how to handle a drift could be beneficial at that point.


My mouse has a 1000Hz polling rate. Is 1000Hz really good enough for excellent traction control? I'm sure it's fine for most situations, but the small amount that it's not good for are probably the ones that matter the most.


I expect you could turn your mouse's polling rate down to 200Hz and not notice any difference, even gaming. Also, a mouse is a very precise control; much more so than the controls of a car. I could certainly be wrong, but my intuition suggests that a 1000Hz sensor polling rate would be fine for an autonomous vehicle.


Stability control is just another algorithm running in the car so it's logical that a more general system would supersede it. Keeping existing code in place would limit car's possibilities and increase complexity.


I drive a lot on slippery surfaces. My car has stability control from 7-8 years ago. I’m sure a lot has improved in that time, but it is very obvious that the stability control in my car is more about trying to “rescue” me than assist me. Meaning, it will interrupt normal flow to “correct” more than it will assist.

If you extend the envelope for what driving dynamics you can handle I would imagine you could make a much more smooth and safe experience.

That being said: I’ve driven somewhat newer cars on wet race tracks, at speed, and I have to say I’m a bit impressed with how well they behave under stress. Still meddlesome, but not as dangerous as they used to be.

(But track use in slippery conditions is outside the envelope for a road car, so you're better off with the assists turned down or off to avoid surprises)


It's a demonstration that they can predict the result of a wide range of control inputs across a wide range of vehicle states.

There's nothing unusual about a car without stability control. ABS and traction control have only become standard features relatively recently. Vehicle stability control builds on traction control by attempting to predict the driver's intent instead of just preventing slipping.

I think that you are right, that it enables more drastic responses in situations that require them. The normal control mode would probably focus on passenger comfort though.


I think you have the right idea, this is just dipping the toes of autonomous vehicles into the pool of kinetic friction.


I'd like to see this car vs humans in the Hyperdrive series on Netflix


Don't get me wrong I think this is an impressive feat and I am always impressed to see universities pull off these kinds of complex system and control projects. Yet, there are a few things that I would like to mention.

The article says that the car does "doughnuts with inhuman precision" and they want to develop vehicles that can handle "emergency maneuvers or slippery surfaces like ice or snow". In this context, I feel this demo falls a bit short. The car can drive with superhuman precision, but it also gets superhuman capabilities like inch-precision localization (and an IMU is my guess) or superhuman steering wheel turning speeds. And the vehicle is heavily modified for this specific use-case. Drifting looks stable in the video but I really cannot judge how much easier it is with this car than a normal car. In addition, the asphalt also looks fresh and clean. I would like to see what happens if it suddenly encounters wet surfaces (or ice).

edit: typos


Inch precision localization isn't out of the grasp of human drivers. Kimi Raikkonen at Monaco coming millimeters from the walls repeatedly comes to mind.

It also isn't out of the abilities of a bog standard optical SLAM+IMU.


Don’t forget a “normal” car is typically front wheel drive. I’m fairly certain drifting like this is nearly impossible with front wheel drive.


Correct in this case. And to a pro driver, this is not drifting. This is donuts. Mashing the gas to lose grip is the lowest tier of 'drifting'. Real drifting as racers do it uses mostly the momentum and changing inertia from braking at corner entry to get sideways, using the gas only to modulate traction. That can be done in any car, FWD included.


I genuinely hope Tesla gets a 2020 Roadster to lap the Nurburgring on autopilot with nobody inside. If it lives up to promises, it will easily set the all-time lap record as the fastest vehicle ever.

It will be an interesting time when a $200k production car can whip the pants off a multi-million dollar F1 racecar.


Elon Musk claimed in a tweet that the roadster can beat a time of 6:44 min on the Nürburgring Nordschleife, which is (currently) 20.832 km long.

The current lap record for street legal cars is 6:40 set by a Porsche GT2RS MR.

The overall record is 5:19.55 set by a Porsche 919 Evo.

F1 last raced the Nordschleife in the 70s. The lap record is 7:06.4, but the track was 22.835 km long back then.

The Roadster might have a chance at beating the lap record for street-legal cars, but physics will make sure that it will not come even close to an overall lap record.


Based on what we know of physics, weight, and grip, elon is, as usual, full of it.


Don't forget the 2020 roadster will actually have rocket boosters. (Elon has given his word on this more than once)


Elon also gave me his word that Vernon Unsworth is a paedophile, so maybe not.


There's no way a Tesla Roadster is beating the current record holder, the 919 Evo. The Roadster would be magnitudes heavier, plus it wouldn't have the aero for the downforce.

https://youtu.be/PQmSUHhP3ug


Obviously we have to take it with a grain of salt, though Elon said it will. So far we've heard of an 8.8 sec quarter and 0-60mph in 1.9s. The chief designer also said it will better all publicly announced numbers.


Tesla cars historically are pretty terrible on the track. They're too heavy, and they wind up overheating.

The torque from the electric motors means they do really well in a straight line drag race, but they're fairly pathetic at any race that involves manoeuvring over a longer course.


The prototype of the Plaid Model S recently was significantly faster around the 'ring than the new Porsche Taycan.

https://www.thedrive.com/news/30846/tesla-model-s-prototype-...


Better than the Porsche Taycan though, to take it somewhat more apples to apples. So there’s that. While having near double the range, a lower price, air suspension, frequent free updates including for performance and handling, seating more people when off track, having a non-vaporware fast charging network... it’s almost an unfair comparison against a one trick pony toy car (I’m sure you can cherry pick some special racing-only case where the Taycan steps up).


They've apparently made improvements to the cooling since the Model S, which could not complete a lap of Nordschleife without overheating.


I wonder if it would be feasible to make a fleet of race cars that have self driving technology that has each car communicating with the others and monitoring the human drivers, with the car letting the human drive but able to step in if the human tries to do something that would cause a crash?

I bet you could make some pretty good money with such a fleet running open races that members of the public can pay to drive in.

The self driving system could keep track of the number and severity of its safety interventions which could be used to give time penalties at the end of the race [1], so that the winner is determined by the skills of the humans.

Besides racing, you could also do car chases that recreate scenarios from action movies. Add in something to simulate guns, and you could do scenarios where a car with a driver and a couple armed passengers is trying to escape a couple pursuing cars, each with a driver and armed passenger.

[1] Or maybe actually do the time penalties during the race. If the self driving system has to take over from you, it could slow you down for a bit before returning control to you.


I doubt race car drivers (who are better drivers than the average car driver) would want a computer to yank their steering or do something unexpected in a race.

Autonomous cars aren't necessarily better than race car drivers... it would be interesting if human racers and autonomous cars competed.


There has been progress since the millennium on race traction control. In the beginning, it was a crutch for bad drivers. Some systems today are banned in racing, both for occasional advantage and for not-occasional mishaps that endanger other drivers. It always seemed odd to me that tech not safe enough for racing is now mandated on the road.


Article mentions each tire gets 7,000Nm torque from its electric motor. Is that a typo?


Probably not. Teslas, for example, have a ~9:1 drive reduction from motor output shaft to drive wheels. 1000Nm is a high torque motor to be sure, but not that hard to find.
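Quick sanity check, if the article's 7,000 Nm is torque at the wheel: 7,000 / 9 ≈ 780 Nm at the motor shaft through a ~9:1 reduction, which is in the same ballpark as a high-torque (but findable) motor.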


This could be a pay-to-ride-for-thrill startup idea. What a ride!


Until someone spills a bit of oil on the pavement and it drifts straight into a nearby wall because oily pavement is "out of domain" for its deep learning systems.


If you notice in the video there are no nearby hard walls, it's a giant open parking lot and the course is set up by soft pieces like plastic traffic cones. I'm sure a course could be made with appropriate margins for safety. Even so, in the ride world there are accidents but that doesn't stop people from riding.


Not to mention the fact that this system is monitoring the traction of all 4 wheels constantly and would detect the slip in milliseconds, transferring power to other wheels. It would be fine.

They probably already have a bunch of lubricant on that lot in order to reduce wear on the tires.


> They probably already have a bunch of lubricant on that lot in order to reduce wear on the tires.

That's a real skid pad at a real road course[1] with real rules and regulations[2], not some expendable engineering testbed greased up to save tires.

[1] https://www.thunderhill.com/renting/skid-pad

[2] https://www.thunderhill.com/s/Skidpad-Event-Guidelines-2018-...


On top of that, there's the deadman's switch on the console that stops the car if the driver releases it. Would definitely pay to ride in something like this.


It's an e-stop button (probably NFPA 79 Cat 0 implementation), not dead man's switch. This timestamp[1] clearly depicts an air gap between fingers and button while maneuvering.

[1] https://youtu.be/3x3SqeSdrAE?t=74


Hah you're right. That makes more sense, the other angles looked like he was holding it down the entire time. Thanks :D


It's the skid pad at Thunderhill. There are definitely walls, they are just off camera.


Yeap. That's where I sent my roommate from college to learn performance driving; a Korean dude with a CS degree from a UC turned California Highway Patrol (CHP) officer. (How many cops have you ever run into who understood Big-O notation?)

Disclaimer: I learned a thing or a thing and half sliding around the back roads of the Sacramento levee system with water hazards on both sides. }:]


I did a track day up there years ago with a guy who was CHP, and he was trying to explain to me that you had to use trigonometry when taking the radar speed of a vehicle at an angle...


Vehicle speed within traffic lane, or relative speed along a vector to/through the radar device - which vector might be at a significant angle to the traffic lane?


Doesn't this sort of movement cause a lot of wear on the tires, thus making it unsuitable for routine use?


It's not supposed to be for routine use. The idea is that if the car gets into a situation where stability is lost it will know how to handle it.

As an example, imagine an autonomous car is going around a bend and hits a large patch of black ice and goes into a slide. Current systems will struggle to handle the car and it could result in a crash. With this knowledge added the car will know how to handle slides and will recover easily.

Another example (given in the article) is if a person darts out into the path of the car it can perform a sharp turn and gracefully handle manoeuvring around them without losing control using this knowledge.


This is exactly right.

Recently, I drove a big 4WD SUV at about 5 mph onto ice in the middle of a 90 degree turn and lost traction in the front, and instead of doing anything helpful, the system decided the thing to do was engage the parking brake, which instead of stopping the vehicle put it into a slide, and almost got damaged because of it- when the vehicle stopped, it was less than half an inch away from a big transformer box in the front and less than 5 inches away from a concrete pole on the passenger's side.

I think I honestly could have done better without traction control than the stupid stutter-stutter click slide that came with it in that situation.


Yes, but.

Yes #1 is that this is mostly Drifting techniques, which are mostly for show/style competition. The tyre slip angles mentioned are around 40deg, far past the optimal level of grip.

Yes #2 Racing techniques are more explicitly seeking to optimize for maximum grip, often found around 3 to 6 degree slip angles (depending on tyres, tyre pressure, tread & road temps, road cleanliness, dryness, etc.).

Both still are in the ranges of tyre slippage, at the edges of control, and burn tyres at crazy rates vs street use (drifting just insanely so).

Moreover, as someone with race training & experience, I can say that anyone doing full race/drift techniques on the streets as an ordinary practice is a real a*hole & a hazard.

BUT, all that said, it is absolutely critical to have these techniques in your bag of tricks, available to use when something exceptional happens. In those rare-ish events, being able to use the full performance envelope of the vehicle is a real life saver, both for you and the others nearby.

So, for sure not every day, but these guys are absolutely right about the need to work on the full dynamic range of the performance spectrum.


> BUT, all that said, it is absolutely critical to have these techniques in your bag of tricks, available to use when something exceptional happens. In those rare-ish events, being able to use the full performance envelope of the vehicle is a real life saver, both for you and the others nearby.

are you referring to during a road race, or ordinary driving on ordinary streets? It would be great if you could try to provide evidence for the latter. I've never heard of accidents being avoided on highways because someone knew how to do racecar drifting. My own 30 years of driving experience would suggest that not tailgating, keeping a safe distance from the other cars, and being alert for those occasional race-car-wannabes whipping in between everyone, invariably always driving BMWs, is likely to be statistically much more effective.


Just to clarify here, I'm talking about driving on the edge of available grip, NOT the wholesale sideways drifting tech - that is mostly for show.

Of course, in road racing, these skills are critical multiple times per lap.

On the street - there are still too many examples to list from direct personal experience.

Long trips, find myself in a severe snowstorm, or an ice storm where it's too slippery to even stand, cars off the road everywhere, I'm able to drive (w/ managed sliding) to make it through to destination or shelter just fine. Similarly, coming onto a muddy or oiled patch, same thing -- very handy to be able to both minimize the consequences of lost grip, or recover from a sliding situation without hitting anything.

Also the occasions where an obstacle is in the road by surprise behind a visual obstacle -car pulls out, parked in road behind corner, whatever, available space is closing -- being able to drive with precision at the limits of available grip makes the difference between a <whew - close one!> and a <mash the brakes, turn the wheel, bash into whatever> incident.

Obviously, alertness is key to everything (even w/ the best skills), and of course maintaining safe distances, etc. is also key, and the basis of any sound skill set.

Edit: additional clarification, the traction controls in many modern cars also eliminate the option for many of the high-slip-angle moves. You'd have to turn off all those features, or unplug the fuse, which is not possible on the timescale of an emergency maneuver.


great, knowing how to oversteer at low speeds in snow, of course we all need to know that and yes with my current Subaru it's hardly something that really happens anyway, the scariest thing that has happened with that car was a sudden whiteout where I had to come down a hill that ended at a red light, for which cars weren't even able to stand still on said hill due to slipperiness. I basically plotted a course to what embankment I would use to break the movement in case I kept slipping but the Subaru was able to stop.

Overall I was curious if you were suggesting I'd need to go to racing school and learn drifting in order to keep my family safe. thanks for clarifying.


>> Overall I was curious if you were suggesting I'd need to go to racing school and learn drifting in order to keep my family safe. thanks for clarifying.

Excellent question

Absolutely need it? probably not. Many situations are like the tricky one you described, where picking the least-worst snowbank was probably your best move. That said, only chance will tell if anything more hairy comes up (the thing that wigs me out most is dashcams of things coming in from the oncoming lane -- gotta be really alert, quick, and dynamic to survive).

Highly recommended? Absolutely!

I cannot recommend enough going to some classes that are typically called Car Control Clinics, and are often offered by the local SCCA (Sports Car Club of America), BMWCCA (BMW Car Club of America), and others.

IMO, these should be obligatory for all new drivers. They cover the general principles of the limits of grip, and how to manage the car at the limits, and how to drive more precisely both at ordinary speed and in quick situations (e.g., sudden-lane-change drill), how to keep the car balanced in maneuvers, how to get the most out of brakes, steering, etc. I took my mom to one once, and she had decades of experience in snowy climates, and she learned a huge amount and had a ton of fun - started out timid and was smokin' the brakes by the end of the drills! A great investment of a day and entry fee to cover the site, and a lot of fun.

If you want to move up into track day classes, autocross, and road racing - you'll learn all kinds of things you never knew existed (true for me even w/ top-level experience in other speed sports) and have the most fun you can have with your trousers on

Please ping me if I can help you find options in your area.


>I've never heard of accidents being avoided on highways because someone knew how to do racecar drifting.

Knowing how to control oversteer is a crucial skill when driving on snow and ice.


this is very different from intentional racecar drifting. I can oversteer in snow without having racing school training.


not me, but my father's experience here:

He was on the way home from something in the winter with a sedan fully loaded with his friends, when a semi truck in the leftmost lane of a three-lane highway lost control of the trailer and it went sideways across traffic. He was in the rightmost lane and avoided crashing into the semi trailer by sliding his car into the concrete barrier and driving sideways on it, thus evading the semi trailer and saving him and his closest friends from what would've been serious injury and possible fatalities.

Driving sideways on walls is not good for vehicles- that stunt totaled the vehicle- every single body panel was damaged, the wheel rims were all toast, and the vehicle's frame and structure was not in good shape afterwards- but it did deposit its passengers on the other side of the trailer in one piece.

If my dad hadn't broken the rules he did- driving in the shoulder, driving with two wheels on the ground, pulling a handbrake on the highway, and excessively accelerating, he would likely not be around today.

Once when I was driving, I took a left-hand (cross-traffic) turn and a vehicle in the oncoming lane decided to speed up instead of slow down. If I hadn't stomped the accelerator pedal and burnt rubber, I would've had my first accident on my second day on the road.

This isn't to say that most of the stuff that people do is good, or smart- but performance envelopes of vehicles- how fast you can go from 0-60, 60-GTFO, and whatever speed you're going to dead stop matter quite a bit- not for normal driving, but for when it isn't normal anymore.

Speaking of which, my dad also once chased a driver who rear-ended someone and sped off to get his license plate number before he could escape- he hit about 120mph chasing the person across traffic long enough to memorize their plate number and make/model.


they have gotten this car to do complex and exotic movements via automated manipulations of the car's human-oriented interface, that is, a single steering wheel, four wheels that turn in the same direction, and then the brakes / accelerator.

It seems though that if you are actually trying to design autonomous cars that feature greater maneuverability than what is normally possible you'd instead ditch the whole assumption of a single steering column and all of that and just consider all the wheels as independent, or maybe add more wheels that engage for some kinds of maneuvers, or probably a whole lot of other things that im sure all the car people here know are possible if you are no longer constrained to controls created for a single human with only four limbs and extremely limited coordination and reaction time.


The people working on this car are designing a vehicle control system, not an actual new vehicle.



That's incredible footage just on its own :)


Imagine if the car could also do that in the air, where the "ground" is unstable and changes all the time, but the autonomy system should still maintain a glide path within a narrow range.

Wait, that's called landing autopilot.


Can an autonomous car beat the best version of Frank Martin in The Transporter series, let alone any of the Fast & Furious drivers?

Now we have the answer. It’s game over people! Even the fictional drivers in the movies have no chance against real world AI today.

This means in the future when an autonomous car or robot is chasing you — you may as well give up. You just better hope it doesn’t get cheap enough that someone can simply hire a swarm of robots or autonomous cars to kidnap people or incapacitate them or wreak havoc en masse (eg crashing into 10,000 gas stations at once).


> Besides, MARTY, the driver

:-D

Too many software engineers overlook the fantastic opportunity for cleverness when naming things. It's one of the hardest problems in computer science, but one of the most rewarding.


I was half expecting a Toyota Sprinter Trueno and Eurobeat. Pop references aside, though, what is this useful for? I thought cars were designed to not skid normally due to ABS?


I believe the idea is partially to understand how to train models to recover from skids. ABS gives you an assist, but it may be that the car can drift itself out of danger?

> “We’re trying to develop automated vehicles that can handle emergency maneuvers or slippery surfaces like ice or snow,” Gerdes said. “We’d like to develop automated vehicles that can use all of the friction between the tire and the road to get the car out of harm’s way. We want the car to be able to avoid any accident that’s avoidable within the laws of physics.”


ABS is a very human-centric system that tries to keep the car controllable under low traction, often by sacrificing some aspect of movement. The neural nets here throw all that out of the window, and control the car with direct physics, without being once or twice removed from reality.

The video also has some very interesting points about how the AI should be able to control the acceleration and braking on each wheel individually - so we really don't need to limit AI driving with human safety or control systems.


That, and also ABS only prevents the wheels from locking (much) under braking. It doesn't help you if you enter a corner too fast, or hit a patch of ice, or stab the throttle too early on exit, etc. Traction control helps with the last of those scenarios, and modern stability control can start to help with others. But as you said, this system would theoretically be able to go straight to the most effective inputs in any given situation, since it doesn't have to deal with a human driver in the loop.


This was what finally killed my long-held attitude of "we don't need no stinkin' ABS". No matter how good I am, I will never be able to independently limit-brake each wheel of my car.


Yet, in snow, non-ABS cars still stop faster. Hmmm.


Same on gravel, and for the same reason. Neither of those are something I run into particularly often, though. Painted lines on wet roads that will cause the wheels on one side to lock up, though...


ABS is a system to prevent you from locking your wheels when you step on the brake. It doesn't prevent your car from breaking traction.


It sounded like it precomputed the route, steering, drift for the whole course first, then executed it.

If the friction were variable and unknown it would get completely lost.


Which is very common, even in controlled drift events. Over the course of a single run, tires can completely change grip coefficients.


So you got the sense that it was recalculating based on the response it got?

I wasn’t clear on that from the article.


I combed through the article and a previous article linked in it, but I can't find a concrete answer one way or the other. My intuition is that it would have to be recalculating its inputs as the environmental variables change. They could have modeled expected changes in the tire friction coefficient, or perhaps the task is more forgiving of that changing variable than I expected.

One of their stated goals is to drift in tandem with another human driven vehicle though, so they are likely aiming for complete, adaptable autonomy.
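
For what it's worth, one way such a loop could work in principle is to keep re-estimating grip from how the car actually responds and feed that back into the next command. A minimal sketch (the model, gains, and numbers are all made up; this is not what the paper describes):

    # Toy loop: the controller requests lateral acceleration based on its
    # current grip estimate, the "tires" saturate at the true (hidden) limit,
    # and the estimate is nudged toward whatever was actually achieved.
    G = 9.81

    def estimate_mu(measured, requested, prev_mu, alpha=0.2):
        """If the car delivers less acceleration than requested, shade the
        grip estimate downward (and vice versa)."""
        if abs(requested) < 1e-3:
            return prev_mu
        return (1 - alpha) * prev_mu + alpha * prev_mu * (measured / requested)

    true_mu = 0.9   # actual grip, unknown to the controller
    mu_est = 0.9    # controller's belief
    for step in range(20):
        if step == 10:
            true_mu = 0.4                       # car drives onto a slick patch
        requested = 0.8 * mu_est * G            # ask for 80% of believed grip
        measured = min(requested, true_mu * G)  # tires saturate at the real limit
        mu_est = estimate_mu(measured, requested, mu_est)
        print(f"step {step:2d}  true mu {true_mu:.2f}  estimate {mu_est:.2f}")

Note the limitation: once the requests drop back under the real limit, the estimate stops updating, so a passive scheme like this only learns about grip while it's pushing against it.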


"This leads to the physically insightful result that one can use the rotation rate of the vehicle's velocity vector to track the path, while simultaneously using the yaw acceleration to stabilize sideslip."

This sounds like what every racer will tell you: When the car is sliding, you control your direction with the throttle and control the angle of the slide with the steering.
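
Roughly, since sideslip is the angle between where the car points and where it's actually going, its rate of change is the course rate minus the yaw rate, which is what lets you split the two jobs. Here's a toy two-channel version of that split (the structure and gains are my own illustration, not the controller from the paper):

    import math

    def drift_controller(path_heading_error, lateral_error, sideslip,
                         sideslip_target, speed, yaw_rate):
        # Channel 1: path tracking. Ask the velocity vector to rotate back
        # toward the path (a simple proportional law on the course errors).
        k_course, k_lat = 1.5, 0.8
        desired_course_rate = (-k_course * path_heading_error
                               - k_lat * lateral_error / max(speed, 1.0))

        # Channel 2: sideslip stabilization. Because
        #   sideslip_rate = course_rate - yaw_rate,
        # choosing the yaw rate (and hence the yaw acceleration we command via
        # steering/throttle) lets us drive sideslip toward the drift target.
        k_beta = 2.0
        desired_sideslip_rate = -k_beta * (sideslip - sideslip_target)
        desired_yaw_rate = desired_course_rate - desired_sideslip_rate
        desired_yaw_accel = 4.0 * (desired_yaw_rate - yaw_rate)  # inner loop, made-up gain

        return desired_course_rate, desired_yaw_accel

    # Drifting at 38 degrees of sideslip with a 40-degree target, half a metre off the arc:
    print(drift_controller(path_heading_error=0.05, lateral_error=0.5,
                           sideslip=math.radians(38), sideslip_target=math.radians(40),
                           speed=12.0, yaw_rate=0.6))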


it’s generally a bit more complicated than that.


Not really. The hard parts are initiating the slide and recovering from it, in the correct trajectory. The actual slide can be done with a rope and a 2x4.


Most cars sold in Europe these days are FWD.


If self-driving tech has become so advanced, I wonder why Waymo is not delivering anything, why Tesla is taking forever to deliver self-driving, why Uber discontinued its self-driving program, and why Apple ain't serious about an Apple car yet.

Does this demo hide something that we should know?


Because this has not much to do with what Waymo or Tesla are doing. What they've developed here is a control system, a way to make the car move in a certain way, follow a certain course. The problem that autonomous driving needs to solve is to have a car that can reliably perceive and understand what's happening around it, navigate any complex situation involving pedestrians, intersections, road work, parking, etc. The drifting DeLorean doesn't have to worry about any of those things, it only has to work in an empty parking lot. There might not even be safety measures to prevent it from hitting people. They use precise GPS to track the car's position (where it is on the course), so it's possible the car doesn't have cameras or LIDAR to perceive its surroundings.


I think in this demo they simply paint a path for the car to follow. The cones etc are there just to help us see whether it's following the path well. If there was a cone in the middle of the path, it would've run right over it.


Yes, this. Not trying to take anything away from this, but this car is essentially "running a program." I would love for someone to come on here from the program and publicly repudiate what I'm about to say. I bet they ran that "dance" hundreds of times, no cones or hay or obstacles, got it to the point where it was precise and repeatable enough to then set up the cones and barriers (I mean, we have robots that place parts to within 0.001" or better repeatably).

What would impress me (seriously): go out to the L.A. Coliseum parking lot, set up cones / a track that the car can definitely handle (based on the radii & turns in this video), let the car "study" the track (I don't care if a drone looks at it & takes measurements), and then have the car calculate and execute a perfect path through the course on the first try without hitting anything.

Again, not trying to crap on this, I'm just saying that anyone who's worked in controls, automation, engine control, etc. (I've done all these & more) isn't utterly wow'ed by this. Kudos, but we didn't just put a man on Mars.


Adaptive cruise control with stop-and-go can understand its surroundings and knows when to stop and go. If on top of that you add a painted path to follow, as in this demo, then self-driving should work. But in reality it doesn't, so there must be something else about self-driving that's too hard to solve. I want to know what that is.


A couple of traffic cones out in the street could direct this thing onto the sidewalk, where it would drift over pedestrians or slam into a brick wall without hesitation. There's a big gap between a car that can drive on a closed course and one that can navigate the real world for millions of miles without killing anyone.


I'm impressed by how well this complicated page renders without javascript.


Ken Block, eat your heart out. He should trademark an "enable Block mode".


I seem to recall a video of Thrun's/Stanford's DARPA car "Junior" drifting in a parking lot autonomously, around 2007.

Cannot find the video, though.


My response is twofold: I think the car's maneuvering is impressive. And how did they get enough power out of a DeLorean to drift? Hail to Mr. Fusion!


Exactly, that terrible French engine makes a DeLorean a terrible car to actually drive. This is likely one of the many with an engine swap. As cool as they look, boy are they crappy cars. The Lotus-designed suspension can't make up for being overweight, underpowered, and assembled in Northern Ireland during rough years. I wonder if the Tesla truck will fare better.


Seems like a DeLorean isn't necessarily that expensive, prices range from $20,000 to $45,000.


Looks like they had fun with it given the car didn't matter as they replaced the brakes, engine and suspension.


Are you telling me they built an automated drift car... out of a DeLorean?

The way I see it, if you’re going to build an automated drift car, why not do it with some style?


Next step: Controlled flip!


"It's almost like as if we did some math for this." lol, awesome.


Funny, I built a mechanical rig to do figure 8s years ago. It was some rope and a few pulleys, and caster adjustment on the car. This seems like using a laser to peel an apple for internet points.


> This seems like using a laser to peel an apple for internet points.

How so? From what I can tell they're working to improve autonomous vehicle handling in extreme situations. That seems very practical and useful. I'm curious how you arrived at that conclusion about "internet points." Because the question I really want to ask you is against the rules here.

Also the capabilities of your mechanical rig pale in comparison to what this can apparently do.


Soon we will have autonomous F1.



http://selfracingcars.com/ - I run this, and it's held at the same track as in the video.


That would be a lot of fun to see one AI vehicle competing in a field of drivers.


The asymmetry in mortality risk might lead to different strategies being used by human drivers vs. AI.


So, in Back to the Future: Tokyo Drift, did MARTY meet Sally when Flash tried to bring back Doc (Hudson)...


You’re telling me that not one Stanford engineering student could come up with a mock Flux Capacitor?!!


I want a longer version of the drifting with some classical music as a backdrop


Thunderhill sighting!


Where we're going, we don't need roads.


    echo "$NICK" | sed 's/80mph/88mph/'


I see James Bond plot twists in the future.


"You built an AI... into a DeLorean?"


I guess if you can afford a DeLorean you can afford to keep replacing the tires you burn up by drifting on pavement. I'd rather have my car programmed for long tire life and high mpg please.


In the US, a new DeLorean isn't all that expensive compared to the typical "jelly-bean" vehicles that are the most common. Back in 2005, I narrowed a vehicle purchase decision down to a new DeLorean, a used Lotus Elise, or a new G coupe. I went with a bone-stock manual G that I kept for a few years before I got into Cold War-era West German and French vehicles. I'm not a rich person, far from it.


The worst part of the 2009 crisis is that the European used car market is shot.

I got a 3-year-old GT in 2007, and with that money today I could barely get a Panda.


> a new DeLorean

They're still making them?


Sort of. Someone bought the leftover parts from the original factory, and brought back the company name, selling completely refurbished ones. They've been trying to sell completely new ones as a low-volume automaker, but haven't successfully navigated the bureaucracy on that so far.



