Hacker News

I wonder if he's going to explain this position at all? Or if it's just posturing against Waymo et al. Seems like more data is always better than less data for this application?


They explain it in the software talk. In short, I think his thesis is:

Lidar is expensive, gives surprisingly limited data (just points of depth), and is a crutch that will only get you so far. It will not get you to full autonomy, so why not spend the time and money on vision, which will (as humans prove)? They also do use radar and ultrasonics (non-visual sensors).

At some point your machine must be good enough, via machine learning, to give a very good impression of understanding the intentions of other road users, pedestrians, etc. One example they gave was a distracted pedestrian with a phone. Lidar tells you nothing save that an obstacle is on the pavement; it won't tell you they might step out without looking, but machine learning on a massive dataset can.


One thing LIDAR can do that Tesla's ML vision software cannot is identify freeway dividers and trucks turning in front of the car.

Once Tesla can actually handle those two extremely basic tasks maybe they can start talking about how vision is better than LIDAR. Until then, it's just more hot air, and that's not even taking into account Tesla's claims of using ML to predict the actions of uncontrolled independent agents in the field of view.


They explained that in the talk too


Because lidar is expensive, error-prone, and can't be manufactured without flaws. So if you build a car, the lidar will fail well before the camera. The only question is whether cameras alone can do the job.


Why not both? In fact, Waymo has lidar, radar, and cameras (Tesla has just the last two and is working overtime to demonize the first). More data (and more redundancy) is better - assuming you have the computing power to process all the sensor input. As far as I can tell, the rubbishing of lidar is FUD.


Because if you're using lidar for depth and you try to mass-produce it, there's a high chance the lidar will break, and because it's normally so accurate, people will assume its readings are correct. Eyes and cameras may sometimes get distances wrong, but the failure is usually in software, not hardware. Lidar opens the door to both kinds of failure. So it's easier to use cameras and just try to make a camera as good as lidar.


All sensors and the systems they operate with have flaws. There is no way around this - it's just the real world. Even cameras will have flaws, or they can develop them over time (lens degradation, image sensor dropping pixels or gaining stuck pixels, etc).

That is the point behind what is known as "probabilistic robotics" - the real world has noise, and you need to be able to deal with it. Don't expect perfect sensors, don't expect a perfect environment.

Self-driving vehicles are the application of probabilistic robotic principles to a real-world task - quite possibly one of the most difficult tasks for the field. To be quite honest, it's amazing how well it's worked in such a short period of time.
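To make the "deal with noise" idea concrete, here is a minimal sketch of the core probabilistic-robotics move: treat each sensor reading as a Gaussian estimate (mean and variance) and fuse them by inverse-variance weighting, i.e. a one-dimensional Kalman-style update. The sensor names and variance values below are illustrative assumptions, not real device specs.

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity."""
    # Weight each estimate by the inverse of its variance:
    # noisier sensors contribute less to the fused result.
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    fused_mean = fused_var * (mean_a / var_a + mean_b / var_b)
    return fused_mean, fused_var

# Hypothetical depth-to-obstacle readings in metres:
# lidar is precise (low variance), camera depth is noisier.
lidar_depth, lidar_var = 20.3, 0.01
camera_depth, camera_var = 21.0, 0.25

depth, var = fuse(lidar_depth, lidar_var, camera_depth, camera_var)
# The fused estimate sits close to the lidar value, and its variance
# is smaller than either sensor's alone: redundancy reduces uncertainty.
```

The point of the sketch is the last line: fusing two noisy sensors always yields a lower-variance estimate than either sensor on its own, which is the mathematical case for redundancy rather than a single "best" sensor.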

I think cameras and machine vision approaches will be needed for self-driving vehicles to be fully successful, but I wouldn't say that LIDAR should be counted out. It will probably be necessary - even required - to have all of the sensors currently being used, not just a subset.

One kind of sensor that hasn't been explored much, but that I think will be needed, is audio input: a wide-range microphone (probably binaural or stereo) to take in environmental audio and use it for driving cues. Simple examples might be vehicle horns, the squeal of tires, the revving of an engine indicating someone is speeding up aggressively, or an emergency siren.

There may even be other sensors needed or that could provide other data to fill in certain gaps of environment knowledge to help a self-driving vehicle navigate. I don't think any one or another should be discounted.


Cost is also a significant consideration on mass market cars.


>I wonder if he's going to explain this position at all?

Doubt it.


Yeah, it's nonsense. It's an after-the-fact reframing of a decision that was made for other reasons. Lidar + camera is better for this kind of problem. It's because current lidars are too expensive and too fragile (1-2 year life) to put in a production car, not because they aren't much better. If this weren't the case, he'd definitely be using them. Elon cannot use a lidar in a Tesla even if he wants to, so he's coming up with FUD to dismiss perfectly reasonable questions in advance.



