Apart from the map data going stale, you always have the problem of registering the vehicle's position within the map. GNSS data can have a really long-tailed error distribution (particularly in urban canyons and in tunnels). So you always need some way of building a local map and placing it within the high-detail global map. There is a pragmatic argument that if you already have a detailed local map, do you really need the global map for anything beyond high-level navigation tasks?
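To make the long-tailed GNSS point concrete, here's a toy sketch (invented numbers, not real GNSS data): a couple of multipath outliers from an urban canyon drag a naive averaged fix well off the truth, while a robust estimate like the median barely moves. This is why you can't just Kalman-filter raw fixes with a Gaussian error model and call localization solved.

```python
# Toy illustration only: five good fixes plus two multipath outliers.
import statistics

fixes = [99.8, 100.1, 100.3, 99.9, 100.2,   # normal fixes (metres along one axis)
         140.0, 135.0]                      # multipath outliers from an urban canyon

naive = statistics.mean(fixes)    # pulled ~10 m off the true position by two bad fixes
robust = statistics.median(fixes) # stays within ~0.2 m of the truth

print(round(naive, 1))    # 110.8
print(round(robust, 1))   # 100.2
```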
Can you elaborate a little on how the maps are used? The "go/no go" is basically the sort of thing I mean by "high-level navigation", the sort of thing a satnav system would help with. If they have nothing like that, it seems insane. I guess there is an ambiguity around how high-detail "high detail" is.
Neural networks are probabilistic. They output predictions and are rarely 100% confident. So when Tesla's visual neural networks assign inferred labels to things like cars, bikes, pedestrians, road barriers, etc., there is always some chance, say 0.1%, that they're wrong.
If you have other sensors, like LIDAR, radar, and high-resolution maps, then you can corroborate your predictions. For example, if the visual system predicts a 99.9% chance you can make a safe right turn, checking the LIDAR and the map provides additional checks, even if your map is out of date. Consider:
1) The camera system predicts you can make a right turn. You have no map, so you make it.
2) The camera system predicts you can make a right turn. You have a map, and the map says there's a wall there. You don't make the turn. If the map is wrong, you're still safe; you just missed your turn-off.
3) The camera system predicts you can make a right turn. You have a map, but it's out of date: a new barrier has been erected that the map doesn't contain. This is equivalent to situation #1.
4) Same as #3, but you have side radar, so you sense the barrier.
5) Same as #3, but you have side LIDAR, so you sense the barrier.
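The five scenarios above boil down to veto logic: each additional independent source that can say "no" makes the system safer, even when the sources individually are imperfect. A minimal sketch (all names and thresholds are illustrative, not anyone's actual stack):

```python
# Hypothetical corroboration logic for scenarios 1-5.
# Any available source can veto the turn; absent sources (None) abstain.

def safe_to_turn(camera_p_clear, map_says_clear=None,
                 radar_sees_obstacle=None, lidar_sees_obstacle=None,
                 threshold=0.999):
    """Return True only if no available source vetoes the turn."""
    if camera_p_clear < threshold:     # camera itself is unsure
        return False
    if map_says_clear is False:        # scenario 2: map vetoes (even a stale map fails safe here)
        return False
    if radar_sees_obstacle:            # scenario 4: side radar sees the new barrier
        return False
    if lidar_sees_obstacle:            # scenario 5: side LIDAR sees the new barrier
        return False
    return True

# Scenario 1: camera only -> you make the turn
print(safe_to_turn(0.999))                                                 # True
# Scenario 2: map says wall -> missed turn-off, but safe
print(safe_to_turn(0.999, map_says_clear=False))                           # False
# Scenario 3: stale map says clear, no side sensors -> same as scenario 1
print(safe_to_turn(0.999, map_says_clear=True))                            # True
# Scenario 4: stale map says clear, but side radar vetoes
print(safe_to_turn(0.999, map_says_clear=True, radar_sees_obstacle=True))  # False
```

Note the asymmetry: a stale map that wrongly contains an obstacle costs you a turn (scenario 2), while a stale map that wrongly omits one costs you nothing only if some other sensor catches it (scenarios 4 and 5).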
The problem is, Tesla is not only going "all in" on "we only need video cameras", but when Elon Musk was asked directly whether the lack of side radar was a problem at intersections, he again fell back on "video is all you need". If that's the case, why even have front radar?
Clearly, video is not enough, and if you have front radar, there's no principled reason to argue against side or rear radar, and by extension, against LIDAR.
All of this comes down to the fact that Teslas ship with a cheap, fixed sensor suite. They can't change it now, it's too late, and so they are trying to justify their previous design decisions with overconfident sales tactics and PR. I think this is somewhat reckless and dangerous to the entire AV industry, because the more Teslas that crash and kill people, the more all AVs will be tarnished as dangerous by the public.
Musk is acting like Tesla is SpaceX and they can afford to blow up rockets during iteration, but these are consumer products. People will trust these things to be safe, not to be "beta".
Ok, lots of separate issues there. In terms of actually using a map, I think my original point still stands: you need online navigation information to localize yourself within the map before you can use it. There is also the non-trivial issue of actually doing a Bayesian update of "can I go around this corner" given the error budget: how accurate the map is, how accurate my position within the map is, and how accurate my other sensors are. Weighting all these probabilities and updating the state in a principled way is a bit of a nightmare (given the long-tailed and multimodal distributions flying around).
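A drastically simplified version of that Bayesian update, just to show how the error budget terms compound (all numbers invented; real distributions are long-tailed and multimodal, which is exactly what makes this hard in practice):

```python
# Toy posterior for "the corner is clear", combining three error sources:
# map correctness, localization (am I even reading the right map cell?),
# and sensor reliability. Purely illustrative.

def posterior_clear(p_map_correct, p_localized, p_sensor_correct,
                    map_says_clear, sensor_says_clear):
    # The map prior only counts to the extent the map is both fresh
    # AND we are correctly localized within it; the two multiply.
    p_trust_map = p_map_correct * p_localized
    prior = p_trust_map if map_says_clear else 1.0 - p_trust_map

    # Likelihood of the sensor reading under each hypothesis.
    if sensor_says_clear:
        like_clear, like_blocked = p_sensor_correct, 1.0 - p_sensor_correct
    else:
        like_clear, like_blocked = 1.0 - p_sensor_correct, p_sensor_correct

    # Bayes' rule.
    num = prior * like_clear
    return num / (num + (1.0 - prior) * like_blocked)

# Fresh map, good fix, good sensor, everything agrees:
print(round(posterior_clear(0.99, 0.95, 0.999, True, True), 4))   # 0.9999
# Stale map (50/50): the sensor now has to carry nearly all the weight:
print(round(posterior_clear(0.5, 0.95, 0.999, True, True), 4))    # 0.9989
```

Even this toy shows the compounding: map accuracy is worthless without localization accuracy, because the trust terms multiply. The real problem is far worse, since none of these error distributions are actually well-behaved point probabilities.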
In a well-principled controller, lack of certainty about the state should simply result in a more conservative control action. In your vision-only scenario, the correct control under high uncertainty is caution: slow down and don't turn until the system can work out what is going on. This has nothing to do with sensors. It is possible to be safe with poor sensors, but it will probably result in unacceptable closed-loop performance (it will drive like grandpa). A big problem here is that it is really hard to get this stuff right. The probabilities will not be actual probabilities, and solving the correct control problem is intractable in real time (i.e. actually solving the stochastic program for the optimal control action).
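The "uncertainty should buy caution" idea can be sketched in a few lines: map the belief that the path is clear to a speed command, stopping entirely below some confidence floor. The thresholds and the linear ramp here are made up; a real controller would come out of the stochastic program mentioned above, which is exactly what is intractable in real time.

```python
# Illustrative "drive like grandpa" policy: commanded speed shrinks with
# uncertainty and goes to zero when the scene is too ambiguous.

def commanded_speed(p_path_clear, v_max=15.0, p_stop=0.9, p_full=0.999):
    """Map belief that the path is clear to a speed command in m/s."""
    if p_path_clear <= p_stop:
        return 0.0                  # too uncertain: stop and re-sense
    if p_path_clear >= p_full:
        return v_max                # confident: normal driving
    # Linear ramp between the two thresholds (conservative in between).
    frac = (p_path_clear - p_stop) / (p_full - p_stop)
    return v_max * frac

print(commanded_speed(0.85))    # 0.0  -> stop, wait for more information
print(commanded_speed(0.95))    # ~7.6 -> crawl through cautiously
print(commanded_speed(0.9995))  # 15.0 -> proceed normally
```

The catch, as noted above, is that this is only safe if `p_path_clear` is an honest probability; with a miscalibrated network, a confidently wrong 0.9995 produces full speed into a barrier.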
Engineering problems always have constraints, and I don't have a problem with a limited sensor set. Sure, it is always better to add more sensors, but money is a problem, and so are logistics. I do have a problem with doing this hard problem badly and practicing in public. It's a really difficult problem to solve; being cavalier probably isn't the way to go.
Visual systems have problems with obstructions and adversarial imagery. Side-facing radar, which can see under and around obstacles at long range, is a huge benefit at intersections, for example.
Musk swept this simple fact under the rug when asked during the press conference, again overselling a 100% visual solution.