Ok, lots of separate issues there. In terms of actually using a map, I think my original point still stands: you need online navigation information to localize yourself within the map before you can use it. There is also the non-trivial issue of actually doing a Bayesian update of "can I go around this corner?" given the error budget: how accurate the map is, how accurate my position within the map is, and how accurate my other sensors are. Weighting all these probabilities and updating the state in a principled way is a bit of a nightmare, given the long-tailed and multimodal distributions flying around.
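To make the bookkeeping concrete: if you pretend every error source is an independent Gaussian (which, as noted above, real distributions are not: they are long-tailed and multimodal), the map-plus-localization error budget and the direct sensor reading can be fused with a precision-weighted update. A minimal sketch, with made-up numbers:

```python
# Toy error-budget fusion, assuming independent Gaussian errors.
# All numbers are hypothetical.

def fuse(mu_a, var_a, mu_b, var_b):
    """Precision-weighted Bayesian fusion of two independent
    Gaussian estimates of the same quantity."""
    w = var_b / (var_a + var_b)
    mu = w * mu_a + (1 - w) * mu_b
    var = (var_a * var_b) / (var_a + var_b)
    return mu, var

# Clearance around the corner according to the map, inflated by
# localization error (variances of independent errors add):
map_mu = 2.0                       # metres of clearance per the map
map_var = 0.3**2 + 0.5**2          # map error + localization error

# Direct sensor measurement of the same clearance:
sensor_mu, sensor_var = 1.6, 0.4**2

mu, var = fuse(map_mu, map_var, sensor_mu, sensor_var)
# The fused estimate is always more certain than either input,
# which is exactly why the Gaussian case is deceptively pleasant:
# with heavy tails, naive fusion like this is overconfident.
```

The point of the sketch is that even this best-case arithmetic only works because everything is Gaussian and independent; the real problem is that neither assumption holds.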
In a well-principled controller, lack of certainty about the state should simply result in a more conservative control action. In your vision-only scenario, the correct control under high uncertainty is caution: slow down and don't turn until it can work out what is going on. This has nothing to do with sensors. It is possible to be safe with poor sensors, but it will probably result in unacceptable closed-loop performance (it will drive like grandpa). A big problem here is that it is really hard to get this stuff right. Probabilities will not be actual probabilities, and solving the correct control problem is intractable in real time (i.e. actually solving the stochastic program for the optimal control action).
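The "uncertainty buys caution" idea can be shown in a toy form: command a speed whose stopping distance fits inside a pessimistic lower bound on the estimated free distance ahead. This is a crude heuristic stand-in for the intractable stochastic program, not a real controller, and every number below is made up.

```python
import math

def safe_speed(d_mean, d_std, decel=4.0, z=2.0):
    """Max speed (m/s) whose stopping distance v^2 / (2 * decel)
    fits inside a pessimistic lower bound on free distance:
    mean minus z standard deviations, floored at zero."""
    d_safe = max(d_mean - z * d_std, 0.0)
    return math.sqrt(2.0 * decel * d_safe)

# Same mean free distance, growing uncertainty -> slower commands.
v_confident = safe_speed(30.0, 1.0)    # tight estimate
v_uncertain = safe_speed(30.0, 10.0)   # vision struggling: drive like grandpa
```

Note how the conservatism is entirely driven by the standard deviation, not by which sensor produced it; garbage-in uncertainty estimates would make even this trivial rule unsafe or uselessly slow.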
Engineering problems always have constraints; I don't have a problem with a limited sensor set. Sure, it is always better to add more sensors, but money is a problem, and so are logistics. What I do have a problem with is doing this hard problem badly and practicing in public. It is a really difficult problem to solve, and being cavalier probably isn't the way to go.
Visual systems have problems with obstructions and adversarial imagery; side-facing radar that can go under and around obstacles at long distances, for example, is a huge benefit at intersections.
Musk swept this simple fact under the rug when asked about it during the press conference, again overselling a 100%-visual solution.