There is the argument, though, that they are already safer than human drivers and that it would be morally objectionable to stop deploying them. I'm not saying I'm completely on that side, but it is a valid utilitarian argument.
Have you watched some of these videos in question? These videos of autonomous city driving are so bad that this claim seems absolutely laughable to me.
The burden of proof is on the person making the claim of safety. For Autopilot, the argument compared known AP deaths per total AP miles against deaths per mile for the entire US fleet... when the former almost certainly missed a share of the deaths and was accumulated under less demanding conditions (mostly highway miles).
This argument seems not fully thought through. First, "already" makes it appear that this property is monotonically increasing, i.e., that they will stay safer from now on even as more people use this feature. From what we see in the video, that cannot be concluded: if this technology is used in more vehicles, the rate of accidents may also increase, possibly rendering it less safe in total. We also do not know how many accidents were avoided only because other drivers prevented them. Second, the types of accidents will likely change as well, in ways that could make the technology less safe overall. Third, what if drivers are not as fully alert as they appear in this video, ready to quickly take over when the technology fails in such colossal ways as we see here? Does it still remain safer?
That argument is bullshit, built on aggregate Tesla Autopilot miles driven, which are largely cherry-picked ideal-condition miles, vs. human miles driven in all conditions.
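To make the mileage-mix objection concrete, here is a toy calculation with entirely made-up numbers: a system that is worse than human drivers in *every* road condition can still look better in the aggregate if its logged miles skew toward the easy condition (the classic Simpson's paradox).

```python
# Hypothetical fatality rates per million miles, by road type.
# All numbers are invented purely for illustration.
human_rate = {"highway": 0.5, "city": 1.5}
ap_rate    = {"highway": 0.6, "city": 2.0}   # worse than humans in BOTH conditions

# Hypothetical miles driven (in millions): Autopilot logs mostly highway miles.
human_miles = {"highway": 400, "city": 600}
ap_miles    = {"highway": 950, "city": 50}

def aggregate(rate, miles):
    """Overall deaths per million miles, pooling all road types."""
    deaths = sum(rate[k] * miles[k] for k in rate)
    return deaths / sum(miles.values())

print(aggregate(human_rate, human_miles))  # 1.1 deaths per million miles
print(aggregate(ap_rate, ap_miles))        # 0.67 — "safer" only because of the mix
```

Despite losing in both the highway and the city comparison, the hypothetical Autopilot fleet wins the pooled comparison, which is exactly why the aggregate-miles argument proves nothing on its own.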
If this argument holds consistently across every subset of cases, possibly. But for certain edge cases, humans still consistently outperform Tesla FSD (white trucks against the sun, for example) [citation needed], which means that in certain scenarios that cannot be foreseen, the risk to the non-consenting public is higher than with the average driver.
Ergo there's a predictable and preventable source of accidents that must be suppressed.
They are safer than some human drivers in some conditions. They are not safer than every human driver in every condition though, so you'd need more to make a valid utilitarian argument.
The utilitarian argument doesn't require that they be better than every human driver in every condition -- just that they be better enough against the average situation that outcomes are net better.
I don't think that's right, because human drivers are exceedingly good at handling the average situation. If they weren't, you'd have accidents literally all the time. The average situation is accident-free. How much can you improve on that?
The utilitarian argument for driverless cars needs to be that driverless cars are better at handling the edge cases than humans, because that's when accidents happen.
I think you’re confusing “better” and merely “good,” as well as “average” and “mode.”
The average situation is maybe 0.99 accident-free (made-up number for illustration). You improve on that by being 0.995 accident-free, which is better.
I agree that “safer than some human drivers in some conditions” is insufficient.
What I'm saying is that you can't design a car to handle the average driving situation and expect to really put a dent in the accident rate, because accidents are highly correlated with particular situations. For instance, if all of a town's accidents happen in a highly foggy area, but your driverless car cannot handle the fog because it's been designed to handle the average situation (not fog), then how will it reduce accidents? I would think that to reduce accidents it would have to be designed to work in the exceptional case (fog).
Maybe I'm confusing the idea of average and mode, so if I am, an example of what you have in mind for an average situation would help.