Is that because their vision fails to provide the information necessary to drive safely? Or is it due to distraction and/or poor judgment? I don't actually know the answer to this, but I assume distraction/judgment is a bigger factor.
I'm not a fan of the camera-only approach and think Tesla is making a mistake backing it due to path-dependence, but when we're _only_ talking about this in _broadly theoretical_ terms, I don't think they're wrong. The ideal autonomous driving agent is like a perfect Monday morning quarterback who gets to look at every failure and say "see, what you should have done here was..." and it seems like it might well both have enough information and be able to see enough cases to meet some desirable standard of safety. In theory. In practice, maybe they just can't get enough accuracy or something.
> Is that because their vision fails to provide the information necessary to drive safely?
In certain conditions, yes. Humans drive terribly in the dark and in low light, conditions lidar excels in.
Still, millions of humans drive every night and only a minuscule percentage cause any accidents. So maybe we are not so bad at this.
According to NHTSA, about half of all fatal crashes occur at night, even though only 25% of driving happens at nighttime. So yes, we are pretty bad at this.
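To put a number on that, here is a rough back-of-the-envelope (assuming "25% of driving" refers to share of miles driven, and ignoring any other confounders): if half the fatal crashes happen during a quarter of the driving, the per-mile fatal crash rate at night works out to roughly three times the daytime rate.

```python
# Back-of-the-envelope from the NHTSA figures above.
# Assumption: "25% of driving" means 25% of vehicle miles traveled.
night_share_of_fatal_crashes = 0.50
night_share_of_driving = 0.25

night_rate = night_share_of_fatal_crashes / night_share_of_driving          # 2.0 (relative units)
day_rate = (1 - night_share_of_fatal_crashes) / (1 - night_share_of_driving)  # ~0.67

print(night_rate / day_rate)  # ~3.0 -- per mile, night driving is about 3x deadlier
```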
I totally agree. I think most accidents are caused by human nature (especially slowed reaction time in specific conditions, like being tired or drunk) and ignoring the laws of physics (driving too fast). And some are just pure bad luck (something or someone getting onto the road right in front of the car).