Actually, it’s not even a powerful computer or high-resolution anything. Most of it runs off GPS for navigation, plus proximity sensors (like the ones in self-parking cars) for the near stuff. They’ve realized an important factor of driving that most people don’t consciously think about, but it is how people really work: you don’t need specifics, you just need to know where the blobs you don’t want to hit are. A 3D imaging system will tell you all kinds of data about the thing you’re trying not to hit, but the data that matters is size, direction, and distance, which is easy to get with relatively unsophisticated hardware. It’s the same when you drive: if you look at something long enough you can pick up all kinds of specific detail, but most of the time it’s just “a thing about yea big, moving toward where you want to be too fast for you to turn now.” The big leap was realizing how little you actually need to know about your surroundings; that’s what allowed self-parking cars, the first big step toward automated driving.
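To make that concrete, here’s a toy sketch (not any real car’s code; all names and thresholds are made up) of what a “blob” decision can look like when you keep only size, direction, and distance plus closing speed, and throw away everything a 3D imaging system would add:

```python
import math
from dataclasses import dataclass

@dataclass
class Blob:
    """A detected obstacle, reduced to the few things that matter."""
    size_m: float       # rough width of the object
    bearing_deg: float  # direction relative to our heading (0 = dead ahead)
    distance_m: float   # range reported by a proximity sensor
    closing_mps: float  # how fast the gap is shrinking (negative = opening)

def is_threat(blob: Blob, reaction_time_s: float = 1.5,
              lane_half_width_m: float = 2.0) -> bool:
    """Flag a blob if it could occupy our path before we can react.

    Deliberately ignores shape, texture, and identity; uses only
    size, direction, distance, and closing speed.
    """
    if blob.closing_mps <= 0:
        return False  # the gap is growing or steady; not a problem
    # Time until the blob reaches us at the current closing speed.
    time_to_contact = blob.distance_m / blob.closing_mps
    # Crude lateral check: is the blob roughly in our lane?
    lateral_offset = blob.distance_m * math.sin(math.radians(blob.bearing_deg))
    in_path = abs(lateral_offset) < lane_half_width_m + blob.size_m / 2
    return in_path and time_to_contact < reaction_time_s

# A car-sized blob dead ahead, 10 m away, closing at 8 m/s.
print(is_threat(Blob(size_m=1.8, bearing_deg=0.0, distance_m=10.0, closing_mps=8.0)))
```

The point of the sketch is that nothing here needs high-resolution sensing: a range, a bearing, and a rate of change are enough to answer the only question that matters in the moment, “do I need to do something about that blob right now?”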
I was describing the human driver. Or trying to.
In fact, trying to describe and quantify exactly what we do and how we do it when we drive is not easy.
It was the big leap in realizing how little you actually need to know about your surroundings that allowed self parking cars,
Granted ... it may be that in terms of computational power and sensing capability we (humans) are way overqualified for mere driving.