Tesla troubles show even more limits of fake autonomous cars

There are “autonomous vehicles” and then there are “autonomous vehicles”. Some kinds of autonomous vehicles are much dumber than others, and the evidence for this keeps piling up.

Fully self-driving cars (FSDCs from now on) are cars whose human occupant never needs to take over (nor could they, since such a car may have no steering wheel, pedals and so on).

Cars with “driver-assistance systems” (DAS), instead, are those in which “a human driver is expected to pay attention 100 percent of the time and correct any mistakes the driver assistance system makes”.

I have already written about how terribly dumb most DAS are, if only from a marketing point of view. Now an article about Tesla not only confirms my point when it says:

“once a DAS gets, or seems pretty good, humans start to trust it and stop paying attention to the road. This can happen long before the system is actually safer than a human driver, leading to more fatalities rather than fewer.” Does this sound familiar?

/img/tempe-self-driving-car-driver.jpg

CAPTION: <a href="/2018/03/the-cars-that-drive-themselves.-but-only-if-you-drive-them-too/" target="_blank">click to read how smart it is to buy a FAKE self-driving car</a>

That article also explains well how and why DAS cars and FSDCs are so different that it is naive to believe DAS cars could ever become FSDCs through mere “software upgrades”. In other words, it gives even more reasons than the ones I wrote back in March to never buy any fake “self-driving car”.

Finally, the reasons why that article considers the Google/Waymo business model much less risky than Tesla’s are very close to the ones I suggested last year for calling self-driving cars by their REAL, very different name: forget fake or real individual self-driving cars, and go for driverless taxi services as complements to better public transit instead.