Why Tesla self-driving is bad. Part 2. It almost works

Part 1 is here.

Tesla plans to have drivers monitor Full Self-Driving, effectively using them as QA and beta testers while the technology is being developed, and eventually switch to completely autonomous driving.

But this approach does not work, precisely because it almost works. Let me explain what I mean by that. I will start with this video, in which the author explains that in 2013 Google started testing its own autonomous vehicles and gave several of them to Google employees. The employees were told to always pay attention and to keep their hands on the steering wheel at all times. They were also told that hidden cameras would be watching them the whole time.

After that, it would be logical to assume that people would do what they were told, right? But if you watch the video above, you will see that people stopped paying attention to the road at all. They were texting, doing makeup, or just plain sleeping, all while driving on the highway. And I am almost certain that Google gave these cars to relatively responsible employees. Moreover, those employees had not been convinced by the CEO of a certain company that their car could definitely drive itself. Yet the results are still shocking.

Why does this happen? There is an easy answer: human psychology. For the first few days, these people probably did exactly what they were told. But over time they saw that the car was doing well, and surely nothing bad would happen if they quickly sent a text. They told themselves that they would be distracted for only a second or two.

But as time passed, they grew more and more confident in the car’s capabilities and got distracted for longer and longer periods. In the end, they stopped paying attention to the road completely because they were used to the car handling everything. They started responding to emails, doing makeup, or just sleeping.

Now, what do you think happens with Tesla drivers, who are definitely not hand-picked for responsibility and who have nobody monitoring them? Yes, you are right: they behave even worse. Some of them have even been caught sitting in the back seat, or not in the car at all.

Do you seriously believe that most of them carefully monitor the situation around them and are ready to take control at any moment? And if they did, why would they need full self-driving at all, since they would be doing everything a driver does except actually steering? In fact, they would be doing more work, because they would also have to stay ready for the car to do something unexpected, which is not a concern when they drive themselves.

In reality, most of them carefully monitor the car for the first 20-30 minutes. Everything looks good and the car controls itself quite well. After a while they decide it is okay to get distracted, because watching and double-checking everything is quite boring. A few hours in, they get distracted more often. Finally, within a few days, they stop paying attention to the road and only check it periodically.

Again, this is human psychology: we treat the AI the same way we treat a human driver. Once we see a human driver driving correctly, we subconsciously conclude that they are experienced, stop checking on them, and assume we are safe.

But AI is not a human driver. Moreover, it lacks the “I” part, the one that stands for intelligence. And because it has no real intelligence, AI cannot do much in situations it was not trained for.

At this point some may ask: wait, if there are so many issues, why is a certain CEO pushing this so hard? Well, it is the classic 80-20 trap. Developers spend 20% of their time building 80% of the functionality. That 80% looks solid enough to demonstrate to the CEO. But then they need the remaining 80% of the time to build the last 20%, which consists mostly of edge and corner cases.

However, the CEO is not happy. To the CEO, the feature looks almost done, needing just some minor polishing. In reality it needs far more time, because in this domain the split could easily be 90-10, or even as extreme as 99-1.
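To make that arithmetic concrete, here is a minimal sketch. All the numbers are invented purely for illustration; the only assumption is that the visible progress so far represents the stated share of the total effort:

```python
# Toy illustration of the 80-20 trap. All numbers are hypothetical.

def remaining_months(months_spent: float, effort_share_done: float) -> float:
    """Time still needed, if `months_spent` covered only
    `effort_share_done` of the total effort."""
    total = months_spent / effort_share_done
    return total - months_spent

# Suppose the first 80% of functionality took 6 months,
# but that was only 20% of the total effort:
print(remaining_months(6, 0.20))  # 24.0 more months for the last 20%

# Under a 99-1 split, the same 6 months of visible progress implies:
print(remaining_months(6, 0.01))  # 594.0 more months
```

The point of the sketch is that the remaining work grows hyperbolically as the effort share of the demo shrinks: a polished-looking 80% can hide four, nine, or ninety-nine times more work than has already been done.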

Moreover, in our case the developers trained the AI on every road the CEO typically uses. To that CEO, everything looks absolutely perfect, so the CEO demands a release ASAP. Reality is far from that.

And the more people use it, the more negligent and less attentive they become, and they start to pay the price. The ultimate price. That will be the subject of the next part.