Why Tesla self-driving is bad. Part 1. The system
I have written several articles about Tesla self-driving before. But after reading a lot more articles and watching many videos, I found new aspects of this problem, and I now think it is much worse than I, or anybody else, thought.
Anyone who writes software knows that there will be bugs. It does not matter how many people test it or how good the QA is; there will still be bugs, and a lot of them. It is simply impossible to find and fix every bug in any big program.
While it is hard to find all the bugs in code written by people, we have developed many different techniques that help us build and maintain big applications at a reasonable level of quality. However, it took decades of trial and error for these principles and procedures to develop and mature.
We don't have any comparably good techniques and procedures for neural networks yet. They are relatively new, and we are still in the process of developing for them what we already have for regular software. It will take years, if not decades.
Moreover, to fix regular software, you just change a little bit of code. With a neural network, as far as I know, you typically have to add new data to the dataset and then retrain the whole model, which is slow and takes a lot of energy. And after that, you need to re-test everything with the new model. The whole cycle is very slow.
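To make the contrast concrete, here is a toy illustration in Python. It is not how Tesla trains anything, just a sketch of why a model "fix" touches everything while a code fix touches one line:

```python
# Toy illustration, all details hypothetical: even in a tiny "model",
# fixing one failure case means redoing work over the whole dataset.

def classify(x, threshold):
    return x > threshold              # toy "model": one learned parameter

def retrain(dataset):
    positives = [x for x, label in dataset if label]
    return min(positives)             # toy "training": revisit ALL the data

dataset = [(0.2, False), (0.7, True), (0.9, True)]
threshold = retrain(dataset)          # initial training
dataset.append((0.5, True))           # one new failure case found in the field
threshold = retrain(dataset)          # the fix requires a full retrain
print(threshold)                      # -> 0.5, and now EVERYTHING must be re-tested
```

Scale that "revisit all the data" step up to millions of video clips and weeks of GPU time, and the slowness is obvious.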
But wait, those are just software bugs; there are hardware bugs too. A CPU and memory contain billions of transistors, and it is not possible to test every single one of them in every single mode. There is a chance that a stray radioactive particle will flip one bit of memory or change the contents of a CPU register. Silicon also degrades over time: you are probably aware of the Intel 13th- and 14th-generation CPU issues related to degradation. These issues lead to crashes and reboots.
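A single flipped bit can be catastrophic. Here is a small Python demonstration (the distance value is made up):

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 representation of a 64-bit float."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

distance_m = 42.5                 # hypothetical distance to an obstacle, in meters
print(flip_bit(distance_m, 62))   # one exponent bit flipped -> ~2.4e-307
```

One flipped bit in the exponent, and a 42.5-meter gap becomes, as far as the software is concerned, zero. That is exactly why server-class hardware uses ECC memory, which can detect and correct such flips.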
What do you think happens when a crash or glitch occurs while a car is driving itself? In the best case, the car alerts the driver, and the driver can try to take control. But what happens if the car is in the middle of changing lanes, turning, or stopping to let a pedestrian cross at a crosswalk? Will the driver have enough time to take control, assess the situation, and finish whatever the computer was doing? Especially if the driver was completely distracted at that moment? I think it is a rhetorical question. And what happens when the car has no steering wheel or pedals and nobody can intervene at all?
As a result, it is completely understandable that building autonomous cars from consumer-grade parts will not make autonomous driving safe. A completely different class of parts is required here: extremely reliable components rated for situations where human lives are at stake. But they cost far more.
Just as an example, I like monitors with a 4:3 aspect ratio, but they were slowly replaced by 16:10 and 16:9 models. One day I found a monitor with a 4:3 aspect ratio: a 15-inch model that cost $1,500. For comparison, I can find a regular 22-inch monitor for $74 on Amazon.
The monitor I found cost that much because it is meant to be used in hospitals. If the monitor on your desk dies, nobody gets hurt. But if a monitor stops working while showing a patient's vital signs during surgery, somebody may die.
So this kind of hardware is built very differently, from medical-grade parts, and goes through completely different development and QA stages. These devices are regularly tested and inspected by specially trained people.
But autonomous cars typically use parts that you can buy in any computer shop. Those parts are simply not intended for any safety-critical environment. Moreover, there is no redundancy of any kind, whereas mission-critical servers duplicate pretty much everything.
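What does redundancy look like in practice? The classic approach is triple modular redundancy: run three units and take a majority vote. Here is a minimal sketch in Python, with made-up sensor values:

```python
from collections import Counter

def vote(readings):
    """Triple modular redundancy: return the value a majority agrees on."""
    value, count = Counter(readings).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority; enter fail-safe mode")
    return value

# Three redundant speed sensors; one has glitched.
print(vote([88, 88, 12]))   # -> 88; the faulty reading is outvoted
```

With a single sensor, the glitched reading of 12 would have been taken at face value.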
However, what I wrote above are general problems that any self-driving technology inherently has. Tesla, though, has its own problem, specific only to Tesla, that makes it far more dangerous. The name of that problem is computer vision.
The human eye is a marvelous device. It is extremely complex, and a lot of processing happens behind the scenes that we are not even aware of. Yet it is easy to fool or confuse. I believe many of you have seen those static pictures with specially drawn lines that, to our eyes and brain, appear to move. Sometimes we see curved lines even when the lines are straight. Or we see colors different from what is actually there.
I think every reasonable person understands that our eyes are far from perfect. Cameras have similar issues. If you don't see something, there is a huge chance that the camera will not see it either. The camera can be dirty or simply wet. Its resolution is not as good as the human eye's, and its sensitivity is worse too. Cameras are also much worse at handling scenes that contain very bright and very dark objects at the same time.
How do our eyes measure the distance to an object? Well, we have two eyes, and seeing an object from two slightly different angles lets our brain triangulate. But it is far from precise; for example, it is very hard to judge the speed of an object that is moving straight towards us.
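Stereo cameras triangulate the same way, using the standard pinhole model: depth equals focal length times baseline divided by disparity (the pixel shift of the object between the two images). A minimal Python sketch, with made-up camera parameters:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo model: depth = focal length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

# Hypothetical camera: 1000 px focal length, 30 cm between the two cameras.
print(stereo_depth(1000, 0.30, 6.0))   # an object shifted by 6 px is ~50 m away
```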
Can we measure the distance to an object precisely using cameras? No, not unless we know the exact dimensions of that object. But different cars have different dimensions, and those can even change over time. For example, a car may carry a tarp on its roof that flaps in the airstream. Before the election, I saw cars with flags.
But even if we know the precise dimensions of the object, it is still extremely hard to calculate the distance to it. Your car is moving on a road that is far from ideal, and it is not moving in a straight line. The object is also moving, most of the time in an unknown direction. Lastly, the cameras themselves are often at least slightly misaligned.
So, in general, the car can take many measurements and compute only an estimate of the speed and direction of a moving object, and that estimate will have a relatively large margin of error.
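How large? Going back to the stereo sketch above, with the same made-up parameters: because depth depends on the inverse of the disparity, a tiny pixel error turns into a huge distance error at range:

```python
focal_px, baseline_m = 1000.0, 0.30    # same hypothetical camera as above
for disparity_px in (6.0, 5.0):        # true disparity vs. just one pixel off
    print(disparity_px, "px ->", focal_px * baseline_m / disparity_px, "m")
# 6.0 px -> 50.0 m;  5.0 px -> 60.0 m.  One pixel of error moved the car 10 m.
```

And one pixel of error is well within what dirt, rain, vibration, or a slightly misaligned camera can produce.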
Just as an example, when I park my Tesla in my garage, it always screams at me to stop well before the safe spot. And this is a very simple scenario: the garage is static, and only my car is moving, slowly, on a flat surface that is about as close to perfect as it gets. Yet it still does not work.
Auto wipers, which use the same computer vision, still do not work properly either. Sometimes the wipers activate without any sign of water. Sometimes they do not activate when it is raining and visibility is bad.
My Tesla is notoriously bad at reading speed limit signs: it constantly misses them and then screams at me for speeding. Once I drove for around 40 minutes with a beep every few minutes for speeding.
If they cannot measure the distance to the wall of my garage, make auto wipers work, or properly read speed limit signs, how are they planning to make cars autonomous in much more complex situations?
In conclusion, the whole platform is obviously prone to software bugs and hardware glitches. Computer vision has exactly the same problems as the human eye: it is imprecise and far from perfect. It can be blocked by dirt or water, it obviously sees nothing in the dark or in fog, and its visibility is clearly degraded by rain or snow.
Remember, we don't need a solution that is "good enough" or that works 95% of the time. All of these issues lead to car accidents and potential deaths, and we need the reliability to be as close to 100% as possible.
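To put "95% of the time" in perspective, here is a back-of-the-envelope calculation with made-up but plausible numbers:

```python
trips_per_year = 2 * 365                     # two trips a day, hypothetically
p_ok = 0.95                                  # a system that works 95% of the time
p_trouble_free_year = p_ok ** trips_per_year
print(f"{p_trouble_free_year:.1e}")          # ~5.5e-17
```

In other words, at 95% per trip, the odds of getting through a single year without an incident are effectively zero. That is why per-trip reliability has to be astronomically close to 100%.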
But this is just the tip of the iceberg. More in the next part.