No More “meat” Drivers: The Tesla Supercomputer Fueling Self-Driving Vehicles
Tesla has released exciting new details about their new supercomputer, a precursor to the mysterious “Dojo” program that CEO Elon Musk has been teasing on Twitter.
The enormous computer is said to rank fifth in the world in terms of computing power and is meant to facilitate the switch from “meat” drivers to silicon, as Tesla puts it. While flying cars may be a while off, self-driving ones could be just around the corner.
What is the Supercomputer for?
The new supercomputer will be processing an enormous amount of data, not least because Tesla’s proposed self-driving system will use cameras only, with no radar or lidar. Other autopilot systems combine camera images, radar and lidar to build as accurate a picture as possible of the vehicle’s surroundings.
However, Tesla has elected to overhaul this approach in its future cars, because camera technology has become so much more sensitive and reliable than radar. At the 2021 Conference on Computer Vision and Pattern Recognition last week, Tesla’s senior director of artificial intelligence, Andrej Karpathy, said that radar was turning into a “drag” for autopilot development, claiming that camera sensors can be “100x better” than radar, and so much more sensitive that radar sensors are only contributing noise and impeding computer learning.
This doubling down on the use of cameras by self-driving AI means a “large, clean and diverse data set” is required to teach the neural network. The vision-based autopilot will need to perceive and interpret changes in depth, velocity and acceleration, all of which must be extracted from actual footage and processed by computer. Read more about the function of self-driving cars here.
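To make the depth-to-motion step concrete, here is a toy sketch (not Tesla’s code; the frame rate and the idea of having per-frame depth estimates for a tracked object are assumptions for illustration) of deriving velocity and acceleration from a sequence of depth readings by finite differences:

```python
# Toy sketch (hypothetical, not Tesla's pipeline): given per-frame depth
# estimates for one tracked object, recover velocity and acceleration
# by finite differences between consecutive frames.

def motion_from_depth(depths, fps=36):
    """depths: list of distances to the object (metres), one per frame.
    fps: assumed camera frame rate.
    Returns (velocities in m/s, accelerations in m/s^2)."""
    dt = 1.0 / fps
    # Velocity: change in depth between consecutive frames, divided by time.
    velocities = [(b - a) / dt for a, b in zip(depths, depths[1:])]
    # Acceleration: change in velocity between consecutive frames.
    accelerations = [(b - a) / dt for a, b in zip(velocities, velocities[1:])]
    return velocities, accelerations
```

For example, an object whose depth shrinks by one metre every frame at 1 fps is closing at a constant 1 m/s with zero acceleration. A real perception stack would of course estimate depth from the images themselves; this only illustrates the differencing step.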
Where is the Data Coming From?
This footage is taken from current Tesla products, which run a deep neural network in “shadow mode” within the vehicle. This means the AI is perceiving and predicting the course of the vehicle’s journey and its interactions with objects and other vehicles, but is not actually controlling the vehicle itself.
These predictions are recorded, and all errors are logged. The errors are then used by Tesla engineers to create a training dataset of scenarios known to be difficult for AI to interpret. Altogether, this looks like roughly 1 million high-framerate, 10-second clips, and amounts to about 1.5 petabytes of data (that’s 1536 terabytes or 1.5 million gigabytes!). Tesla’s AI can then be subjected to these scenarios repeatedly until it is able to complete all scenarios without a mistake. Once the AI “graduates” from the training program, it is redeployed into test vehicles to find more challenging scenarios, and so on.
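The shadow-mode loop described above can be sketched in a few lines. This is purely illustrative; the class, method names and string labels are assumptions, not Tesla’s API:

```python
# Illustrative sketch (hypothetical names, not Tesla's code) of "shadow mode":
# the network predicts but never acts; clips where its prediction disagrees
# with what the human driver actually did are logged as hard training cases.

from dataclasses import dataclass, field

@dataclass
class ShadowLogger:
    training_set: list = field(default_factory=list)

    def step(self, clip_id, model_prediction, human_outcome):
        # The model only observes; the human remains in control.
        if model_prediction != human_outcome:
            # A disagreement flags this clip as difficult for the AI,
            # so it joins the training dataset for the next round.
            self.training_set.append(clip_id)

logger = ShadowLogger()
logger.step("clip_001", "brake", "brake")     # model agrees: nothing logged
logger.step("clip_002", "continue", "brake")  # model wrong: clip is kept
```

After enough rounds, the retrained model is redeployed to collect the next batch of disagreements, mirroring the “graduate and redeploy” cycle described above.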
"What will most likely occur, and I am certain about this, is that self-driving will become much safer than a human driver. Probably by a factor of 10"
What does this mean for Tesla?
Such intensive training for future “silicon drivers” might reassure those of us who are a little skittish about handing control over to a machine. Some high-profile accidents, such as the one that resulted in the death of Elaine Herzberg in 2018, have been taken as a sign that self-driving cars aren’t up to the complex spatial, visual or even ethical challenges that humans manage while driving.
Karpathy disagrees: “people are not very good at driving; they get into a lot of trouble and accidents” and, as to whether a computer can become as good at visual processing as a human, his answer is an “unequivocal yes”. However we may feel about it, it seems the world’s driver base will be switching from “meat” to silicon drivers far sooner than we think.
Header Image via Financial Press.