Nvidia Drive PX 2 Self Driving Computer Deep Dive

 

Nvidia has been supplying graphics computers to Tesla Motors since the first Model S rolled off the assembly line in 2012. It was Nvidia that made the enormous 17″ touchscreen in the Model S possible. Now it is continuing its relationship with Tesla by supplying what it calls “a supercomputer in a box” to provide the computing power needed for full Level 5 autonomous driving in the future.

Nvidia Drive PX 2 computer

Nvidia’s latest device is called the Drive PX 2, and it is 40 times more powerful than the unit it replaces. Equipped with a neural network that makes deep learning possible, the company says it has the computing power of 150 MacBook Pros. Its job is to take in all the data fed to it by Tesla’s 8 cameras, 12 upgraded ultrasonic sensors, and a forward-looking radar device and assemble it into an accurate graphic representation of the environment around the car out to a distance of 250 feet. The Drive PX 2 is liquid cooled to keep its artificial brain from overheating.
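The fusion step described above can be pictured with a minimal sketch. This is purely illustrative pseudostructure, not Nvidia's actual software: detections from the cameras, ultrasonic sensors, and radar are merged into a single picture of nearby objects, keeping only those within the 250-foot range the article cites.

```python
from dataclasses import dataclass

# Hypothetical sketch of sensor fusion: merge detections from several
# sensor types into one list of nearby objects. All names and structures
# here are illustrative assumptions, not Nvidia's API.

PERCEPTION_RANGE_FT = 250  # range cited in the article

@dataclass
class Detection:
    sensor: str        # "camera", "ultrasonic", or "radar"
    label: str         # e.g. "car", "pedestrian"
    distance_ft: float

def build_environment(detections):
    """Keep detections inside the perception range, nearest first."""
    in_range = [d for d in detections if d.distance_ft <= PERCEPTION_RANGE_FT]
    return sorted(in_range, key=lambda d: d.distance_ft)

env = build_environment([
    Detection("radar", "car", 180.0),
    Detection("camera", "pedestrian", 40.0),
    Detection("camera", "truck", 400.0),   # beyond range, dropped
])
print([d.label for d in env])  # → ['pedestrian', 'car']
```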

According to Nvidia, the machine is “the size of a lunch box.” It has two Tegra processors with a total of 12 processor cores and two Pascal GPUs, which together can perform 24 trillion operations per second. The Drive PX 2 creates a complete digital representation of the outside world at a rate of 2,800 frames per second.
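A quick back-of-envelope calculation from the article's own figures shows what that throughput means per frame: 24 trillion operations per second spread across 2,800 frames per second leaves a budget of roughly 8.6 billion operations for each snapshot of the world.

```python
# Per-frame compute budget implied by the article's figures.
ops_per_second = 24e12      # 24 trillion operations per second
frames_per_second = 2800    # digital representations per second

ops_per_frame = ops_per_second / frames_per_second
print(f"{ops_per_frame:.2e} operations per frame")  # → 8.57e+09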

Just a few weeks ago, Nvidia released a video of a converted Lincoln sedan teaching itself how to drive using a neural network. Many other companies, such as Google, rely on high-resolution digital maps to allow cars to drive themselves, but that approach assumes such a map is available for every mile of road in the world.

Using artificial intelligence instead allows a computer to think like a human driver and react appropriately to any situation that arises. That way, if a new sign goes up, a construction detour occurs, or some other change takes place after a road has been mapped, the car doesn’t have to pull over and stop until it gets instructions from some computer programmer located on the other side of the world.
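The learning-by-example idea can be boiled down to a toy sketch. This is in no way Nvidia's network, just an illustration of the principle: instead of looking up a map, a model learns a driving rule directly from recorded examples of human behavior. Here a one-parameter model discovers, from (lane offset, steering) pairs alone, that steering should be proportional to how far the car has drifted.

```python
import random

# Toy illustration (not Nvidia's system) of learning a control rule
# from human driving examples rather than from a map.
random.seed(0)

TRUE_GAIN = -0.5  # hidden rule the "human driver" follows

# Recorded examples: (lane offset, steering the human applied).
data = [(x, TRUE_GAIN * x) for x in
        (random.uniform(-1, 1) for _ in range(200))]

gain = 0.0   # model parameter, learned from scratch
lr = 0.1     # learning rate
for offset, steer in data * 20:
    pred = gain * offset
    grad = 2 * (pred - steer) * offset   # d(error^2)/d(gain)
    gain -= lr * grad

print(f"learned gain: {gain:.3f}")  # converges toward -0.5
```

The point of the sketch is that the rule was never programmed in; it emerged from the examples, which is why such a system can cope with situations no map anticipated.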

Tesla says many of the features available with its first generation Autopilot will be inactive initially to allow the new Autopilot system to gather enough data to make accurate driving decisions. Every Tesla built since October 19 has the new hardware and software package built in. The system will run in shadow mode for some period of time, during which the onboard computer will monitor the outside world, decide what it would do, and compare that to what actual drivers do. That data will also be fed back to engineers at Tesla.
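The shadow-mode process described above amounts to logging the computer's proposed action next to the driver's actual action without ever acting on it. A minimal sketch, with all names assumed for illustration, might look like this:

```python
# Hypothetical sketch of shadow mode: the computer proposes an action for
# each situation but never acts; its proposal is logged beside what the
# human driver actually did, and the agreement rate is reported.

def shadow_mode_report(events):
    """events: list of (computer_action, driver_action) pairs."""
    log = []
    matches = 0
    for computer, driver in events:
        agree = computer == driver
        matches += agree
        log.append({"computer": computer, "driver": driver, "agree": agree})
    return log, matches / len(events)

log, agreement = shadow_mode_report([
    ("brake", "brake"),
    ("steer_left", "steer_left"),
    ("brake", "coast"),        # disagreement worth engineering review
    ("accelerate", "accelerate"),
])
print(f"agreement rate: {agreement:.0%}")  # → 75%
```

Disagreements like the third event are exactly the cases engineers would want fed back for review before any feature is unlocked.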

When Tesla is convinced the Drive PX 2 is capable of making decisions on its own, over-the-air software updates will unlock the Enhanced Autopilot features a few at a time. That accumulated data will also be available to convince regulators that the cars are capable of safe, fully autonomous operation without endangering other motorists. The age of self-driving cars is upon us, and Tesla is leading the way, with help from Nvidia.

Source: TU Norway    Hat Tip: Leif Hansen





About the Author

I have been a car nut since the days when Rob Walker and Henry N. Manney, III graced the pages of Road & Track. Today, I use my trusty Miata for TSD rallies and occasional track days at Lime Rock and Watkins Glen. If it moves on wheels, I'm interested in it. Please follow me on Google+ and Twitter.
  • Ed

    Compared to the map-reading approach of Google, the Tesla Deep Learning process looks like the right path. Incredibly exciting.

    • Steve Hanley

      Yes, Google’s approach seems so last century.