Supercomputer for Self-Driving Cars

Nvidia is kicking off CES 2016 with its traditional first keynote. CEO Jen-Hsun Huang wasted no time getting to the “punchline,” a new supercomputer for cars he’s calling the Drive PX2, the follow-up to last year’s Drive PX.

Nvidia is famous for throwing out big numbers to tout the power of its processors, and plenty of them were mentioned on stage. The PX2’s processing power is supposedly equivalent to 150 MacBook Pros, all in a computer that’s about the size of a lunchbox.

“It has 12 CPU cores that support a combined eight teraflops and 24 deep learning tera operations per second. It’s also water-cooled, which makes sense given how hard these chips need to work,” the Verge writes.
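As a rough sanity check (our arithmetic, not something Nvidia spelled out on stage), dividing the claimed eight teraflops by 150 laptops implies roughly 53 GFLOPS per MacBook Pro, which is the same order of magnitude as a 2015-era laptop CPU’s single-precision peak. A minimal back-of-envelope sketch in Python, where the per-laptop figure is an inferred value rather than a published spec:

# Back-of-envelope check of Nvidia's "150 MacBook Pros" comparison.
# The 8 TFLOPS and 24 DL TOPS figures come from the keynote; the implied
# per-laptop number is our inference, not an Nvidia or Apple spec.
DRIVE_PX2_TFLOPS = 8.0       # claimed FP32 throughput, in teraflops
DRIVE_PX2_DL_TOPS = 24.0     # claimed deep learning tera operations per second
MACBOOK_EQUIVALENTS = 150    # Nvidia's on-stage comparison

implied_gflops_per_laptop = DRIVE_PX2_TFLOPS * 1000 / MACBOOK_EQUIVALENTS
print(f"Implied throughput per MacBook Pro: {implied_gflops_per_laptop:.0f} GFLOPS")
# -> about 53 GFLOPS, the same order of magnitude as a laptop CPU's FP32 peak.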

Make sense? Probably not, but that’s okay. The point is that Nvidia believes a properly designed self-driving car needs a ton of on-board processing power so that it can handle all the sensing, control and learning required to be fully autonomous (instead of, say, being dependent on connectivity to the cloud).

Jen-Hsun Huang believes that “self-driving cars are hard”, and that Nvidia knows how to do it.

According to Nvidia, the PX2 can “process the inputs of 12 video cameras, plus lidar, radar and ultrasonic sensors”.
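To make that claim concrete, here is a minimal Python sketch of such a sensor suite as a configuration plus a stub fusion step. Apart from the 12 cameras, every count, field name and value below is an illustrative assumption, not something Nvidia disclosed:

# Toy illustration of a multi-sensor input set and a stub fusion step.
# Sensor counts other than the 12 cameras are assumptions for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class SensorSuite:
    cameras: int = 12      # per Nvidia's claim
    lidars: int = 1        # assumed count
    radars: int = 1        # assumed count
    ultrasonics: int = 4   # assumed count

@dataclass
class Detection:
    sensor: str            # which sensor produced the observation
    label: str             # e.g. "car", "pedestrian"
    confidence: float

def fuse(detections: List[Detection], threshold: float = 0.5) -> List[Detection]:
    """Keep detections above a confidence threshold; a real fusion pipeline
    would also associate and track objects across sensors and over time."""
    return [d for d in detections if d.confidence >= threshold]

# Example with made-up observations from a single frame:
suite = SensorSuite()
print(suite)        # SensorSuite(cameras=12, lidars=1, radars=1, ultrasonics=4)
frame = [
    Detection("camera_3", "car", 0.92),
    Detection("radar_0", "car", 0.71),
    Detection("ultrasonic_1", "curb", 0.30),
]
print(fuse(frame))  # the low-confidence curb reading is dropped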

Volvo is Nvidia’s first partner for the Drive PX2 and will deploy it in upcoming tests of self-driving vehicles.

We’re still liveblogging Nvidia’s event, where we expect Jen-Hsun Huang to take his time explaining why the PX2 is better at perception (apparently the core problem for self-driving cars) than anything else. He likes to laboriously explain everything: his keynote walks us through the history of AI development and Nvidia’s part in it.

Nvidia has also built its own deep neural network, called the “Nvidia Drivenet”, and is already testing it in its own self-driving cars. The network has nine inception layers so that it can be trained to “perceive things out in the world”.
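For context, an “inception layer” (the term comes from Google’s GoogLeNet) runs several convolutions of different sizes in parallel and concatenates their outputs. Drivenet’s actual architecture isn’t described in this report, so the Python/PyTorch sketch below is a generic inception-style block for illustration, not Nvidia’s design:

# Generic inception-style block (GoogLeNet-flavored), shown only to illustrate
# the term "inception layer"; this is not Nvidia's Drivenet architecture.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, c1, c3, c5, pool_proj):
        super().__init__()
        # Four parallel branches, each preserving the spatial size.
        self.branch1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3, kernel_size=1),
            nn.Conv2d(c3, c3, kernel_size=3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5, kernel_size=1),
            nn.Conv2d(c5, c5, kernel_size=5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1),
        )

    def forward(self, x):
        # Concatenate the branch outputs along the channel dimension.
        return torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x), self.branch_pool(x)],
            dim=1,
        )

# A network "with nine inception layers" would stack nine such blocks behind
# stem convolutions and in front of a detection/classification head.
block = InceptionBlock(in_ch=64, c1=32, c3=64, c5=16, pool_proj=16)
out = block(torch.randn(1, 64, 56, 56))
print(out.shape)  # -> torch.Size([1, 128, 56, 56])  (32+64+16+16 channels)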

Huang says it took just a few months for the network to learn to recognize objects in real time. The technology itself isn’t new, but Nvidia says it has made huge strides in recent months. In one impressive demo, Nvidia showed how the system was able to detect cars even in very snowy conditions.

Nvidia is also working with Audi to test its systems; in those tests the system was able to read German road signs “better than a human can”. Daimler, BMW and Ford are using the system to develop their self-driving cars as well.


Financialtribune.com