NVIDIA’s GTC launches this week. It is a huge conference, one that has doubled in size since the inaugural event in 2012. There are five big announcements at the show, each more interesting than the last. Let’s cover the highlights of each; together they take us in huge bounds toward a future where what we think is real increasingly won’t be, and where computers increasingly do things that only humans could do before.
NVIDIA SDK
This is an integrated package of tools covering the variety of platforms NVIDIA now spans: development for games, virtual reality, compute, design, self-driving cars, and JetPack (programming for small autonomous machines). One of the most impressive parts is Iray, a scalable ray tracing technology that drives real-time photorealistic rendering – critical for the VR experiences coming over the next few years. NVIDIA is clearly making a major push to drive this next generation of virtual experiences. The SDK is a mass of libraries collected into one huge package so that developers can more easily access the tools they need to advance any of the related technology areas.
One of the new components of this package is GIE (the GPU Inference Engine). Inference has been an area where NVIDIA has lagged, and this engine closes that gap: for instance, it raises the platform’s image-analysis throughput from 4 to 24 images per second, massively improving deep learning inference performance.
Iray VR
Everest VR is an interesting showcase of where VR is going. The creators took millions of images (108 million pixels) of Mount Everest and then applied physics to create a very realistic, fully rendered, photorealistic VR experience of the mountain. This is one of those things you have to see, because words can’t describe how incredible it looks.
Mars 2030 is an eight-square-mile, fully rendered, photorealistic VR Mars experience consistent with the hundreds of actual pictures taken on and above the planet’s surface. It even showcases the underground caves that colonists will likely have to live in long term. Steve Wozniak was brought up on video to help showcase this technology.
At the heart of this realism is what NVIDIA calls Iray VR. It takes hundreds of light probes, which produce data that is then mixed per eye; the result is rasterized and composited into an incredibly photorealistic VR experience. The demonstration used NVIDIA’s new headquarters building, which is heavily naturally lit and not yet built. Even so, you can use this technology to walk through the building and it looks real: all the light, all the shadows, all the materials are fully rendered. There is a demo, Iray VR Lite, that folks can download and play with.
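The per-eye probe mixing described above can be illustrated with a toy sketch. The actual Iray VR pipeline is proprietary and far more sophisticated; this only shows the general concept of synthesizing a view for each eye by blending precomputed light probes, here with a simple inverse-distance weighting scheme (the positions, probe "images," and weighting are all illustrative assumptions, not NVIDIA's method):

```python
# Toy sketch of per-eye light-probe blending. NOT the Iray VR algorithm --
# just an illustration of combining precomputed probes per viewpoint.
import numpy as np

# Four probes at known positions; each "image" is a tiny 2x2 RGB block
# standing in for real captured lighting data.
probe_positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
probe_images = np.stack([np.full((2, 2, 3), i / 3.0) for i in range(4)])

def blend_for_eye(eye_pos, k=3, eps=1e-6):
    """Inverse-distance-weighted blend of the k nearest probes."""
    d = np.linalg.norm(probe_positions - eye_pos, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)
    w /= w.sum()                       # normalize weights to sum to 1
    return np.tensordot(w, probe_images[nearest], axes=1)

left = blend_for_eye(np.array([0.45, 0.5]))    # left-eye viewpoint
right = blend_for_eye(np.array([0.51, 0.5]))   # right-eye viewpoint
# The two eyes sit at slightly different positions, so they receive
# slightly different blends -- the source of stereo depth cues.
```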
AI Processor (Deep Learning)
NVIDIA believes, and I agree, that deep learning will change computing as we know it. This is the move from programming to learning, where ever more capable computers learn to program themselves. What makes this really interesting is that it takes one general architecture and one general algorithm; hit it with massive amounts of data and a ton of processing power, and it teaches itself what to do. NVIDIA then showcased how this moved from a few research organizations, to core technology groups, to AI-as-a-platform, to a huge number of startups ($5B was pumped into this area in 2015), and finally into industry leaders like Audi, Cisco, eBay, GE, and Tesla, among hundreds of others. The related technology market is expected to crest $2T by 2025, and it is currently NVIDIA’s fastest-growing business.
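The "one architecture, one algorithm" idea can be sketched in a few lines: a small neural network plus gradient descent, where the behavior comes from data rather than hand-written rules. This is an illustrative toy (real deep learning runs on GPU frameworks at vastly larger scale), learning the classic XOR function that no single linear rule can express:

```python
# Minimal sketch of "one architecture, one algorithm": a tiny two-layer
# network taught XOR by gradient descent. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One general architecture: a fully connected net with 8 hidden units.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One general algorithm: repeat forward pass, measure error, step downhill.
for step in range(10000):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    p = sigmoid(h @ W2 + b2)          # predicted output
    grad_p = p - y                    # cross-entropy gradient at the output
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0)
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    for param, grad in ((W1, grad_W1), (b1, grad_b1),
                        (W2, grad_W2), (b2, grad_b2)):
        param -= 0.1 * grad           # gradient descent update

print((p > 0.5).astype(int).ravel())  # the learned truth table
```

No rule for XOR is ever written down; the network discovers it from examples, which is the shift from programming to learning in miniature.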
Currently, NVIDIA sells the Tesla M40 for deep learning training and the M4 for inferencing once the learning has been done. At this event the company launched the Tesla P100, a monster 15-billion-transistor part that can perform at 21.2 teraflops (at half precision).
NVIDIA DGX-1 Deep Learning Computer
NVIDIA is going beyond just the processor and building a purpose-built AI server. This is a box that performs at 170 teraflops and potentially cuts the training time for efforts like AlexNet from 150 hours to a mere 2, or can single-handedly match 250 current-generation deep learning servers. This is a massive performance improvement in what is effectively an AI server in a box. The cost is $129K, which sounds like a lot until you consider that the interconnect alone between the 250 servers this replaces costs $500K. These will initially be targeted at research organizations working on AI, particularly those working on medical advances.
Autonomous Cars - Drive PX
This takes us to the NVIDIA Drive PX, which is designed to make cars smart. It was a showcase for DaveNet, an application that trains self-driving cars the way we train people: it observes drivers to learn how to drive. The impressive part was the launch of a self-driving race car and a new race series called Roborace. It looks like the autonomous car industry is moving rapidly from the idea of programming cars to drive to teaching them through observation. Either way, once one car is trained, it is relatively easy to move the resulting learning from car to car and even update it for new events or decision criteria. It shouldn’t take very long at all for these self-driving systems to demonstrate they can drive better than we can.
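Learning to drive by observation is commonly called behavioral cloning: record pairs of (what the car sensed, what the human driver did), then fit a model mapping one to the other. The sketch below uses a toy linear model on synthetic data purely to illustrate the idea; DaveNet itself is a deep network trained on camera images, and the three "features" here (lane offset, heading error, curvature) are hypothetical stand-ins:

```python
# Toy behavioral-cloning sketch: learn a steering policy purely by
# observing a (simulated) human driver. Not DaveNet's actual method.
import numpy as np

rng = np.random.default_rng(1)

# 500 recorded frames, each summarized by 3 hypothetical features:
# lane offset, heading error, road curvature.
frames = rng.normal(size=(500, 3))

# The human "teacher" follows an unknown policy; we only see the results
# (steering angles), with a little measurement noise.
true_policy = np.array([-0.8, -0.5, 1.2])
steering = frames @ true_policy + rng.normal(scale=0.01, size=500)

# Learn the policy from observation alone: a least-squares fit.
learned_policy, *_ = np.linalg.lstsq(frames, steering, rcond=None)

# Once learned, the policy is just numbers -- which is why, as noted
# above, the result is easy to copy from car to car.
new_frame = np.array([0.3, -0.1, 0.05])
print("steering command:", new_frame @ learned_policy)
```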
NVIDIA’s GTC, a conference famous for gaming content, had very little of that this year. It was mostly about computers that can think, which should, coincidentally, give us humans more time to play games. The advancements are pronounced, from computer-generated VR experiences that are indistinguishable from reality to cars and robots that can go about their business without human handlers or interference. Suddenly the future doesn’t seem so far away.
Edited by Maurice Nagle