AR images driven by Nvidia GPUs, streamed to your phone
Nvidia has been in the clouds for ten years or more, and it knows its way around. You may remember GRID, the VMware/Citrix remote computing solution that Nvidia was a part of. And then there’s GeForce Now, its game streaming platform. The company also offers Shield, cloud-based TV delivery, and, of course, its render farms, cloud computing, and AI training. Sometimes you might not even know whether the resource you’re accessing is in the cloud or local.
So, it really isn’t a big surprise when Nvidia shows you a fully rendered, highly detailed image from the cloud on your smartphone using an AR app, is it?
During a keynote presentation at Mobile World Congress in Los Angeles, Nvidia’s CEO Jensen Huang revealed Nvidia’s new AR software, CloudXR. With the Nvidia CloudXR SDK, a fully rendered car model was streamed to a 5G-enabled mobile phone on stage at the Los Angeles Convention Center. It was a realtime, live demo, with no pre-rendered, tricked-up videos.
Jensen pointed out that high-fidelity 3D content and next-generation AR experiences take more computing power than any AR headset or mobile device can handle (including the Nvidia Tegra-based Magic Leap AR glasses). Therefore, Nvidia has introduced a new cloud-based solution to stream AR content to modern devices.
Instead of squeezing models down to a couple of megabytes and constraining rendering to what a phone alone can do, what if you could use a full-power game engine to render gigabyte-sized models in realtime in the cloud, streaming the results to your device to composite with the real world?
The demo was delivered through an app developed with the CloudXR SDK with 3D content built using Autodesk’s VRED 3D design software and streamed via a virtual machine hosted on an Nvidia Quadro RTX 8000 server.
Nvidia’s new CloudXR SDK makes it possible for app developers to run high-resolution 3D content on mobile devices, AR headsets, and smart glasses that would normally require desktop computing power.
The platform enables devices running mobile processors to transmit head pose and other positioning data to Nvidia servers, where CloudXR renders the 3D content for display in the user’s real-world environment and beams it back to the on-device application.
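To make that round trip concrete, here is a minimal, self-contained sketch of the pose-up, frame-down loop in Python. It is illustrative only and does not use the actual CloudXR API; HeadPose, pack_pose, render_on_server, and client_frame_loop are hypothetical names, and the render server is stubbed out locally rather than reached over a real 5G link.

```python
# Illustrative sketch of a remote-rendering loop -- NOT the CloudXR API.
# All names are hypothetical placeholders.
import struct
import time
from dataclasses import dataclass


@dataclass
class HeadPose:
    """6-DoF pose the client samples from its tracking system each frame."""
    x: float
    y: float
    z: float
    qx: float
    qy: float
    qz: float
    qw: float


def pack_pose(pose: HeadPose) -> bytes:
    """Serialize the pose for the uplink to the render server."""
    return struct.pack("<7f", pose.x, pose.y, pose.z,
                       pose.qx, pose.qy, pose.qz, pose.qw)


def render_on_server(pose_bytes: bytes) -> bytes:
    """Stand-in for the GPU server: decode the pose, render, encode a frame.
    Here it just returns a dummy payload tagged with the decoded pose."""
    pose = struct.unpack("<7f", pose_bytes)
    return b"frame-for-pose:" + repr(pose).encode()


def client_frame_loop(frames: int = 3) -> None:
    """One round trip per display frame: send the pose, receive the rendered
    frame, then composite it over the camera view (stubbed out here)."""
    for i in range(frames):
        pose = HeadPose(0.0, 1.6, 0.0, 0.0, 0.0, 0.0, 1.0)  # from the tracker
        frame = render_on_server(pack_pose(pose))  # a network hop in reality
        print(f"frame {i}: received {len(frame)} bytes")  # composite + display
        time.sleep(1 / 60)  # pace the loop to the display refresh


if __name__ == "__main__":
    client_frame_loop()
```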
What do we think?
Several companies have pointed out that the heavy lifting of huge data sets such as maps, translations, geo-picking, and 3D rendering has to be done in the cloud. Simple 3D models of funny creatures for AR games are one thing, but serious, heavy-duty 3D engineering and modeling, such as PTC does with its AR systems, needs a lot of computing horsepower behind it. This is what you’re not hearing from Google and Apple when they talk about AR. Not that they don’t know it, but rather that they think it detracts from the magic of AR.
There are two classes of AR—consumer and commercial. The consumer version has to be no more conspicuous or uncomfortable than a pair of sunglasses. The commercial version (which includes scientific, engineering, medical, and other industrial applications) can tolerate a bit more awkwardness in the headset or glasses, but not much. There’s no way you are going to squeeze even a smartphone into a pair of glasses, let alone a computer system capable of displaying a SLAM-based 3D topo map in realtime while compensating for your geo-location, pose, and movement.
But streaming that kind of data to a client creates another need: high bandwidth and low latency. That need is going to be satisfied by 5G, so Nvidia is in exactly the right place, at exactly the right time, with its CloudXR.
If you’re interested in AR, these issues are explored in my book, Augmented Reality: Where We Will All Live.