Nvidia takes the message of supercomputing on the desktop to the world’s hungriest market for advanced technology. Kathleen Maher reports from Beijing, China.
By Kathleen Maher
[Editor’s note: A link to a YouTube video of Jen-Hsun Huang’s keynote address from the 2011 Nvidia GPU Technology Conference is posted at the bottom of this article.]
Nvidia’s roadshow, the GPU Technology Conference (GTC), and its Emerging Technologies sideshow have gained new steam with the introduction of Maximus this winter. Maximus, as you must know, combines a Tesla board with a Quadro workstation GPU board in the same system and, as Nvidia claims, puts a supercomputer in your workstation. Maximus helps push GPU compute out further into the mainstream of computing and opens up new doors for Nvidia.
That’s why Nvidia was banging on China’s door with the first-ever GTC in China. Conferences are hard to pull off in China. People have to be convinced they’re getting valuable information before they will pay for registration and take time off from work. Nvidia got a good showing for its first year. It didn’t hurt that Nvidia gave away graphics boards—not a bad qualifier for attendees, if you think about it. Now that attendees have hit the classes and learned how to take advantage of GPU processors for a variety of tasks, Nvidia can at least be sure they have an Nvidia-based GPU board to work on and, it is hoped, to use for development.
Making GPU computing easier
However, if GPU compute is going to reach a mass market as Nvidia hopes (and AMD hopes, and Intel kind of hopes), then it’s got to get easier. The mission of GTC is to bring the gospel of GPU compute to a wider audience, so it offers training in GPU compute techniques, concentrated for the most part on CUDA. This winter, the Nvidia train has gone from Autodesk University to the Supercomputing conference and on to China. Along the way, the message has been Maximus and OpenACC with its compiler directives, all described below.
In China, Nvidia CEO and co-founder Jen-Hsun Huang (pronounced “Jensen Wong”) talked about the importance of gaming, as you might expect, but he used gaming as a platform to talk about the potential for physics, simulation, rendering, and other capabilities that can take advantage of the GPU. There was just as much emphasis on CAD applications as there was on games.
OpenACC
Ever since GPU computing arrived to complicate the lives of developers, there’s been a longing in the industry for tools that can automate the process of programming so that applications can take advantage of all the computer’s resources. Programming for multiple processors was hard enough; throw in systems using both GPUs and CPUs and it gets even more complicated.
OpenACC is a high-level tool being developed out of work done by The Portland Group (PGI), CAPS, Cray, and Nvidia. The idea is that programmers can add notes for the compiler, called directives, which identify areas of code that can be accelerated. OpenACC works with C, C++, and Fortran code. Developers don’t have to actually rewrite the underlying code; the PGI compiler takes care of the heavy lifting.
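The directives idea can be sketched in a few lines of C. In this illustrative example (the function name and sizes are ours, not from any Nvidia material), a single pragma marks a SAXPY loop as a candidate for acceleration. An OpenACC-aware compiler such as PGI’s generates GPU code for the loop; an ordinary C compiler simply ignores the unrecognized pragma and runs the same loop on the CPU:

```c
/* SAXPY (y = a*x + y) with an OpenACC directive. The pragma is a
   note to the compiler, not a change to the code: an OpenACC
   compiler offloads the loop to the GPU, copying x in and y in
   and out, while a compiler without OpenACC support ignores the
   pragma and the loop runs sequentially. */
void saxpy(int n, float a, const float *x, float *y) {
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

The appeal Nvidia is pitching is exactly this: the loop body is the same C the developer already wrote; only the annotation is new.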
Also, as part of Nvidia’s newfound interest in openness, the company announced the release of the source code for its LLVM-based CUDA compiler. Nvidia says it will make the code available to academic researchers and software tool vendors so that GPU support can be enabled for more programming languages and CUDA applications can be supported on other processor architectures.
Meeting the exascale challenge
Huang promised attendees at GTC in China a new era of computing and he predicted its arrival for 2019. Huang said the computer industry is being challenged to push past the barriers imposed by power to reach exascale computing. “Size isn’t a problem,” he said. But, he observed, appropriately enough, that building an exascale computer using CPUs alone would require a power source the size of Beijing National Stadium (better known as the Bird’s Nest), which just happened to be situated outside the China Convention Center where Huang was giving his speech. In other talks Nvidia likes to say it would take the Hoover Dam to power an exascale computer built using CPUs.
The Titan computer, being built at the Oak Ridge National Laboratory, is a Cray XK6 supercomputer that will use 18,000 Nvidia GPUs. It’s being designed to deliver 20 petaflops of peak performance, which would make it the world’s fastest supercomputer. Huang told the audience at GTC that Titan will be an exascale computer by 2019, and that it will do it using only 20 megawatts.
Maximus is a step on the way to new computing platforms that will transform the way we play as well as work. Using the PlayStation 3 as an example, Huang promised the audience gaming consoles and computers capable of “tens of teraflops” by 2019, and to demonstrate the quality of images we can expect, he showed footage from the Assassin’s Creed movie, which is being rendered frame by frame on a supercomputer. That, he said, will be the quality of games when they can run on the consoles of the future.
No end in sight
Games still pay the bills, but they’re not paying all the bills these days. It’s exciting that the work being done to speed tasks using GPUs will enable artists to interact with their work as soon as they have an idea: they can capture, sculpt, render, and change their minds, and the computer will keep up. Also, the advance of simulation technologies, including fluid dynamics, is going to enable better design as well as better games. Processing capabilities that can churn through genetic data to come up with cures for disease can also accelerate games like Assassin’s Creed. As computers become better able to recreate real-world situations, there’s no end to what they can do for us in science, art, and entertainment.
As for Nvidia’s message of openness, well, we knew this was coming and Nvidia knew this was coming. The trick is in the timing. The industry is reaching a point where the competitive death grip the three major companies try to maintain on development technologies is unproductive; it’s slowing down innovation. Opening up CUDA so that it will run on everything is a great start for Nvidia. OpenACC, which isn’t necessarily confined to Nvidia processors, will also push the cause of GPGPU forward. In the meantime, Nvidia has Maximus, a computing platform that enables advanced workstation computing in an accessible package; its competitors don’t have anything like it. Heck, when you’re ahead, why not open up and encourage the industry to make that great leap forward?
China, with its crazy huge buildings and terrible big problems, seemed like just the place for Nvidia’s audacious message.