Nvidia enables OpenAI with 170 TFLOPS

CEO hand-delivers an artificial intelligence supercomputer to non-profit researchers.  

At the GPU Technology Conference in April, Nvidia introduced its supercomputer in a box, the DGX-1. Now Nvidia has delivered one to OpenAI, a non-profit artificial intelligence research company based in San Francisco. The OpenAI website says the group's goal is to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Nvidia CEO Jen-Hsun Huang installs the DGX-1 at OpenAI headquarters in San Francisco while OpenAI benefactor Elon Musk looks on. (Source: Nvidia)

Nvidia crows that the DGX-1 packs 170 teraFLOPS of computing power (or the equivalent of 250 conventional servers). The expectation is that the DGX-1’s compute power will shorten the training time required for OpenAI’s intelligent agents to learn, understand, and solve problems. The hope is to see compute times reduced from months to days, or even hours.

Doing that will take technology with the computing power to keep up with OpenAI’s researchers. “Building DGX-1 took 3,000 people working for three years,” Nvidia CEO Jen-Hsun Huang explained. “So if this is the only one ever shipped, this project would cost $2 billion.”

OpenAI—hailed by some as the “Xerox PARC of AI”—was founded last year with the aim of advancing digital intelligence in ways that will benefit all humanity. The team is drawn to deep learning's power to solve a variety of problems with adaptable architectures, as opposed to the specialized algorithms artificial intelligence has relied on in the past. “Artificial intelligence has the potential to be the most positive technology that humans ever create,” said OpenAI Chief Technology Officer Greg Brockman. “It has the potential to unlock the solutions to problems that have really plagued us for a very long time.”

One of the keys to tackling these challenges is what OpenAI’s researchers call “generative modeling.” If a machine is smart enough not just to recognize speech, but to use that data to generate appropriate responses on its own, then it will behave more intelligently. In a recent blog post, OpenAI team members Ilya Sutskever, Dario Amodei, and Sam Altman suggested four projects that might be tackled. For example [I paraphrase]: detect whether there are powerful AI programs already at work that we don’t know about that might do harm; build an agent that can win online programming competitions, because a program that can write other programs would be cool; develop AI systems that can defend against sophisticated hackers making heavy use of AI methods; and create a complex simulation with many long-lived agents to mimic the development of language and culture.
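The generative-modeling idea—learning from data and then producing new output rather than merely classifying it—can be illustrated with a toy sketch. This is my own illustration, not OpenAI’s code: a character-level bigram model that counts which character tends to follow which, then samples new text from those counts.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count, for each character, which characters follow it and how often."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(model, start, length, seed=0):
    """Sample a continuation one character at a time from the learned counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:          # dead end: no observed successor
            break
        chars = list(followers)
        weights = [followers[c] for c in chars]
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat and the cat ate the rat"
model = train_bigram_model(corpus)
sample = generate(model, "t", 20)
print(sample)
```

Real generative models use deep networks rather than bigram counts, but the shape is the same: fit a model of the data, then draw from it to produce something new.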

The blog invites people interested in tackling these kinds of problems to apply to OpenAI. Now there’ll be a powerful GPU-powered supercomputer to work with.