NVIDIA CEO Delivers World’s First AI Supercomputer in a Box to OpenAI
The world’s leading non-profit artificial intelligence research team needs the world’s fastest AI system.
That’s why NVIDIA CEO Jen-Hsun Huang last week hand-delivered the world’s first AI supercomputer in a box — our NVIDIA DGX-1 — to OpenAI in San Francisco.
“I thought it was incredibly appropriate that the world’s first supercomputer dedicated to artificial intelligence would go to the laboratory that was dedicated to open artificial intelligence,” Huang said.
OpenAI’s researchers will put the first production DGX-1 — which packs 170 teraflops of computing power, the equivalent of 250 conventional servers — to work on artificial intelligence’s toughest problems.
OpenAI’s team is working at the cutting edge of a field that promises incredible advances. Imagine artificial personal assistants that coordinate our digital lives, as well as autonomous cars and robots that are accessible to everyone.
Doing that will take technology with the computing power to keep up with OpenAI’s researchers. Building DGX-1 took 3,000 people working for three years, Huang explained.
“So if this is the only one ever shipped, this project would cost $2 billion,” he said.
The Xerox PARC of AI
OpenAI’s researchers are eager to put it to work.
“The DGX-1 is a huge advance,” OpenAI Research Scientist Ilya Sutskever said. “It will allow us to explore problems that were completely unexplored before, and it will allow us to achieve levels of performance that weren’t achievable.”
OpenAI — already hailed by some as the “Xerox PARC of AI” — was founded last year to advance digital intelligence in ways that will benefit all humanity.
“Artificial intelligence has the potential to be the most positive technology that humans ever create,” said OpenAI Chief Technology Officer Greg Brockman. “It has the potential to unlock the solutions to problems that have really plagued us for a very long time.”
Talking with Machines
One of the keys to tackling these challenges is what OpenAI’s researchers call “generative modeling.” If a machine is smart enough not just to recognize speech, but to use that data to generate appropriate responses on its own, it will behave more intelligently.
“You can take a large amount of data that would help people talk to each other on the internet, and you can train, basically, a chatbot, but you can do it in a way that the computer learns how language works and how people interact,” said OpenAI Research Scientist Andrej Karpathy.
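The core idea can be sketched in a few lines of code. The toy example below is purely illustrative — OpenAI’s research relies on large neural networks trained on GPUs, and the `chat_logs` string here is a hypothetical stand-in for real conversational data — but it shows the basic pattern of a generative model: learn statistics from text, then sample new text from what was learned.

```python
# Toy illustration only: a character-level bigram "generative model" in plain
# Python. Real systems use large neural networks; this sketch just shows the
# idea of learning how text behaves from data and then generating new text.
import random
from collections import defaultdict

def train(corpus):
    """Count, for each character, which characters tend to follow it."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, seed, length=80):
    """Sample a continuation one character at a time from the learned counts."""
    out = [seed]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

# Hypothetical usage: chat_logs stands in for a large conversational dataset.
chat_logs = "hello there! how are you? hello again, how is it going?"
model = train(chat_logs)
print(generate(model, seed="h"))
```

Scaling this idea from simple counts to deep neural networks, and from a toy string to vast amounts of real conversation, is exactly where the computing power of systems like DGX-1 comes in.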
Unleashing DGX-1, the First AI Supercomputer in a Box
The key to all this: speed. Researchers today are limited by the computational power in their systems.
“Our advances depend on GPUs being fast. Speed of our computers is, in some sense, the lifeblood of deep learning,” Sutskever said.
[Photo: Signed, sealed, delivered. Huang and the team at OpenAI sign the first DGX-1 AI supercomputer in a box to be delivered.]
“One very easy way of always getting our models to work better is to just scale the amount of compute,” Karpathy said. “So right now, if we’re training on, say, a month of conversations on Reddit, we can, instead, train on entire years of conversations of people talking to each other on all of Reddit.”
“And then we can get much more data in terms of how people interact with each other. And, eventually, we’ll use that to talk to computers, just like we talk to each other.”
Projects like these are the reason why we built DGX-1, and why we’ll be delivering DGX-1s to top AI research teams all over the world in the weeks ahead.