Liqid and Orange Silicon Valley have teamed up to create a prototype for a “composable” graphics supercomputer. That means it is built from graphics chips and can scale up or down to match the demands of its users. The companies believe it could be used as an on-demand infrastructure for artificial intelligence, deep learning, virtual reality, and graphics rendering.
It’s one more project that could make companies more agile when they tap computing resources. In this case, a single server could have one graphics chip or many. The companies are showing off the prototype supercomputer as a demonstration of the next generation of graphics processing unit (GPU) computers, which can be used to render high-end graphics akin to Pixar’s next animated film. They are showing the demo at the Supercomputing 2017 conference in Denver, Colorado, this week.
It is a computer designed for the on-demand economy, sort of like Amazon Web Services, but for customers who need a lot of graphics processing. By “composable,” the companies mean that the system is modular: you can add one or many graphics chips to a server board, depending on what you need. It’s like using GPUs as a utility, where you can add or subtract GPUs depending on the job at hand. Another way to describe it is to “liquefy” infrastructure, or make it as flexible as using and paying for water through a utility, said Gabriel Sidhom, the vice president of technology at Orange Silicon Valley, in an interview.
“You can scale this liquefied GPU infrastructure almost infinitely,” Sidhom said.
It’s an unusual project for Orange Silicon Valley, the research arm of European telecommunications carrier Orange (formerly France Telecom). Sidhom said that his company might benefit from such infrastructure over time. The idea is to use the parallel processing capability of GPUs to transcend the serial processing bottlenecks of central processing units (CPUs), allowing for much faster performance on massive, unstructured data sets, such as those in visual recognition tasks. So far, by using GPU computing, Orange has cut its infrastructure costs by a factor of 40, said Soumik Sinharoy, the senior product manager at Orange Silicon Valley.
Liqid’s composable platform makes it easy to mix and match GPUs in the data center, making it truly adaptive and “hot-swappable,” meaning GPUs are easy to plug in or remove in either Windows or Linux environments.
“We believe the future of the data center is composable,” said Jay Breakstone, the CEO of Liqid, in an interview. “We believe companies will have access to infinite GPUs. We always pondered that, when we build a server, will it always stay the same in the rack and never change? We built a flexible platform where you can dynamically add GPUs.”
The whole goal is to reduce the “lumpiness of infrastructure.” Five years ago, deployments of servers in data centers were rigid: customers got one kind of CPU regardless of what kind of task they were running.
Sinharoy said that customers still can’t really make easy changes at the hardware level when their demand for infrastructure changes.
“We see a future where users can configure on demand at a bare metal level, at the rack level, and at the data center level,” Sinharoy said. “The vision is for a multipurpose, multi-tenant infrastructure. My users should be able to demand 20 or 30 GPUs at the click of a button, and they shouldn’t have to share that with any other users.”
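The on-demand, multi-tenant composability Sinharoy describes can be sketched as a simple resource pool, where each tenant gets dedicated GPUs and returns them when a job finishes. The class and method names below are purely illustrative, not Liqid’s actual interface:

```python
# Toy model of a composable GPU pool: tenants request dedicated GPUs
# "at the click of a button" and release them when their job is done.
# All names here are hypothetical, for illustration only.

class GPUPool:
    def __init__(self, total_gpus):
        self.free = set(range(total_gpus))  # unassigned GPU IDs
        self.allocations = {}               # tenant name -> set of GPU IDs

    def compose(self, tenant, count):
        """Dedicate `count` GPUs to a tenant; tenants never share GPUs."""
        if count > len(self.free):
            raise RuntimeError("not enough free GPUs in the pool")
        gpus = {self.free.pop() for _ in range(count)}
        self.allocations.setdefault(tenant, set()).update(gpus)
        return gpus

    def release(self, tenant):
        """Return a tenant's GPUs to the pool for the next workload."""
        self.free |= self.allocations.pop(tenant, set())

pool = GPUPool(total_gpus=64)
pool.compose("render-farm", 30)   # a rendering job claims 30 GPUs
pool.compose("dl-training", 20)   # a deep learning job claims 20 more
print(len(pool.free))             # 14 GPUs still unassigned
pool.release("render-farm")
print(len(pool.free))             # the 30 GPUs return to the pool: 44 free
```

The key property is exclusivity: a tenant’s GPUs come out of the shared pool but are not shared with anyone else while allocated, matching Sinharoy’s “they shouldn’t have to share that with any other users.”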
Liqid has 38 employees. Both companies work with GPU vendors such as Nvidia, whose GeForce Grid technology enables a single GPU to be shared by multiple users for everything from games to other applications. That enables cloud-gaming applications, such as streaming gameplay to a user over the Internet.
“The result is more utilization of resources,” said Breakstone.
Liqid also announced this morning that the composable GPU platform will be available as the Liqid Grid and Liqid Command Center in the first quarter.