Nvidia is teaming up with Microsoft Azure to introduce the NDv2 instance, now in preview, for supercomputing in the cloud. The instance can supply up to 800 Nvidia Tesla V100 GPUs, chips designed with deep learning in mind, via the cloud. CEO Jensen Huang shared the news today onstage at SC19, a supercomputing conference in Denver.
Nvidia also today released a reference design platform for companies to create Arm-based servers for supercomputers that can carry out high-performance computing or large AI simulations. Nvidia will work with Arm partners like Fujitsu to ensure compatibility between Arm CPUs and Nvidia GPUs, and companies like Cray and Hewlett Packard Enterprise (HPE) plan to build hyperscale cloud-to-edge servers based on the design. HPE completed its $1.4 billion acquisition of supercomputing company Cray in September.
The news comes the same day Amazon Web Services shared plans to launch some of its most powerful cloud EC2 instances ever, powered by AMD’s EPYC Rome processors, and a day after Intel revealed its Ponte Vecchio GPU for data centers.
In a conversation with VentureBeat’s Dean Takahashi shortly after the release of Intel’s new GPU architecture, Huang said he questions whether competitors have the software stack necessary to scale supercomputing tasks.
As part of today’s news, Nvidia is also introducing an Arm-compatible software development kit, following through on its June pledge to bring its Cuda-X AI and HPC software to Arm CPUs for the creation of exascale supercomputers.
At an event last week, Intel VP of IoT Jonathan Ballon told VentureBeat that since its launch two years ago, OpenVINO software has seen the fastest adoption rate of any tool in company history, outpacing Cuda’s growth rate, though he shared no exact figures.