Nvidia on Monday announced several AI computing initiatives for enterprise AI product development and operation, including unveiling the company’s cloud-hosted Base Command AI software development platform with NetApp and dozens of new x86 servers from leading OEMs that are certified to run Nvidia AI Enterprise software.
The Santa Clara, California-based chipmaker is talking up AI at Computex 2021, which is returning to Taiwan as a hybrid in-person and virtual event this week after the cancellation of 2020’s expo due to the COVID-19 pandemic. Nvidia and the National Energy Research Scientific Computing Center (NERSC) last week flipped the “on” switch for Perlmutter, billed as the world’s fastest supercomputer for AI workloads.
The new Base Command platform gives developers access to the cloud-hosted computing power of Nvidia DGX SuperPOD AI supercomputers and NetApp data management tools. It is available to early-access customers through a $90,000 monthly subscription, Nvidia said in a press briefing last week.
Nvidia touted Base Command as a way for enterprise developers to "quickly move their AI projects from prototypes to production" with software designed "for large-scale, multi-user and multi-team AI development workflows hosted either on-premises or in the cloud." The platform lets numerous researchers and data scientists "simultaneously work on accelerated computing resources, helping enterprises maximize the productivity of both their expert developers and their valuable AI infrastructure," the company said.
Base Command offers a single-pane-of-glass interface for viewing AI software development through integrated monitoring and reporting dashboards, with command-line APIs also available. Among the AI developer tools included are Nvidia's NGC catalog of AI and analytics software, APIs for integrating with MLOps software, and Jupyter notebooks, the chipmaker said.
New x86 servers certified for Nvidia AI Enterprise workloads
Nvidia also announced the availability of new AI-optimized servers from major computer manufacturers as part of its Nvidia-certified systems program. These new systems are certified to run Nvidia AI Enterprise software and are either available now or coming later this year from OEMs. Participating companies include Advantech, Altos, ASRock Rack, Asus, Dell Technologies, Gigabyte, Hewlett Packard Enterprise, Lenovo, Nettrix, QCT, and Supermicro.
New x86 servers based on Nvidia Ampere architecture GPUs are available now, the company said. Nvidia-certified systems using BlueField-2 DPUs will come out later this year, and non-x86 machines powered by Arm CPUs will arrive in 2022.
“Enterprises across every industry need to support their innovative work in AI on traditional datacenter infrastructure,” Nvidia head of enterprise computing Manuvir Das said in a statement. “The open, growing ecosystem of Nvidia-certified systems provides unprecedented customer choice in servers validated by Nvidia to power world-class AI.”
Das said the new servers will become “some of the highest-volume x86 servers used in mainstream datacenters, bringing the power of AI to a wide range of industries, including health care, manufacturing, retail, and financial services.”
The new Nvidia-certified systems are expected to run software such as the Nvidia AI Enterprise suite of AI and data analytics software on VMware vSphere, Nvidia Omniverse Enterprise for design collaboration and advanced simulation, and Red Hat OpenShift for AI development, with additional support for Cloudera data engineering and machine learning modeling tools, Das said.