Head over to our on-demand library to view sessions from VB Transform 2023. Register Here
In a highly anticipated keynote at COMPUTEX Taipei, Nvidia founder and CEO Jensen Huang unveiled a range of cutting-edge systems, software and services that leverage the power of generative AI to reshape various sectors — from advertising to manufacturing to telecommunications. The live event marked Huang’s first in-person keynote since the onset of the pandemic.
Huang said he believes these innovations will not only facilitate new business models but also significantly enhance the efficiency of existing models across a multitude of industries.
One of the keynote’s highlights was the official launch of Grace Hopper, a platform that combines the energy-efficient Nvidia Grace CPU with the high-performance Nvidia H100 Tensor Core GPU. This all-in-one module empowers enterprises to achieve unparalleled AI performance, said Huang.
Furthermore, Huang introduced the DGX GH200, an AI supercomputer with massive shared memory that integrates up to 256 Nvidia Grace Hopper Superchips into a single data-center-sized GPU.
Advanced performance and memory
With an exaflop of AI performance and 144 terabytes of shared memory, nearly 500 times that of its predecessor, the DGX GH200 enables developers to build intricate language models for GenAI chatbots, advanced algorithms for recommender systems and sophisticated graph neural networks for tasks like fraud detection and data analytics. Huang said that tech giants like Google Cloud, Meta and Microsoft are already exploring the capabilities of DGX GH200 for their generative AI workloads.
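A quick back-of-the-envelope check shows what those headline numbers imply per node, taking the 144 terabytes of shared memory and the 256 Grace Hopper Superchips mentioned earlier:

```python
# Back-of-the-envelope check of the DGX GH200 memory figures cited above:
# 144 TB of shared memory pooled across up to 256 Grace Hopper Superchips.
total_memory_tb = 144
superchips = 256

per_chip_gb = total_memory_tb * 1024 / superchips
print(f"{per_chip_gb:.1f} GB per superchip")  # 576.0 GB per superchip
```

That works out to 576 GB per superchip, consistent with each Grace Hopper module pooling CPU LPDDR5X memory together with GPU HBM3.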
Huang emphasized that "DGX GH200 AI supercomputers integrate Nvidia’s most advanced accelerated computing and networking technologies, pushing the boundaries of AI."
Huang also unveiled the Nvidia Avatar Cloud Engine (ACE) for Games, a foundry service that empowers developers to create and deploy custom AI models for speech, conversation and animation. ACE gives non-player characters conversational abilities, allowing them to respond to players’ questions with evolving, lifelike personalities.
The toolkit encompasses essential AI foundation models such as Nvidia Riva for speech recognition and transcription, Nvidia NeMo for generating customized responses and Nvidia Omniverse Audio2Face for animating those responses.
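The flow the toolkit describes — player speech in, a generated reply, an animated response out — can be sketched as a simple three-stage pipeline. The functions below are hypothetical stand-ins, not the actual Riva, NeMo or Audio2Face APIs; each stub marks where the real component would slot in:

```python
# Illustrative three-stage NPC pipeline, mirroring the ACE toolkit described
# above. All functions are hypothetical stand-ins, NOT real Nvidia APIs.

def transcribe(audio: bytes) -> str:
    """Speech-recognition stage (the role Nvidia Riva plays)."""
    # Stub: a real implementation would run ASR on the audio buffer.
    return "What do you sell here?"

def generate_reply(transcript: str, persona: str) -> str:
    """Response-generation stage (the role Nvidia NeMo plays)."""
    # Stub: a real implementation would prompt a persona-tuned language model.
    return f"As the town {persona}, I trade in rare herbs and remedies."

def animate(reply: str) -> dict:
    """Animation stage (the role Omniverse Audio2Face plays)."""
    # Stub: a real implementation would synthesize speech and drive facial curves.
    return {"text": reply, "frames": len(reply.split())}

def npc_respond(audio: bytes, persona: str = "shopkeeper") -> dict:
    """Chain the three stages: player audio in, animated reply out."""
    transcript = transcribe(audio)
    reply = generate_reply(transcript, persona)
    return animate(reply)

result = npc_respond(b"\x00\x01", persona="shopkeeper")
print(result["text"])
```

The point of the chained design is that each stage is swappable: the same pipeline shape holds whether the response model is a cloud-hosted LLM or one tuned per character.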
Additionally, Nvidia announced its collaboration with Microsoft to drive innovation in the generative AI era for Windows PCs. The partnership involves developing enhanced tools, frameworks and drivers that streamline the AI development and deployment process on PCs.
The collaboration aims to enhance and expand the installed base of more than 100 million PCs with RTX GPUs featuring Tensor Cores, thereby boosting the performance of more than 400 AI-accelerated Windows applications and games.
Leveraging generative AI for digital advertising and manufacturing workloads
Huang said the potential of generative AI extends to the digital advertising industry, where Nvidia is collaborating with the marketing services organization WPP. Together, the companies have developed a content engine on the Omniverse Cloud platform.
This engine lets creative teams connect their 3D design tools, such as Adobe Substance 3D, to create digital twins of client products within Nvidia Omniverse. Using GenAI tools trained on responsibly sourced data and powered by Nvidia Picasso, these teams can now quickly produce virtual sets.
Nvidia said that this newfound capability empowers WPP clients to generate a multitude of ads, videos and 3D experiences customized for global markets, accessible on any web device.
A focus on manufacturing
The company also detailed its push into manufacturing, a $46 trillion industry comprising approximately 10 million factories. Huang highlighted that by leveraging Nvidia technologies, electronics manufacturers like Foxconn Industrial Internet, Innodisk, Pegatron, Quanta and Wistron are transitioning to digital workflows, bringing the vision of fully digital smart factories closer to reality.
According to Huang, “The world’s largest industries create physical things. By digitally building them first, we can save billions.”
The integration of Omniverse and generative AI APIs has enabled these companies to establish connections between design and manufacturing tools, thereby constructing digital replicas of their factories known as digital twins.
Additionally, the companies are utilizing Nvidia Isaac Sim for simulating and testing robots and Nvidia Metropolis — a vision AI framework — for automated optical inspection. The company’s latest addition, Nvidia Metropolis for Factories, enables the creation of custom quality-control systems, providing manufacturers with a competitive edge and empowering them to develop cutting-edge AI applications.
A new range of AI supercomputers and versatile server solutions
Nvidia also revealed that it is building its own AI supercomputer, Nvidia Helios, which is expected to be operational later this year. The supercomputer will link four DGX GH200 systems with Nvidia Quantum-2 InfiniBand networking delivering bandwidth of up to 400Gb/s, significantly boosting data throughput for training large-scale AI models.
In addition to this groundbreaking development, Nvidia has introduced the Nvidia MGX, a modular reference architecture that empowers system manufacturers to efficiently and cost-effectively create diverse server configurations tailored for AI, HPC and Nvidia Omniverse applications.
With the MGX architecture, manufacturers can build standardized CPU-only and GPU-accelerated servers from modular components. These configurations support a range of GPUs, CPUs (both x86 and Arm), data processing units (DPUs) and network adapters.
Furthermore, the MGX configurations can be accommodated in air- and liquid-cooled chassis. Leading the way in adopting the MGX designs are QCT and Supermicro, with their respective introductions scheduled for August. Other prominent companies such as ASRock Rack, ASUS, GIGABYTE and Pegatron are expected to follow suit.
Nvidia’s impact on 5G infrastructure and cloud networking
Huang announced collaborative efforts to revolutionize 5G infrastructure and cloud networking. For example, a partnership with a telecom giant in Japan will develop a distributed network of data centers that leverages Nvidia’s Grace Hopper and BlueField-3 DPUs within modular MGX systems.
By incorporating Nvidia Spectrum Ethernet switches, the data centers will deliver the precise timing required by the 5G protocol, improving spectral efficiency and reducing energy consumption. The platform holds promise for various applications, including autonomous driving, AI factories, augmented and virtual reality, computer vision and digital twins.
Furthermore, Huang introduced Nvidia Spectrum-X, a purpose-built networking platform designed to enhance the performance and efficiency of Ethernet-based AI clouds. Combining Spectrum-4 Ethernet switches with BlueField-3 DPUs and software, Spectrum-X delivers a 1.7X boost in AI performance and power efficiency. Leading system manufacturers such as Dell Technologies, Lenovo and Supermicro already offer Nvidia Spectrum-X, Spectrum-4 switches and BlueField-3 DPUs.
Generative AI supercomputing centers
Nvidia is also making significant strides in establishing generative AI supercomputing centers worldwide.
For example, in Israel, the company is constructing Israel-1, a cutting-edge supercomputer within its local data center. Israel-1 comprises Dell PowerEdge servers, the Nvidia HGX H100 supercomputing platform, and the Spectrum-X platform with BlueField-3 DPUs and Spectrum-4 switches. Its primary goal is to accelerate local research and development efforts.
In Taiwan, two new supercomputers are in development: Taiwania 4 and Taipei-1. Taiwania 4, built by ASUS and set to launch next year, pairs Arm-based Grace CPUs with an Nvidia Quantum-2 InfiniBand network. Nvidia said it is one of Asia’s most energy-efficient supercomputers.
Taipei-1, which is owned and operated by Nvidia, will feature 64 DGX H100 AI supercomputers, 64 Nvidia OVX systems and Nvidia networking.
The company believes these additions will significantly enhance local research and development initiatives.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.