Intel unveiled its latest infrastructure processing unit (IPU) with plans to take on its rivals through the year 2026.

With this roadmap, Intel said it plans to create end-to-end programmable networks, deploying its full portfolio of IPU platforms based on field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs).

The company will also offer open software frameworks designed to better serve customer needs with improved data center efficiency and manageability. Intel made the announcement at its Intel Vision conference in Dallas, Texas, today.

About the IPU

An IPU is a programmable networking device designed to enable cloud and communication service providers, as well as enterprises, to improve security, reduce overhead and free up performance for central processing units (CPUs).

With an IPU, customers can better utilize resources via a secure, stable, programmable solution that provides greater security and isolation for both service provider and tenant, Intel said.

About the IPDK

Intel unveiled its IPU portfolio at Intel Vision.

Intel said an open ecosystem is the best way to extract the value of the IPU. Intel’s IPUs are enabled by a foundation powered by open-source software, including the Infrastructure Programmer Development Kit (IPDK), which builds upon the company’s history of open engagements with SPDK, DPDK and P4.

Intel said it has worked with the community to simplify developer access to the technology and help customers build cloud orchestration software and services. The IPDK lets customers focus on their applications, not on the underlying APIs or hardware.

Intel’s IPU roadmap

Intel said its second-generation 200Gbps IPU, code-named Mount Evans, is its first ASIC IPU, while Oak Springs Canyon is its second-generation FPGA IPU. Both are coming this year, shipping to Google and other service providers.

Intel also said that in 2023 and 2024, its third-generation 400Gbps IPUs, code-named Mount Morgan and Hot Springs Canyon, are expected to ship to customers and partners.

And in 2025 and 2026, Intel said it will ship 800Gbps IPUs to customers and partners. The Mount Evans IPU was architected and developed with Google Cloud; it integrates lessons learned from multiple generations of FPGA SmartNICs and Intel’s first-generation FPGA-based IPU.

Hyperscale-ready, it offers high-performance network and storage virtualization offload while maintaining a high degree of control. The Mount Evans IPU will ship in 2022 to Google and other service providers; broad deployment is expected in 2023.

Habana Labs’ Gaudi2 deep learning training processor

Intel’s Gaudi2 processor.

Meanwhile, Intel’s Habana Labs division launched the Gaudi2 processor, a second-generation Gaudi processor for training. And for inference deployments, it introduced the Greco processor, the successor to the Goya processor.

The processors are purpose-built for AI deep learning applications. Implemented on a seven-nanometer process, they use Habana’s high-efficiency architecture to provide customers with higher-performance model training and inferencing for computer vision and natural language applications in the data center.

Greco is a second-generation inference processor for deep learning. It is also built on a seven-nanometer process and will debut in the second half of 2022.

At the conference, Habana demonstrated Gaudi2 training throughput on computer vision (ResNet-50 v1.1) and natural language processing (BERT Phase-1 and Phase-2) workloads of nearly twice that of the rival Nvidia A100 80GB processor, Intel said.

For data center customers, the task of training deep learning models is increasingly time-consuming and costly due to the growing size and complexity of datasets and AI workloads, Intel said. Gaudi2 was designed to bring improved deep learning performance and efficiency – and choice – to cloud and on-premises systems.

To increase model accuracy and recency, customers require more frequent training. According to IDC, 74% of machine learning (ML) practitioners surveyed in 2020 run five to 10 training iterations of their models, more than 50% rebuild models weekly or more often and 26% rebuild models daily or even hourly.

And 56% of those surveyed cited cost-to-train as the number one obstacle to their organizations taking advantage of the insights, innovations and enhanced end-customer experiences that AI can provide. The Gaudi platform solutions, first-gen Gaudi and Gaudi2, were created to address this growing need.

To date, one thousand HLS-Gaudi2s have been deployed in Habana’s data centers in Israel to support research and development for Gaudi2 software optimization and to inform further advancements in the forthcoming Gaudi3 processor.

Habana is partnering with Supermicro to bring the Supermicro Gaudi2 Training Server to market in the second half of 2022. It is also working with DDN to deliver a turnkey solution that pairs the Supermicro server with DDN’s AI400X2 AI storage system.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.