Intel announced its second-generation Xeon Scalable processors at an event in San Francisco today.

The Xeon Platinum 9200 processor is the flagship of the lineup, with as many as 56 cores, or 112 cores in a two-chip system, for data-centric computing problems. Intel says the processor delivers an average performance gain of 1.33 times on workloads compared to the previous generation of chips.

Intel executive vice president Navin Shenoy, head of the company's Data Center Group, said it was “truly a beast” of a processor, and that it was part of a whole portfolio of data-centric chips being launched today. The devices are shipping now.

“This is a big day for us,” Shenoy said. “It is the first truly data-centric launch in our history.”

The goal is to drive processing not only to the central processing unit (CPU) but also to Intel products in the field-programmable gate array (FPGA) and memory spaces.

Shenoy noted that half the world’s data was created in the last two years and only 2 percent of it has been analyzed.

“That leads us to great optimization,” he said.

Above: Bob Swan, CEO of Intel. (Image Credit: Dean Takahashi)

The trends include a proliferation of cloud computing, the growth of AI and analytics, and cloudification of the network and the edge. In the past five years, Intel saw a 50 percent increase in compute demand, and it predicts the same again in the next five years.

The demand for diverse workloads is increasing. So Intel has been investing to move data faster with Ethernet and silicon photonics, store more with Optane products, and process everything with CPUs, FPGAs, and custom chips.

Bob Swan, CEO of Intel, said onstage that Intel is targeting a total available market of $300 billion for data-centric products, far beyond the size of the market for personal computer chips that Intel has traditionally played in.

AI represents a $2.5 billion chip opportunity today and will grow to $8 billion to $10 billion by 2022, Shenoy said.

He also said Intel DL Boost, a set of special deep learning instructions in the Intel Xeon Platinum 8200 processor, can improve AI processing performance by as much as 14 times.

Among the customers: Amazon will use the processors, code-named Cascade Lake, in various datacenter roles, such as powering Alexa services on AWS.

Patrick Moorhead, analyst at Moor Insights & Strategy, said in an email:

Overall, it was hard to ignore that Intel has become a full datacenter technology provider with huge investments in compute, storage and networking. It’s all-in on heterogeneous compute across CPU, graphics processing unit (GPU), FPGA and application specific integrated circuits (ASICs). This is the new Intel.

The most interesting thing for me about the new Xeons is the addition of machine learning capabilities (DL Boost) built into the chip, which, when latency counts, is good for specific inference workloads like recommendation engines. Not too many know that CPUs already dominate ML inference usage, and this just gave datacenters another reason to continue doing this for certain workloads. This isn’t Intel’s big discrete AI accelerator play, as those are slated to become real in 2020.

The Intel-stated 33% raw performance boost in the mid-range of the Xeon stack was surprising and could put it in a better competitive position. The 2nd Gen Xeons also support Optane Persistent Memory, which, for applications like SAP Hana, could radically improve total cost of ownership and speed and could become a future “no-brainer.” I believe Intel Xeon’s new tuning and management capabilities like Speed Select and Resource Director will be strongly accepted by cloud and communications service providers.