Intel revealed in August that its next-generation Xeon processors would launch in fall 2018, and today the company made good on that promise.
The chipmaker debuted two new additions to its data-centric product lineup — the Xeon E-2100 series and Cascade Lake advanced performance — and provided an update on its broader momentum.
“There’s an exponential growth cycle in data driven by the push to the edge, the massive personalization of services, and the insatiable demand for new capability … [but] to the best of our knowledge … only between 1 and 2 percent of that data is being used and analyzed,” Lisa Spelman, vice president and general manager of Intel Xeon products and datacenter marketing, said during a conference call with reporters. “The addition of Cascade Lake advanced performance CPUs and Xeon E-2100 processors to our Intel Xeon processor lineup … demonstrates our commitment to delivering performance-optimized solutions to a wide range of customers.”
Chips in the Xeon E-2100 series are available starting this week through Intel and its distribution partners, and Cascade Lake advanced performance is set to launch in the first half of 2019.
Xeon E-2100 series
Intel’s Xeon E-2100 series — first revealed in July — is built on the company’s 14nm++ Coffee Lake process and aimed at small and medium-sized businesses and cloud service providers, Spelman explained. It’s optimized for tasks like file sharing, storage and backup, virtualization, and general productivity.
Chips in the Xeon E-2100 series have up to six cores clocked at a base frequency of up to 3.8GHz (with Turbo Boost 2.0 up to 4.7GHz), up to 12MB of cache, and a thermal design power (TDP) of up to 95 watts. All but three sport Intel’s UHD Graphics P630, a midrange integrated graphics chip with 24 execution units clocked at 350MHz that lacks embedded DRAM.
(Table: per-SKU specifications — processor, base clock (GHz), Turbo Boost 2.0 (GHz), cores/threads, Intel UHD P630, cache (MB), TDP, and price.)
Xeon E-2100 series processors boast up to 16 lanes of PCI-Express 3.0 and two channels of DDR4-2666 (PC4-21300) ECC memory, supporting up to 128GB. (ECC, for the uninitiated, is a kind of memory that can detect and correct common kinds of data corruption.) USB 3.1 Gen 2 (up to 6 ports), USB 3.0 (up to 10 ports), and SATA Gen 3 (up to 8 ports) are in tow, plus support for Thunderbolt 3.0.
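The correction ECC performs can be illustrated with the classic Hamming(7,4) code, which protects four data bits with three parity bits and can locate and fix any single flipped bit. A toy Python sketch, purely illustrative — real server ECC uses wider SECDED codes over 64-bit words, but the principle is the same:

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(codeword):
    """Recompute the parity checks; a nonzero syndrome is the
    1-based position of a single flipped bit, which we repair."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]  # recovered data bits
```

Flipping any one of the seven codeword bits yields a nonzero syndrome pointing at the flipped position, so the decoder recovers the original data.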
That’s in addition to the latest version of Intel’s Management Engine (ME) — version 12 — and other embedded services, including remote management through Server Platform Services, Active Management Technology, Rapid Storage Technology, Intel 1 Gigabit Ethernet, and Intel Wireless-AC.
Intel claims an overall 48 percent performance improvement over a comparable 2014 Xeon processor, and up to a 1.39 times boost generation-on-generation. And Navin Shenoy, executive vice president at Intel, said at the company’s Data Centric Innovation Summit in July that the processors are 11 times better at AI image recognition tasks than the Skylake-based Xeon Scalable series from 2017.
There’s a caveat (or several), however. Xeon E-2100 series processors are only available in a single-socket configuration, and support for 128GB of memory won’t arrive at launch — it’ll come sometime in 2019 through a BIOS update. Lastly, the CPUs require a Xeon E-enabled motherboard with a specialized workstation C246 chipset; they don’t play nicely with C236-based or 300-series motherboards.
Cascade Lake advanced performance
Intel’s pitching its forthcoming Cascade Lake advanced performance as a “new class” of Xeon Scalable processors — one focused squarely on high-performance computing (HPC), artificial intelligence (AI), and infrastructure-as-a-service (IaaS) workloads. It’s a multichip package comprising two Cascade Lake dies linked by Intel’s high-speed Ultra Path Interconnect, delivering a combined 48 cores and 12 DDR4 memory channels per CPU.
It’s no slouch, as you might expect. On Linpack — a benchmark that measures floating-point performance on dense linear algebra — Cascade Lake advanced performance can achieve up to 1.21 times the performance of Intel’s Xeon Platinum 8180 processor and 3.4 times that of AMD’s 32-core Epyc 7601. And on the Triad kernel of the Sustainable Memory Bandwidth in High Performance Computers (STREAM) benchmark, Intel says it measured 1.83 times and 1.3 times advantages over the Xeon 8180 and Epyc 7601, respectively.
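STREAM’s Triad kernel is deliberately simple: a fused scale-and-add over arrays far larger than cache, so the score reflects memory bandwidth rather than compute. A rough NumPy sketch of the measurement idea — the array size and timing here are illustrative, not the official benchmark procedure:

```python
import time
import numpy as np

def stream_triad(n=10_000_000, q=3.0):
    """Approximate the STREAM Triad kernel a[i] = b[i] + q * c[i]
    and report effective bandwidth. Triad counts 2 reads + 1 write
    of 8-byte doubles per element."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty(n)
    start = time.perf_counter()
    np.multiply(c, q, out=a)  # a = q * c
    np.add(a, b, out=a)       # a = b + q * c
    elapsed = time.perf_counter() - start
    bytes_moved = 3 * n * 8
    return bytes_moved / elapsed / 1e9  # GB/s

print(f"Triad bandwidth: {stream_triad():.1f} GB/s")
```

More memory channels (Cascade Lake advanced performance has 12 per CPU versus six on the Xeon 8180) directly raise the ceiling this kernel measures.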
Last, but not least, Cascade Lake advanced performance can process up to 17 times the number of images per second compared to Intel’s current Xeon Platinum processors, the company claims. That’s thanks in part to new AVX-512 instructions designed to accelerate neural network performance.
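The AVX-512 additions in question are Intel’s DL Boost Vector Neural Network Instructions (VNNI), whose headline instruction, VPDPBUSD, fuses what were previously three separate operations: multiplying unsigned 8-bit activations by signed 8-bit weights and accumulating the products into 32-bit integers. A scalar Python sketch of the arithmetic one 32-bit lane performs (the function name is mine; this ignores vectorization and the saturating VPDPBUSDS variant):

```python
def vpdpbusd_lane(acc, activations, weights):
    """One 32-bit lane of VNNI's VPDPBUSD: four u8 x s8 products
    summed into an int32 accumulator in a single instruction."""
    assert len(activations) == len(weights) == 4
    for a, w in zip(activations, weights):
        assert 0 <= a <= 255      # unsigned 8-bit activation
        assert -128 <= w <= 127   # signed 8-bit weight
        acc += a * w
    return acc

print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, -1, 2, 5]))  # → 34
```

Quantized inference spends most of its time in exactly these dot products, which is why collapsing the multiply-accumulate chain pays off so heavily for image recognition.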
Spelman said that in the third quarter of 2018, Intel shipped more than 8 million processors into an annual server, storage, and network TAM that’s greater than 30 million units. (The company first reported those numbers in October during its Q3 earnings call.) Clients include heavy hitters like Alibaba Cloud, Amazon Web Services, Baidu, Google Cloud, Kingsoft, Dell EMC, Novartis, Cray, and DataRobot, among others.
She also took the opportunity to give an update on Optane DC Persistent Memory (PM), which Intel announced earlier this year and began shipping in August.
The newest entry in its 3D XPoint memory portfolio — a non-volatile memory technology developed jointly by Intel and Micron Technology that, according to the former, offers memory-like performance at a significantly lower price point — is pin-compatible with DDR4 and pairs large Optane modules (up to 512GB each) with smaller DRAM pools (for instance, 256GB of DDR4 RAM combined with 1TB of Optane DC PM). It launched alongside a new Persistent Memory Development Kit (PMDK) designed to help enterprises tune databases and other software for Optane.
Paired with the latest generation of Xeon Scalable Processors, Intel pegs Optane DC PM’s performance at 287,000 operations per second (compared with a conventional DRAM and storage combo’s 3,164 operations per second), with a restart time of only 17 seconds. Furthermore, it says Optane DC PM is up to 8 times faster in Spark SQL DS than DRAM (at 2.6TB data scale) and supports up to 9 times more read transactions and 11 times more users per system in Apache Cassandra.
The gains are due in part to Optane DC PM’s two operating modes: App Direct mode and Memory mode. App Direct mode lets persistence-aware apps address Optane and DRAM directly, while Memory mode lets software running in a supported operating system or virtual environment treat Optane as volatile main memory, with DRAM acting as a cache.
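In App Direct mode, software maps persistent memory into its address space, stores to it like ordinary RAM, and flushes explicitly to guarantee durability — the pattern PMDK’s libpmem wraps. A loose Python analogy using an ordinary memory-mapped file (real persistent-memory code would use PMDK against a DAX-mounted device; the path here is hypothetical):

```python
import mmap

PATH = "/tmp/pmem_demo.bin"  # hypothetical; stands in for a pmem-backed file

# Create and size a backing file, then map it as a byte-addressable region.
with open(PATH, "wb") as f:
    f.truncate(4096)

with open(PATH, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"  # plain stores into the mapped region
    region.flush()          # analogous to pmem_persist(): push data to media
    region.close()

with open(PATH, "rb") as f:
    assert f.read(5) == b"hello"  # the flushed data is durable
```

Memory mode, by contrast, needs no code changes at all — the hardware transparently caches Optane accesses in DRAM, trading App Direct’s persistence for drop-in capacity.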
Intel launched a beta for Optane DC PM on October 30. Google — an early partner — recently announced the alpha availability of virtual machines with 7TB of memory using Intel Optane DC PM and said that some of its customers have seen a 12 times improvement in SAP Hana startup times.
“We remain highly focused on delivering a wide range of workload-optimized solutions that best meet our customers’ system requirements,” Spelman said.
The Xeon E-2100 series launch comes ahead of next-generation platforms set to debut in 2019.
Intel earlier this year teased Cooper Lake, a new 14-nanometer Xeon Scalable chip with better performance, new input-output features, instruction improvements dubbed Intel DL Boost, and Optane support. Cooper Lake will launch in 2019.
Ice Lake, meanwhile, is a 10-nanometer Intel Xeon Scalable processor that shares Cooper Lake features. It’s planned for 2020 shipments.
One thing’s clear: Intel is positioning its chip business for growth. In August, the company announced it had sold more than 220 million Xeon processors over the past 20 years, generating $130 billion in revenues. That’s a far cry from the $200 billion the server, storage, and network market is expected to be worth in 2022, but Intel intends to close the gap aggressively, with plans to capture $20 billion in the next four years.
Artificial intelligence (AI) remains a principal focus — showcased by its partnerships with Scanline VFX, which used 2,500 Xeon Scalable processors to computer-generate a 75-foot-long prehistoric shark in the sci-fi film The Meg, and Philips, which tapped Xeon chips to speed up AI medical scan analysis. Intel said this summer that its Xeon CPUs generated $1 billion in revenue in 2017 for use in AI applications, a figure it believes will grow to $10 billion by 2022.
Intel’s Q3 2018 data-centric businesses grew 22 percent, led by 26 percent year-over-year growth in the Data Center Group (DCG).
“We are positioned to play in all segments,” Naveen Rao, corporate vice president at Intel’s AI products group, told VentureBeat in an August interview. “This growth is just beginning. It feels like we are in the top of the second inning.”