During a press conference at the 2020 Consumer Electronics Show, Intel gave a small update on its ongoing AI and machine learning hardware acceleration efforts. Details were a bit hard to come by at press time, but platforms group executive vice president Navin Shenoy previewed the performance improvement that will arrive with the chip maker’s third-generation Xeon Scalable processor family, code-named Cooper Lake.
The 14-nanometer (14nm++) Cooper Lake, which will be available in the first half of 2020, will deliver up to a 60% increase in both AI inference and training performance. For context, Intel says it improved deep learning inference performance 30-fold between 2017 and 2019; 2017 was the year the company released its first processor with AVX-512, a set of 512-bit extensions to the 256-bit Advanced Vector Extensions (AVX) SIMD instructions.
Delivering this improvement in part is DL Boost, which encompasses a range of x86 technologies designed to accelerate AI vision, speech, language, generative, and recommendation workloads. Starting with Cooper Lake products, DL Boost will support bfloat16 (Brain Floating Point), a number format originally developed by Google and implemented in the third generation of its custom-designed Tensor Processing Unit (TPU) AI accelerator chip.
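To make the format concrete: bfloat16 keeps float32's sign bit and full 8-bit exponent but only 7 mantissa bits, preserving dynamic range while sacrificing precision. A minimal Python sketch of the conversion (using simple truncation for illustration; real hardware typically rounds to nearest even, and this is not Intel's or Google's implementation):

```python
import struct

def float32_to_bfloat16_bits(x):
    """Convert a float to bfloat16 by keeping the top 16 bits of its
    IEEE-754 float32 encoding: sign, 8-bit exponent, 7-bit mantissa."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16  # truncate the low 16 mantissa bits

def bfloat16_bits_to_float32(b):
    """Widen bfloat16 bits back to float32 by zero-padding the mantissa."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# Round-tripping shows the precision loss: with only 7 mantissa bits,
# nearby float32 values collapse to the same bfloat16 value.
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159)))  # 3.140625
```

Because the exponent width matches float32, overflow and underflow behave the same as in single precision, which is a key reason the format suits deep learning training better than IEEE float16.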
By way of refresher, Cooper Lake features up to 56 processor cores per socket, or twice the core count of Intel's second-gen Xeon Scalable chips. The new processors will offer higher memory bandwidth and higher AI inference and training performance at a lower power envelope, as well as platform compatibility with the upcoming 10-nanometer Ice Lake processors.
More datacenter AI workloads run on Intel products than on any other platform, the company claims.
The future of Intel is AI. Its books imply as much: the Santa Clara company's AI chip segments notched $3.5 billion in revenue this year, up from $1 billion in 2017. Intel expects the market opportunity to grow 30% annually from $2.5 billion in 2017 to $10 billion by 2022, and it anticipates the AI silicon market will exceed $25 billion by 2024.
In late 2019, Intel purchased Habana Labs, an Israel-based developer of programmable AI and machine learning accelerators for cloud datacenters, for an estimated $2 billion. That deal followed the September 2016 purchase of San Mateo-based Movidius, which designs specialized low-power processor chips for computer vision. Intel bought field-programmable gate array (FPGA) manufacturer Altera in 2015 and acquired Nervana a year later, filling out its hardware platform offerings and setting the stage for an entirely new generation of AI accelerator chipsets. And in August 2018, Intel snatched up Vertex.ai, a startup developing a platform-agnostic AI model suite.