During its 2019 AI Summit this morning, Intel detailed its next-gen Movidius Myriad vision processing unit (VPU) code-named Keem Bay, which is optimized for inferencing tasks at the edge. The 72-square-millimeter chip boasts a new on-die memory architecture with 64-bit memory bandwidth and about 10 times the performance of the previous generation, according to Intel vice president of IoT Jonathan Ballon.
“It’ll deliver better-than-GPU performance at a fraction of the power, a fraction of the size, and a fraction of the cost of comparable products,” said Ballon. “It complements our full portfolio of products, tools, and services.”
Keem Bay is a powerhouse, to be sure. Intel says that at between a third and a fifth of the power, it’s four times faster than Nvidia’s TX2 (which draws about 30 watts) and 1.25 times faster than Huawei’s HiSilicon Ascend 310 AI accelerator. In some scenarios, it’s up to six times more power efficient than rival processors. Moreover, it delivers four times the inferences per second per TOPS versus Nvidia’s Xavier, and Ballon says that customers who take “full advantage” of Intel’s OpenVINO toolkit can get roughly 50% additional performance.
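The “inferences per second per TOPS” comparison is an efficiency metric: it normalizes a chip’s real-world throughput by its peak theoretical compute, revealing how much of that peak the hardware actually converts into useful work. A minimal sketch of the arithmetic, using made-up placeholder numbers (not Intel’s or Nvidia’s published figures):

```python
# Illustration of "inferences per second per TOPS" as a utilization metric.
# All figures below are hypothetical placeholders for two unnamed chips,
# not measurements of Keem Bay or Xavier.

def inferences_per_tops(inferences_per_second: float, peak_tops: float) -> float:
    """Normalize raw inference throughput by peak compute (tera-ops/sec)."""
    return inferences_per_second / peak_tops

# A smaller chip that uses its compute well...
chip_a = inferences_per_tops(inferences_per_second=3000.0, peak_tops=10.0)
# ...versus a bigger chip that leaves more of its peak idle.
chip_b = inferences_per_tops(inferences_per_second=1500.0, peak_tops=20.0)

# On this metric, chip A delivers 4x the useful work per unit of peak compute,
# even though chip B has twice the headline TOPS figure.
print(chip_a / chip_b)  # prints 4.0
```

The point of the metric is that headline TOPS alone can mislead: memory bandwidth, scheduling, and model fit determine how much of that peak a real inference workload ever touches.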
It’ll launch in the first half of 2020 in a variety of form factors, including PCI Express and M.2.
By way of refresher, Intel acquired San Mateo-based Movidius, which designs specialized low-power processor chips for computer vision, in September 2016. Its VPUs pack dedicated hardware blocks for computer vision and a dozen purpose-built (SHAVE) processor cores that speed up execution of AI algorithms, all of which are programmable with the Myriad Development Kit (MDK).
In the years following the acquisition, Intel launched the Myriad 2, which made its way into Google’s Clips camera, Flir’s Firefly, DJI’s Phantom 4 drone, and Tencent’s DeepGaze. Its successor — the Myriad X — boasted improved imaging and vision engines including additional programmable SHAVE cores and upgraded vision accelerators, as well as a native 4K image processor pipeline with support for up to eight HD sensors.
AI is an increasingly central part of Intel’s business. The company said during its most recent earnings call that annual revenue from AI reached $3.5 billion in 2019. That’s up from $1 billion in 2017, and over a third of the way toward its target of $10 billion by 2022.
“We’re one of the largest [in the market] due to our breadth and depth that allows us to go from a data center out to the edge,” said Naveen Rao, corporate vice president and general manager of Intel’s AI Products Group, onstage. “And we anticipate this growing, year on year.”