At Transform 2019 this month, Intel’s GM of Vision Markets, Farshid Sabet, took to the stage to address opportunities and innovations in the internet of things (IoT). As Sabet explained, he’s well-positioned to assess the many companies developing AI at the edge for applications across geographic locations. Such AI deployments range from datacenters connected to the cloud all the way out to components running at the edge.
“The question is: ‘Where is the right place to be able to deploy AI?’” Sabet asked. “Do we do it all in the cloud and make the devices dumb? Or a combination of less reliance on the cloud and more on the edge and things? There’s no single answer.” He explained that it depends on an organization’s needs in terms of latency, privacy, total cost of ownership, and system implementations, as well as where the data resides.
“At Intel, we see the applications range from the very low power to very high performance, to very customizable and to something that is easy and generic to use,” he said. This is why Intel has applications or processors that address each of these markets, he explained. For example, if you’re deploying AI in smart cameras, power concerns become especially important, whereas if your applications are in datacenters, performance and latency are the predominant concerns.
However, what Sabet and his team have been grappling with is the complexity of implementation. It’s hard enough to hire talent with machine learning experience and expertise, he said, but you also need people with expertise in DSPs, GPUs, CPUs, or VPUs. And you have to make sure you hire layers of engineers who have expertise in the relevant areas.
Sabet, unsurprisingly, said the answer is OpenVINO, Intel’s software toolkit that allows inference at the edge and works with any of these computer vision architectures. “You don’t have to be expert in any of these specific products,” explained Sabet. “[The toolkit] allows optimal performance for each of these environments, as opposed to hand-crafting and getting to the low level to be able to do the programming you want. If you have end-to-end systems, you want to be able to develop solutions that could be deployed from the datacenter to the edge and to the device all at once. You don’t want to develop something that’s only for one of the nodes.”
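The “develop once, deploy from datacenter to device” idea Sabet describes can be sketched in a few lines of Python. Note that everything below — `run_inference`, the backend functions, the `BACKENDS` registry — is a hypothetical illustration of the pattern, not OpenVINO’s actual API: application code selects a hardware target by name, and a device-specific backend handles the low-level work.

```python
# Conceptual sketch of the single-toolkit pattern: one inference
# pipeline, with the hardware target chosen by name at deploy time.
# All names here are hypothetical illustrations, not OpenVINO's API.

from typing import Callable, Dict, List


def cpu_backend(pixels: List[float]) -> List[float]:
    """Stands in for an optimized datacenter CPU kernel."""
    return [p * 0.5 for p in pixels]


def vpu_backend(pixels: List[float]) -> List[float]:
    """Stands in for a low-power edge VPU kernel (same math, different path)."""
    return [p / 2 for p in pixels]


# A single registry maps device names to backends, so application
# code never branches on hardware details itself.
BACKENDS: Dict[str, Callable[[List[float]], List[float]]] = {
    "CPU": cpu_backend,
    "VPU": vpu_backend,
}


def run_inference(pixels: List[float], device: str = "CPU") -> List[float]:
    """Run the same 'model' on whichever device the deployment targets."""
    return BACKENDS[device](pixels)


if __name__ == "__main__":
    frame = [2.0, 4.0, 6.0]
    # Identical application code targets a datacenter CPU or an edge VPU.
    print(run_inference(frame, "CPU"))  # [1.0, 2.0, 3.0]
    print(run_inference(frame, "VPU"))  # [1.0, 2.0, 3.0]
```

The point of the sketch is Sabet’s claim about expertise: the per-device optimization lives behind the registry, so the developer writing `run_inference` never hand-crafts DSP, GPU, CPU, or VPU code for each node.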