On-device AI offers immediate response, enhanced reliability, increased privacy, and bandwidth efficiency, opening up enormous opportunities across devices. Learn more about the technology that’s already at the fingertips of developers, OEMs, and ISVs when you register for this VB Live event.

Register here for free.

The most important thing for developers to know about on-device AI is that it’s more than what the name implies, says Gary Brotman, senior director of product management at Qualcomm Technologies. It is a step in the evolution of AI, from the cloud to a device, and then to all devices.

“You’re really looking at a distributed form of intelligence, and the tools that arise to help with that will spur so much more innovation than we could possibly imagine,” Brotman says.

The movement of AI from the cloud to devices has been accelerating, aided by a number of important factors that together have unlocked new use cases consumers are hungry for: better voice assistants, more powerful camera features, smarter cars, and more.

The technology behind edge computing

It’s thanks to a leap in technological evolution, Brotman explains. The chips that power the portable and mobile devices in our hands and around us today have enough processing power to take the powerful algorithms previously anchored in the cloud and run them effectively on the device, with exceptional performance and lower energy use.

Then there’s the abundance of data, whether from social media, the device itself, or its performance telemetry, which gives developers enough to work with to create advanced AI algorithms and models. Those models drive innovative use cases that take advantage of the improved processing speed, making it possible for workloads once stuck in the cloud to run effectively and efficiently on edge devices.

Benefits to developers and consumers alike

One of the major benefits of leveraging edge computing is privacy, says Brotman. Consumers are far more alert to the risks they take with their personal information, especially biometric data, such as the fingerprint, iris, voice, or face scans that smartphones now use to unlock the phone, as well as password and payment information.

“You should not have to send those bits of personally identifiable information up to the cloud to benefit from what the device is going to deliver to you,” he says. “All that processing can reside locally on that device, where the benefit is derived.”

Performance is another huge gain that on-device AI delivers. If all the processing a device requires is available locally, you don’t have to worry about the round-trip time to the cloud for pattern matching, anomaly detection, or whatever the use case may be. For instance, if facial recognition data were sent to the cloud for verification and then brought back down, there would be inherent latency in that loop. If you do all that processing locally, you get real-time performance.

For personal assistants, if the device can process and understand what you said immediately and give an immediate response acknowledging that it heard you, while in the background it fetches whatever you asked for from the cloud in parallel, then you remove pauses and make the dialogue more of a conversation and less of a mechanical, robotic interaction. Interacting with a device, especially by voice, becomes more real-time and more natural.
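The pattern described above, acknowledging locally while fetching from the cloud in parallel, can be sketched in a few lines. This is an illustrative sketch only: `cloud_fetch` and the canned strings are stand-ins, not any real assistant API.

```python
import queue
import threading
import time


def handle_voice_query(query: str) -> str:
    """Acknowledge the user immediately from on-device processing,
    while a background thread does the slower cloud round trip."""
    results: "queue.Queue[str]" = queue.Queue()

    def cloud_fetch() -> None:
        # Stand-in for a network call; in reality this latency is
        # exactly what the user no longer has to wait through.
        time.sleep(0.05)
        results.put(f"Cloud answer for: {query}")

    worker = threading.Thread(target=cloud_fetch)
    worker.start()

    # The on-device model has already parsed the intent, so the
    # acknowledgement goes out with no network round trip at all.
    print(f"Got it, looking up '{query}'")

    worker.join()
    answer = results.get()
    print(answer)
    return answer
```

The user-facing acknowledgement and the cloud fetch overlap in time, which is what removes the dead air from the conversation.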

“Consumers are going to gravitate more toward that experience because of that,” Brotman points out.

Real-time personalization also becomes a major selling point for applications running on the edge, because the device can use immediate context to understand a query or request. If the context taken in by the camera or microphone is available and processed in real time, the device can, over time, provide a higher degree of personalization, as opposed to sending all that data back to the cloud to be processed and used at some point in the future.

Why to invest in on-device AI

There are many reasons to invest, Brotman says, especially now that the technology is starting to take off and consumers are becoming aware of the possibilities. Even if they don’t understand what’s going on under the hood, they can see that existing features are becoming faster, more performant, and more compelling, and that new feature categories are maturing.

The use cases that consumers have come to love, such as voice assistants, real-time translation, and photography applications, will become more accurate, more responsive, and more immersive, and AR and VR experiences will come into their own as they become AI-driven rather than driven by classic computer vision.

Another benefit for businesses is cost reduction. When you achieve more in software, there are opportunities to reduce hardware costs. Programming neural network models and AI algorithms is also more generalized: rather than write specific software for each type of object you want to identify, such as a cat, a dog, or a car, you can train a single model to look for all of them. Not only is the programming approach more general, but the performance and KPIs are better.
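The "one model, many objects" idea reduces, at inference time, to scoring every class with the same network and picking the best score. A minimal sketch, with made-up logits standing in for the output of a real trained model:

```python
import math

LABELS = ["cat", "dog", "car"]


def softmax(logits: list[float]) -> list[float]:
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def classify(logits: list[float]) -> tuple[str, float]:
    """One model, many classes: take the label with the highest
    probability, instead of running a separate detector per object."""
    probs = softmax(logits)
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    return LABELS[best], probs[best]
```

Adding a new object type here means retraining the model with one more output, not writing a new piece of detection software.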

To learn about the on-device-AI use cases that are exploding now in every industry, what kind of technology is required to drive on-device AI, and the best advice for developers ready to invest, don’t miss this VB Live event!

Don’t miss out!

Register for free here.

You’ll learn:

  • How AI-powered smartphone apps are igniting the consumer imagination — and why
  • How on-device AI is changing the future of connected devices
  • How developers, OEMs, and ISVs can take advantage of on-device AI processing
  • The technology you need to drive the best performance of on-device AI


Speakers:

  • Gary Brotman, Senior Director, Product Management, Qualcomm Technologies

More speakers coming soon!

Sponsored by Qualcomm Technologies