Presented by Qualcomm Technologies, Inc.

To data scientists, the raw potential of AI and complex neural networks was clear from the start, or close to it. But it’s only in the past five years that device hardware has become sophisticated enough to make good on that promise and bring AI all the way to the edge. On-device AI is what makes AI a reality for consumers. And now devices of every size, even those with lower battery capacity, are able to handle powerful, power-efficient on-device neural networks. It’s the evolution of computing beyond the cloud, taking inferencing right to the source.

“We’ve spent almost a decade of research on how to make AI work best on the edge,” says Ziad Asghar, senior vice president of product management, Qualcomm Technologies, Inc. “From that, we’ve developed hardware that’s able to do more inferencing for any given amount of power, and an AI software stack (the Qualcomm AI Stack) and tools to bring the Connected Intelligent Edge to life.”

Leveling up AI use cases and unlocking new ones

AI use cases have already made their way to devices — AI-enhanced pictures and videos, AI-based voice assistants, better sound and voice quality, real-time language translation, and more are significantly improved with connectivity and data processing, while numerous brand-new use cases are just starting to emerge across camera, gaming, sensors and connectivity, on all devices at the edge.

On the consumer-facing side, use cases embrace everything from smartphones, XR, compute and earbuds to connected intelligent vehicles and smart homes. On the business side, they support digital transformation in the industrial and manufacturing space, connected healthcare and a leap ahead for the AI software tools and platforms companies need to stay competitive in a rapidly changing environment.

Asghar describes the Connected Intelligent Edge itself as a network with multiple nodes, or different products, within it — and many of the new possibilities lie in these device clouds. In a smart home, for example, that might include security cameras, the cars in the garage, appliances, PCs, mobile devices and tablets, all with some amount of AI processing capability.

Those security cameras might recognize a family member in order to open the smart lock at the front door and activate environmental controls. But the Connected Intelligent Edge also distributes AI across the whole network, so that use cases are handled with the best accuracy and the best power consumption. If there’s not enough processing power on one product, the work can be handed up the line to a more powerful device.

For instance, a security camera might shift a possible false alarm to a unit that can handle anomalies and more complex incidents. The data never leaves the device or the local network, so privacy is assured. And handling latency-sensitive use cases on the device means real-time results and a better consumer experience.

Purpose-built AI hardware and developer tools

“From an AI developer perspective, they want a product that excels in terms of performance and in terms of power,” Asghar says. “Which means you want the best-in-class underlying hardware capability.”

That means more processing for any given amount of power. It also means the ability to write software quickly and get to a product faster, because time to market is key. At the same time, developers need the flexibility of using AI frameworks they’re familiar with, and tools to optimize and improve performance even further. On the hardware side, Qualcomm offers the Hexagon processor with three accelerators (scalar, vector and tensor) that let a developer map directly onto how a neural network is set up, all the way through the fully connected layer.

At the most recent Snapdragon Summit, Qualcomm made several significant AI announcements, among them considerable hardware improvements, including an upgrade of Snapdragon 8 Gen 2 AI performance by up to 4.35x. It’s the first commercial design on the edge that can do 4-bit integer (INT4) AI inferencing, which means the same calculations take far fewer bits and expend significantly less energy while maintaining accuracy.
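The basic arithmetic behind INT4 inferencing can be seen in a few lines. The sketch below is illustrative only — it is not Qualcomm’s implementation — and shows simple symmetric quantization: each 32-bit float weight is mapped onto the 16 levels of a signed 4-bit range, then mapped back, with the reconstruction error bounded by the quantization step.

```python
def quantize_int4(weights):
    # Illustrative symmetric quantization to the signed 4-bit range [-8, 7].
    # The largest-magnitude weight is mapped to 7; every other weight is
    # rounded to the nearest of the 16 available levels.
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float values from the 4-bit integers.
    return [x * scale for x in q]

weights = [0.42, -0.17, 0.05, -0.91]
q, scale = quantize_int4(weights)
recovered = dequantize(q, scale)
# Each recovered value is within one quantization step of the original.
```

Storing and multiplying 4-bit integers instead of 32-bit floats is what lets the same network run in a fraction of the memory bandwidth and energy; the engineering challenge, addressed by calibration and quantization-aware tooling, is keeping that rounding error from degrading model accuracy.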

The company also announced a new technology called micro tile inferencing, which breaks a neural network into many small pieces that can be processed together, rather than layer by layer, saving a great deal of power.
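The general idea of tiled execution can be sketched in a toy example. This is not Qualcomm’s micro tile inferencing implementation — the two “layers” below are hypothetical pointwise operations chosen so that tiles are independent — but it shows the principle: pushing each small tile through the whole layer stack keeps intermediate activations tile-sized instead of materializing a full layer output at every step.

```python
def layer1(xs):
    # Toy layer: scale every activation by 2.
    return [2.0 * x for x in xs]

def layer2(xs):
    # Toy layer: shifted ReLU.
    return [max(0.0, x - 1.0) for x in xs]

def run_layer_by_layer(x):
    # Conventional execution: each layer's full output is materialized
    # before the next layer starts.
    return layer2(layer1(x))

def run_tiled(x, tile=4):
    # Tiled execution: each small tile flows through *both* layers before
    # the next tile starts, so intermediate data stays small and local.
    out = []
    for i in range(0, len(x), tile):
        out.extend(layer2(layer1(x[i:i + tile])))
    return out

x = [0.1 * i for i in range(10)]
# Both schedules compute identical results; only the memory traffic differs.
```

On real hardware the win comes from keeping those tile-sized intermediates in fast local memory rather than writing whole feature maps out to DRAM between layers.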

And just before MWC, the company announced that it was able to run Stable Diffusion, a text-to-image AI model, on a reference device powered by Snapdragon 8 Gen 2. Typically, generating images with Stable Diffusion requires vast amounts of computing power in the cloud. But thanks to Qualcomm Technologies’ AI research and the power of on-device AI on the Snapdragon 8 Gen 2, the company was able not only to optimize, quantize and deploy this large model onto the device using the Qualcomm AI Stack, but to run it in a matter of seconds. This is a significant leap from current capabilities and will allow massive AI models to run locally on devices, which means better convenience, power savings, security and more.

But, Asghar says, having great hardware for AI isn’t enough. As the company continues to optimize its hardware, it has focused on leveling up software and tools at the same time. For instance, a loss in accuracy has historically been the primary challenge in reducing a network from 32-bit floating point down to four bits. Now Qualcomm AI Studio provides the tools necessary to maintain precision even at INT4, reducing the power by a factor of 64 and exponentially increasing the number of neural networks that can be run in the same power envelope. This is critical to doing AI processing at the edge.

“The huge benefit to the Qualcomm AI Stack is enabling people to use Qualcomm technology easily and effectively without having to do a lot of setup work,” he explains. “Now I can take a mobile model to a security camera application without having to do new work. Why? Because it’s the same AI stack across all of our products. It’s really the notion of create once, and then take it anywhere.”

The Qualcomm AI stack supports popular AI frameworks and runtimes and offers developer libraries and services. The company has built SDKs for its product lines on top of this foundation — for example, Snapdragon Ride for automotive, Intelligent Multimedia SDK for IoT processing, Spaces (part of the Snapdragon Spaces XR Developer Platform for AR glasses) and more, including SDKs focused on specific verticals.

In Snapdragon Spaces, for example, Qualcomm has built AR-specific functions directly into the platform for hand and eye tracking for foveated rendering, 3D reconstruction of spaces, plane detection and more. A developer creating a new use case using AR or VR can pick up those routines and other pre-built pieces and build the final product on top, and get to a finished product faster.

The recently announced Qualcomm AI Studio brings together all the AI stack tools into a new GUI, along with visualization tools to simplify the developer experience — and provide the ability to see the complete model workflow from model design to optimization to deployment and profiling in action.

“If you’re doing anything at the edge, in a constrained power envelope or form factor, then really the best technology for you to bring it into production is based on Qualcomm,” he says. “With Qualcomm AI stack we want to make it as easy as possible for developers to be able to deploy their projects and get to market fast.”

Dig deeper: Learn more about what’s on The Edge of Possible.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact