
Google is helping smartphones better recognize images without massive power consumption, thanks to a new set of models the company released today. Called MobileNets, the pre-trained image recognition models let developers choose among a family of models that trade off size and accuracy to best suit their application's needs.

Right now, much of the machine learning inside mobile apps works by passing data off to cloud services for processing and then surfacing the resulting insights to users once they return over the network. That makes it possible to use very powerful computers in a data center and offload the processing burden from the smartphone. The drawback of that approach is that both latency and privacy suffer.

By processing data on a user’s smartphone, it’s possible to return results a lot faster, and data never has to leave the phone. However, optimizing a machine learning model for use on mobile is a tall order. Eating up a bunch of battery with computationally intensive machine learning operations is no good.

[Image: A table showing key statistics about the different MobileNet models Google made available on June 14, 2017. Credit: Google]

That’s where MobileNets come in: Google has handled all of the optimization ahead of time, so developers just need to implement the model in their application. The models range from one that uses 569 million multiply-add operations to one that uses just 14 million.
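That wide range in operation counts comes largely from the depthwise-separable convolutions MobileNets are built on, described in the paper the release is based on. As a rough sketch (the layer dimensions below are illustrative, not taken from any specific MobileNet variant), the arithmetic savings can be computed directly:

```python
def conv_mult_adds(k, m, n, f):
    """Multiply-adds for a standard k x k convolution with m input
    channels, n output channels, applied over an f x f feature map."""
    return k * k * m * n * f * f

def separable_mult_adds(k, m, n, f):
    """Multiply-adds for the depthwise-separable factorization used by
    MobileNets: a k x k depthwise conv followed by a 1x1 pointwise conv."""
    return k * k * m * f * f + m * n * f * f

# Illustrative layer: 3x3 kernel, 256 -> 256 channels, 14x14 feature map.
std = conv_mult_adds(3, 256, 256, 14)
sep = separable_mult_adds(3, 256, 256, 14)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

For a 3x3 kernel the factorization cuts the multiply-add count by roughly 8 to 9 times, which is how the smaller variants reach such low operation counts.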



The more operations a MobileNet model uses, the higher its accuracy, in exchange for an increased load on the device's resources.
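Under that trade-off, model selection reduces to picking the heaviest (and therefore most accurate) variant that fits a device's compute budget. A minimal sketch, using only the two operation counts from the announcement (the model names here are hypothetical labels, not Google's):

```python
def pick_model(models, max_mult_adds):
    """Given (name, mult_adds) pairs, return the variant with the most
    operations that still fits the budget, i.e. the most accurate one
    under the assumption that more operations means higher accuracy."""
    candidates = [m for m in models if m[1] <= max_mult_adds]
    return max(candidates, key=lambda m: m[1]) if candidates else None

# Hypothetical labels; the operation counts are the two endpoints Google cited.
models = [
    ("mobilenet-largest", 569_000_000),
    ("mobilenet-smallest", 14_000_000),
]
print(pick_model(models, 100_000_000))  # budget fits only the smallest variant
```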

It’s a move by Google to capitalize on a trend of increased local machine learning processing. The news comes a month after the company revealed TensorFlow Lite, its framework for running machine learning models created using TensorFlow more efficiently on low-power Android devices.

Developers can deploy the models now using TensorFlow Mobile, a system designed to help deploy models on Android, iOS, and Raspberry Pi devices.

This release builds on work that Google published in a paper earlier this year.
