Google researchers develop AI for better facial recognition and object detection on smartphones

Apps that detect objects, classify images, and recognize faces are nothing new in the world of smartphones; they've been popularized by the likes of Google Lens and Snapchat. But ubiquity is no substitute for quality, and the convolutional neural networks that underlie most of these apps tend to sacrifice either speed or accuracy, a trade-off forced by the constraints of mobile hardware.

There's hope on the horizon, though. Researchers at Google have developed an automated approach to neural network design that yields models both faster and more accurate than previous handcrafted mobile architectures.

In a new paper ("MnasNet: Platform-Aware Neural Architecture Search for Mobile") and accompanying blog post, the team describes MnasNet, an automated system that searches a space of candidate neural architectures, using reinforcement learning to account for mobile speed constraints. Rather than estimating latency from proxies such as FLOPs, it executes each candidate model on a real device (Google's Pixel phone, in this study) and measures its actual inference time, automatically selecting the architecture with the best balance of accuracy and speed.
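The key idea is a multi-objective reward that folds measured latency into the search signal: the paper defines it as ACC(m) × [LAT(m)/T]^w, where T is a target latency and w is a negative exponent that penalizes slow models. Below is a minimal Python sketch of that reward; the function name is illustrative, while the default target of 75 ms and exponent of -0.07 follow the soft-constraint settings reported in the paper.

```python
def search_reward(accuracy: float, latency_ms: float,
                  target_ms: float = 75.0, w: float = -0.07) -> float:
    """Multi-objective reward from the MnasNet paper:

        reward = ACC(m) * (LAT(m) / T) ** w

    accuracy   -- top-1 accuracy of candidate model m, in [0, 1]
    latency_ms -- inference latency measured on the target phone
    target_ms  -- latency budget T; a model at exactly T is unpenalized
    w          -- negative exponent; steeper values punish slow models harder
    """
    return accuracy * (latency_ms / target_ms) ** w


# A slightly-over-budget model trades reward against a faster, less
# accurate one -- the controller is rewarded for Pareto-optimal trade-offs.
print(search_reward(accuracy=0.752, latency_ms=78.0))  # ~0.750
print(search_reward(accuracy=0.720, latency_ms=60.0))  # ~0.731
```

Because latency enters the reward as a smooth penalty rather than a hard cutoff, the search can exchange a little speed for accuracy near the budget instead of discarding every model that misses the target.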