During today’s Made by Google event in New York City, the machine learning image analysis tool Google Lens took its latest step forward with an expected but welcome update: The Google Camera app for Pixel 3 phones now includes real-time Google Lens analysis, enabling the OS to suggest actions based on what the Camera app sees.
The Google Lens suggestions feature works by superimposing recognition dots over actionable elements in the live camera feed, with each dot tappable to bring up possible actions. For instance, a business card might contain an email address that Lens recognizes as actionable, generating a pop-up to open Gmail.
“When you point your camera at information you want to remember or don’t feel like typing in,” Google’s Pixel VP Mario Queiroz noted, “like a URL or QR code on a flyer or an email address on a business card — Google Lens suggests what to do next, like creating a new contact.”
If you want to access the full Lens experience from the Camera app, you can long press in the viewfinder to bring it up. Additionally, on Pixel 3 you can long press an image on the recent apps screen to instantly invoke Lens on items you've found in other apps.
Google debuted a real-time analysis mode for Lens back in May at its I/O keynote, but the feature was shown as a component of the standalone Lens application, with plans to add the analysis to camera apps on select devices. Earlier Pixel phones, LG's G7, and select Sony phones were slated to receive the feature.
Related Lens additions included the ability to automatically match text in images with Google-surfaced data — a menu's items could surface ingredients or pictures of dishes — and to find items on Google similar to whatever the camera is currently looking at. These advanced AI- and machine learning-assisted features have the potential to give Google Camera a major leg up on alternatives such as Apple's Camera app, which focuses almost exclusively on snapping photos and videos.