Google Lens, Google’s AI-powered analysis tool, can now recognize over 1 billion products from Google’s retail and price comparison portal, Google Shopping. That’s four times the number of objects Lens covered in October 2017, when it made its debut.
Aparna Chennapragada, vice president of Google Lens and augmented reality at Google, revealed the tidbit in a retrospective blog post about Google Lens’ milestones.
“I’ve spent the last decade leading teams that build products which use AI to help people in their daily lives, through Search, Assistant and now Google Lens,” she wrote. “Every waking moment, we rely on our vision to make sense of our surroundings, remember all sorts of information, and explore the world around us … I see the camera opening up a whole new set of opportunities for information discovery and assistance.”
Products, in this context, refers to product labels. Google Lens leverages an optical character recognition engine — combined with AI systems that recognize different characters, languages, and fonts, plus language and spell-correction models borrowed from Google Search — to match barcodes, QR codes, ISBN numbers, and other alphanumeric identifiers to product listings in Shopping’s enormous catalog.
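The identifier-matching step described above can be sketched in miniature. The snippet below validates an ISBN-13 with its standard weighted checksum and looks it up in a catalog; the `CATALOG` dict and its entries are hypothetical stand-ins for Shopping's product listings, not anything Google has published.

```python
def isbn13_is_valid(isbn: str) -> bool:
    """Check an ISBN-13 using its standard alternating 1/3 weighted checksum."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0

# Hypothetical product listings keyed by identifier.
CATALOG = {
    "9780262046305": "Introduction to Algorithms, 4th ed.",
}

def lookup(identifier: str):
    """Normalize an OCR'd identifier, validate it, and match it to a listing."""
    cleaned = identifier.replace("-", "").strip()
    if isbn13_is_valid(cleaned):
        return CATALOG.get(cleaned)
    return None
```

A production pipeline would layer OCR error correction and fuzzy matching on top of this; the sketch only shows the final exact-match lookup.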
“Now, style is even harder to put into words,” Chennapragada explains. “That’s why we think the camera — a visual input — can be powerful here … Lens can show you … useful information like product reviews.”
That’s not all Google Lens’ computer vision algorithms can recognize, of course.
The growing list includes furniture, clothing, books, movies, music albums, video games, landmarks, points of interest, notable buildings, Wi-Fi network names and passwords, flowers, pets, beverages, celebrities, and more. Lens reads words in menus and signage and prompts you to take action on them, and, when pointed at outfits or home decor, recommends items that are stylistically similar. And, perhaps most useful of all, it can automatically extract phone numbers, dates, and addresses from business cards and add them to your contacts list.
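The business-card feature amounts to entity extraction over OCR'd text. As a rough illustration only, here is a regex-based sketch that pulls phone numbers and dates out of a card's text; Lens almost certainly uses learned recognizers rather than patterns like these, and the sample card text is invented.

```python
import re

# Loose pattern for phone numbers: optional +, then digits with common
# separators (spaces, parentheses, dots, dashes), bookended by digits.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
# Simple numeric date pattern, e.g. 12/05/2018.
DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

def extract_contact_fields(text: str) -> dict:
    """Return phone numbers and dates found in OCR'd business-card text."""
    return {
        "phones": [m.group().strip() for m in PHONE_RE.finditer(text)],
        "dates": DATE_RE.findall(text),
    }

# Hypothetical OCR output from a business card.
card = "Jane Doe\nAcme Corp\n+1 (650) 555-0199\nMeeting: 12/05/2018"
fields = extract_contact_fields(card)
```

The two patterns are deliberately narrow so they don't collide: the phone regex requires a long run of digit-and-separator characters, while the date regex anchors on slashes, which the phone character class excludes.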
Google Lens has evolved dramatically over the past year. According to Chennapragada, Lens, which is trained on labeled images fed through Google’s open source TensorFlow machine learning framework, is beginning to recognize objects more reliably thanks to datasets with pictures “that look like they were taken with smartphone cameras.”
It’s improving in more tangible ways, too.
Back in May at its I/O keynote, Google took the wraps off a real-time analysis mode for Lens that superimposes recognition dots over actionable elements in the live camera feed — a feature that launched first on the Pixel 3 and 3 XL. Lens later came to Google image searches on the web. And more recently, Google brought Lens to iOS through the Google app and launched a redesigned experience across Android and iOS.
As for what the future holds for Lens, Chennapragada is betting big on AI-driven enhancements.
“Looking ahead, I believe that we are entering a new phase of computing: an era of the camera, if you will,” she wrote. “It’s all coming together at once — a breathtaking pace of progress in AI and machine learning; cheaper and more powerful hardware thanks to the scale of mobile phones; and billions of people using their cameras to bookmark life’s moments, big and small.”