Ever ponder the meaning of a portrait or painting hanging on the wall of your favorite coffee shop? Thanks to Google Lens, Google’s AI-powered search and computer vision tool, you’re now only a few taps away from learning all about it. Wescover, a San Francisco-based startup building a catalog of local artists and their work, today announced that it’s teaming up with Google to supply Google Lens with information about art and design installations.

As of this week, Google Lens can match designs and artwork in Wescover’s database with artists and stories in places like hotels, restaurants, and city streets. It’s as easy as launching Lens and pointing your phone’s camera at the art in question.

Wescover curated an initial map of art pieces Google Lens recognizes throughout San Francisco, and the company says it will continue publishing content to Lens globally in the near future. The company claims to have documented more than 50,000 images of unique art and design from 6,000 local brands and independent artists to date.


“We’re excited to see the difference our content is making. Each exact match gives creators the credit they deserve and enables consumers with trust to find what they’re looking for,” said Wescover CEO Rachely Esman in a statement. “If you love the chair at the Ace Hotel, it’ll take hard work to find the exact same product. While another blue chair may look similar, it won’t have the same quality, materials, or story.”

Wescover’s integration follows a Google Lens feature that highlights top meals at a restaurant, along with dish ratings and reviews. Separately, Lens recently gained the ability to split a bill or calculate a tip after a meal; read signs and other text for people who can’t read or don’t understand the printed language; and overlay videos atop real-world publications like Bon Appétit in augmented reality.

Google Lens began as a feature exclusive to Pixel smartphones, but it has evolved dramatically in recent years. It quickly spread to Google Photos, and Lens now ships natively in flagship smartphones from companies like Sony and LG.

The growing list of things Lens can recognize spans over 1 billion products from Google Shopping, including furniture, clothing, books, movies, music albums, and video games. (That’s in addition to landmarks, points of interest, notable buildings, Wi-Fi network names and passwords, flowers, pets, beverages, and celebrities.) Lens can also surface stylistically similar outfits or home decor, read words in signage, and prompt you to take action. Perhaps most useful of all, it’s able to extract phone numbers, dates, and addresses from business cards and add them to your contacts list.

At its I/O keynote back in May, Google took the wraps off a real-time analysis mode for Lens that superimposes recognition dots over actionable elements in the live camera feed — a feature that launched first on the Pixel 3 and 3 XL. Lens has since come to Google image searches on the web, and Google has brought Lens to iOS through the Google app and launched a redesigned experience across Android and iOS.
