
Hundreds of millions of people contribute more than 20 million reviews, ratings and other pieces of content each day to Google Maps' more than 200 million points of interest; it's how the platform continues to grow so rapidly. But user contributions are intrinsically fraught with the risk of spam and abuse. That's why Google is increasingly using AI and machine learning to spot malicious contributions at submission time, ensuring they never reach the more than 1 billion people who use Maps regularly.

In a blog post, Google said it uses automated detection systems, including machine learning models, to scan millions of contributions and remove policy-violating content. In the case of reviews, its systems audit each review before it's published to Maps, looking for signs of fake or misleading content. Its machine learning models watch for specific words and phrases, examine patterns in the types of content an account has contributed in the past, and take suspicious review patterns into account.
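Google hasn't published the internals of these models, but the signals it describes, flagged words and phrases plus suspicious account activity, can be illustrated with a toy sketch. The phrase list, threshold and function below are invented for demonstration and are not Google's actual system:

```python
import re

# Illustrative only: a toy pattern-based screen, not Google's real pipeline.
# The phrase patterns and the burst threshold are invented for this example.
SUSPICIOUS_PHRASES = [
    r"\bvisit my (site|page)\b",   # off-platform promotion
    r"\bdm me\b",                  # solicitation
    r"\bbest .{0,30} in the world\b",  # generic superlative spam
]

def flag_review(text: str, recent_reviews_by_account: int) -> bool:
    """Return True if a review should be held for closer (human) inspection."""
    lowered = text.lower()
    # Signal 1: known spammy words and phrases in the review text.
    if any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PHRASES):
        return True
    # Signal 2: suspicious contribution patterns, e.g. an account posting
    # an unusually large burst of reviews in a short window.
    if recent_reviews_by_account > 20:
        return True
    return False
```

In a production system these hand-written rules would be replaced or supplemented by learned models, but the two-signal structure (content features plus account-history features) mirrors what the blog post describes.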

Of course, AI isn’t perfect, which is why Google also employs teams of trained operators and analysts who audit reviews, photos, business profiles and other types of content, both individually and in bulk. In 2019 alone, aided by machine learning systems that got better at blocking policy-violating content and surfacing anomalies for manual review, human moderators removed more than 75 million policy-violating reviews and 4 million fake business profiles. They also took down more than 580,000 reviews and 258,000 business profiles that were reported directly to Google, reviewed and removed more than 10 million photos and 3 million videos that violated Maps’ policies, and disabled more than 475,000 user accounts.

“The vast majority of contributions made to Maps are authentic, with policy-violating content seen less than one percent of the time. And we’ll continue to develop new tools and techniques to fight against bad actors,” said Maps director of product Kevin Reece. “Contributed content is an indispensable part of how we’re making Maps richer and more helpful for everyone.”

The metrics update comes after a Wall Street Journal report concluded that there were millions of fake listings on Maps, many of which Maps’ moderation team has since removed. In response to this and other controversies over user-submitted content, including an incident in which a drawing of an Android logo urinating on an Apple logo appeared on the map, Google has at various points shuttered public map-editing tools and introduced new Maps moderation features.

Google, it’s worth noting, is far from the only tech giant applying AI and machine learning to spot content that runs afoul of its policies. In October, Pinterest reported that AI helped reduce self-harm content on its platform by 88%. That same month, Twitter said its automated tools flag 50% of abusive tweets before users report them. For its part, Facebook says it proactively identifies over 96.8% of prohibited content using AI, spanning bullying and harassment, child nudity, global terrorist propaganda, and violent and graphic content, among other categories.

Google recently revealed that Maps now covers places across 220 countries and that Local Guides, a community of Maps users who contribute reviews and more, has more than 120 million members. Maps includes accessibility info (like wheelchair-friendly entrances and restrooms) for more than 50 million places around the world. And more than 5 million websites and apps use Google Maps Platform every week, a set of APIs and SDKs that let developers integrate Maps with existing apps.
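Several of those Maps Platform services are plain HTTPS web APIs. As one example, a forward-geocoding lookup against the Geocoding API is just a GET request to a JSON endpoint; the sketch below only builds the request URL ("YOUR_API_KEY" is a placeholder, and a real key from the Google Cloud console is required for the request to actually succeed):

```python
from urllib.parse import urlencode

# Base endpoint of the Google Maps Geocoding web service.
BASE = "https://maps.googleapis.com/maps/api/geocode/json"

def geocode_url(address: str, api_key: str) -> str:
    """Build the request URL for a forward-geocoding lookup."""
    return f"{BASE}?{urlencode({'address': address, 'key': api_key})}"

# "YOUR_API_KEY" is a placeholder, not a working credential.
url = geocode_url("1600 Amphitheatre Parkway, Mountain View, CA", "YOUR_API_KEY")
```

Fetching that URL (with a valid key) returns a JSON body whose `results` array carries the matched addresses and their coordinates.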
