Google today shared details of how Google Play protects Android users: teams of engineers, policy experts, product managers, and operations professionals monitor the store for misleading, inappropriate, or harmful apps. In 2017, Google removed more than 700,000 apps that violated Google Play’s policies, 70 percent more than the year before.
Google no longer shares total Google Play app numbers, so we have to rely on third-party estimates to put this 70 percent figure into perspective. Statista pegs the total number of apps on Google Play at 2.6 million in December 2016 and 3.5 million in December 2017, growth of roughly 35 percent. How many of those were bad apps, however, is anyone’s guess.
All we know is that the number of bad apps removed grew faster than the total number of apps in the store, which makes sense if you take into account the next statistic Google revealed today: 99 percent of apps with abusive content were identified and rejected before anyone could install them in 2017.
This was possible, Google says, thanks to its implementation of machine learning models and techniques to detect abusive app content and behaviors such as impersonation, inappropriate content, or malware. The company claims that the odds of getting malware are 10 times lower via Google Play than if you install apps from outside sources.
The Google Play team last year developed new detection models and techniques that can identify repeat offenders and abusive developer networks at scale. This resulted in blocking 100,000 bad developers in 2017, making it more difficult for bad actors to create new accounts and attempt to publish more bad apps.
Google gave three examples of bad apps it removed in 2017:
- Copycats: Deceive users by impersonating famous apps, since those titles get a lot of search traffic for particular keywords. Impersonating apps are snuck into the Play Store through deceptive methods such as using confusable Unicode characters or hiding impersonating app icons in a different locale. In 2017, Google took down more than a quarter of a million copycat apps.
- Inappropriate content: Apps that contain or promote content such as pornography, extreme violence, hate, and illegal activities are not allowed. The improved machine learning models sift through massive amounts of incoming app submissions and flag them for potential violations, aiding human reviewers to detect and block problematic apps. In 2017, Google took down tens of thousands of apps with inappropriate content.
- Potentially Harmful Applications (PHAs): Malware that can harm people or their devices, such as apps that conduct SMS fraud, act as trojans, or phish users’ information. Finding these bad apps is non-trivial, as the malicious developers go the extra mile to make their apps look as legitimate as possible. Google says it reduced the rate of PHA installs in 2017 via Google Play Protect “by an order of magnitude” compared to 2016.
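The confusable-character trick behind many copycat apps is easy to demonstrate. Here is a minimal, illustrative Python sketch (Google’s actual detection models are not public) that flags app titles mixing Unicode scripts, a common tell for lookalike names; the `scripts_used` and `looks_spoofed` helpers are hypothetical names for this example:

```python
import unicodedata

def scripts_used(name: str) -> set:
    """Collect the script hint (first word of each character's
    Unicode name, e.g. LATIN or CYRILLIC) for letters in a title."""
    scripts = set()
    for ch in name:
        if ch.isalpha():
            scripts.add(unicodedata.name(ch).split()[0])
    return scripts

def looks_spoofed(name: str) -> bool:
    """Flag titles that mix scripts -- a crude confusable check."""
    return len(scripts_used(name)) > 1

genuine = "WhatsApp"
spoofed = "Wh\u0430tsApp"  # Cyrillic 'а' (U+0430) replaces Latin 'a'

print(genuine == spoofed)      # False: visually identical, distinct code points
print(looks_spoofed(genuine))  # False: all LATIN
print(looks_spoofed(spoofed))  # True: LATIN + CYRILLIC mixed
```

Real-world confusable detection (see Unicode Technical Standard #39) goes well beyond script mixing, but the core idea is the same: two names that render identically can be entirely different strings.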
Google believes that while the majority of developers have their audience’s best interest at heart, some bad apps and malicious developers “attempt to evade detection and enter the Play Store to put people and their devices in harm’s way” since “the massive scale and the global reach of Google Play make the platform a target for bad actors.” Indeed, despite the record-high takedowns of bad apps and malicious developers, many still evaded Google Play’s security.
Security firm Check Point, for example, this month alone reported malicious flashlight adware apps and malware displaying porn ads on Google Play. The former spanned 22 different flashlight and utility apps with up to 7.5 million downloads, while the latter included 60 game apps downloaded up to 7 million times.
As for the bad apps that do slip through, Google says it takes them “extremely seriously, and will continue to innovate our capabilities to better detect and protect against abusive apps and the malicious actors behind them.” At this rate, Google will remove a million bad apps in 2018.