AI and machine learning model architectures developed by Alphabet’s DeepMind have substantially improved the Google Play Store’s discovery systems, according to Google. In a blog post this morning, DeepMind detailed a collaboration to bolster the recommendation engine underpinning the Play Store, the app and game marketplace that’s actively used by over two billion Android users monthly. It claims that as a result, app recommendations are now more personalized than they used to be.
In an email, a Google spokesperson told VentureBeat that the new system was deployed this year.
It’s worth noting that this isn’t the first time the DeepMind team has contributed its expertise to the Android side of Google’s business. The U.K.-based subsidiary created on-device learning systems to boost Android battery performance, and its WaveNet system was used to generate voices that are now served to Google Assistant users. But it’s a particularly stark illustration of how embedded London-based DeepMind, which Google paid $400 million to acquire in January 2014, has become in Google’s ventures.
Google Play’s recommendation system contains three main models, as DeepMind explains: a candidate generator, a reranker, and an AI model to optimize for multiple objectives. The candidate generator can analyze more than a million apps and retrieve the most suitable ones, while the reranker predicts the user’s preferences along “multiple” dimensions. The predictions serve as the input to the aforementioned optimization model, whose solution gives the most suitable candidates to the user.
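DeepMind’s post doesn’t include code, but the three-stage pipeline it describes can be sketched roughly as follows. The catalog size, embedding dimension, the “novelty” signal, and the blending weight here are all illustrative assumptions, not details from the actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical catalog: one embedding per app. The real candidate
# generator searches over more than a million apps.
n_apps, dim = 10_000, 16
app_embeddings = rng.normal(size=(n_apps, dim))

def generate_candidates(user_embedding, k=100):
    """Stage 1: retrieve the k apps whose embeddings best match the user."""
    scores = app_embeddings @ user_embedding
    return np.argsort(scores)[::-1][:k]

def rerank(candidates, user_embedding):
    """Stage 2: predict preferences along multiple dimensions
    (two made-up ones here: relevance and novelty)."""
    relevance = app_embeddings[candidates] @ user_embedding
    novelty = rng.random(len(candidates))  # placeholder signal
    return np.stack([relevance, novelty], axis=1)

def optimize(candidates, predictions, weight=0.1):
    """Stage 3: combine the objectives; the primary one (relevance)
    dominates, the secondary one (novelty) acts as a tie-breaker."""
    combined = predictions[:, 0] + weight * predictions[:, 1]
    return candidates[np.argsort(combined)[::-1]]

user = rng.normal(size=dim)
cands = generate_candidates(user)
final = optimize(cands, rerank(cands, user))
print(final[:5])  # the apps this toy pipeline would surface first
```

The key property, per DeepMind’s description, is the funnel shape: a cheap retrieval pass narrows millions of apps down to a shortlist that the more expensive models can afford to score.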
In the pursuit of a superior recommender framework, DeepMind initially deployed to Google Play a long short-term memory (LSTM) model, a type of model capable of learning long-term dependencies in sequential data. But it says that while the LSTM led to significant accuracy gains, its hefty computational requirements introduced serving delays.
To address this, DeepMind replaced the LSTM with a Transformer model, which further improved model performance but which increased the training cost. The third and final solution was an efficient additive attention model that learns which apps a user is more likely to install based on their Google Play history.
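A minimal sketch of what additive (Bahdanau-style) attention over a user’s install history might look like is below. The parameter shapes and the pooling step are assumptions for illustration; in the real system the weights would be learned from Play Store install data:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# Hypothetical learned parameters of an additive-attention scorer.
W_hist = rng.normal(size=(dim, dim))
W_query = rng.normal(size=(dim, dim))
v = rng.normal(size=dim)

def additive_attention(history, query):
    """Score each app in the user's Play history against a candidate app
    using additive attention, then pool the history into a context vector."""
    scores = np.tanh(history @ W_hist.T + query @ W_query.T) @ v
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over history items
    return weights @ history            # attention-weighted context

history = rng.normal(size=(5, dim))    # embeddings of 5 installed apps
candidate = rng.normal(size=dim)       # embedding of a candidate app
context = additive_attention(history, candidate)
install_score = context @ candidate    # higher => more likely to install
```

Additive attention avoids the sequential recurrence of an LSTM and the quadratic self-attention of a full Transformer, which is consistent with DeepMind’s framing of it as the cheaper third option.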
In order to avoid introducing bias, the additive attention model incorporates importance weighting, which takes into account the impression-to-install rate (i.e., how often an app is shown versus how often it’s downloaded) of each app in comparison with the median impression-to-install rate. Through the weighting, the candidate generator downweights or upweights apps on the Play Store based on their impression-to-install rates.
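The weighting scheme described above can be illustrated with a few lines of arithmetic; the counts are made up, and the ratio-to-median formula is one plausible reading of DeepMind’s description rather than its exact method:

```python
import numpy as np

# Made-up per-app impression and install counts.
impressions = np.array([1000, 500, 2000, 800])
installs = np.array([50, 40, 20, 80])

rates = installs / impressions          # impression-to-install rate
median_rate = np.median(rates)

# Apps converting above the median are upweighted, those below it
# downweighted, so frequently shown but rarely installed apps don't
# dominate what the model learns.
weights = rates / median_rate
print(weights)
```

The third app here is shown often but installed rarely, so it gets a weight well below 1; the last app converts well despite fewer impressions, so it is upweighted.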
The next step in the recommender pipeline — the reranker model — learns the relative importance of a pair of apps that have been shown to a user at the same time. Each app in the pair is assigned a positive or negative label, and the model attempts to minimize the number of inversions in the ranking.
As for the Play Store’s optimization model, it tries to achieve a primary recommendation objective subject to constraints from secondary objectives. DeepMind notes that these goals might shift according to users’ needs: for example, a person who had previously been interested in housing search apps might have found a new flat, and so is now interested in home decor apps. The model therefore sets its objectives per request at serving time, and it trades off the secondary objectives along a curve so as not to compromise the primary objective.
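One simple way to read that trade-off curve is a sweep over a blending weight: increase the influence of a secondary objective only as long as the primary objective stays near its optimum. The objectives, scores, and 1% tolerance below are illustrative assumptions, not values from the production system:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-app scores for a primary objective (say, relevance)
# and a secondary one (say, freshness), for 50 candidate apps.
primary = rng.random(50)
secondary = rng.random(50)

def top_k(weight, k=10):
    """Blend the two objectives and pick the k best apps."""
    return np.argsort(primary + weight * secondary)[::-1][:k]

baseline = primary[top_k(0.0)].sum()  # primary objective, unconstrained

# Walk along the trade-off curve: raise the secondary weight only while
# the primary objective stays within 1% of its unconstrained optimum.
best_weight = 0.0
for w in np.linspace(0, 1, 21):
    if primary[top_k(w)].sum() >= 0.99 * baseline:
        best_weight = w
```

The result is the largest secondary-objective weight that doesn’t meaningfully hurt the primary objective, which mirrors the constrained-optimization framing in DeepMind’s description.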
“One of our key takeaways from this collaboration is that when implementing advanced machine learning techniques for use in the real world, we need to work within many practical constraints,” wrote DeepMind. “Because the Play Store and DeepMind teams worked so closely together and communicated on a daily basis, we were able to take product requirements and constraints into consideration throughout the algorithm design, implementation, and final testing phases, resulting in a more successful product.”