Presented by Adikteev

The leaky bucket problem has existed in the mobile marketing world ever since the very first app launched. Many user acquisition strategies will net a huge chunk of new customers. Unfortunately, most of those strategies aren't very discerning, and a whole lot of those new users won't be your ideal target audience. This means as many as 70% of those users can churn on the very first day, leaving your app for good. The challenge is how to keep the most valuable users already engaged in the app, rather than acquiring a large number of new users who may not have the same lifetime value (LTV) potential.

App marketers need to figure out both when a user will churn and how to prevent it with an actionable program. Today, predictive user churn algorithms are the most efficient, cost-effective and accurate way to prevent high-value user churn, especially as privacy regulations in the app stores slash the amount of visibility into user behavior.

Machine learning can help identify the likelihood that a user will churn, weighted by that user's relative importance to the advertiser. It takes into account factors such as how aggressive a marketer needs to be to keep the user within their ecosystem, and the approach needed to win their attention and loyalty.

Why a predictive churn algorithm is crucial

Knowing when a user is no longer interested in your product defines when, where and how much you should be spending to keep them in your ecosystem, according to Cameron Thom, Head of SaaS Products at Adikteev.

“In internal studies, we’ve seen that it’s much more efficient to reach that user prior to churn than waiting until they actually exit,” Thom says. “And you also need to understand how aggressive you can be in your bidding once they have churned to bring them back in the app.” For example, in a gaming or ecommerce app, it can mean identifying which users are nearest to dropping off, and luring them back with substantial loyalty rewards.

When calculating churn, Adikteev chooses AUC as the metric, which stands for "area under the curve." The curve in this case is the receiver operating characteristic (ROC) curve. It's a statistical measure that can be used to evaluate a machine learning model's predictions using a probabilistic framework. To put it simply, it grades the model between zero and one. A grade of one means the model is 100% correct, while 0.5 is purely random. A good model reaches an AUC of 0.8 or above.
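That probabilistic framing has a simple interpretation: AUC is the chance that a randomly chosen churner gets a higher risk score than a randomly chosen non-churner. As a minimal sketch (not Adikteev's implementation), it can be computed in pure Python by comparing every churner/non-churner pair:

```python
def auc(labels, scores):
    """AUC: probability a random churner (label 1) is scored above a random
    non-churner (label 0); ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model that ranks every churner above every non-churner scores 1.0
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # 1.0
# Constant scores carry no signal: 0.5, the same as guessing
print(auc([1, 0, 1, 0], [0.5, 0.5, 0.5, 0.5]))  # 0.5
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used, but the pairwise definition above is what the 0.5-to-1.0 grading scale is measuring.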

The churn horizon, i.e., the number of days of inactivity required to consider a user a churner, can also be adjusted as a model parameter. This means the machine learning model can adapt to each app's specifics. Take an ecommerce app as an example: if a user stops interacting with the app for five days, it's a little early to consider them a churner. In a hyper-casual game, on the other hand, the probability of churn after the same number of inactive days is much higher. The churn horizon takes all of this into account.
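To make the idea concrete, here is a minimal sketch of how a configurable churn horizon might label users; the function name, data shape and horizons are illustrative assumptions, not the article's actual model:

```python
from datetime import date, timedelta

def label_churners(last_active, today, horizon_days):
    """Flag each user whose last activity falls outside the churn horizon."""
    cutoff = today - timedelta(days=horizon_days)
    return {user: last_seen < cutoff for user, last_seen in last_active.items()}

activity = {"u1": date(2024, 5, 1), "u2": date(2024, 5, 9)}
today = date(2024, 5, 10)

# A hyper-casual game might use a short horizon...
print(label_churners(activity, today, horizon_days=5))
# -> {'u1': True, 'u2': False}

# ...while an ecommerce app tolerates longer gaps between sessions
print(label_churners(activity, today, horizon_days=14))
# -> {'u1': False, 'u2': False}
```

Sliding the single `horizon_days` parameter is all it takes to retune the same labeling logic from one app category to another.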

With all of these factors considered, Adikteev ran a churn prediction algorithm with a gaming app. The AUC ROC score for the model fell between 0.8 and 0.9, or 80-90%, which is a strong result for prediction. This is a huge step up from where marketers are sitting now, blindly trying to plug the hole instead of efficiently tackling the problem before it starts.

The algorithm under the hood

“Any app marketer out there can take action against churn with relative accuracy,” Thom says. “If you’re being a little bit more aggressive in your approach, you’re capturing a good number of the users who will be likely to churn.”

These machine learning models are not a one-size-fits-all approach. Companies can build internal, custom-tailored models for each app they’re marketing, or work with a vendor to develop a solution that nails down the right user segments to watch, and the right way to target them, in a way that fits the app’s needs.

Model accuracy depends on how much data you have. The more it has to work with, the better it’s able to correctly identify and sort users, which helps gauge the effectiveness and sensitivity of the model. If an AUC ROC score is sitting at 0.6 or 0.7, that number will rise over time as the data science team (either internal or external) continues to optimize and refine the model to prevent drift, and as data pours in.

“It’s an issue for the data science teams, as they hone their craft over time,” Thom says. “It’s about evaluating the strength of the external vendor with case studies relative to their genre of app. Investing well there is going to be key, as is building up the program for the marketer to then tailor the messaging and work around the different cohorts of users they have.”

It’s also important to find the best prediction method for your type of business. There are three popular methods. The first is a rule-based method built on the RFM explanatory variables (recency, frequency and monetary value). The other two are machine learning approaches: the clustering approach, which groups users so that similar users (based on explanatory variables like RFM, and possibly others) end up in the same group; and the binary classifier approach, which uses machine learning to sift through user features and isolate the ones that matter.

The rule-based method built on RFM variables is the one most often used, and the least resource intensive. The machine learning approaches (clustering and binary classifier) have the potential to offer higher ROI, but they require more resources to deploy.
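A rule-based RFM check can be as simple as a handful of thresholds, which is why it is the least resource intensive of the three. The sketch below is purely illustrative; the thresholds and the two-of-three voting rule are assumptions for the example, not Adikteev's actual rules:

```python
from datetime import date

def rfm_churn_risk(last_purchase, purchase_count, total_spend, today,
                   recency_limit=30, frequency_floor=3, monetary_floor=50.0):
    """Flag a user as at-risk when most RFM signals point toward churn.
    Thresholds here are hypothetical examples, not production values."""
    recency = (today - last_purchase).days
    signals = [
        recency > recency_limit,           # R: hasn't purchased recently
        purchase_count < frequency_floor,  # F: purchases rarely
        total_spend < monetary_floor,      # M: low monetary value
    ]
    return sum(signals) >= 2  # simple majority vote across the three rules

# Lapsed, infrequent, low-spend user: flagged
print(rfm_churn_risk(date(2024, 3, 1), 1, 20.0, today=date(2024, 5, 10)))   # True
# Recent, frequent, high-spend user: not flagged
print(rfm_churn_risk(date(2024, 5, 8), 10, 500.0, today=date(2024, 5, 10)))  # False
```

The machine learning alternatives replace these hand-picked thresholds with boundaries learned from data, which is where the extra ROI, and the extra resource cost, come from.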

“Not all businesses have the resources to build and tune and run a churn prediction model in a live production environment,” Thom says. “You need to understand what your business can support and invest in well, either internally or externally. Having clean data is a great start, and then working with either your internal or external teams to adapt your model over time, relative to changes in the marketplace, and also to your individual product.”

Dig deeper: Find out from Adikteev how their User Churn prediction model works.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact