Are you a mobile app owner or product marketer? Jon Chew will be speaking at VB’s Mobile Developer Roadshow in Vancouver on March 25 where we’ll be sharing the latest trends and case studies in mobile app acquisition and monetization strategies. This event is invite-only: Find out if you qualify here.
The Roadshow’s four stops include: Seattle (March 24), Vancouver (March 25), Toronto (April 1), and Montreal (April 2). Get all the info here.
Warning: Analytics terms to follow. If you ever get lost in the article, feel free to hit me up @jonchew on Twitter :)
Retention is a key metric for most mobile game developers — or a lot of other mobile apps, for that matter. It’s essential in evaluating the quality of a user, which in turn is important for both user acquisition and monetization. Yet, for all its importance, we’ve never officially standardized what it really means or how it’s calculated. It’s annoying.
I have a dream where we all just use one calculation… in the same way that other key metrics like ARPDAU (day X revenue / day X players) or stickiness (DAU/MAU) have been standardized. Why can’t retention be the same way?
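To make the contrast concrete, here is a minimal sketch of the two metrics the article treats as standardized. The input figures are hypothetical and only for illustration:

```python
def arpdau(day_revenue: float, day_players: int) -> float:
    """Average revenue per daily active user: day X revenue / day X players."""
    return day_revenue / day_players

def stickiness(dau: int, mau: int) -> float:
    """Stickiness: daily active users divided by monthly active users."""
    return dau / mau

# Hypothetical example figures:
print(arpdau(1250.0, 10000))     # 0.125
print(stickiness(10000, 40000))  # 0.25
```

Whoever computes these, and however they slice the data, the formula is the same. Retention has no equivalent one-liner everyone agrees on.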
How does retention work?
Simply put, retention measures how long, in days, a player stays engaged with the product. Usually expressed as a percentage, it tracks a player’s progress from install date (D0) until drop-off (D1~DN). When most developers look for key numbers to judge the effectiveness of their products, they look at D1, D7, and D30: respectively, the percentage of players returning one day, one week, and one month after install. In theory, this is a nice indication of success.
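One common way to compute day-N retention is by calendar date: count the players in a cohort who were active exactly N days after their install date. A sketch, with a hypothetical activity log (user mapped to install date plus the set of dates they were active):

```python
from datetime import date

# Hypothetical activity log: user -> (install date, dates active).
activity = {
    "u1": (date(2015, 3, 1), {date(2015, 3, 2), date(2015, 3, 8)}),
    "u2": (date(2015, 3, 1), {date(2015, 3, 3)}),
}

def day_n_retention(activity: dict, n: int) -> float:
    """Fraction of the cohort active exactly n calendar days after install."""
    cohort = len(activity)
    returned = sum(
        1 for install, active_days in activity.values()
        if any((d - install).days == n for d in active_days)
    )
    return returned / cohort

print(day_n_retention(activity, 1))  # 0.5 (only u1 came back the next day)
print(day_n_retention(activity, 7))  # 0.5 (only u1 was active a week later)
```

Note that this is just one of several plausible formulas; as the rest of the article argues, that is exactly the problem.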
Thomas Sommer from AppLift put it best:
“Everyone talks about it, but there seems to be no clear consensus on a common definition of what retention really means nor how it is actually calculated.”
Before I dive into the issue, I want to make it clear that I’m referring to the standardization of the calculation, not the benchmark.
Benchmark vs Calculation: The good folks over at MixPanel believe in the one-shoe-doesn’t-fit-all scenario. I agree. Benchmarks can vary depending on genre, device, game design, etc. The calculation, however, is the data-driven math to determine the actual retention number. When the calculation is inconsistent between different companies, studios, agencies, and 3rd-party tools, who else can you really trust except yourself?
Reporting becomes frustrating when two people claim two different numbers for the same metric. Does a user who plays at 11:00pm and comes back at 1:00am the next morning count as “returning the next day”? Should they be part of the day-1 retention pool? After all, only two hours have passed. Shouldn’t it require a return 24 hours later? And if their sessions were 23 hours apart, spanning two calendar days, but missed the 24-hours-later mark, should they still count? Questions like these are stumping many companies and causing discrepancy issues in reporting for plenty of user acquisition specialists out there (… and I haven’t even gotten to absolute vs. rolling retention yet… ).
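The 11:00pm example above can be made concrete. The two rules, calendar-day vs. a strict 24-hour window, classify the exact same user differently (timestamps here are hypothetical):

```python
from datetime import datetime, timedelta

install = datetime(2015, 3, 1, 23, 0)  # installs at 11:00pm
returns = datetime(2015, 3, 2, 1, 0)   # comes back at 1:00am, two hours later

# Calendar-day rule: any session on the next calendar date counts as D1.
calendar_d1 = (returns.date() - install.date()).days == 1

# 24-hour-window rule: the return must fall 24-48 hours after install.
delta = returns - install
window_d1 = timedelta(hours=24) <= delta < timedelta(hours=48)

print(calendar_d1, window_d1)  # True False
```

Same user, same data, two different day-1 retention numbers. Multiply that across every user in a cohort and the reported metrics can diverge substantially.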
It makes me skeptical at conferences like GDC or Casual Connect when companies claim 70% day-1 retention. How exactly are they getting that number? I don’t doubt the number, but I am curious how different it would be using the formula I’d use.
When you’re trying to optimize very, very, very expensive advertising campaigns or implement cohort-based monetization strategies, it’s difficult to properly assess the quality of players from each partner when every partner reports a different number. Having worked on both sides of the coin, I know it’s very easy to lose faith in a product when you can’t trust the information it’s telling you.
Finding a solution
While I have my own beliefs about what retention should and shouldn’t be, I can’t say for sure that I’m right. It would take a summit of key industry folks coming together to reach a consensus, then sharing that consensus with major influencers to set the standard. Is it possible? Of course. Is it probable? Probably not. Perhaps we need a governing data-scientist organization to make our lives a little easier. The more accurate and consistent the data I can get, the better I can do my job in acquisition and monetization.
Someday I’d like to see a universal retention formula. We’re in one of the most data-driven, ahead-of-the-curve industries; we should set a model for the future that helps and benefits our entire community.
Jon Chew is the User Acquisition Specialist at BANDAI NAMCO Studios Vancouver. Prior, he led the Analytics team at East Side Games, a fierce indie studio. Jon loves games, Korean BBQ, and all things digital. Follow him on Twitter @jonchew.