The “always be testing” bug runs deep among my colleagues and me, but our experiments extend far beyond simply testing images against text or personalized subject lines versus generic ones. More specifically, we are major proponents of longer-term customer studies in addition to one-off A/B tests.
Today’s Conversions vs. Tomorrow’s Value
Whereas an ad hoc campaign A/B test might focus on driving incremental revenue through offer testing (e.g., testing free shipping vs. $20 off vs. 10% off in a welcome email to see which offer yields the strongest revenue per send), cohort studies are designed to look at the impact of different treatments on customers over time. Cohorts are groups of customers that share at least one common trait in a time-bound period: users who signed up from a Facebook sweepstakes in January would be one cohort, whereas users acquired from a similar campaign in March would be a separate cohort.
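To make the definition concrete, here is a minimal sketch of cohort assignment. The user records, sources, and dates are hypothetical; the point is simply that a cohort key pairs a shared trait with a time window:

```python
from collections import defaultdict
from datetime import date

# Hypothetical user records: (user_id, acquisition_source, signup_date).
users = [
    ("u1", "facebook_sweepstakes", date(2015, 1, 12)),
    ("u2", "facebook_sweepstakes", date(2015, 1, 28)),
    ("u3", "facebook_sweepstakes", date(2015, 3, 3)),
    ("u4", "search", date(2015, 1, 15)),
]

# A cohort key combines a shared trait (source) with a time window (month).
cohorts = defaultdict(list)
for user_id, source, signed_up in users:
    key = (source, signed_up.strftime("%Y-%m"))
    cohorts[key].append(user_id)

for key, members in sorted(cohorts.items()):
    print(key, members)
```

The January and March sweepstakes signups land in separate buckets, which is exactly what lets you compare their downstream behavior later.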
NOTE: Cassie Lancellotti-Young will be speaking at VentureBeat’s GrowthBeat conference in San Francisco, August 5-6. For tickets and to find out more about the event, head over to the GrowthBeat event site.
Let’s stick with the example above: Seeking to optimize its welcome stream in the name of incremental conversions, a retailer tests a 10% discount offer to new subscribers at various points in the first 14 days. Not surprisingly, the promotional offer results in a lift in gross conversions (Marketing 101: promotion moves product!). After digging one level deeper into the numbers, the retailer notes that the offer also prompted an increase in average order value (AOV), likely due to classic stockpiling effects. Sounds like we’ve found a winner in the promotion takers, right?
No, we have not. Instead, we find ourselves with several follow-on questions around the downstream impact of this promotion; namely, did the early discount offer train the customer to buy on promotion and erode downstream lifetime value?
Dissecting the Surface Metrics
Perhaps this marketer is particularly in tune with the numbers and, several months after the initial welcome stream, analyzes some downstream performance, noting that AOV remains higher for those who initially converted on discount. With this new data point, it seems as though we have finally identified a clear winner in the discount cell, have we not?
In fact, we unfortunately still have not. Consider the chart below, which also takes into account the purchase frequency of the two cohorts over two years. Although the discount converters’ transactions were between 3 and 5.4 percent more valuable than those of customers paying full price, they purchased 7.2 percent less frequently (again, this could be a function of stockpiling), meaning the cohort with the seemingly more expensive carts netted out roughly 3.3 percent less valuable.
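The net-value arithmetic is worth spelling out. Per-customer value scales with order value times purchase frequency, so a sketch using the midpoint of the AOV range above reproduces the roughly 3.3 percent deficit:

```python
# Hypothetical figures taken from the ranges above: the midpoint of the
# 3%-5.4% AOV lift, offset by the 7.2% purchase-frequency deficit.
aov_lift = 0.042           # discount cohort's orders are ~4.2% more valuable
frequency_deficit = 0.072  # ...but those customers buy 7.2% less often

# Per-customer value ~ (order value) x (purchase frequency).
relative_value = (1 + aov_lift) * (1 - frequency_deficit) - 1
print(f"{relative_value:.1%}")  # roughly -3.3%
```

In other words, a modest AOV lift cannot outrun a larger frequency deficit: the two effects multiply, and frequency wins here.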
Even with this revelation, though, it’s important to revisit the QQQ: the quantity/quality quandary. Are there enough customers at that higher two-year value to keep gross revenue compelling? If the number of buyers falls considerably without that promotional incentive, the marketer will need to think twice.
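The quantity/quality quandary is itself a one-line break-even calculation. Using the (hypothetical, indexed) per-customer values from the example above, you can ask how many buyers the full-price cell can afford to lose before the discount cell wins on gross revenue:

```python
# Hypothetical indexed two-year values per customer, from the example above:
# full-price customers = 100, discount converters ~3.3% less valuable.
full_price_value = 100.0
discount_value = 96.7

# Gross revenue = buyers x value per buyer. The full-price cell's revenue
# stays ahead only while it retains at least this share of the buyer base.
max_buyer_loss = 1 - discount_value / full_price_value
print(f"{max_buyer_loss:.1%}")  # ~3.3%
```

If dropping the promotional incentive shrinks the buyer pool by more than that margin, the "lower-quality" discount cohort still delivers more gross revenue, and the marketer does indeed need to think twice.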
Short-Term Experiments vs. Long-Term Learning
Recently, many of our clients have leveraged cohort-based tests to assess the impact of email frequency on subscriber opt-out rates. More specifically, they use customer-level variables to define two groups: one that receives emails daily for the first 60 days and another that receives only three messages per week. From there, they can compare and contrast 60-day opt-out rates and other engagement metrics to understand the potential impact of tweaking frequency.
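Comparing 60-day opt-out rates between two frequency cohorts is, statistically, a two-proportion test. A minimal sketch with made-up counts (the cohort sizes and opt-out numbers below are illustrative, not client data):

```python
from math import sqrt, erf

def two_proportion_z(optouts_a, n_a, optouts_b, n_b):
    """Two-sided z-test for a difference in proportions (normal approximation)."""
    p_a, p_b = optouts_a / n_a, optouts_b / n_b
    p_pool = (optouts_a + optouts_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical 60-day opt-outs: daily cohort vs. three-per-week cohort.
z, p = two_proportion_z(180, 5000, 130, 5000)  # 3.6% vs. 2.6% opt-out
print(f"z = {z:.2f}, p = {p:.4f}")
```

With counts like these, the gap clears conventional significance thresholds; with smaller cohorts, the same percentage gap might not, which is exactly why the cohort sizes matter as much as the rates.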
Campaign-centric testing – adjusting elements such as subject lines and calls to action – can certainly be valuable for driving incremental ROI, but it’s mission-critical to rethink how you structure those tests. Rather than running separate A/B tests on your welcome email, your day-2 email, your day-7 email, and so on, should you be developing welcome series A vs. B and ensuring that one cohort receives ALL the A cells and the other ALL the Bs (read: cutting the test groups at the customer level rather than the campaign level)? Probably so.
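Cutting the test at the customer level is easy to implement with a deterministic assignment function. A sketch (the experiment name and helper are hypothetical, not a particular vendor's API):

```python
import hashlib

def series_variant(customer_id: str, experiment: str = "welcome_series_v1") -> str:
    """Deterministically assign a customer to welcome series A or B.

    Hashing the customer ID (salted by the experiment name) means every
    message in the stream -- welcome, day 2, day 7 -- lands in the same
    cell, so the split happens at the customer level, not per campaign.
    """
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same customer always gets the same variant across the whole series.
print(series_variant("customer-42"), series_variant("customer-42"))
```

Because the assignment is a pure function of the customer ID, no state needs to be stored, and any system sending any message in the series arrives at the same answer.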
So while you should always be testing, make sure you’re playing the long game while you’re at it. Revisit your results regularly to corroborate that optimizing for near-term conversions or engagement is not at the expense of long-term customer value. This is the difference between driving quick wins versus sustainable lifetime value.
Cassie Lancellotti-Young, VP of Analytics and Optimization, Sailthru.