Chatbot retention has been a real problem. It’s so poor that most people don’t even get past the first two messages. According to İlker Köksal, the CEO of BotAnalytics, the initial drop-off is huge: “About 40 percent of users never get past the first text, and another 25 percent drop off after the second message. Daily retention rate is at a paltry 1–2 percent, and the monthly retention rate for bots isn’t much better, sitting at about 7 percent.” Fortunately, after hacking for the better part of 6 months, a few bots — such as the weather bot Poncho — have found the light and are seeing awesome retention and engagement rates.
There is such a wide variety of chatbot use cases that it does not make sense to compare against the average. You might have a use case that solves a one-time problem, in which case you hope the user never has to come back (such as the DoNotPay lawyer bot). The best way to benchmark is to compare your bot against mobile apps in your category; at a bare minimum, your goal should be to surpass them.
1. Find your core value
Poncho, like most other bots, suffered from low engagement. The bot did its job very well; it delivered the weather forecast along with a funny joke. And when it came to personality, Poncho was playful and funny, easily one of the best bot personalities out there.
So why was Poncho struggling? To put it mildly, Poncho was facing a number of giant challenges. For one, why would someone use the bot at all? It’s easier to get weather forecasts from Siri or straight from your home screen (on Android devices). Further, apps like Google Now automatically push the weather to you. How could Poncho compete with this?
Poncho began segmenting their best users, the people who were using the bot daily. After digging into the data, they tried to answer the “why?” question. They wanted to find out why this group of users came back, what problem Poncho was solving, and, more importantly, what problems Poncho could solve. After a lot of digging, they found their core value: users wanted to be notified of unusual upcoming weather events. For example, every time Poncho tells me “bring your umbrella, it is going to rain in 10 min in SF,” I feel a wave of gratitude. Siri doesn’t do that, and neither does Google Now. Poncho had figured out a big differentiator.
2. Optimize your bot for your highest quality users
Once you have your most successful cohort (or statistical group), you can start creating a user persona based on this data. This is extremely powerful because it is based on actual user behavior and can give you insights into why users do what they do. One of the easiest ways to pull this data is by using Facebook Audience Insights. Create a custom audience based on your top-performing cohort and Facebook will quickly compare your audience to the entire Facebook population. Facebook will give you a ton of data such as demographics, interests, affinity groups, location, purchase activity, online activity, lifestyle, relationship status, and more. During this process, you might discover that you have a few different user personas. You will have to take this into account and personalize each persona’s experience according to its needs.
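Before you can build a persona, you need the cohort itself. A minimal sketch of pulling your top users from raw interaction logs, assuming a hypothetical event log of `(user_id, date)` pairs and an arbitrary "active on at least 3 distinct days" threshold:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event log: one (user_id, date) pair per interaction.
events = [
    ("alice", date(2016, 11, 1)), ("alice", date(2016, 11, 2)),
    ("alice", date(2016, 11, 3)), ("bob", date(2016, 11, 1)),
    ("carol", date(2016, 11, 1)), ("carol", date(2016, 11, 3)),
]

def top_cohort(events, min_active_days=3):
    """Return users active on at least `min_active_days` distinct days."""
    active_days = defaultdict(set)
    for user, day in events:
        active_days[user].add(day)  # distinct days, not raw message count
    return {u for u, days in active_days.items() if len(days) >= min_active_days}

print(top_cohort(events))  # {'alice'}
```

The resulting user IDs are what you would upload as a custom audience for Audience Insights to profile.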
3. How and where to A/B test first
One of the best things about bots is how quickly you can run experiments. Once you have a clear understanding of your user personas and have tracked their behavior in the bot, it is time for A/B testing. In this phase, you simply make a hypothesis, run an experiment, and track the ensuing behavior. Your goal is to improve retention and engagement. The best place to start A/B testing is at the very top of your funnel: onboarding. The ideal onboarding takes your user on a journey that seamlessly teaches them how to use your product.
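The mechanics of an A/B test can be very light. One common approach (a sketch, not a prescription from the article) is to hash each user ID into a bucket so the same user always sees the same onboarding variant; the experiment name and variant labels below are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing user_id together with the experiment name means buckets are
    stable per user but independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment:
v = assign_variant("user-42", "onboarding-copy-v2")
assert v == assign_variant("user-42", "onboarding-copy-v2")
```

You would then log the assigned variant alongside each user's second-message completion, and compare drop-off rates per bucket.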
Currently, most bot users don’t get past the second message; the drop-off rate is over 50 percent. Your initial focus should be getting past that point, then improving each succeeding interaction. Even modest improvements are critical. In fact, it is better to improve by 1 percent daily for 10 days than to have no improvement for 9 days and a 10 percent improvement on the tenth day. Small wins add up very quickly and can lead to exponential growth.
In the end, it’s all about how well you know your users and the problem you’re solving for them. Ideally, you should be running A/B tests and updating your chatbot’s copy weekly. Always try to improve, even by the smallest margins. Think exponentially, not linearly!
This story originally appeared on Chatbotslife.com. Copyright 2016