“It’s not you — it’s me.”
Many users, journalists, and even companies that have invested in the chatbot industry have spoken out about the so-called chatbot engagement problem.
The issue they are seeing is simple: Users initially play with a chatbot because it is fun and novel, but then quickly lose interest until, eventually, they go inactive (or quit using the bot altogether).
Conclusion? Chatbots have an engagement problem! People do not engage with chatbots! Chatbots are pulling a Pokémon Go!
Let’s hold on for a minute and look at the bigger picture.
You are responsible for your chatbot’s engagement metrics.
Before blaming the industry in general, we need to get this straight: You are in charge of and responsible for the engagement your chatbot receives.
It’s not as simple as releasing the bot and expecting everyone to cherish it and instantly use it every waking minute. You need to make people love your chatbot.
Are things not working out? Change your chatbot!
Ask your users for feedback, make changes, ask for more feedback, and improve.
If you are seeing mediocre metrics, the simplest explanation might be that your chatbot is not yet the best it can be. Don't blame an industry that is "not ready for it."
Engagement is a personal metric. Stop comparing.
Now that the responsibility aspect is out of the way, let’s discuss how you should measure and benchmark your chatbot’s engagement. Ask the question: What does success look like to you?
One of the biggest mistakes I see is people measuring success by looking at the number of interactions between the chatbot and the user.
Talk about a vanity metric.
Much like page views in a website's analytics, raw interaction counts with a chatbot do not predict, or demonstrate, any level of success.
If your users keep engaging with a bot because they are not getting what they are looking for and need to ask again and again, is that good engagement? Is that success?
Before measuring engagement, decide what success looks like for you. Then, draw conclusions based on the type of engagement you need to measure.
Are you building a news chatbot? You might measure engagement based on links clicked.
Are you building a data capture chatbot to qualify leads before they talk to a human? You might measure success by the number of suitably qualified leads received. In this case, high engagement would be bad. You don’t want people to keep talking to your chatbot — you want them to answer the questions and shoot off to your sales team.
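To make the contrast concrete, here is a minimal sketch of the difference between the vanity metric (raw interactions) and a goal-based success metric for the lead-qualification example. The conversation data and field names are entirely hypothetical, invented for illustration:

```python
# Sketch: vanity metric vs. goal-based metric for a lead-qualification
# chatbot. All data and field names below are hypothetical.

# Each record is one conversation: how many messages were exchanged,
# and whether the conversation actually produced a qualified lead.
conversations = [
    {"messages": 24, "lead_qualified": False},  # lots of chat, no outcome
    {"messages": 5,  "lead_qualified": True},   # short chat, goal reached
    {"messages": 6,  "lead_qualified": True},
    {"messages": 30, "lead_qualified": False},
]

# Vanity metric: total interactions. Big numbers, little meaning —
# the longest conversations here produced nothing.
total_interactions = sum(c["messages"] for c in conversations)

# Goal-based metric: how many conversations reached the actual goal.
qualified_leads = sum(c["lead_qualified"] for c in conversations)
success_rate = qualified_leads / len(conversations)

print(total_interactions)  # 65
print(qualified_leads)     # 2
print(success_rate)        # 0.5
```

Note that the two "successful" conversations are the shortest ones: by the interaction count, they look like the worst performers, which is exactly why the goal-based metric is the one worth tracking.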
Benchmark your chatbot’s metrics on things you know.
The second biggest mistake people make when measuring engagement is comparing their results with other chatbots in the market.
There are three issues with that.
One, there aren’t enough chatbots out there publishing reliable numbers for comparison.
Two, there are very few chatbots in general. You’ll find it hard to spot other chatbots in your niche or chatbots doing the same thing as you.
Three, benchmarking your chatbot's engagement metrics (or any other metric, for that matter) against other chatbots doesn't make sense. Your chatbot should perform its tasks better than whatever channel you previously used for those tasks. It makes far more sense to benchmark your chatbot's engagement against that previous channel, not against random other chatbots.
No doubt there are many chatbots dealing with low engagement and success metrics. My hope is this article gets the industry back into focusing on what it should be doing: building great solutions.
Chatbot engagement is not terrible — you have just built a bad chatbot. Improve it.
Alex Debecker is the founder of Ubisend, a SaaS platform for enterprise-grade chatbot development.