In one of the year’s most telegraphed tech reveals, Facebook announced in April that it was opening its Messenger APIs so brands could deploy chatbots to create rich, automated customer engagement on the app. But barely 24 hours passed before users discovered that the first chatbots on Messenger could not understand some simple questions, were slow to respond, and weren’t particularly conversational. In other words, they were more chatbust than chatbot.
Weather app Poncho took the brunt of users’ frustration by giving quippy, off-topic responses to questions it didn’t understand. And it clearly was not understanding much. It’s easy to pile on the risk takers who are often the first to fail, but we shouldn’t throw the chatbots out with the bathwater.
Messenger now boasts over 900 million active users who generate 60 billion messages a day. Add to this the fact that the artificial intelligence market is estimated to grow from $419.7 million last year to $5 billion by 2020, according to Research and Markets. Then consider that more than 10,000 developers are currently building chatbots on Messenger, and that Apple has announced that iOS 10 will allow third-party app integration with iMessage, making it a true chatbot platform.
It’s easy to see that the market is more than ready for mainstreaming the chatbot experience. However, reality has not yet met expectations because consumers are expecting more than spammy, ineffective chatbots out of the gate. So here are three easy ways to improve the chatbot experience:
1. Dominate the domain
Many of the Messenger chatbots coming out after F8 failed to set the proper expectations, and some just tried to do too much. Chatbots are not meant to be Google search on Messenger; not yet anyway. Google has spent nearly two decades collecting and connecting data and data stores, compared to the few weeks Poncho had.
The key to creating great automated experiences with a chatbot is to properly identify and own the domain in which it will operate. If you are an outdoor retailer, stick to the subject. Mine your customer data for the most commonly asked questions, and create the appropriate responses for them and for every variation of them you can think of. Then provide guidance for those questions that fall outside of the domain.
For example: The question, “What jacket should I wear to Provo?” isn’t easily answered unless you are hooking your chatbot up to a large data source that contains hundreds, if not thousands, of place names and their respective climates. Frame your chat experience with your customers and guests so when they try to trip you up (and they will always try), you can artfully bring them back into the fold. Respond based on words within the domain’s knowledge, such as “jacket,” and bring the conversation into your comfort zone (apparel you sell) vs. what you don’t know (the weather).
2. Supervise while your bot’s in training
I liken the growth of chatbots to calling into a contact center. If you contact your wireless provider’s customer service and happen to catch an agent on his or her first day, mistakes will be made, answers will be slow, and most likely the experience will not be perfect. But call that same agent in a year or two and they’ll be able to answer just about any question you could possibly come up with and several more you hadn’t even thought of. The more questions a chatbot fields, the more interactions it has. The more mistakes it makes, the better it will become over time. But the key is to have it learn in a supervised environment.
In a prescriptive or “rules-based” development environment, internal logic is completely visible and easily editable by developers and those managing the chatbot. Conversely, many machine learning frameworks do not provide easily understood visibility into the chatbot’s learning logic. Human assistance ensures the bot becomes smarter with every interaction, while the lack of any direct human control can have unforeseen and negative consequences, as we saw with Microsoft’s Tay.
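One way to picture that supervised, rules-based loop: utterances the bot can’t handle go into a review queue, and a human supervisor’s approved answer becomes a new, fully visible rule. This is a hedged sketch under assumed names (`SupervisedBot`, `teach`), not the API of any specific framework.

```python
# Sketch of supervised training: unknown utterances are queued for a
# human, and the human's approved answer becomes a new rule, so the
# bot improves with every interaction. Names are illustrative only.

class SupervisedBot:
    def __init__(self):
        self.rules = {}          # utterance -> approved answer (visible, editable)
        self.review_queue = []   # utterances awaiting a human answer

    def reply(self, utterance: str) -> str:
        if utterance in self.rules:
            return self.rules[utterance]
        self.review_queue.append(utterance)   # flag for human review
        return "Let me check on that and get back to you."

    def teach(self, utterance: str, answer: str) -> None:
        """A human supervisor approves an answer, adding it to the rules."""
        self.rules[utterance] = answer
        if utterance in self.review_queue:
            self.review_queue.remove(utterance)

bot = SupervisedBot()
bot.reply("Do you price match?")   # unknown: queued for a human
bot.teach("Do you price match?", "Yes, within 14 days of purchase.")
print(bot.reply("Do you price match?"))   # now answered from the rules
```

The point of the design is that every piece of logic the bot acquires passed through a human first, which is exactly the safeguard Tay lacked.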
3. Map out different paths for happy vs. not-happy users
An ideal design for a chatbot experience is to create several primary paths for consumers to go down. For example, a happy path for informational/transactional engagement, a not-so-happy path for issue resolution, a humorous path for the teasers, and so on. Then create a corpus (response collection) for each path while identifying keywords/phrases associated with the respective paths — such as “livid” (issue resolution) and “checking” (informational) — so that snark and humor only kick in if all other possible interpretations of an utterance can be ruled out. The last thing you want to do is answer a complaint with a cute comment meant to amuse.
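The routing logic above can be expressed in a few lines: check the higher-priority paths first, and let humor fire only when every other interpretation has been ruled out. The path names, keyword sets, and responses here are invented examples, not a real routing table.

```python
# Sketch of priority-ordered path routing. Humor is strictly the last
# resort: it only fires when no other path matches. All keywords and
# responses are made-up examples.

PATHS = [  # checked in priority order
    ("issue_resolution", {"livid", "angry", "broken", "refund"},
     "I'm sorry to hear that. Let me connect you with support."),
    ("informational", {"checking", "hours", "shipping", "price"},
     "Happy to help. Here's what I found for you."),
]
HUMOR_RESPONSE = "I'm just a bot, but I'm told my jokes are top-shelf."

def route(utterance: str):
    """Return (path_name, response) for an utterance."""
    words = set(utterance.lower().split())
    for path, keywords, response in PATHS:
        if words & keywords:          # any keyword overlap wins the path
            return path, response
    return "humor", HUMOR_RESPONSE    # only when all else is ruled out

print(route("I am livid about my order"))   # routes to issue_resolution
```

Because “livid” is checked before anything else, a complaint can never fall through to the humorous corpus, which is the failure mode the section warns against.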
A conversational UX lets users talk or type anything they want, which creates a more human-like experience, while still guiding them down a narrowing path. And if both users and developers are patient, the chatbot will be able to collect a wide range of inquiries, lowering the “trip-up” rate and improving the success rate over time.
The bottom line
The key here is that, despite some early failures, no one has any plans to bail on bots. Artificial intelligence, natural language understanding, and machine learning technologies are ready and able to take on the chatbot challenge. But whenever users engage with a new technology for the first time, designers discover areas they overlooked. Early adopters bear the burden of these shortcomings, but updates to the technology will smooth out the pathways to ensure that later travelers will be able to take these roads for granted.
Joe Gagnon is Chief Customer Officer and GM of Aspect Software’s Cloud Solutions. For over 20 years he has studied the evolving relationship between companies and consumers and how content and customer interaction affects that relationship. He has worked with companies from Penn Foster, to IBM, Exit 41, and E&Y.