Chatbots are hot right now in the startup community. It seems like a new bot is released every day. I’m not always sure that chat interfaces are ideal solutions — I find scanning the news on Flipboard much quicker than chatting for the news — but the blending of A.I. with a chat interface is a match made in heaven for many verticals, including health care.
Developing scripts for our chatbot, Joy, over the past seven months has taught me about the unique challenges of writing for a chatbot. It’s closer to writing for a role-playing video game than anything else. I’ve used a combination of analytics, user testing, and a childhood spent playing Dungeons & Dragons to refine our scripts and improve engagement.
Since we are in the health care space, we’re answering complex questions in a complex field. We have to take industry jargon, provided by insurance companies or health professionals, and make it engaging for our end user. It takes more than a few iterations to get from a technical first draft to a message that’s not only accessible but also sounds human.
After all the time I’ve spent writing and rewriting scripts, here are my best suggestions about writing for a chatbot:
1. Use the Flesch-Kincaid grade level as a guide
Our audience comes from a huge cross-section of society. We sell our product directly to employers of a highly educated workforce as well as to people who didn’t finish high school. We’ve found that, regardless of the audience, making your scripts as simple as possible results in the greatest level of engagement. You can use the Flesch-Kincaid Grade Level feature in Microsoft Word or an app like Hemingwayapp.com to check the current grade level of your scripts. I don’t think there is a hard-and-fast rule about what grade level to target, but in general, the lower the better.
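If you want to see what those tools are actually measuring, the standard Flesch-Kincaid grade-level formula is simple enough to compute yourself. This sketch uses a naive vowel-group syllable heuristic, so treat its scores as rough approximations (Word and Hemingway use more robust counters):

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels,
    # treating a trailing silent "e" as non-syllabic.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Standard formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short words and short sentences are what drive the score down, which is exactly why simple scripts test so well.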
2. Use emojis and templated responses wisely
When we started writing scripts, using emojis seemed a little taboo. The health care industry is old-school, and anytime we demoed a script with an emoji in a room full of people, someone would make a negative comment. The funny thing is that when we did user testing with these same people but one person at a time, they laughed and smiled when they saw an emoji. It was amazing how quickly people formed an emotional connection with Joy, and emojis helped a lot in developing this connection. We did learn to only use emojis in positive affirmation responses and to introduce them later in the onboarding process.
We use templated responses for many user inputs, and one thing we learned is that people don’t like being “forced” to send emojis back to a bot. Our templated responses tend to be fairly generic because people need to feel that the options we offer actually reflect what they want to say. We may not need to rely on templates forever, as our A.I. is getting smarter every day. In the past few months Joy has learned to understand more free-form text and even emoji responses.
3. Limit the length and number of messages
One thing we quickly discovered in developing our scripts is that users don’t like to read long blocks of text. Engagement drops with every line of text beyond the third, a threshold we call the “glanceable tipping point.” Sometimes we exceed it, but we try to do so only when giving users a specific piece of information they are requesting. They need to be invested in the answer they are about to receive. If they ask for advice from a local doctor who accepts their insurance, they will take the time to read a long message because the information matters to them.
We’ve also noticed that users don’t like receiving too many messages in a row without a break. Our analytics made it clear that writing too many messages causes information overload. We’ve now added a user input option after 4-5 messages to break up the text and give the user a few seconds to catch their breath. Even a simple response like “OK” or “cool” works to pace the influx of texts.
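That break-up rule can be sketched in a few lines. The helper below is purely illustrative — the function name and the (role, text) message representation are assumptions, not our actual implementation — but it shows the idea: after every few bot messages, pause and wait for a quick acknowledgement before continuing:

```python
def insert_breaks(messages, max_run=4, prompt="OK"):
    # After every `max_run` consecutive bot messages, insert a
    # wait-for-user step so the reader gets a moment to catch up.
    # A simple templated reply like "OK" or "cool" is enough.
    paced = []
    for i, msg in enumerate(messages, start=1):
        paced.append(("bot", msg))
        if i % max_run == 0 and i < len(messages):
            paced.append(("await_user", prompt))
    return paced
```

The point isn’t the mechanics — it’s that the script author decides up front where the breathing room goes, instead of letting nine messages pile up unread.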
4. Watch your message pacing
Our A.I. can type a thousand words per minute, but that’s not what people want from a chat interface. We intentionally pace how fast a user receives our messages to make the experience feel more natural. We’ve tested many different speeds and found that adding a .02 second delay helps with engagement. We hope to create a feature that will analyze the way a user interacts with our system and adjust the pacing for each individual, and we already have enough data to appropriately adjust the pace by age.
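The flat-delay approach can be as simple as sleeping before each send. This `send_paced` helper and its signature are illustrative assumptions rather than our production code; the default matches the .02 second figure above, and the delay is exposed as a parameter precisely so it could later be tuned per user (for instance, by age group):

```python
import time

def send_paced(messages, send, delay=0.02):
    # Pause before each message so replies don't arrive all at once.
    # `send` is any callable that delivers one message to the user;
    # `delay` is the per-message pause in seconds, tunable per user.
    for msg in messages:
        time.sleep(delay)
        send(msg)
```

A nice property of keeping the delay in one place is that swapping a constant for a per-user lookup later is a one-line change.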
5. It’s not all about the chat
When we started, we wanted everything to reside within the chat interface. It quickly became apparent that an all-chat interface didn’t work for our purposes. When we first rolled out online doctor consultations, the feature was only accessible through the chat interface. We weren’t getting the utilization we were hoping for, as it was out of sight and out of mind. We ended up developing a menu where people could start different chat scripts and saw a dramatic increase in utilization both within chat and in the new menu. Sometimes you need to think outside the chat window.
Writing scripts for our chatbot has been a fun challenge. We’re still in this technology’s infancy, so every industry is starting with a clean slate and a different perspective. A retailer-bot might try to inject a little more of a Dale Carnegie approach to its language, while a financial company might be more direct and to the point. The great thing is that it’s easy to use simple online marketing techniques like A/B testing and analytics to test and improve your scripts as you go.

I believe chatbots will drastically change the way we interact with companies in the future. Our chatbot is supported by people right now, but our A.I. is getting more and more sophisticated. That growing sophistication also allows our system to be more proactive than human capital alone would permit. Most people are amazed at the features we provide at such a low price point, and it’s only possible with a chatbot A.I.