There’s lots of hype around chatbots – and so, so many ways to get it wrong. Make no mistake: Join our latest VB Live event featuring PwC’s Global AI Lead for Data & Analytics and other experts to learn how you can harness the power of artificial intelligence to create the kind of emotional connections that sell your stuff.

Register here for free.

Chatbots are supposed to be the love child of customer service and artificial intelligence, offering companies a cost-effective, customer-delighting way to answer questions and resolve issues. It’s cheaper than throwing a human at every complaint or inquiry, and it looks pretty slick too.

The problem is that an awful lot of the chatbots out there, especially the ones attached to messaging apps like Facebook Messenger or the intercom-style ones you find on company websites, tend to be basically decision-tree chatbots — digital versions of the old Choose Your Own Adventure books.

“This is something we’re trying to educate our clients about when we are called in to look into chatbots,” says Dr. Anand Rao, global AI lead for PwC Data & Analytics.

Current chatbots are mostly rule-based, not generative, Rao explains. They really don’t understand the full meaning of the sentences or the questions that they’re asked. They’re basically looking for certain keywords and matching those keywords with canned responses at the other end, which leads to failures ranging from small and irritating to pretty spectacular.

“There are some classic examples — when someone is calling about a death in the family, the chatbot comes back and says ‘That’s great! Tell me more about the death in your family!'” Rao says. “So it’s not a very good way of starting a conversation.”
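The keyword-matching behavior Rao describes can be sketched in a few lines. This is a hedged illustration, not any vendor’s actual implementation: the rule table, keywords, and responses below are all hypothetical, chosen to reproduce the failure mode in his example.

```python
# Minimal sketch of a rule-based, keyword-matching chatbot.
# All keywords and responses are hypothetical illustrations.

RULES = [
    ("refund", "I can help with refunds. What is your order number?"),
    ("family", "That's great! Tell me more about your family!"),
]

DEFAULT = "Sorry, I didn't understand that."

def respond(message: str) -> str:
    """Return the canned response for the first matching keyword.

    There is no parsing or understanding of the full sentence --
    only a substring check against the rule table.
    """
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return DEFAULT

# "a death in the family" still matches the "family" keyword,
# producing the tone-deaf reply Rao quotes.
print(respond("I'm calling about a death in the family"))
```

Because the bot keys on “family” without reading the rest of the sentence, it cheerfully mishandles exactly the situation that most needs a human touch.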

The key thing is for clients to understand the boundaries around some of these chatbots, he says. The problem is that when companies start small with a proof of concept, they often don’t experience the limitations that can be showstoppers in the real world.

“If it works in a limited setting with a few sets of keywords, people can be fooled, or impressed by what it can do,” he explains. “But it very quickly falters as you try and scale it to more functional domains, and if you want it to have more reasonable conversations as opposed to single interactions.”

There is a very real risk here, he adds. If you introduce a number of these limited chatbots indiscriminately and very quickly across your company’s marketing ecosystem, customers tend to turn against them quickly and very vocally — leading to brands rejecting the idea of chatbots altogether.

Rao explains that what companies need to do is evaluate what they’re doing more broadly before making a tool selection, and then determine how to take that step by step.

That includes understanding the full spectrum of what you want to do in terms of chatbots, whether it’s general service or very specific areas of service, frequently asked questions, or support-related questions. And where exactly do you want to deploy? Also, you need to think a little bit long term before you make a tool selection: how do you really want to scale it across an entire functional area?

“The other issue we find is that most organizations don’t have the right data: the labeled data that allows machine learning,” Rao says. “But depending on how you structure it, the traditional, rule-based chatbot could be a good way in which you can start getting that information.”

Even if you’re building something simple, he adds, you can build it in a way that allows it to start learning. Having a human on call is one surprising way to implement that process.

It’s fairly standard practice to have a human ready to step in when a chatbot isn’t able to answer a question (or it should be). But over time, the chatbot can accumulate data around what happens when it transfers control to a human: what triggered the need to transfer, and how did the human customer service agent respond?

Essentially, the AI is eavesdropping on a human conversation and picking up the emotional cues.

That can help refine the data, and the machine can start learning and growing.
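One way to picture the data-gathering step Rao describes is a chatbot that logs each human handoff as a structured record for later labeling. This is a hedged sketch under assumptions: the field names and the JSON-lines format are illustrative choices, not any particular platform’s schema.

```python
# Hedged sketch: logging human-handoff events so transcripts can later
# be labeled and used as machine-learning training data. The record
# structure below is a hypothetical illustration.

import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class HandoffRecord:
    conversation_id: str      # illustrative identifier
    trigger_message: str      # the customer message the bot failed on
    bot_last_reply: str       # what the bot said before handing off
    agent_replies: List[str]  # how the human agent actually responded

def serialize(record: HandoffRecord) -> str:
    """One JSON line per handoff -- easy to accumulate into a labeled corpus."""
    return json.dumps(asdict(record))

record = HandoffRecord(
    conversation_id="c-1042",
    trigger_message="I need to close my late father's account",
    bot_last_reply="Sorry, I didn't understand that.",
    agent_replies=["I'm so sorry for your loss. I can help with that."],
)
print(serialize(record))
```

Pairing the message that triggered the transfer with the agent’s actual response is what turns ordinary escalations into the labeled examples a learning system needs.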

This is something Rao’s firm recently did for a financial services client, which was interested in using deep learning to analyze their calls with one question in mind: What makes a good call? Not only in terms of sales revenue, but also what makes a good call in terms of the satisfaction of the customer: how were they educated, or made to feel at ease.

“Now take that a step further where you can start embedding some of those aspects into the chatbot,” Rao says. “As it learns by just observing humans, it starts picking up some of the cues on how it can interact as best practice with humans.”

In time, we’ll see systems that can understand not just the natural language, but the actual conversation, Rao says. In other words, they’ll understand the context around why you’re asking certain questions.

“That’s not quite emotional yet — they’re not empathizing with you, but at least they understand you, and are giving you answers in a much larger context,” he explains. “They can start answering questions and start connecting the conversations over a broader context. I think that step is the intermediary step before the next stage: emotionally connecting with our customers.”

Don’t miss out!

Register here for free.

In this webinar, you’ll:

  • Learn how deep learning can help your customers get a “human” through chat
  • Cut through the hype around chatbots and learn what really matters
  • Understand privacy issues around AI and how it may impact your org’s security
  • Integrate a successful marketing campaign using chatbot interactions


Speakers:
  • Dr. Anand Rao, Global AI Lead for PwC Data & Analytics
  • Scott Horn, CMO, 24[7]
  • Stewart Rogers, Director of Marketing Technology, VentureBeat
  • Rachael Brownell, Moderator, VentureBeat