
We love stories of dramatic breakthroughs and neat endings: The lone inventor cracks the technical challenge, saves the day, the end. These are the recurring tropes surrounding new technologies.

Unfortunately, these tropes can be misleading when we’re actually in the middle of a technology revolution. Prototypes get the attention, while the complex, incremental refinement that truly delivers a breakthrough solution goes largely unnoticed. Take penicillin. Discovered in 1928, the medicine didn’t actually save lives until it was mass-produced 15 years later.

History is funny that way. We love our stories and myths about breakthrough moments, but oftentimes, reality is different. What really happens, the often long periods of refinement, makes for far less exciting stories.

This is where we currently are in the artificial intelligence (AI) and machine learning (ML) space. Right now, we’re seeing the excitement of innovation. There have been amazing prototypes and demos of new AI language models, like GPT-3 and DALL-E 2.


Regardless of the splash they made, these kinds of large language models haven’t revolutionized industries yet — including ones like customer support, where the impact of AI is especially promising, never mind general business cases.

AI for customer experience: Why haven’t bots had more impact? 

The news about new prototypes and tech demos often focuses on a model’s “best case” performance: What does it look like on the golden path, when everything works perfectly? This is often the first evidence that disruptive technology is arriving. But, counter-intuitively, for many problems we should be much more interested in the “worst case” performance. A model’s worst-case behavior often matters far more than its best-case behavior.

Let’s look at this in the context of AI. A customer support bot that sometimes doesn’t give customers answers, but never gives them misleading ones, is probably better than a bot that always answers but is sometimes wrong. This is crucial in many business contexts.
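One common way to build that property into a bot is a confidence threshold: answer only when the model is sufficiently sure, otherwise hand off to a human. The sketch below illustrates the idea; the toy model, intent names, and the 0.8 threshold are all illustrative assumptions, not any vendor’s actual API.

```python
# Minimal sketch of an "abstain rather than mislead" support bot.
# FAKE_MODEL stands in for a real classifier; its answers and
# confidence scores are made up for illustration.

from typing import Optional

CONFIDENCE_THRESHOLD = 0.8  # tune to the business's tolerance for wrong answers

FAKE_MODEL = {
    "how do I reset my password?": ("Use the 'Forgot password' link.", 0.95),
    "why was my account flagged?": ("Possibly a billing issue.", 0.42),
}

def answer_or_escalate(question: str) -> Optional[str]:
    """Return an answer only when the model is confident.

    Returning None signals a hand-off to a human agent:
    no answer is better than a misleading one."""
    answer, confidence = FAKE_MODEL.get(question, ("", 0.0))
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return None
```

The key design choice is that the threshold is a business decision, not a modeling one: a company that cannot afford wrong answers sets it high and accepts more escalations.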

That’s not to say that the potential is limited. An ideal state for AI customer support bots would be to answer many customer questions — those that don’t need human intervention or nuanced understanding — “free form,” and correctly, 100% of the time. This is rare now, but there are disruptive applications, techniques and embeddings that are building toward this, even in today’s generation of support bots. 

But to get there, we need easy-to-use tools to get a bot up and running, even for less technical implementers. Thankfully, the market has matured over the past 3 to 5 years to get us to this point. We’re no longer facing an immature bot landscape where the only options were the likes of Google DialogFlow, IBM Watson and Amazon Lex: good NLP bots, but very tricky for non-developers to use. It’s ease of use that will get AI and ML into an adoptable and impactful product.

The future of bots isn’t some new, flashy use case for AI

One of the biggest things I’ve learned watching companies deploy bots is that most don’t get the deployments right. Most businesses build a bot, have it try to answer customer questions, and watch it fail. That’s because there’s often a big difference between a customer support rep doing their job and articulating it precisely enough that something else, an automated system, can do it too. We typically see businesses have to iterate to achieve the accuracy and quality of bot experience they initially expected.

Because of this, it’s crucial that businesses aren’t dependent on scarce developer resources as part of their iteration loop. Such reliance often leads to not being able to iterate to the actual standard the business wanted, leaving it with a poor-quality bot that saps credibility.

This is the major component of that complex, incremental refinement that doesn’t make exciting stories but delivers a true, breakthrough solution: Bots must be easy to build, iterate and implement — independently, even by those not trained in engineering or development. 

This is important not just for ease of use. There’s another consideration at play. When it comes to bots answering customer support questions, our internal research shows we’re facing a Pareto 80/20 dynamic: Good informational bots are already about 80% of the way to their ceiling. Instead of trying to squeeze out the last 10 to 15% of informational queries, industry focus now needs to shift toward applying this same technology to non-informational queries.

Democratizing action with no-code/low-code tools

For example, in some business cases, it isn’t enough just to give information; an action has to be taken as well (that is, reschedule an appointment, cancel a booking, or update an address or credit card number). Our internal research shows that the share of support conversations requiring an action to be taken has a median of roughly 30% across businesses.
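In code terms, this is the difference between a bot that only replies with text and one that routes recognized intents to action handlers. The sketch below shows that routing pattern; the intent names and handler functions are hypothetical examples, not a real platform’s interface.

```python
# Hypothetical sketch: dispatching "action" intents (reschedule, cancel)
# to handlers, with a human hand-off for anything unrecognized.

def reschedule_appointment(user_id: str, new_time: str) -> str:
    # In a real system this would call a scheduling backend.
    return f"Appointment for {user_id} moved to {new_time}"

def cancel_booking(user_id: str) -> str:
    # In a real system this would call a bookings API.
    return f"Booking for {user_id} cancelled"

ACTION_HANDLERS = {
    "reschedule_appointment": reschedule_appointment,
    "cancel_booking": cancel_booking,
}

def handle_intent(intent: str, **kwargs) -> str:
    """Run the matching action handler, or fall back to a human."""
    handler = ACTION_HANDLERS.get(intent)
    if handler is None:
        return "A human agent will handle this request."
    return handler(**kwargs)
```

A no-code/low-code platform effectively lets a support team populate a table like `ACTION_HANDLERS` themselves, without waiting on developers for each new action.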

It needs to be easier for businesses to actually set their bots up to take these actions. This is somewhat tied to the no-code/low-code movement: Since developers are scarce and expensive, there’s disproportionate value to actually enabling the teams most responsible for owning the bot implementation to iterate without dependencies. This is the next big step for business bots.

AI in customer experience: From prototypes to opportunities

There’s a lot of attention on prototypes of new and upcoming technology, and at the moment there are exciting developments that will make AI, bots, ML and the customer experience even better. However, the clear and present opportunity is for businesses to keep improving and iterating with the technology that’s already established: to use new product features to integrate this technology into their operations and realize the business impact already available.

We should be spending 80% of our attention on deploying what we already have and only 20% of our time on the prototypes.

Fergal Reid is head of Machine Learning at Intercom.

