Presented by Five9


For years, businesses have sought to provide customers with more self-service options and increase automation rates in their contact centers using speech-enabled interactive voice response systems (IVRs). They have also invested heavily in developing web chatbots.

However, these systems were complicated to develop and required organizations to purchase, host, and manage a vast array of software, hardware, and equipment. Applications were also created in silos, requiring multiple development projects while making it difficult for applications to share data and context.

A number of disruptive innovations have made it easier and more affordable to deploy AI- and speech-enabled self-service. Vendors like IBM, Google, and Amazon have migrated the underlying speech recognition, text-to-speech, and natural language understanding technology to the cloud, packaging it as software-as-a-service. Organizations can now pay as they go for these services rather than buying the technology outright.

Additionally, the learning models powering these cloud speech services are trained on millions of utterances gathered as consumers speak to their smart devices. This enables the technology to more accurately determine a speaker’s intent because it has learned to recognize the many different ways people phrase their questions and requests.

Furthermore, speech application development has become much more streamlined with no-code development tools. Organizations of all sizes are now using this technology to develop and deploy intelligent virtual agents (IVAs) in their contact centers. IVAs can answer the phone with an open-ended question (“How can I help you?”), understand what a customer is asking for, react, and complete simple tasks and transactions.

So, what does it take to develop an IVA that makes self-service as effortless as speaking to Alexa or Siri? Read on to find out.

1. Determine which skills your IVA needs to have

Just like human contact center agents, each intelligent virtual agent has a set of skills. For example, an IVA with basic skills might simply answer the phone, ask the caller if they want to maintain their place in a call queue and schedule a callback. Or an IVA can have more advanced skills, such as understanding human speech in multiple languages, responding to frequently asked questions in multiple languages, and processing a credit card payment. Organizations can determine the skills their IVA will need based on the types of customers they serve, and the contact center tasks they want to automate (more on that in step 3).
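
To make this concrete, here is a minimal sketch in Python of how an IVA's skill set might be described as data. The class and field names are hypothetical illustrations for this article, not a Five9 API.

```python
from dataclasses import dataclass, field

@dataclass
class IVASkill:
    """One capability the virtual agent can perform (illustrative only)."""
    name: str
    languages: list[str] = field(default_factory=lambda: ["en-US"])
    requires_payment_gateway: bool = False

# A basic IVA: hold the caller's place in the queue and schedule a callback.
basic_iva = [IVASkill(name="queue_callback")]

# A more advanced IVA: multilingual FAQ answers plus card payments.
advanced_iva = [
    IVASkill(name="answer_faq", languages=["en-US", "es-MX", "fr-CA"]),
    IVASkill(name="take_payment", requires_payment_gateway=True),
]
```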

2. Decide which channels to support

Customers can interact with IVAs using their channel of choice. They can speak to IVAs over the phone or communicate through text-based channels including SMS, social media messaging apps like WhatsApp, and web-based chatbots. Businesses can determine their customers’ preferred channels through contact center reporting or surveys and extend the organization’s self-service applications across multiple touchpoints. But they must ensure their applications share the same back-end components — databases, reporting, payment gateways, etc. This allows an IVA to maintain context so the conversation can progress seamlessly as it is passed from one channel to another.
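
As an illustration of that shared back end, the sketch below shows one way conversation context could be stored so it follows the customer from voice to chat. The in-memory store and field names are assumptions made for the example; a real deployment would use a shared database or cache that every channel front end reads.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationContext:
    customer_id: str
    channel: str                                   # "voice", "sms", "whatsapp", "webchat"
    collected: dict = field(default_factory=dict)  # slots gathered so far

class ContextStore:
    """Illustrative in-memory store; a real IVA would use a shared database."""
    def __init__(self):
        self._sessions: dict[str, ConversationContext] = {}

    def get_or_create(self, customer_id: str, channel: str) -> ConversationContext:
        ctx = self._sessions.get(customer_id)
        if ctx is None:
            ctx = ConversationContext(customer_id, channel)
            self._sessions[customer_id] = ctx
        else:
            ctx.channel = channel                  # conversation moved to a new channel
        return ctx

store = ContextStore()
voice = store.get_or_create("cust-42", "voice")
voice.collected["order_id"] = "A1001"              # gathered over the phone
chat = store.get_or_create("cust-42", "webchat")
assert chat.collected["order_id"] == "A1001"       # still available in web chat
```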

3. Choose the tasks you will automate

Organizations can use their contact center reporting data, surveys, and input from business leaders and front-line service agents to determine the service tasks that are best suited for automation in their contact centers. Consider which customer intents will be the easiest to fulfil via IVA, which tasks will deliver the most bang for the buck when automated, and which will be most effective at freeing up human agents for more complicated tasks. Once you’ve determined which tasks are most feasible and desirable for IVA handling, weight the opportunities to decide which to tackle first. At the top of the list are the things that are easiest to achieve and deliver the highest value.
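
One simple way to weight the opportunities is to score each candidate task on volume and ease of automation, then work down the list from the top. The sketch below is purely illustrative; the tasks, figures, and weighting are made up for the example.

```python
# Score candidate tasks: higher call volume and higher ease of automation
# both push a task up the priority list.
candidates = [
    {"task": "order status lookup", "monthly_calls": 12000, "ease": 0.9},
    {"task": "schedule callback",   "monthly_calls": 8000,  "ease": 0.8},
    {"task": "billing dispute",     "monthly_calls": 3000,  "ease": 0.3},
]

def score(task: dict) -> float:
    return task["monthly_calls"] * task["ease"]

for task in sorted(candidates, key=score, reverse=True):
    print(f'{task["task"]}: {score(task):,.0f}')
```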

4. Select which conversational AI engines you want to use

Next, you’ll need to choose the underlying speech services that will power your IVAs. For example, vendors like IBM, Google, and Amazon provide hundreds of different voices with varying accents and tones, and an organization might choose the service with the voice they think their customers will like best. Or they may find one service to be more accurate than another when testing their application. You’ll want to work with an IVA provider that offers plenty of options and the flexibility to switch between the cloud-based speech services freely. Speech technology evolves quickly, and you’ll want to be able to take advantage of the latest advancements without being locked into a particular vendor’s services.
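
The sketch below illustrates the kind of flexibility this implies: if each speech service sits behind a common interface, swapping engines doesn't require rewriting your tasks. The provider classes here are placeholders, not real vendor SDK calls.

```python
from abc import ABC, abstractmethod

class SpeechEngine(ABC):
    """Common interface an IVA could use regardless of the cloud speech vendor."""
    @abstractmethod
    def transcribe(self, audio: bytes) -> str: ...
    @abstractmethod
    def synthesize(self, text: str, voice: str) -> bytes: ...

class EngineA(SpeechEngine):                # stand-in for one cloud vendor
    def transcribe(self, audio: bytes) -> str:
        return "caller utterance from engine A"
    def synthesize(self, text: str, voice: str) -> bytes:
        return b"audio-bytes-A"

class EngineB(SpeechEngine):                # stand-in for another cloud vendor
    def transcribe(self, audio: bytes) -> str:
        return "caller utterance from engine B"
    def synthesize(self, text: str, voice: str) -> bytes:
        return b"audio-bytes-B"

def handle_turn(engine: SpeechEngine, audio: bytes) -> bytes:
    utterance = engine.transcribe(audio)
    return engine.synthesize(f"You said: {utterance}", voice="en-US-neutral")

# Switching engines is a one-line change in the caller, not a rewrite:
audio_out = handle_turn(EngineB(), b"raw-audio")
```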

5. Understand how your IVA will be integrated with your UC or CCaaS platform

When executing a contact center task, an IVA must be able to integrate with an organization’s back-end systems to retrieve the information or application it needs to resolve a request. These systems can include customer relationship management (CRM) platforms, knowledge bases, calendars, and payment gateways, for example. But IVAs can also streamline operations in a cloud- or premises-based Unified Communications environment. For example, you can integrate and centralize legacy phone and customer service systems into a cloud-based solution that is easier to manage, creating a series of phone extensions that allows your IVA to transfer calls from one location to another in your organization.
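
Here is a hedged sketch of what that integration might look like at the task level: the IVA looks up the caller in a CRM and either completes the request itself or transfers the call to an internal extension. The lookup function, intents, and extension directory are hypothetical stand-ins for whatever CRM and telephony APIs your platform exposes.

```python
# Illustrative extension directory created in the UC/CCaaS environment.
EXTENSIONS = {"billing": "2001", "support": "2002", "sales": "2003"}

def lookup_account(phone_number: str) -> dict:
    # Placeholder for a real CRM query (e.g., keyed on caller ID).
    return {"phone": phone_number, "tier": "gold", "open_ticket": True}

def route_call(phone_number: str, intent: str) -> str:
    account = lookup_account(phone_number)
    if intent == "check_ticket" and account["open_ticket"]:
        return f"transfer:{EXTENSIONS['support']}"   # hand off to a human agent
    if intent == "make_payment":
        return "self_service:payment_flow"           # IVA handles it end to end
    return f"transfer:{EXTENSIONS['sales']}"

print(route_call("+15551234567", "check_ticket"))    # transfer:2002
```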

6. Build your tasks

As mentioned earlier, this step has become much easier thanks to no-code, drag-and-drop IVA development platforms. No-code development platforms provide a web-based design environment, where users with minimal technical expertise can assemble a speech application by dragging and dropping graphical building blocks that tell it what to do. Many platforms also provide pre-built IVA templates for common tasks such as credit card payments or appointment management, which further streamlines the development phase. In some cases, new IVA tasks can be designed and deployed in a matter of days.
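
Under the hood, those graphical building blocks typically amount to a declarative flow definition: an ordered set of prompts, expected inputs, and next steps. The sketch below shows roughly what an appointment-management flow might look like as data; the structure is illustrative, not the format of any particular IVA platform.

```python
# A simple appointment-management flow expressed as data (illustrative only).
appointment_flow = {
    "start": {"say": "Would you like to book, change, or cancel an appointment?",
              "listen_for": "intent", "next": "branch"},
    "branch": {"route": {"book": "collect_date",
                         "change": "collect_date",
                         "cancel": "confirm_cancel"}},
    "collect_date": {"say": "What day works best for you?",
                     "listen_for": "date", "next": "confirm"},
    "confirm": {"say": "You're booked. Anything else I can help with?", "next": "end"},
    "confirm_cancel": {"say": "Your appointment has been cancelled.", "next": "end"},
    "end": {"hangup": True},
}

print(appointment_flow["start"]["say"])   # the opening prompt the caller hears
```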

7. Train and tune your AI

An IVA determines how it will fulfil a customer’s intent by matching their utterances to keywords in the task flow. Then it executes the actions that have been assigned to those keywords. IVAs get smarter and more accurate at recognizing customer intents through a supervised learning approach. This approach involves using the data gathered from live caller utterances to train the IVA to match new keywords to intents so that it can understand the different ways users phrase the same intent.
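
Here is a minimal sketch of that matching-and-training loop: keywords map utterances to intents, and reviewed live utterances are fed back in so new phrasings of the same intent are recognized next time. The intents and phrases are invented for the example.

```python
# Keyword-to-intent mapping, extended over time through supervised review.
INTENT_KEYWORDS = {
    "check_balance": {"balance", "how much do i owe"},
    "make_payment": {"pay", "payment", "pay my bill"},
}

def detect_intent(utterance: str) -> str | None:
    text = utterance.lower()
    for intent, phrases in INTENT_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None   # unrecognized -- a candidate for supervised review

def train_from_review(utterance: str, confirmed_intent: str) -> None:
    # A human reviewer confirms what the caller meant; the phrasing is added
    # so the same wording is matched automatically in the future.
    INTENT_KEYWORDS.setdefault(confirmed_intent, set()).add(utterance.lower())

print(detect_intent("I'd like to pay my bill"))        # make_payment
print(detect_intent("What's left on my account?"))     # None (needs review)
train_from_review("what's left on my account", "check_balance")
print(detect_intent("What's left on my account?"))     # check_balance
```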

8. Review performance

IVAs can log information such as timestamped transcriptions of caller utterances, the detected intents, which prompts and/or transfer announcements have been played to the caller, and the final call destination. This data will help you understand how your application is performing and where you might be able to make improvements. Supervised learning is an ongoing, and currently manual, process because intents are constantly evolving. Fortunately, we’re starting to see IVA providers address this challenge with cost-effective solutions that minimize human supervision in training and help organizations assess the optimal number of intents that will maximize IVA performance. This will make it easier to automatically improve intent detection accuracy, and to identify new intents that will maintain the quality of an IVA over time.
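
As a simple illustration of this review step, the sketch below aggregates interaction logs of the kind described above to compute an intent recognition rate and a self-service containment rate. The records and field names are made up for the example.

```python
from collections import Counter

# Hypothetical interaction log records: timestamp, detected intent, final destination.
logs = [
    {"ts": "2024-01-02T10:01:05", "intent": "make_payment",  "destination": "self_service"},
    {"ts": "2024-01-02T10:03:44", "intent": None,            "destination": "agent_transfer"},
    {"ts": "2024-01-02T10:07:12", "intent": "check_balance", "destination": "self_service"},
    {"ts": "2024-01-02T10:09:30", "intent": "make_payment",  "destination": "agent_transfer"},
]

recognized = sum(1 for r in logs if r["intent"] is not None)
contained = sum(1 for r in logs if r["destination"] == "self_service")

print(f"intent recognition rate: {recognized / len(logs):.0%}")
print(f"containment rate:        {contained / len(logs):.0%}")
print("top intents:", Counter(r["intent"] for r in logs if r["intent"]))
```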

Dig deeper: To learn more about IVA development, and what you can create with the latest speech services and code-free design environments, click here.    

Callan Schebella is EVP, Product Management at Five9.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.