The hype around voice AI has long been at odds with the messy reality of integrating it. It was only a few years ago that the tech industry was touting voice as the next major platform after mobile. Last year, comScore predicted that by 2020, voice would account for half of all searches.

As some reports show, voice adoption has lagged well behind those projections. At the Transform 2019 AI conference in San Francisco today, executives from Amazon, Microsoft, and IBM acknowledged their shortcomings while emphasizing strategies for the future.

“We’ve lost trust with most people when it comes to voice,” said Noelle LaCharite, principal program manager at Microsoft AI. “People don’t believe that’ll work. And if it doesn’t work the first time, your chances are gone.”

Part of the problem is that companies and developers didn’t adequately prepare audiences for what to expect, she said. Alexa, for example, only came to market in November 2014. And while Amazon touted the Echo and Echo Dot as its bestselling items in 2016, with Echo device sales nine times those of the prior year, developers didn’t emphasize how much context would influence how users interact with the technology.

Nor did companies and developers acknowledge the inevitable privacy concerns, or the pushback from consumers who simply don’t want their devices listening to them.

But privacy concerns are really a symptom of a broader transparency problem, according to LaCharite. Most people would trade their data for convenience if convenience is what they value, she said.

“But we have to be — as vendors, as the people who are championing these products — we have to be extremely transparent about that, what that means,” LaCharite said. “And I think we saw a little bit of backlash of that because we just didn’t have the chance to do that.”

If she had to do one thing over, she added, she would have explained some of the effects much more clearly so that people would have the comfort of knowing what to expect.

The reality that different contexts demand different kinds of voice interactions is another hurdle for companies, said Lisa Falkson, a senior VUI designer at Amazon. How people interact with voice assistants at home is vastly different from how they might interact with the technology elsewhere — say, in a car. A person is not going to ask for navigation cues in the kitchen.

That’s why these voice technologies must listen more to learn more. It’s also why Amazon is making a huge investment in AI, Falkson said. The goal is to make Alexa accessible everywhere — not just at home, and not just as a transactional means of communication.

The trick, though, is recognizing the powers and limits of the technology. Just as context is key to understanding what a voice assistant can do at a particular moment, it may also indicate when “voice may not be the best avenue,” said Mitchell Mason, senior offering manager for Watson Assistant at IBM. Depending on the situation and the kind of privacy a person wants to protect, voice may not be the ideal platform.

For leaders and businesses interested in spearheading voice technologies, the executives emphasized that starting small with narrow use cases is best — for building up the technology, and for developing audience trust and engagement.

“Whatever your highest value is, do that piece really well first, and then you can expand from there and learn,” said Falkson.