The announcement of Apple’s HomePod smart speaker at last week’s WWDC event marks the latest entrant to a growing market for voice assistants, currently led by tech giants Amazon and Google.
Each new launch brings the promise of a slicker user experience and a more efficient use of our time. But it’s also driving the formation of a new kind of relationship between the user, the tech, and the company or brand behind it.
It might take several iterations of these voice assistants to integrate seamlessly into our daily routines, but the first stage for any company looking to take advantage of this new tech will be to ask: How do users feel about it all?
We recently partnered with Mindshare Futures and J. Walter Thompson Innovation Group on their Speak Easy research project to answer that very question. Our portion of the study involved observing 102 smartphone users as they carried out a selection of tasks using Amazon’s Alexa, Google Assistant, text-based search, and questions directed to a real person. While the users performed these tasks, we monitored their neurological responses to the experience using steady-state topography.
The findings combine user brain data with survey and interview responses to get a broad picture of our developing relationships with voice technology.
Lightening the load
We’ve all heard about Siri, Alexa, Google Home, and Cortana, but many of us don’t naturally warm up to voice assistants. In fact, our neuro-research shows a significant lack of emotional response to interactions with an assistant compared to face-to-face human interaction. However, it should be noted that even over the relatively short duration of the study (about 30 minutes), these responses improved as users became more at ease with the technology.
Getting used to voice assistants should come quite naturally to us. Our research found that the likes of Alexa demand far less of users than text-based interactions. This lighter cognitive load is probably because speaking comes more naturally to us than text-based interaction does. That, of course, makes the whole process quicker, too — we speak at a rate of about 150 words a minute, three times faster than most typing.
That’s encouraging news for companies investing in new voice applications to give users a more convenient experience. As for developing a more emotional connection between users and this new technology — that’s a fine line to walk.
Voice assistants can evoke a surprising range of responses from users that we might normally expect to be reserved for humans.
Mindshare and JWT’s study found that over one third (37 percent) of participants said they loved their assistant so much they wished it were a real person. Perhaps the most alarming finding was that one quarter of users said they fantasized about their voice assistant.
While these claims may sound extreme, we can understand how these feelings arise.
Have you ever spotted a surprising arrangement of wall fittings, an electrical plug, or a bathroom sink that clearly resembles a face? This is a side effect of our tendency to understand our surroundings in easily relatable terms — and what’s more relatable to humans than a human face?
Likewise, we may find ourselves attributing thoughts and feelings to objects or animals.
So as assistants become more humanlike in their services and responses to us, we will in turn attribute a greater degree of human personality to them. This can produce the kinds of strong feelings we’d normally reserve for a fellow person. Love, anger, and frustration can all be directed toward Alexa or Siri — and by extension the brands behind them — simply because of the way we relate to the world around us.
So there is a dilemma for voice tech developers: Do you pursue humanlike personality attributes and navigate the emotional side effects that may result, or keep your assistant firmly in robot territory?
The best strategy seems to be to pursue more humanlike connections, but only to a point.
A more natural, human experience encourages more natural interactions that are easier for us to manage. As a result, we’ll use those assistants more. However, there is a point at which the human-technology spectrum falls into “uncanny valley” territory.
This is the phenomenon of encountering a robot, or in this case a computerized persona, that falls between the “human” and “robotic” categories — almost human, but with something not quite right. Things that fall into the uncanny valley tend to trigger strong feelings of unease and even revulsion that turn us away from interacting with the offending object in future.
Research and deliver
The uncanny valley can make developing voice assistant tools seem like a precarious balancing act, but thorough user testing in these formative stages will help highlight the aspects that might turn a potential user off the technology.
Beyond that, the most important consideration for future progress in voice interfaces is simple utility. Eighty-seven percent of Mindshare and JWT’s respondents agreed that “when technology works properly, it really simplifies my life.”
This is the promise that years of sci-fi have set up for voice developers: efficiency and simplicity, to the point of preemptive service. Once the emotional nuances are ironed out, the biggest barrier to adoption for Apple’s, Google’s, and Amazon’s tech will be if their assistants don’t deliver on that promise.
Heather Andrew is the UK CEO at Neuro-Insight, a neuroscience-related market research company.