Ben Fox Rubin had a great post on CNET’s CES blog discussing whether voice assistants are equipped to provide health and emotional support. He starts by citing a JAMA study from early 2016 that found voice assistants mostly lacking when asked about basic physical and mental health situations. The study raises two interesting questions:
- What is the purpose of voice assistants?
- How will services break down between general purpose and specialist voice assistants?
What is the purpose of a voice assistant?
Voice assistants were designed to help execute tasks, remind users of important items, fetch information, and make entertainment more readily accessible. These all require integration with other services to fulfill user needs.
That is a core selling point for Mycroft, an open source voice assistant platform. It gives the software away, and one of its biggest values is its large catalog of third-party integrations. Most of these third parties also integrate with the Amazon and Google offerings because of those companies’ scale and user bases. If you want control over your own voice assistant, these integrations are critical.
So where are the integrations with applications that can analyze emotions and moods, or that can serve as your on-call doctor? These were simply not the core use cases that voice assistant designers tackled first.
Fetching information is fraught with dependencies
The “fetch information” task is particularly onerous for voice interfaces, even though it is a core use case. First, your language engine must properly understand intent. Then, it must locate and return information that provides the best answer.
You may think of search as mature, but how often does Google provide the right answer in the first search result? How often is the second, third, or a later entry a better fit? For health queries like those in the study, the assistant needs to present the best answer first. There is no skimming a page of results by ear; you get one and only one answer.
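The two failure points described above, intent recognition and single-answer retrieval, can be illustrated with a toy sketch. This is a minimal illustration of the general pattern, not any real assistant’s API; the intent names, trigger phrases, and answers here are all hypothetical:

```python
# Toy two-stage voice assistant pipeline: map an utterance to an intent,
# then return exactly one answer -- the user never hears the runners-up.

# Hypothetical intent catalog: a trigger phrase mapped to one canned answer.
INTENTS = {
    "weather": ("what's the weather", "It is 72 degrees and sunny."),
    "health":  ("my head hurts", "Headaches have many causes; see a doctor."),
}

def recognize_intent(utterance):
    """Stage 1: crude keyword matching stands in for a real NLU engine."""
    for intent, (trigger, _answer) in INTENTS.items():
        if trigger in utterance.lower():
            return intent
    return None

def answer(utterance):
    """Stage 2: return one and only one answer, as a voice interface must."""
    intent = recognize_intent(utterance)
    if intent is None:
        return "Sorry, I don't understand."  # intent recognition failed
    return INTENTS[intent][1]                # no second-best results offered
```

The point is structural: if the intent is misrecognized, or the single top-ranked answer is wrong, the user has nothing to fall back on, unlike scanning a page of search results.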
This is particularly challenging in health care, where the most common answer to direct questions is “It depends.” How old are you? What is your medical history? What risk factors are you exposed to? Which of the medical studies with seemingly opposite conclusions should be presented?
When Google Home announced WebMD availability earlier this month, The Verge had this headline: “Google Home is going to help you misdiagnose yourself.” The article included this prescient quip:
Clearly the only skill [for Google Home] we care about is WebMD because what could go wrong?
IBM’s Watson has focused its attention on medical diagnoses to assist physicians and not to replace them. It also has been working furiously to ingest and analyze large volumes of information so it can do justice to the complex nature of medicine. It even acquired Merge Healthcare for $1 billion just so it could better understand medical imaging. And the underlying AI has been in development and use for more than a decade.
The consumer-oriented solutions from Amazon, Google, Microsoft, and Apple were developed for more mundane tasks and information retrieval, where errors carry low stakes. It’s not surprising that these solutions performed inconsistently on higher-complexity, higher-stakes queries about physical and mental health.
Generalist versus specialist approaches
Then you have solutions like Sense.ly. It uses a voice assistant to help users record their medical information and access appropriate answers to health questions. Sense.ly is also an assistant to health care providers looking to stay better connected to their patients.
The solution doesn’t address home automation, cooking instructions, or simple math problems. The technology behind Sense.ly almost certainly could be extended to these everyday tasks, but that is not its purpose. It is designed to interact solely around health and wellness topics. It is a specialist voice assistant and is unlikely to be replaced by something like Alexa anytime soon.
Where we are headed
The fact that people want to ask voice assistants about health questions speaks to the power of the voice interface and the expert implementation of voice user experiences. Users just assume they can or should talk to their voice assistant as if it were a wise and understanding human. That leads them to confess their feelings and ask about personal health topics even to an inanimate device sitting in their kitchen. Sense.ly’s director of user experience, Cathy Pearl, commented in a Voicebot interview this month that the specialist solution goes even further in making a connection:
We have some patients that get really attached to the avatar [voice assistant]. There is a daily check-in and we have a good compliance rate because people feel accountable to the avatar. Some patients will apologize to the avatar if they miss a check-in. They will share information about their day. This is important to compliance.
Developers deliberately attempted to make their voice assistants more human-like. The result is that we treat them more like humans. Personal connections or at least personal conversations with voice assistants are the inevitable outcome. However, we don’t ask our doctor for cooking instructions or a chef for medical advice. Voice assistants have some super-human qualities, but we shouldn’t expect generalists to compete effectively with specialists for some time to come, particularly in health care.