Google Home, Google’s eponymous smart speaker, supports multiple users. So do the devices in Amazon’s Echo lineup. And if a newly published patent is anything to go by, Apple’s HomePod might be next.
AppleInsider today spotted an Apple patent, “User profiling for voice input processing,” published by the U.S. Patent and Trademark Office. It describes how voice recognition could be used to identify a user by their unique speech pattern, or “voice print,” and how a smartphone or smart device could leverage that information to tailor its behavior.
For example, if you asked a phone with such a system to see your email (“Hey Siri, show me my email”), it would recognize your voice and, as a result, surface only messages from an inbox associated with your profile. Another user would have a different experience — they’d see their messages instead.
The patent goes a step further. The system would learn users’ unique intonations, pronunciations, style of language, device usage patterns, and preferences over time in order to respond to voice commands more quickly and accurately.
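The routing the patent describes can be pictured as a simple lookup: match an incoming utterance against each enrolled user’s stored voice print, then answer from that user’s data. The sketch below is purely illustrative; all names, the toy embedding vectors, and the cosine-similarity matching are assumptions for demonstration, not details from the patent.

```python
import math

# Hypothetical enrolled profiles: each has a toy "voice print" embedding
# and per-user data (here, an inbox). Real systems would use learned
# speaker embeddings, not hand-written vectors.
PROFILES = {
    "alice": {"voice_print": [0.9, 0.1, 0.2], "inbox": ["Invoice from ACME"]},
    "bob":   {"voice_print": [0.1, 0.8, 0.5], "inbox": ["Ski trip photos"]},
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify_speaker(embedding, profiles=PROFILES):
    """Return the name of the profile whose voice print is most similar."""
    return max(profiles,
               key=lambda name: cosine(embedding, profiles[name]["voice_print"]))

def show_my_email(embedding, profiles=PROFILES):
    """Surface only the inbox belonging to the recognized speaker."""
    user = identify_speaker(embedding, profiles)
    return user, profiles[user]["inbox"]

# An utterance whose embedding sits close to Alice's stored print
# is routed to Alice's inbox; Bob's stays private.
user, inbox = show_my_email([0.88, 0.15, 0.25])
```

The same pattern generalizes to the adaptive behavior the patent mentions: over time the stored profile could also accumulate pronunciation and usage statistics, so matching gets faster and the responses more personal.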
As The Verge points out, Siri (Apple’s digital assistant) can already distinguish an iPhone owner’s voice from other people’s, but it doesn’t currently support multiple users. That said, there’s evidence to suggest multiuser support is on the way: in an iOS 11.2.5 beta, developers spotted code strings referencing “custom responses” and support for multiple voices.
There’s no guarantee any of what’s described in the patent will come to pass, of course, but an adaptive, personalized approach to voice recognition might be the key to closing the so-called “accent gap.” Recent studies show that popular voice assistants are up to 30 percent less likely to understand non-American speakers than native-born users, and that the datasets used to train most voice assistants favor speakers from particular regions of the country.
Siri in its current state might not be perfect, but Apple has been working to improve it. It recently enhanced the assistant’s ability to recognize local points of interest (POIs), reducing errors for popular places by as much as 48.4 percent. And earlier this year, it brought Siri-powered calling capabilities to the HomePod in private beta and introduced Siri Shortcuts, a feature that gives iOS users the ability to create custom voice commands that can connect to any app.
Still, improving Siri’s performance remains a challenge for Apple in part because of the company’s adherence to an offline, on-device approach to machine learning. In July, reportedly in an effort to formulate a clearer development roadmap for the digital assistant, the Cupertino company consolidated its Core ML and Siri teams under a new artificial intelligence and machine learning division headed by John Giannandrea, a former Google executive who joined the company in April.