Issuing voice commands to your mobile device is going to get a lot easier thanks to improvements in underlying technology from chip maker Audience. You’ll be able to leave it on your desk and say, “Audience, play jazz music,” and the device will comply.
Mountain View, Calif.-based Audience makes smart audio chips modeled after the way the human ear functions. At the 2014 International CES, the huge tech trade show in Las Vegas this week, the company is announcing that its new eS700 series audio chips will enable new kinds of voice-command interaction with mobile devices. The technology shows there’s still a lot of room to take voice technology in mobile devices to higher levels of quality.
The Audience chips will also improve audio quality on your voice calls, making callers easier to hear and stripping out background noise while you talk. That should simplify your life a little, since the average mobile phone user reaches for a device up to 150 times a day.
With the intelligent voice command recognition, Audience intends to enable a new kind of smart device that is always in listening mode, using a small amount of battery power to listen for voice commands, much the way that Microsoft’s Xbox One video game console can be awakened with the voice command “Xbox on.” The Audience eS700 series chips can listen for particular commands, and they minimize false wake-ups.
Audience calls the new feature VoiceQ. It lets a device continuously listen to its surroundings and act on a configurable voice command without any touch interaction. The device wakes up on factory-programmed commands and then performs tasks you specify, such as “Audience, play music.” That eliminates the pause between waking a device and carrying out the instruction, even in noisy conditions.
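Audience hasn’t published how VoiceQ works internally, but the behavior described above — a low-power stage that watches only for a factory-set wake phrase, then hands the words that follow off as a command — can be sketched in a few lines. This is purely illustrative; the wake phrase, function names, and text-based “frames” below are assumptions, not Audience’s actual API.

```python
# Hypothetical sketch of an always-listening wake-word flow like the one
# VoiceQ describes. Audio frames are modeled as already-recognized words
# so the example runs without any audio hardware.

WAKE_PHRASE = "audience"  # factory-programmed trigger (assumed)

def listen(frames):
    """Scan a stream of word frames. Before the wake phrase is heard,
    every frame is ignored; afterward, the remaining words are collected
    as the user's command. Returns None if the device never wakes."""
    awake = False
    command = []
    for word in frames:
        w = word.strip().lower()
        if not awake:
            # Low-power stage: compare only against the wake phrase and
            # discard everything else, which minimizes false wake-ups.
            if w == WAKE_PHRASE:
                awake = True
        else:
            command.append(w)
    return " ".join(command) if awake else None

# "Audience, play jazz music" spoken near an idle device:
print(listen(["hello", "Audience", "play", "jazz", "music"]))
```

The two-stage split is the point: the cheap wake-phrase comparison can run continuously on minimal power, and the expensive command handling only runs after a confirmed wake-up.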
“We believe that dependable, always-on voice detection and actuation is the next must-have mobile technology,” said Peter Santos, the president and CEO of Audience, in a statement. “Our experience in delivering low-power, high-performance advanced voice makes us uniquely capable to deliver on the promise of a compelling always-on voice experience.”
Audience is on its fourth generation of advanced voice processing technology. It can suppress the sound of wind in the background of your call, and it provides support for up to three microphones at a time in one device. It can restore the quality of voices on calls in harsh environments. And the newest versions enable speakerphones to pick up voice signals from 360 degrees.
Developers can access these features via an application programming interface, which they can use to build new apps driven by voice commands. The company’s eS704 and eS702 voice processors and its eS754 and eS752 audio codec chips are available as samples today and are expected to appear in mobile devices in the second half of 2014.
Audience has been working since 2000 on technology that reproduces the way the human ear perceives sound, and it has embedded that knowledge in computer models built into its chips. Those chips can make garbled voice calls sound better, which matters more as mobile networks become clogged with voice and data. Audience chips are already used in 160 mobile devices, and the company has shipped 350 million voice processors to date. Among the devices using them are the Google Nexus 10 tablet, the Samsung Galaxy S4, and the Samsung Galaxy Note III.
Audience was founded by Lloyd Watts, a researcher who worked with legendary brain and computer chip expert Carver Mead. Watts worked on audio technology at Paul Allen’s think tank, Interval Research, but that shut down in 2000. Allen’s Vulcan Ventures invested in Audience, and Mead joined Audience’s board for a time. The company got its first real capital in 2004 and started selling chips. It went public in May 2012.