Amazon’s Alexa voice assistant has become better at understanding what people say and, by extension, at completing the goals or tasks people are trying to accomplish when they interact with it through the Amazon Echo speaker or other smart devices.
Specifically, the error rate for goal completion has gone down by a factor of two since Alexa launched on the Echo in November 2014, said Rohit Prasad, vice president of Alexa Machine Learning and Speech at Amazon, at VentureBeat’s 2016 MobileBeat conference in San Francisco.
A single task might require several voice queries. If you want to play a song, Prasad said, the task is only complete once the song is playing, and that might mean correcting Alexa over multiple turns about the particular artist, album, or song.
“We measure quite closely how well we are doing,” Prasad said. Amazon has previously promised its customers that it would improve these services, and the company has followed through on that, Prasad said.
Alexa now has more than 1,500 skills available for users to enable, up from more than 1,400 as of the end of last month, Prasad said.
Amazon uses its own speech recognition system for Alexa, one that’s different from the DSSTNE deep learning framework that Amazon open-sourced earlier this year, Prasad told VentureBeat. Deep learning is a type of artificial intelligence that trains artificial neural networks on large amounts of data, such as speech recordings, and then makes inferences about new data. Apple and Google have embraced deep learning right alongside Amazon.
Yesterday’s Prime Day was not only the biggest sales day for Amazon overall — it was also the biggest sales day for the Echo so far, Prasad said. Notably, just a few days earlier, Amazon had made it possible for Alexa to order millions of products on Amazon. Amazon has not said how many Echo devices it has sold.