Google today introduced its own personal digital assistant called, aptly, the Google Assistant. It’s based on existing technology, including Google Now.
The assistant will power Google’s upcoming Amazon Echo competitor, Google Home.
Speaking at the beginning of the keynote at the Google I/O developer conference in Mountain View, California, Google chief executive Sundar Pichai described the technology as “a conversational assistant” that can carry on “an ongoing two-way dialogue.”
“We want to be there for our users asking them, ‘How can I help?’” Pichai said.
The technology is voice-activated and available across devices, but the interface is packaged inside a chatbot — something like, say, Facebook Messenger’s M.
“We are getting ready to launch something later this year,” Pichai said.
The Google Assistant is built into the Allo mobile messaging app, which is also being introduced today.
Over the years Google has amassed a whole lot of AI talent and technology. As a result, more and more of Google’s software relies to some degree on AI. And the systems are regularly improving — last year at I/O, Pichai said Google’s speech recognition technology was down to an 8 percent word error rate.
Indeed, today Pichai said that 1 in every 5 Google search queries through the Google app on Android in the U.S. is a voice query.
But while Google’s AI technology is valuable to the company internally, in the past year Google has also shared some of it with the rest of the world as open source code. Earlier this year Google’s AI got its 15 minutes of fame when AlphaGo, the AI-powered Go player from Google’s DeepMind research division, beat top-ranked Go player Lee Sedol.
The bot form factor is telling. Back in December the Wall Street Journal reported that Google was at work on a mobile messaging service in which users would be able to interact with multiple AI-powered chatbots.
Since then, both Facebook and Microsoft have launched toolkits for building bots. People on both iOS and Android can interact with bots through multiple messaging apps — including Facebook Messenger, with its M personal assistant that draws on both AI and human trainers — but until this point the companies that actually rule the biggest mobile operating systems, Apple and Google, had not made their moves. Now it’s clear that Google is a few steps ahead of Apple.
Earlier this week The Information reported that Google would give third-party developers access to speech recognition, some aspects of the Google Now personal digital assistant, and Android’s Google Now On Tap feature for recognizing known entities mentioned in text onscreen and providing explanations and relevant links.
While the world’s biggest technology companies have been fixated on chatbots lately, the idea itself is not new. (If SmarterChild doesn’t ring a bell, you need to read up.) It’s just a question of interface. Messaging is arguably the most popular type of application on mobile devices, so it makes sense for a wide variety of services to communicate with people through this medium. Voice-activated assistants like the Alexa-powered Amazon Echo speaker are another emerging medium, but for now bots seem more likely to be adopted by the masses.