Developers will soon be able to make bots that interact with Google Assistant and the new Google devices unveiled today, the company said in a special presentation in San Francisco.
“The Google Assistant will be our next thriving open ecosystem,” said Scott Huffman, lead engineer of Google Assistant.
The creation of bots for Google Assistant will be possible through Actions on Google, which is due out by early December. A software development kit (SDK) that brings Google Assistant to a range of devices not made by Google is due out next year.
Today in San Francisco, Google unveiled its Daydream VR headset, Pixel and Pixel XL smartphones, Google Wifi, Chromecast Ultra, and the voice-enabled Google Home, a competitor to Amazon Echo. More than 300,000 people watched the special presentation on YouTube.
Google Assistant will be able to communicate with Android smartwatches, the Allo chat app, Google Home, and devices that incorporate the Google Assistant in the future.
Google Assistant already allows people to search the web or make appointments. New bots will offer additional functionality, like the ability to hail an Uber or talk with smart home Internet of Things devices.
Google CEO Sundar Pichai said today that personal computers, the web, and mobile drove the biggest shifts in computing in decades past. Artificial intelligence will be at the center of the next big shift, he said, one that will play out over the course of the next 10 years. Google Assistant was made to act as your assistant across devices as the world moves deeper into the age of AI.
“Our goal is to build a personal Google for each and every user,” Pichai said.
Initial Actions on Google partners include OpenTable for restaurant reservations, WebMD for health advice, IFTTT for Internet of Things, and media partners like CNN and CNBC.
Actions on Google will carry out two kinds of requests to complete tasks: direct actions and conversational actions.
“Direct actions are great for things like home automation, media requests, and communication. When I say ‘Turn on the living room lights,’ the Philips Hue or SmartThings lights should just come right on,” Huffman said.
Conversational actions are for the kinds of tasks that require more explanation.
“When I say I need an Uber, my assistant will be able to bring Uber right into the conversation. Then Uber can say, ‘Where would you like to go?’ and I can respond with my destination. Uber might ask, ‘Would you like an UberX again?’ Maybe I’ll say ‘We need an UberXL this time.’ Once your ride is confirmed, Uber can say ‘Your driver is Betsy, and she’ll arrive in three minutes in a black Chevy Suburban.’”
Bots and virtual assistants already made with API.ai will be able to integrate into Google Assistant, but API.ai will not be the only way Google supports the creation of bots for the Google Assistant, Huffman said.
“Thousands of expert and novice developers have already built conversational interactions with API.ai, and these can become conversational actions with the Google Assistant. We’ll support other conversation building tools as well,” he said.
Last month, a day before the public launch of Google’s new chat app Allo, the search giant acquired API.ai. More than 60,000 developers have used the platform to train machine learning models and create bots and virtual assistants.