Last week, Google gave the world more information about Duplex, its experimental conversational AI that makes phone calls to schedule appointments or make restaurant reservations on your behalf. With initial trials expected to begin in the coming weeks, the company shared additional details about how it will navigate communication between a Google Assistant user and businesses.
We’ll soon find out just how effective Google Assistant is at speaking for users. But if Duplex succeeds, the logical next question is what happens when AI that carries out tasks for people goes mainstream and must communicate not just with people, but with other bots.
If Duplex is a harbinger of things to come, bot-to-bot communication will soon be critical, which is why the makers of assistants like Siri, Alexa, and Google Assistant need to work together.
AI assistants have been around for a while, and growing rates of adoption can be attributed in large part to their declining word error rate and increased ability to understand human language. But collaboration between the major players will be essential in allowing the next generation of assistants to develop helpful use cases like Duplex and enjoy widespread adoption.
The most significant example of cooperation between the makers of assistants we have so far is the Alexa-Cortana partnership announced in 2017. Amazon and Microsoft are planning to provide Cortana access on Echo devices and Alexa access on Windows 10 PCs because, they said, we will come to live in a multi-assistant world.
This feature, which we saw in action for the first time in May, essentially makes the most widely available assistant in homes (Alexa) work with the most widely available assistant on personal computers (Cortana). That goes further than the nonexistent interaction between other assistants today; Alexa can even tell jokes about Cortana. But the partnership only changes where you can access each assistant, and it does not appear to include the sort of intercommunication that would make living in a multi-assistant world a viable option.
One model for how this could work is the Open Neural Network Exchange (ONNX) format, made available last year to provide interoperability among Microsoft’s Cognitive Toolkit, Facebook’s Caffe2, and PyTorch. ONNX brings together an ecosystem of engines and frameworks for training and deploying AI.
If the makers of assistants also agreed upon a common framework for communication, it could encourage users to rely on these assistants for an increasing number of tasks. This would open up opportunities for the makers of Siri, Alexa, and the like, but it would also boost the entire conversational computing space. People could come to entrust more tasks to an uber assistant like Alexa or make room in their lives for specialized assistants designed to help them do their job more effectively or accomplish other goals.
That would be a very different world from the one we live in today, where people are still most likely to use AI assistants to play music, set a timer, or check the weather.
Thanks for reading,
AI Staff Writer
Siri Shortcuts, which enables developers to create Siri voice triggers for specific app features, is now available to select third-party developers for testing purposes.
Researchers at MIT CSAIL have developed a system called PixelPlayer that can isolate the sound of instruments from videos.
Artificial intelligence has without question been a menace to modern democratic society. Malicious bots notably interfered in the 2016 presidential election in the United States, and they meddled in Mexican elections held earlier this week. Perhaps even more alarming is a study published last month that found the majority of people in democratic societies around the […]
Baidu today unveiled a new chip for AI, joining the ranks of Google, Nvidia, Intel, and many other tech companies making processors especially for artificial intelligence. Kunlun is made to handle AI models for edge computing on devices and in the cloud via data centers. The Kunlun 818-300 model will be used for training AI, and […]
EXCLUSIVE: Smart speakers and stereos aren’t the only way to get a party started. AmpMe has an app that makes it possible to sync and play music through smartphones or Bluetooth speakers. Today AmpMe announced it will begin rolling out inaudible ultrasonic sounds to sync multiple devices, detect latency, and power its […]
Promising new research out of the University of Tokyo Institute of Industrial Science shows that the dispersion of radioactive material can be accurately predicted with the help of machine learning.
Computers are scoring long-form answers on anything from the fall of the Roman Empire to the pros and cons of government regulations. (via NPR)
An AI system has wiped the floor with some of China’s top doctors when it comes to diagnosing brain tumors and predicting hematoma expansion. (via The Next Web)
Tech companies are beginning to accept that the artificial intelligence they’re building their futures on could be flawed. (via Quartz)
Uber’s Elevate conference might place most of the focus on flying taxis, but the ride-hailing company’s CEO had some things to say about its currently suspended autonomous-vehicle tests, too. (via CNET)