Have you ever been cut off mid-sentence while speaking with Siri or Google Assistant? Just like when you’re speaking to a human, sometimes when you’re in the middle of a conversation with an AI assistant, you get interrupted by another person talking, or by an important push notification on your phone, or you just forget what you wanted to ask and your phone goes to sleep.
This happens all the time.
When it happens and you’re using an app or web browser on a smartphone, you typically just pick up where you left off — but that’s not always possible with AI assistants.
This is where Siri falls short compared to major competitors like Google Assistant and Alexa.
Let’s say you want to use Siri to plan a night out with friends.
Apple Maps can show you nearby restaurants and businesses, and with a single question you can see Yelp reviews, OpenTable reservations, and other important info about the restaurant, all within a pretty good-looking card.
Search movies with Siri and you can buy a ticket using the Fandango app. Once the night is over, Siri can summon apps like Square Cash or PayPal if you need to send payments to friends.
Altogether this makes for a pretty cool experience. But whenever one of these interruptions takes place, whether you stop talking to Siri or your phone goes to sleep, all the web searches, translations, and questions answered during the course of a conversation disappear.
Actually, it’s not every time. After you say “Hey Siri” or hard-press the home button to wake the assistant, you can scroll up to see the last question you asked Siri, but that’s where the history stored on the device ends.
This limitation makes it nearly impossible to begin a conversation, then come back later to finish. You can’t scroll through your history to find something you already spoke to Siri about or look back at your research into the best nachos in San Francisco. Your only option is to get everything done within one to two interactions. After that, any calendar events or reminders you created will be recorded, but virtually everything else resets.
By contrast, to see previous interactions with Google Assistant on Android phones, all you have to do is swipe up to get a listing of your most recent conversations. You can also visit the My Activity section in the Google Home or Google Assistant app. It doesn’t even matter if that conversation started on a Google Home Mini, natively on an Android phone, or even via the Google Assistant app on an iPhone.
Similarly, all interactions with Echo smart speakers, Fire tablets, or apps with Alexa inside (e.g., Amazon Music) can be seen in the home tab of the Alexa app. Like Google, the app stores audio recordings of every exchange in which the smart speaker heard (or thought it heard) the Alexa wake word.
Microsoft makes control of privacy settings and recordings available in the Cortana Notebook. It’s not yet clear how the Harman Kardon Invoke smart speaker will interact with other Cortana-enabled devices like notebooks running Windows 10 or, as of yesterday, Skype.
As hands-free voice interaction with Siri spreads across more devices like cars, wireless earbuds, and smart speakers, this missing feature could become even more noticeable.
For an AI assistant to be your omnipresent helper, with access to everything from your calendar to your credit card number, you need a central place to view — and sometimes delete — your interactions. This will become important as more assistants gain the power to make purchases with voice alone. It’s a feature that also seems essential for carrying out more complex queries in the future, ones that don’t start and finish with a single question but are more like ongoing projects. For example, take the scenario recently suggested by Siri co-creator Adam Cheyer.
“Think of going to Google in 2007 for help planning my trip to my sister’s wedding. It’s not just about speech [recognition]. It’s hard … logging into sites and clicking and filling in forms,” Cheyer said while speaking on a panel last month in which he said voice will define the next decade of computing. “If I say ‘Help me plan a trip to my sister’s wedding next month’ — now, understanding the words, that’s good, but I have to figure out ‘How am I going to get there? Where am I going to stay? What should I wear? Who am I going to bring? What should I do while I’m there? Who do I meet?'”
Chances seem rather slim that you would get all that done in a single interaction.
There’s no doubt Siri is smarter than ever. With the release of iOS 11 last month, Siri gained a smattering of improvements: personalization that syncs across devices, a more expressive voice, more extensive search results, and the ability to interact with more personal finance and productivity apps.
With all the new machine learning-powered apps, augmented reality games, Face ID tech Apple says has a one in a million chance of being tricked, and other cool changes here or on the way soon, this lack of interaction history may be Siri’s biggest flaw. And it’s something Apple will have to address if it wants to give users a seamless experience across the 375 million devices with Siri inside — a group that by the end of the year will include the HomePod smart speaker, which has no visual interface.