What do Google Assistant, Siri, Alexa, and Cortana have in common? They tell jokes of varying cleverness, most of which are the work of writing teams operating behind the scenes. They’re entertaining, but preliminary research suggests they also play a part in making interactions with assistants engaging.
Of course, there’s always room for improvement. In pursuit of assistants capable of tailoring jokes to individual users’ tastes, Amazon researchers investigated joke selection methods that tap either a basic natural language processing model or a machine learning model. They say that when tested against production data, both approaches “positively” impacted user satisfaction and potentially improved joke-telling.
Training the models required an extensively annotated data set, which the team compiled by recording a set of voice assistant users’ reactions to jokes. Two implicit feedback strategies were used to label the data: the first marked a joke “positive” (i.e., funny) if the user requested another joke within five minutes of hearing it, while the second marked a joke positive if the user requested another joke between 1 and 25 hours later.
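The two implicit-feedback strategies described above can be illustrated with a short sketch. The paper does not publish its labeling code, so the function and variable names below are hypothetical; the only assumption is that each strategy labels a joke positive when the same user requests another joke within a given time window afterward.

```python
from datetime import datetime, timedelta

def label_jokes(events, window_start, window_end):
    """Label joke events using implicit feedback.

    events: list of (user_id, timestamp) joke requests, sorted by timestamp.
    A joke is labeled "positive" if the same user requests another joke
    within [window_start, window_end] after it; otherwise "negative".
    Returns a dict mapping event index -> label.
    """
    labels = {}
    for i, (user, ts) in enumerate(events):
        labels[i] = "negative"
        for later_user, later_ts in events[i + 1:]:
            if later_user == user and window_start <= later_ts - ts <= window_end:
                labels[i] = "positive"
                break
    return labels

def strategy_short_window(events):
    # Strategy 1: a follow-up joke request within five minutes counts as positive.
    return label_jokes(events, timedelta(0), timedelta(minutes=5))

def strategy_long_window(events):
    # Strategy 2: a follow-up request between 1 and 25 hours later counts as positive.
    return label_jokes(events, timedelta(hours=1), timedelta(hours=25))
```

Note how the same joke can receive different labels under the two strategies: an immediate follow-up request is positive only under the short window, while a return visit the next day is positive only under the long one, which is what makes an empirical comparison of the two worthwhile.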
To compare the different labeling techniques, the team conducted an A/B test in a production setting, in addition to a comparison involving historical data and a selected labeling strategy. A joke data set containing thousands of unique jokes across categories (e.g., sci-fi and sports) and types (puns, limericks, and more) was used to validate each model, along with data from approximately 80,000 English-speaking “customers” in total (presumably Alexa users, though the researchers don’t say so explicitly).
The results show that the proposed natural language processing model consistently outperformed a rules-based method under both labeling strategies. The researchers note that the machine learning model also achieved strong accuracy, but that its architecture, far larger than that of the natural language processing model, would make it difficult to extend to new countries and languages.
The researchers leave for future work a comparison with additional methods and the development of an approach that could extend easily to new languages.
A superior sense of humor could bolster assistant usage among those who haven’t climbed aboard the bandwagon. An estimate from eMarketer in August pegged the number of monthly users of voice assistants at roughly 112 million, up from 102 million in 2018, but a separate survey and report from PricewaterhouseCoopers found that poor understanding of what AI assistants are capable of doing and a general lack of trust could hamper the segment’s growth.