Watch out, ghostwriters: Artificial intelligence (AI) is coming for you. In a paper accepted at the NeurIPS 2018 conference in Montreal (“Content preserving text generation with attribute controls”), data scientists from the University of Michigan and Google Brain describe a machine learning architecture that’s capable not only of generating sentences from a given sample, but also of changing the mood, complexity, tense, or even voice of the original text while preserving its meaning.
This might one day be used for paraphrasing, the team posits, or machine translation and conversational systems. And it could complement systems like those demonstrated by Microsoft Research in November, which leverage sophisticated natural language processing techniques to reason about relationships in weakly structured text.
“In this work, we address the problem of modifying textual attributes of sentences,” the researchers wrote. “To our knowledge, we demonstrate the first instance of learning to modify multiple textual attributes of a given sentence without parallel data.”
The team first tackled the problem of sentiment control. They sourced a restaurant reviews dataset — a filtered version of the Yelp reviews dataset — and a large collection of IMDB movie reviews, totaling 447,000 and 300,000 sentences respectively, which they used to train the system.
After training, using a test dataset of 128,000 restaurant reviews and 36,000 movie reviews, the researchers attempted to generate text snippets with positive sentiment from sentences with negative sentiment, and vice versa.
Evaluated on BLEU — short for “bilingual evaluation understudy,” a standard metric for evaluating machine-translated text — the AI system was able to outperform two leading text generation methods. Moreover, it consistently generated grammatically correct sentences related to the input sentence — to such a degree that study participants on Amazon’s Mechanical Turk judged its output to be more realistic than that of previous methods.
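BLEU scores a generated sentence by its n-gram overlap with a reference sentence, discounted by a brevity penalty for outputs that are too short. As a rough illustration of the idea — a minimal, unsmoothed, single-reference sketch, not the paper’s actual evaluation pipeline — it can be computed like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with uniform weights over 1- to 4-grams.
    Simplified: one reference, no smoothing (any zero n-gram
    precision sends the score to 0, as in the original metric)."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        ref_ngrams = ngrams(ref, n)
        # Clip each hypothesis n-gram count by its count in the reference.
        clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        if clipped == 0:
            return 0.0
        log_prec_sum += math.log(clipped / total) / max_n
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return bp * math.exp(log_prec_sum)
```

In practice, style-transfer papers typically report BLEU between the generated sentence and the input (or a human rewrite) using a library implementation with smoothing, since short sentences often have zero higher-order n-gram matches.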
The generated sentences are surprisingly coherent. In one example, “The people behind the counter were not friendly whatsoever” became “the people at the counter were very friendly and helpful.” In another, the model flipped “that’s another interesting aspect about the film” to “there’s no redeeming qualities about the film.”
More impressive still, in another test the researchers used the system to control multiple attributes of a sentence — including mood, tense, voice, and sentiment — simultaneously. After training on a dataset of 2 million text snippets from the Toronto BookCorpus dataset, the model was able to translate sentences from indicative mood in the future tense (“John will not survive in the camp”) to subjunctive mood in the conditional tense (“John couldn’t live in the camp”).
“We demonstrate that our model effectively reflects the conditioning information through various experiments and metrics,” the researchers wrote. “While previous work has been centered around controlling a single attribute and transferring between two styles, the proposed model easily extends to the multiple attribute scenario. It would be interesting future work to consider attributes with continuous values in this framework and a much larger set of semantic and syntactic attributes.”