Icecaps, an open source toolkit for neural conversational networks from Microsoft Research, made its debut today. Icecaps is an acronym that stands for Intelligent Conversation Engine: Code and Pre-trained Systems, and it uses multitask learning to do things like endow conversational AI systems with multiple distinct personas.

A combination of personality embeddings and word embeddings is key to Icecaps’ ability to give systems personalized personas. Using such an approach, AI assistants could speak in different ways depending on the person they’re talking to or to match specific scenarios.
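The general idea behind combining persona and word embeddings can be sketched as follows. This is a minimal illustration, not Icecaps’ actual API: the table names, dimensions, and the choice of combining embeddings by addition are all assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical dimensions -- purely illustrative, not from Icecaps.
vocab_size, embed_dim, num_personas = 1000, 64, 4

rng = np.random.default_rng(0)
word_table = rng.normal(size=(vocab_size, embed_dim))       # word embeddings
persona_table = rng.normal(size=(num_personas, embed_dim))  # persona embeddings

def embed_input(token_ids, persona_id):
    """Combine each word embedding with a learned persona embedding
    (here by simple addition), so the same utterance produces a
    different representation for each persona."""
    return word_table[token_ids] + persona_table[persona_id]

tokens = np.array([5, 17, 42])
rep_a = embed_input(tokens, persona_id=0)
rep_b = embed_input(tokens, persona_id=1)
# Same tokens, different persona -> different representations.
assert rep_a.shape == (3, embed_dim)
assert not np.allclose(rep_a, rep_b)
```

In a real system the persona embeddings would be trained jointly with the model, letting one decoder produce responses in several voices.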

“Several of these tools were driven by recent work done here at Microsoft Research, including personalization embeddings, maximum mutual information-based decoding, knowledge grounding, and an approach for enforcing more structure on shared feature representations to encourage more diverse and relevant responses,” Microsoft researcher Vighnesh Leonardo Shiv said today in a blog post.

Pre-trained models for developers to build upon or use immediately will be released in the coming months, starting with stochastic answer networks and personalized transformers.

“We had hoped to include these systems with Icecaps at launch. However, given that these systems may produce toxic responses in some contexts, we have decided to explore improved content-filtering techniques before releasing these models to the public,” the Icecaps GitHub page reads.

The Icecaps library uses the TensorFlow machine learning framework and SpaceFusion, a way to inject regularization into multitask learning environments and improve efficiency.
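The regularization idea behind SpaceFusion can be sketched in toy form: pairs of latent vectors from two tasks that share a decoder are pulled together, and points interpolated between them are also decoded during training, encouraging a smooth shared latent space. The function and variable names below are illustrative assumptions, not SpaceFusion’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy latent vectors from two tasks sharing one decoder
# (hypothetical example, not SpaceFusion's real code).
z_task_a = rng.normal(size=(8, 16))
z_task_b = rng.normal(size=(8, 16))

def fusion_regularizer(z_a, z_b, num_interp=5):
    """Penalize the distance between paired latents and sample points
    along the line between them; in training, the interpolated latents
    would be fed to the shared decoder as an additional loss term."""
    distance = np.mean(np.sum((z_a - z_b) ** 2, axis=1))
    alphas = np.linspace(0.0, 1.0, num_interp)
    interpolations = [(1 - a) * z_a + a * z_b for a in alphas]
    return distance, interpolations

reg, interps = fusion_regularizer(z_task_a, z_task_b)
assert reg > 0 and len(interps) == 5
```

The distance term aligns the two tasks’ latent spaces, while decoding the interpolations regularizes the region between them rather than only the endpoints.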

A cohort of a dozen Microsoft Research members published a paper today at ACL with more details about the making of Icecaps.

In May, Microsoft also used multitask learning to create MT-DNN, an NLP model derived from Google’s BERT that is currently ranked third on the GLUE language understanding benchmark leaderboard, behind Facebook’s RoBERTa and Google’s XLNet.