Facebook AI Research today announced the open source release of PyText, an NLP modeling framework that currently produces more than a billion predictions a day for users of Facebook and its family of apps.
PyText is the technology behind voice commands in Facebook’s new Portal smart displays as well as Facebook Messenger’s intelligent assistant M.
The M Suggestions feature listens to words used in conversations on the chat app to suggest actions like booking an Uber ride, wishing a friend happy birthday, or recommending a Spotify song or Food Network recipe.
Also announced today: a number of new features for Portal smart displays, including custom control of Smart Camera, which uses AI to frame video calls, as well as the ability to simply say “Call Mom” instead of a contact’s full name, a capability powered by PyText.
“We are planning to use PyText as our main NLP platform going forward,” Facebook AI Research engineers Ahmed Aly Hegazy and Christopher Dewan said in a blog post today. “AI researchers and engineers can now use PyText to more quickly and easily experiment with and deploy systems to perform document classification, sequence tagging, semantic parsing, multitask modeling, and other tasks.”
Use of PyText has improved the accuracy of conversational AI models used in core Facebook applications by up to 10 percent. When used for distributed training across multiple servers and GPU clusters, PyText has cut required training time by a factor of 3 to 5.
The PyText framework for conversational AI is built on PyTorch 1.0 and exports models through ONNX to Caffe2 for inference at scale. To help users get started, PyText ships with a library of prebuilt models as well as tutorials.
PyText is preceded by DeepText, another conversational AI service from Facebook that scours the words you use. Unlike DeepText, however, PyText supports dynamic computation graphs and, according to a paper published in October on the subject, relies on a modular component architecture to simplify workflows and support quick experimentation.