Facebook AI Research, together with Google’s DeepMind, the University of Washington, and New York University, today introduced SuperGLUE, a series of benchmark tasks to measure the performance of modern, high-performance language-understanding AI.
SuperGLUE is built on the premise that deep learning models for conversational AI have “hit a ceiling” and need greater challenges. It uses Google’s BERT as a model performance baseline. Considered state of the art in many regards in 2018, BERT has since been surpassed by a number of models this year, such as Microsoft’s MT-DNN, Google’s XLNet, and Facebook’s RoBERTa, all of which are based in part on BERT and achieve performance above the human baseline average.
SuperGLUE follows the General Language Understanding Evaluation (GLUE) benchmark, introduced in April 2018 by researchers from NYU, the University of Washington, and DeepMind. SuperGLUE is designed to pose more difficult tasks than GLUE, and to encourage the building of models capable of grasping more complex or nuanced language.
GLUE assigns a model a single numerical score based on its performance on nine English sentence-understanding tasks, such as the Stanford Sentiment Treebank (SST-2), which requires deriving sentiment from a data set of online movie reviews. RoBERTa currently leads GLUE’s leaderboard, with state-of-the-art performance on four of the nine GLUE tasks.
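The single leaderboard number is, at its core, a macro-average of per-task metrics. A minimal sketch of that aggregation, with task names taken from the GLUE suite but hypothetical scores (the real benchmark also averages multiple metrics within some tasks first):

```python
# Sketch of a GLUE-style aggregate score: an unweighted average of
# per-task metrics, each on a 0-100 scale. Scores below are invented
# for illustration, not actual leaderboard results.

def glue_style_score(task_scores: dict) -> float:
    """Macro-average per-task metrics into one benchmark score."""
    if not task_scores:
        raise ValueError("no task scores provided")
    return sum(task_scores.values()) / len(task_scores)

hypothetical = {
    "CoLA": 60.0,   # Matthews correlation
    "SST-2": 95.0,  # accuracy (sentiment on movie reviews)
    "MRPC": 89.0,
    "STS-B": 88.0,
    "QQP": 72.0,
    "MNLI": 86.0,
    "QNLI": 92.0,
    "RTE": 80.0,
    "WNLI": 65.0,
}

print(round(glue_style_score(hypothetical), 1))  # → 80.8
```

Because the average is unweighted, a model can climb the leaderboard by improving on whichever tasks it is weakest at, which is part of why the hardest GLUE tasks became the template for SuperGLUE.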
“SuperGLUE comprises new ways to test creative approaches on a range of difficult NLP tasks focused on innovations in a number of core areas of machine learning, including sample-efficient, transfer, multitask, and self-supervised learning. To challenge researchers, we selected tasks that have varied formats, have more nuanced questions, have yet to be solved using state-of-the-art methods, and are easily solvable by people,” Facebook AI researchers said in a blog post today.
The new benchmark includes eight tasks that test a system’s ability to follow reasoning, recognize cause and effect, and answer yes-or-no questions after reading a short passage. SuperGLUE also incorporates Winogender, a diagnostic data set for detecting gender bias. A SuperGLUE leaderboard will be posted online at super.gluebenchmark.com. Details about SuperGLUE can be found in a paper published on arXiv in May and revised in July.
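One of the eight tasks, BoolQ, gives a model a short passage and a yes/no question about it. A minimal sketch of that example format, plus the trivial majority-class baseline that real models must beat (the field names follow the task’s JSON format, but the passages and questions here are invented):

```python
from dataclasses import dataclass

@dataclass
class BoolQExample:
    # Passage/question/answer mirrors the yes/no reading-comprehension
    # format; these examples are made up for illustration.
    passage: str
    question: str
    answer: bool  # gold yes/no label

def majority_baseline(examples):
    """Predict the most common label for every example -- the floor
    a real model must beat on a yes/no task."""
    yes = sum(ex.answer for ex in examples)
    majority = yes >= len(examples) - yes
    return [majority] * len(examples)

examples = [
    BoolQExample("Jellyfish lack a centralized brain but have a nerve net.",
                 "do jellyfish have a brain", False),
    BoolQExample("The nerve net coordinates swimming and feeding.",
                 "can jellyfish swim", True),
    BoolQExample("Some jellyfish have light-sensing organs called ocelli.",
                 "can some jellyfish sense light", True),
]

preds = majority_baseline(examples)
accuracy = sum(p == ex.answer for p, ex in zip(preds, examples)) / len(examples)
print(preds, round(accuracy, 2))  # → [True, True, True] 0.67
```

The point of including such tasks in SuperGLUE is precisely that simple baselines like this one score well below humans, leaving headroom for genuine modeling progress.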
“Current question answering systems are focused on trivia-type questions, such as whether jellyfish have a brain. This new challenge goes further by requiring machines to elaborate with in-depth answers to open-ended questions, such as ‘How do jellyfish function without a brain?'” the post reads.
To help researchers create robust language-understanding AI, NYU also released an updated version of Jiant today, a general-purpose text-understanding toolkit. Built on PyTorch, Jiant comes configured to work with HuggingFace PyTorch implementations of BERT and OpenAI’s GPT, as well as the GLUE and SuperGLUE benchmarks. Jiant is maintained by the NYU Machine Learning for Language Lab.
In other recent NLP news, on Tuesday Nvidia shared that its GPUs achieved the fastest training and inference times for BERT, and trained the largest Transformer-based NLP model ever, made up of 8.3 billion parameters.