Google-affiliated researchers today released the Language Interpretability Tool (LIT), an open source, framework-agnostic platform and API for visualizing, understanding, and auditing natural language processing models. It focuses on questions about AI model behavior, such as why models make certain predictions and why they perform poorly on certain input corpora. LIT incorporates aggregate analysis into a browser-based interface designed to enable exploration of text generation behavior.

Advances in modeling have led to unprecedented performance on natural language processing tasks, but questions remain about models’ tendencies to rely on biases and heuristics. There’s no silver bullet for analysis; data scientists must often employ several techniques to build a comprehensive understanding of model behavior.

That’s where LIT comes in. The tool set is architected so that users can hop between visualizations and analyses to form hypotheses and validate them over a data set. New data points can be added on the fly, and their effect on the model visualized immediately, while side-by-side comparison allows two models or two data points to be visualized simultaneously. LIT also calculates and displays metrics to spotlight patterns in model performance, whether over the entire data set, the current selection, or manually and automatically generated subsets.

LIT supports a wide range of natural language processing tasks, including classification, language modeling, and structured prediction. It’s extensible and can be reconfigured for novel workflows, and its components are self-contained, portable, and simple to implement, its creators claim. LIT works with any model that can run from Python, the Google researchers say, including TensorFlow and PyTorch models as well as remote models running on a server. And it has a low barrier to entry, with only a small amount of code needed to add models and data.
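
To make that last claim concrete, here is a minimal sketch of what registering a toy model and data set with LIT can look like. It follows the patterns in LIT’s public examples, but the module paths, method names (e.g. predict_minibatch), and the keyword-based “classifier” are assumptions for illustration, and may differ between LIT versions.

```python
# Hypothetical sketch: wrapping a toy sentiment classifier and a tiny data set
# for LIT. Module paths and method names follow LIT's public examples but may
# vary by version; the "prediction" logic here is a stand-in for a real model.
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ["negative", "positive"]


class ToySentimentData(lit_dataset.Dataset):
    """A handful of labeled sentences for demonstration purposes."""

    def __init__(self):
        self._examples = [
            {"sentence": "A thoroughly enjoyable film.", "label": "positive"},
            {"sentence": "Flat characters and a dull plot.", "label": "negative"},
        ]

    def spec(self):
        # Declares the fields each example provides.
        return {
            "sentence": lit_types.TextSegment(),
            "label": lit_types.CategoryLabel(vocab=LABELS),
        }


class ToySentimentModel(lit_model.Model):
    """A stand-in model: scores by keyword so the example stays self-contained."""

    def input_spec(self):
        return {"sentence": lit_types.TextSegment()}

    def output_spec(self):
        return {"probas": lit_types.MulticlassPreds(vocab=LABELS, parent="label")}

    def predict_minibatch(self, inputs):
        for ex in inputs:
            score = 0.9 if "enjoyable" in ex["sentence"].lower() else 0.1
            yield {"probas": [1.0 - score, score]}


if __name__ == "__main__":
    # Launches the browser-based UI with the model and data set registered by name.
    server = dev_server.Server(
        models={"toy_sentiment": ToySentimentModel()},
        datasets={"toy_data": ToySentimentData()},
        **server_flags.get_flags(),
    )
    server.serve()
```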

To demonstrate LIT’s robustness, the researchers conducted a series of case studies in sentiment analysis, gender debiasing, and model debugging. They show how the tool set can expose bias in a coreference model trained on the open source OntoNotes data set, for example, revealing where certain occupations are associated with a high proportion of male workers. “In LIT’s metrics table, we can slice a selection by pronoun type and by the true referent,” the Google developers behind LIT wrote in a technical paper. “On the set of male-dominated occupations, we see the model performs well when the ground-truth agrees with the stereotype — e.g. when the answer is the occupation term, male pronouns are correctly resolved 83% of the time, compared to female pronouns only 37.5% of the time.”
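
For readers who want to reproduce that kind of slicing outside LIT, the underlying analysis amounts to grouping predictions by a categorical field, such as pronoun type, and computing a metric per group. The following is a tool-agnostic sketch with invented records and field names, not data from the paper.

```python
# Hypothetical, tool-agnostic sketch of the slicing described above: group
# coreference predictions by pronoun type and compute accuracy per slice.
# The records and field names below are invented for illustration.
from collections import defaultdict

predictions = [
    {"pronoun": "male", "true_referent": "occupation", "correct": True},
    {"pronoun": "female", "true_referent": "occupation", "correct": False},
    {"pronoun": "male", "true_referent": "participant", "correct": True},
    {"pronoun": "female", "true_referent": "occupation", "correct": True},
]


def accuracy_by_slice(records, key):
    """Returns per-slice accuracy for the given grouping key."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec[key]] += 1
        hits[rec[key]] += int(rec["correct"])
    return {slice_: hits[slice_] / totals[slice_] for slice_ in totals}


print(accuracy_by_slice(predictions, "pronoun"))
# e.g. {'male': 1.0, 'female': 0.5}
```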

The team cautions that LIT doesn’t scale well to large corpora and that it’s not “directly” useful for training-time model monitoring. But they say that in the near future, the tool set will gain features like counterfactual generation plugins, additional metrics and visualizations for sequence and structured output types, and a greater ability to customize the UI for different applications.
