Meta (formerly Facebook) this week announced the release of Bean Machine, a probabilistic programming system that ostensibly makes it easier to represent and learn about uncertainties in AI models. Available in early beta, Bean Machine can be used to discover unobserved properties of a model via automatic, “uncertainty-aware” learning algorithms.
“[Bean Machine is] inspired from a physical device for visualizing probability distributions, a pre-computing example of a probabilistic system,” the Meta researchers behind Bean Machine explained in a blog post. “We on the Bean Machine development team believe that the usability of a system forms the bedrock for its success, and we’ve taken care to center Bean Machine’s design around a declarative philosophy within the PyTorch ecosystem.”
It’s commonly understood that deep learning models are overconfident — even when they make mistakes. Epistemic uncertainty describes what a model doesn’t know because the training data wasn’t appropriate, while aleatoric uncertainty is the uncertainty arising from the natural randomness of observations. Given enough training samples, epistemic uncertainty will decrease, but aleatoric uncertainty can’t be reduced even when more data is provided.
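The distinction can be sketched in a few lines of plain Python (a toy illustration, not Bean Machine code): when estimating a coin's bias, the spread of the posterior over the bias (the epistemic part) narrows as flips accumulate, while each individual flip stays exactly as random as before.

```python
import random

def posterior_std(heads: int, flips: int) -> float:
    """Std. dev. of a Beta(1 + heads, 1 + tails) posterior over a coin's bias.

    This is epistemic uncertainty: it shrinks as more flips are observed.
    """
    a, b = 1 + heads, 1 + (flips - heads)
    mean_denom = a + b
    var = (a * b) / (mean_denom ** 2 * (mean_denom + 1))
    return var ** 0.5

random.seed(0)
true_bias = 0.7
for flips in (10, 100, 10_000):
    heads = sum(random.random() < true_bias for _ in range(flips))
    # Epistemic uncertainty about the bias decreases with data; the
    # aleatoric uncertainty of a single flip, sqrt(p * (1 - p)), does not.
    print(flips, round(posterior_std(heads, flips), 4))
```

With 10 flips the posterior standard deviation sits around 0.13; with 10,000 it drops below 0.005, yet any single flip remains a coin toss.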
Probabilistic modeling — the AI technique that Bean Machine adopts — can measure these kinds of uncertainty by taking into account the impact of random events in predicting the occurrence of future outcomes. Compared with other machine learning approaches, probabilistic modeling offers benefits like uncertainty estimation, expressivity, and interpretability. Analysts who leverage it can understand not only an AI system’s prediction, but also the relative likelihood of other possible predictions. Probabilistic modeling also makes it simpler to match the structure of a model to the structure of a problem. And with it, users can interpret why particular predictions were made — which might aid in the model development process.
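As a toy illustration of what a full posterior buys you (plain standard-library Python, not Bean Machine's API): grid-approximation Bayesian inference over a conversion rate assigns a probability to every candidate value, so the relative likelihood of alternative predictions can be read off directly.

```python
# Grid-approximation Bayesian inference: 12 successes in 40 trials.
grid = [i / 100 for i in range(1, 100)]   # candidate conversion rates
successes, trials = 12, 40

def likelihood(p: float) -> float:
    # Binomial likelihood, up to a constant factor
    return p ** successes * (1 - p) ** (trials - successes)

weights = [likelihood(p) for p in grid]   # flat prior over the grid
total = sum(weights)
posterior = [w / total for w in weights]

# Not just a point estimate, but the weight behind every alternative:
best = grid[max(range(len(grid)), key=posterior.__getitem__)]
prob_above_half = sum(q for p, q in zip(grid, posterior) if p > 0.5)
print(best)              # most probable rate, 0.30 here
print(prob_above_half)   # how much posterior mass says the rate exceeds 0.5
```

The same posterior answers follow-up queries ("how likely is the rate above 50%?") without refitting anything, which is the interpretability benefit described above.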
Bean Machine — built on top of Meta’s PyTorch machine learning framework and Bean Machine Graph (BMG), a custom C++ backend — lets data scientists write out the math for a model directly in Python and have BMG do the work of probabilistic inference, deriving the possible distributions for predictions from the model’s declaration.
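Meta’s post doesn’t include code, but the declarative idea can be sketched with nothing beyond the standard library: write down the generative math, hand it to a generic sampler, and read the posterior off the samples. Everything below (the model, the Metropolis-Hastings sampler, all names) is an illustrative stand-in, not Bean Machine’s actual API, which builds on PyTorch distributions and delegates inference to BMG.

```python
import math
import random

# Declarative-style toy model: the data scientist writes only the
# generative story (prior + likelihood) as a log-density; a generic
# sampler does the inference.
data = [4.1, 3.8, 4.4, 4.0, 3.9]

def log_joint(mu: float) -> float:
    # Prior: mu ~ Normal(0, 10); likelihood: each x ~ Normal(mu, 1)
    lp = -0.5 * (mu / 10.0) ** 2
    lp += sum(-0.5 * (x - mu) ** 2 for x in data)
    return lp

def metropolis(log_p, steps=20_000, step_size=0.5, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler."""
    rng = random.Random(seed)
    cur, samples = 0.0, []
    for _ in range(steps):
        prop = cur + rng.gauss(0, step_size)
        delta = log_p(prop) - log_p(cur)
        if rng.random() < math.exp(min(0.0, delta)):
            cur = prop
        samples.append(cur)
    return samples[steps // 2:]   # discard the first half as burn-in

samples = metropolis(log_joint)
posterior_mean = sum(samples) / len(samples)
print(posterior_mean)   # close to the data mean of 4.04
```

The point of the declarative style is that `log_joint` mirrors the problem's generative structure one-to-one, while the sampler stays generic; Bean Machine's design makes the same separation, with BMG supplying far more capable inference.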
Uncertainty as measured by Bean Machine can help to spotlight a model’s limits and potential failure points. For example, uncertainty can reveal the margin of error for a house price prediction model or the confidence of a model designed to predict whether a new app feature will perform better than an old feature.
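A margin of error like the one described above is typically reported as a credible interval computed from posterior samples. A minimal standard-library sketch (the price draws here are synthetic, for illustration only):

```python
import random

def credible_interval(samples, mass=0.9):
    """Central credible interval covering `mass` of the posterior samples."""
    s = sorted(samples)
    n = len(s)
    lo = s[int((1 - mass) / 2 * n)]
    hi = s[int((1 + mass) / 2 * n) - 1]
    return lo, hi

# Hypothetical posterior draws for a predicted house price, in $1,000s
rng = random.Random(0)
prices = [rng.gauss(300, 20) for _ in range(5000)]
low, high = credible_interval(prices)
print(round(low), round(high))   # the 90% interval's bounds
```

Reporting "between $267k and $333k with 90% probability" rather than a bare point estimate is exactly the kind of failure-point signal the paragraph above describes.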
Further illustrating the importance of the concept of uncertainty, a recent Harvard study found that showing uncertainty metrics to both people with a background in machine learning and non-experts had an equalizing effect on their resilience to AI predictions. While fostering trust in AI may never be as simple as providing metrics, awareness of the pitfalls could go some way toward protecting people from machine learning’s limitations.
Bean Machine quantifies predictions “with reliable measures of uncertainty in the form of probability distributions … It’s easy to encode a rich model directly in source code, [and because] the model matches the domain, one can query intermediate learned properties within the model,” Meta continued. “This, we hope, makes using Bean Machine simple and intuitive — whether that’s authoring a model, or advanced tinkering with its learning strategies.”
Bean Machine has been available on GitHub since early December.