Presented by Dataiku

Customers care whether AI results are explainable, and internally, white box AI is less risky as well. But what does that mean in practice? And how can businesses move away from black-box systems toward more explainable AI? Learn how white box AI brings value, and more, when you join this VB Live event, part of our special issue on Power in AI.

Register here for free.

“It’s pretty obvious that when you don’t understand how a decision or process is made, you run the risk of making a bad decision,” says Triveni Gandhi, data scientist at Dataiku. “You can’t just expect a model to be magical and be correct every time. You need to make sure that the decisions it’s making are aligned with your stated goals, both in an ethical way and as far as underlying business value.”

“Black box AI” refers to opaque machine learning algorithms that offer little to no insight into how they reach their conclusions. Users input data, the system generates an answer, and the path between the two is inscrutable. Researchers are now shining a light on the biases inherent in the data fed to AI systems, which are built by fallible humans. AI has produced stunning results, but it has also led to surprising failures — see the very public recent controversy around the Apple Card, for instance.

The impact of black box decisions is particularly stark in the financial and health care industries, Gandhi points out.

“In these heavily regulated fields like finance, health care, and insurance, the black box model can be very problematic,” she says. “There’s a lot of regulation in these industries, and not a lot of transparency around why decisions are made — why was a loan denied, or why was an insurance premium raised?” That can lead to trouble with regulators.

In the medical field, image detection might be able to pinpoint a probable tumor, but a doctor can’t simply call the patient in and deliver the news. They need to understand why the AI has determined that a spot on the image is cancerous, so they can communicate the diagnosis to the patient with confidence.

The whole idea of white box modeling falls under the larger umbrella of responsible AI. It’s not just a drive toward being more ethical; it’s about being able to show where decisions are being made and how they’re being made. Making a model explainable makes it more acceptable to anyone involved in or affected by the algorithm. It also mitigates regulatory issues and liability, and improves governance.

The drive toward white box, explainable AI isn’t just lip service, Gandhi says. A number of boot camps and master’s programs are introducing ethical, explainable AI as a track within their programs. The University of San Francisco, for example, has a center for data ethics embedded within its broader data science curriculum.

“It’s definitely becoming much more top of mind,” she says. “We’re seeing this shift that wasn’t there before. People are starting to talk about it, and more than just talk about it. They’re trying to find ways to actually implement it and use it, so that they’re making the most of it.”

For companies, it’s simply good business. At a governance level, white box AI makes it clear to the entire company why decisions are being made, and it helps uncover problematic inputs when a decision turns out to be wrong.

The first step is implementing models that produce transparent results and return variable importance, or feature information. New packages and resources for generating explanations and interpretations from models are constantly being developed, so it’s important to stay up to date with the latest implementations of modeling techniques that address the black box problem.
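As a concrete illustration of what “returning variable importance” looks like in practice, here is a minimal sketch using scikit-learn. The dataset and model choice are illustrative assumptions, not anything prescribed by the article; the same idea applies to any interpretable model that exposes feature importances or coefficients.

```python
# Illustrative sketch (assumed dataset and model, not from the article):
# fit an interpretable model and read out which features drove its decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

# Load a public tabular dataset as a DataFrame so features keep their names.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A shallow decision tree is a classic "white box" model: its splits can be
# inspected directly, and it exposes feature_importances_ after fitting.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Pair each feature name with its importance and sort, most influential first.
importances = sorted(
    zip(X.columns, tree.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)

# Report the top features behind the model's predictions.
for name, score in importances[:5]:
    print(f"{name}: {score:.3f}")
```

This is the kind of output a stakeholder or regulator can actually interrogate: instead of an opaque score, the model reports which inputs carried the most weight in its decisions.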

This opens up the next step in AI collaboration, Gandhi says: democratization. Businesses can implement a unified platform that brings different stakeholders together, giving them access not only to the data they need but also to each other’s work, which enables collaboration and increases democratization.

“The benefits are enormous,” she adds. “It’s about putting people together who have different skill sets to attack a problem collaboratively. You’re allowing people to do what they do best, but also making sure that they’re working together toward a common goal, which then drives a lot of results.”

To learn more about why explainable AI is a powerful business differentiator, the ways white box AI is transforming how businesses work, and more, don’t miss this VB Live event.

Don’t miss out!

Register here for free.

Key takeaways:

  • How to make the data science process collaborative across the organization
  • How to establish trust from the data all the way through the model
  • How to move your business toward data democratization

Speakers:

  • Triveni Gandhi, Data Scientist, Dataiku
  • David Fagnan, Director, Applied Science, Zillow Offers
  • Rumman Chowdhury, Managing Director, Accenture AI
  • Seth Colaner, AI Editor, VentureBeat