During its Build 2020 developer conference, which takes place online this week, Microsoft announced the addition of new capabilities to Azure Machine Learning, its cloud-based environment for training, deploying, and managing AI models. WhiteNoise, a toolkit for differential privacy, is now available both through Azure and in open source on GitHub, joining new AI interpretability and fairness tools as well as new access controls for data, models, and experiments; new techniques for fine-grained traceability and lineage; new confidential machine learning products; and new workflow accountability documentation.

The effort is part of Microsoft’s drive toward more explainable, secure, and “fair” AI systems. Studies have shown bias in facial recognition systems to be pervasive, for instance, and AI has a privacy problem in that many models can’t be trained on encrypted data. In addition to the Azure Machine Learning features launching today, Microsoft’s efforts to address those and other challenges include AI bias-detecting tools, internal work to reduce prejudicial errors, AI ethics checklists, and a committee (Aether) that advises on AI pursuits. Separately, Microsoft corporate vice president Eric Boyd says teams at Xbox, Bing, Azure, and across Microsoft 365 both informed the development of some of the toolkits released this morning and used them themselves.

“Organizations are now looking at how they [can] develop AI applications that are easy to explain and comply with regulations, for example non-discrimination and privacy regulations. They need tools with these AI models that they’re putting together that make it easier to explain, understand, protect, and control the data and the model,” Boyd told VentureBeat in a phone interview. “We think our approach to AI is differentiated by building a strong foundation on deep research and a thoughtful approach and commitment to open source.”

The WhiteNoise toolkit, developed in collaboration with researchers at Harvard’s Institute for Quantitative Social Science and School of Engineering, leverages differential privacy to make it possible to derive insights from data while protecting private information, such as names or dates of birth. Typically, differential privacy entails injecting a small amount of statistical noise into the data before it is used to train a machine learning model, making it difficult for malicious actors to extract the original records from the trained model. An algorithm can be considered differentially private if an observer seeing its output cannot tell whether a particular individual’s information was used in the computation.
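To make the idea concrete, here is a minimal sketch of one standard differential-privacy building block, the Laplace mechanism, in plain Python. This illustrates the concept rather than the WhiteNoise API; the laplace_count helper and the data are hypothetical.

```python
import numpy as np

def laplace_count(values, predicate, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy.

    Adding or removing any single record changes the true count by at
    most `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    bounds how much one individual's data can influence the output.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: privately release how many patients are over 65.
ages = [34, 71, 68, 45, 80, 59, 66]
noisy_count = laplace_count(ages, lambda a: a > 65, epsilon=0.5)
```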

WhiteNoise provides an extensible library of differentially private algorithms and mechanisms for releasing privacy-preserving queries and statistics, as well as APIs for defining an analysis and a validator for evaluating those analyses and calculating the total privacy loss on a data set. Microsoft says it could, for instance, enable a group of hospitals to collaborate on a better predictive model of the efficacy of cancer treatments, while at the same time helping them adhere to legal requirements to protect the privacy of hospital information and ensuring that no individual patient’s data leaks from the model.
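The validator’s bookkeeping can be pictured with a toy sketch: under basic sequential composition, the privacy cost (epsilon) of each released query adds up, and a fixed budget caps the total loss. The PrivacyAccountant class below is a hypothetical illustration of that idea, not part of WhiteNoise.

```python
class PrivacyAccountant:
    """Toy tracker for cumulative privacy loss under sequential composition."""

    def __init__(self, budget):
        self.budget = budget   # total epsilon the data owner will tolerate
        self.spent = 0.0

    def spend(self, epsilon):
        if self.spent + epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted; refuse the query")
        self.spent += epsilon

accountant = PrivacyAccountant(budget=1.0)
accountant.spend(0.5)    # first privacy-preserving query
accountant.spend(0.4)    # second query; 0.1 of the budget remains
# accountant.spend(0.2)  # would raise: total loss would exceed the budget
```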

Fairlearn, a separate toolkit backed by Microsoft’s AI and Ethics in Engineering and Research (Aether) Committee that will be integrated with Azure Machine Learning in June, aims to assess AI systems’ fairness and mitigate any observed unfairness in their algorithms. From within a dashboard, Fairlearn assesses whether an AI system is behaving unfairly toward people, focusing on two kinds of harms: allocation harms and quality-of-service harms. Allocation harms occur when AI systems extend or withhold opportunities, resources, or information, for example in hiring, school admissions, and lending. Quality-of-service harms refer to whether a system works as well for one person as it does for another, even if no opportunities, resources, or information are extended or withheld.
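Because Fairlearn is open source, the kind of per-group assessment the dashboard surfaces can be sketched in a few lines. The snippet below is a minimal sketch on synthetic stand-in data, using the MetricFrame API from recent releases of the fairlearn package (which may differ from the version being integrated into Azure Machine Learning); it breaks accuracy and selection rate out by a sensitive feature.

```python
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in data with a deliberately skewed label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
sex = rng.choice(["female", "male"], size=500)
y = (X[:, 0] + 0.5 * (sex == "male") + rng.normal(scale=0.5, size=500) > 0).astype(int)

pred = LogisticRegression().fit(X, y).predict(X)

frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=pred,
    sensitive_features=sex,
)
print(frame.by_group)      # accuracy and selection rate per group
print(frame.difference())  # largest between-group gap for each metric
```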

Fairlearn follows an approach known as group fairness, which seeks to uncover which groups of individuals are at risk of experiencing harm. The relevant groups (e.g., genders, skin tones, and ethnicities) are application-specific and are specified by the data scientist within the toolkit; group fairness is then formalized by a set of constraints, which require that some aspect (or aspects) of the AI system’s behavior be comparable across those groups.
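Mitigation follows the same pattern: a constraint object (demographic parity, in the sketch below) is handed to a reduction algorithm that retrains the underlying estimator so the constraint is approximately satisfied. This is a minimal sketch assuming the open-source fairlearn reductions API, reusing the synthetic X, y, and sex arrays defined in the sketch above.

```python
from fairlearn.reductions import DemographicParity, ExponentiatedGradient
from sklearn.linear_model import LogisticRegression

# DemographicParity asks that the selection rate be comparable across groups.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sex)   # X, y, sex from the previous sketch
mitigated_pred = mitigator.predict(X)
```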

According to Microsoft, professional services firm Ernst & Young used Fairlearn to evaluate the fairness of model outputs with respect to biological sex. The toolkit revealed a 15.3% difference in positive loan decisions between males and females, and Ernst & Young’s modeling team then developed and trained multiple remediated models and visualized the common trade-off between fairness and model accuracy. The team ultimately landed on a final model that preserved overall accuracy while reducing the difference between males and females to 0.43%.
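The general shape of that workflow can be sketched with fairlearn’s open-source reductions API, again on the synthetic data from the earlier snippets rather than Ernst & Young’s models: train a grid of candidate models under a demographic-parity constraint, then compare each candidate’s accuracy against its between-group disparity.

```python
from fairlearn.metrics import demographic_parity_difference
from fairlearn.reductions import DemographicParity, GridSearch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Sweep over constraint strengths to expose the accuracy/fairness trade-off.
sweep = GridSearch(LogisticRegression(), constraints=DemographicParity(), grid_size=10)
sweep.fit(X, y, sensitive_features=sex)       # X, y, sex from the earlier sketches

for candidate in sweep.predictors_:
    p = candidate.predict(X)
    print(accuracy_score(y, p),
          demographic_parity_difference(y, p, sensitive_features=sex))
```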

Last on the list of new toolkits is InterpretML, which debuted in alpha last year and today becomes available in Azure Machine Learning. InterpretML incorporates a number of machine learning interpretability techniques, using visualizations to help elucidate models’ behaviors and the reasoning behind their predictions. It can surface the parameters, or variables, that matter most to a model in any given use case, and it can explain why those parameters are important.
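InterpretML is likewise open source, so the kind of explanation it produces can be sketched locally. The example below is a minimal sketch that fits one of the package’s glass-box models, an Explainable Boosting Machine, on synthetic stand-in data and asks for global and local explanations; the exact API may vary by release.

```python
import numpy as np
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

# Synthetic stand-in data; any tabular X, y would do.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

# The Explainable Boosting Machine is one of InterpretML's glass-box models.
ebm = ExplainableBoostingClassifier().fit(X, y)
show(ebm.explain_global())             # which features matter most overall
show(ebm.explain_local(X[:5], y[:5]))  # reasoning behind five individual predictions
```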

“We want[ed] to make this available to a broad set of our customers through Azure Machine Learning to help them understand and explain what’s going on with their model,” said Boyd. “With all of [these toolkits], we think we’ve given developers a lot of power to really understand their models — they can see the interpretability of them [and the] fairness of them, and begin to understand other parameters they’re not comfortable with making predictions or that are swaying the model in a different way.”

Microsoft Build 2020: read all our coverage here.