Google today released an experimental module for TensorFlow Privacy, its privacy-preserving TensorFlow toolkit, that enables assessments of the privacy properties of various machine learning classifiers. The company says it’s intended to be the foundation of a privacy testing suite that can be used by any AI developer regardless of skill level.

The merits of various AI privacy techniques remain a topic of debate within the community. There are no canonical guidelines for producing a private model, but a growing body of research suggests AI models can leak sensitive information from their training data sets, creating a privacy risk. The mitigation approach favored by TensorFlow Privacy is differential privacy, which adds noise to hide individual examples in the training data. But this noise is designed for academic worst-case scenarios and can significantly affect model accuracy.
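
To make that concrete, here is a minimal sketch of how noise is typically injected during training with TensorFlow Privacy's DP-SGD optimizer; the model architecture and all hyperparameter values below are placeholders, and the privacy/accuracy trade-off is governed chiefly by the clipping norm and noise multiplier.

```python
import tensorflow as tf
import tensorflow_privacy

# Toy classifier; the architecture and input shape are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2),
])

# DP-SGD clips each example's gradient to l2_norm_clip and adds Gaussian
# noise scaled by noise_multiplier before applying the update, which is
# what hides any single training example, at some cost to accuracy.
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # illustrative values; raising the noise
    noise_multiplier=1.1,   # strengthens privacy but hurts accuracy
    num_microbatches=32,
    learning_rate=0.1,
)

# The loss must stay per-example (no reduction) so the optimizer can
# clip and noise each microbatch separately.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=32, epochs=5)
# (batch_size should be divisible by num_microbatches)
```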

This motivated researchers at Google to pursue an alternative: membership inference attacks. The method, which the new TensorFlow Privacy module supports, builds classifiers that infer whether a particular sample was present in the training data set. The more accurate the classifier is, the more memorization is present and thus the less privacy-preserving the model is, the intuition being that an attacker who can make such predictions with high accuracy will succeed in figuring out which data was used in the training set.
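
To make the idea concrete, here is a toy, library-agnostic version of such an attack classifier (it is not the new module's API): given the target model's per-example losses on its own training data and on held-out data, a simple logistic regression learns to tell members from non-members, and its accuracy serves as a rough measure of memorization.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def membership_attack_accuracy(loss_train, loss_test):
    """Train an attack classifier that predicts, from the target model's
    per-example loss alone, whether an example was in the training set."""
    # Label 1 = member (seen during training), 0 = non-member (held out).
    features = np.concatenate([loss_train, loss_test]).reshape(-1, 1)
    membership = np.concatenate(
        [np.ones(len(loss_train)), np.zeros(len(loss_test))])

    # Fit the attacker on half the examples, evaluate it on the other half.
    X_fit, X_eval, y_fit, y_eval = train_test_split(
        features, membership, test_size=0.5, random_state=0)
    attacker = LogisticRegression().fit(X_fit, y_fit)

    # ~0.5 means the attacker can do no better than guessing (little
    # memorization); values near 1.0 indicate a leaky, memorizing model.
    return attacker.score(X_eval, y_eval)
```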

The tests provided by the new module are black-box, meaning they use only the outputs of models rather than their internals (weights) or input samples. They produce a vulnerability score that indicates whether the model leaks information from the training set, and they don’t require any retraining, making them relatively easy to perform.
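
As a rough illustration of what such a black-box score can look like (again, a sketch rather than the module's actual interface), the function below consumes nothing but the model's predicted probabilities and the true labels, requires no retraining, and returns an AUC-style vulnerability score.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def blackbox_vulnerability_score(probs_train, labels_train,
                                 probs_test, labels_test):
    """AUC for separating training members from non-members using only
    the model's output probabilities (no weights, no input samples)."""
    def per_example_loss(probs, labels):
        # Cross-entropy of the probability assigned to the true class.
        return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

    loss_members = per_example_loss(probs_train, labels_train)
    loss_nonmembers = per_example_loss(probs_test, labels_test)

    membership = np.concatenate(
        [np.ones(len(loss_members)), np.zeros(len(loss_nonmembers))])
    # Members tend to have lower loss, so negate it to use as an attack score.
    attack_scores = -np.concatenate([loss_members, loss_nonmembers])

    # ~0.5 means no detectable leakage; values approaching 1.0 mean the
    # model's outputs give membership away.
    return roc_auc_score(membership, attack_scores)
```

A score like this can be tracked across architectures, regularization settings, or training checkpoints without ever touching the model's weights.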

“After using membership inference tests internally, we’re sharing them with developers to help them build more private models, explore better architecture choices, use regularization techniques such as early stopping, dropout, weight decay, and input augmentation, or collect more data,” Google Brain’s Shuang Song and Google software engineer David Marn wrote in a post on the TensorFlow blog. “Ultimately, these tests can help the developer community identify more architectures that incorporate privacy design principles and data processing choices.”

Google says that moving forward, it will explore the feasibility of extending membership inference attacks beyond classifiers and develop new tests. It also plans to explore adding the new test to the TensorFlow ecosystem by integrating it with TensorFlow Extended (TFX), an end-to-end platform for deploying production machine learning pipelines.

In related news, Google today added support for Go and Java to the foundational differential privacy library it open-sourced last summer. It also made available Privacy on Beam, an end-to-end differential privacy solution built on Apache Beam (a programming model with language-specific SDKs) that combines the lower-level building blocks from the differential privacy library into an “out-of-the-box” package that takes care of the steps essential to differential privacy. In addition, Google launched a new Privacy Loss Distribution tool for tracking privacy budgets, which allows developers to maintain an estimate of the total cost to user privacy for collections of differentially private queries and to better evaluate the overall impact of their pipelines.
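
For readers unfamiliar with those building blocks, the sketch below shows the underlying ideas in plain Python rather than through the Go or Java libraries themselves (none of these names map to the real APIs): a differentially private aggregate adds Laplace noise calibrated to its sensitivity and a privacy parameter epsilon, and a naive accountant tracks how the total budget is spent across queries. The Privacy Loss Distribution tool performs much tighter accounting than this simple sum.

```python
import numpy as np

def dp_sum(values, lower, upper, epsilon, rng=None):
    """Differentially private sum via the Laplace mechanism.

    Each contribution is clamped to [lower, upper], so removing one person
    changes the true sum by at most `sensitivity`; Laplace noise with scale
    sensitivity / epsilon then hides any individual contribution.
    """
    rng = rng if rng is not None else np.random.default_rng()
    clamped = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = max(abs(lower), abs(upper))
    return clamped.sum() + rng.laplace(scale=sensitivity / epsilon)

class NaiveBudgetTracker:
    """Tracks total privacy cost with basic composition: running queries
    with epsilon_1, ..., epsilon_k costs at most their sum."""

    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def spend(self, epsilon):
        if epsilon > self.remaining:
            raise ValueError("privacy budget exhausted")
        self.remaining -= epsilon
        return epsilon

# Example: two queries over a per-user spend column under a total budget of 1.0.
tracker = NaiveBudgetTracker(total_epsilon=1.0)
purchases = [12.0, 3.5, 40.0, 7.25]
noisy_total = dp_sum(purchases, lower=0.0, upper=50.0, epsilon=tracker.spend(0.5))
noisy_count = dp_sum([1.0] * len(purchases), lower=0.0, upper=1.0,
                     epsilon=tracker.spend(0.5))
```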

