Bias is a well-established problem in artificial intelligence (AI) and is typical of models trained on unrepresentative datasets. It’s a tougher challenge to solve than you might think, particularly in image classification tasks where racial, societal, and ethnic prejudices frequently rear their ugly heads.

In a crowdsourced attempt to combat the problem, Google partnered with the NeurIPS competition track in September to launch the Inclusive Images Competition. The competition challenges teams to use Open Images — a publicly available dataset of nine million labeled images sampled from North America and Europe — to train an AI system that is evaluated on photos collected from regions around the world. It’s hosted on Kaggle, Google’s data science and machine learning community portal.

Pallavi Baljekar, a Google Brain researcher, gave a progress update this morning during a presentation on algorithmic fairness.

“[Image classification] performance … has [been] improving drastically … over the last few years … [and] has almost surpassed human performance [on some datasets],” Baljekar said. “[But we wanted to] see how well the models [did] on real-world data.”

To that end, Google AI scientists set a pretrained Inception v3 model loose on the Open Images dataset. One photo — a Caucasian bride in a Western-style, long and full-skirted wedding dress — resulted in labels like “dress,” “women,” “wedding,” and “bride.” However, another image — also of a bride, but of Asian descent and in ethnic dress — produced labels like “clothing,” “event,” and “performance art.” Worse, the model completely missed the person in the image.

“As we move away from the Western presentation of what a bride looks like … the model is not likely to [produce] image labels as a bride,” Baljekar said.


Above: Wedding photographs labeled by a classifier trained on the Open Images dataset.

Image Credit: Google AI

The reason is no mystery. Comparatively few of the photos in the Open Images dataset are from China, India, and the Middle East. And research has already shown that computer vision systems are susceptible to racial bias.

A 2011 study found that AI developed in China, Japan, and South Korea had more trouble distinguishing between Caucasian faces than between East Asian faces, and a separate study conducted in 2012 found that facial recognition algorithms from vendor Cognitec performed 5 to 10 percent worse on African Americans than on Caucasians. More recently, a House oversight committee hearing on facial recognition technologies revealed that algorithms used by the Federal Bureau of Investigation to identify criminal suspects are wrong about 15 percent of the time, while Amazon’s Rekognition program misidentified 28 members of Congress as criminals, with a strong bias against individuals of color.

The Inclusive Images Competition’s goal, then, was to spur competitors to develop image classifiers for scenarios where data collection would be difficult — if not impossible.

To compile a diverse dataset against which submitted models could be evaluated, Google AI used an app that instructed users to take pictures of objects around them and generated captions using on-device machine learning. The captions were converted into action labels and passed through an image classifier, and the resulting labels were verified by a human team. A second verification step ensured people were properly labeled in images.

In the first of two competition stages, in which 400 teams participated, Google AI released 32,000 diverse images sampled from geolocations and label distributions different from those of the Open Images training data. In the second stage, Google released 100,000 images whose labels and geographical distributions differed from both the first-stage set and the training dataset.


Above: Examples of labeled images from the challenge dataset.

Image Credit: Google AI

So what were the takeaways? The top three teams used a combination of networks and data augmentation techniques, and their AI systems maintained relatively high accuracy in both stage one and stage two. And while four out of five of the top teams’ models didn’t predict the “bride” label when applied to the original two bride images, they did recognize a person in the images.

“Even with a small, diverse set of data, we can improve performance on unseen target distributions,” Baljekar said.

Google AI will release a 500,000-image diverse dataset on December 7.