Amid climate change, poaching, and encroachment on natural habitats, some animal populations have fared far worse than others. It's estimated that the populations of more than 4,000 species shrank by 60% between 1970 and 2014, and a recent United Nations global assessment found that as many as 1 million species are at risk of extinction within the next decade.
That's why Google has partnered with Conservation International and a consortium of other organizations: the Smithsonian's National Zoo and Conservation Biology Institute, the North Carolina Museum of Natural Sciences, Map of Life, the World Wide Fund for Nature, the Wildlife Conservation Society, and the Zoological Society of London, with support from Google's Earth Outreach program, the Gordon and Betty Moore Foundation, and Lyda Hill Philanthropies. Their goal is to help process one of the world's largest and most diverse databases of photographs taken by motion-activated cameras. As of today, the fruits of their labor are available through Google Cloud as part of Wildlife Insights, an AI-enabled platform that streamlines conservation monitoring by expediting the analysis of camera trap photos.
As Google Earth Outreach program manager Tanya Birch explains in a blog post, the thousands of camera traps placed and monitored around the world by biologists and land managers snap millions of photos each year. That volume makes sifting through the images efficiently a challenge, and not every animal is easy to spot, particularly when it's in the dark or hiding behind a bush. Upwards of 80% of the photos contain no animals at all, because the camera trap was triggered by blowing grass or other extraneous movement.
That's where Wildlife Insights' data set comes in. It's publicly available, so anyone can explore camera trap images and filter them by species, country, and year. Google says its AI Platform Prediction service, which hosts trained AI models in the cloud and runs data through them with minimal latency, lets Wildlife Insights analyze up to 3.6 million photos an hour, up to 3,000 times faster than human experts, who on average label 300 to 1,000 images per hour.
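The throughput comparison above reduces to simple arithmetic; a quick sanity check, using only the rates quoted in the article:

```python
# Throughput figures from the article: the hosted model processes up
# to 3.6 million photos per hour, while human experts label 300 to
# 1,000 photos per hour on average.
MODEL_PHOTOS_PER_HOUR = 3_600_000
HUMAN_SLOW, HUMAN_FAST = 300, 1_000

def speedup(model_rate: float, human_rate: float) -> float:
    """How many times faster the model is than a single human labeler."""
    return model_rate / human_rate

# Even measured against the fastest human rate, the model is
# thousands of times faster.
vs_fast = speedup(MODEL_PHOTOS_PER_HOUR, HUMAN_FAST)  # 3600.0
vs_slow = speedup(MODEL_PHOTOS_PER_HOUR, HUMAN_SLOW)  # 12000.0
```

Dividing by the fastest human rate gives roughly the thousands-fold speedup the article describes; against the slowest rate the gap is wider still.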
Google says it used its open source TensorFlow machine learning framework to help train a model that classifies the species in an image and automatically discards photos that don't contain an animal. The model achieved high accuracy across 614 species, with classification accuracy ranging from 80% to 98.6% for species such as jaguars and African elephants, as well as rarer wildlife like white-lipped peccaries.
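The two model behaviors described here, discarding empty frames and assigning a species label, can be sketched as a minimal two-stage triage pipeline. Everything below is illustrative: the `classify` stub stands in for the real TensorFlow model, and the label names and confidence threshold are assumptions rather than values from Wildlife Insights.

```python
from typing import NamedTuple

class Prediction(NamedTuple):
    label: str         # "blank" or a species name
    confidence: float  # model confidence in [0, 1]

def classify(image_id: str) -> Prediction:
    """Stand-in for the trained classifier; a real deployment would
    run the image through the hosted TensorFlow model instead."""
    fake_model = {
        "img_001": Prediction("blank", 0.97),  # wind-triggered frame
        "img_002": Prediction("jaguar", 0.92),
        "img_003": Prediction("white-lipped peccary", 0.81),
    }
    return fake_model[image_id]

def triage(image_ids, min_confidence=0.65):
    """Drop blank frames; keep species predictions above a threshold."""
    kept = {}
    for image_id in image_ids:
        pred = classify(image_id)
        if pred.label == "blank" or pred.confidence < min_confidence:
            continue  # discard: no animal, or too uncertain to auto-label
        kept[image_id] = pred
    return kept

results = triage(["img_001", "img_002", "img_003"])
```

Here `img_001` is filtered out as a blank frame, while the other two retain species labels; in practice, low-confidence predictions would be routed to human reviewers rather than silently dropped.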
“With this data, managers of protected areas or anti-poaching programs can gauge the health of specific species, and local governments can use data to inform policies and create conservation measures,” said Birch. “While we’re just at the beginning of applying AI to better understand wildlife from sensors in the field, solutions like Wildlife Insights can help us protect our planet so that future generations can live in a world teeming with wildlife.”
Google is far from the first to apply AI to ecology. Sibling company DeepMind recently detailed ecological research its science team is conducting to develop AI systems that help study the behavior of animal species in Tanzania's Serengeti National Park. Microsoft recently highlighted Conservation Metrics, a Santa Cruz, California-based startup that's leveraging machine learning to track African elephants. Separately, a team of researchers developed an algorithm trained on the Snapshot Serengeti data set that can identify, describe, and count wildlife with 96.6% accuracy, and Intel's TrailGuard AI system helps prevent poaching by detecting motion with cameras that run an on-device AI algorithm.