Floods are among the most common — and deadly — natural disasters in the world. Every year, they’re responsible for tens of thousands of fatalities and hundreds of millions of displaced people. And they’re extraordinarily costly — in the U.S. alone from 2005 to 2014, the average flood claim was $42,000, and total flood insurance claims averaged more than $3.5 billion per year.
Accurate flood forecasting is a desirable goal, needless to say; according to some studies, early warning systems can reduce deaths and economic damages by over a third. Fortunately, it’s one scientists continue to inch toward with the help of artificial intelligence (AI). In a new paper (“ML for Flood Forecasting at Scale”) published on the preprint server arXiv.org, researchers from Google, the Israel Institute of Technology, and Bar-Ilan University describe a machine learning system that accurately predicts riverine floods — that is, floods from overrun riverbanks.
The study is a retrospective on Google’s work in Patna, India, late last year, where the Mountain View company piloted a flood-predicting model in partnership with the Central Water Commission of India. It builds on research published by Harvard and Google in August 2018, which described an AI model capable of predicting the location of aftershocks up to one year after a major earthquake, and by Facebook AI researchers in December, who developed a method to analyze satellite imagery and quantify damage from fires and other disasters.
“Effective riverine flood forecasting at scale is hindered by a multitude of factors, most notably the need to rely on human calibration in current methodology, the limited amount of data for a specific location, and the computational difficulty of building … models that are sufficiently accurate,” the team wrote. “Machine learning is primed to be useful in this scenario: learned models [frequently] surpass human experts in complex high-dimensional scenarios.”
As the paper notes, one of the biggest challenges in building a flood prediction model is parameter calibration, an optimization process aimed at matching the algorithm’s predictions to certain baseline measurements. The standard approach involves significant manual work, and often results in models that aren’t generalizable.
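To make the calibration idea concrete, here is a minimal sketch — not the paper’s method — of tuning a single parameter of a toy rainfall-runoff model so its predictions best match observed measurements. All names, the linear model, and the data are hypothetical, chosen only to illustrate the optimization loop that calibration implies.

```python
# Hypothetical parameter-calibration sketch (not the paper's approach).
# We grid-search a single runoff coefficient so that simulated discharge
# best matches a set of observed measurements.

def simulate_discharge(rainfall, runoff_coef):
    """Toy linear rainfall-runoff model: discharge = coefficient * rainfall."""
    return [runoff_coef * r for r in rainfall]

def mean_squared_error(predicted, observed):
    """Average squared gap between simulated and observed discharge."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

def calibrate(rainfall, observed, candidates):
    """Pick the candidate coefficient that minimizes error vs. observations."""
    return min(
        candidates,
        key=lambda c: mean_squared_error(simulate_discharge(rainfall, c), observed),
    )

rainfall = [10.0, 25.0, 5.0, 40.0]   # rainfall per period (synthetic)
observed = [6.1, 15.2, 2.9, 24.3]    # observed discharge (synthetic)

best = calibrate(rainfall, observed, [c / 100 for c in range(1, 101)])
```

Real calibration tunes many interacting parameters per river basin, which is exactly why the manual version is labor-intensive and basin-specific — the point the authors make about poor generalizability.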
The researchers overcame a few of those barriers by drawing on real-time measurements and short-term forecasts of river water levels, from which their model generates an inundation map — a map that shows where flooding may occur over a range of water levels — estimating the extent of the predicted flood. They claim that, based on alerts produced for the 2018 monsoon season, the forecasts are accurate down to a resolution of 300 meters, with over 90 percent recall and 75 percent precision.
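For readers unfamiliar with those metrics, the following sketch shows how recall and precision could be computed for an inundation map treated as a grid of flooded/dry cells. The data and function are illustrative assumptions, not drawn from the paper.

```python
# Illustrative precision/recall computation for a flood-extent prediction.
# Each grid cell is flagged 1 (flooded) or 0 (dry); we compare the model's
# predicted map against the observed flood extent.

def precision_recall(predicted, observed):
    """Precision: of the cells predicted flooded, how many really flooded?
    Recall: of the cells that really flooded, how many were predicted?"""
    tp = sum(1 for p, o in zip(predicted, observed) if p == 1 and o == 1)
    fp = sum(1 for p, o in zip(predicted, observed) if p == 1 and o == 0)
    fn = sum(1 for p, o in zip(predicted, observed) if p == 0 and o == 1)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Flattened grid of 300 m cells: 1 = flooded, 0 = dry (synthetic data)
predicted = [1, 1, 1, 0, 1, 0, 0, 1]
observed  = [1, 1, 0, 0, 1, 0, 1, 1]

p, r = precision_recall(predicted, observed)
```

High recall matters most for warnings (few floods missed), while precision limits false alarms — the 90/75 split the team reports reflects that trade-off.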
“[Th]e physical processes [of floods have been] relatively well understood for several decades now, and relatively little calibration [was] required,” the study’s authors wrote.
That said, it’s not a perfect model, owing to high computational costs from the physics-based simulations and inaccuracies due to erroneous inputs. But the team believes that machine learning techniques hold the key to improving predictions in future work, and that those techniques might one day be used to forecast events not simulated by physics-based models, such as snowmelt and river discharge.
One imagines that the fruits of those labors will eventually make their way into Google’s Public Alerts program, which informs users of apps like Google Search, Maps, and Google News of ongoing or impending natural disasters such as hurricanes, volcanic eruptions, tsunamis, and earthquakes. Government agencies in the U.S., Australia, Canada, Colombia, Japan, Taiwan, Indonesia, Mexico, the Philippines, India, New Zealand, and Brazil currently participate.
“We believe ML can improve the quality of multiple components,” they said. “Toward making this possible, we are collecting, organizing and combining open data sets from different sources to make this problem more accessible for the ML community.”