Seismic events like the magnitude-9.0 earthquake that struck off the coast of Japan in March 2011 aren’t difficult to detect, but few are quite so violent. Microearthquakes — low-intensity events that register magnitude 2.0 or less on the moment magnitude scale — rarely cause property damage. And because their faint signals are easily lost in background noise or mistaken for false positives, they’re not always picked up by seismic monitoring systems.

A possible solution is described in a new paper from the Department of Geophysics at Stanford University, where scientists have developed an AI system — dubbed the CNN-RNN Earthquake Detector, or CRED — that can isolate and identify a range of seismic signals from historical and continuous data.

It builds on the work of Harvard and Google, which in August created an AI model capable of predicting the location of aftershocks up to one year after a major earthquake.

The researchers’ system consists of neural network layers — interconnected processing nodes that loosely mimic the function of neurons in the brain — of two types: convolutional neural networks and recurrent neural networks. The former extracts features from seismograms, while the latter — which can combine memory and inputs to improve the accuracy of its predictions — learns the sequential characteristics of those seismograms.
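A minimal NumPy sketch can illustrate the division of labor in this kind of hybrid. All layer sizes, weights, and names here are illustrative, not the paper's actual architecture: a 1-D convolution extracts local features from a waveform, and a simple recurrent pass summarizes their sequence.

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1-D convolution: slide a filter bank over the waveform to
    extract local features. x: (T,) signal; kernels: (K, W) filters.
    Returns a (T - W + 1, K) feature sequence after a ReLU."""
    K, W = kernels.shape
    T = len(x)
    out = np.empty((T - W + 1, K))
    for t in range(T - W + 1):
        out[t] = kernels @ x[t:t + W]
    return np.maximum(out, 0.0)

def rnn(features, Wx, Wh, b):
    """Simple recurrent pass: the hidden state h carries memory of earlier
    time steps, which is how the network models sequential structure."""
    h = np.zeros(Wh.shape[0])
    for f in features:
        h = np.tanh(Wx @ f + Wh @ h + b)
    return h

rng = np.random.default_rng(0)
signal = rng.standard_normal(3000)            # stand-in for a 30 s, 100 Hz trace
kernels = rng.standard_normal((8, 16)) * 0.1  # 8 filters of width 16
Wx = rng.standard_normal((4, 8)) * 0.1
Wh = rng.standard_normal((4, 4)) * 0.1
b = np.zeros(4)

feats = conv1d(signal, kernels)          # CNN stage: local waveform features
state = rnn(feats, Wx, Wh, b)            # RNN stage: sequence summary
score = 1 / (1 + np.exp(-state.sum()))   # toy detection probability in (0, 1)
```

A real detector would stack several convolutional and recurrent layers and learn the weights from labeled seismograms; the point of the sketch is only the pipeline shape: waveform in, feature sequence out of the CNN, sequence summary out of the RNN.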


The two make up a residual-learning framework, an architecture that mitigates a common problem of multilayered neural networks. Typically, as the number of layered nodes increases, accuracy saturates and then degrades. But because residual blocks learn functions relative to their inputs — passing those inputs forward through shortcut connections — the neural nets within them are able both to maintain accuracy and to learn more high-level features from datasets. As an added benefit, they’re easier to optimize.

To train and validate the earthquake-detecting AI system, the researchers sourced continuous data recorded in Guy-Greenbrier, Arkansas, during 2011 — a catalog containing 3,788 events — along with 550,000 30-second, three-component seismograms from 889 monitoring stations in Northern California.

Roughly 50,000 of the 550,000 samples were used to evaluate performance. The result? The network was able to detect earthquake signals whether the seismic event was large or small, local, or obscured by a high degree of background noise. Crucially, the AI didn’t require the full length of a signal to detect an earthquake; a partial record was enough.

When fed the continuous data from the Guy-Greenbrier dataset, the model, which took a little over an hour to train on a laptop, detected 1,102 microearthquakes and earthquakes caused by hydraulic fracturing, wastewater injection, and tectonic plate movement — including 77 that hadn’t been previously cataloged.

“Our model is able to detect more than 700 microearthquakes as small as -1.3 [magnitude] induced … far away [from] the training region,” they wrote.

The researchers report that in all tests, the learned model achieved “superior” performance compared to two widely deployed seismic systems. And they note that it generalized well to seismic data that it hadn’t seen.

“[O]nce the network is trained, it can be applied to a stream of seismic data in real time,” they write. “The architecture is flexible and can be scaled up easily. False positive rates are minimal due to the high-resolution modeling of earthquake signals based on their spectral structure.”
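Applying a trained detector to a continuous stream typically means sliding a fixed-length window along the data and scoring each window. The sketch below assumes that setup; the window and hop sizes, the `stream_detect` helper, and the energy-threshold stand-in for a trained model are all hypothetical.

```python
import numpy as np

def stream_detect(samples, detector, window=3000, hop=300):
    """Slide a fixed-length window over a continuous stream and record the
    start index of every window the detector flags. Sizes are illustrative."""
    hits = []
    for start in range(0, len(samples) - window + 1, hop):
        if detector(samples[start:start + window]):
            hits.append(start)
    return hits

# Toy stand-in for a trained model: flag windows whose mean energy
# clearly exceeds the background-noise level.
energy_detector = lambda w: float(np.mean(w ** 2)) > 2.0

rng = np.random.default_rng(1)
stream = rng.standard_normal(12000)                          # background noise
stream[6000:6500] += 5.0 * np.sin(np.linspace(0, 40, 500))   # injected "event"
picks = stream_detect(stream, energy_detector)               # windows overlapping the event
```

Overlapping hops mean the same event is flagged by several consecutive windows, so a production pipeline would merge adjacent picks into one detection.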

The team believes that the machine learning model, which they say can be easily scaled to multiple sensors, could perform real-time monitoring in tectonically active zones or serve as the foundation of an early earthquake warning system.