A.I. Is Helping Scientists Predict When and Where the Next Big Earthquake Will Be
Source: Thomas Fuller and Cade Metz
Countless dollars and entire scientific careers have been dedicated to predicting where and when the next big earthquake will strike. But unlike weather forecasting, which has significantly improved with the use of better satellites and more powerful mathematical models, earthquake prediction has been marred by repeated failure.
Some of the world’s most destructive earthquakes — China in 2008, Haiti in 2010 and Japan in 2011, among them — occurred in areas that seismic hazard maps had deemed relatively safe. The last large earthquake to strike Los Angeles, Northridge in 1994, occurred on a fault that did not appear on seismic maps.
Now, with the help of artificial intelligence, a growing number of scientists say changes in the way they can analyze massive amounts of seismic data can help them better understand earthquakes, anticipate how they will behave, and provide quicker and more accurate early warnings.
“I am actually hopeful for the first time in my career that we will make progress on this problem,” said Paul Johnson, a fellow at the Los Alamos National Laboratory who is among those at the forefront of this research.
Well aware of past earthquake prediction failures, scientists are cautious when asked how much progress they have made using A.I. Some in the field refer to prediction as “the P word,” because they do not even want to imply it is possible. But one important goal, they say, is to be able to provide reliable forecasts.
The earthquake probabilities that are provided on seismic hazard maps, for example, have crucial consequences, most notably in instructing engineers how they should construct buildings. Critics say these maps are remarkably inexact.
A map of Los Angeles lists the probability of an earthquake producing strong shaking within a given period of time — usually 50 years. That is based on a complex formula that takes into account, among other things, the distance from a fault, how fast one side of a fault is moving past the other, and the recurrence of earthquakes in the area.
A study led by Katherine M. Scharer, a geologist with the United States Geological Survey, estimated dates for nine previous earthquakes along the Southern California portion of the San Andreas fault dating back to the eighth century. The last big earthquake on the San Andreas was in 1857.
Since the average interval between these big earthquakes was 135 years, a common interpretation is that Southern California is due for a big earthquake. Yet the intervals between earthquakes are so varied — ranging from 44 years to 305 years — that taking the average is not a very useful prediction tool. A big earthquake could come tomorrow, or it could come in a century and a half or more.
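To see the problem concretely, consider a deliberately simplified, time-independent model (a back-of-the-envelope sketch in Python, not the formula behind official hazard maps): the chance of at least one big quake in a 50-year window swings widely depending on which recurrence interval is plugged in.

```python
import math

def poisson_prob(mean_interval_years, window_years=50):
    """Chance of at least one event in the window, assuming a
    memoryless (Poisson) process with the given mean recurrence."""
    return 1 - math.exp(-window_years / mean_interval_years)

# Recurrence figures cited for the southern San Andreas fault: an average
# interval of 135 years, with individual intervals from 44 to 305 years.
for interval_years in (44, 135, 305):
    print(f"assumed interval {interval_years:>3} yr -> "
          f"50-year probability {poisson_prob(interval_years):.0%}")
```

Under those assumptions the answer ranges from roughly 15 percent to nearly 70 percent, which is the heart of the criticism that follows.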
This is one of the criticisms of Philip Stark, an associate dean of the Division of Mathematical and Physical Sciences at the University of California, Berkeley. Dr. Stark describes the overall system of earthquake probabilities as “somewhere between meaningless and misleading” and has called for it to be scrapped.
The new A.I.-related earthquake research is leaning on neural networks, the same technology that has accelerated the progress of everything from talking digital assistants to driverless cars. Loosely modeled on the web of neurons in the human brain, a neural network is a complex mathematical system that can learn tasks on its own.
Scientists say seismic data is remarkably similar to the audio data that companies like Google and Amazon use in training neural networks to recognize spoken commands on coffee-table digital assistants like Alexa. When studying earthquakes, it is the computer looking for patterns in mountains of data rather than relying on the weary eyes of a scientist.
“Rather than a sequence of words, we have a sequence of ground-motion measurements,” said Zachary Ross, a researcher in the California Institute of Technology’s Seismological Laboratory who is exploring these A.I. techniques. “We are looking for the same kinds of patterns in this data.”
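As an illustration of that analogy (a minimal sketch, not the Caltech lab’s actual model; the channel count, window length and layer sizes are assumptions), a small one-dimensional convolutional network can be set up to label short windows of three-component ground motion as earthquake or noise, much as a speech model labels snippets of audio:

```python
import torch
import torch.nn as nn

class WaveformClassifier(nn.Module):
    """Toy 1-D convolutional network that labels a fixed-length window of
    ground-motion samples as earthquake or noise."""
    def __init__(self, n_samples=400):                    # e.g. 4 seconds at 100 Hz
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3),   # 3 channels: N, E, Z
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), 2)

    def forward(self, x):                                 # x: (batch, 3, n_samples)
        return self.classifier(self.features(x).flatten(1))

model = WaveformClassifier()
fake_waveforms = torch.randn(8, 3, 400)                   # stand-in for real seismograms
print(model(fake_waveforms).shape)                        # torch.Size([8, 2])
```

The same pattern-matching machinery that picks a wake word out of living-room audio is here asked to pick out the signature of an arriving seismic wave.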
Brendan Meade, a professor of earth and planetary sciences at Harvard, began exploring these techniques after spending a sabbatical at Google, a company at the forefront of A.I. research.
His first project showed that, at the very least, these machine-learning methods could significantly accelerate his experiments. He and his graduate students used a neural network to run an earthquake analysis 500 times faster than they could in the past. What once took days now took minutes.
Dr. Meade also found that these A.I. techniques could lead to new insights. In the fall, with other researchers from Google and Harvard, he published a paper showing how neural networks can forecast earthquake aftershocks. This kind of project, he believes, represents an enormous shift in the way earthquake science is done. Similar work is underway at places like Caltech and Stanford University.
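A schematic of how such an aftershock forecaster can be wired up (a toy stand-in, not the published Harvard and Google model; the six input features, layer sizes and random placeholder data are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Each example stands for a grid cell near a mainshock, described by a handful
# of stress-change features; the label says whether aftershocks occurred there.
model = nn.Sequential(
    nn.Linear(6, 50), nn.Tanh(),
    nn.Linear(50, 50), nn.Tanh(),
    nn.Linear(50, 1),                                # logit for "aftershocks here"
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(1024, 6)                      # placeholder stress features
labels = torch.randint(0, 2, (1024, 1)).float()      # placeholder aftershock labels

for epoch in range(5):                               # a few passes over the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The idea is that a network of this general shape, trained on real catalogs of mainshocks and their aftershocks, can score each cell for aftershock likelihood and so produce a forecast map.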
“We are at a point where the technology can do as well as — or better than — human experts,” Dr. Ross said.
Driving that guarded optimism is the belief that as sensors get smaller and cheaper, scientists will be able to gather larger amounts of seismic data. With help from neural networks and similar A.I. techniques, they hope to glean new insights from all this data.
Dr. Ross and other Caltech researchers are using these techniques to build systems that can more accurately recognize earthquakes as they are happening and anticipate where the epicenter is and where the shaking will spread.
Japan and Mexico have early warning systems, and California just rolled out its own. But scientists say artificial intelligence could greatly improve their accuracy, helping predict the direction and intensity of a rupture in the earth’s crust and providing earlier warnings to hospitals and other institutions that could benefit from a few extra seconds of preparation.
“The more detail you have, the better your forecasts will be,” Dr. Ross said.
Scientists working on these projects said neural networks have their limits. Though they are good at finding familiar signals in data, they are not necessarily suited to finding new kinds of signals — like the sounds tectonic plates make as they grind together.
But at Los Alamos, Dr. Johnson and his colleagues have shown that a machine-learning technique called “random forests” can identify previously unknown signals in a simulated fault created inside a lab. In one case, their system showed that a particular sound made by the fault, which scientists previously thought was meaningless, was actually an indication of when an earthquake would arrive.
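A rough sketch of that style of analysis (using scikit-learn’s random forest on synthetic stand-in data; the real work used continuous acoustic recordings from the laboratory apparatus, and the feature choices here are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows = 2000
acoustic = rng.normal(size=(n_windows, 1000))         # stand-in acoustic windows
time_to_failure = rng.uniform(0, 10, size=n_windows)  # stand-in labels, in seconds

# Summarize each window with simple statistics; in the lab work, features of a
# signal once dismissed as noise turned out to track the time until failure.
features = np.column_stack([
    acoustic.mean(axis=1),
    acoustic.std(axis=1),
    np.abs(acoustic).max(axis=1),
])

X_train, X_test, y_train, y_test = train_test_split(
    features, time_to_failure, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
print("R^2 on held-out windows:", round(forest.score(X_test, y_test), 3))
```

With random placeholder data the held-out score will hover near zero; the point is the workflow, in which the forest learns which statistics of the fault’s “sound” are informative about when the next slip will come.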
Some scientists, like Robert Geller, a seismologist at the University of Tokyo, are unconvinced that A.I. will improve earthquake forecasts. He questions the very premise that past earthquakes can predict future ones. And ultimately, he said, the effectiveness of A.I. forecasting will be known only when earthquakes can be predicted at a rate better than random chance.
“There are no shortcuts,” Dr. Geller said. “If you cannot predict the future, then your hypothesis is wrong.”