So far, no one has found a reliable way to forecast earthquakes, even though many scientists have tried. Some experts consider it a hopeless endeavor. “You’re viewed as a nutcase if you say you think you’re going to make progress on predicting earthquakes,” says Paul Johnson, a geophysicist at Los Alamos National Laboratory. But he is trying anyway, using a powerful tool he thinks could potentially solve this impossible puzzle: artificial intelligence.
Researchers around the world have spent decades studying various phenomena they thought might reliably predict earthquakes: foreshocks, electromagnetic disturbances, changes in groundwater chemistry—even unusual animal behavior. But none of these has consistently worked. Mathematicians and physicists even tried applying machine learning to quake prediction in the 1980s and ’90s, to no avail. “The whole topic is kind of in limbo,” says Chris Scholz, a seismologist at Columbia University’s Lamont–Doherty Earth Observatory.
But advances in technology—improved machine-learning algorithms and supercomputers, as well as the ability to store and work with vastly greater amounts of data—may now give Johnson’s team a new edge in using artificial intelligence. “If we had tried this 10 years ago, we would not have been able to do it,” says Johnson, who is collaborating with researchers from several institutions. Along with more sophisticated computing, he and his team are trying something in the lab no one else has done before: They are feeding machines raw data—massive sets of measurements taken continuously before, during and after lab-simulated earthquake events. They then allow the algorithm to sift through the data to look for patterns that reliably signal when an artificial quake will happen. In addition to lab simulations, the team has also begun doing the same type of machine-learning analysis using raw seismic data from real temblors.
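A common first step in this kind of analysis—letting an algorithm sift continuous raw measurements for predictive patterns—is to chop the signal into windows and compute summary statistics per window, which then serve as features for a learning model. The sketch below illustrates that preprocessing idea only; the window size, the specific statistics, and the function name are illustrative assumptions, not the team's actual pipeline.

```python
import numpy as np

def window_features(signal, win):
    """Split a continuous acoustic/strain signal into non-overlapping
    windows of length `win` and compute simple summary statistics
    (mean, standard deviation, peak absolute amplitude) per window.
    These per-window statistics are a typical feature set fed to a
    machine-learning model that tries to relate signal character to
    time-to-failure."""
    n_windows = len(signal) // win
    feats = []
    for i in range(n_windows):
        w = signal[i * win:(i + 1) * win]
        feats.append([w.mean(), w.std(), np.abs(w).max()])
    return np.array(feats)

# Illustrative use on a synthetic signal: 1,000 samples -> 10 windows,
# each described by 3 statistics.
synthetic = np.sin(np.linspace(0.0, 20.0, 1000))
features = window_features(synthetic, 100)
```

In practice, a model (the team has used tree-based regressors on such features) is then trained to map each window's statistics to the time remaining before the next lab quake.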
It remains unknown how the small strains induced by seismic waves can trigger earthquakes at large distances, in some cases thousands of kilometres from the triggering earthquake, with failure often occurring long after the waves have passed [1–6]. Earthquake nucleation is usually observed to take place at depths of 10–20 km, and so static overburden should be large enough to inhibit triggering by seismic-wave stress perturbations. To understand the physics of dynamic triggering better, as well as the influence of dynamic stressing on earthquake recurrence, we have conducted laboratory studies of stick–slip in granular media with and without applied acoustic vibration. Glass beads were used to simulate granular fault zone material, sheared under constant normal stress, and subject to transient or continuous perturbation by acoustic waves. Here we show that small-magnitude failure events, corresponding to triggered aftershocks, occur when applied sound-wave amplitudes exceed several microstrain. These events are frequently delayed or occur as part of a cascade of small events. Vibrations also cause large slip events to be disrupted in time relative to those without wave perturbation. The effects are observed for many large-event cycles after vibrations cease, indicating a strain memory in the granular material. Dynamic stressing of tectonic faults may play a similar role in determining the complexity of earthquake recurrence.
The six scientists—three seismologists, a volcanologist, and two seismic engineers—together with a public official were put on trial in 2011 for advice they gave at a meeting of an official government advisory committee known as the Major Risks Commission held on 31 March 2009. The judge in that trial, Marco Billi, concluded that the experts' advice was unjustifiably reassuring and led some of the 309 victims of the earthquake, which struck L'Aquila in the early hours of 6 April 2009, to underestimate the threat posed by the ongoing "swarm" of tremors and so remain indoors on that fateful night rather than seek shelter outdoors. Describing the experts' risk analysis as "superficial, approximate and generic," Billi sentenced each of them to 6 years in jail.
Earthquake magnitude prediction for the Hindukush region has been carried out in this research using the temporal sequence of historic seismic activity in combination with machine learning classifiers. Predictions are made on the basis of eight mathematically calculated seismic indicators derived from the earthquake catalog of the region. These parameters are based on well-known geophysical facts: Gutenberg–Richter’s inverse law, the distribution of characteristic earthquake magnitudes and seismic quiescence. In this research, four machine learning techniques—a pattern recognition neural network, a recurrent neural network, random forest and a linear programming boost ensemble classifier—are separately applied to model relationships between the calculated seismic parameters and future earthquake occurrences. The problem is formulated as a binary classification task, and predictions are made for earthquakes of magnitude greater than or equal to 5.5 (M ≥ 5.5) over a duration of one month. Furthermore, the prediction results are analyzed for every machine learning classifier in terms of sensitivity, specificity, and positive and negative predictive values; accuracy is also considered as a performance measure. Earthquake magnitude prediction for the Hindukush using these techniques shows significant and encouraging results, constituting a step toward a robust prediction mechanism, which does not yet exist.
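One of the indicators the abstract alludes to, derived from Gutenberg–Richter’s law (log10 N = a − bM), is the b-value of a catalog window. The sketch below implements the standard Aki (1965) maximum-likelihood estimator, b = log10(e) / (mean(M) − Mmin), as one illustrative indicator; the paper's own eight-parameter formulation is not given here, and the function name and cutoff are assumptions for illustration.

```python
import math

def b_value(magnitudes, m_min):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value for a catalog window: b = log10(e) / (mean(M) - Mmin).
    Only events at or above the completeness magnitude m_min are used.
    A b-value near 1 is typical; temporal changes in b are one of the
    seismic indicators commonly fed to classifiers like those in the
    study."""
    mags = [m for m in magnitudes if m >= m_min]
    if not mags:
        raise ValueError("no events at or above m_min")
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

# Illustrative use on a small synthetic catalog window.
catalog_window = [3.1, 3.3, 3.2, 3.8, 4.4, 5.0, 3.5, 3.6]
b = b_value(catalog_window, m_min=3.0)
```

Indicators such as this, computed over sliding windows of the catalog, become the feature vectors on which the binary (M ≥ 5.5 within one month: yes/no) classifiers are trained.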