The first half of the 20th century presented seismology with important discoveries and inventions. One of the most significant was the contribution of C. F. Richter and his later-famous Richter scale, which was used to describe the distribution of earthquakes by size and depth as well as their effects.
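The "distribution of earthquakes by size" mentioned above is today usually expressed through the Gutenberg–Richter relation, log10 N = a − b·M, which Richter developed with B. Gutenberg. A minimal sketch of that relation follows; the values of a and b are illustrative assumptions, not fitted to any real catalog:

```python
def expected_count(magnitude, a=5.0, b=1.0):
    """Gutenberg-Richter relation: the expected number N of earthquakes
    with magnitude >= M satisfies log10(N) = a - b*M.
    The a and b values here are illustrative, not from real data."""
    return 10 ** (a - b * magnitude)

# With b = 1, each unit increase in magnitude reduces the
# expected count tenfold.
print(expected_count(4.0))  # 10.0
print(expected_count(5.0))  # 1.0
```

The b-value near 1 observed for most regions is what makes large earthquakes so much rarer than small ones.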
Earthquake Faulting – Cause or Effect?
Although the so-called Classical Period (1920–1960) clarified the distribution of earthquakes, it still did not provide seismology with a better understanding of the source of seismic waves. Seismologists of that time already knew that faults in the Earth's crust and earthquakes were closely related, but they viewed the faulting associated with earthquakes as an effect of earthquakes rather than their cause. The paradigmatic event that changed this understanding was the 1906 California earthquake, after which the theory of elastic rebound was first presented by H. F. Reid (1859–1944), an American geophysicist. According to this theory, "earthquakes are caused by the release of stress built up along a fault1."
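Reid's elastic-rebound idea, stress accumulating slowly along a fault until the fault's strength is exceeded and then releasing suddenly, can be caricatured with a one-block "spring-slider" model. Everything in the sketch below (the loading rate, the strength threshold, the complete stress drop) is an illustrative assumption, not taken from the source:

```python
def stick_slip(steps=100, loading_rate=1.0, strength=10.0):
    """Toy elastic-rebound model: stress loads steadily; whenever it
    reaches the fault strength, an 'earthquake' releases it to zero.
    All parameter values are illustrative assumptions."""
    stress = 0.0
    events = []  # time steps at which an event occurs
    for t in range(steps):
        stress += loading_rate
        if stress >= strength:
            events.append(t)
            stress = 0.0  # complete stress drop, for simplicity
    return events

# With perfectly steady loading, events recur periodically.
print(stick_slip(30))  # [9, 19, 29]
```

Real faults load unevenly and do not drop their stress completely, which is one reason earthquake recurrence is far less regular than this toy suggests.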
Although the theory proved useful, seismologists still had difficulty relating what happened at the source of an earthquake to what seismograms recorded. To do so, they "looked for patterns in the first motion as recorded at an observatory, or seen at several observatories around an earthquake1." This required a density of stations that at the time was available only in Japan, which is why Japanese seismologists led this field for some time.
Following the examination of first motions from isolated forces and couples, both observational and theoretical studies developed rapidly among Japanese scientists and led to diverse interpretations. It was not until after World War II that Japanese earthquake-mechanism studies were also taken up elsewhere, for example in the Soviet Union. An "independent Soviet program on this subject began after 1948 […] and developed methods for mechanism determination that made use, in principle, of first motions and amplitudes of both P and S waves3." Both Soviet and Japanese seismologists worked with local earthquakes, whereas Canadian seismologists applied their techniques largely to global earthquakes.
From Surface-wave Studies to Underground Testing
Japanese, Soviet, and Canadian seismologists may have succeeded in relating earthquakes to the seismic waves they produce, but they failed "to compile maps of inferred stress directions and relate these to regional tectonics4." Besides work in travel-time seismology, the 1950s brought further development in surface-wave studies: M. Ewing (1906–1974), an American geophysicist and oceanographer, and F. Press (1924–2020), an American geophysicist, for example, studied surface waves and developed an improved seismometer design. The new design could better record the longest-period seismic waves, which eventually led to a better understanding of seismic-wave computation.
After 1950, and especially in the 1960s, seismology faced many changes and experienced major growth, but the most important development was that seismology came to be viewed as relevant to national security. It thereby gained the status that physics had held since the 1940s, although "in this case not for the weapons it could build, but for those it could detect5." The first nuclear explosion, in 1945, for example, could be detected seismically, which is why seismology was included in US detection investigations from their beginning in 1947. Alongside infrasound and radionuclide collection, which served for atmospheric testing, seismic methods became increasingly important with the arrival of underground testing.
A Major Breakthrough in the 1960s
Underground testing, and the results of the so-called RAINIER test in particular, proved useful even though the existing knowledge was still inadequate. This led to discussions of large-scale programs of "fundamental research in seismology"6, which resulted in the creation of several such programs, for example the VELA-UNIFORM program in 1960, which also supported seismologists outside the United States. Most of the VELA-UNIFORM funds went to improving instrumentation and to further developing the seismic array, while its most important contribution to seismology was certainly the World-Wide Standardized Seismograph Network (WWSSN). The WWSSN provided seismologists with "easy access to records from standardized and well-calibrated sensors spread around the world. These data improved old results (e.g., on earthquake distribution) and made possible new ones1."
Besides the data provided by the WWSSN, other tools helped transform and speed up seismic calculations, above all rapid computation. With its help, seismologists could not only calculate results faster but also study surface-wave dispersion in realistic structures and model complex structures. This provided a different way to look at and study the data, and it brought signal processing and Fourier analysis into seismology. Such developments marked the beginning of the modern era in seismology in the 1960s, the decade that brought a "new style in seismology, with the first detection of the Earth's free oscillations of an elastic sphere8." Z. Alterman, H. Jarosch, and C. L. Pekeris had computed the periods of these oscillations for a realistic Earth model in 1959 using an early computer; comparing the observed modes with such computations yielded more precise Earth models and at the same time marked the birth of new seismological techniques, among them the digital recording of seismic data.
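The Fourier analysis that entered seismology in this era amounts to decomposing a seismogram into its constituent frequencies. A minimal sketch with NumPy follows, using a made-up two-tone "seismogram"; the frequencies, amplitudes, and sampling rate are illustrative assumptions:

```python
import numpy as np

# Synthetic "seismogram": sinusoids at 1 Hz and 5 Hz,
# sampled at 100 Hz for 10 seconds (all values illustrative).
fs = 100.0
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)

# The real FFT gives the amplitude spectrum; peaks mark the
# frequencies present in the record.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest spectral lines sit at the input frequencies.
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # [1.0, 5.0]
```

On a real record the spectrum is continuous rather than two clean lines, but the same transform underlies the filtering and spectral estimation that became routine in the 1960s.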
The modern era of seismology that began in the 1960s also brought a shift in the perception of sea-floor spreading and plate tectonics. The measurements supporting this theory were, admittedly, nonseismic, but this did not mean that earthquakes played no important role. Oceanic earthquakes in particular provided a good basis for understanding the very narrow and continuous seismic zones along ocean ridges, which later "showed that oceanic fracture zones behaved as transform faults, connecting segments of spreading ridge1."
Focal-mechanism studies contributed to the development of plate tectonics, which in turn had a large effect on seismology by integrating earthquake occurrence with other evidence of deformation. Seismicity could now be not only described but also explained: earthquakes were no longer merely something that occurred in certain places, but a consequence of the geometry and history of plate motion. These conceptual breakthroughs finally gave a better understanding of what happens at the earthquake source and of the damage earthquakes can produce. Moreover, they provided an excellent basis for later studies of seismicity and even enabled what we now believe to be the future of earthquake forecasting.
1) Agnew, Duncan Carr. "History of Seismology." Accessed 16 March 2022. Available at: https://igppweb.ucsd.edu/~agnew/Pubs/agnew.a66.pdf