Open dialogue about important issues in earthquake science presented by Center scientists, visitors, and invitees.
Gaspard Farge, University of California, Santa Cruz Tectonic tremor tracks the repeated slow rupture of certain major plate boundary faults. One of the most perplexing aspects of tremor activity is that some fault segments produce strongly periodic, spatially extensive tremor episodes, while others have more disorganized, asynchronous activity. Here we measure the size of segments that activate synchronously during tremor episodes and its relationship to the regional earthquake rate on major plate boundaries. Tremor synchronization in space seems to be limited by the activity of small, nearby crustal and intraslab earthquakes. This observation can be explained by a competition between the self-synchronization of fault segments and perturbation by regional earthquakes. Our results imply previously unrecognized interactions across subduction systems, in which earthquake activity far from the fault influences whether it breaks in small or large segments.
Tina Dura, Virginia Tech Climate-driven sea-level rise is increasing flood risks worldwide, but sudden land subsidence from great (>M8) earthquakes remains an overlooked factor. Along the Washington, Oregon, and northern California coasts, the next Cascadia subduction zone (CSZ) earthquake could cause 0.5-2 m of rapid subsidence, dramatically expanding floodplains and exposing communities to heightened flooding hazards. This talk explores the coastal geologic methods used to estimate coseismic subsidence along the CSZ, and then quantifies potential floodplain expansion across 24 Cascadia estuaries under low (~0.5 m), medium (~1 m), and high (~2 m) earthquake-driven subsidence scenarios—both today and by 2100, when compounded by climate-driven sea-level rise. We will also explore the implications for residents, infrastructure, and decision-makers preparing for the intersection of seismic and climate hazards.
Rick Aster, Colorado State University The long-period seismic background microseism wavefield is a globally visible signal that is generated by the incessant forces of ocean waves upon the solid Earth and is excited via two distinct source processes. Extensive continuous digital seismic data archives enable the analysis of this signal across nearly four decades to assess trends and other features in global ocean wave energy. This seminar considers primary and secondary microseism intensity between 4 and 20 s period from 1988 through late 2024, using 73 stations spanning 82.5 deg. N to 89.9 deg. S latitude with >20 years of data and >75% data completeness from the NSF/USGS Global Seismographic Network, GEOSCOPE, and New China Digital Networks. The primary microseism wavefield is excited at ocean wave periods through seafloor tractions induced by the dynamic pressures of traveling waves where bathymetric depths are less than about 300 m. The much stronger secondary wavefield is excited at half the ocean wave period through seafloor pressure variations generated by crossing seas. It is not restricted to shallower depths but is sensitive to acoustic resonance periods in the ocean water column. Acceleration power spectral densities are estimated using 50%-overlapping, 1-hr moving windows and are integrated in 2-s wide period bands to produce band-passed seismic amplitude and energy time series. Nonphysical outliers, earthquake signals, and Fourier series seasonal variations (with a fundamental period of 365.2422 d) are removed. Secular period-dependent trends are then estimated using L1 norm residual-minimizing regression. Increasing microseism amplitude is observed across most of the Earth for both the primary and secondary microseism bands, with average median-normalized trends of +0.15 and +0.10 %/yr, respectively. Primary and secondary band microseism secular change rates relative to station medians correlate across global seismic stations at R=0.65 and have a regression slope of 1.04, with secondary trends being systematically lower by about 0.05 %/yr. Multiyear and geographically extensive seismic intensity variations show globally observable interannual climate index (e.g., El Niño–Southern Oscillation) influence on large-scale storm and ocean wave energy. Microseism intensity histories in 2-s period bands exhibit regional to global correlations that reflect ocean-basin-scale teleconnected ocean swell, long-range Rayleigh wave propagation, and the large-scale reach of climate variation. Global secular intensity increases in recent decades occur across the entire 4-20 s microseism band, with progressively greater intensification at longer periods, consistent with more frequent large-scale storm systems that generate ocean swell at the longest periods.
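The trend-estimation recipe above (seasonal Fourier removal plus L1-norm regression on band amplitudes) is compact enough to sketch. A minimal illustration on synthetic daily amplitudes; the harmonic count, optimizer, and data are illustrative choices, not those of the study:

```python
# Sketch: fit offset + secular trend + seasonal Fourier series to a band
# amplitude series by minimizing the L1 norm of the residuals.
import numpy as np
from scipy.optimize import minimize

def l1_trend_percent_per_year(t_days, amp, n_harmonics=4, period=365.2422):
    cols = [np.ones_like(t_days), t_days]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k / period
        cols += [np.cos(w * t_days), np.sin(w * t_days)]
    G = np.column_stack(cols)
    m0, *_ = np.linalg.lstsq(G, amp, rcond=None)        # L2 fit as starting point
    res = minimize(lambda m: np.abs(amp - G @ m).sum(), m0, method="Nelder-Mead")
    return 100.0 * res.x[1] * period / np.median(amp)   # median-normalized %/yr

t = np.arange(30 * 365, dtype=float)                     # 30 years of daily values
amp = 1.0 + 0.0015 * t / 365.2422 + 0.1 * np.sin(2 * np.pi * t / 365.2422)
print(f"{l1_trend_percent_per_year(t, amp):+.2f} %/yr")  # recovers ~ +0.15 %/yr
```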
Chris Johnson, Los Alamos National Lab Significant progress has been made in probing the state of an earthquake fault by applying machine learning to continuous seismic waveforms. The breakthroughs were originally obtained from laboratory shear experiments and numerical simulations of fault shear, then successfully extended to slow-slipping faults. Applying these machine learning models typically requires task-specific labeled data for training and tuning to experimental results or a region of interest, thus limiting their generalization and robustness when broadly applied. Foundation models diverge from labeled-data training procedures and are widely used in natural language processing and computer vision. The primary difference is that these models learn a generalized representation of the data, allowing several downstream tasks to be performed in a unified framework. Here we apply the Wav2Vec 2.0 self-supervised framework for automatic speech recognition to continuous seismic signals emanating from a sequence of moderate magnitude earthquakes during the 2018 caldera collapse at the Kilauea volcano on the island of Hawai'i. We pre-train the Wav2Vec 2.0 model using caldera seismic waveforms and augment the model architecture to predict contemporaneous surface displacement during the caldera collapse sequence, a proxy for fault displacement. We find the model's displacement predictions to be excellent. We also adapt the model for near-future prediction and find hints of predictive capability, but the results are not robust. The results demonstrate that earthquake faults emit seismic signatures in a similar manner to laboratory and numerically simulated faults, and that artificial intelligence models developed for encoding speech audio may have important applications in studying active fault zones.
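As a rough architectural sketch of the approach described (not the authors' implementation), one can attach a small regression head to a Wav2Vec 2.0 encoder via the Hugging Face transformers library; in the actual workflow the encoder would first be pre-trained self-supervised on caldera waveforms:

```python
# Sketch: Wav2Vec 2.0 encoder + linear head mapping a waveform window
# to a single contemporaneous displacement value.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Config, Wav2Vec2Model

class DisplacementRegressor(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.encoder = Wav2Vec2Model(Wav2Vec2Config(hidden_size=hidden))
        self.head = nn.Linear(hidden, 1)       # displacement (proxy for fault slip)

    def forward(self, waveform):               # waveform: (batch, n_samples)
        frames = self.encoder(input_values=waveform).last_hidden_state
        return self.head(frames.mean(dim=1))   # average-pool the frame embeddings

model = DisplacementRegressor()
x = torch.randn(2, 16000)                      # two dummy seismic windows
print(model(x).shape)                          # torch.Size([2, 1])
```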
Betsy Madden, San Jose State University Seismic hazard assessments currently depend on fault slip rates, the cumulative offset over many earthquakes along individual faults, to determine the probability of earthquakes of a certain magnitude over a certain time period and potential ground motions. Geologic fault slip rates are estimated by a combination of field and laboratory techniques. Such data can be generated synthetically with mechanical models that capture slip rate variations along complex, three-dimensional fault networks. I will discuss opportunities provided by these synthetic data, as well as integration of the results with dynamic rupture models of individual earthquakes.
James Atterholt, USGS Observations of broad-scale lithospheric structure and large earthquakes are often made with sparse measurements and are low resolution. This makes interpretations of the processes that shape the lithosphere fuzzy and nonunique. Distributed Acoustic Sensing (DAS) is an emergent technique that transforms fiber-optic cables into ultra-dense arrays of strainmeters, yielding meter-scale resolution over tens of kilometers for long recording periods. Recently, new techniques have made probing fiber-measured earthquake wavefields for signatures of large-scale deformation and dynamic behavior possible. With fibers in the Eastern California Shear Zone and near the Mendocino Triple Junction, I use DAS arrays to measure a diversity of tectonic-scale phenomena. These include the length scale over which the Garlock Fault penetrates the mantle, the plumbing system of the Coso Volcanic Field at the crust-mantle boundary, the topographic roughness of the Cascadia Megathrust, and the time-dependent rupture velocity of the 2024 M7 Cape Mendocino earthquake. Dense measurements vastly improve the clarity with which we can view these processes, offering new insights into how the lithosphere evolves and what drives the behavior of large earthquakes.
Doron Morad, University of California, Santa Cruz In natural fault surfaces, stresses are not evenly distributed due to variations in the contact population within the medium, causing frictional variations that are not easy to anticipate. These variations are crucial for understanding the kinematics and dynamics of frictional motion and can be attributed to both the intact material and granular media accommodating the principal slip zone. Here, I explore the effects of heterogeneous frictional environments using two different approaches: fracture dynamics on non-mobilized surfaces and granular systems on mobilized ones. First, I will present a quantitative analysis of laboratory earthquakes on heterogeneous surfaces, combining laboratory-scale seismic measurements with high-speed imaging of the controlled dynamic ruptures that generated them. We generated variations in the rupture properties by imposing sequences of controlled artificial barriers along the laboratory fault. We first demonstrate that direct measurements of imaged slip events correspond to established seismic analysis of acoustic signals; the seismograms correctly record the rupture moments and maximum moment rates. We then investigate the ruptures’ early growth by comparing their measured seismogram velocities to their final size, and examine the laboratory conditions that allow final-size predictability during this early growth. Due to higher initial elastic energies imposed prior to nucleation, larger events accelerate more rapidly at the rupture onset for both heterogeneous and non-heterogeneous surfaces. Second, I present a new Couette-style deformation cell designed to study stress localization in two-dimensional granular media under different flow regimes. This apparatus enables arbitrarily large deformations and spans four orders of magnitude in driving velocity, from sub-millimeters per second to meters per second. Using photoelasticity, we measure force distribution and localization within the granular medium. High-speed imaging captures data from a representative patch, including both lower and upper boundaries, allowing us to characterize local variations in stress and velocity. For the first time, we present experimental results demonstrating predictive local granular behavior based on particle velocities, velocity fluctuations, and friction, defined as the ratio of shear to normal stress, tau/sigma_n. Our findings also reveal that stress patterns in the granular medium are velocity-dependent, with higher driving velocities leading to increased stress localization. These two end-member cases of frictional sliding, one dominated by gouge and the second by intact surfaces, highlight two fundamental aspects of friction dynamics. The spatial distribution of heterogeneity directly influences stress distribution and, consequently, the stability of the medium. With these experimental methods, we can now measure and even control these effects.
Cassie Hanagan, USGS Advancing our understanding of earthquake processes inevitably pushes the bounds of data resolution in the spatial and temporal domains. This talk will step through a series of examples leveraging two relatively niche geodetic datasets for understanding portions of the earthquake cycle: (1) temporally dense and sensitive borehole strainmeter (BSM) data, and (2) spatially dense sub-pixel image correlation displacement data. More specifically, I will detail the gap-filling benefits of these two datasets for different earthquakes. BSMs respond to a frequency band of deformation that bridges the capabilities of more common GNSS stations and seismometers. As such, they are typically installed to capture deformation signals such as slow slip or transient creep. In practice they are also useful for measuring dynamic and static coseismic strains. This portion of the talk will focus on enhanced network capabilities for detecting both coseismic and postseismic deformation with a relatively new BSM array in the extensional Apennines of Italy, for events tens to thousands of kilometers away. Then, we will transition toward how these instruments can constrain spatiotemporally variable afterslip following the 2019 Mw7.1 Ridgecrest, California earthquake. High spatial resolution displacements from sub-pixel image correlation serve as gap-filling datasets in another way, providing higher spatial resolution (~0.5 m) maps of the displacement fields than any other method to date, and patching areas where other methods fail to capture the full deformation magnitude or extent, such as where InSAR decorrelates. This portion of the talk will focus on new results that define expected displacement detection thresholds from high-resolution satellite optical imagery and, alternatively, from repeat lidar data. Examples will include synthetic and real case studies of discrete and diffuse deformation from earthquakes and fault creep.
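To illustrate the sub-pixel image correlation behind the second dataset, here is a minimal sketch using scikit-image's phase correlation to recover a small imposed shift (standing in for coseismic displacement); real pipelines add orthorectification, windowing over map tiles, and artifact corrections:

```python
# Sketch: recover a sub-pixel shift between "pre" and "post" image patches.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
pre = rng.random((256, 256))
post = nd_shift(pre, (0.37, -1.25), order=3, mode="wrap")  # imposed offset

offset, error, _ = phase_cross_correlation(pre, post, upsample_factor=100)
print(offset)   # about [-0.37, 1.25]: the shift registering post back onto pre
```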
Evan Hirakawa, USGS Northern California, specifically the San Francisco Bay Area, is a great place to study earthquake hazards and risk, due to its dense population centers surrounded by active faults, as well as complex geology that strongly influences earthquake ground motions. Computer simulations of seismic wave propagation which can incorporate 3D models of the subsurface properties and complex faulting behavior are good tools for studying seismic hazard, but ultimately require more development before unlocking full potential; specifically, the 3D seismic velocity models need to be further developed in many places and the simulated motions need to be validated with real, recorded data. In this talk, I will summarize a few different research projects on these topics. First I will review recent efforts to improve the USGS San Francisco Bay region 3D seismic velocity model (SFCVM), the leading community velocity model in the area, and describe some of its interesting features. This will be followed by a preview of ongoing work from collaborators and some other promising avenues to explore, in hopes of further improving the model and stoking more community involvement. In the second part of the talk, I will switch gears and move farther north, to the Humboldt County area, where a recent M7 earthquake occurred offshore. I will show some preliminary modeling results, discuss the datasets available from this event, and describe some of the local geology and efforts to better understand subsurface structure.
Omar Issa, ResiQuant (Co-Founder)/Stanford University A study by FEMA suggests that 20-40% of modern code-conforming buildings would be unfit for re-occupancy following a major earthquake (taking months or years to repair) and 15-20% would be rendered irreparable. The increasing human and economic exposure in seismically active regions emphasizes the urgent need to bridge the gap between national seismic design provisions (which do not consider time to recovery) and community resilience goals. Recovery-based design has emerged as a new paradigm to address this gap by explicitly designing buildings to regain their basic intended functions within an acceptable time following an earthquake. This shift is driven by the recognition that minimizing downtime is critical for supporting community resilience and reducing the socioeconomic impacts of earthquakes. This seminar presents engineering modeling frameworks and methods to support scalable assessment and optimization of recovery-based design, including: 1. procedures for selecting and evaluating recovery-based performance objectives, and a study of the efficacy of user-defined checking procedures; 2. a framework to rapidly optimize recovery-based design strategies based on user-defined performance objectives; and 3. building technology to support utilization of these approaches across geographies and industrial verticals. Together, these contributions provide the technical underpinnings and industry-facing data requirements to perform broad, national-scale benefit-cost analysis (BCA) studies that can accelerate decision-making and engineering intuition as resilient design progresses in the coming years.
Martijn van den Ende, Université Côte d'Azur For several years it has been suggested that Distributed Acoustic Sensing (DAS) could be a convenient, low-cost solution for Earthquake Early Warning (EEW). Several studies have investigated the potential of DAS in this context and demonstrated their methods using small local earthquakes. Unfortunately, DAS has a finite dynamic range that is easily exceeded in the near field of large earthquakes, which severely hampers any EEW efforts. In this talk, I will present a detailed analysis of this dynamic range and how it impacts EEW: where does it come from? What can we do when the dynamic range is exceeded? And is there still hope for DAS-based EEW systems?
Sara Beth Cebry, U.S.G.S. Fluid injection decreases effective normal stress on faults and can stimulate seismicity far from active tectonic regions. Based on earthquake nucleation models and measured stress levels, slip will be stable, aseismic, and limited to the fluid-pressurized region—contrary to observed increases in seismicity. To understand how fluid injection affects earthquake initiation, rupture, and termination, I used large-scale laboratory faults to experimentally link the effects of direct fluid injection to rupture behavior. Comparison between the nucleation of dynamic events with and without fluid pressure showed that rapid fluid injection into a low-permeability fault increases multi-scale stress/strength heterogeneities that can initiate seismic slip. Factors that increase the intensity of the heterogeneity, such as increased injection rate or background normal stress, promote the initiation of small seismic events that have the potential to “run away” and propagate beyond the fluid-pressurized region. Whether or not the seismic slip can “run away” depends on the background shear stress levels. When the fault was near critically stressed, dynamic slip initiated quickly after high fluid pressure levels were reached, and the dynamic slip event propagated far beyond the fluid-pressurized region. In comparison, when the fault was far from critically stressed, dynamic slip initiated hundreds of seconds after high injection pressures were reached, and this event was limited in size by the region affected by fluid pressure. We conclude that localized decreases in effective normal stress due to fluid pressure can initiate slip, sometimes seismic slip, but the background shear stress controls whether or not that slip grows into a large earthquake.
John Rekoske, University of California San Diego Rapidly estimating the ground shaking produced by earthquakes in real time, and forecasting shaking from future earthquakes, are important challenges in seismology. Numerical simulations of seismic wave propagation can be used to estimate ground motion; however, they require large amounts of computing power and are too slow for real-time problems, even with modern supercomputers. Our aim is to develop a method using both high-performance computing and machine learning techniques to obtain a close approximation of simulated seismic wavefields that can be evaluated rapidly. This approach integrates physics into the source- and site-specific ground motion estimates used for real-time applications (e.g., earthquake early warning) as well as many-source problems (e.g., probabilistic seismic hazard analysis). Specifically, I will focus this talk on applying data-driven reduced-order models (ROMs) that are based on the interpolated proper orthogonal decomposition method. I will discuss our work using ROMs (1) to instantaneously generate peak ground velocity maps and (2) to rapidly generate three-component velocity seismograms for earthquakes in the greater Los Angeles area. The approach is flexible, in that it can generate 3D elastodynamic Green’s functions which we can use to simulate seismograms for complex kinematic earthquake rupture models. Lastly, I will show how this approach can provide accurate, near-real-time wavefields that could be used to rapidly inform about possible earthquake damage.
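The interpolated-POD idea can be sketched in a few lines: compress training simulations with an SVD, interpolate the reduced coefficients across source position, and reconstruct. The snapshot function and dimensions below are synthetic stand-ins for simulated wavefields:

```python
# Sketch: data-driven ROM via SVD (POD) + interpolation of coefficients.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
sources = rng.uniform(0.0, 10.0, size=(40, 2))    # training source locations
t = np.linspace(0.0, 20.0, 500)
snapshots = np.array([np.sin(0.3 * sx + t) * np.exp(-0.1 * sy)
                      for sx, sy in sources])      # (40, 500) "wavefields"

U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 10                                             # retained POD modes
coeffs = U[:, :r] * S[:r]                          # reduced coordinates per source

interp = RBFInterpolator(sources, coeffs)          # coefficients vs. source position
rom_field = interp(np.array([[5.0, 2.0]])) @ Vt[:r]  # near-instant surrogate
print(rom_field.shape)                             # (1, 500)
```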
Haiyang Kehoe, USGS Seismograms contain information about an earthquake’s source, its path through the earth, and the local geologic conditions near a recording site. Ground shaking felt on Earth’s surface is modified by each of these contributions: the spatiotemporal evolution of rupture, three-dimensional subsurface structure, and site conditions all have a substantial impact on hazards experienced by exposed populations. In this talk, I highlight three studies that have improved our understanding of ground motion variability arising from source, path, and site effects. First, I describe the rupture process of the 2017 Mw 7.7 Komandorsky Islands earthquake, which reached supershear speeds following a rupture jump across a fault stepover, and demonstrate the enhanced hazard associated with supershear ruptures across Earth’s complex transform fault boundaries. Second, I compare high-frequency wavefield simulations of Cascadia earthquakes using various tomography models of the Puget Sound region of Washington State to highlight the role of basin structure in ground motion amplification. Third, I show horizontal-to-vertical spectral ratio maps of the continental United States and emphasize the continued importance of region-specific constraints on site characterization. While each study demonstrates progress toward understanding the individual roles of source, path, and site effects on damaging earthquake ground motions, together they underscore distinct challenges for improving seismic hazard models and their uncertainties.
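The horizontal-to-vertical spectral ratio technique in the third study can be sketched simply; the windowing and smoothing choices below are illustrative, with random noise standing in for real ambient records:

```python
# Sketch: horizontal-to-vertical spectral ratio from three-component noise.
import numpy as np
from scipy.signal import welch

fs = 100.0
e, n, z = np.random.randn(3, int(3600 * fs))       # one hour, E/N/Z components
f, pe = welch(e, fs=fs, nperseg=4096)
_, pn = welch(n, fs=fs, nperseg=4096)
_, pz = welch(z, fs=fs, nperseg=4096)

hvsr = np.sqrt(0.5 * (pe + pn) / pz)               # mean horizontal over vertical
band = (f > 0.1) & (f < 10.0)
peak_f = f[band][np.argmax(hvsr[band])]            # candidate site frequency
print(f"HVSR peak near {peak_f:.2f} Hz")
```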
Tara Nye, USGS Models of earthquake ground motion (both simulations and ground-motion models) can be likened to a puzzle with three primary pieces representing the earthquake source, site conditions, and source-to-site path. Early versions of these models were developed using average behavior of earthquakes across a variety of regions and tectonic environments. Although informative, such models do not capture the unique source, path, and site effects that are expected to have a significant influence on resulting ground motion. This talk highlights several approaches for improving modeling of ground motion by focusing efforts on the different pieces of the ground-motion puzzle. Segments of the talk include (1) constraining rupture parameters of rare tsunami earthquakes, (2) estimating site-specific high-frequency attenuation in the San Francisco Bay Area, and (3) investigating relationships between path effects and crustal properties in the San Francisco Bay Area. With continued refinement to models of ground motion, we can improve confidence and reduce uncertainty in seismic hazard and risk assessments.
Rashid Shams, University of Southern California Site response in sedimentary basins is partially governed by mechanisms associated with three-dimensional features. These include the generation of propagating surface waves by trapped and refracted seismic waves, focusing of seismic energy due to basin shape and size, and resonance of the entire basin sediment structure. These mechanisms are referred to as basin effects, and they lead to a significant increase in the amplitude and duration of observed ground motions from earthquakes. Currently, ground motion models (GMMs) incorporate basin effects using the time-averaged shear-wave velocity in the upper 30 m (V_S30) and isosurface depths (depth to a particular shear-wave velocity horizon, z_x). This approach captures site response features associated with the basin but uses parameters that are one-dimensional in nature and therefore limited in their description of lateral and other three-dimensional (3D) contributing effects. This work explores geometric features as predictive parameters in the development of region-specific models to improve the characterization of site response in sedimentary basins. We constrain basin shape using depth to sedimentary basement (depth to a particular shear-wave velocity horizon, i.e., z_1.5 and z_2.3) and depth to crystalline basement (z_cb), which are derived and validated through systematic exploration of geological cross sections and Community Velocity Model (CVM) profiles over the Los Angeles Basin (LAB). Geometric parameters, including the standard deviation of z_cb, the standard deviation of the absolute difference between z_1.5 and z_cb, the distance from the basin margin, and a spatial area of influence based on V_S30, are then computed from the finalized basin shape. Finally, residual analysis is employed to assess the derived geometric parameters for their ability to reduce bias and uncertainty in basin site response analysis.
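To make the geometric parameters concrete, here is a toy computation on a synthetic bowl-shaped basin (a stand-in for the CVM-derived LAB surfaces; the names and shapes are illustrative, not the study's):

```python
# Sketch: basin-geometry predictors from gridded basement-depth surfaces.
import numpy as np
from scipy.ndimage import distance_transform_edt

x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
z_cb = np.maximum(0.0, 6.0 * (1.0 - x**2 - y**2))   # crystalline basement depth (km)
z_15 = 0.6 * z_cb                                   # shallower z_1.5 isosurface (km)

inside = z_cb > 0.0
std_zcb = z_cb[inside].std()                        # basement-depth variability
std_dz = np.abs(z_15 - z_cb)[inside].std()          # |z_1.5 - z_cb| variability
dist_margin = distance_transform_edt(inside)        # cell distance to basin margin
print(std_zcb, std_dz, dist_margin.max())           # scale by grid spacing for km
```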
Amy Williamson, University of California Berkeley Alerts sent through earthquake early warning (EEW) programs provide precious seconds for those alerted to take simple protective actions to mitigate their seismic risk. Programs like ShakeAlert have been providing alerts for felt earthquakes across the west coast of the US for almost 5 years. Earthquakes are also one part of a multihazard system and can trigger secondary natural hazards such as tsunamis and landslides. However, to be effective and timely, EEW and tsunami forecast algorithms must rely on the small amount of data available in the first moments, often of variable quality and without analyst input. This talk focuses on potential advances to EEW algorithms to better constrain earthquake location and magnitude in real time, providing improved alerts, particularly in network-sparse regions. Additionally, this talk highlights work using real-time data to generate rapid tsunami early warning forecasts, its feasibility, and the benefit of unifying earthquake and tsunami alerts into one cohesive public-facing alerting structure.
James Biemiller, USGS An unresolved aspect of tsunami generation in great subduction earthquakes is the offshore competition between coseismic deformation mechanisms, such as shallow megathrust slip, slip on one or more splay faults, and off-fault plastic deformation. In this presentation, we first review results from data-constrained 3D dynamic rupture modeling of an active plate-boundary-scale low-angle normal fault, the Mai’iu fault, that show how stress, fault structure, and the strength and thickness of overlying sediments influence shallow coseismic deformation partitioning in an extensional setting. Similar modeling approaches can shed light on shallow coseismic deformation in contractional settings, such as the Cascadia subduction zone (CSZ). Along the northwestern margin of the U.S., robust paleoseismic proxies record multiple M>8 paleoearthquakes over the Holocene, despite limited modern interface seismicity. Additionally, growth strata in the outer wedge record Late Quaternary slip on active landward- and seaward-vergent splay faults inboard of prominent variably-vergent frontal thrusts at the deformation front. The relative importance of megathrust vs. splay fault slip in generating tsunami hazards along the Pacific Northwest coastline is relatively unconstrained. Here, we develop data-driven 3D dynamic rupture models of the CSZ to analyze structural controls on shallow rupture processes including slip partitioning across the frontal thrusts, splays, and underlying decollement. Initial simulations show that trench-approaching ruptures typically involve meter-scale slip on variably oriented preexisting planar splay faults. Splay slip reduces slip on the subduction interface in a shadowed zone updip of their intersection, with greater splay slip leading to stronger shadowing. We discuss two structural controls on splay faults’ coseismic slip tendency: their dip angle and vergence. Gently dipping splays host more slip than steeply dipping ones and seaward-vergent splays host more slip than landward-vergent ones. We attribute these effects to distinct static and dynamic mechanisms, respectively. Finally, we show initial results from simulations with newly mapped frontal thrust geometries from CASIE21 seismic reflection data and discuss future directions for our CSZ dynamic rupture modeling project.
Jaeseok Lee, Brown University Field observations indicate that fault systems are structurally complex, yet fault slip behavior has predominantly been attributed to local fault plane properties, such as friction parameters and roughness. Although relatively unexplored, emerging observations highlight the importance of fault system geometry in the mechanics governing earthquake rupture processes. In this talk, I will discuss how the geometrical complexities of fault networks impact various aspects of fault slip behavior, based on the analysis of surface fault trace misalignment. We discover that surface fault traces in creeping regions tend to be simpler, whereas those in locked regions are more complex. Additionally, we find correlations between complex fault geometry and enhanced high-frequency seismic radiation. Our findings suggest the potential for a new framework in which earthquake rupture behavior is influenced by a combination of geometric factors and rheological yielding properties.
Thomas Lee, Harvard University Since the first seismograms were recorded in the late 19th century, the seismological community has accumulated millions of ground motion records on both paper and film. While almost all analog seismic recording ended by the late 20th century, replaced by digital media, the still-extant archives of paper and film seismograms are invaluable for many ongoing scientific applications. This long-running record of ground motion is crucial for developing understanding of how both natural and anthropogenic events have changed the Earth and its processes throughout the last century. Today, most of these records are housed in institutions with limited resources, which must prioritize certain objects or types of objects for preservation and access. For example, when seismologists today are forced to triage collections, the bulky paper records are often more at risk of deaccessioning than more compact film copies. However, the alterations introduced in reformatting (i.e., paper to film) as well as the preservation requirements of the various records are not often fully understood or appreciated. To make these decisions in an informed way, it is vital to know the stability of the recording media and the level of accuracy that can be obtained from these different records. For example, image distortion and available color depth in paper and microfilm copies can result in discrepancies in derived time series, which could lead to significant errors in products such as earthquake magnitude and location. We present lessons learned from recent experiences with modern archiving and processing of legacy seismic data. These include techniques for data rescue (including both scanning and conversion to time series), the importance of characterizing the full processing chain, and the importance of involving archivists and citizen science in preservation efforts.
Ross Maguire, University of Illinois Urbana-Champaign Seismic source parameters – including hypocentral locations and focal mechanism solutions – provide the most direct constraints for understanding tectonic stresses and deformation processes within planetary interiors. The SEIS (Seismic Experiment for Interior Structure) seismometer deployed by the InSight mission to Mars detected and located approximately 40 high-quality marsquakes. However, inferences about the present-day deformation and seismotectonics of Mars have been hindered by the non-uniqueness and technical challenges that arise when using seismograms recorded by just a single seismometer. In this talk, I will review what we have learned about seismic activity on Mars from InSight and discuss how waveform-based inversions of data from a single station have helped us gain a clearer understanding of martian tectonics. Several high-quality marsquakes from the Cerberus Fossae region appear to be consistent with an active extensional tectonic setting, while the largest marsquake observed by InSight – S1222a – was likely due to compressional stresses near the hemispheric dichotomy boundary. Ongoing work is aimed at gaining a better understanding of the uncertainties involved in single station moment tensor inversions and developing best practices for obtaining robust solutions.
Roland Burgmann, University of California Berkeley Decadal changes in aseismic fault slip rate on partially coupled faults reflect long-term changes in fault loading and/or fault-frictional properties that can be related to earthquake cycle processes. We consider constraints on aseismic fault slip rates from historical alignment array measurements, InSAR measurements since 1992, and repeating micro-earthquakes since 1984 along the Hayward fault, California. During recent decades, creep rates consistently increased along the whole Hayward fault. Accelerated fault creep associated with M > 4 earthquakes on the northern Hayward fault in 2007, 2010 and 2018 may explain some of the creep-rate accelerations, but the acceleration on the remaining Hayward fault does not seem to be directly tied to small-scale afterslip transients. Dynamic models of partially coupled faults through earthquake cycles suggest non-stationary asperities that continue to decrease in size late in the earthquake cycle. We explore such asperity erosion models to explain the apparent decadal acceleration of aseismic Hayward fault slip.
Savvas Marcou, University of California Berkeley MyShake is a free smartphone application, developed at UC Berkeley, that serves as one of the main delivery mechanisms for earthquake early warning (EEW) alerts issued to the US West Coast by the USGS ShakeAlert system. While it is best known for delivering alerts to the public, MyShake was originally conceived as a platform for crowdsourcing earthquake data. MyShake currently collects crowdsourced shaking experience reports, EEW message delivery receipts, as well as triggered acceleration waveforms using the onboard smartphone accelerometer. In this talk, I will present the progress made in taking advantage of the growing MyShake crowdsourced database, in two key areas: 1) studies of ground motion variability in California, and 2) earthquake early warning performance assessment and predictive modeling. In the first part of the talk, I will introduce the MyShake Ground Motion Database, a collection of over 1500 acceleration observations from 2019 to 2023 using triggered waveforms from MyShake phones globally, with a focus on California. Past research shows that the accelerations recorded by MyShake phones are systematically higher than accelerations recorded by ground-based (i.e. “free-field”) stations, likely due to the modifying effect of buildings. My work thus treats these accelerations as a distinct intensity metric, rather than equivalent to free-field acceleration. I show the development of a bespoke MyShake Ground Motion Model, and how it can be used to map the spatial variability of ground motion with remarkable correspondence to results from the free-field. I will then discuss the potential applications in ground motion products such as ShakeMap, as well as the validation of next-generation non-ergodic (i.e., location-specific) ground motion models. In the second part of the talk, I will focus on MyShake’s role as an EEW delivery platform. I will present a new methodology that uses alert delivery data to rapidly assess the end-to-end performance of the US West Coast alerting pipeline every time MyShake sends out an alert. I then introduce a thought experiment where we combine our understanding of delivery latencies with the network-based, point-source EEW algorithm EPIC to demonstrate the potential effectiveness of EEW in the February 2023 Türkiye earthquake doublet. Finally, I will show what the results could mean for EEW performance in a large California earthquake.
Zhigang Peng, Georgia Institute of Technology Earthquakes are not frequent in the Southeastern United States (SEUS), but they do occur in areas with long-term seismic activity and in new regions with no clear seismic history. Most of these earthquakes have relatively small magnitudes (less than 1) and are therefore not well recorded by the current seismic network. Some are extremely shallow, with hypocenters less than a few kilometers deep. In this talk, I will provide an update on our recent efforts to study shallow microearthquakes in several regions of the SEUS using dense nodal seismic networks and advanced processing techniques such as machine learning and template matching. This includes the 2020 magnitude 5.2 Sparta earthquake sequence in North Carolina, the Elgin-Lugoff earthquake swarm in South Carolina that began in December 2021, and the rock exfoliation event at Arabia Mountain in Georgia on July 17, 2023. Studying these extremely shallow events may offer new insights into the physical processes of earthquake nucleation.
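One of the techniques named above, template matching, reduces to a normalized cross-correlation scan; a minimal sketch with ObsPy, using synthetic data in place of nodal recordings:

```python
# Sketch: detect repeats of a template waveform in continuous data.
import numpy as np
from obspy.signal.cross_correlation import correlate_template

rng = np.random.default_rng(3)
template = rng.standard_normal(200)            # 2 s template at 100 Hz
data = 0.5 * rng.standard_normal(360000)       # 1 hr of continuous "noise"
for i0 in (50000, 190000, 301000):             # bury three repeats
    data[i0:i0 + 200] += template

cc = correlate_template(data, template)        # normalized cross-correlation
print(np.where(cc > 0.8)[0])                   # recovers the three burial indices
```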
Ettore Biondi, California Institute of Technology Traveltime-based tomographic methods have been extensively explored and employed by researchers since the 1980s. Such algorithms have been successfully applied to various geophysical applications, ranging from seismic exploration to regional and global seismology. However, given the advancements in computational architectures over the last 20 years, full-waveform methodologies now dominate most subsurface-parameter inversion applications. These workflows seek to match all the waveforms present within active seismic data or synthetic Green’s functions obtained by cross-correlating ambient noise. Despite this decrease in the popularity of traveltime-based tomographic approaches, these methods have great potential when applied to distributed acoustic sensing (DAS) data for seismic applications. DAS instruments can operate on existing telecommunication fibers and transform them into large-scale, high-resolution seismic arrays. We demonstrate such potential by applying an Eikonal traveltime double-difference tomography algorithm to DAS data recorded in the Long Valley caldera, located in the Eastern Sierra region of California. This active volcanic area has been extensively studied in the last 50 years, and its recent unrest remains poorly understood. We employ two DAS arrays composed of almost 9000 channels along a 90-km north-south transect across the caldera to characterize the subsurface structures underneath the area. We use almost 2000 cataloged events and apply a machine-learning algorithm to accurately pick the P and S arrival times needed for the tomography. The range and spatial resolution of the DAS arrays allow us to retrieve structures that could not be resolved by previous studies that employed only conventional station recordings. Our results agree well with previous studies and highlight the presence of a low-velocity basin along the Mono-Inyo craters. Both P- and S-wave models also show a low-velocity structure centered below Mono Lake, which agrees with historical gravity surveys. Moreover, the low Vp/Vs ratio inverted below the Long Valley caldera suggests a lack of newly intruded materials at depths above 10 km and a clear separation between the shallow low-velocity basins and the ≥10-km-deep magmatic reservoir.
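The forward problem inside such an Eikonal tomography, computing first-arrival times through a trial velocity model, can be sketched with a fast-marching solver (scikit-fmm here; the model and source placement are synthetic):

```python
# Sketch: first-arrival traveltimes through a 2D velocity model.
import numpy as np
import skfmm

nz, nx, h = 100, 200, 0.1                      # grid and 0.1 km spacing
velocity = np.full((nz, nx), 3.0)              # 3 km/s background
velocity[:30, 80:120] = 1.8                    # slow shallow basin

phi = np.ones((nz, nx))
phi[5, 100] = -1.0                             # zero level set around the source
ttime = skfmm.travel_time(phi, velocity, dx=h) # traveltime field in seconds
print(ttime[0].min())                          # earliest surface arrival
```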
Janet Watt, U.S. Geological Survey Investigating the geologic record of shallow megathrust behavior is imperative for estimating the earthquake hazard and tsunamigenic potential along the Cascadia subduction zone. Ship-borne sparker seismic imaging and multibeam mapping are integrated with targeted autonomous underwater vehicle (AUV) bathymetry and sub-bottom data to document along-strike variability in seafloor morphology and deformation mode along the Cascadia subduction zone frontal thrust offshore Oregon and northern California in unprecedented detail. The combined use of high- and ultra-high-resolution bathymetric (30-m to 1-m grids) and seismic imaging (vertical resolution ranging from 2 m to centimeters) allows us to evaluate geologic evidence for co-seismic activation of frontal thrust structures. Multi-scale data synthesis enables investigation of linkages between shallow deformation style and deeper decollement structure and accretionary mode. The ~580-km-long frontal thrust splay fault system between Astoria and Eel Canyons is divided into seven sections based on along-strike variability in shallow structure and seafloor morphology. Many late Pleistocene to Holocene active fault strands within 10 km of the deformation front exhibit both geomorphic and stratigraphic evidence for coseismic activation. The high degree of variability in detailed shallow structure and morphology along the frontal thrust reflects changes in the crustal-scale frontal thrust fault geometry and décollement level. We present a conceptual model that links the along-strike variability in frontal thrust morpho-tectonics to differences in accretionary mode. Results suggest shallow megathrust rupture including co-seismic activation of frontal thrust splay faults is a common rupture mode along much of the Cascadia margin that should be considered in future earthquake and tsunami rupture models and hazard assessments.
Jeanne Hardebeck, U.S. Geological Survey Aftershock triggering is commonly attributed to static Coulomb stress changes from the mainshock. A Coulomb stress increase encourages aftershocks in some areas, while in other areas termed “stress shadows” a decrease in Coulomb stress suppresses earthquake occurrence. While the predicted earthquake rate decrease is rarely seen, lower aftershock rates are observed in the stress shadows compared to stress-increase regions. However, the question remains why some aftershocks occur in the stress shadows. I examine three hypotheses: (1) Aftershocks appear in shadows because of inaccuracy in the computed stress change. (2) Aftershocks in the shadows occur on faults with different orientations than the model receiver faults, and these unexpected fault orientations experience increased Coulomb stress. (3) Aftershocks in the shadows are triggered by other physical processes, specifically dynamic stress changes. For the 2016 Kumamoto, Japan, and 2019 Ridgecrest, California, sequences, the first two hypotheses seem unlikely. Over many realizations of the stress calculations with different modeling inputs, numerous aftershocks consistently show negative static Coulomb stress changes both on the model receiver faults and on the individual event focal mechanisms. Hypothesis 3 appears more likely, as the spatial and temporal distributions of aftershocks in the stress shadows are consistent with the expectations of dynamic triggering: the aftershocks occur mainly in a burst over the first few days to weeks, and decay with distance like near-field body waves. The time series of dynamic stress can be modeled, and numerous metrics explored, such as the maximum dynamic Coulomb stress change and the period and duration of the stressing. Determining which metrics correspond to aftershock occurrence in the stress shadows may be useful in discriminating between various proposed physical mechanisms of dynamic stress triggering.
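For reference, the static Coulomb failure stress change evaluated in such studies takes the standard form (sign conventions vary between papers):

```latex
\Delta \mathrm{CFS} \;=\; \Delta\tau \;+\; \mu' \, \Delta\sigma_n
```

Here Δτ is the shear stress change resolved in the slip direction of the receiver fault, Δσ_n is the normal stress change (unclamping positive), and μ′ is an effective friction coefficient absorbing pore-pressure effects; a stress shadow is a region where ΔCFS < 0 for the assumed receiver geometry.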
Theresa Sawi, U.S. Geological Survey Repeating earthquake sequences are widespread along California’s San Andreas fault (SAF) system and are vital for studying earthquake source processes, fault properties, and improving seismic hazard models. In this talk, I’ll be discussing an unsupervised machine learning-based method for detecting repeating earthquake sequences (RES) to expand existing RES catalogs or to perform initial, exploratory searches. This method reduces spectrograms of earthquake waveforms into low-dimensionality “fingerprints” that can then be clustered into similar groups independent of initial earthquake locations, allowing for a global search of similar earthquakes whose locations can afterwards be precisely determined via double-difference relocation. We apply this method to ∼4000 small (Ml 0–3.5) earthquakes located on a 10-km-long creeping segment of the SAF and double the number of detected RES, allowing for greater spatial coverage of slip-rate estimations at seismogenic depths. This method is complementary to existing cross-correlation-based methods, leading to more complete RES catalogs and a better understanding of slip rates at depth.
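The fingerprint-and-cluster pipeline can be sketched compactly; PCA and k-means below stand in for the specific dimensionality reduction and clustering used in the study:

```python
# Sketch: spectrogram "fingerprints" clustered into candidate families.
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
waveforms = rng.standard_normal((400, 2000))        # 400 events, 20 s at 100 Hz

specs = np.array([spectrogram(w, fs=100.0, nperseg=128)[2].ravel()
                  for w in waveforms])
fingerprints = PCA(n_components=16).fit_transform(np.log10(specs + 1e-12))
labels = KMeans(n_clusters=20, n_init=10).fit_predict(fingerprints)
print(np.bincount(labels))                           # candidate family sizes
```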
Andres Pena Castro, University of New Mexico The seismicity detected in the Antarctic continent is low compared with other continental intraplate regions of similar size. The low seismicity may be explained by (i) insufficient strain rates to generate earthquakes, (ii) scarcity of seismic instrumentation for detecting relatively small earthquakes, (iii) lack of comprehensive data mining for tectonic seismicity, or a combination of all the aforementioned. There have been ∼200 earthquakes in the interior of the Antarctic continent in the past two decades according to the International Seismological Centre (ISC) and other global catalogs. Previous studies in Antarctica have used seismometers installed for relatively short periods of time (∼days to months) to detect icequakes and/or tectonic earthquakes, but a thorough integration of temporary and permanent network data is needed. Additionally, most of the reported seismicity was detected using classic earthquake detection techniques such as short-term-average/long-term-average or other energy detectors. State-of-the-art detection techniques, including machine learning, have proven to outperform classic detection techniques in different seismic sequences around the world and enable automated re-analysis of large volumes of data. Here I will present a new seismic catalog for the southernmost continent. We use a machine-learning phase picker on over 21 years of seismic data from on-continent temporary and permanent networks to obtain the most complete catalog of seismicity in Antarctica to date. The new catalog contains 60,006 seismic events within the Antarctic continent between January 1, 2000 and January 1, 2021, with event magnitudes between −1.0 and 4.5. Most of the detected seismicity occurs near Ross Island, large ice shelves, ice streams, ice-covered volcanoes, or in distinct and isolated areas within the continental interior. Their locations and waveform characteristics indicate volcanic, tectonic, or cryospheric sources. The catalog shows that Antarctica is more seismically active than prior catalogs would indicate, and it provides a resource for more specific targeting with other detection and analysis methods, such as template matching or transfer learning, to further discriminate event types and investigate diverse seismogenic processes across the continent.
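A common way to run such a picker in practice is through the SeisBench library; a hedged sketch below (the specific model, pretrained weights, and file name are placeholders, not necessarily those used for this catalog):

```python
# Sketch: deep-learning phase picking on a day of continuous data.
from obspy import read
import seisbench.models as sbm

stream = read("antarctic_station_day.mseed")       # hypothetical day-long file
picker = sbm.EQTransformer.from_pretrained("original")
annotations = picker.annotate(stream)              # P/S/detection probabilities
print(annotations)
```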
Zachary Smith, University of California Berkeley Intense dynamic stresses during earthquakes can activate numerous subsidiary faults and generate off-fault damage that alters fault properties and can impact the source processes and rupture dynamics of future earthquakes. Distinguishing how much damage accumulates during a single earthquake versus multiple earthquake cycles, and determining how the magnitude of earthquakes impacts off-fault damage, remain challenging. We combine geodetic, field, and experimental observations to evaluate the relationship between slip and off-fault deformation during a single earthquake and to assess how deformation evolves through successive earthquake cycles. Our study is focused on distributed faults that ruptured during the 2019 Mw 6.4 and 7.1 Ridgecrest earthquake sequence. Coseismic surface offsets are well imaged by satellite observing systems, and the faults cut dikes of the extensive Independence dike swarm, which serve as excellent linear cumulative displacement markers and records of near-fault damage exposed at Earth’s surface. Geodetic observations allow us to constrain slip and off-fault deformation due to a single event and the dikes enable us to constrain cumulative displacements. We bridge the gap between space geodetic observations of deformation and laboratory scale deformation by collecting sub-millimeter resolution ground-based imagery and LiDAR across offset dikes and along bedrock sections of the major faults. Using these high-resolution scans, we compare fault slip with mesoscale fault-damage properties (e.g., fracture density and fragment size). Optical grain size analysis shows that fault damage is lithology dependent and that asymmetric grain size reduction of bedrock across faults is common. Fault zone asymmetry may result from slip on geometrically complex faults, preferred rupture directivity on subsidiary faults, or by distributed off-fault shearing, as observed in geodetic studies. We performed successive dynamic loading rock mechanics experiments to investigate how deformation may evolve over multiple earthquake cycles leading to the development of damage zone asymmetry and pulverized zones. Integration of geodetic, field, and laboratory observations provides a multiscale view of off-fault deformation to better inform the interpretation of inelastic strain accumulation in geodetic data, damage accumulation along large strike-slip faults, and seismic hazards associated with distributed shallow faulting.
Yifang Cheng, Tongji University, Shanghai Earthquake focal mechanisms offer insights into the architecture, kinematics, and stress at depth within fault zones, providing observations that complement surface geodetic measurements and seismicity statistics. We have improved the traditional focal mechanism calculation method, HASH, through the incorporation of machine learning algorithms and relative earthquake radiation measurements (REFOC). Our improved approach has been applied to over 1.5 million catalog earthquakes in California from 1980 to 2021, yielding high-quality focal mechanisms for more than 50% of these events. In this presentation, I will elucidate how analyzing the focal mechanisms of small earthquakes advances our understanding of fault zone behaviors at varying scales, from major plate boundaries to microearthquakes. We integrate focal mechanism data with geodetic observations, and seismicity analysis to elucidate the fine-scale fault zone structure, stress field, as well as local variations of on-fault creep rate and creep direction. All observed fine-scale kinematic features can be reconciled with a simple fault coupling model, inferred to be surrounded by a narrow, mechanically weak zone. This comprehensive analysis can be applied to other partially coupled fault zones for advancing our understanding of fault zone kinematics and seismic hazard assessment. Additionally, we utilized the new focal mechanism catalog to construct a statewide stress model for California, shedding light on stress accumulation and release dynamics within this complex fault system. Our analysis suggests that local stress rotations in California are predominantly influenced by major fault geometries, slip partitioning, and inter-fault interactions. Major faults not optimally oriented for failure under the estimated stress regime are characterized by limited stress accumulation and/or recent significant stress release. Finally, I will present ongoing work that employs focal mechanisms and P-wave spectra to determine microearthquake source properties, including fault orientation, slip direction, stress drop, and 3D rupture directivity. This approach markedly improves microearthquake source characterization, thereby offering an extensive dataset for probing fine-scale fault mechanics and earthquake source physics.
Travis Alongi, U.S. Geological Survey Many of the world’s most damaging faults are offshore, presenting unique challenges and opportunities for studying earthquakes and faults. This talk explores how earthquake-generated (passive) and human-made (active) marine seismic methods improve our knowledge of on-fault slip behavior and off-fault damage. The first part of my talk explores coupling along the poorly resolved shallow offshore portion of the southernmost Cascadia subduction zone plate interface using microseismicity patterns. Knowledge of coupling provides information about the spatial distribution and magnitude of elastic strain accumulated interseismically, presumably to be released in future earthquakes. We develop a high-quality seismic catalog using a dense amphibious seismic array and advanced location techniques to provide constraints on the coupling here. We reveal an absence of shallow plate interface seismicity, suggesting high coupling. The second part of my talk focuses on the in-situ spatial distribution of secondary faults surrounding the main fault identified using marine controlled-source seismic reflection imaging. Through secondary faulting, the damage zone provides a window into the inelastic response of the Earth’s crust to strain. To better understand the damage zone, we develop a workflow to automate fault detections in seismic images, with dense sampling, over large distances (~10 km from the fault). Using this method, we find a peak in fault damage occurring at the location of the active main fault strand and a decay of damage with lateral distance. We find that rock type influences damage patterns and controls near-fault fluid flow. Additionally, accumulated fault slip determines the overall width of the damage zone, and along-strike variations in damage are controlled by fault obliquity.
Thomas Rockwell, San Diego State University The Salton Basin was free of significant water between about 100 BCE and 950 CE but has filled to the sill elevation of +13 m six times between ca 950 and 1730 CE. Based on a dense array of cone penetrometer (CPT) soundings across a small sag pond, the Imperial fault is interpreted to have experienced an increase in earthquake rate and accelerated slip in the few hundred years after re-inundation, an observation that is also seen on the southern San Andreas and San Jacinto faults. This regional, basin-wide signal of transient accelerated slip is interpreted to result from the effects on fault strength of increased pore pressure under the ~100 m of water load during full lake inundations. If the relationship between co-seismic subsidence in the sag depression and horizontal slip through the sag is even close to constant, the slip rate on the Imperial fault may have exceeded the plate rate for a few hundred years due to excess stored elastic strain that accumulated during the extended dry period prior to ca 950 CE.
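A back-of-the-envelope check on the proposed mechanism (my arithmetic, not the speaker's): once pore-pressure diffusion equilibrates, a ~100 m water column adds

```latex
\Delta p = \rho_w g h \approx 1000~\mathrm{kg\,m^{-3}} \times 9.8~\mathrm{m\,s^{-2}} \times 100~\mathrm{m} \approx 1~\mathrm{MPa},
```

which, through the Coulomb strength τ_f = μ(σ_n − p), weakens the fault by μΔp, a few tenths of a MPa for μ ≈ 0.6, comparable to the stress changes commonly invoked in earthquake triggering.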
Taiyi Wang, Stanford University All instrumented basaltic caldera collapses generate Mw > 5 very long period earthquakes. However, previous studies of source dynamics have been limited to lumped models treating the caldera block as rigid, leaving open questions related to how ruptures initiate and propagate around the ring fault, and the seismic expressions of those rupture dynamics. In the first part of my talk, I will present the first 3D numerical model capturing the nucleation and propagation of ring fault rupture, the mechanical coupling to the underlying viscoelastic magma, and the associated seismic wavefield. I demonstrate that seismic radiation, neglected in previous models, acts as a damping mechanism reducing coseismic slip by up to half, with effects most pronounced for large magma chamber volume, high magma compressibility, or large caldera block radius. Viscosity of basaltic magma has negligible effect on collapse dynamics. In contrast, viscosity of silicic magma significantly reduces ring fault slip. In the second part of my talk, I compare simulation results with the 2018 Kīlauea caldera collapse. Three stages of collapse, characterized by ring fault rupture initiation and propagation, deceleration of the downward-moving caldera block and magma column, and post-collapse resonant oscillations, in addition to chamber pressurization, are identified in simulated and observed (unfiltered) near-field seismograms. A detailed comparison of simulated and observed displacement waveforms corresponding to collapse earthquakes with hypocenters at various azimuths of the ring fault reveals a complex nucleation phase for earthquakes initiated on the northwest. At the end of my talk, I will show ongoing work in deriving rigorous seismic representations of caldera collapse earthquakes from dynamic rupture simulations. The theory is fully general and can be applied to other volcanic processes, enabling parameterization of seismic inverse problems consistent with source physics.
Shinji Toda, Tohoku University The 1 Jan 2024 Noto Hanto earthquake launched a plethora of ills on the Noto Hanto population, taking 200 lives and causing $25B in damage, only $5B of which was insured. These ills include a tsunami that arrived within a few minutes of the mainshock, as well as unexpectedly strong shaking throughout the Noto peninsula. In addition to direct shaking damage, the shaking triggered massive landslides in steep terrain and caused extensive liquefaction in coastal marshes and estuaries. Coastal uplift of up to 4 m also lifted fishing harbors out of the water. Because the affected population is located nearly on top of the epicenter, neither the earthquake early warning nor the tsunami warning was effective. The earthquake was preceded by an extremely intense 3-year-long seismic swarm, and so efforts are under way to discern whether the swarm triggered the earthquake, and if so, how. Whether swarms can trigger great quakes is a key question with which we must now grapple, as swarms are common in California and elsewhere in Japan as well. Sadly, the previously mapped offshore fault that ruptured was not used in the Japanese HERP hazard assessment, and so the hazard on the Noto peninsula had been greatly underestimated.
Mong-Han Huang, University of Maryland The Ross Ice Shelf (RIS) in Antarctica is the largest ice shelf in the world. As the RIS flows toward the Ross Sea, a buildup of tensile stress due to increasing ice flow velocity develops a series of flow-perpendicular rift zones. Although these rifts are essential in contributing to future calving and reduction in size of the ice shelf, their material properties and mechanical response to external stress at the rift-zone scale (~10-100 km) are poorly understood, partly due to a lack of in-situ observations at high spatial and temporal resolution to characterize key rift processes. Using seismometers and GPS stations deployed by the NSF DRRIS project and, more recently, by our team, we further explore the link between seismicity, tidal cycles, and air temperature at two rifts of different ages. We find that icequakes along the major rift zones on the RIS are modulated by the oscillating tidal stressing rate, and that this modulation is stronger in older rifts. We adapt the theory proposed by Heimisson and Avouac (2020) for seismicity rate under oscillating stresses to icequakes. On the RIS, the characteristic time scale for elevated icequake rates to decay to the background rate is much shorter than the period of the tidal stresses, and therefore the seismicity rate is proportional to the stressing rate. This also suggests that how stress varies in time, rather than the total amount of stress, contributes more to brittle fracture in ice shelves. Constraining the current behavior of ice shelf rifts and their modulation by oscillating stresses will help inform their future stability in a changing climate.
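The proportionality between seismicity rate and stressing rate in this fast-relaxation limit can be sketched in a few lines. The minimal example below evaluates the quasi-static limit of rate-and-state seismicity theory (cf. Heimisson and Avouac, 2020), in which the rate tracks the instantaneous stressing rate; all parameter values are illustrative and not fit to RIS data.

```python
import numpy as np

# Quasi-static limit: when the characteristic decay time of a rate
# transient is much shorter than the forcing period, the seismicity
# rate R tracks the instantaneous stressing rate, R/r ~ tau_dot/tau_dot_0.
# All numbers below are illustrative, not fit to RIS observations.
r = 10.0                      # background icequake rate (events/day)
tau_dot_0 = 1.0               # background stressing rate (kPa/day)
A, T = 5.0, 0.5               # tidal stress amplitude (kPa) and period (days)

t = np.linspace(0.0, 2.0, 1000)                        # time (days)
tau_dot = tau_dot_0 + A * (2 * np.pi / T) * np.cos(2 * np.pi * t / T)
R = r * np.maximum(tau_dot, 0.0) / tau_dot_0           # rate clipped at zero

print(f"peak/background rate ratio: {R.max() / r:.1f}")
```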
Peter Shearer, University of California, San Diego Similar-sized earthquakes vary in the strength of their high-frequency radiation and various modeling assumptions can be used to translate these differences into stress-drop estimates. Empirical methods are widely applied to correct earthquake spectra for path effects in order to estimate corner frequencies and stress drops, but suffer from tradeoffs among model parameters that hamper estimates of absolute stress drop and comparisons between different studies or regions. Based on our recent work documenting hard-to-resolve tradeoffs between absolute stress drop, stress-drop scaling with moment, high-frequency fall-off rate, and empirical corrections for path and attenuation terms, we adopt a new approach in which the corner frequencies of the smallest earthquakes in each region are fixed to a constant value. This removes any true coherent spatial variations in stress drops among the smallest events but ensures that any spatial variations seen in larger-event stress drops are real and not an artifact of inaccurate path corrections. Applying this approach across southern California, we document spatial variations in stress drop for M 1.5 to 4 earthquakes that agree with previous work, such as lower-than-average stress drops in the Salton Trough, as well as small-scale stress-drop variations along many faults and aftershock sequences. However, our results are unable to independently determine the average stress drop for small earthquakes, highlighting the limitations of purely empirical approaches to spectral analysis for earthquake source properties and the importance of determining shallow crustal attenuation models.
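For context, corner frequencies are typically estimated by fitting an omega-square (Brune-type) spectral model to displacement spectra, and the tradeoffs described above arise because the spectral level, corner frequency, and path terms are estimated jointly; fixing the corner frequency of the smallest events amounts to holding that parameter constant for those events in the joint inversion. The sketch below fits an omega-square model to synthetic data in log space; the spectral shape, noise model, and all numbers are illustrative assumptions, not the study's processing chain.

```python
import numpy as np
from scipy.optimize import curve_fit

# Omega-square (Brune-type) source spectrum commonly fit to displacement
# spectra: u(f) = Omega0 / (1 + (f/fc)**2). Fit in log space for stability.
def log_omega_square(f, log_omega0, log_fc):
    return log_omega0 - np.log(1.0 + (f / np.exp(log_fc)) ** 2)

f = np.logspace(-1, 2, 200)                              # frequency (Hz)
true = np.exp(log_omega_square(f, np.log(1e-3), np.log(2.0)))
spec = true * np.exp(0.1 * np.random.randn(f.size))      # synthetic noisy spectrum

popt, _ = curve_fit(log_omega_square, f, np.log(spec), p0=[np.log(1e-4), 0.0])
print(f"estimated corner frequency fc = {np.exp(popt[1]):.2f} Hz")
```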
Toño Bayona, University of Bristol The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global community of scientists whose mission is to advance earthquake predictability research through the rigorous and prospective evaluation of probabilistic seismicity forecasts. One of CSEP’s major international achievements is the development and operation of dozens of time-varying and time-invariant seismicity models for California, including various versions of the well-established Epidemic-Type Aftershock Sequence (ETAS) and Short-Term Earthquake Probability (STEP) models, globally calibrated models, hybrid models, and non-parametric models. Over the past ten years, prominent seismic sequences, such as the 2010 Mw 7.2 El Mayor-Cucapah, 2014 Mw 6.8 Ferndale, and 2019 Mw 7.1 Ridgecrest sequences, have been observed in California, providing a unique opportunity to comprehensively evaluate our current ability to forecast earthquakes and thus improve probabilistic seismic hazard assessments. In this talk, I will describe the models, statistical tests, and data used in two CSEP forecast experiments, one comparing globally calibrated earthquake forecasting models with regional models and the other evaluating one-day seismicity models for California, and present the results of their respective prospective tests. I will discuss, in particular, the ability of the models to forecast the number and epicentral locations of observed M ≥ 4 earthquakes and compare their short- and long-term performance with that of a time-invariant smoothed seismicity model.
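As one example of the statistical tests involved, CSEP-style evaluations include a Poisson number test (N-test) comparing the observed event count with a forecast's expected count (cf. Zechar et al., 2010). A minimal sketch follows, with illustrative numbers; the community pyCSEP toolkit provides full implementations of these tests.

```python
from scipy.stats import poisson

# CSEP-style Poisson number test (N-test): compare the observed number of
# target earthquakes with the forecast's expected count. Quantile scores
# near 0 or 1 indicate that the forecast under- or over-predicts the rate.
def n_test(n_observed, n_forecast):
    delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)  # P(N >= n_obs)
    delta2 = poisson.cdf(n_observed, n_forecast)            # P(N <= n_obs)
    return delta1, delta2

# Illustrative values only: a forecast expecting 8.2 M>=4 events when
# 14 were observed.
d1, d2 = n_test(14, 8.2)
print(f"delta1 = {d1:.3f}, delta2 = {d2:.3f}")
```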
Jack W. Baker, Stanford University The amplitude of ground shaking during an earthquake varies spatially, due to location-to-location differences in source features, wave propagation, and site effects. These variations have important impacts on infrastructure systems and other distributed assets. This presentation will provide an overview of efforts to quantify spatial correlations in amplitudes, via observations from past earthquakes and numerical simulations. Regional risk analysis results will be presented to demonstrate the potential role of spatial correlations on impacts to the built environment. Traditional techniques for fitting empirical correlation models will be discussed, followed by a proposal for new techniques to account for soil conditions and other site-specific effects. Prospects for future opportunities in this field will also be addressed, including the role of numerical simulations and advanced risk assessment.
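To make the notion of spatial correlation concrete, empirical models often represent within-event residual correlation as an exponential decay with separation distance h, e.g., rho(h) = exp(-3h/b), the form used by Jayaram and Baker (2009). The sketch below builds such a correlation matrix and simulates correlated residuals; the range parameter and site coordinates are illustrative assumptions, and fitted ranges vary with spectral period and region.

```python
import numpy as np

# Exponential correlation of ground-motion residuals with separation
# distance h (km): rho(h) = exp(-3h / b), where b is the range at which
# correlation drops to ~0.05. The value of b below is illustrative.
b = 25.0
sites = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 5.0], [30.0, 20.0]])

h = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
rho = np.exp(-3.0 * h / b)

# Simulate spatially correlated within-event residuals via Cholesky.
L = np.linalg.cholesky(rho + 1e-10 * np.eye(len(sites)))
residuals = L @ np.random.randn(len(sites))
print(residuals)
```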
Emily Mongold, Stanford University The impact of liquefaction on a regional scale is not well understood or modeled with traditional approaches. This talk presents a method to quantitatively assess liquefaction hazard and risk on a regional scale, accounting for uncertainties in soil properties, groundwater conditions, ground shaking parameters, and empirical liquefaction potential index (LPI) equations. The regional analysis is applied to a case study to calculate regional occurrence rates for the extent and severity of liquefaction and to quantify losses resulting from ground shaking and liquefaction damage to residential buildings. We present a regional-scale metric to quantify the extent and severity of liquefaction. A sensitivity analysis on epistemic uncertainty indicates that the two most important factors influencing output liquefaction maps are the choice of empirical liquefaction equation, emphasizing the necessity of incorporating multiple equations in future regional studies, and the water table level, highlighting concerns around data availability and sea level rise. Furthermore, the disaggregation of seismic sources reveals that triggering earthquakes for various extents of liquefaction originate from multiple sources, though primarily nearby faults and large-magnitude ruptures. This finding indicates the value of adopting regional probabilistic analysis in future studies to capture the diverse sources and spatial distribution of liquefaction.
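For reference, LPI calculations commonly follow the Iwasaki-type definition, integrating a factor-of-safety deficit over the top 20 m of soil with a linear depth weight. A minimal sketch follows; the factor-of-safety profile is synthetic, and the severity comment reflects a commonly cited threshold rather than this study's metric.

```python
import numpy as np

# Liquefaction potential index (LPI) in the spirit of Iwasaki et al.:
#   LPI = integral over 0-20 m of F(z) * w(z) dz, with w(z) = 10 - 0.5*z,
#   F(z) = 1 - FS(z) where the factor of safety FS < 1, else 0.
def lpi(depth_m, fs):
    F = np.where(fs < 1.0, 1.0 - fs, 0.0)
    w = np.clip(10.0 - 0.5 * depth_m, 0.0, None)
    return np.trapz(F * w, depth_m)

depth = np.linspace(0.0, 20.0, 41)        # depth (m)
fs = 0.7 + 0.04 * depth                   # synthetic factor-of-safety profile
print(f"LPI = {lpi(depth, fs):.1f}")      # LPI > 15 is often read as severe
```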
(1) Susan Hough, (2) Kate Hutton, (1) U.S. Geological Survey, (2) Caltech (retired) On the 30th anniversary of the 17 January 1994 Northridge, California, earthquake, we present a retrospective overview of an earthquake that had an enormous, multi-faceted impact in the greater Los Angeles area. In this two-part seminar, retired Caltech seismologist Kate Hutton first discusses the response to the earthquake by the (then) Southern California Seismic Network, which found itself slammed by joint demands of data analysis and overwhelming media/public interest. In the second part, Susan Hough discusses how modern analysis and data products – the development of which was spurred by the earthquake – bring early characterizations of the sequence, including its near-field ground motions, into greater focus.
Sarah Minson, U.S. Geological Survey There are many underdetermined geophysical inverse problems. For example, when we try to infer earthquake fault slip, we find that there are many potential slip models that are consistent with our observations and our understanding of earthquake physics. One way to approach these problems is to use Bayesian analysis to infer the ensemble of all potential models that satisfy the observations and our prior knowledge. In Bayesian analysis, our prior knowledge is known as the prior probability density function or prior PDF, the fit to the data is the data likelihood function, and the target PDF that satisfies both the prior PDF and data likelihood function is the posterior PDF. Simulating a posterior PDF can be computationally expensive. Typical earthquake rupture models with 10 km spatial resolution can require using Markov Chain Monte Carlo (MCMC) to draw tens of billions of random realizations of fault slip. New technological advancements like LiDAR now provide enormous numbers of laser point returns that image surface deformation at submeter scale, exponentially increasing computational cost. How can we make MCMC sampling efficient enough to simulate fault slip distributions at submeter scale using “Big Data”? We present a new MCMC approach called cross-fading, in which we transition from an analytical posterior PDF (obtained from a conjugate prior to the data likelihood function) to the desired target posterior PDF by bringing in our physical constraints and removing the conjugate prior. This approach has two key efficiencies. First, the starting PDF is by construction “close” to the target posterior PDF, requiring very little MCMC to update the samples to match the target. Second, all PDFs are defined in model space, not data space. The forward model and data misfit are never evaluated during sampling, allowing models to be fit to Big Data with no data-dependent computational cost during sampling. It is even possible, without additional computational cost, to incorporate model prediction errors for Big Data, that is, to quantify the effects on data prediction of uncertainties in the model design. While we present earthquake models, this approach is flexible and can be applied to many geophysical problems.
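The cross-fading idea can be caricatured in one dimension. The toy below is a schematic under strong simplifying assumptions, not the authors' implementation: it starts from an analytic Gaussian posterior and gradually swaps a Gaussian conjugate prior for a positivity constraint, using a few Metropolis-Hastings updates per fade step. Note that no forward model or data misfit appears anywhere inside the sampling loop.

```python
import numpy as np

# Schematic of cross-fading (1D toy, illustrative densities only).
# Fade in a physical positivity prior while fading out the conjugate
# prior that produced the analytic posterior. Every density is defined
# in model space; the data are never touched during sampling.
mu_post, sig_post = 0.2, 1.0        # analytic Gaussian posterior (assumed)
mu_c, sig_c = 0.0, 2.0              # conjugate prior that produced it (assumed)

def log_target(m, gamma):
    if gamma > 0.0 and m < 0.0:                      # physical prior: m >= 0
        return -np.inf
    log_analytic = -0.5 * ((m - mu_post) / sig_post) ** 2
    log_conj = -0.5 * ((m - mu_c) / sig_c) ** 2
    return log_analytic - gamma * log_conj           # remove conjugate prior

rng = np.random.default_rng(0)
m = rng.normal(mu_post, sig_post)                    # exact draw at gamma = 0
for gamma in np.linspace(0.0, 1.0, 21):              # fade conjugate -> physical
    for _ in range(50):                              # a few MH updates per step
        prop = m + 0.5 * rng.standard_normal()
        if np.log(rng.random()) < log_target(prop, gamma) - log_target(m, gamma):
            m = prop
print(f"final sample from the constrained posterior: {m:.3f}")
```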
(1) Stephen Wu, (2) Keisuke Yano, Institute of Statistical Mathematics, Japan (1) In 2021, the Japanese government established the Seismology TowArd Research innovation with data of Earthquake (STAR-E) project to promote interdisciplinary research between data science and seismology. Five proposals were accepted as the core projects of STAR-E, and earthquake early warning (EEW) has become a sub-project within one of them. In this talk, I will provide an overview of the plan to improve EEW in Japan through integration with data science. While no concrete results have been obtained yet, I will share part of the blueprint for possible EEW development in Japan over the next five years. (2) In real data analysis, we often encounter mixed-domain data: multivariate data whose components lie in various domains, such as real values, categorical values, manifold values, and functional values. In this presentation, we will introduce our minimum information dependence model, a statistical model tailored to analyze mixed-domain data with potential higher-order dependencies. We will highlight its utility through applications to the ecological study of penguins and to the analysis of earthquake catalogs.
(1) Rachel Abercrombie, (2) Annemarie Baltay, (1) Boston University, (2) U.S. Geological Survey In 2021 we launched the Community Stress Drop Validation Study, focused on the 2019 Ridgecrest, California, earthquake sequence, using a common dataset. The broad aim of the collaboration is to improve the quality of estimates of stress drop and related fundamental earthquake source parameters (corner frequency, source duration, etc.) and their uncertainties, to enable more reliable ground motion forecasting, and to obtain a better understanding of earthquake source physics. Seismological estimates of stress drop from earthquake spectral measurements have become standard practice over the last 50 years, but their wide variability, model dependence, and inconsistency between studies have led to controversy and concerns about how to assess and interpret these measurements. The SCEC/USGS community study has engaged a wide international community focused on improving methods and on distinguishing physical earthquake source variation from random and systematic scatter and bias. To date, 18 research groups have submitted 28 different measurements of source parameters for earthquakes in the 2019 Ridgecrest sequence, with a focus on 55 events of M2 to 5. These approaches include spectral decomposition/generalized inversion, empirical Green’s function analysis in both frequency and time domains, and ground-motion- and single-station-based approaches. Comparison of submitted stress drops reveals considerable systematic and random scatter, but also shows consistency between events; for some events, methods agree on either relatively high or low stress drops. Ongoing focus is on understanding the relative influences of different analysis parameter choices, assumptions about attenuation, the frequency range of the data, and the growing evidence of widespread complexity and heterogeneity in even small earthquake ruptures. We welcome new members wishing to observe, learn, or more actively participate; more information can be found at https://www.scec.org/research/stress-drop-validation.
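For orientation, the conversion from corner frequency to stress drop underlying these comparisons is commonly a Brune-type relation, sketched below with illustrative values. The constant k is model dependent (k of about 0.37 for the Brune model is assumed here; other rupture models imply different k), which is itself one source of the inter-study inconsistency the project addresses.

```python
# Brune-type conversion from corner frequency to stress drop:
#   source radius  a = k * beta / fc    (k ~ 0.37 assumed; model dependent)
#   stress drop    dsigma = (7/16) * M0 / a**3
Mw, fc, beta, k = 4.0, 2.0, 3500.0, 0.37     # illustrative values (Hz, m/s)
M0 = 10.0 ** (1.5 * Mw + 9.05)               # Hanks & Kanamori moment (N m)
a = k * beta / fc                            # source radius (m)
dsigma = 7.0 / 16.0 * M0 / a ** 3            # stress drop (Pa)
print(f"stress drop ~ {dsigma / 1e6:.1f} MPa")
```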
Heather Crume, California Geological Survey Surface creep has been documented on the San Andreas fault (SAF) since the 1960s. From Parkfield in the southeast to San Juan Bautista (SJB) in the northwest, the SAF is largely creeping and accommodates most of the ~38 mm/year right-lateral plate motion. The SJB section of the SAF lies at the northwest boundary of the central creeping section, forming a creeping-to-locked transition. Such transition sections are known to be potential zones for earthquake nucleation. Spatiotemporal changes in fault creep within this locking transition provide a potential quantitative measure for assessing the seismic hazard of the SAF system in the region. In addition to steady fault creep, episodic creep events occur, characterized by accelerated slip of a few millimeters to centimeters over several days. However, knowledge of the along-strike and downdip extent and the propagation velocity of these events is limited by the sparse coverage of current creepmeters. How do these events propagate? What is their magnitude? What factors drive their occurrence? Moreover, there is a need to address environmental effects that can confound creep data and to develop appropriate corrections. To address some of these questions, we have initiated a densification of the current creepmeter array by installing new instruments and renovating those in disrepair. We report results from some of these sensors. At Fox Creek, south of Hollister, we installed two creepmeters 130 m apart to measure creep event propagation velocity. One was equipped with an orthogonal sensor to measure dilation. During the dry season, ≈0.16-0.35 mm of fault dilation accompanied complex creep event sequences with cumulative amplitudes of 3.5-5.8 mm. In the months following each sequence, the fault zone slowly returned to its pre-event width. During one creep event, a southward-propagating dislocation was observed with a velocity of 0.4 km/hour. We also observe distinct differences in the amplitude and shape of creep events. Further, using the relationship between the orthogonal and oblique instruments, we are able to correct for apparent left-lateral slip caused by fault closing during a rain event.
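The geometry behind that correction can be sketched directly: an instrument crossing the fault at angle theta records a mix of fault-parallel slip and fault-normal opening, so a paired oblique and orthogonal installation inverts for both. The oblique angle (30 degrees is a common creepmeter design, assumed here) and the measured values below are illustrative assumptions, not the Fox Creek data.

```python
import numpy as np

# Resolving fault-parallel slip s and fault-normal opening w from a pair
# of creepmeters: an instrument crossing the fault at angle theta records
# a length change dL = s*cos(theta) + w*sin(theta). With one oblique
# (theta = 30 deg, assumed) and one orthogonal (theta = 90 deg) sensor,
# the 2x2 system inverts directly.
theta_obl, theta_orth = np.radians(30.0), np.radians(90.0)
G = np.array([[np.cos(theta_obl), np.sin(theta_obl)],
              [np.cos(theta_orth), np.sin(theta_orth)]])

dL = np.array([4.1, -0.25])     # measured length changes (mm), illustrative
s, w = np.linalg.solve(G, dL)   # negative w = fault closing (e.g., during rain)
print(f"slip = {s:.2f} mm, opening = {w:.2f} mm")
```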
Shuo Ma, San Diego State University One crucial yet unanswered question about the 2011 Tohoku-Oki earthquake and tsunami is what generated the largest tsunami (up to 40 m) along the Sanriku coast north of 39°N without large slip near the trench. A minimalist dynamic rupture model with wedge plasticity is presented to address this issue. The model incorporates the important variation of sediment thickness along the Japan Trench into the Japan Integrated Velocity Structure Model (JIVSM). By revising a heterogeneous stress drop model, the dynamic rupture model with a standard rate-and-state friction law can explain the GPS, tsunami, and differential bathymetry data well (within data uncertainties) with minimal model tuning. The rupture is driven by a large patch of stress drop up to ~10 MPa near the hypocenter, with significantly smaller stress drop (< 3 MPa) in the upper ~10 km. The largest shallow slip reaches 75.67 m close to the trench, ~50 km north of the hypocenter, and is dominated by elastic off-fault response, caused by the large fault width, the free surface, the shallowly dipping fault geometry, and the northward increase in sediment thickness. North of the large shallow slip zone, however, inelastic deformation of thick wedge sediments significantly controls the rupture propagation along the trench, giving rise to slow rupture velocity (~850 m/s), diminishing shallow slip, and efficient seafloor uplift. The short-wavelength inelastic uplift produces an impulsive tsunami consistent with the observations off the Sanriku coast in terms of timing, amplitude, and pulse width. Wedge plasticity and the variation of sediment thickness along the Japan Trench thus provide a self-consistent interpretation of the along-strike variation of near-trench slip and the anomalous tsunami generation in the northern Japan Trench in this earthquake.
Artemii Novoselov, Stanford University This seminar introduces PhaseHunter, a deep learning framework initially designed for the precise estimation and uncertainty quantification of seismic phase onset times. Building upon this foundational capability, PhaseHunter has evolved to handle a broader range of seismic applications through a probabilistic deep learning regression approach. This enables the framework to analyze both continuous and binary properties of seismic signals, thereby extending its potential applications to include earthquake location, seismic tomography, source discrimination, and earthquake early warning systems. The seminar will explore the technical aspects and practical applications of PhaseHunter, offering insights into how this tool could serve various facets of seismological research and hazard assessment. As an open-source project, PhaseHunter also encourages community contributions for ongoing improvements and adaptations.
James Neely, University of Chicago Commonly used large earthquake recurrence models have two major limitations. First, they predict that the probability of a large earthquake stays constant or even decreases after it is “overdue” (past the observed average recurrence interval), so additional accumulated strain does not make an earthquake more likely. Second, they assume that the probability distribution of the time between earthquakes is the same over successive earthquake cycles, despite the fact that earthquake histories show clusters and gaps. These limitations arise because the models are purely statistical in that they do not incorporate fundamental aspects of the strain accumulation and release processes that cause earthquakes. Here, we present a new large earthquake probability model, built on the Long-Term Fault Memory model framework that better reflects the strain accumulation and release processes. This Generalized Long-Term Fault Memory model (GLTFM) assumes that earthquake probability always increases with time between earthquakes as strain accumulates and allows the possibility of earthquakes releasing only part of the strain accumulated on the fault. GLTFM estimates when residual strain is likely present and its impact on the probable timing of the next earthquake in the sequence, and so can describe clustered earthquake sequences better than commonly used models. GLTFM’s simple implementation and versatility should make it a powerful tool in earthquake forecasting.
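The two ingredients that distinguish GLTFM (earthquake probability growing with accumulated strain, and partial strain release leaving residual strain) can be illustrated with a toy simulation. The sketch below is a schematic under assumed functional forms (hazard linear in strain, uniformly distributed release fraction), not the published GLTFM formulation; partial release leaves residual strain that shortens some intervals, producing clustering.

```python
import numpy as np

# Toy illustration (not the authors' exact model): strain accumulates
# steadily, earthquake probability per step grows with accumulated
# strain, and each event releases only part of the strain.
rng = np.random.default_rng(1)
strain, rate_per_strain, dt = 0.0, 0.002, 1.0    # arbitrary units
times, t = [], 0.0
while t < 5000.0:
    strain += dt                                      # steady accumulation
    if rng.random() < rate_per_strain * strain * dt:  # hazard grows with strain
        times.append(t)
        strain *= 1.0 - rng.uniform(0.5, 1.0)         # partial strain release
    t += dt

intervals = np.diff(times)
print(f"{len(times)} events; interval CV = {intervals.std() / intervals.mean():.2f}")
```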
Zhiang Chen, California Institute of Technology The intricate and dynamic nature of fault zones and fragile geological features has long fascinated geoscientists and researchers. Understanding these geological phenomena is crucial not only for scientific exploration but also for hazard assessment and resource management. Recently, the convergence of robotics and machine learning has given rise to a transformative practice called automated geoscience. This practice utilizes robotics to automate data collection and machine learning to automate data processing, liberating geoscientists from labor-intensive activities. Focusing on rock detection, mapping, and dynamics analysis, I present the applications of automated geoscience in fault zone mapping and fragile geological feature analysis. To explore the influence of rocky fault scarp development on rock trait distributions, I have developed a data-processing pipeline using UAVs and deep learning to segment dense rock distributions. This application provides a statistical approach for geomorphology studies. Precariously balanced rocks (PBRs) offer insights into ground motion constraints for hazard analysis. I have designed offboard and onboard methods for autonomous PBR detection and mapping. After mapping, I delve into PBR dynamics with a virtual shake robot that simulates the overturning and large-displacement behavior of PBRs under various ground motions. The overturning and large-displacement processes provide upper-bound and lower-bound ground motion constraints, respectively. Moving forward, I am integrating automated geoscience into broader studies on fault zone mapping and fragile geological feature analysis. My aim is to advance this interdisciplinary research direction, offering potential advancements in hazard monitoring and prospecting.
Ryley Hill, San Diego State University Both natural and anthropogenic hydrologic loads have been associated with stimulating seismicity. However, there are few documented examples of hydrologic loads triggering large earthquakes. The southern San Andreas Fault (SSAF) in Southern California lies next to the Salton Sea, a successor of ancient Lake Cahuilla, which periodically filled and desiccated over the past millennium. Here we use new geologic and paleoseismic data to demonstrate that the past six major earthquakes on the SSAF likely occurred during highstands of Lake Cahuilla. To investigate possible causal relationships, we computed time-dependent Coulomb stress changes produced by lake level fluctuations over the last ~1100 years. Using a fully coupled model of a poroelastic crust overlying a viscoelastic mantle, we find that hydrologic loads increased Coulomb stress on the SSAF by several hundred kilopascals and fault-stressing rates by more than a factor of 2, likely sufficient for triggering. Stress perturbations are dominated by pore pressure changes but are enhanced by the poroelastic “memory” effect, whereby increases in pore pressure due to previous lake highstands do not completely vanish by diffusion and constructively interfere with the undrained response in subsequent highstands. The destabilizing effects of lake inundation are enhanced by a nonvertical fault dip, the presence of a fault damage zone, and lateral pore pressure diffusion. Our model provides physical insights into relations between lake level and time-dependent seismic hazard and may be applicable to other regions with hydrologic loading from either natural or anthropogenic sources.
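For reference, the Coulomb stress metric in such calculations typically takes the form dCFS = d_tau + mu*(d_sigma_n + d_p) with normal stress positive in tension, so a pore-pressure rise is destabilizing. The minimal sketch below uses illustrative values, not the study's model output.

```python
# Coulomb failure stress change on a receiver fault, with pore pressure:
#   dCFS = d_tau + mu * (d_sigma_n + d_p)
# using a tension-positive convention for the normal stress change, so an
# increase in pore pressure d_p brings the fault closer to failure.
def coulomb_stress_change(d_tau, d_sigma_n, d_p, mu=0.6):
    return d_tau + mu * (d_sigma_n + d_p)

# e.g., 50 kPa shear load, 100 kPa clamping, 300 kPa pore-pressure rise
d_cfs = coulomb_stress_change(d_tau=50e3, d_sigma_n=-100e3, d_p=300e3)
print(f"dCFS = {d_cfs / 1e3:.0f} kPa")  # positive -> promoted toward failure
```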
Kelian Dascher-Cousineau, University of California, Berkeley Seismology is witnessing explosive growth in the diversity and scale of earthquake catalogs. A key motivation for this community effort is that more data should translate into better earthquake forecasts. In this presentation, I report on recent work on 1) improving aftershock forecasts, 2) investigating the seismic triggering potential of slow slip events, and 3) introducing deep learning methods for earthquake forecasting. Our results underscore the importance of large datasets in yielding robust earthquake forecasts. Furthermore, they illustrate how more data can unlock new, more flexible methodologies.