Planning, Forecasting, and Intelligence Preparation
The workshop’s second panel session featured presentations on the analysis and forecasting aspects of preparing intelligence and decision support products for end users such as DTRA. The three speakers were Jeffrey J. Love, USGS’s adviser for geomagnetic research and a member of the Space Weather Operations Research and Mitigation Working Group of the National Science and Technology Council; Nestor Alfonzo Santamaria, senior adviser on risk governance at OECD; and Madhav Marathe, distinguished professor in biocomplexity at the University of Virginia Biocomplexity Institute. Christopher Barrett moderated the discussion following the three presentations.
DOWN TO EARTH WITH AN ELECTRIC HAZARD FROM SPACE
Earth’s core, said Love, is a naturally occurring electric current generator, which in turn produces magnetic fields that thread their way out of the core, through Earth’s mantle, up to Earth’s surface, and out into space. The extent to which the magnetic field expands into space defines the magnetosphere. When the magnetosphere interacts with electrically charged particles in the solar wind, it induces electric currents in Earth’s interior. Because systems such as power grids are grounded to the earth, they pick up these naturally occurring, changing, electric fields, which in turn can cause problems.
Space weather, Love continued, results when the Sun occasionally gives off a coronal mass ejection, a burst of plasma that a couple of days later arrives at Earth and compresses its magnetosphere on the daylight side and extends it on the nighttime side. The rearrangement of the magnetic lines within the magnetosphere produces currents that travel into and out of Earth’s ionosphere, generating the beautiful aurora borealis and also possibly disrupting critical infrastructure. One such magnetic storm in September 1859, often described as the most intense magnetic storm in recorded history, caused widespread disruption of telegraphic services. Another notable magnetic storm in May 1921 disrupted radio, telegraph, and telephone communications and triggered at least three fires in New York railway stations. A March 1940 solar storm caused widespread disruption of long-wire communication systems, and storms in May 1967 and August 1972 interfered with military operations.
The most recent major magnetic storm, in March 1989, caused the complete collapse of Canada’s Hydro-Québec power system, about $3 billion to $6 billion (Canadian) in damage in Canada, electrical blackouts in Sweden, interference to the U.S. electrical grid, damage to a high-voltage transformer in New Jersey, disruption of geophysical surveys, and damage to orbiting satellites. Given this history and society’s dependence on electrical technology, Love posed a critical question: What would the effect on modern society be if the 1859 or 1921 magnetic storms were to recur?
In fact, the National Academies issued a workshop report on that very subject in 2008.1 That workshop report noted that a magnetic super storm could cause significant damage to and interference with military satellites; widespread disruption of GPS, radio communication, and geophysical surveys; widespread and prolonged loss of electricity resulting from damage to the electric grid; and a possible economic impact to the United States of $1 trillion to $2 trillion. The challenge in accurately predicting the effects of such a storm is that, even though magnetic super storms have occurred in the past, society is now more dependent than ever on technology that is vulnerable to magnetic storms. Love explained that the possible occurrence of such a storm led to the establishment of the Space Weather Operations Research and Mitigation Working Group within the White House National Science and Technology Council to coordinate activities among federal agencies concerned about space weather.
Statistical modeling has produced estimates that a magnetic storm such as the one that occurred in 1989 will occur every 44 years or so. At the same time, while the 1859 and 1921 storms appear to be 500-year events,2 two such storms have nonetheless occurred since 1859. This, observed Love, highlights one of the challenges of retrospective statistical analyses of rare events. He added that describing the intensity or size of a rare event, such as a massive magnetic storm, with a single number is almost always simplistic, in part because such an extreme storm transpires in complex ways over a period of time. Magnetic storms typically last several days, which means that the scalars one might hope to use in standard statistical analyses do not describe these events well. Some phenomena, Love noted, also change over long time scales, introducing non-stationarity, which is a problem for many statistical analyses.
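Love’s point about two nominally 500-year storms occurring barely six decades apart can be made concrete with a simple calculation (a sketch for illustration, not from the talk): treating a “500-year storm” as a Poisson process with rate 1/500 per year, one can ask how likely two or more such storms are within the 62 years between 1859 and 1921.

```python
import math

# Hypothetical back-of-the-envelope check: if a "500-year storm" is a
# Poisson process with rate 1/500 per year, how probable are two or
# more such storms in the 62 years between 1859 and 1921?

rate = 1 / 500          # storms per year implied by the 500-year label
window = 62             # years between the 1859 and 1921 storms
lam = rate * window     # expected number of storms in the window

# P(N >= 2) = 1 - P(N = 0) - P(N = 1) for a Poisson count
p_two_or_more = 1 - math.exp(-lam) * (1 + lam)
print(f"{p_two_or_more:.4f}")  # roughly 0.007, i.e., well under 1%
```

Under this naive model, the observed clustering would be less than a 1 percent event, which is one way of seeing why single-number return periods describe such storms poorly.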
Turning to the March 1989 magnetic storm, Love described the physics involved when a magnetic storm damages a high-voltage transformer. The key feature here is that Earth’s structure is heterogeneous, with some regions being good electrical conductors as a result of their particular mineralogy and fluid content, while other regions are more resistant. As a result, the electric field generated by a magnetic storm will not be as effective at driving currents in some regions of Earth compared to other regions. If a power grid spans an electrically resistive region, the electric current will take the path of least resistance, which would be to flow through the power grid rather than the earth. The current would flow into and out of the earth through ground connections, which typically occur at transformer stations. The power grid is not designed to accommodate such “quasi-direct” currents, which is why they damage transformers.3 “This is the crux of the problem for the power grid industry in terms of space weather,” concluded Love.
To map geoelectric hazards across the United States, Love and his collaborators have adopted an empirical model approach familiar from time series analysis:
- An input signal that varies over time—the geomagnetic variation generated by the Sun,
- Convolution of that signal through a filter—Earth’s heterogeneous composition, and
- An output signal that varies over time—a geoelectric field that can damage power grids.
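The three-part structure above maps directly onto a discrete convolution. The sketch below illustrates that structure only; the filter values and signal are invented for illustration and are not real magnetotelluric results.

```python
import numpy as np

# Illustrative sketch of the empirical model's structure: an input
# geomagnetic time series, convolved with a filter standing in for
# Earth's local impedance response, yields an output geoelectric
# time series. All numbers here are invented for illustration.

rng = np.random.default_rng(0)

# Input: geomagnetic variation sampled once per minute (arbitrary units)
geomagnetic = np.cumsum(rng.normal(size=600))

# Filter: a short, decaying impulse response standing in for the
# site's surface-impedance characteristics
impulse_response = np.exp(-np.arange(30) / 10.0)
impulse_response /= impulse_response.sum()  # normalize to unit gain

# Output: the induced geoelectric field estimate
geoelectric = np.convolve(geomagnetic, impulse_response, mode="valid")

print(geoelectric.shape)  # one value per fully overlapped window
```

In the actual USGS work, the filter at each location is derived from the measured surface impedance described below rather than assumed.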
Powering the model are magnetic observatory data that record geomagnetic variation at stations across the United States and Canada and survey data from magnetotelluric measurements that yield Earth’s surface impedance as a function of location. Love noted that the magnetotelluric survey is an ongoing project, with much of Arizona, Texas, Oklahoma, Arkansas, Louisiana, and Mississippi, as well as parts of Utah, Kansas, and Alabama, yet to be surveyed. The existing survey, which presently covers two-thirds of the contiguous United States, shows high impedance in the upper Midwest and Eastern United States, with low impedance in Michigan, Illinois, the Appalachian basin, and much of the western United States.
1 National Research Council, 2008, Severe Space Weather Events: Understanding Societal and Economic Impacts: A Workshop Report, The National Academies Press, Washington, DC.
2 That is, events assessed as expected to occur at this frequency.
3 J.J. Love, E.J. Rigler, A. Pulkkinen, and C.C. Balch, 2014, “Magnetic Storms and Induction Hazards,” Eos, Transactions, American Geophysical Union 95(48):445–446, https://doi.org/10.1002/2014EO480001.
Using these inputs, the modeling method used by Love and his collaborators provides geoelectric field estimates that can then be mapped onto power grids to estimate line voltages. The method was used to estimate the maximum geoelectric fields experienced during the March 1989 storm. That storm, according to the model, would have produced electric field amplitudes two to three orders of magnitude greater in the upper Midwest and along the Eastern Seaboard than in other places, such as Michigan or most of the western United States. High geoelectric amplitude correlates with the reported interference that power grids experienced during the 1989 storm. This geoelectric hazard analysis, combined with the March 1989 record of anomalies in the power grid, suggests that the U.S. Mid-Atlantic and Northeast regions are where high geoelectric hazards are likely to be experienced during an even more intense storm.
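The step of mapping a geoelectric field onto a power grid to estimate line voltages can be sketched as a line integral: the voltage induced along a transmission line is approximately the integral of the horizontal geoelectric field along the line’s path. The path and field values below are invented for illustration.

```python
import numpy as np

# Hypothetical sketch: estimate the end-to-end voltage induced on a
# transmission line by summing E . dl over the line's segments.
# The line path and field values are invented for illustration.

# Line path as a sequence of (x, y) points in kilometers
path = np.array([[0.0, 0.0], [50.0, 10.0], [120.0, 15.0]])

# Uniform geoelectric field in V/km (x and y components)
E = np.array([2.0, 0.5])

# Segment displacement vectors, then the dot product with E per segment
segments = np.diff(path, axis=0)
voltage = float(np.sum(segments @ E))

print(voltage)  # 2.0*120 + 0.5*15 = 247.5 V
```

In practice the field varies along the path, so the integral is evaluated with the locally modeled field rather than a single uniform vector.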
Presently, Love and his collaborators are working with the National Oceanic and Atmospheric Administration (NOAA) to create real-time geoelectric hazard maps. Such maps, he explained, would be useful for nowcasting, an important tool for managing the power grid system during an intense magnetic storm. In addition, the USGS team has developed a statistical map of the geoelectric hazards that a 100-year magnetic storm would pose to voltages on the national power grid, indicating where the hazard of damage is high and where it is low.
An offshoot of this project, said Love, is that it also provides information as to where the nation might concentrate its efforts to understand the hazards of an E3 nuclear electromagnetic pulse (EMP). Such a pulse, which would last tens to hundreds of seconds, results from a nuclear explosion’s distortion of Earth’s magnetic field and is in many ways analogous to the effects of a magnetic storm. Looking forward, the USGS has proposed improving the U.S. magnetic monitoring systems and performing more detailed magnetotelluric surveys to better understand both EMP and magnetic storm hazards. Love also called for open access to power grid impact data to better understand how EMP and magnetic storms can affect engineered systems.
When asked if the power grid industry is doing something to address these hazards, Love replied yes, that the industry takes these magnetic storms seriously and has made progress that gives the industry operational flexibility during a magnetic storm. For example, when a coronal mass ejection occurs, which might produce a magnetic storm, grid operators will often add generating capacity to the system and be prepared to reroute electricity to keep the grid operating. He added that there are no plans to turn off the grid in response to a magnetic storm.
PLANNING FOR RARE EVENTS: SUPPORTING GOOD GOVERNANCE FOR RESILIENCE
Nestor Alfonzo Santamaria noted that unlike the previous speakers, who addressed specific types of rare events, his remit at OECD is to take an all-hazards perspective on planning for rare events. He explained that every country has its own way of planning for rare events, but those approaches share common features, including conducting a risk analysis to identify possible scenarios the country could face. These scenarios, illustrative rather than exhaustive, serve as a way of communicating the risks on which policy makers are concentrating. For example, some countries focus on natural hazards while others take an all-hazards approach.4
After conducting a risk analysis, countries engage in risk evaluation, which may include involving specific ministries or agencies in identifying which capacities are needed, and which should be prioritized, to deal with the identified risks. Ultimately, each country then makes an investment decision based on the probability of an event occurring; the nature, intensity, and duration of the event; and the probability that the event would cause certain types of impacts. The risk evaluation process, said Santamaria, enables countries to understand and compare the significance of different risks on the basis of how likely they are and what their effects might be. Those two factors are captured in a likelihood/plausibility score and an overall impact score, which together inform risk prioritization.
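The two-score prioritization described above can be sketched as follows. The risks, scales, and the product-of-scores convention here are illustrative assumptions, not OECD methodology.

```python
# Hypothetical sketch of two-score risk prioritization: each risk gets
# a likelihood/plausibility score and an overall impact score, and
# their combination drives the ranking. Risks and scales are invented.

risks = {
    "pandemic":       {"likelihood": 4, "impact": 5},
    "river_flood":    {"likelihood": 3, "impact": 4},
    "magnetic_storm": {"likelihood": 2, "impact": 5},
    "cyber_attack":   {"likelihood": 5, "impact": 3},
}

# Rank by the product of the two scores (one common convention;
# real assessments may weight the axes differently)
ranked = sorted(
    risks.items(),
    key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
    reverse=True,
)

for name, scores in ranked:
    print(name, scores["likelihood"] * scores["impact"])
```

A high-impact, low-likelihood risk such as the magnetic storm ranks below more probable hazards under this convention, which is exactly the kind of outcome a risk evaluation process must scrutinize rather than accept mechanically.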
While some of this analysis includes complex mathematical models for naturally occurring events, many of the risks that countries worry about arise from threat actors with malicious intent, for which no such models exist. These two types of threats have different sets of uncertainties underpinning them, said Santamaria, which makes the analysis challenging. To deal with this, the Swiss have grouped risks in clusters according to the monetary value of the damage that would result from those risks. This type of analysis starts with the worst-case scenario, but recent events, such as the COVID-19 pandemic and the United Kingdom’s withdrawal from the European Union, make clear that the worst-case scenario is highly variable and must change as new data become available. Brexit, for example, required changing the detailed models of personnel flows across borders, which also fed models of COVID-19 transmission and spread, in order to inform a national security risk assessment.
4 An all-hazards approach, as described in OECD’s Assessing Global Progress in the Governance of Critical Risks, “allocates resources to risks that are most likely to have a national significance.” The approach is not meant to mitigate all possible hazards, but is meant to address “known unknown” and “unknown unknown” problems. See OECD, 2018, Assessing Global Progress in the Governance of Critical Risks, OECD Reviews of Risk Management Policies, OECD Publishing, Paris.
The United Kingdom, the Netherlands, and Sweden have each gone through an exercise of planning for the common consequences of various incidents, such as disruption to utilities, transportation, and health care, rather than putting in place specific arrangements for responding to specific risks, remarked Santamaria. Ultimately, he explained, these exercises are about developing capabilities that are more generalizable, a shift in emphasis away from developing specific capabilities to deal with specific hazards of various types.
These capabilities are more adaptive. As an example, he noted that Spain is now looking at the mechanisms it needs to shift its national industrial and production capabilities to address needs as they arise, and at the social compact it needs to form with industry to respond to an unforeseen and rare catastrophic event. Instead of creating stockpiles to respond to specific events, Spain is investing in a process to understand how to mobilize its manufacturing capacity quickly to address many types of challenges as they arise. He also noted a similar OECD risk management review aimed at strengthening Paris’s resilience to a catastrophic flood of the Seine River, which could devastate much of the city.
Another challenge associated with risk planning, Santamaria noted, is communicating how serious a risk could be. As an example, Denmark has established what it calls the “Pandora Cell,” which tries to identify what could go wrong during a crisis and to work out what the response might be if everything went wrong. This is intended to help crisis managers understand whether an incident could be getting worse, whether they could see a loss of control, and whether they could face resource shortages as the crisis spreads further. The process involves gathering a team of experts to outline the problems that may worsen during an event and then identifying three to five concrete issues that crisis managers need to watch for during a serious event.
PLANNING AND RESPONDING TO SIGNIFICANT RARE EVENTS
Madhav Marathe’s presentation focused on socially coupled networks, which are the networks that allow the flow of goods, services, people, finances, and information. These networks, explained Marathe, make the world more efficient in terms of the transfer of these types of commodities. Over the past 15 to 20 years, these networks have made the world “smaller” with the advent of communication systems, but the interconnected nature of these networks also creates situations that can lead to large-scale contagions and cascades. These cascades can spread across multiple sectors and nations, with significant social, economic, and human costs. The Chicago metropolitan area, for example, covers 400 square miles and includes 9 million people who depend on a transportation system with some 4 million edges and nodes that accommodates approximately 31 million trips; a social network for public health with approximately 20 million nodes in a temporal network with 1-second resolution; and a telecommunication system with 1 million IP addresses that handles some 125 million calls per day. Any model for planning and preparedness would have to account for all of these interconnections.
The first example Marathe discussed involved the urban transportation planning that would occur sometime after a 9/11-type event. The analysis was designed to understand the effect on traffic and the surrounding neighborhoods of closing streets around the White House. The project considered multiple solutions and considered the trade-offs between the effectiveness of an intervention, its costs, and the overall impact on travelers in the area.
The second example was a project his group conducted for DTRA as part of a national planning scenario in which a 10-kiloton weapon is detonated in downtown Washington, DC. Marathe’s project focused on the social behavior and economic impact of this event between time zero and 3 days post-event. The models his team developed included power systems, the transportation system, the social context network, communications networks, building infrastructure, and the connections between these networks. Though these systems are huge, the resulting simulations suggested that small details matter and need to be presented in a meaningful manner.
In addition, observed Marathe, the modeling results showed that complex behavioral adaptation is central to the risks in the first 3 days as people try to recover from what has happened. He noted that models such as this need to include the various behavior changes that occur as people try to escape, seek care, reconstitute households, estimate danger, or leave the area. Other findings from this project were that behaviors and the physical environment co-evolve and that information plays an important role in situational assessment and response coordination. The models also showed that even a partially restored communication system has a disproportionately positive effect on overall behavior and on reducing anxiety. Partial restoration of communication can be done today, he added, using what is known as a cell on wheels, a portable cellular site that can be brought online quickly. This modeling effort also showed that although the power network in the area would be completely destroyed and not likely to be restored for 2 to 3 years, the cascading failure effect would be relatively minimal.
The third example Marathe discussed involved modeling pandemic response. In December 2020, this effort looked at how to optimally allocate a limited supply of vaccines. By February 2021, the work shifted to spatially modeling vaccine allocation in an efficient manner because the vaccine supply was plentiful by then.
In April 2021, this work shifted to understanding the effect of new and emerging strains. Additionally, the Centers for Disease Control and Prevention (CDC) created a real-time scenario modeling hub focused on trying to understand what might happen under various scenarios extending a few months into the future. In June 2021, modeling was used to try to understand the risk of vaccine hesitancy. In July, this work turned to planning for evolving strains; in August, the simulations looked at the effect of waning immunity; and in September, the effort aimed to understand the impact of vaccinating 5- to 11-year-olds. In each case, the goal was to assess the epidemiologic outcomes of these scenarios given the current ground conditions.
What Marathe and his collaborators developed as part of this work is a novel platform that harnessed two supercomputers to run these simulations in close to real time. This involved building a large agent-based modeling environment for the entire United States, a digital twin of the nation,5 that includes a detailed representation of the underlying social contact network using data from various sources. This model is able to forecast how COVID-19 would spread in every state and county in great detail using current data on vaccine uptake by demographic group, hesitancy surveys, policies and interventions in place, and vaccine supplies.
One forecast from this model was that the effect of protection is relatively small if immunity wanes to the extent that it has following immunization or infection. This turned out to be important with the appearance of the Delta and Omicron variants and shows the importance of booster shots. The model also forecast that infections would peak between October and December 2021, which did begin to happen before the Omicron variant appeared. Later forecasts that included Omicron suggested that infections would peak in early 2022.
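The waning-immunity effect Marathe described can be illustrated with a deliberately minimal compartmental sketch, far simpler than the agent-based digital twin discussed above; all parameters below are invented for illustration.

```python
# Minimal SIRS-style sketch (not Marathe's model): recovered
# individuals flow back to susceptible as immunity wanes, so
# infection persists rather than dying out after the first peak.
# All rates are invented for illustration.

def simulate(days, beta=0.3, gamma=0.1, waning=0.01):
    """Euler-stepped SIRS fractions; returns the infected trajectory."""
    s, i, r = 0.99, 0.01, 0.0
    infected = []
    for _ in range(days):
        new_inf = beta * s * i    # transmission
        new_rec = gamma * i       # recovery
        new_sus = waning * r      # immunity wanes: R flows back to S
        s += new_sus - new_inf
        i += new_inf - new_rec
        r += new_rec - new_sus
        infected.append(i)
    return infected

traj = simulate(400)
print(round(max(traj), 3), round(traj[-1], 3))
```

Even this toy model shows infections settling at a nonzero endemic level when immunity wanes, the qualitative behavior that made boosters important in the real forecasts.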
In closing, Marathe emphasized that designing models for analyzing and anticipating future events should account for system resiliency, efficiency, and sustainability. System design, he explained, should be taught to include consideration of rare events. “While anticipating such events might be challenging or impossible, one can certainly try and prevent or delay or recover from it in a graceful manner,” he asserted. Marathe also noted the challenge that co-evolution poses for modeling social systems, because actions can change the potential outcomes of many of these scenarios.
To begin the discussion, Santamaria asked Marathe if he has been able to track the performance of his model against the observed reality, particularly with regard to the sensitivity of the projected trajectory to certain specific interventions. Marathe replied that the forecasting model, which only makes projections out to 4 weeks, has done reasonably well in terms of confirmed cases, hospitalizations, and deaths when compared to the ground truth. Where his model and others struggle is when there is a sudden uptick in cases. With scenario modeling, there is no ground truth against which to compare performance, but that is not the purpose of such a model. Rather, it serves to inform planning and efforts to make systems more resilient in the long run.
5 “Digital Twin” does not imply that the model represents every single aspect of the nation’s pandemic response, but rather that the model is updated regularly with real-world information, so that it remains current (in terms of vaccines, demographics, cases, deaths, etc.).
Marathe added that one issue that confounds planning is that people forget when an event does not occur, which makes it difficult to sustain the investments needed to make systems resilient over the long term. That issue is compounded, said Santamaria, in democracies, where there is almost a disincentive to invest when that decision does not align with the political cycle. The decisions elected officials make depend at least in part on how their population is processing the situation, which can be contrary to what the model expects.
As a final comment, Santamaria cautioned against biases, particularly cognitive biases, because they can lead to the conclusion that certain measures make sense to implement when they may, in fact, not be the best path forward. “I think having explicit mechanisms to address cognitive biases is important in decision making, particularly for rare events,” he said.