
3 Methods of Event Attribution (Pages 47-84)



From page 47...
... While observations are useful, attribution studies generally use climate models, which incorporate knowledge of the physics of the climate system, to quantify how human or natural influences have changed the frequency or intensity of events like the observed event relative to a baseline forcing scenario. Climate and other numerical models are useful because they can be used to investigate responses to controlled forcing (see conditioning in the previous chapter)
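Two metrics commonly used to summarize this kind of factual-versus-counterfactual comparison are the probability ratio (PR) and the fraction of attributable risk (FAR). Schematically, writing p_1 for the probability of exceeding the event threshold under the factual (anthropogenically forced) scenario and p_0 for the probability under the counterfactual baseline (the notation here is illustrative rather than taken from a specific study):

    \[ \mathrm{PR} = \frac{p_1}{p_0}, \qquad \mathrm{FAR} = 1 - \frac{p_0}{p_1} \]

A PR of 2, for example, would mean the event is estimated to have become twice as likely under the factual forcing.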
From page 48...
... The limitations of trend detection in the frequency or intensity of extreme events imply that event attribution must often rely on the understanding of long-term changes in variables that have a close physical relationship to the event in question and are expected to affect its frequency or intensity. Such attributed long-term changes could pertain to the mean state of the climate, to extremes over a larger area, or to a variable that demonstrably contributes to the extreme event, such as higher water availability for extreme rainfall (e.g., Pall et al., 2011)
From page 49...
... Significant low-frequency natural variability can make it difficult to detect a change due to human influence because the natural variability reduces the signal-to-noise ratio.
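Schematically, with Delta_forced the forced change in the variable of interest and sigma_nat the standard deviation of natural variability on the relevant timescale (an illustrative decomposition, not a formula from the report):

    \[ \mathrm{S/N} = \frac{\Delta_{\mathrm{forced}}}{\sigma_{\mathrm{nat}}} \]

Strong low-frequency variability inflates sigma_nat, so a given forced change requires longer records or larger spatial averages before it can be detected.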
From page 50...
... [Figure caption: heat wave days and mean temperature anomalies relative to the 1955-1984 average (a), and scatter plot of heat wave days and summer temperature anomalies (b)]
From page 51...
... This approach is justifiable only if there is supporting evidence that the covariate indeed has a causal link to human influences. Otherwise, trends caused by other factors or natural variability may be aliased, leading to either an overestimate or an underestimate of the human influence.
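A minimal sketch of such a covariate analysis, assuming synthetic annual local temperatures and a smoothed global mean surface temperature (GMST) series as the covariate (all names and numbers are illustrative, not from any particular study):

    import numpy as np

    # Synthetic data for illustration: annual local summer-mean temperature
    # anomalies and a smoothed global mean surface temperature (GMST) covariate.
    rng = np.random.default_rng(0)
    years = np.arange(1951, 2021)
    gmst = 0.012 * (years - years[0]) + 0.05 * rng.standard_normal(years.size)
    local_t = 1.8 * gmst + 0.6 * rng.standard_normal(years.size)

    # Ordinary least-squares regression of the local variable on the covariate.
    slope, intercept = np.polyfit(gmst, local_t, deg=1)

    # The fitted scaling attributes local change to the covariate only if the
    # covariate is itself causally linked to human influence; otherwise trends
    # from other factors or natural variability are aliased into 'slope'.
    print(f"local warming per degree of GMST covariate: {slope:.2f} deg C / deg C")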
From page 52...
... It would be preferable if such studies could explicitly include in the analysis the uncertainty in the fraction of the trend that is due to human influences, as well as the additional uncertainty arising from the indirect relationship of the variable in question to the larger-scale attributed trend. In the example of temperatures in De Bilt, the human contribution to global mean temperature is a range, not a single value, and the uncertainty increases further when going to the regional scale.
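One simple way to include that uncertainty explicitly is Monte Carlo propagation; the sketch below assumes invented distributions for the human contribution to global mean warming and for a regional scaling factor, both of which would have to come from the attribution literature in a real analysis:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Illustrative (invented) ranges: the human contribution to global mean
    # warming, and a regional scaling factor relating global to local warming.
    global_human_warming = rng.normal(loc=1.0, scale=0.15, size=n)   # deg C
    regional_scaling = rng.normal(loc=1.4, scale=0.3, size=n)        # dimensionless

    # Propagate both uncertainties to the attributable regional warming.
    regional_human_warming = global_human_warming * regional_scaling

    lo, med, hi = np.percentile(regional_human_warming, [5, 50, 95])
    print(f"attributable regional warming: {med:.2f} deg C (5-95%: {lo:.2f} to {hi:.2f})")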
From page 53...
... The uncertainties in observation-based analyses are considerable, but they are different from and complementary to the uncertainties in attribution approaches that rely strongly on climate models to estimate the difference between present conditions and those that would have occurred without human influences.

METHODS BASED ON CLIMATE AND WEATHER MODELS

In nearly all attribution studies of extreme events, climate and weather models are an indispensable tool.
From page 54...
... The following are examples of the impacts of low-frequency natural variability on extreme events of the type addressed in this report.
From page 55...
... When attribution of extreme events is conditioned on observed SSTs (for which there is only one historical realization), however, unforced natural variability may impact conclusions about likelihoods.
From page 56...
... Model simulations are well suited to provide quantitative estimates of the degree to which extreme event frequencies or magnitudes in the factual world differ from what would have happened in a world unperturbed by human emissions of GHGs (and other forcing factors; see Chapter 2 for a discussion of framing)
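A minimal sketch of that factual-versus-counterfactual comparison, assuming hypothetical ensembles of a seasonal-mean variable and an event threshold taken from the observed event:

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical ensemble output (one value per member/year), e.g. seasonal-mean
    # temperature anomalies; in practice these come from the model experiments.
    factual = rng.normal(loc=0.8, scale=1.0, size=5000)         # with human forcing
    counterfactual = rng.normal(loc=0.0, scale=1.0, size=5000)  # "natural" forcings only

    threshold = 2.0  # event magnitude, typically taken from the observed event

    # Exceedance probabilities in each world, probability ratio, and FAR.
    p1 = np.mean(factual >= threshold)
    p0 = np.mean(counterfactual >= threshold)
    print(f"p1 = {p1:.4f}, p0 = {p0:.4f}, PR = {p1 / p0:.1f}, FAR = {1 - p0 / p1:.2f}")

In practice the ensemble values would come from the model experiments described in this chapter, and the sampling and model uncertainties discussed below would also need to be quantified.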
From page 57...
... Reliable simulation of non-extreme events does not necessarily indicate that a model will reliably simulate extreme events. This section describes some specific types or configurations of models and how they have been used in extreme event attribution studies.
From page 58...
... It also is possible to use coupled models for conditional attribution studies, such as for El Niño years, by selecting specific years that have the same phase of El Niño as observed (Lewis and Karoly, 2013)
From page 59...
... Initial condition ensembles (the model is run with a variety of slightly different initial conditions at the start) are used in almost all model-based event attribution studies (including unconditional studies using ensembles such as CMIP5)
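The role of initial-condition ensembles can be illustrated with a toy chaotic system; the sketch below uses the Lorenz-63 equations as a stand-in for a climate model and shows how tiny perturbations to the initial state produce a spread of outcomes that samples internal variability:

    import numpy as np

    def lorenz63_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One forward-Euler step of the Lorenz-63 system, a toy stand-in for a
        # chaotic climate model (coarse integration is fine for illustration).
        x, y, z = state
        return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    rng = np.random.default_rng(3)
    n_members, n_steps = 20, 4000
    base = np.array([1.0, 1.0, 1.0])

    finals = []
    for _ in range(n_members):
        # Slightly perturbed initial conditions, as in an initial-condition ensemble.
        state = base + 1e-6 * rng.standard_normal(3)
        for _ in range(n_steps):
            state = lorenz63_step(state)
        finals.append(state[0])

    # Once the members decorrelate, their spread samples the system's internal variability.
    print("ensemble spread in x after integration:", np.std(finals))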
From page 60...
... Studies using mul

FIGURE 3.5 Sensitivity of change in the occurrence frequency of extremely high river runoff in England and Wales for autumn 2000 using different climate models to characterize and remove the human influence on sea-surface temperature from a counterfactual world.
From page 61...
...

Downscaling

Some types of extreme events are not well simulated by global models, either coupled or atmosphere-only, often because these models are not run at sufficiently high spatial resolution. Additional models, embedded within a global model that provides the large-scale environmental conditions, may be used to represent these events better.
From page 62...
... for factual and counterfactual world simulations. As discussed in Chapter 2, some approaches provide much stronger constraints on the current state of the climate system than conditioning on SST patterns, corresponding to different framings of the attribution question.
From page 63...
... in representing an event or the climatology of an event class is best assessed using the factual simulations, because these are expected to correspond most closely to the observed climate. Even then, however, only limited information is available from observations for extreme events.
From page 64...
... Such assessment includes the evaluation of model quality for the factual world with anthropogenic forcing over the past several decades, and it may be based on instrumental data for time periods before extensive anthropogenic influence and possibly on paleoclimatic reconstructions of earlier periods. Also, knowledge of fundamental climate science and of model structure can provide an understanding of what kinds of events may or may not be well characterized by models in terms of the variables that are used to define the events, dependence on circulation patterns, dependence on SSTs, spatial scales, and temporal scales.
From page 65...
... reports, this requirement has implications in terms of which kinds of extreme events can be addressed. Most coupled models exhibit substantial biases in mean climate and variability relative to observations, especially at the regional scale, so some bias correction will almost certainly be required, and the validity of that correction must be established.
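As one concrete example of what such a correction can look like, the sketch below implements a simple empirical quantile mapping between a model reference climatology and observations; it is only one of many bias-correction variants and is not necessarily the procedure used in any given study:

    import numpy as np

    def quantile_map(values, model_ref, obs_ref):
        # Empirical quantile mapping: find the quantile of each model value within
        # the model's reference climatology, then read off the observed value at
        # the same quantile. A deliberately simple variant, for illustration only.
        q = np.searchsorted(np.sort(model_ref), values) / float(len(model_ref))
        return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

    rng = np.random.default_rng(4)
    obs_ref = rng.normal(15.0, 4.0, size=3000)    # observations, reference period
    model_ref = rng.normal(13.0, 5.0, size=3000)  # model output, same period (biased)
    model_new = rng.normal(14.0, 5.0, size=1000)  # model values to be corrected

    corrected = quantile_map(model_new, model_ref, obs_ref)
    print(f"raw model mean {model_new.mean():.2f} -> corrected mean {corrected.mean():.2f}")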
From page 66...
... and even more so for the counterfactual scenario.

Conditional Attribution

In conditional attribution analyses, model quality should ideally be assessed conditionally: Does the model accurately represent the climatology given the forcings and the conditioning factors?
From page 67...
... world, and therefore they condition on a feature of that world; but one needs a corresponding ocean state in the counterfactual world. Nevertheless, studies that use atmospheric models often rely on multiple estimates of the ocean warming due to human influences, and results, particularly in studies of precipitation, can be surprisingly sensitive to this choice (Otto et al., 2015c; Pall et al., 2011; see Figure 3.4)
From page 68...
... Nevertheless, the description of the counterfactual remains a challenge because it is necessary to determine the anthropogenic component of the thermodynamic conditions relevant for the event; this introduces uncertainties comparable to those of determining the counterfactual ocean state in atmosphere-only model simulations, as discussed above.
From page 69...
... In the context of event attribution, the main source of sampling uncertainty is the chaotic unforced variability that is a pervasive feature of the climate system and that is simulated to various extents by climate models, even when run without any type of time-varying natural or anthropogenic external forcing. This can include substantial contributions from the low-frequency natural variability of the climate system (Box 3.1)
From page 70...
... In modeling studies, uncertainty from natural variability is accounted for in analyses that use coupled models. Such model simulations sample over the state of the climate system and, if run for sufficiently long or with sufficiently large ensembles, should, in principle, represent the full distribution of natural variability as a component of sampling uncertainty.
From page 71...
... adult males), the Bayesian would use established techniques to update this prior distribution in light of the data to get a new probability distribution for h called the "posterior distribution." The posterior distribution reflects our state of knowledge about h after collecting data.
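A minimal sketch of that prior-to-posterior update, assuming for illustration a normal prior on the mean height h and normally distributed measurements with known standard deviation (a standard conjugate case with invented numbers, not the report's own worked example):

    import numpy as np

    # Prior belief about h, the mean height of adult males, in cm: N(175, 10^2).
    prior_mean, prior_sd = 175.0, 10.0

    # Invented measurements, assumed N(h, data_sd^2) with known data_sd.
    data = np.array([178.2, 172.5, 181.0, 176.4, 174.9])
    data_sd = 7.0

    # Conjugate normal-normal update for a mean with known variance.
    n = data.size
    post_var = 1.0 / (1.0 / prior_sd**2 + n / data_sd**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + data.sum() / data_sd**2)

    print(f"posterior for h: mean {post_mean:.1f} cm, sd {post_var**0.5:.1f} cm")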
From page 72...
... , by selecting different periods from a single climate or weather model simulation, or using different simulations from the same climate or weather model that have been started from different initial conditions. In all cases, a frequentist describes the uncertainty that arises from using different samples drawn in statistically equivalent ways, whereas a Bayesian also will use additional knowledge that is described in the form of probability distributions that quantify what is known or judged to be more or less likely given the available understanding before gathering further data.
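A frequentist treatment of that sampling uncertainty is often implemented by resampling; the sketch below bootstraps hypothetical ensemble members (equivalently, statistically equivalent segments of a long simulation) to put a confidence interval on an exceedance probability:

    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical per-member seasonal maxima from an initial-condition ensemble.
    members = rng.gumbel(loc=30.0, scale=3.0, size=200)
    threshold = 38.0

    # Bootstrap: resample members with replacement, recompute the exceedance
    # probability each time, and summarize the spread across resamples.
    boot = [np.mean(rng.choice(members, size=members.size, replace=True) >= threshold)
            for _ in range(5000)]

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"exceedance probability {np.mean(members >= threshold):.3f} "
          f"(95% bootstrap interval {lo:.3f} to {hi:.3f})")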
From page 73...
... the representation of sub-grid-scale processes. The nature of this uncertainty will vary with the type of model that is used for event attribution (e.g., ranging from global coupled models, to nested regional climate models, to very-high-resolution convection-permitting models)
From page 74...
... Interpretation, however, of such a sample based on an ensemble of opportunity of climate models -- for example, those that participated in CMIP3 or CMIP5 -- remains a challenging topic (Annan and Hargreaves, 2010; Rougier et al., 2013)
From page 75...
... Viewed from a Bayesian perspective, the prior distribution over parameters or models is not updated based on observed data as in a standard Bayesian analysis. This concern about statistical bias can be stated in another way in the context of multi-model analyses.
From page 76...
... It is clear, however, that satisfactorily addressing uncertainties in all of these aspects is difficult, if not impossible. In the absence of being able to do so, some studies have started to use multiple, different methods to estimate human influences on a given event.
From page 77...
... RAPID ATTRIBUTION AND OPERATIONALIZATION The media, the public, and decision makers increasingly ask for results from event attribution studies during or directly following an extreme event. To meet this need, some groups are developing rapid and/or operational event attribution systems to provide attribution assessments on faster timescales than the typical research mode timescale, which can often take years (Box 3.4)
From page 78...
... with seasonal forecasts in order to predict the probability of extreme events under current climate conditions 1 month ahead. The counterfactual world is simulated as in other weather@home experiments.
From page 79...
... Using the computing resources provided by volunteers through the climateprediction.net distributed computing network, weather@home runs very large ensembles of simulations with the UK Met Office's HadAM3P global atmosphere-only model to investigate how the odds of extreme weather events change due to anthropogenic climate change, other external forcings, and natural variability. Depending on the problem being investigated, the system can also be configured to dynamically downscale the output from HadAM3P by nesting the HadRM3P regional model in the output from the global model.
From page 80...
... The time constraints associated with rapid attribution may affect framing and methodological choices by limiting analyses to approaches that can be undertaken quickly. Examples of possible limitations are: reliance on a primarily observationally based approach and possibly on station data that have not yet been quality controlled; inability to assess the robustness of model-based results through reliance on single models with specified SSTs or "off-the-shelf" global model runs from an ensemble of opportunity; and insufficient time either to investigate causal mechanisms or to evaluate the model for the particular extreme events.
From page 81...
... Assessment of model quality in relation to the event or event class of interest is critical for enhancing confidence in event attribution studies. Different event types pose different requirements for model fidelity.
From page 82...
... In the context of event attribution, the main source of sampling uncertainty is the chaotic unforced variability that is a pervasive feature of the climate system and that is simulated to various extents by climate models, even when run without any type of time-varying natural or anthropogenic external forcing. This can include substantial contributions from the low-frequency natural variability of the climate system, including the effects of long-term oscillations that may confound the diagnosis of the effects of human-induced changes in analyses based on short observational records.
From page 83...
... Event attribution results, particularly for local events or events that are strongly influenced by climate dynamics and its changes, are subject to substantial uncertainty and hinge on assumptions made when selecting a modeling setup and using statistical tools to quantify uncertainty. Given that these choices and the representation of uncertainties can be highly technical, communicating results of event attribution to the broader public in a way that does not overstate the result or fail to sufficiently highlight the assumptions involved in the analysis is difficult.

