9 Uncertainty
Pages 160-187



From page 160...
... is pervasive uncertainty. ... There is often great uncertainty in estimates of the types, probability, and magnitude of health effects associated with a chemical agent, of the economic effects of a proposed regulatory action, and of the extent of current and possible future human exposures.
From page 161...
... , the agency concludes that quantitative uncertainty assessment is usually not practical or necessary for site risk assessments. The same guidance questions the value and accuracy of assessments of the uncertainty, suggesting that such analyses are too data-intensive and "can lead one into a false sense of certainty." In direct contrast, the committee believes that uncertainty analysis is the only way to combat the "false sense of certainty," which is caused by a refusal to acknowledge and (attempt to)
From page 162...
... However, that attitude toward uncertainty may be misguided. The very heart of risk assessment is the responsibility to use whatever information is at hand or can be generated to produce a number, a range, or a probability distribution, whatever best expresses the present state of knowledge about the effects of some hazard in some specified setting.
From page 163...
... [Table fragment] Model selection for low-dose risk extrapolation: low-dose functional behavior of dose-response relationship (threshold, sublinear, linear, supralinear, flexible)
From page 164...
... [Figure: Risk characterization and its component uncertainties: hazard identification, dose-response assessment, exposure assessment. SOURCE: Adapted from Bogen, 1990a.] ... of classifying uncertainty is used by some research methodologists, because it provides a complete partition of types of uncertainty, and it might be more productive intellectually: bias is almost entirely a product of study design and performance; randomness, a problem of sample size and measurement imprecision; and variability, a matter for study by risk assessors but for resolution in risk management (see Chapter 10).
From page 165...
... . A second type of parameter uncertainty arises when generic or surrogate data are used instead of analyzing the desired parameter directly (e.g., the use of standard emission factors for industrialized processes)
From page 166...
... PROBLEMS WITH EPA'S CURRENT APPROACH TO UNCERTAINTY
EPA's current practice on uncertainty is described elsewhere in this report, especially in Chapter 5, as part of the risk-characterization process. Overall, EPA tends at best to take a qualitative approach to uncertainty analysis, and one that emphasizes model uncertainty rather than parameter uncertainties.
From page 167...
... The following pages describe more fully the development of probabilities and the method of using probabilities as inputs into uncertainty analysis models.
Probability Distributions
A probability density function (PDF)
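As a concrete illustration of how an uncertain input can be expressed as a probability distribution rather than a point estimate, the short Python sketch below assigns a lognormal distribution to a hypothetical emission-rate parameter. The parameter name, median, and spread are illustrative assumptions, not values taken from the report.

```python
import numpy as np

# Illustrative only: express an uncertain input as a probability
# distribution instead of a single point estimate.
rng = np.random.default_rng(seed=1)

# Hypothetical emission rate (kg/day): median of 50, with a spread chosen
# so that roughly 95% of the probability lies within a factor of ~3 of the
# median (geometric standard deviation of about 1.75).
median = 50.0
gsd = 1.75
samples = rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=100_000)

# The samples trace out the PDF; summarize it with a few quantiles.
print(np.percentile(samples, [5, 50, 95]))
```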
From page 168...
... In particular, presentations of uncertainty will help in advancing the debate over whether the standardized procedures used to generate point estimates of risk are too "conservative" in general or particular cases.
· Their insights regarding the balance between the costs of overestimating and underestimating risk (i.e., the shape and breadth of the uncertainty distribution informs the manager about how prudent various risk estimates might be)
From page 169...
... TABLE 9-3 Some Key Variables in Risk Assessment for Which Probability Distributions Might Be Needed

Model Component | Output Variable | Independent Parameter Variable
Transport | Air concentration | Chemical emission rate; Stack exit temperature; Stack exit velocity; Mixing heights
Deposition | Deposition rate | Dry-deposition velocity; Wet-deposition velocity; Fraction of time with rain
Overland | Surface-water load | Fraction of chemical in overland runoff
Water | Surface-water concentration | River discharge; Chemical decay coefficient in river
Soil | Surface-soil concentration | Surface-soil depth; Exposure duration; Exposure period; Cation-exchange capacity; Decay coefficient in soil
Food chain | Plant concentration | Plant interception fraction; Weathering elimination rate; Crop density; Soil-to-plant bioconcentration factor
Food chain | Fish concentration | Water-to-fish bioconcentration factor
Dose | Inhalation dose | Inhalation rate; Body weight
Dose | Ingestion dose | Plant ingestion rate; Soil ingestion rate; Body weight
Dose | Dermal-absorption dose | Exposed skin surface area; Soil absorption factor; Exposure frequency; Body weight
Risk | Total carcinogenic risk | Inhalation carcinogenic potency factor; Ingestion carcinogenic potency factor; Dermal-absorption carcinogenic potency factor

SOURCE: Adapted from Seigneur et al., 1992.
From page 170...
... Subjective Probability Distributions
A different method of probability assessment is based on expert opinion. In this method, the beliefs of selected experts are elicited and combined to provide a subjective probability distribution.
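A minimal sketch of one way such elicited judgments might be turned into a subjective distribution is shown below: each hypothetical expert supplies a 5th, 50th, and 95th percentile for an uncertain quantity, a lognormal is fitted to each expert's judgments, and the experts are combined with an equal-weight linear opinion pool. The elicited values, the lognormal form, and the equal weighting are all illustrative assumptions, not a prescription from the committee.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
Z95 = 1.645  # standard-normal 95th percentile

# Hypothetical elicited judgments (5th, 50th, 95th percentiles) from
# three experts for some uncertain quantity, e.g., an exposure factor.
elicited = {
    "expert_A": (0.2, 1.0, 5.0),
    "expert_B": (0.5, 2.0, 8.0),
    "expert_C": (0.1, 0.8, 6.0),
}

def lognormal_from_quantiles(p05, p50, p95, size, rng):
    """Fit a lognormal to the median and the 5th-95th percentile spread."""
    mu = np.log(p50)
    # Average the sigmas implied by the lower and upper tails.
    sigma = 0.5 * ((np.log(p95) - mu) / Z95 + (mu - np.log(p05)) / Z95)
    return rng.lognormal(mean=mu, sigma=sigma, size=size)

# Equal-weight linear opinion pool: pool equal numbers of samples from
# each expert's fitted distribution.
n_per_expert = 50_000
pooled = np.concatenate([
    lognormal_from_quantiles(*q, size=n_per_expert, rng=rng)
    for q in elicited.values()
])

print("pooled 5th, 50th, 95th percentiles:",
      np.percentile(pooled, [5, 50, 95]))
```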
From page 171...
... Model Uncertainty: "Unconditional" Versus "Conditional" PDFs
Regardless of whether objective or subjective methods are used to assess them, the distinction between parameter uncertainty and model uncertainty remains pivotal and has implications for implementing improved risk assessments that acknowledge uncertainty. The most important difference between parameter uncertainty and model uncertainty, especially in the context of risk assessment, concerns how to interpret the output of an objective or subjective probability assessment for each.
From page 172...
... it would be mathematically correct to say the following: "The expected value of the estimate of the number of annual excess cancer deaths nationwide caused by exposure to this substance is 1,000; the LCL of this estimate is zero deaths, and the UCL is 2,000 deaths." 3 We contend that in such cases, which typify the two kinds of uncertainties that risk managers must deal with, it would be a mistake simply to report the confidence limits and expected value in Situation B as one might do more routinely in Situation A, especially if one then used these summary statistics to make a regulatory decision. The risk-communication problem in treating this dichotomous model uncertainty (Situation B)
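To make the Situation B problem concrete, the sketch below builds an "unconditional" distribution by mixing two hypothetical conditional distributions, one centered near zero excess deaths and one centered near 2,000, each given a 50% subjective probability. The mixture's mean is about 1,000, yet that value is close to neither model's prediction, which is the point about reporting only summary statistics. The distributional shapes and the 50/50 weighting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 100_000

# Two incompatible models, each judged 50% likely (subjective weights).
# Conditional on model 1 the excess deaths are essentially zero; conditional
# on model 2 they are centered near 2,000. Shapes are illustrative only.
deaths_model1 = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=n)
deaths_model2 = rng.normal(loc=2000.0, scale=150.0, size=n).clip(min=0)

# Unconditional ("mixed") distribution: pick a model at random for each draw.
pick_model2 = rng.random(n) < 0.5
unconditional = np.where(pick_model2, deaths_model2, deaths_model1)

print("mean of unconditional distribution:", unconditional.mean())   # ~1,000
print("5th / 95th percentiles:", np.percentile(unconditional, [5, 95]))
# The mean (~1,000) is close to neither conditional estimate, and the
# summary percentiles hide the fact that the distribution is bimodal.
```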
From page 173...
... This admonition is not inconsistent with our view that model uncertainty is important and that the ideal uncertainty analysis should consider and report all important uncertainties; we simply suspect that comprehension and decision-making might suffer if all uncertainties are lumped together indiscriminately. The subjective likelihood that each model (and hence each parameter uncertainty distribution)
From page 174...
... We have presented this discussion of the pitfalls of combining the results of incompatible models to support our view urging caution in applying these techniques in EPA's risk assessment. Such techniques should not be used for calculating unit risk estimates, because of the potential for misinterpretation of the quantitative risk characterization.4 However, we encourage risk assessors and risk managers to work closely together to explore the implications of model uncertainty for risk management, and in this context explicit characterization of model uncertainty may be helpful.
From page 175...
... Experience might show that emission estimates based on emission factors, mass balances, or material balances have an inherent uncertainty of a factor of about 100, whereas those based on testing tend to be within a factor of about 10. Expert opinion and analysis of past studies of such emission estimates could provide more definitive bounds on the estimates and result in a probability distribution.
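As a sketch of how such experience-based bounds could be translated into probability distributions, the example below reads "within a factor of K" as a 95% probability interval spanning the best estimate divided and multiplied by K, and assumes the uncertainty is lognormal; both the reading and the distributional form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
Z = 1.96  # two-sided 95% standard-normal bound

def emission_distribution(best_estimate, factor, size, rng):
    """Lognormal whose central 95% lies between estimate/factor and
    estimate*factor (an illustrative reading of 'within a factor of K')."""
    sigma = np.log(factor) / Z
    return rng.lognormal(mean=np.log(best_estimate), sigma=sigma, size=size)

n = 100_000
# Hypothetical best estimate of 100 kg/day for a source, characterized two
# ways: from generic emission factors (factor of ~100) and from direct
# stack testing (factor of ~10).
from_emission_factor = emission_distribution(100.0, 100.0, n, rng)
from_testing = emission_distribution(100.0, 10.0, n, rng)

for name, d in [("emission factor", from_emission_factor),
                ("stack testing", from_testing)]:
    lo, hi = np.percentile(d, [2.5, 97.5])
    print(f"{name}: 95% interval {lo:.1f} to {hi:.1f} kg/day")
```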
From page 176...
... A distribution for q1* derived from the entire binomial probability distribution for n, on the other hand, would answer both of these concerns. A second opportunity, which allows the analyst to draw out some of the model uncertainty in dose-response relationships, stems from the flexibility of the LMS model.
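The sketch below illustrates the first of these ideas in deliberately simplified form: instead of the LMS fit itself, it uses a one-hit (linear-in-dose) stand-in, propagates the full binomial sampling uncertainty in the tumor counts into the fitted slope, and reports a distribution rather than a single maximum-likelihood estimate or upper bound. The bioassay counts, the one-hit form, and the Beta representation of the binomial uncertainty are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

# Hypothetical bioassay: 50 animals per group, control and one dosed group.
dose = 10.0            # mg/kg-day (illustrative)
n_animals = 50
tumors_control, tumors_dosed = 2, 12

# Represent the binomial uncertainty in each observed tumor proportion with
# a Beta posterior (uniform prior), then convert each draw into a one-hit
# slope q via  P(d) = p0 + (1 - p0) * (1 - exp(-q * d)).
n_draws = 100_000
p_control = rng.beta(tumors_control + 1, n_animals - tumors_control + 1, n_draws)
p_dosed = rng.beta(tumors_dosed + 1, n_animals - tumors_dosed + 1, n_draws)

extra_risk = np.clip((p_dosed - p_control) / (1.0 - p_control), 1e-12, 1 - 1e-12)
q = -np.log(1.0 - extra_risk) / dose   # distribution of the slope, per mg/kg-day

print("median q:", np.median(q))
print("95th percentile (analogous to an upper-bound potency):",
      np.percentile(q, 95))
```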
From page 177...
... Statistical Analysis of Generated Probabilities
Once the needed subjective and objective probability distributions are estimated for each variable in the risk assessment, the estimates can be combined to determine their impact on the ultimate risk characterization. Joint distributions of input variables are often mathematically intractable, so an analyst must use approximating methods, such as numerical integration or Monte Carlo simulation.
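As a concrete illustration of the Monte Carlo approach, the sketch below propagates a few input distributions of the kinds listed in Table 9-3 (air concentration, inhalation rate, body weight, potency factor) through a simple inhalation risk equation. The equation, parameter values, and distributional choices are illustrative assumptions, not the agency's models.

```python
import numpy as np

rng = np.random.default_rng(seed=6)
n = 200_000

# Illustrative input distributions (loosely following the kinds of variables
# in Table 9-3); none of the numbers are taken from the report.
air_conc = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=n)   # mg/m^3
inhalation_rate = rng.normal(20.0, 3.0, n).clip(min=5.0)         # m^3/day
body_weight = rng.normal(70.0, 12.0, n).clip(min=30.0)           # kg
potency = rng.lognormal(mean=np.log(5e-2), sigma=1.0, size=n)    # (mg/kg-day)^-1

# Simple chronic inhalation dose and lifetime excess risk (illustrative form).
dose = air_conc * inhalation_rate / body_weight    # mg/kg-day
risk = 1.0 - np.exp(-potency * dose)

# Summaries a risk manager might ask for, instead of a single point estimate.
for p in (5, 50, 95):
    print(f"{p}th percentile risk: {np.percentile(risk, p):.2e}")
print(f"mean risk: {risk.mean():.2e}")
```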
From page 178...
... Quantitative dose-response assessment, with characterization of the uncertainty in the assessment, could then be conducted conditional on this set of inference options. Such a "conditional risk assessment" could then routinely be combined with an uncertainty analysis for exposure (which might not be subject to fundamental model uncertainty)
From page 179...
... Furthermore, because problems are not always resolved and analyses often need to be repeated, identification and characterization of the uncertainties can make the repetition easier.
Single Estimates of Risk
Once EPA succeeds in supplanting single point estimates with quantitative descriptions of uncertainty, its risk assessors will still need to summarize these distributions for risk managers (who will continue to use numerical estimates of risk as inputs to decision-making and risk communication)
From page 180...
... On the one hand, the study in Appendix G claims that EPA's estimate of MEI risk (approximately 10^-2) is in fact quite "conservative," given that the study calculates a "reasonable worst-case risk" to be only about 0.0015.6 However, we note that this study essentially compared different and incompatible models for the cancer potency of butadiene, so it is impossible to discern what percentile of this unconditional uncertainty distribution any estimate might be assigned (see the discussion of model uncertainty above)
From page 181...
... are case-specific and can rarely be estimated with adequate precision until an honest attempt at uncertainty analysis is made.
Risk Communication
Inadequate scientific and technical communication about risk is sometimes a source of error and uncertainty, and guidance to risk assessors about what to
From page 183...
... COMPARISON, RANKING, AND HARMONIZATION OF RISK ASSESSMENTS
As discussed in Chapter 6, EPA makes no attempt to apply a single set of methods to assess and compare default and alternative risk estimates with respect to parameter uncertainty. The same deficiency occurs in the comparison of risk estimates.
From page 184...
... Single Point Estimates and Uncertainty
EPA often reports only a single point estimate of risk as a final output. In the past, EPA has only qualitatively acknowledged the uncertainty in its estimates, generally by referring to its risk estimates as "plausible upper bounds" with a plausible lower bound implied by the boilerplate statement that "the number could be as low as zero." In light of the inability to discern how "conservative" an estimate might be unless one does an uncertainty analysis, both statements might be misleading or untrue in particular cases.
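One way to make the degree of "conservatism" of a point estimate explicit, once an uncertainty distribution is available, is to locate the point estimate within that distribution. The sketch below computes the percentile rank of a hypothetical "plausible upper bound" against a hypothetical Monte Carlo risk distribution; both the distribution and the point estimate are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical Monte Carlo output for the risk (stand-in for a real
# uncertainty analysis) and a hypothetical point estimate reported as a
# "plausible upper bound".
risk_samples = rng.lognormal(mean=np.log(1e-5), sigma=1.2, size=100_000)
point_estimate = 5e-5

# Fraction of the uncertainty distribution lying below the point estimate:
# the percentile at which the "upper bound" actually sits.
percentile = 100.0 * np.mean(risk_samples < point_estimate)
print(f"The point estimate falls at roughly the {percentile:.0f}th percentile "
      "of the uncertainty distribution.")
```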
From page 185...
... Comparison of Risk Estimates
EPA makes no attempt to apply a consistent method to assess and compare default and alternative risk estimates with respect to parameter uncertainty. Presentations of numerical values in an incomplete form lead to inappropriate and possibly misleading comparisons among risk estimates.
From page 186...
... · Rather than "harmonizing" risk assessments by picking one assumption over others when several assumptions are plausible and none is clearly preferable, EPA should maintain its own default assumption for regulatory decisions but indicate that any of the methods might be accurate, and present the results as an uncertainty in the risk estimate or present multiple estimates and state the uncertainty in each. However, "harmonization" does serve an important purpose in the context of uncertainty analysis: it will help, rather than hinder, risk assessment if agencies cooperate to choose and validate a common set of uncertainty distributions (e.g., a standard PDF for the uncertain exponent in the "body weight to the X power" equation or a standard method for developing a PDF from a set of bioassay data)
From page 187...
... Note that characterizing risks considering only the parameter uncertainty under the preferred set of models might not be as restrictive as it appears at first glance, in that some of the model choices can be safely recast as parameter uncertainties. For example, the choice of a scaling factor between rodents and humans need not be classified as a model choice between body weight and surface area that calls for two separate "conditional PDFs," but instead can be treated as an uncertain parameter in the equation R_human ∝ R_rodent × BW^a, where a might plausibly vary between 0.5 and 1.0 (see our discussion in Chapter 11)
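A minimal sketch of that recasting is shown below, assuming for illustration that the interspecies adjustment enters through a human-to-rodent body-weight ratio raised to the uncertain power a, with a drawn uniformly from the 0.5-1.0 range mentioned above; the body weights, the uniform distribution, and the ratio form are assumptions, not the committee's specification.

```python
import numpy as np

rng = np.random.default_rng(seed=8)

# Treat the interspecies scaling exponent a as an uncertain parameter rather
# than a discrete model choice. Illustrative assumptions: human ~70 kg,
# rat ~0.35 kg, and a uniform on [0.5, 1.0].
bw_ratio = 70.0 / 0.35
a = rng.uniform(0.5, 1.0, size=100_000)

# Scaling factor expressed relative to pure body-weight scaling (a = 1);
# a = 1 corresponds to body-weight scaling and a near 2/3 to surface-area
# scaling, so both traditional "models" sit inside one continuous distribution.
relative_scaling = bw_ratio ** (a - 1.0)

print("median relative scaling:", np.median(relative_scaling))
print("5th / 95th percentiles:", np.percentile(relative_scaling, [5, 95]))
```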

