Suggested Citation:"Summary." National Academies of Sciences, Engineering, and Medicine. 2022. Review of U.S. EPA's ORD Staff Handbook for Developing IRIS Assessments: 2020 Version. Washington, DC: The National Academies Press. doi: 10.17226/26289.

Summary

The U.S. Environmental Protection Agency’s (EPA) Integrated Risk Information System (IRIS) program develops human health assessments that focus on hazard identification and dose-response analyses for chemicals in the environment. These assessments are routinely used to inform risk assessments and risk management decisions across the agency, and they are also used by federal, state, local, and tribal agencies, as well as community organizations and agencies in other countries, to inform decisions concerning health risk assessment and management.

Over the years, questions have been raised about the scientific basis of the toxicity values reported in some IRIS assessments and about the extensive time taken to complete assessments. The 2014 National Academies of Sciences, Engineering, and Medicine report Review of EPA’s Integrated Risk Information System (IRIS) Process recommended that EPA develop a single handbook providing detailed guidance for all those involved in developing IRIS assessments. The ORD Staff Handbook for Developing IRIS Assessments (the handbook) provides guidance to the scientists who perform IRIS assessments, in order to foster consistency in the assessments and enhance transparency about the IRIS assessment process. It also gives stakeholder communities an opportunity to become aware of the processes and policies guiding those who draft IRIS assessments. The handbook describes the tasks involved in carrying out the sequential stages of preparing a draft IRIS assessment (see Box S-1).

EPA requested that the National Academies review the 2020 version of the handbook. In response, the Committee to Review EPA’s IRIS Assessment Handbook was convened to review the procedures and considerations for operationalizing the principles of systematic review, as well as the methods described in the handbook for determining the scope of IRIS assessments, evidence integration, extrapolation techniques, dose-response analyses, and characterization of uncertainties.

GENERAL COMMENTS ON THE HANDBOOK

The committee found that the handbook reflects the significant improvements that EPA has made in its IRIS assessment process. For instance, the handbook describes sophisticated, state-of-the-art methods that use systematic evidence maps to summarize literature characteristics for scoping and systematic review methods for hazard identification. Moreover, the IRIS program is clearly helping to advance the science of systematic review as applied to hazard identification. EPA staff are actively involved in the ongoing development of methods, such as study evaluation and the handling of mechanistic data. The committee recognizes that EPA faces challenges in implementing many of the methods for the IRIS assessment process and is impressed and encouraged by the progress that the IRIS program has made to date. The methods for developing IRIS assessments can serve as a model for other EPA programs that are implementing systematic review methods.


However, the committee found that the handbook does not convey the strengths and methodological advances of the IRIS assessment process evenly and clearly. The committee therefore offers recommendations aimed at ensuring that the handbook meets its objectives of providing transparency about the IRIS assessment process and providing operational instructions for those conducting the assessments. This summary presents the committee’s highest-priority recommendations, which it believes are critical for improving the scientific rigor and clarity of the handbook. Lower-priority suggestions can be found in the conclusions of the individual chapters of this report.

OVERVIEW OF ORGANIZATION AND CONTENT OF THE HANDBOOK

The overall organization and presentation of the handbook need improvement. For example, the roles of mechanistic and toxicokinetic (TK)1 data are described in multiple places in the handbook, and the descriptions are not entirely consistent. Similarly, multiple places in the handbook describe how the evidence base for susceptible populations should be handled, but what constitutes evidence of susceptibility and the types of data that may inform it are not defined in any one place. In addition, the handbook chapter on selecting studies for toxicity value determination does not follow directly from the earlier chapters; as described, hazard evaluation and toxicity value determination appear to be disconnected processes.

Recommendation 2.1: EPA should engage a professional editor with specific expertise in developing handbook-like materials to assist with the handbook revision. The editor should enhance the transparency and ease of use of the handbook by focusing the material on the concepts, definitions, and instructions needed to complete the main steps in the IRIS assessment process; eliminating unnecessary repetition among the chapters; and ensuring that terminology is used consistently across chapters.

The handbook uses some of its terminology inconsistently, which is a substantive issue. For instance, definitions of key terms such as “scoping” and “sensitivity” are currently scattered across different chapters of the handbook. In other cases, the handbook assigns unconventional definitions to terms designating various IRIS-specific processes and products in the area of evidence synthesis.

Recommendation 2.2: EPA should add a glossary to the handbook for defining key terms. Single definitions should be provided for concepts, and the definitions should be applied consistently throughout the handbook.

___________________

1 In this report, toxicokinetics refers to the absorption, distribution, metabolism, and excretion processes also called ADME or pharmacokinetics. However, consistent with the handbook, models are described as pharmacokinetic or physiologically based pharmacokinetic models rather than toxicokinetic models.


Recommendation 2.3: The handbook should use terminology in a manner that is consistent with existing, accepted definitions in related fields. When alternative definitions are used for the IRIS assessment process, the handbook should provide explicit justification.

The handbook does not clearly indicate where systematic review methods are used in the major steps of the IRIS assessment process and where they are not used.

Recommendation 2.4: When systematic review methods are being used for parts of the IRIS assessment process, this should be stated and the relevant methodological literature should be referenced.

The handbook does not adequately describe the overall process or flow for developing IRIS assessments and the iterative nature of some of the steps in the process.

Recommendation 2.5: EPA should create new graphical and tabular depictions of the IRIS assessment process as it is currently practiced, and should not feel constrained to mirror the process depicted by the 2014 National Academies report Review of EPA’s Integrated Risk Information System (IRIS) Process or any other report (including this one). A professional editor could be of assistance here.

The handbook includes very detailed information on some methods that may undergo rapid development (e.g., data extraction). On the other hand, the handbook lacks specific examples from relevant IRIS assessments and examples of software used by EPA, such as the Health Assessment Workspace Collaborative (HAWC), that are needed to fully understand the IRIS program’s methods.

Recommendation 2.6: The handbook should incorporate more examples from relevant IRIS assessments and examples of software used by EPA, such as HAWC. EPA could provide them as supplementary material or links to other content.

Funding bias is an association between research outcomes favorable to sponsors and the funding sources of a study or the financial ties of its investigators. Publication bias is the selective publication or non-publication of research results based on the nature and direction of those results. Both biases are mentioned only sparingly in the handbook, although empirical evidence from a variety of fields shows that they can alter effect estimates when evidence is synthesized. Funding bias might also affect the confidence ratings assigned to studies during evidence evaluation. Tools and methods to detect and assess these biases are available.
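
One family of such tools tests for funnel-plot asymmetry. The sketch below implements the core of Egger’s regression test; the numbers are synthetic, the ordinary least squares is simplified, and no significance test is included, so this is an illustration of the idea rather than a complete bias assessment procedure:

```python
def egger_intercept(effects, std_errors):
    """Egger's regression sketch: regress the standardized effect
    (effect / SE) on precision (1 / SE). An intercept far from zero
    flags funnel-plot asymmetry, one signal of possible publication
    or small-study bias."""
    y = [e / s for e, s in zip(effects, std_errors)]
    x = [1.0 / s for s in std_errors]
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    slope = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
            sum((xi - x_bar) ** 2 for xi in x)
    return y_bar - slope * x_bar

# Synthetic example: identical true effects yield an intercept near 0,
# while small studies (large SE) reporting inflated effects push the
# intercept upward.
ses = [0.5, 1.0, 2.0]
unbiased = egger_intercept([0.5, 0.5, 0.5], ses)       # ~0.0
biased = egger_intercept([0.5 + s for s in ses], ses)  # ~1.0
```

In practice such a regression would be accompanied by a visual funnel plot and a formal test of the intercept, but even this minimal form shows how asymmetry can be quantified rather than judged by eye.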

Recommendation 2.7: The handbook should describe how to detect and assess the effect of funding bias on the confidence of study ratings from evidence evaluation or effect estimates from synthesis.


Recommendation 2.8: The handbook should describe how to detect and assess the effect of publication bias on effect estimates from synthesis.

PLANNING THE IRIS ASSESSMENT

Chapters 1–7 and 10 of the handbook describe processes related to planning the development of IRIS assessments. The committee divided its critique of the overall planning process for IRIS assessments into three general phases: problem formulation, protocol development, and organization of the hazard review.

Problem Formulation

The problem formulation process described in the handbook presents an important evolution in the understanding of how to use systematic evidence maps (also called “literature inventories” in the handbook) to inform priority setting in toxicity assessments. The process of developing the systematic evidence maps is in accordance with established best practices. The problem formulation process makes suitable use of information specialists and is notable for its comprehensive coverage of both published and grey literature resources. The systematic evidence map is a key milestone in the IRIS process, as it incorporates information obtained from the scoping step and initial problem formulation step and forms the foundation of all subsequent analysis.

The committee notes that systematic evidence maps also have considerable potential value in and of themselves as a public good by providing a publicly accessible database that could be queried and used by any research organization to identify knowledge gaps and clusters in toxicity research.
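
The kind of public query the committee envisions can be illustrated with a minimal sketch. The records and field names below are hypothetical, not the IRIS program’s actual schema or any existing database interface:

```python
# Hypothetical evidence-map records; fields are illustrative only.
evidence_map = [
    {"study": "A", "species": "rat", "outcome": "liver"},
    {"study": "B", "species": "rat", "outcome": "kidney"},
    {"study": "C", "species": "mouse", "outcome": "liver"},
]

def knowledge_gaps(records, species, outcomes):
    """Return (species, outcome) pairs with no studies in the map --
    the 'knowledge gaps' a queryable evidence map could surface."""
    covered = {(r["species"], r["outcome"]) for r in records}
    return sorted(
        (s, o) for s in species for o in outcomes if (s, o) not in covered
    )

gaps = knowledge_gaps(evidence_map, ["rat", "mouse"], ["liver", "kidney"])
# gaps == [("mouse", "kidney")]
```

The same cross-tabulation, run over a real evidence map, is what would let outside research organizations locate clusters of toxicity research and the gaps between them.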

Recommendation 3.1: The handbook should make explicit which components of a literature inventory database are to be made publicly available and when.

Protocol Development

The handbook lacks clarity regarding the products of the planning process: the relationships among them, which of them are expected to be updated or registered, and how they feed into the IRIS assessment. In a systematic review, the protocol is a complete account of the planned methods, which should be registered before the review is conducted. “Registration,” in this context, is generally understood to mean the public release of the protocol in a time-stamped, read-only format. The handbook is not clear about exactly which documented output of the assessment process constitutes a protocol, as the term “protocol” seems to refer to as many as three types of documents, and the scope of each of these documents is also described ambiguously.

Recommendation 3.5: The handbook should clarify and simplify the assessment planning process as follows: restructure the handbook to directly reflect the order in which each step is undertaken, unambiguously identify each of the products of the planning process, clearly define what each product consists of, and state if and when each product is to be made publicly available.

Recommendation 3.6: EPA should create a time-stamped, read-only, final version of each document that details the planned methods for an IRIS assessment prior to conducting the assessment.

The committee observes that the processes for including and excluding studies in the systematic reviews conducted as part of IRIS assessments diverge from current best practices. In conventional systematic reviews, the inclusion and exclusion of evidence, as well as the delineation of “units of analysis” for evidence synthesis, are strictly governed by prespecified population, exposure, comparator, outcome (PECO) statements. However, the handbook, as well as recently released IRIS assessment planning documents (e.g., for Vanadium and Inorganic Mercury Salts), includes only broad PECO statements along with broad health effect categories. This contrasts with conventional systematic reviews, in which even within a relatively broad health effect category (e.g., cardiovascular disease), a detailed PECO statement with specific outcomes is prespecified to ensure full transparency and to minimize the potential for bias through selective inclusion of literature in a review.

Moreover, as currently described in the handbook, evidence may be excluded from further consideration at the “refined plan” step or at the “organize hazard review” step. The reasons for the triage of such evidence, and safeguards to ensure that the evidence is not being used selectively, are not explained sufficiently in the handbook.

Recommendation 3.7: The IRIS assessment protocol should include refined PECO statements for each unit of analysis defined at the levels of endpoint or health outcome. The development of the refined PECO statements could benefit from considerations of available mechanistic data (e.g., grouping together causally linked endpoints, separating animal evidence by species or strain) and TK information (e.g., grouping or separating evidence by route of exposure), and information about population susceptibility.
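
A refined PECO statement per unit of analysis can be pictured as a structured record. The fields and example values below are hypothetical illustrations, not drawn from any EPA document or assessment:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RefinedPECO:
    """Hypothetical refined PECO statement for one unit of analysis."""
    population: str
    exposure: str
    comparator: str
    outcomes: tuple       # specific, prespecified outcomes
    unit_of_analysis: str  # e.g., an endpoint or a health outcome

# Illustrative record: a unit of analysis at the health outcome level,
# with the specific outcomes prespecified rather than left open.
cardio = RefinedPECO(
    population="adults with occupational exposure",
    exposure="inhalation of chemical X, any duration",
    comparator="unexposed or lowest-exposure workers",
    outcomes=("hypertension", "ischemic heart disease"),
    unit_of_analysis="cardiovascular disease",
)
```

Writing one such record per unit of analysis, before study evaluation begins, is what makes the downstream inclusion and exclusion of evidence auditable.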

Organization of the Hazard Review

Although explicit consideration should be given to the organization of the hazard review during the planning stages of an IRIS assessment, the handbook leaves this organizing process under-specified. This is problematic because this stage requires the highest level of granularity in describing how evidence is to be selected, organized, and grouped before the data extraction stage of an assessment. Much more clarity is needed about which elements would be more appropriately subsumed under earlier stages. Moreover, this step deviates from best practices in conventional systematic reviews, in which all outcomes are prespecified and not subject to change after study evaluation. The handbook does not provide sufficient justification for revisiting the design of the systematic review after study evaluation. In particular, it does not clearly articulate what findings from study evaluation would be sufficient to change the analysis plan.

Recommendation 3.8: The steps of organizing the hazard review should be narrowed to focus on new information obtained after the study evaluation stage. Organizing the hazard review should be structured by clear criteria for triage and prioritization, and aimed at producing transparent documentation of how and why outcomes and measures are being organized for synthesis.

The Use of Mechanistic and Toxicokinetic Data and Key Characteristics

The roles of mechanistic and TK data in the planning process are described in multiple places, and the descriptions are not entirely consistent. When it is appropriate and possible to evaluate the strength of evidence of mechanistic and TK data or pharmacokinetic (PK) models using systematic review methods, these data may warrant their own PECO statement (or set of PECO statements). However, the handbook is not clear about when mechanistic and TK data or PK models require a separate PECO statement defining a discrete unit of analysis for systematic review, synthesis, and strength of evidence judgments.

Key characteristics (KCs) comprise the set of chemical and biological properties of agents that cause a particular toxic outcome. Although the use of KCs to search, screen, and organize mechanistic data is increasingly becoming accepted, the role of KCs for informing hazard identification has been the subject of debate. They are appropriate for use in evaluating biological plausibility, or lack thereof. However, KCs as currently constructed tend to be sensitive, but not necessarily specific. More research is needed into whether and how they can be used to be more predictive of hazard.

Recommendation 3.9: The handbook should describe how the IRIS assessment plan and IRIS assessment protocol can identify the potential roles of mechanistic and TK data, including if they are to be units of analysis for systematic review, synthesis, and strength of evidence judgments. At a minimum, all endpoints that may be used for toxicity values, including so-called “precursor” endpoints that might be viewed as “mechanistic,” should require separate PECO statements; however, application of systematic review methods to other mechanistic endpoints, such as mutagenicity, may depend on the needs of the assessment. The key mechanistic and TK questions should be identified to the extent possible in the IRIS assessment plan and IRIS assessment protocol documents.

Recommendation 3.10: When available, KCs should be used to search for and organize mechanistic data, identify data gaps, and evaluate biological plausibility. Those uses should be reflected in the IRIS assessment plan and IRIS assessment protocol.


STUDY EVALUATION

Chapter 6 of the handbook describes approaches for evaluating individual human and animal health effect studies and pharmacokinetic models, as well as an approach for evaluating mechanistic studies. The chapter describes circumstances under which a study may be excluded from the systematic review based on the outcome of study evaluation. Such exclusion, however, is inconsistent with recent recommendations to incorporate study evaluation ratings within the context of evidence synthesis.

Recommendation 4.1: The handbook should not use the results of study evaluation as eligibility criteria for the systematic review.

The handbook defines “sensitivity” as the ability of a study to detect a true association, with insensitive studies being prone to producing false negative results. The committee finds this definition ambiguous and potentially overlapping with the more established systematic review concepts of internal validity, external validity, and statistical precision.

Recommendation 4.2: EPA should evaluate whether aspects currently captured in the notion of “sensitivity” might be better described in the handbook with more established terminology (e.g., precision or generalizability) or better addressed at other points of the systematic review (e.g., risk of bias assessment or evaluation relative to PECO statement[s]). Otherwise, the handbook should provide a more concrete definition of “sensitivity” and a procedure for operationalizing its use in the study evaluation step.

The use of reporting quality as a distinct quality assessment item for study evaluation is not standard for systematic reviews, and procedures for evaluating reporting quality are very different for human epidemiological and animal toxicological studies.

Recommendation 4.3: The handbook should address the apparent difference in assessing reporting quality between the human epidemiological studies and animal toxicological studies by either (1) assessing reporting quality similarly in both types of studies or (2) providing an explicit rationale for why the concepts require different assessment procedures in different types of studies. In either case, the handbook should provide an explicit rationale for isolating elements of reporting quality from established systematic review concepts and evaluate whether aspects currently described as reporting quality might be better addressed at other points of the systematic review process.

EVIDENCE SYNTHESIS

Because many of the considerations for evidence synthesis within a group of outcomes of human or animal evidence are repeated (with slight variation) in Chapters 9 and 11 of the handbook, the transition from the synthesis step to integration is confusing. In addition, the procedures in Chapter 9 do not lead to a strength of evidence judgment for synthesis, which is covered in Chapter 11.

Recommendation 5.1: The handbook should consolidate its discussion of evidence synthesis in a single place. The discussion should include all of the considerations involved in making strength of evidence conclusions (currently in Chapter 9), as well as the criteria for different strength of evidence judgments (currently in Chapter 11). The handbook chapter describing synthesis of evidence should end with the methods for reaching strength of evidence conclusions for each unit of analysis, and how these are carried forward to evidence integration.

The unit of analysis for evidence synthesis and the strength of evidence conclusion is unclear, with respect to the breadth or narrowness of the evidence being synthesized. Although the handbook states that the evaluation of strength of evidence “will preferably occur at the most specific health outcome possible” (EPA, 2020a, p. 11-8), it is not clear how to proceed when there is more than one unit of analysis.

Recommendation 5.2: The unit of analysis for evidence synthesis and strength of evidence conclusions should be clearly defined as specified by the refined PECO statements recommended in Chapter 3 of this report. For example, a unit of analysis could be defined at the endpoint level (e.g., clinical chemistry) or outcome level (e.g., liver toxicity). If judgments may be made at both the endpoint and health outcome levels, details should be provided on how these judgments and the methods used to make them are distinct from each other.

The evidence synthesis approach outlined in the handbook appears to be a hybrid of a guided expert judgment approach and a more structured approach (see Chapter 5 of this report).

Recommendation 5.3: The handbook should provide justification for the initial rating for strength of evidence, as well as more detailed operationalization of the criteria used to upgrade or downgrade the evidence.

The handbook’s considerations of mechanistic and TK data, and PK or physiologically based pharmacokinetic (PBPK) models, outlined in evidence synthesis appear to mix in some elements of evidence integration, particularly through the concepts of “coherence” of study findings across different endpoints and “biological plausibility” of the findings. The result is confusing, especially when combined with the unclear transition between Chapters 9 and 11, as described above. The application of the term “coherence,” as described in the handbook, appears to be more appropriate during either (1) planning of the assessment (the biological relationship among different endpoints) or (2) evidence integration (through incorporation of mechanistic data). Overall, the applications of the term “biological plausibility” in the handbook appear to either (1) address considerations already covered elsewhere, such as consistency, or (2) involve comparison with data on mechanistic changes, which may be more appropriate to incorporate during the evidence integration step.

Recommendation 5.4: The handbook should restrict the applications of mechanistic and TK data or PK models in evidence synthesis and strength of evidence judgments to those relevant to each individual unit of analysis, such as addressing consistency and indirectness of evidence. Other applications of mechanistic and TK data or PK models, such as addressing coherence and elements of biological plausibility, could be addressed in evidence integration, either as a separate evidence stream or as support for the human or animal evidence streams. This recommendation should be implemented in the planning stage and reflected in the protocol (see Recommendation 3.9).

EVIDENCE INTEGRATION

Chapter 11 of the handbook contains information on three sequential but independent steps in a typical systematic review process: synthesizing the evidence (unit[s] of analysis within human and animal streams), rating the confidence in the body of the evidence, and integrating the evidence (integrating human and animal evidence and considering mechanistic evidence). As noted above, there is overlap with handbook Chapter 9 on evidence synthesis. The terms “synthesis,” “integration,” and “strength of the evidence” appear to be used almost interchangeably throughout these two chapters.

Recommendation 6.1: The handbook should separate and delineate the chapters with respect to evidence synthesis within a stream and evidence integration across data streams (see Recommendation 5.1). Synthesis, integration, and the judgments that are used to rate the evidence need to be clearly defined, distinct steps.

Recommendation 6.2: EPA should supplement Chapter 11 of the handbook with additional figures and examples from IRIS assessment documents. The addition of a terminology map (endpoint, outcome, synthesis, integration) would help define the steps that should occur at each level and, more specifically, what data are to be synthesized and where to expect the judgment narratives to be provided.

Mechanistic data appear to be used to strengthen conclusions of the individual data streams or when there is a lack of evidence in a single stream, and yet some aspects of the handbook treat mechanistic data as their own unit of analysis for evidence synthesis.

Recommendation 6.3: The handbook should clearly define the roles of mechanistic data and other supporting data in evidence integration and throughout the entire IRIS assessment development process (see Recommendations 3.9, 3.10, and 5.4).


STUDY SELECTION FOR DERIVING TOXICITY VALUES AND DERIVATION OF TOXICITY VALUES

EPA has made considerable progress in acting on the 2014 National Academies report recommendation that the IRIS program “continue its shift toward the use of multiple studies rather than single studies for dose‒response assessment” (NRC, 2014, p. 129) by developing formal methods for combining data from multiple studies. EPA also has developed a process for characterizing uncertainty and communicating confidence in derived toxicity values, as recommended in the 2014 report.

Despite these improvements, the methods described in Chapter 12 of the handbook lack a transparent discussion of considerations used to select studies. Chapter 7 of the committee’s report provides some key questions that could be considered to clarify study selection.

Recommendation 7.1: The handbook should provide a clearer, step-by-step description of study selection, using a framework incorporating the different steps of hazard identification (including study evaluation, synthesis, and integration) as well as new steps specific to toxicity value derivation. The handbook should provide a template for how IRIS assessments are to summarize (e.g., in a table) the study selection process as applied to each endpoint, health outcome, study, and evidence stream in order to provide transparency as to study evaluation for toxicity value derivation, and to support selection of overall toxicity values. It is especially important to capture study attributes for which EPA has designated an option as “preferred” versus “less preferred.”

Chapter 13 of the handbook provides important information relating to issues and considerations for developing points of departure (PODs) for toxicity values, but it lacks a consistent level of detail for deriving and utilizing PODs.

Recommendation 7.2: EPA should streamline Chapter 13 of the handbook, especially Section 13.2, to focus on the most common methods and approaches rather than detailing less common scenarios. For instance, although the use of PBPK modeling is designated as “preferred,” providing instructions on the development and application of such models would require too much detail in the handbook; citing other documents is preferable. If important information is missing from existing EPA documents or from the peer-reviewed literature, it could be provided in an appendix to avoid disrupting the flow of the handbook. Duplicating information also raises maintenance concerns, as any future update to the related EPA documents would require a corresponding update to the handbook.

Chapter 13 of the handbook is unclear as to the use of probabilistic approaches to replace the traditional deterministic uncertainty factor-based approach for toxicity value derivation.


Recommendation 7.3: EPA should make it explicit in the handbook that probabilistic approaches to derive risk-specific doses will be routinely applied where feasible, referencing recent literature including a 2020 case study on acrolein (Blessinger et al., 2020). EPA should also consider when and how to transition fully away from the traditional deterministic approach to adopt risk-specific doses for its IRIS toxicity values.
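
The contrast between a deterministic uncertainty-factor calculation and a probabilistic risk-specific dose can be shown with a toy Monte Carlo sketch. Nothing here reproduces EPA’s or Blessinger et al.’s actual methodology; the lognormal distributions, medians, and percentile choice are invented solely for illustration:

```python
import math
import random

def probabilistic_rfd(pod, n=10_000, seed=42):
    """Toy sketch of a probabilistic reference value: divide a point of
    departure (POD) by sampled uncertainty factors rather than fixed
    defaults, then report a lower percentile of the resulting
    distribution as a risk-specific dose."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # Lognormal interspecies and intraspecies factors, each with a
        # median of 3 and geometric standard deviation of 2 -- assumed
        # values chosen only for illustration.
        uf_inter = rng.lognormvariate(math.log(3), math.log(2))
        uf_intra = rng.lognormvariate(math.log(3), math.log(2))
        samples.append(pod / (uf_inter * uf_intra))
    samples.sort()
    return samples[int(0.05 * n)]  # 5th percentile
```

Under these assumed distributions, the 5th percentile lands well below the deterministic pod/(3 × 3), which is the point of the probabilistic approach: the conservatism of the final value becomes an explicit, quantified percentile rather than an implicit property of default factors.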

CONCLUDING REMARKS

Overall, the committee concluded that the handbook reflects the significant improvements that EPA has made in its IRIS assessment process. The methods for developing IRIS assessments can serve as a model for other EPA programs that are implementing systematic review methods. The committee believes that the recommendations provided in this report will help ensure that the handbook meets its objectives of providing transparency about the IRIS assessment process and providing operational instructions for those conducting the assessments.
