Suggested Citation:"6 Evidence Integration." National Academies of Sciences, Engineering, and Medicine. 2022. Review of U.S. EPA's ORD Staff Handbook for Developing IRIS Assessments: 2020 Version. Washington, DC: The National Academies Press. doi: 10.17226/26289.

6

Evidence Integration

This chapter provides the committee’s review of Chapter 11 of the ORD Staff Handbook for Developing IRIS Assessments (the handbook), “Evidence Integration” (EPA, 2020a). The committee considered whether the approaches described in that chapter of the handbook are scientifically sound and appropriate for integrating the various types of evidence relevant to investigating the potential for human health effects from exposure to environmental chemicals. The committee also commented on approaches for developing overall integration judgments that use three versus five judgment categories (Questions 6b and 7 in Appendix B).

OVERVIEW OF THE HANDBOOK’S MATERIAL ON EVIDENCE INTEGRATION

Chapter 11 of the handbook maps out the process the Integrated Risk Information System (IRIS) program intends to use for integrating human, animal, and mechanistic data to assess the human health hazard of the chemical being reviewed. Details are provided on the proposed evidence integration narratives with summary judgments and supporting rationale for these decisions. Although evidence integration is typically a stand-alone step in the systematic review process used to summarize data across evidence streams, there is overlap in the handbook with Chapter 9 “Analysis and Synthesis of Human and Experimental Animal Data,” which provides general considerations for evidence synthesis within a data stream.

The National Institute of Environmental Health Sciences’ Office of Health Assessment and Translation (OHAT) Handbook for Conducting a Literature-Based Health Assessment Using OHAT Approach for Systematic Review and Evidence Integration describes three distinct steps following the risk-of-bias assessment: synthesizing the evidence (within the human and animal streams), rating the confidence in the body of evidence, and integrating the evidence (integrating human and animal evidence and considering mechanistic evidence) (NTP, 2019). The judgments developed in the evidence integration step are intended to directly inform hazard identification and dose-response analysis. These steps, which OHAT defines as sequential but independent parts of the systematic review process, are nearly indistinguishable in Chapter 11 of the handbook.

RESPONSIVENESS TO PREVIOUS NATIONAL ACADEMIES REPORTS

Evidence integration has long been a challenge in toxicological systematic reviews, and previous National Academies reports have provided the U.S. Environmental Protection Agency (EPA) with several suggestions to improve the process. The 2011 National
Academies report Review of the Environmental Protection Agency’s Draft IRIS Assessment of Formaldehyde recommended the development of a standardized approach for evidence integration (NRC, 2011). The 2014 National Academies report Review of EPA’s Integrated Risk Information System (IRIS) Process commended EPA on its progress (NRC, 2014). That report provided additional recommendations to the IRIS program for developing a more systematic process for evidence synthesis and integration to enhance transparency, efficiency, and scientific validity. The development of standardized, structured evidence tables was recommended by both the 2011 and 2014 reports to support the evidence judgments and narratives. The 2018 National Academies report Progress Toward Transforming the Integrated Risk Information System (IRIS) Program: A 2018 Evaluation assessed changes that EPA implemented or planned to implement in response to the recommendations in the 2014 National Academies report (NASEM, 2018). Based on a review of materials presented by EPA to the authoring committee, the 2018 report indicated that the process and framework for evidence integration were consistent with state-of-the-art approaches taken by other scientific agencies, such as the National Toxicology Program, that face similar challenges.

The use and incorporation of mechanistic data in systematic reviews in toxicology continue to be a challenge, not only for EPA but for the systematic review community in general. The 2014 National Academies report acknowledged the problematic use of various kinds of mechanistic data, specifically where in the process such data should be incorporated and how to do so systematically. The 2014 report noted that solid conclusions about causality can be drawn without mechanistic information, for example, when there is strong and consistent evidence from animal or epidemiology studies, suggesting that mechanistic types of data are not necessarily needed in the integration step in order to reach a judgment of the evidence.

CRITIQUE OF METHODS FOR EVIDENCE INTEGRATION

Chapter 11 of the handbook, as currently written, has many strengths. Although the chapter’s organization has some issues, its approaches to strength-of-evidence judgments are detailed and well described, and they are likely to add to the transparency and consistency of the IRIS process for developing assessments. Chapter 11 also indicates that it is preferable for at least two reviewers to form evidence integration judgments independently. Involving multiple reviewers is an important feature of a systematic review, although it is unclear whether the reviewers remain the same throughout the review.

A tremendous amount of work has been done to improve evidence integration in risk assessment, but issues remain in the handbook that need to be addressed. The terms “synthesis,” “integration,” and “strength of the evidence” appear to be used almost interchangeably, when in fact these should be distinct steps in the systematic review process. As previously discussed, Chapter 11 overlaps somewhat with Chapter 9, and yet the two chapters list inconsistent considerations for evidence synthesis within a stream without clearly discussing the differences. For instance, Chapter 9 (Table 9-1 “Important considerations for evidence syntheses,” p. 9-3) lists study confidence, consistency, strength (effect magnitude) and precision, biological gradient/dose-response, coherence,
mechanistic evidence related to biological plausibility, and natural experiments. However, Chapter 11 (Table 11-2 “Considerations that inform evaluations and judgments of the strength of the evidence,” p. 11-10) lists risk of bias/sensitivity, consistency, strength (effect magnitude) and precision, biological gradient/dose-response, coherence, and mechanistic evidence related to biological plausibility. Although most of the terms are shared, the terms “study confidence,” “natural experiments,” “risk of bias,” and “sensitivity” each appear in only one of the two tables.

The handbook’s discussion of supporting studies, including those providing mechanistic data, does not assign them a clear role throughout the assessment, particularly during the evidence integration step. Mechanistic data appear to be used to support the strength-of-evidence conclusions of the individual data streams or when there is a lack of evidence in a single stream, and yet some aspects of the handbook treat mechanistic data as a separate stream. Table 11-1 of the handbook (p. 11-5) is an example of how these concepts have been intermingled, as it includes aspects of strength, synthesis, and the overall inferences across streams. Table 11-2 of the handbook (p. 11-10), appropriately titled “Considerations that inform evaluations and judgments of the strength of the evidence,” focuses on evaluating the data streams individually and therefore does not follow the chapter’s description of integrating across streams. The steps outlined in Figure 11-1 of the handbook (p. 11-3) are also unclear, and the figure lacks information on how multiple evidence conclusions are narrowed down to a single judgment. The synthesis step is an analysis of “like” studies to form the basis of conclusions; all of these studies are to have gone through the same study quality evaluation, including risk of bias. The inclusion of toxicokinetic (TK) data, physiologically based pharmacokinetic (PBPK) models, and mechanistic data (outside of the precursor or mechanistic endpoints that are included in the human or animal “effects” studies) requires very different approaches for evaluating study and model quality (e.g., EPA’s quality assurance project plan for PBPK models [EPA, 2018]). Furthermore, the role of TK data, PBPK models, and mechanistic data is to answer questions that arose in the synthesis step (e.g., if toxicity was seen in the oral gavage study but not in the drinking water study, are the findings consistent when TK differences across exposure routes are taken into account?).

Clarification of the evidence synthesis and integration processes also needs to address expectations for dose-response, as described in Chapter 12:

Ideally, the hazard synthesis and integration has clarified any important considerations, including mechanistic understanding, that would indicate the use of particular dose-response models, including chemical-specific or biologically based models, over more generic models (see Chapter 13). These considerations also include whether linked health effects within and between organ systems should be characterized together, as well as whether there is suitable mechanistic information to support combining related outcomes or to identify internal dose measures that may differ among outcomes (generally for animal studies). (EPA, 2020a, p. 12-2, lines 10-16)

While aspects such as linking of multiple health effects or evaluating whether animal toxicological outcomes are species-specific may be necessary for hazard identification, it is
likely that mechanistic and TK considerations will not be required. When and how to consider these data (e.g., during synthesis and integration or subsequently) needs to be defined more broadly so that mechanistic and TK data are not neglected in the assessment process.

As with other chapters in the handbook, consistent use of terminology is critical throughout Chapter 11. Multiple terms are used both independently and interchangeably, including “body of evidence,” “evidence stream,” “endpoint,” “outcome,” and “health effect.” This causes confusion in the terminology surrounding the unit of analysis and makes the description of the process difficult to interpret and nearly impossible to follow. A terminology map, such as the example shown in Figure 6-1 below, would help to clarify the types of data being referred to, how the data move through the process, and where judgments will be applied.

EPA asked the committee to consider whether the approaches described in Chapter 11 are scientifically sound and appropriate for integrating the various types of evidence relevant to investigating the potential for human health effects from exposure to environmental chemicals.

FIGURE 6-1 Example of a terminology map illustrating units of analysis.
Note: The figure shows the points at which synthesis, integration, and strength of evidence judgments may occur. In this example, the units of analysis for integration are the endpoint (clinical chemistry) for the human evidence stream, and the health outcome (liver toxicity) for the animal evidence stream.

As described in Chapter 5 of this report, the handbook is unclear about the distinction between evidence synthesis and evidence integration, making it difficult to determine whether the evidence integration process as presented in the handbook is scientifically sound and appropriate. In particular, there is inconsistency concerning whether the strength-of-evidence judgment is (1) the direct result of synthesis at the endpoint level (as shown for clinical chemistry in slide 34 of Thayer [2021]) or (2) the result of synthesizing multiple outcomes (as shown for liver clinical chemistry and liver histopathology in slide 35, where the individual animal outcomes were synthesized separately and then given a single judgment). The treatment of mechanistic evidence both as a separate stream (Table 11-1 of the handbook, p. 11-5) and as a set of factors that may increase or decrease certainty in the animal and human streams (Thayer, 2021, slide 35) compounds the difficulty of interpreting this chapter.

In addition, EPA asked the committee to comment on approaches using five categories versus three categories for drawing evidence integration conclusions. The agency also asked which approach is recommended and why, and whether any specific refinements are needed.

Table 11-5 of the handbook (p. 11-22) presents five categories of evidence integration judgments for characterizing potential human health hazards regarding environmental exposure to a chemical:

  • Evidence demonstrates that [chemical] causes [health effect].
  • Evidence indicates that [chemical] likely causes [health effect].
  • Evidence suggests that [chemical] may cause [health effect].
  • Evidence is inadequate to assess whether [chemical] may cause [health effect].
  • Strong evidence supports no effect.

The IRIS program included a three-category approach for evidence integration in several systematic review protocols proposed in EPA (2019a,b) and EPA (2020c):

  • Sufficient evidence for hazard.
  • Insufficient evidence.
  • Sufficient evidence to judge that a hazard is unlikely.

There are potential advantages to either set of categories. For example, five categories would align the approach for noncancer judgments with the 2005 cancer guidelines (EPA, 2005), while three categories might reduce variation in judgments across the health outcomes and endpoints considered for a particular chemical and may make the process simpler to implement. However, the committee sees no strong scientific rationale for favoring one approach over the other. A comparative assessment of the factors bearing on the use of each framework in an IRIS assessment, and a recommendation of one over the other, are important activities, but they fall outside the committee’s task of reviewing the handbook.

FINDINGS AND RECOMMENDATIONS

Finding: Chapter 11 of the handbook contains information on three sequential but independent steps in a typical systematic review process: synthesizing the evidence (units of analysis within human and animal streams), rating the confidence in the body of the evidence, and integrating the evidence (integrating human and animal evidence and considering mechanistic evidence).

Recommendation 6.1: The handbook should separate and delineate the chapters with respect to evidence synthesis within a stream and evidence integration across data streams (see Recommendation 5.1). Synthesis, integration, and the judgments that are used to rate the evidence need to be clearly defined, distinct steps. [Tier 1]

Finding: The process outlined in Chapter 11 of the handbook remains unclear, with ambiguous language and a lack of stepwise instructions adding to the confusion. The process was more clearly described in Thayer (2021).

Recommendation 6.2: EPA should supplement Chapter 11 of the handbook with additional figures and examples from other IRIS assessment documents. The addition of a terminology map (endpoint, outcome, synthesis, integration) would help define the steps that should occur at each level and, more specifically, what data are to be synthesized and where to expect the judgment narratives to be provided. [Tier 1]

Finding: The role of mechanistic data discussed in Chapter 11 (i.e., whether they constitute their own stream or are simply used to support the human and animal streams) remains unclear. As noted by the committee in Chapter 2, mechanistic data may play multiple roles in an IRIS assessment. Although many of these roles are described in Chapter 11 of the handbook, it is not clear which of them constitute a separate stream (as shown in Table 11-1, p. 11-5) and which simply support the human and/or animal streams (as described in Section 11.1 and Table 11-2, p. 11-10, of the handbook).

Recommendation 6.3: The handbook should clearly define the roles of mechanistic data and other supporting data in evidence integration and throughout the entire IRIS assessment development process (see Recommendations 3.9, 3.10, and 5.4). [Tier 1]

No Tier 2 or Tier 3 recommendations were presented in this chapter.
