5 Developing the Science Base and Assays to Implement the Vision

Rapid advances in the understanding of the organization and function of biologic systems provide the opportunity to develop innovative mechanistic approaches to toxicity testing. In comparison with the current system, the new approaches should provide wider coverage of chemicals of concern, reduce the time needed for generating the toxicity-test data required for decision-making, and use far fewer animals. Accordingly, the committee has proposed development of a testing structure that evaluates perturbations in toxicity pathways and relies on a mix of high- and medium-throughput assays and targeted in vivo tests as the foundation of its vision for toxicity testing. This chapter discusses the kinds of applied and basic research needed to support the new toxicity-testing approach, the institutional resources required to support and encourage that research, and the valuable products that can be expected during the transition from the current apical end-point testing to a mechanistically based in vivo and in vitro test system.
Most tests in the committee's vision would be unlike current toxicity tests, which generate data on apical end points. The mix of tests in the vision includes in vitro tests that assess critical mechanistic end points involved in the induction of overt toxic effects, rather than the effects themselves, and targeted in vivo tests that ensure adequate testing of metabolites and coverage of end points. The move toward a mechanism-oriented testing paradigm poses challenges. Implementation will require (1) the availability of suites of in vitro tests—preferably based on human cells, cell lines, or components—that are sufficiently comprehensive to evaluate activity in the toxicity pathways associated with the broad array of possible toxic responses; (2) the availability of targeted tests to complement the in vitro tests and ensure overall adequate data for decision-making; (3) models of toxicity pathways to support application of in vitro test results in predicting the general-population exposures that could potentially cause adverse perturbations; (4) infrastructure changes to support the basic and applied research needed to develop the tests and the pathway models; (5) validation of tests and test strategies for incorporation into chemical-assessment guidelines that will provide direction on interpreting and drawing conclusions from the new assay results; and (6) acceptance of the idea that the results of tests based on perturbations in toxicity pathways are adequately predictive of adverse responses and can be used in decision-making. Development of the new assays and the related basic research—the focus of this chapter—requires a substantial research investment over many years. Institutional acceptance of the new tests and the requisite new risk-assessment approaches—the focus of Chapter 6—also requires careful planning. Neither can occur overnight.
Ultimately, the time required to conduct the research needed to support large-scale incorporation of the new mechanistic assays into a test strategy that can adequately and rapidly address large numbers of agents depends on the institutional will to commit resources to support the changes. The committee believes that, with a concerted research effort, high-throughput test batteries could be developed over the next 10 years that would substantially improve the ability to identify toxicity hazards caused by a number of mechanisms of action. Those results in themselves would be a considerable advance. The time for full realization of the new test strategy, with its mix of in vitro and in vivo test batteries that can rapidly and inexpensively assess large numbers of substances with adequate coverage of possible end points, could be 20 or more years.

This chapter starts by discussing the basic research that will provide the foundation for assay development. It then outlines a research strategy and milestones. It concludes by discussing the scientific infrastructure that will support the basic and applied research required to develop the high-throughput and targeted testing strategy envisioned by the committee.

SCOPE OF SCIENTIFIC KNOWLEDGE, METHODS, AND ASSAY DEVELOPMENT

This section outlines the scientific inquiry required to develop the efficient and effective testing strategy envisioned by the committee. Several basic-research questions need to be addressed to develop the knowledge base from which toxicity-pathway assays and supporting testing technologies can be designed. The discussion here is intended to provide a broad overview, not a detailed research agenda. The committee recognizes the challenges and effort involved in addressing some of these research questions.

Knowledge Development

Knowledge critical for the development of high-throughput assays is emerging from biologic, medical, and pharmaceutical
research. Further complementary, focused research will be needed to address fully the key questions whose answers will support toxicity-pathway assay development. Those questions are outlined in Box 5-1 and elaborated below.

• Toxicity-pathway identification. The key pathways that, when sufficiently perturbed, will result in toxicity will be identified primarily from future, current, and completed studies of the basic biology of cell-signaling motifs. Identification will involve the discovery of the protein components of toxicity pathways and of how the pathways are altered by environmental agents. Many pathways are under investigation with respect to the basic biology of cellular processes. For example, the National Institutes of Health (NIH) has a major program under way to develop high-throughput screening (HTS) assays based on important biologic responses in in vitro systems. HTS has the potential to identify chemical probes of genes, pathways, and cell functions that may ultimately lead to characterization of the relationship between chemical structure and biologic activity (Inglese et al. 2006). Determining the number and nature of the toxicity pathways involved in human disease and impairment is an essential component of the committee's vision for toxicity testing.

• Multiple pathways. Adverse biologic change can occur from simultaneous perturbations of multiple toxicity pathways, and environmental agents typically affect more than one toxicity pathway. Although the committee envisions the design of a suite of toxicity tests that will provide broad coverage of biologic perturbations in all key toxicity pathways, perturbations in different pathways may lead to synergistic interactions with important implications for human health. For some adverse health effects, an understanding of the interplay of the multiple pathways involved may be important. For others, the research need will be to identify the pathway affected at the lowest dose of the environmental agent.
BOX 5-1 Key Research Questions in Developing Knowledge to Support Pathway Testing

Toxicity-Pathway Identification—What are the key pathways whose perturbations result in toxicity?

Multiple Pathways—What alteration in response can be expected from simultaneous perturbations of multiple toxicity pathways?

Adversity—What adverse effects are linked to specific toxicity-pathway perturbations? What patterns and magnitudes of perturbations are predictive of adverse health outcomes?

Life Stages—How can the perturbations of toxicity pathways associated with developmental timing or aging best be captured to enable the advancement of high-throughput assays?

Effects of Exposure Duration—How are biologic responses affected by exposures of different duration?

Low-Dose Response—What is the effect on a toxicity pathway of adding small amounts of toxicants in light of pre-existing endogenous and exogenous human exposures?

Human Variability—How do people differ in their expression of toxicity-pathway constituents and in their predisposition to disease and impairment?

• Adversity. An understanding of the possible diseases or functional losses that may result from specific toxicity-pathway perturbations will support the use of pathway perturbations for decision-making. Current risk assessments rely on toxicity tests that demonstrate apical adverse health effects, such as disease or functional deficits, that are at various distances downstream of the toxicity-pathway perturbations. In the committee's vision, the assessment of potential human health impact will be based on perturbations in toxicity pathways. For example, activation of estrogenic action to abnormal levels during pregnancy is associated with undescended testes and, in later life, testicular cancer. Research will be needed to understand the patterns and magnitudes of the perturbations that will lead to adverse effects. As part of the
research, biomarkers of effect that can be monitored in humans and studied in whole animals will be useful.

• Life stages. An understanding of how pathways associated with developmental timing or aging can be adversely perturbed and lead to toxicity will be needed to develop high-throughput assays that can capture and adequately cover developmental and senescent life stages. Many biologic functions require coordination and integration of a wide array of cellular signals that interact through broad networks that contribute to biologic function at different life stages. That complexity of pathway interaction holds for reproductive and developmental functions, which are governed by parallel and sequential signaling networks during critical periods of biologic development. Because of the complexity of such pathways, the challenge will be to identify all the important pathways that affect such functions to ensure adequate protection against risks to the fetus and infant. The research will involve elucidating temporal changes in key toxicity pathways that might occur during development and the time-dependent effects of exposure on those pathways.

• Effects of exposure duration. The dose of and response to a toxicant in the whole organism depend on the duration of exposure. Thus, conventional toxicity testing places considerable emphasis on characterizing risks associated with exposures of different duration, from a few days to the test animal's lifetime. The ultimate goal in the new paradigm is to evaluate the conditions under which human cells are likely to respond and to ensure that those conditions do not occur in exposures of human populations. Research will be needed to understand how the dose-response relationships for perturbations might change with the duration of exposure and to understand pathway activation under acute, subchronic, and chronic exposure conditions. The research will involve investigating the differential responses of cells of various ages and backgrounds to a toxic compound and
possible differences in the responses of cells between people of different ages.

• Low-dose response. The assessment of the potential for an adverse health effect from a small environmental exposure involves an understanding of how the small exposure adds to pre-existing exposures that affect the same toxicity pathways and disease processes. For the more common human diseases and impairments, a myriad of exposures from food, pharmaceuticals, the environment, and endogenous processes have the potential to perturb the underlying toxicity pathways. Understanding how a specific environmental exposure contributes, with the other exposures, to modulation of a toxicity pathway is critical for understanding low-dose response. Because the toxicity tests in the committee's long-range vision are based largely on cellular assays involving sensitive biomarkers of alterations in biologic function, it will be possible to study the potential for adverse human health effects at doses lower than is possible with conventional whole-animal tests. Given the cost-effectiveness of the computational methods and in vitro tests that form the core of the toxicity testing, it will be efficient to evaluate effects at multiple doses and so build a basis of detailed dose-response research.

• Human variability. People differ in their expression of toxicity-pathway constituents and consequently in their predisposition to disease and impairment. An understanding of differences among people in the responsiveness of particular toxicity pathways is needed to interpret the importance of small environmental exposures. The comprehensive mapping of toxicity pathways provides an unprecedented opportunity to identify gene loci and other determinants of human sensitivity to environmental exposures.
That research will support the development of biomarkers of exposure, effect, and susceptibility for surveillance in the human population, and these discoveries in turn will support an assessment of host susceptibility for use in extrapolating results from the in vitro assays to the general population and susceptible groups. The enhanced ability to characterize interindividual differences in sensitivity to environmental exposures will provide a firmer scientific basis for the establishment of human exposure guidelines that can protect susceptible subpopulations.

Research on most or all of the above subjects is under way in the United States and internationally. It is taking place in academe, industry, and government institutions and is funded by foundations and the federal government, mainly to understand the basis of human disease and treatment. Private firms, such as pharmaceutical and biotechnology companies, conduct the research for product development. However, efforts directed specifically toward developing toxicity-testing systems are small.

Test and Analytic Methods Development

The research described above will provide the foundation for the development of toxicity tests and comprehensive testing approaches. The categories of toxicity tests and methods to be developed are outlined below, and the primary questions to be answered in their development are presented in Box 5-2.

• Methods to predict metabolism. A key issue to address at an early phase will be the development of methods to ensure adequate testing for metabolites in high-throughput assays. Understanding the range of metabolic products and the variation in metabolism among humans, and being able to simulate human metabolism as needed in test systems, are critical for developing valid toxicity-pathway assays. Without such methods, targeted in vivo assays will be needed to evaluate metabolism.

• Chemical-characterization tools. In addition to metabolism, further development of tools to support chemical characterization
will be important. The tools will include computational and structure-activity relationship (SAR) methods to predict chemical properties, potential initial interactions of a chemical and its metabolites with cellular molecules, and biologic activity.

BOX 5-2 Main Questions in Developing Tests and Methods

Methods to Predict Metabolism—How can adequate testing for metabolites in the high-throughput assays be ensured?

Chemical-Characterization Tools—What computational tools can best predict chemical properties, metabolites, xenobiotic-cellular and molecular interactions, and biologic activity?

Assays to Uncover Cell Circuitry—What methods will best facilitate the discovery of the circuitry associated with toxicity pathways?

Assays for Large-Scale Application—Which assays best capture the elucidated pathways and best reflect in vivo conditions? What designs will ensure adequate testing of volatile compounds?

Suite of Assays—What mix of pathway-based high- and medium-throughput assays and targeted tests will provide adequate coverage? What targeted tests should be developed to complement the toxicity-pathway assays? What are the appropriate positive and negative controls that should be used to validate the assay suite?

Human-Surveillance Strategy—What surveillance is needed to interpret the results of pathway tests in light of variable human susceptibility and background exposures?

Mathematical Models for Data Interpretation and Extrapolation—What procedures should be used to evaluate whether humans are at risk from environmental exposures?

Test-Strategy Uncertainty—How can the overall uncertainty in the testing strategy best be evaluated?
A National Research Council report (NRC 2000) indicated that early cellular interactions are important in understanding potential toxicity; they include receptor-ligand interactions, covalent binding with DNA and other endogenous molecules, peroxidation of lipids and proteins, interference with sulfhydryl groups, DNA methylation, and inhibition of protein function. Good predictive methods for chemical characterization will reduce the need for targeted testing and enhance the efficiency of the testing.

• Assays to uncover cell circuitry. Development of methods to facilitate the discovery of the circuitry associated with toxicity pathways will involve functional genomic techniques for integrating and interpreting various data types and for translating dose-response relationships from simple to complex biologic systems, for example, from the pathway to the tissue level. It will most likely require improved methods in bioinformatics, systems biology, and computational toxicology. Advances in overexpression with complementary DNA (cDNA) and in gene knockdown with small inhibitory RNAs are likely to allow improved pathway mapping and will also lead to studies with cells or cell lines that are more readily transfectable.

• Assays for large-scale application. Several substantive issues will need to be considered in developing assays for routine application in a testing strategy. First, as pathways are identified, medium- and high-throughput assays that adequately evaluate the pathways and human biology will be developed, including new, preferably human, cell-based cultures for assessment of perturbations. Second, the assay designs that best capture the elucidated pathways and can be applied for rapid large-scale testing of chemicals will need to be identified. Third, an important design criterion will be that assays adequately reflect the in vivo cellular environment; for any given assay, that will involve an understanding of the elements of the human cellular environment that must be simulated and of the culture conditions that affect response. Fourth, the molecular evolution of cell lines during passage in culture, and the interlaboratory differences that can result, will have to be controlled for.
Fifth, approaches for the testing of volatile compounds will require early attention in the development of high-throughput assays; this has been a challenge for in vitro test systems in general. Sixth, assay sensitivity (the probability that the assay identifies the phenomenon that it is designed to identify) and assay specificity (the probability that the assay does not identify a phenomenon as occurring when it does not occur) will be important considerations in assay design. Individual assays and test batteries should be able to predict accurately the effects that they are designed to measure without undue numbers of false positives and false negatives. And seventh, it will be important to achieve flexibility to expand or contract the suites of assays as more detailed biologic understanding of health and disease states emerges from basic research studies.

• Suite of assays. An important criterion in the development of a suite of assays for assessing the potential of a substance to cause a particular type of disease or group of toxicities will be adequate coverage of causative mechanisms, affected cell types, and susceptible individuals. Ensuring the right mix of pathway-based high-throughput assays and targeted tests will involve research. For diseases whose toxicity pathways are not fully understood, targeted in vivo or other tests may be included to ensure adequate coverage.

• Human-surveillance strategy. Human data on the fundamental biologic events involved in the activation of toxicity pathways will aid the interpretation of the results of high-throughput assays. They will provide the basis for understanding the determinants of human susceptibilities related to a toxicity pathway and of background exposures to compounds affecting the pathway. Research will be needed to assess how population-based studies can best be designed and conducted to complement high-throughput testing and provide the information necessary for data interpretation.

• Mathematical models for data interpretation and extrapolation.
Procedures for evaluating the impact of human exposure concentrations will involve pharmacokinetic and other modeling methods to relate cell-media concentrations to human tissue doses and biomonitoring data and to account for exposure patterns and interindividual variability. To facilitate interpretation of high-throughput assay results, models of toxicity pathways (see Chapter 3) and other techniques will be needed to address differences among people in their levels of activation of particular response pathways. Although it is not a key aspect of the current vision, research may in the distant future enable the development of biologically based dose-response models of apical responses for risk prediction.

• Test-strategy uncertainty. Methods to evaluate the overall uncertainty in a possible testing strategy will assist the validation and evolution of the new methods. Formal methods could be developed that use systematic approaches to evaluate the uncertainty in predicting, from test-battery results, the doses that should be without biologic effect in human populations. Those uncertainty evaluations can be used in the construction and selection of testing strategies.

Whether the testing strategy will detect and predict harmful exposures will depend on whether the major toxicity pathways are addressed by the high-throughput assays or covered by the targeted in vivo and other tests. To ensure that the test system is adequate, the committee envisions a multipronged approach that includes the following components:

• A continuing research and evaluation program to develop, improve, and assess the testing program.

• Adequate validation of the assays, including examination of false-negative and false-positive rates, by applying the assays to sufficient numbers of chemicals of known toxicity.

• A robust program of biomonitoring, human health surveillance, and molecular epidemiology to assess exposures and early indicators of toxicity, to aid in the interpretation of high-throughput assay results, and to monitor exposures to ensure that toxic ones are not missed.
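The arithmetic behind false-negative and false-positive rates is simple to make concrete. The sketch below, using purely illustrative counts rather than data from any actual validation study, shows how an assay's performance against reference chemicals of known toxicity might be summarized:

```python
# Hypothetical sketch: summarizing a high-throughput assay's performance
# against reference chemicals of known toxicity. The counts used below are
# illustrative, not data from any actual validation study.

def assay_performance(tp, fp, tn, fn):
    """Return sensitivity, specificity, and the derived error rates."""
    sensitivity = tp / (tp + fn)  # probability the assay flags a true toxicant
    specificity = tn / (tn + fp)  # probability the assay clears a true negative
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "false_negative_rate": 1 - sensitivity,
        "false_positive_rate": 1 - specificity,
    }

# Example: 90 of 100 known toxicants flagged; 8 of 100 known negatives flagged.
metrics = assay_performance(tp=90, fp=8, tn=92, fn=10)
```

In this illustration the assay would have a sensitivity of 0.90 and a specificity of 0.92; whether such rates are acceptable depends on how the assay's results will be used in decision-making.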
Aspects of those endeavors are discussed in the following sections.

STRATEGY FOR KNOWLEDGE AND ASSAY DEVELOPMENT AND VALIDATION

The research strategy to develop the computational tools, suites of in vitro assays, and complementary targeted tests envisioned by the committee will likely involve contributions on multiple fronts, including the following:

• Basic biologic research to obtain the requisite knowledge of toxicity pathways and the potential human health impacts when the pathways are perturbed.

• Science and technology milestones that ensure timely achievement of assay and tool development for the new paradigm.

• Phased basic and applied research to demonstrate success in the transition to the testing emphasis on toxicity pathways.

The basic-research effort will be directed at discovering and mapping the toxicity pathways that are the early targets of perturbation by environmental agents and at understanding how agents cause the perturbations. It will be followed by research focused on the design of assays that can be used to determine, first, whether an agent has the potential to perturb a pathway and, if so, the levels and durations of exposure required. The scientific inquiry will involve research at multiple levels of biologic organization, that is, understanding the nature of toxicity pathways at the molecular and cellular levels and how toxicity-pathway alterations may translate to disease processes in tissues, organs, and the whole organism. Some of the tools and technologies that enable this research are described in Chapter 4.
In each broad field of toxicity testing, such as neurotoxicology and reproductive and developmental toxicity, systematic approaches to assay development, assay validation, and generalized acceptance of the assays will be organized and pursued. As the research questions presented in the previous section are answered, milestones would be achieved in an orderly manner. Some important milestones to move from pathway research through assay development to validated test strategies are presented in broad brush strokes in Box 5-3. The committee recognizes that the implementation of its recommendations would entail extensive planning and expert deliberation; through those processes, the important milestones would be subdivided, elaborated, reshaped, and perhaps even replaced.

The research would progress in sequential phases whose timelines would overlap. The committee finds that four phases would evolve as follows:

Phase I: Toxicity-pathway elucidation. A focused research effort is pursued first to understand the toxicity pathways for a select group of health effects (that is, apical end points) or molecular mechanisms. Early in this first phase, a data-storage, -access, and -management system would be established to enable broad use of the data being generated to facilitate the understanding of the toxicity pathways and research and knowledge development in later phases. A third element of this phase would involve developing standard practices for research methods and for reporting of results so that they are understandable and accessible to a broad audience of researchers and to facilitate consistency and validity in the research methods used. Research in this phase would also focus on developing tools for predicting metabolism, characterizing chemicals, and planning a strategy for human surveillance and biomonitoring of exposure, susceptibility, and effect markers associated with the toxicity-pathway perturbations.
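To make the Phase I data-management and standard-reporting goals concrete, the hypothetical sketch below shows one minimal form a shared assay record might take. The field names and structure are illustrative assumptions, not a published reporting standard:

```python
# Illustrative sketch only: one minimal form a standardized toxicity-pathway
# assay record might take in the shared data-management system. The field
# names are hypothetical assumptions, not a published reporting standard.
from dataclasses import asdict, dataclass

@dataclass
class AssayRecord:
    chemical_id: str         # e.g., a CAS registry number
    pathway: str             # toxicity pathway probed (e.g., "ER signaling")
    assay_name: str          # specific assay used
    concentration_um: float  # nominal test concentration, micromolar
    response: float          # normalized assay response (0 = vehicle control)
    lab: str                 # reporting laboratory, for cross-lab comparison

# A single record (17beta-estradiol in a hypothetical reporter-gene assay).
record = AssayRecord("50-28-2", "ER signaling", "ER-reporter-gene",
                     1.0, 0.85, "lab-A")
row = asdict(record)  # dictionary form, ready for a shared database or file
```

Agreeing on even a simple shared record format of this kind is what would allow results from academic, government, and industry laboratories to be pooled and compared.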
BOX 5-3 Some Science and Technology Milestones in Developing Toxicity-Pathway Tests as the Cornerstone of Future Toxicity-Testing Strategies

Develop rapid methods and systems to enable in vitro dosing with chemical stressors (including important metabolites and volatile compounds).

Create and adapt human, human-gene-transfected rodent, and other cell lines and systems, with culture-medium conditions, to have an adequate array of in vitro human cell and tissue surrogates.

Adapt and develop technologies to enable the full elucidation of the critical toxicity pathways causing the diseases by the mechanisms selected for pilot-project study.

Develop toxicity-pathway assays that fully explore the possible effects of exogenous chemical exposure on the diseases and mechanisms selected for a pilot-project study, thereby demonstrating proof of concept.

Establish efficient approaches for validating suites of high-throughput assays.

Develop the infrastructure for data management, assay standardization, and reporting to enable broad data-sharing across academic, government, industry, and nongovernment-organization sectors and institutions.

Phase II: Assay development and validation. High- and medium-throughput assays would be developed for the toxicity pathways and for the points of chemical perturbation in the pathways organized for assay development. During this phase, attempts would be made to develop biologic markers of exposure, susceptibility, and effect for use in surveillance and biomonitoring of human populations in which these toxicity pathways might be activated.

Phase III: Assay relevance and validity trial. The third phase would explore assay use, usually in parallel with traditional apical tests. It would screen chemicals that would not otherwise be
tested and would begin the biomonitoring and surveillance of human populations.

Phase IV: Assembly and validation of test batteries. Suites of assays would be proposed and validated for use in place of identified apical tests.

Some of the key science and technology development activities for the phases are listed in Figure 5-1, and some of the critical aspects are described below. All phases would include research on toxicity pathways. Progression through the phases would involve exploring the research questions outlined in Box 5-1.

Phase I: Toxicity-Pathway Elucidation

Research to Understand Toxicity Pathways

Phase I research would develop the pathway knowledge from which assays for health effects would emerge. Systems-biology approaches—including molecular profiling microarrays, pathway mining, and other high-resolution techniques—would reveal key molecular interactions. Mechanistic understanding provides the basis for identifying the key molecular "triggers" or mechanisms of interaction that can alter biologic processes and ultimately cause toxicity after an environmental exposure. Those nodal triggers or interactions would be modeled in vitro and computationally to provide a suite of appropriate assays for detecting toxicity-pathway perturbations and the requisite tools for describing dose-response relationships.

Early efforts would explore possible toxicity pathways for health effects for which there is fairly advanced knowledge of mechanisms of toxicity, molecular signaling, and interactions. As a
case study, the following sketches out how knowledge development might begin for toxic responses that are associated with estrogenic signaling alterations caused by agonists and antagonists of estrogen function.

FIGURE 5-1 Progression of some important science and technology activities during assay development:

Phase I—Elucidate toxicity pathways. Establish data-storage and -management systems. Establish practices for assay conduct and reporting. Plan human-surveillance and -biomonitoring strategy.

Phase II—Develop a suite of representative human cell lines and cultures. Develop and validate high- and medium-throughput assays. Develop biomarkers of exposure, susceptibility, and effect for human surveillance and biomonitoring.

Phase III—Gain experience through testing mechanistic assays in parallel with traditional apical tests, on chemicals with large datasets of apical tests, and by screening chemicals that would not otherwise be tested. Begin biomonitoring and surveillance of human populations.

Phase IV—Propose and then validate suites of assays for use in place of identified apical tests.

Even our current appreciation of the number of potential toxicity pathways highlights the breadth of responses that might be evaluated in various high-throughput assays. Consideration of adverse responses at the level of the intact organism that might be associated with altered signaling through estrogen-receptor-mediated responses illustrates some of the challenges. Xenobiotic-caused alteration in estrogen signaling can occur or be measured at a number of points in the various processes that affect estrogen action, including steroidogenesis, hormone transport and elimination, receptor binding and alteration in numbers of receptors, and changes in nuclear translocation. Those pathways may also be evaluated at different levels of organization—ligand binding, receptor translocation, transcriptional activation, and integrated cellular responses. Some of the processes are outlined here.

• Estrogen steroidogenesis. Upstream alterations in steroidogenesis pathways or in other independently regulated pathways that affect endocrine signaling would be explored. Knowledge development would focus on understanding the enzymatic function of key steroidogenesis pathways and the interactions of the pathways with each other and on understanding how key elements of the pathways might be altered, including alterations of precursors, products, and metabolites when pathway dysregulation occurs. The research might involve quantitative assessment of key enzyme functions in in vitro and in vivo systems, analytic techniques to measure the various metabolites, and modeling to understand the targets and key steps that undergo estrogen-related dysregulation. Other assays would develop SAR information on compounds already associated with altered steroidogenesis in other situations.

• Estrogen-receptor interactions. Much is known about the molecular interactions between xenobiotics and estrogen receptors (ERs): for example, direct xenobiotic interaction with ERs, including differential interaction with specific ER subtypes, such as ER-α and ER-β; xenobiotic interactions with discrete receptor domains that give rise to different biologic consequences, such as interactions with the ligand-binding domain that could cause conformational changes that activate or inhibit signaling; and direct xenobiotic interactions with other components of the ER complex, including accessory proteins, coactivators, and other coregulatory elements.
Most responses associated with altered estrogen signaling would be more easily evaluated in assays that assess a larger-scale function, such as receptor activation of estrogen-mediated transcription of reporter genes or estrogen-mediated cell responses (for example, proliferation of estrogen-sensitive cells in vitro).
• Processes that lead to estrogenic transgenerational epigenetic effects. Assay development to address estrogen-induced transgenerational epigenetic effects would involve understanding how early-life exposures to estrogenic compounds permanently alter transcriptional control of genes, how such early-life exposures might be priming events for later-life alterations in reproductive competence or the development of cancer, and how such exposures may produce transgenerational effects. Specific approaches in this research might include genomewide methods to analyze patterns of DNA methylation with and without estrogenic exposure, quantification of histone modifications, measurement of microRNAs, and the dissection and mechanistic understanding of the hormonal inputs to those epigenetic regulatory phenomena.

Those are just a few examples of the kinds of research on estrogenic compounds that would support assay development. The approaches include relatively small-scale research efforts for processes that are fairly well understood (such as direct ligand-receptor interactions) and larger endeavors for the yet-to-be-explained (such as the epigenetic and transgenerational effects of early-life exposure to estrogenic compounds). A holistic understanding of estrogenic and other pathways and signaling in humans would be built incrementally on studies in a wide variety of species and tissues. New information from basic studies in biology is likely to lead to improved assays for testing specific toxicity pathways.

The identified estrogenic pathways and signaling processes, once understood, would serve as the substrate for further pathway mining to highlight the critical events that could be tested experimentally in assay systems, that is, events that are obligatory for downstream apical responses and that occur at the lowest exposure of a biologic system to an environmental agent.
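Reporter-gene readouts of the kind described above are commonly summarized with a sigmoidal (Hill) concentration-response curve, which can then be inverted to estimate the exposure producing a small, defined perturbation. The sketch below is illustrative only; the function names and parameter values (emax, ec50, Hill coefficient) are assumptions for demonstration, not values from the report:

```python
def hill_response(conc, emax=1.0, ec50=1e-8, n=1.2):
    """Fractional reporter-gene activation at ligand concentration conc (molar).

    emax: maximal response; ec50: concentration at half-maximal response;
    n: Hill coefficient (steepness). All defaults are illustrative placeholders.
    """
    if conc <= 0.0:
        return 0.0
    return emax * conc**n / (ec50**n + conc**n)


def conc_for_response(frac_of_max, emax=1.0, ec50=1e-8, n=1.2):
    """Invert the Hill curve: concentration producing a given fraction of emax."""
    f = frac_of_max / emax
    if not 0.0 < f < 1.0:
        raise ValueError("fraction must lie strictly between 0 and emax")
    return ec50 * (f / (1.0 - f)) ** (1.0 / n)


# A 10%-of-maximum activation level as a candidate low-perturbation benchmark.
benchmark_conc = conc_for_response(0.10)
```

By construction, the response at ec50 is half-maximal, and the inversion recovers the concentration for any intermediate activation level. Fitting emax, ec50, and n to actual assay data would require a nonlinear-regression step not shown here.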
With studies on the organization of the response circuitry controlling toxicity-pathway responses, a dose-response model could be based on the key nodal points in the circuits that control the perturbations rather than on the overall detail of all steps in the signaling process.

Assessing Validity of Pathway Knowledge and Linkage to Adversity at the Organism Level

The next step in pathway elucidation would be assessment of the validity of the pathway knowledge, which would proceed in two steps and involve the broader scientific community.

First, the validity would be tested by artificially modulating the pathways to establish that the predicted downstream molecular consequences are consistent and measurable. The perturbations could be produced, for example, with standard reference compounds, such as 17β-estradiol, or with discrete molecular probes, such as genetically modified test systems, knockout models, or other interventions with siRNA or small-molecule inhibitors of key enzymes or other cellular factors.

Second, the consequences of pathway disruption for the organism (the linkage of molecular events to downstream established biologic effects considered to be adverse or to human disease) would be assessed. For perturbations of estrogen signaling, this might include linkage with results from short-term in vivo assays, such as an increase in uterine weight in rats in the uterotrophic assay. The link between the toxicity pathways and adverse effects at the level of the whole organism would be assessed in a variety of in vivo and in vitro experiments.

Development of Data-Storage, Data-Access, and Data-Management Systems

Very early in Phase I, data-storage, -access, and -management systems should be developed and standardized. As the
altered-estrogen-signaling case study indicates, acquiring the knowledge needed to develop high-throughput assays would involve discovering toxicity pathways and networks in vast amounts of data from studies of biologic circuitry and of the interactions of environmental agents with that circuitry. Organization of that knowledge would require data analysis and exploration by interdisciplinary teams of scientists. Understanding the relationships of pathways to adverse end points would also involve large-volume data analysis, as would the design of test batteries and their validation. Those efforts could be stymied without easy and wide public access to databases of results from a broad array of research studies: high-throughput assays, quantitative-SAR model development, protein and DNA microarrays, pharmacokinetic and metabolomic experiments, in vivo apical tests, and human biomonitoring, clinical, and population-based studies. Central repositories for -omics data are under development and exist to a small extent for some in vivo toxicity data, but the scale of data storage and access envisioned by the committee is much larger.

The data should be available regardless of whether they were generated by industry, academe, federal institutions, or foundations. However, the data-management system must also be able to accommodate confidential data while allowing sharing of confidential components of the database among parties that agree to terms of confidentiality. The data-management system would also provide procedures and guidelines for adequate quality control. Central storage efforts would need to be coordinated and standardized as appropriate to ensure usefulness.

Standardization of Research Assays and Results

With the development of data-management systems, processes for standardizing assay platforms would have to be developed. Currently, there is little standardization of microarrays, although
such efforts are moving more quickly now that the Minimum Information About a Microarray Experiment (MIAME) formats are in use (Brazma et al. 2001). Too much standardization can stifle innovation, so approaches to identifying and applying the appropriate degree of standardization would be needed. Bioinformatics should proceed jointly with the development of assay-platform technology, and data-management systems would have to evolve flexibly to accommodate new data forms and assay platforms.

Phase II: Assay Development and Validation

After the Phase I validity assessment, pathways would be selected for assay development. The focus would be on critical toxicity pathways that lead reliably to adverse effects in the organism and that are not secondary consequences of other biologic perturbations. The first section of this chapter outlined some of the technical issues that would require research to support assay development.

The case-study example of altered estrogen signaling above indicates how assays may follow from toxicity-pathway identification. Understanding the direct gene-regulation consequences of modulated ER-mediated transcriptional activation would lead to specific assays for quantitative assessment of transcription (RNA), translation (protein), metabolite markers, and altered function. Rapid assays that evaluate function on the scale of receptor activation of estrogen-mediated transcription of reporter genes, or even of estrogen-mediated cell responses, such as proliferation of estrogen-sensitive cells in vitro, could be developed to assess altered estrogen signaling.

Also important for assessing the potential for perturbations in estrogen signaling would be reliable assays for rapidly detecting estrogen-receptor interactions. Specific assays that might be developed include ligand-receptor binding assays and more sophisticated computational structural models of ligand interactions with the receptor and of receptor-complex conformational changes. Further sets of assays would be needed to address the wide variety of toxicity pathways by which estrogenic compounds can operate. In this phase, biomarkers of effect, susceptibility, and exposure would also be developed for use in human biomonitoring and surveillance.

Demonstrating that a test is reliable and relevant for a particular purpose is a prerequisite for its routine use and regulatory acceptance. But establishing the validity of any new toxicity assay can be a formidable process: expensive, time-consuming, and logistically and technically challenging. The need to develop efficient approaches for validating the new mechanistically based assays adds to the challenge. How can the assays come into use within a reasonable time and yet be sufficiently validated to be used with confidence? That question is addressed here by considering first the relevant existing guidance on validation and then the challenges faced in validating the new tests; finally, some general suggestions are made regarding validation of the new tests. In making its suggestions, the committee acknowledges the considerable work going on in institutions in the United States and Europe to improve validation methods.

Existing Validation Guidance

Guidelines on the validation of new and revised methods for regulatory acceptance have been developed by both regulatory agencies and consortia (ICCVAM/NICEATM 2003; OECD 2005). Such guidelines focus on multifactorial aspects of a test, which cover the following elements:

• Definition of the test rationale, test components, and assay conduct and provision of details of the test protocol.
• Consideration of the relationship of the test-method end points to the biologic effect of interest.

• Characterization of reproducibility within and among laboratories, transferability among laboratories, sources of variability, test limits, and other factors related to the reliability of test measurements (sometimes referred to as internal validity).

• Demonstrated biologic performance of the test with reference chemicals, comparison of its performance with that of the tests it is to replace, and description of test limitations (sometimes referred to as external validity).

• Availability, peer review, and good-laboratory-practice status of the data supporting the validation of the test method.

• Independent peer review of the methods and results of the test and publication in the peer-reviewed literature.

Criteria for regulatory acceptance of new test methods have also been published (ICCVAM/NICEATM 2003). They cover some of the subjects noted above and include criteria related to robustness (insensitivity to minor changes in protocol), time and cost effectiveness, capability of being harmonized with and accepted by agencies and international groups, and capability of generating information useful for risk assessment.

Validation of a new test method typically is a prerequisite for regulatory acceptance but is no guarantee of acceptance. Validation establishes the performance characteristics of a test method for a particular purpose. Different regulatory agencies may decide that they have no need for a test intended for a given purpose, or they may set their criteria of acceptable performance higher or lower than other agencies do. To minimize problems associated with acceptance, the Organisation for Economic Co-operation and Development (OECD 2005) recommends that validation and peer-review processes take place before a test is considered for acceptance as an OECD test guideline. OECD recognizes, however, that factors
beyond the technical performance of an assay may be viewed differently by different regulatory authorities.

Challenges in Validating Mechanistically Based Assays

Validation of the mechanistically based tests envisioned by the committee may be especially challenging for several reasons. First, tests in the new paradigm that are based on nonapical findings depart from the current practice of regulatory agencies, which set health advisories and guidelines on the basis of apical outcomes. Relevant policy and legal issues are discussed at length in Chapter 6 and are not repeated here, except to note that scientific acceptance of a test and of its relationship to disease is a critical component of establishing the validity of the test for regulatory purposes.

Second, the new -omics and related technologies will need to be standardized and refined before specific applications can be validated for regulatory purposes (Corvi et al. 2006). Such preliminary work could be seen as an elaborate extension of the routine step of test-method optimization, or prevalidation, that leads to validation of conventional in vivo or in vitro assays. The committee also notes above that some degree of standardization will be necessary early to promote understanding and use of assay findings by researchers for knowledge development.

Third, because -omics and related technologies are evolving rapidly, the decision to halt optimization of a particular application and begin a formal validation study will be somewhat subjective. Validation and regulatory acceptance of a specific test do not preclude incorporating later technologic advances that would enhance its performance; if warranted, the effects of such modifications on performance can be evaluated through an expedited validation that avoids the burdens of a second full-blown validation.
Fourth, the committee envisions that a suite of new tests typically will be needed to replace an individual in vivo test, given that apical findings can be triggered by multiple mechanisms. Consequently, although current practice is to validate a single test against the corresponding conventional test and then look for one-to-one correspondence, the new paradigm would routinely entail validation of test batteries and would use multivariate comparisons.

Fifth, existing validation guidelines focus on concordance between the results of the new and the existing assays. In practice, that often means comparing results from cell-based in vitro assays with in vivo data from animals. One challenge in validating the medium- and high-throughput assays in the new vision, with its emphasis on human-derived cells, cell lines, and cellular components, will be to identify standards of comparison for assessing their relevance and predictiveness while aiming for a transformative paradigm shift that emphasizes human biology, mechanisms of toxicity, and the initial, critical perturbations of toxicity pathways.

Sixth, it is anticipated that virtually all xenobiotics will perturb signaling pathways to some degree, so a key challenge will be to determine when a perturbation leads to downstream toxicity and when it does not. Thus, specificity may be a bigger challenge than sensitivity.

Assay Validation under the New Toxicity-Testing Paradigm

Validation should not be viewed as an inflexible process that proceeds sequentially through a fixed series of steps and is then judged according to unvarying criteria. Because validation assesses fitness for purpose, validation exercises should be judged with the specific intended purpose in mind. A test's intended purpose may vary from use as a preliminary screening
tool to use as a definitive test. Similarly, a new test may be intended to model one or a few toxicity mechanisms for a given apical end point but not the full array of mechanisms. Given that the new paradigm would emerge gradually, it would be important to consider validating incremental gains while recognizing their current strengths and weaknesses.

Consequently, applying a one-size-fits-all approach to validation is not conducive to the rapid incorporation of emerging science and technology into regulatory decision-making. A more flexible approach to assay validation would facilitate the evolution of testing toward a more mechanistic understanding of toxicity end points; the form that the validation should take is a matter of discussion and deliberation (Balls et al. 2006; Corvi et al. 2006). For nonregulatory uses of assays, such as preliminary data-gathering and exploration of mechanisms, some general guidance on assay performance appears warranted at a minimum. For assays to be used routinely, more rigorous performance standards would have to be met and relevance would have to be established.

Returning to the case study on estrogen signaling, the validation sequence involves the development of specific assays that track the key molecular triggers linked to human estrogenic effects. This validation component focuses first on establishing that the assay components recapitulate the key molecular interactions described above and then on the traditional approach of examining assay performance in terms of reproducibility and relevance. Assessing intralaboratory and interlaboratory reproducibility is more straightforward than assessing relevance, which is sometimes labeled accuracy. To assess relevance, assays would be formally linked to organism-level adverse health effects; for example, they would provide the basis for evaluating the level of molecular change that potentially corresponds to an adverse effect.
In addition, reference compounds would be used to determine the assays' positive and negative predictive values. Ideally, substances known to cause and substances known not to cause the effect in
humans would be used as the reference agents for positive and negative predictivity. In the absence of adequate numbers of xenobiotics known to be positive or negative in humans, animal data may have to be used in validation. For assays based on human cell lines, that could be problematic, and some creativity and flexibility in the validation process would be desirable. For example, rodent cell-based assays comparable with the human assays could be used to establish relevance and thereby support the use of the human cell-based assays.

Phase III: Assay Relevance and Validity Trial

Once assays are developed and formally validated, they would become available for use. The committee suggests three distinct strategies that could aid in the assessment of test validity and relevance and could further the development of improved assays.

First, research entities, such as the National Toxicology Program (NTP), should further develop and run the experimental high-throughput assays, some before they are fully validated, on chemicals that have already been extensively tested with standard or other toxicity tests. The NTP has, for example, initiated mechanistic high-throughput assays on at least 500 chemicals that have already been tested in NTP cancer and reproductive and developmental toxicity studies and, in collaboration with the NIH Molecular Libraries Initiative, has further developed and applied cell-based screening assays that can be automated (NTP 2006). The Environmental Protection Agency (EPA) National Center for Computational Toxicology (NCCT) also has an initiative to screen numerous pesticides and some industrial chemicals in high-throughput tests. Those processes would be essential for validating the new assays and for learning more about which
health effects can be predicted from specific perturbations of toxicity pathways.

Second, newly validated assays should be conducted in parallel with existing toxicity tests for chemicals, such as pesticides and pharmaceuticals, that will be undergoing or have recently undergone toxicity testing under regulatory programs. This research testing, which would be conducted by research entities, would help to foster the evolution of the assays into cell-based test batteries that could eventually replace current tests. The testing would also help to gauge the positive and negative predictive values of the various assays and thereby help to avoid (or at least begin to quantify) the risks associated with missing important toxicities with the new assays or with incorporating a new assay that detects meaningless physiologic alterations that are not useful for predicting human risk.

Third, as the new assays are developed further and validated, they should be deployed as screens for evaluating chemicals that would not otherwise undergo toxicity testing, such as existing high-production-volume chemicals that have not been tested or have been evaluated only with the screening information dataset and new chemicals that are not currently subject to test requirements. Used as screens for chemicals that would otherwise not be tested or would be subject to only limited testing, the assays could begin to help set priorities for testing and could also help guide the focus of any testing that may be required. Eventually, they could provide the basis of an improved framework for addressing chemicals for which testing is limited or not done at all. This approach is illustrated in Figure 5-2.

[FIGURE 5-2 Screening of chemicals that would otherwise not be tested or would be subject to only limited testing. Chemical characterization and a screening-assay suite feed a risk-management decision informed by chemical characteristics and assay results; possible outcomes include no regulatory or other action with no further testing, regulatory or other action with no further testing, more testing (targeted in vitro and in vivo tests), or more research. The results of the screening tests would be used to decide the nature of further testing needed, if any.]

Resources will be required to implement the three approaches: testing of chemicals that have large and robust datasets of apical tests, parallel research testing of chemicals subject to existing regulatory testing requirements, and application of high-throughput screens to chemicals that are currently not tested. In making those suggestions, the committee is not recommending expanding test requirements for pesticides or pharmaceuticals. Rather, it notes that the tests developed will be a national resource of wide benefit and worthy of funding by federal research programs. Voluntary testing by industry with validated new assays should also be encouraged. The three approaches are anticipated to pay off substantially in the longer term as scientists, regulators, and stakeholders develop enough familiarity and comfort with the new assays that the assays begin to replace current apical end-point
tests and as mechanistic indicators are increasingly used in environmental decision-making.

In addition to the high-throughput testing by NTP and EPA of chemicals that have robust datasets, described above, the committee notes the increasing use of mechanistic assays, primarily for further evaluation of chemicals that have demonstrated toxicity in standard apical assays. The mechanistic studies are done to evaluate further a tailored subset of toxicity pathways, such as those involving the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and thyroid and sex hormones. Some companies are also using high-throughput assays to guide internal decision-making in new chemical development, but their results typically are not publicly available.

A recent example of how the high-throughput assays could play out in the near term is the risk assessment of perchlorate. The data on perchlorate include standard subchronic- and chronic-toxicity tests and developmental-neurotoxicity tests, but risk assessments and regulatory decisions have been based on inhibition of iodide uptake, the known toxicity pathway through which perchlorate has its effects (EPA 2006; NRC 2006). If a new chemical were found to inhibit iodide uptake, standard toxicity tests would not be necessary to demonstrate the predictable effects on thyroid hormone and neurodevelopment; regulatory decisions could be based on the dose-response relationship for iodide-uptake inhibition. The new data on perchlorate-susceptible subpopulations (for example, people with low iodide intake) emerging from biomonitoring would also be considered (see Blount et al. 2006). Such a chemical would still need to undergo a full battery of toxicity-pathway testing to ascertain that no other important pathways that might have effects at lower doses were disrupted.
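The perchlorate-style reasoning can be made concrete with a small numerical sketch. Assuming a Hill-type curve for iodide-uptake inhibition, a benchmark dose and a derived reference value follow directly from the fitted curve; the IC50, slope, benchmark response, and uncertainty factor below are hypothetical placeholders, not values from any actual assessment:

```python
def fraction_inhibited(dose, ic50, slope):
    """Fraction of iodide uptake inhibited at a given dose (Hill-type model)."""
    if dose <= 0.0:
        return 0.0
    return dose**slope / (ic50**slope + dose**slope)


def benchmark_dose(bmr, ic50, slope):
    """Dose producing benchmark response bmr (e.g., 0.05 = 5% inhibition)."""
    if not 0.0 < bmr < 1.0:
        raise ValueError("bmr must lie strictly between 0 and 1")
    return ic50 * (bmr / (1.0 - bmr)) ** (1.0 / slope)


# Illustrative numbers only: IC50 of 0.2 mg/kg-day, slope 1.0,
# 5% inhibition as the benchmark response, composite uncertainty factor of 10.
IC50, SLOPE, BMR, UF = 0.2, 1.0, 0.05, 10.0
bmd = benchmark_dose(BMR, IC50, SLOPE)   # point of departure
reference_value = bmd / UF               # candidate health-protective dose
```

In this framing, the dose-response relationship for the upstream perturbation (iodide-uptake inhibition), rather than an apical outcome, supplies the point of departure; a real assessment would also fold in pharmacokinetics and data on susceptible subpopulations.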
In the long run, using upstream indicators of toxicity from high-throughput assays based on toxicity pathways can be more sensitive, and hence more protective of public health, than using apical end-point observations from assays in small numbers of
live rodents. However, while the new assays are under development, there will be a long period of uncertainty during which the false-positive and false-negative rates of the testing battery will remain unclear, as will the ability of the battery to predict adequately the effects in susceptible subpopulations or during susceptible life stages. During the phase-in period and afterward, close attention will need to be paid to whether important toxicities are being missed, or are being exaggerated, by the toxicity-pathway screening battery. The concern about missing important toxic end points is one of the main reasons for the committee's recommendation of a long phase-in period during which the new assays are run in parallel with existing assays and are tested on chemicals on which there are already large, robust datasets of apical findings. Parallel testing will allow identification of toxicities that might be missed if the new assays were used alone and will compel the development of assays to address those gaps.

Many additional issues would need to be considered during the interim phase of assay development. For example, technical issues, such as cell-culture conditions and the selective pressures that drive molecular evolution of cell lines over time and across laboratories, could raise problems that could be addressed only with experience and careful review of assay results. Parallel use of the new assays and current tests would probably continue for some time before adoption of the new assays as first-tier screens or as definitive tests of toxicity.

Phase IV: Assembly and Validation of Test Batteries

Once toxicity pathways are elucidated and translated into high-throughput assays for a broad field of toxicity testing, such as neurotoxicology, a progressively more comprehensive suite of validated medium- to high-throughput tests would become available to cover the field.
Single assays would not be comprehensive
or predictive in isolation but would be assembled, with targeted tests, into suites that cover the field. The suite, or "panel," of assays and the scoring of the assays would need to be assessed; this may involve computational assessment of multivariate end points. Turning again to the estrogen-signaling case study, known estrogen modulators should register as positive in one or more assays. Confidence in the suite of assays can come from the knowledge that all known mechanisms of alteration of estrogenic signaling are modeled.

The development and assessment of batteries and of the overall testing strategy would be facilitated by a formal uncertainty evaluation. For the different risk contexts and decisions to be made (Chapter 3), the preferred test batteries may differ in sensitivity (in this context, the probability that the battery identifies as harmful a dose that is harmful) and specificity (the probability that the battery identifies as not harmful a dose that is not harmful). In screening, the effect of a false-negative finding of no harm at a given dose can be far more costly than a false-positive finding of harm (see, for example, Lave and Omenn 1986). The ability to characterize the specificity and sensitivity of a test battery would aid in considering the cost-effectiveness and the value of the information to be obtained from the battery (Lave and Omenn 1986; Lave et al. 1988) and ultimately help to identify preferred test strategies.

Although considerable effort would be directed at the construction of high-throughput batteries, targeted tests would probably also be needed in routine testing strategies to address particular risk contexts (for example, registration of a pesticide for food uses). Still, the end-point-focused targeted assays should by no means remain static; they should evolve to incorporate new refinements.
For example, the rapid developments in imaging technologies have offered scientists important new tools to enhance the collection of information from animal bioassays. Promising new assays that use nonmammalian models, such as
Caenorhabditis elegans, are in development. Combined mammalian assays that incorporate a broader array of sensitive end points in a more efficient manner have also been developed. The committee assumes that development of those approaches will continue, and it encourages their development and validation for targeted testing. As newer targeted-testing approaches become available, older apical approaches should be retired.

Intermediate Products of Assay-Development Research

One important benefit of the research described here is that it could add public-health protection and refinement to current regulatory testing. For example, in some risk contexts, particularly widespread human exposure to existing chemicals, the dose-response data from toxicity-pathway tests could help to refine the quantitative relationships between adverse effects identified in apical tests and perturbations in toxicity pathways and could improve the evaluation of perturbations at the low end of the dose-response curve. The results of the toxicity-pathway tests could provide data to aid in interpreting the results of apical tests of a given substance and might guide the selection of follow-up tests or epidemiologic surveillance. The mechanistic assays would also help to permit extrapolation of toxicity findings on a chemical under study to other chemicals that act through the same mechanism.

Additional benefits and research products anticipated for use in the near term include the following:

• A battery of inexpensive medium- and high-throughput screening assays that could be incorporated into tiered-testing schemes to identify the most appropriate tests or to provide preliminary results for screening risk assessments. With experience, the assays would support the phase-out of apical end-point tests.
• Early cell-based replacements for some in vivo tests, such as those for acute toxicity.
• Work to develop consensus approaches for DNA-reactivity and mutagenicity assays and strategies for using mechanistic studies in cancer risk assessment.
• On-line libraries of results of medium- and high-throughput screens for use in toxicity prediction and in improving SAR models. For classes of chemicals well studied in apical end-point tests, the comparison of results from high-throughput studies with those from whole-animal studies could provide the basis for extrapolating toxicity to untested chemicals in the class.
• Elucidation of the mechanisms of toxicity of chemicals well studied in high-dose apical end-point tests. Research to achieve the vision must include the study of perturbations of toxicity pathways by well-studied chemicals, many of which involve widespread human exposure. Such research would bring about better understanding of the mechanisms of toxicity of the chemicals and improve risk assessment. Chemicals with known adverse effects and mechanisms well elucidated with respect to toxicity pathways would be good candidates to serve as positive controls in the high-throughput assays. Such studies would help to distinguish between exposures that result in impaired function and disease and exposures that result in adaptation and normal biologic function (see Figure 2-2).
• Indicators of toxicity-pathway activation in the human population. This knowledge could be used to understand the extent to which a single chemical might contribute to disease processes and would be critical for realistic dose-response modeling and extrapolation.
• Refined analytic tools for assessing the pharmacokinetics of environmental agents in humans exposed at low concentrations. Such evaluations could be used directly in risk assessments based on apical end-point tests and could aid in the design and interpretation of in vitro screens.
• Improvements in targeted human disease surveillance and exposure biomonitoring.

BUILDING A TRANSFORMATIVE RESEARCH PROGRAM

Instituting Focused Research

A long-term, large-scale, concerted effort is needed to bring the new toxicity-testing paradigm to fruition. A critical element is the conduct of transformative research to provide the scientific basis for creating the new testing tools and to understand the implications of test results and how they may be applied in risk assessments used in environmental decision-making.

What type of institutional structure would be most appropriate for conducting and managing the research effort? It is beyond the committee's charge and expertise to make specific recommendations either to change or to create government institutions or to alter their funding decisions. The committee will simply sketch its thoughts on an appropriate institutional structure for implementing the vision. Other approaches may also be appropriate. The committee notes that an institutional structure should be selected with the following considerations in mind:

• The realization of the vision will entail considerable research over many years and require substantial funding: hundreds of millions of dollars.
• Much of the research will be interdisciplinary and, consequently, to be most effective should not be dispersed among discipline-specific laboratories.
• The research will need high-level coordination to tackle the challenges presented in the vision efficiently.
• The research should be informed by the needs of the regulatory agencies that would adapt and use the emerging testing
procedures, but the research program should be insulated from the short-term orientation and varied mandates of the agencies.

Interdisciplinarity, Adaptability, and Timeline

The need for an institutional structure that encourages and coordinates the necessarily multidisciplinary research cannot be overstated, and a spirit of interdisciplinarity should infuse the research program. Accordingly, the effort would need to draw on a variety of technologies and a number of disciplines, including basic biology, bioinformatics, biostatistics, chemistry, computational biology, developmental biology, engineering, epidemiology, genetics, pathology, structural biology, and toxicology. Good communication and problem-solving across disciplines are a must, as is leadership adept at fostering interdisciplinary efforts. The effort will have to be monitored continually, with the necessary cross-interactions engineered, managed, and maintained.

The testing paradigm would be progressively elaborated over many years or decades as experience and successes accumulate. It should continue to evolve with scientific advances. Its evolution is likely to entail midcourse changes in the direction of research as breakthroughs in technology and science open more promising leads. Neither this committee nor any other constituted committee will be able to foresee the full suite of possibilities or potential limitations of new approaches that might arise with increasing biologic knowledge. The research strategy outlined above provides a preview of the future and suggests general steps needed to arrive at a new toxicity-testing paradigm. Some of the suggested steps would need to be reconsidered as time passes and experience is gained with new cell-based assays and interpretive tools, but no global change in the vision, which the committee regards as robust, is expected.
The transition from existing tests to the new tests would require active management, involvement of the regulatory agencies, and coherent long-range planning that invests in the creation of new knowledge while refining current testing and, correspondingly, stimulating changes in risk-assessment procedures and guidelines. Over time, the research expertise and infrastructure involved in testing regimes could be transformed in important ways as the need for animal testing decreases and pathway-related testing increases.

The committee envisions that the new knowledge and technology generated from the proposed research program will be translated into noticeable changes in toxicity-testing practices within 10 years. Within 20 years, testing approaches will more closely reflect the proposed vision than current approaches. That projection assumes adequate and sustained funding. As in the Human Genome Project, progress is expected to be nonlinear, with the pace increasing as technologic and scientific breakthroughs are applied to the effort.

Cross-Institution and Sector Linkages

The research to describe cellular-response networks and toxicity pathways and to develop the complementary human biomonitoring and surveillance strategy would be part of larger current efforts in medicine and biotechnology. Funding of such research is substantial in medical schools and other academic institutions, some U.S. federal and European agencies, and the pharmaceutical, medical, and biotechnology industries. Links among the different elements of the research community involved in relevant research will be needed to capitalize on new knowledge, technologies, and analytic tools as they develop. Mechanisms for ensuring sustained communication and collaboration, such as data-sharing, will also be needed.
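The committee does not prescribe any particular data-sharing mechanism. As a purely illustrative sketch, in the spirit of minimum-information conventions such as MIAME (Brazma et al. 2001, cited below), a shared record for a single toxicity-pathway assay result might capture a small set of agreed-on fields and serialize them to a neutral format such as JSON. All field names here are hypothetical, chosen only to show the idea:

```python
# Illustrative only: a minimal shared-data record for one
# high-throughput toxicity-pathway assay result. Field names are
# hypothetical; a real standard would be negotiated across agencies
# and laboratories.
from dataclasses import dataclass, asdict
import json

@dataclass
class AssayRecord:
    chemical_id: str          # e.g., a CAS registry number
    assay_name: str           # toxicity pathway probed
    cell_system: str          # human cell line or component used
    concentration_uM: float   # test concentration, micromolar
    response: float           # normalized pathway activity (0 to 1)
    lab: str                  # originating laboratory

record = AssayRecord(
    chemical_id="80-05-7",
    assay_name="estrogen-receptor activation",
    cell_system="MCF-7",
    concentration_uM=10.0,
    response=0.72,
    lab="example-lab",
)

# Serialize to JSON so results can be pooled in on-line libraries
# and reused for SAR modeling, as the chapter anticipates.
shared = json.dumps(asdict(record), indent=2)
print(shared)
```

A common, machine-readable record of this kind is what would let the on-line screening libraries described earlier accumulate comparable results from many laboratories.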
Some form of participation by industry and public-interest groups should be ensured. Firms have a long-term interest in the new paradigm, and most stand to gain from more efficient testing requirements. Public-health and environmental interest groups, as well as those promoting alternatives to animal testing, should also be engaged.

Funding

A large-scale, long-term research program is needed to elucidate the cellular-response networks and the individual toxicity pathways within them. Given the scientific challenges and the knowledge development required, moderately large funding will be needed. The committee envisions a research and test-development program similar in scale to the NTP or the Institute for Systems Biology in Seattle, Washington.

The success of the project will depend on attracting the best thinkers to the task, and the endeavor would compete for these researchers with related research programs in medicine, industry, and government. Attracting the best researchers in turn would depend on an adequately funded and managed venture that appears well placed to succeed.

Institutional Framework

The committee concludes that an appropriate institutional structure for the proposed vision is a research institute that fosters multidisciplinary research intramurally and extramurally. A strong intramural research program is essential. The effort cannot succeed merely by creating a virtual institution to link and integrate organizations that are performing relevant research and by dispersing funding on relevant research projects. A mission-oriented, intramural program with core multidisciplinary programs to answer the critical research questions can foster the kind of cross-discipline activity essential for the success of the initiative. There would be far less chance of success within a reasonable period if the research were dispersed among different locations and organizations without a core integrating and organizing institute. A collocated, strong intramural research initiative will enable the communication and problem-solving across disciplines required for the research and assay development.

Similarly, a strong, well-coordinated, targeted extramural program will leverage the expertise that already exists in academe, pharmaceutical companies, the biotechnology sector, and elsewhere and foster research that complements the intramural program. Through its intramural and highly targeted extramural activities, the envisioned research institute would provide the nexus through which the new testing tools would be conceived, developed, validated, and incorporated into coherent testing schemes.

The committee sees the research institute as funded and coordinated primarily by the federal government, given the scale of the necessary funding, the multiyear nature of the project, and the links to government regulatory agencies. That does not mean that there will be no role for other stakeholders. Biotechnology companies, for example, could cofund specific projects. Academic researchers could conduct research with the program's extramural funds. Moreover, researchers in industry and academe will continue making important progress in fields related to the proposed vision independently of the proposed projects.

The key institutional question is where to house the government research institute that carries out the intramural program of core multidisciplinary research and manages the extramural program of research.
Should it be an existing entity, such as the National Institute of Environmental Health Sciences (NIEHS), or a new entity devoted exclusively to the proposed vision? The committee notes that the recognized need for research and institutional structures that transcend disciplinary boundaries to address critical biomedical research questions has spawned systems-biology institutes and centers at biomedical firms and several leading universities in the country. However, the committee found few examples in the government sector. The Department of Energy (DOE) Genomics: GTL Program seeks to engineer systems for energy production, site remediation, and carbon sequestration based on systems-biology research on microorganisms. In its review of this DOE program, NRC (2006) found collocated, integrated vertical research to be essential to its success.

If one were to place the proposed research program in an existing government entity, a possible choice would be the NTP, a multiagency entity administered and housed at NIEHS. The NTP has several features that suggest it as a possible institutional home for the research program envisioned here, including its mandate to develop innovative testing approaches, its multiagency character, the similarities between its Vision and Roadmap for the Future and what is envisioned here, and its expertise in validating new tests through the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods and its sister entity, the Interagency Coordinating Committee on the Validation of Alternative Methods, and in -omics testing at its Center for Toxicogenomics. It is conceivable that the NTP could absorb the research mandate outlined here if its efforts were dramatically scaled up to accommodate the focused program envisioned. If the program were placed in the NTP, structures would have to be in place to ensure that the day-to-day technical focus on short-term problems of high-volume chemical testing would not impede progress in evolving testing strategies.
As the new test batteries and strategies are developed and validated, they would be moved out of the research arm and made available for routine application.

The committee considered housing the proposed research institute in a regulatory agency and notes that this could be problematic. The science and technology budgets of regulatory agencies have been under considerable stress and appear unlikely to sustain such an effort. Although EPA's NCCT has initiated important work in this field, the scale of the endeavor envisioned by the committee is substantially larger and could not be sufficiently supported if recent trends in congressional budgeting for EPA continue. For example, EPA's science and technology research budget has been suboptimal and decreasing in real dollars for a number of years (EPA 2006, 2007).

The research portfolio entailed by the committee's vision will also require active management to maintain the relevance and scientific focus needed for knowledge development. Although sufficient input from regulatory agencies is needed, insulation of the institute from the short-term orientation of regulatory-agency programs that depend on the results of toxicologic testing is important.

In the end, the committee noted that wherever the institute is housed, it should be structured along the lines of the NTP, with intramural and focused extramural components and interagency input but with its own focused mission and funding stream.

Scientific Surprises and the Need for Midcourse Corrections

Research often brings surprises, and today's predictions concerning the promise of particular lines of research are probably either pessimistic or optimistic in some details. For example, the committee's vision of toxicity testing stands on the presumption that a relatively small number of pathways can provide sufficiently broad coverage to allow a moderately sized set of high- and medium-throughput assays to be developed for the scientific community to use with confidence and that any important gaps in coverage can be addressed with a relatively small set of targeted assays. That presumption may be found to be incorrect. Furthermore, the establishment of links between perturbations and apical end points may prove especially challenging for some end points. Thus, as the research proceeds and learning takes place, adjustments in the vision and the research focus can be anticipated.

In addition to the program oversight noted above, the research program should be assessed every 3-5 years by well-recognized scientific experts independent of vested interests in the public and private sectors. The assessment would weigh practical progress, the promise of methods on the research horizon, and the place of the research in the context of other research, and it would recommend midcourse corrections.

CONCLUDING REMARKS

In the traditional approach to toxicity testing, the whole animal provides for the integration and evaluation of many toxicity pathways. Yet each animal study is time-consuming and expensive and results in the use of many animals. In addition, many animal studies need to be done to evaluate different end points, life stages, and exposure durations. The new approach may require individual assays for hundreds of relevant toxicity pathways. Despite that apparent complexity, emerging methods allow testing of many pathways extremely rapidly and efficiently (for example, in microarrays or wells). If positive signals from the assays can be used with confidence to guide risk management, the new approach will ultimately prove more efficient than the traditional one.

It is clear, however, that much development and refinement will be needed before a new and efficient system can be in place. For some kinds of toxicity, such as developmental toxicity and neurotoxicity, the identification of replacement toxicity-pathway assays might be particularly challenging, and some degree of targeted testing might continue to be necessary. In addition, the
validation process may uncover unexpected and challenging technical problems that will require targeted testing. Finally, the parallel interim process may reveal that some categories of chemicals or of toxicity cannot yet be evaluated with toxicity-pathway testing. Nonetheless, the committee envisions the steady evolution of toxicity testing from apical end-point testing to a system based largely on toxicity-pathway batteries in a manner mindful of information needs and of the capacity of the test system to provide information.

In the long term, the committee expects toxicity pathways to become sufficiently well understood and calibrated for batteries of high-throughput assays to provide a substantial fraction of the toxicity-testing data needed for environmental decision-making. Exposure monitoring, human surveillance for early perturbations of toxicity-response pathways, and epidemiologic studies should provide an additional layer of assurance that early indications of adverse effects would be detected if they occurred. The research conducted to realize the committee's vision would support a series of substantial improvements in toxicity testing in the relatively near term.

REFERENCES

Balls, M., P. Amcoff, S. Bremer, S. Casati, S. Coecke, R. Clothier, R. Combes, R. Corvi, R. Curren, C. Eskes, J. Fentem, L. Gribaldo, M. Halder, T. Hartung, S. Hoffmann, L. Schectman, L. Scott, H. Spielmann, W. Stokes, R. Tice, D. Wagner, and V. Zuang. 2006. The principles of weight of evidence validation of test methods and testing strategies. The report and recommendations of ECVAM workshop 58. Altern. Lab. Anim. 34(6):603-620.

Blount, B.C., J.L. Pirkle, J.D. Osterloh, L. Valentin-Blasini, and K.L. Caldwell. 2006. Urinary perchlorate and thyroid hormone levels in adolescent and adult men and women living in the United States. Environ. Health Perspect. 114(12):1865-1871.

Brazma, A., P. Hingamp, J. Quackenbush, G. Sherlock, P.
Spellman, C. Stoeckert, J. Aach, W. Ansorge, C.A. Ball, H.C. Causton, T. Gaasterland, P. Glenisson, F.C. Holstege, I.F. Kim, V. Markowitz, C. Matese, H. Parkinson, A.
Robinson, U. Sarkans, S. Schulze-Kremer, J. Stewart, R. Taylor, J. Vilo, and M. Vingron. 2001. Minimum information about a microarray experiment (MIAME)-toward standards for microarray data. Nat. Genet. 29(4):365-371.

Corvi, R., H.J. Ahr, S. Albertini, D.H. Blakey, L. Clerici, S. Coecke, G.R. Douglas, L. Gribaldo, J.P. Groten, B. Haase, K. Hamernik, T. Hartung, T. Inoue, I. Indans, D. Maurici, G. Orphanides, D. Rembges, S.A. Sansone, J.R. Snape, E. Toda, W. Tong, J.H. van Delft, B. Weis, and L.M. Schechtman. 2006. Meeting report: Validation of toxicogenomics-based test systems: ECVAM-ICCVAM/NICEATM considerations for regulatory use. Environ. Health Perspect. 114(3):420-429.

EPA (U.S. Environmental Protection Agency). 2006. Science and Research Budgets for the U.S. Environmental Protection Agency for Fiscal Year 2007: An Advisory Report by the Science Advisory Board. EPA-SAB-ADV-06-003. U.S. Environmental Protection Agency, Washington, DC. March 30, 2006 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/36a1ca3f683ae57a85256ce9006a32d0/0EDAAECA1096A5B0852571450072E33E/$File/sab-adv-06-003.pdf [accessed April 4, 2007].

EPA (U.S. Environmental Protection Agency). 2007. Comments on EPA's Strategic Research Directions and Research Budget for FY 2008: An Advisory Report of the U.S. Environmental Protection Agency Science Advisory Board. EPA-SAB-ADV-07-004. U.S. Environmental Protection Agency, Washington, DC. March 13, 2007 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/997517EFA5FC48798525729F0073B4D4/$File/sab-07-004.pdf [accessed April 7, 2007].

ICCVAM (Interagency Coordinating Committee on the Validation of Alternative Methods) and NICEATM (National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods). 2003. ICCVAM Guidelines for the Nomination and Submission of New, Revised, and Alternative Test Methods. NIH Publication No. 03-4508.
National Institute of Environmental Health Sciences, National Institutes of Health.

Inglese, J., D.S. Auld, A. Jadhav, R.L. Johnson, A. Simeonov, A. Yasgar, W. Zheng, and C.P. Austin. 2006. Quantitative high-throughput screening: A titration-based approach that efficiently identifies biological activities in large chemical libraries. Proc. Natl. Acad. Sci. U.S.A. 103(31):11473-11478.

Lave, L.B., and G.S. Omenn. 1986. Cost-effectiveness of short-term tests for carcinogenicity. Nature 324(6092):29-34.

Lave, L.B., F.K. Ennever, H.S. Rosenkranz, and G.S. Omenn. 1988. Information value of the rodent bioassay. Nature 336(6200):631-633.

NRC (National Research Council). 2000. Scientific Frontiers in Developmental Toxicology and Risk Assessment. Washington, DC: National Academy Press.
NRC (National Research Council). 2006. Review of the Department of Energy's Genomics: GTL Program. Washington, DC: The National Academies Press.

NTP (National Toxicology Program). 2006. Current Directions and Evolving Strategies. National Toxicology Program, National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, NC [online]. Available: http://ntp.niehs.nih.gov/files/NTP_CurrDir2006.pdf [accessed April 4, 2007].

OECD (Organisation for Economic Co-operation and Development). 2005. Guidance Document on the Validation and International Acceptance of New or Updated Test Methods for Hazard Assessment. OECD Series on Testing and Assessment No. 34. ENV/JM/Mono(2005)14. Organisation for Economic Co-operation and Development, Paris [online]. Available: http://appli1.oecd.org/olis/2005doc.nsf/linkto/env-jm-mono(2005)14 [accessed April 4, 2007].