The health effects of low levels of ionizing radiation are important to understand. Ionizing radiation—the sort found in X-rays or gamma rays1—is defined as radiation that has sufficient energy to displace electrons from molecules. Free electrons, in turn, can damage human cells. One challenge to understanding the health effects of radiation is that there is no general property that makes the effects of man-made radiation different from those of naturally occurring radiation. Still another difficulty is that of distinguishing cancers that occur because of radiation exposure from cancers that occur due to other causes. These facts are just some of the many that make it difficult to characterize the effects of ionizing radiation at low levels.
Despite these challenges, a great deal about this topic is well understood. Specifically, substantial evidence exists that exposure to high levels of ionizing radiation can cause illness or death. Further, scientists have long known that in addition to cancer, ionizing radiation at high doses causes mental retardation in the children of mothers exposed to radiation during pregnancy. Recently, data from atomic bomb survivors suggest that high doses are also connected to other health effects such as heart disease and stroke.
Because ionizing radiation is a threat to health, it has been studied extensively. This report is the seventh in a series of publications from the National Academies concerning radiation health effects, referred to as the Biological Effects of Ionizing Radiation (BEIR) reports. This report, BEIR VII, focuses on the health effects of low levels of low linear energy transfer (LET) ionizing radiation. Low-LET radiation deposits less energy in the cell along the radiation path and is considered less destructive per radiation track than high-LET radiation. Examples of low-LET radiation, the subject of this report, include X-rays and γ-rays (gamma rays). Health effects of concern include cancer, hereditary diseases, and other effects, such as heart disease.
This summary describes:
• how ionizing radiation was discovered,
• how ionizing radiation is detected,
• units used to describe radiation dose,
• what is meant by low doses of ionizing radiation,
• exposure from natural “background” radiation,
• the contribution of man-made radiation to public exposure,
• scenarios illustrating how people might be exposed to ionizing radiation above background levels,
• evidence for adverse health effects such as cancer and hereditary disease,
• the BEIR VII risk models,
• what bodies of research the committee reviewed,
• why the committee has not accepted the view that low levels of radiation might be substantially more or less harmful than expected from the model used in this BEIR report, and
• the committee’s conclusions.
HOW IONIZING RADIATION WAS DISCOVERED
Low levels of ionizing radiation cannot be seen or felt, so the fact that people are constantly exposed to radiation is not usually apparent. Scientists began to detect the presence of ionizing radiation in the 1890s.2 In 1895, Wilhelm Conrad Roentgen was investigating an electrical discharge generated in a paper-wrapped glass tube from which most of the air had been evacuated. The free electrons generated in the “vacuum tube,” which were then called cathode rays, were in themselves a form of radiation. Roentgen noted that when the electrons were being generated, a fluorescent screen on a nearby table began to glow. Roentgen theorized that invisible emissions from the cathode-ray tube were causing the fluorescent screen to glow, and he termed these invisible emissions X-rays. The electrons produced by the electrical discharge had themselves produced another form of radiation, X-rays. The next major discovery occurred when Henri Becquerel noted that unexposed photographic plates stored in a drawer with uranium ore were fogged. He concluded that the fogging was due to an invisible emission emanating from the uranium atoms and their decay products. This turned out to be naturally occurring radiation emanating from the uranium. Marie and Pierre Curie went on to purify radium from uranium ore in Becquerel’s laboratory, and in subsequent years, many other forms of radiation including neutrons, protons, and other particles were discovered. Thus, within a period of several years in the 1890s, man-made and naturally occurring radiation were discovered.
X-rays are man-made and generated by machines, whereas gamma rays come from unstable atomic nuclei. People are continuously exposed to gamma rays from naturally occurring elements in the earth and outer space.
Health Physics Society. Figures in Radiation History, http://www.hps.org. September 2004.
Roentgen’s discovery of X-rays resulted in the eventual invention of X-ray machines used to image structures in the human body and to treat health conditions. Adverse health effects of high levels of ionizing radiation exposure became apparent shortly after these initial discoveries. High doses to radiation workers would redden the skin (erythema), and this rough measure of radiation exposure was called the “skin erythema dose.” The use of very large doses, primitive dosimetry (dose measurement) such as the skin erythema dose, and the fact that many of these early machines were not well shielded led to high radiation exposures both to the patients and to the persons administering the treatments. The development of chronic, slow-healing skin lesions on the hands of early radiologists and their assistants resulted in the loss of extremities in some cases. These incidents were some of the first indications that radiation delivered at high doses could have serious health consequences. Subsequent studies in recent years have shown that early radiologists had a higher mortality rate than other health workers. This increased mortality rate is not seen in radiologists working in later years, presumably due to vastly improved safety conditions resulting in much lower doses to radiologists.
The early indications of health effects after high radiation exposures are too many to chronicle in this Public Summary, but the committee notes one frequently cited example. In 1896, Thomas Edison developed a fluoroscope that consisted of a tapered box with a calcium tungstate screen and a viewing port through which physicians could view X-ray images. During the course of these investigations with X-rays, Clarence Dally, one of Edison’s assistants, developed a degenerative skin disease that progressed into a carcinoma. In 1904, Dally succumbed to his injuries in what may have been the first death associated with man-made ionizing radiation in the United States. Edison halted all of his X-ray research, noting that “the x rays had affected poisonously my assistant, Mr. Dally…”3 Today, radiation is one of the most thoroughly studied potential hazards to humans, and regulatory standards have become increasingly strict over the years in an effort to protect human health.
HOW IONIZING RADIATION IS DETECTED
The detection of ionizing radiation has greatly improved since the days of Roentgen, Becquerel, and the Curies. Ionizations can be detected accurately by Geiger counters and other devices. Because the efficiency of the detector is known, one can determine not only the location of the radiation, but also the amount of radiation present. Other, more sophisticated detectors can evaluate the “signature” energy spectrum of some radiations and thus identify the type of radiation.
UNITS USED TO DESCRIBE RADIATION DOSE
Ionizing radiation can be in the form of electromagnetic radiation, such as X-rays or γ-rays, or in the form of subatomic particles, such as protons, neutrons, alpha particles, and beta particles. Radiation units can be confusing. Radiation is usually measured in dose units called grays (Gy) or sieverts (Sv), which are measures of energy deposited in living tissue. X- and γ-rays are said to have low LET. Low-LET radiation produces ionizations sparsely throughout a cell; in contrast, high-LET radiation transfers more energy per unit length as it traverses the cell and is more destructive per unit length.
Although this BEIR VII report is about low-LET radiation, the committee has considered some information derived from complex exposures that include radiation from high-LET and low-LET sources. High-LET or mixed radiations (radiation from high-LET and low-LET sources) are often described in units known as sieverts. The units for low-LET radiation can be sieverts or grays. For simplicity, all dose units in the Public Summary are reported in sieverts (Sv). For a more complete description of the various units of dose used in this report, see “Units Used to Express Radiation Dose,” which precedes the Public Summary, as well as the terms Gray, Sievert, and Units in the glossary.
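As a rough illustration of how these units relate, equivalent dose in sieverts can be sketched as absorbed dose in grays scaled by a radiation weighting factor. The factor values below follow general ICRP-style recommendations and are given for illustration only; they are not taken from this report, and real dosimetry practice is more involved.

```python
# Illustrative sketch: equivalent dose (Sv) = absorbed dose (Gy) * w_R,
# where w_R is a radiation weighting factor. Values are ICRP-style
# approximations used here only to show how Gy and Sv relate.
RADIATION_WEIGHTING = {
    "x-ray": 1,   # low-LET
    "gamma": 1,   # low-LET
    "beta": 1,    # low-LET
    "proton": 2,  # high-LET
    "alpha": 20,  # high-LET
}

def equivalent_dose_sv(absorbed_dose_gy: float, radiation: str) -> float:
    """Convert an absorbed dose in grays to an equivalent dose in sieverts."""
    return absorbed_dose_gy * RADIATION_WEIGHTING[radiation]

# For low-LET radiation such as X-rays and gamma rays, w_R = 1,
# so a dose in grays and in sieverts is numerically the same:
print(equivalent_dose_sv(0.1, "gamma"))
print(equivalent_dose_sv(0.1, "alpha"))
```

This is why the Public Summary can report all doses in sieverts: for the low-LET radiation that is its subject, 1 Gy corresponds to 1 Sv.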
WHAT IS MEANT BY LOW DOSES OF IONIZING RADIATION
For this report, the committee has defined low dose as doses in the range of near zero up to about 100 mSv (0.1 Sv) of low-LET radiation. The committee has placed emphasis on the lowest doses where relevant data are available. The annual worldwide background exposure from natural sources of low-LET radiation is about 1 mSv.
EXPOSURE FROM NATURAL BACKGROUND RADIATION
Human beings are exposed to natural background radiation every day from the ground, building materials, air, food, the universe, and even elements in their own bodies. In the United States, the majority of exposure to background ionizing radiation comes from exposure to radon gas and its decay products. Radon is a colorless, odorless gas that emanates from the earth and, along with its decay products, emits a mixture of high- and low-LET radiation. Radon can be hazardous when accumulated in underground areas such as poorly ventilated basements. The National Research Council 1999 report, Health Effects of Exposure to Radon (BEIR VI), reported on the health effects of radon, and therefore those health effects are not discussed in this report. Average annual exposures worldwide to natural radiation sources (both high and low LET) would generally be expected to be in the range of 1–10 mSv, with 2.4 mSv being the present estimate of the central value.4 Of this amount, about one-half (1.2 mSv per year) comes from radon and its decay products. Average annual background exposures in the United States are slightly higher (3.0 mSv) due in part to higher average radon levels. After radon, the next highest percentage of natural ionizing radiation exposure comes from cosmic rays, followed by terrestrial sources, and “internal” emissions. Cosmic rays are particles that travel through the universe. The Sun is a source of some of these particles. Other particles come from exploding stars called supernovas.
The amount of terrestrial radiation from rocks and soils varies geographically. Much of this variation is due to differences in radon levels. “Internal” emissions come from radioactive isotopes in food and water and from the human body itself. Exposures from eating and drinking are due in part to the uranium and thorium series of radioisotopes present in food and drinking water.5 An example of a radioisotope moving through the food chain would be carbon-14 (14C), a substance found in all living things. 14C is created when cosmic rays collide with nitrogen atoms. 14C combines with oxygen to create carbon dioxide gas. Plants absorb carbon dioxide during photosynthesis, and animals feed on those plants. In these ways, 14C accumulates in the food chain and contributes to the internal background dose from ionizing radiation.
As mentioned previously, possible health effects of low-dose, low-LET radiation are the focus of this BEIR VII report. Because of the “mixed” nature of many radiation sources, it is difficult to estimate precisely the percentage of natural background radiation that is low LET. Figure PS-1 illustrates the approximate sources and relative amounts of high-LET and low-LET radiations that comprise the natural background exposure worldwide. This figure illustrates the relative contributions of three natural sources of high-LET radiation and three natural sources of low-LET radiation to the global population exposure. The smaller, detached segment of the chart represents the relative contribution of low-LET radiation sources to the annual background exposure. The total average annual population exposure worldwide due to low-LET radiation would generally be expected to be in the range of 0.2–1.0 mSv, with 0.9 mSv being the present estimate of the central value.
CONTRIBUTION OF MAN-MADE RADIATION TO PUBLIC EXPOSURE
In addition to natural background radiation, people are also exposed to low- and high-LET radiation from man-made sources such as X-ray equipment and radioactive materials used in medicine, research, and industry. A 1987 study6 of ionizing radiation exposure of the population of the United States estimated that natural background radiation comprised 82% of the annual U.S. population exposure, while manmade sources contributed 18% (see Figure PS-2, pie chart in the lower left portion of the figure).
In Figure PS-2, the man-made radiation component (upper right portion of the figure) shows the relative contributions of the various types of man-made radiation to the U.S. population.7 Medical X-rays and nuclear medicine account for about 79% of the man-made radiation exposure in the United States. Elements in consumer products, such as tobacco, the domestic water supply, building materials, and to a lesser extent, smoke detectors, televisions, and computer screens, account for another 16%. Occupational exposures, fallout, and the nuclear fuel cycle comprise less than 5% of the man-made component and less than 1% of the combined background and man-made component. Additional small amounts of exposure from background and man-made radiation come from activities such as traveling by jet aircraft (cosmic radiation—add 0.01 mSv for each 1000 miles traveled), living near a coal-fired power plant (plant emissions—add 0.0003 mSv), being near X-ray luggage inspection scanners (add 0.00002 mSv), or living within 50 miles of a nuclear power plant (add 0.00009 mSv).8
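Because these contributions simply add, a person's annual dose under a given set of circumstances can be estimated by summing the increments quoted above onto the average background. The scenario below (a person who flies 5000 miles a year and lives near the listed facilities) is hypothetical; the individual dose figures are the ones given in the text.

```python
# Hypothetical worked example: summing the incremental annual doses quoted
# in the text onto the average U.S. background (all values in mSv).
US_AVERAGE_BACKGROUND_MSV = 3.0

annual_increments_msv = {
    "jet travel, 5000 miles": 0.01 * (5000 / 1000),  # 0.01 mSv per 1000 miles
    "living near a coal-fired power plant": 0.0003,
    "X-ray luggage inspection scanners": 0.00002,
    "living within 50 miles of a nuclear power plant": 0.00009,
}

added_msv = sum(annual_increments_msv.values())
total_msv = US_AVERAGE_BACKGROUND_MSV + added_msv
print(f"added dose: {added_msv:.5f} mSv")
print(f"total annual dose: {total_msv:.5f} mSv")
```

Even taken together, these man-made increments amount to a few hundredths of a millisievert, small compared to the 3.0 mSv average background.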
There are many ways in which an individual’s exposure to ionizing radiation could vary from the averages. Factors that might increase exposure to ionizing radiation include (1) increased uses of radiation for medical purposes, (2) occupational exposure to radiation, and (3) smoking tobacco products.9 Factors that might decrease radiation exposure include living at lower altitudes (less cosmic radiation) and living and working in the higher floors of a building (less radon).
SCENARIOS ILLUSTRATING HOW PEOPLE MIGHT BE EXPOSED TO IONIZING RADIATION ABOVE BACKGROUND LEVELS
This section provides three scenarios illustrating how some people might be exposed to ionizing radiation above background levels. These examples are for illustration purposes only and are not meant to be inclusive.
There is growing use of whole-body scanning by computed tomography (CT) as a way of screening for early signs of disease among asymptomatic adults.10 CT examinations result in higher organ doses of radiation than conventional single-film X-rays. This is because CT scanners rotate around the body, taking a series of cross-sectional X-rays. A computer compiles these X-ray slices to produce a three-dimensional portrait. According to Brenner and Elliston, who estimated both radiation dose and risks from such procedures, a single full-body scan results in a mean effective radiation dose of 12 mSv.11 These authors write, “To put this (dose) in perspective, a typical mammogram … has an effective dose of 0.13 mSv—a factor of almost 100 times less.” According to Brenner and Elliston’s calculations, “a 45-year-old adult who plans to undergo 30 annual full-body CT examinations would potentially accrue an estimated lifetime cancer mortality risk of 1.9% (almost 1 in 50)…. Correspondingly, a 60-year-old who plans to undergo 15 annual full-body CT examinations would potentially accrue an estimated lifetime cancer mortality risk of one in 220.” Citing a National Vital Statistics Report, Brenner and Elliston note, for comparison, that “the lifetime odds that an individual born in the United States in 1999 will die in a traffic accident are estimated to be one in 77.”12 Further information on whole-body scans is available from the U.S. Food and Drug Administration web site.13
National Council on Radiation Protection and Measurements. 1987. Radiation Exposure of the U.S. Population from Consumer Products and Miscellaneous Sources. Bethesda, MD: NCRP, Report No. 95.
Full-Body CT Scans: What You Need to Know (brochure). U.S. Department of Health and Human Services. 2003. Accessed at www.fda.gov/cdrh/ct.
Brenner, D.J., and C.D. Elliston. 2004. Estimated radiation risks potentially associated with full-body CT screening. Radiology 232:735–738.
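A quick arithmetic check of the dose comparisons quoted from Brenner and Elliston (the 12 mSv and 0.13 mSv figures come from the text; the cumulative-dose line is a simple illustration, not a risk calculation):

```python
# Checking the comparisons quoted from Brenner and Elliston (doses in mSv).
full_body_ct_msv = 12.0   # mean effective dose from a single full-body CT
mammogram_msv = 0.13      # effective dose from a typical mammogram

ratio = full_body_ct_msv / mammogram_msv
print(f"CT / mammogram dose ratio: {ratio:.0f}")  # "a factor of almost 100"

# Cumulative dose accrued by the hypothetical 45-year-old who undergoes
# 30 annual full-body examinations:
cumulative_msv = 30 * full_body_ct_msv
print(f"cumulative dose over 30 scans: {cumulative_msv:.0f} mSv")
```

The ratio works out to about 92, consistent with the authors' "factor of almost 100," and 30 annual scans accumulate 360 mSv, well above the low-dose range considered in this report.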
CT Scans Used in Diagnostic Procedures
The use of CT scans in adults experiencing symptoms of illness or injury is widely accepted, and CT scan use has increased substantially in the last several decades. The BEIR VII committee recommends that in the interest of radiological protection, there be follow-up studies of cohorts of persons receiving CT scans, especially children. In addition, the committee recommends studies of infants who experience diagnostic radiation exposure related to cardiac catheterization and of premature infants who are monitored with repeated X-rays for pulmonary development.
Working near Ionizing Radiation
People who work at medical facilities, in mining or milling, or with nuclear weapons are required to take steps to protect themselves from occupational exposures to radiation. The maximum amount of radiation that workers are allowed to receive in connection with their occupations is regulated. In general, these limits are 50 mSv per year to the whole body, with larger amounts allowed to the extremities. The exposure limits for a pregnant worker, once pregnancy is declared, are more stringent. In practice, the guidelines call for limiting exposures to as low as is reasonably achievable.
Combined analyses of data from nuclear workers offer an opportunity to increase the sensitivity of such studies and to provide direct estimates of the effects of long-term, low-dose, low-LET radiation. It should be noted, however, that even with the increased sensitivity, the combined analyses are compatible with a range of possibilities, from a reduction of risk at low doses to risks twice those on which current radiation protection recommendations are based.
Hoyert, D.L., E. Arias, B.L. Smith, S.L. Murphy, and K.D. Kochanek. 2001. Deaths: Final data for 1999. National Vital Statistics Report USA 49:1–113.
Full-Body CT Scans: What You Need to Know (brochure), U.S. Department of Health and Human Services. 2003. Accessed at www.fda.gov/cdrh/ct.
Veterans Exposed to Radiation Through Weapons Testing
An example of man-made radiation exposures experienced by large numbers of people in the past is the experience of the U.S. atomic veterans during and after World War II. From 1945 to 1962, about 210,000 military and civilian personnel were exposed at a distance to aboveground atomic bomb tests (about 200 atmospheric weapons tests were conducted in this period).14 In general, these exercises, conducted in Nevada, New Mexico, and the Pacific, were intended to familiarize combat teams with conditions that would be present during a potential war in which atomic weapons might be used. As an example, in the series of five atmospheric tests conducted during Operation UPSHOT-KNOTHOLE, individual battalion combat teams experienced low-LET γ-ray doses as low as 0.4 mSv and as high as 31 mSv. This range of exposures corresponds to the equivalent of about 5 chest X-rays for the lowest-exposed combat team and approximately 390 chest X-rays for the highest-exposed combat team (assuming a dose of about 0.08 mSv from one chest X-ray).
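The chest X-ray equivalences above are simple division; a brief check using the 0.08 mSv-per-chest-X-ray assumption stated in the text (the function name is illustrative):

```python
# Reproducing the chest X-ray equivalences quoted above (doses in mSv).
def chest_xray_equivalents(dose_msv: float, per_xray_msv: float = 0.08) -> float:
    """Number of chest X-rays that would deliver the same dose."""
    return dose_msv / per_xray_msv

for dose in (0.4, 31.0):
    print(f"{dose} mSv is roughly {chest_xray_equivalents(dose):.0f} chest X-rays")
```

The 31 mSv figure works out to 387.5 chest X-rays, which the summary rounds to approximately 390.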
EVIDENCE FOR ADVERSE HEALTH EFFECTS SUCH AS CANCER AND HEREDITARY DISEASE
The mechanisms that lead to adverse health effects after exposure to ionizing radiation are not fully understood. Ionizing radiation has sufficient energy to change the structure of molecules, including DNA, within the cells of the human body. Some of these molecular changes are so complex that it may be difficult for the body’s repair mechanisms to mend them correctly. However, the evidence is that only a very small fraction of such changes would be expected to result in cancer or other health effects. Radiation-induced mutations would be expected to occur in the reproductive cells of the human body (sperm and eggs), resulting in heritable disease. The latter risk is sufficiently small that it has not been detected in humans, even in thoroughly studied irradiated populations such as those of Hiroshima and Nagasaki.
As noted above, the most thoroughly studied individuals for determination of the health effects of ionizing radiation are the survivors of the Hiroshima and Nagasaki atomic bombs. Sixty-five percent of these survivors received a low dose of radiation (less than 100 mSv; the definition of low dose used by this BEIR VII report). A dose of 100 mSv is equivalent to approximately 40 times the average yearly background radiation exposure worldwide from all sources (2.4 mSv) or roughly 100 times the worldwide background exposure from low-LET radiation, the subject of this report. At dose levels of about 100 to 4000 mSv (about 40 to 1600 times the average yearly background exposure), excess cancers have been observed in Japanese atomic bomb survivors. Excess cancers represent the number of cancers above the levels expected in the population. In the case of in utero exposure (exposure of the fetus during pregnancy), excess cancers can be detected at doses as low as 10 mSv.15 For the radiation doses at which excess cancers occur in the Hiroshima and Nagasaki studies, solid cancers16 show an increasing rate with increasing dose that is consistent with a linear association. In other words, as the level of exposure to radiation increased, so did the occurrence of solid cancers.
Major advances have occurred during the last decade in several key areas that are relevant to the assessment of risks at low radiation doses. These advances have contributed to greater insights into the molecular and cellular responses to ionizing radiation and into the nature of the relationship between radiation exposure and the types of damage that underlie adverse health outcomes. Also, more data on radiation-induced cancers in humans have become available since the previous BEIR report on the health effects of low-dose, low-LET radiation, and those data are evaluated in this report.
THE BEIR VII RISK MODELS
Estimating Cancer Risk
An important task of the BEIR VII committee was to develop “risk models” for estimating the relationship between exposure to low levels of low-LET ionizing radiation and harmful health effects. The committee judged that the linear no-threshold model (LNT) provided the most reasonable description of the relation between low-dose exposure to ionizing radiation and the incidence of solid cancers that are induced by ionizing radiation. This section describes the LNT; the linear-quadratic model, which the committee adopted for leukemia; and a hypothetical linear model with a threshold. It then gives an example derived from the BEIR VII risk models using a figure with closed circles representing the frequency of cancers in the general population and a star representing estimated cancer incidence from radiation exposure. Next, the section explains how the absence of evidence for induced adverse heritable effects in the children of survivors of atomic bombs is consistent with the genetic risk estimated through the use of the doubling dose method in this report.
National Research Council. 2003. A Review of the Dose Reconstruction Program of the Defense Threat Reduction Agency. Washington, DC: National Academies Press, http://www.nap.edu/catalog/10697.html.
Doll, R., and R. Wakeford. 1997. Risk of childhood cancer from foetal irradiation. Brit J Radiol 70:130–139.
Solid cancers are cellular growths in organs such as the breast or prostate as contrasted with leukemia, a cancer of the blood and blood-forming organs.
At doses less than 40 times the average yearly background exposure (100 mSv), statistical limitations make it difficult to evaluate cancer risk in humans. A comprehensive review of the biology data led the committee to conclude that the risk would continue in a linear fashion at lower doses without a threshold and that the smallest dose has the potential to cause a small increase in risk to humans. This assumption is termed the “linear no-threshold model” (see Figure PS-3).
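The dose-response shapes discussed in this section can be written compactly as follows. The notation is a generic sketch for illustration: ERR denotes excess relative risk, D the dose, α and β fitted coefficients, and D_th a hypothetical threshold; none of these symbols or values are taken from the report's fitted models.

```latex
% Generic forms of the three dose-response models discussed in the text
\begin{align*}
  \text{linear no-threshold (solid cancers):} \quad
    & \mathit{ERR}(D) = \alpha D \\
  \text{linear-quadratic (leukemia):} \quad
    & \mathit{ERR}(D) = \alpha D + \beta D^{2} \\
  \text{linear with threshold } D_{\mathrm{th}} \text{ (hypothetical):} \quad
    & \mathit{ERR}(D) =
      \begin{cases}
        0 & D \le D_{\mathrm{th}} \\
        \alpha \, (D - D_{\mathrm{th}}) & D > D_{\mathrm{th}}
      \end{cases}
\end{align*}
```

Under the first form, any nonzero dose carries some nonzero risk, which is exactly the committee's linear no-threshold assumption.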
The BEIR VII committee has developed and presented in Chapter 12 the committee’s best risk estimates for exposure to low-dose, low-LET radiation in human subjects. An example of how the data-based risk models developed in this report can be used to evaluate the risk of radiation exposure is illustrated in Figure PS-4. This example calculates the expected cancer risk from a single exposure of 0.1 Sv. The risk depends on both sex and age at exposure, with higher risks for females and for those exposed at younger ages. On average, assuming a sex and age distribution similar to that of the entire U.S. population, the BEIR VII lifetime risk model predicts that approximately 1 person in 100 would be expected to develop cancer (solid cancer or leukemia) from a dose of 0.1 Sv above background, while approximately 42 of the 100 individuals would be expected to develop solid cancer or leukemia from other causes. Lower doses would produce proportionally lower risks. For example, the committee predicts that approximately one individual per thousand would develop cancer from an exposure to 0.01 Sv. As another example, approximately one individual per hundred would be expected to develop cancer from a lifetime (70-year) exposure to low-LET, natural background radiation (excluding radon and other high-LET radiation). Because of limitations in the data used to develop risk models, risk estimates are uncertain, and estimates that are a factor of two or three larger or smaller cannot be excluded.
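A minimal sketch of the proportional scaling described above, anchored to the committee's population-average figure of roughly 1 in 100 at 0.1 Sv (the function and constant names are illustrative, and the text notes the estimates carry a factor of two or three uncertainty):

```python
# Under the linear no-threshold assumption, population-average lifetime
# cancer risk scales in proportion to dose. The anchor value of about
# 0.1 per Sv corresponds to the 1-in-100 figure at 0.1 Sv quoted above;
# the report cautions that estimates a factor of 2-3 larger or smaller
# cannot be excluded.
RISK_PER_SV = 0.1

def lnt_lifetime_risk(dose_sv: float) -> float:
    """Approximate population-average lifetime cancer risk for a given dose."""
    return RISK_PER_SV * dose_sv

for dose in (0.1, 0.01):
    risk = lnt_lifetime_risk(dose)
    print(f"{dose} Sv -> about 1 person in {round(1 / risk)}")
```

This reproduces the summary's two figures: about 1 in 100 at 0.1 Sv and about 1 in 1000 at 0.01 Sv.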
Health Effects Other Than Cancer
In addition to cancer, radiation exposure has been demonstrated to increase the risk of other diseases, particularly cardiovascular disease, in persons exposed to high therapeutic doses and also in A-bomb survivors exposed to more modest doses. However, there is no direct evidence of increased risk of noncancer diseases at low doses, and data are inadequate to quantify this risk if it exists. Radiation exposure has also been shown to increase risks of some benign tumors, but data are inadequate to quantify this risk.
Estimating Risks to Children of Parents Exposed to Ionizing Radiation
Naturally occurring genetic (i.e., hereditary) diseases contribute substantially to illness and death in human populations. These diseases arise as a result of alterations (mutations) occurring in the genetic material (DNA) contained in the germ cells (sperm and ova) and are heritable (i.e., can be transmitted to offspring and subsequent generations). Among the diseases are those that show simple predictable patterns of inheritance (which are rare), such as cystic fibrosis, and those with complex patterns (which are common), such as diabetes mellitus. Diseases in the latter group originate from interactions among multiple genetic and environmental factors.
Early in the twentieth century, it was demonstrated that ionizing radiation could induce mutations in the germ cells of fruit flies. These findings were subsequently extended to a number of other organisms including mice, establishing the fact that radiation is a mutagen (an agent that can cause mutations in body cells); human beings are unlikely to be exceptions. Thus began the concern that exposure of human populations to ionizing radiation would cause an increase in the frequency of genetic diseases. This concern moved to center stage in the aftermath of the detonation of atomic weapons over Hiroshima and Nagasaki in World War II. Extensive research programs to examine the adverse genetic effects of radiation in the children of A-bomb survivors were soon launched. Other studies focusing on mammals that could be bred in the laboratory—primarily the mouse—were also initiated in different research centers around the world.
The aim of the early human genetic studies carried out in Japan was to obtain a direct measure of adverse effects in the children of A-bomb survivors. The indicators that were used included adverse pregnancy outcomes (i.e., stillbirths, early neonatal deaths, congenital abnormalities); deaths among live-born infants over a follow-up period of about 26 years; growth and development of the children; chromosomal abnormalities; and specific types of mutations. Specific genetic diseases were not used as indicators of risk, because not enough was known about them when the studies began.
The initial goal of the mouse experiments was to examine the effects of different doses, types, and modes of delivery of radiation on mutation frequencies and the extent to which the germ cell stages in the two sexes might differ in their responses to radiation-induced mutations. As it turned out, however, the continuing scarcity of data on radiation-induced mutations in humans and the compelling need for quantitative estimates of genetic risk to formulate adequate measures for radiological protection necessitated the use of mouse data for indirect prediction of genetic risks in humans.
As in previous BEIR reports, a method termed the “doubling dose method” is used to predict the risk of inducible genetic diseases in the children of people exposed to radiation, using naturally occurring genetic diseases as a framework. The doubling dose (DD) is defined as the amount of radiation required to produce as many mutations as occur spontaneously in one generation. It is calculated as the ratio of the average spontaneous mutation rate to the average induced mutation rate in a set of genes: a large DD indicates a small relative mutation risk, and a small DD indicates a large relative mutation risk. The DD used in the present report is 1 Sv (1 Gy)18 and derives from human data on spontaneous mutation rates of disease-causing genes and mouse data on induced mutation rates.19 Therefore, if three mutations occur spontaneously in 1 million people in one generation, six mutations will occur per generation if 1 million people are each exposed to 1 Sv of ionizing radiation, and three of these six mutations would be attributed to the radiation exposure.
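The doubling-dose arithmetic in this paragraph can be made explicit. The three-mutations-per-million figure is the text's own hypothetical example; the function name and structure are illustrative.

```python
# Worked doubling-dose example from the text. The doubling dose (DD) is
# the dose that induces as many mutations as arise spontaneously in one
# generation; BEIR VII uses DD = 1 Sv.
DD_SV = 1.0
SPONTANEOUS_PER_MILLION = 3  # hypothetical spontaneous mutations per generation

def total_mutations(dose_sv: float) -> float:
    """Spontaneous plus radiation-induced mutations per million people."""
    induced = SPONTANEOUS_PER_MILLION * (dose_sv / DD_SV)
    return SPONTANEOUS_PER_MILLION + induced

# At a dose equal to the doubling dose, the mutation count doubles:
print(total_mutations(1.0))  # 6.0: three spontaneous plus three induced
```

At any dose below the doubling dose, the induced contribution is proportionally smaller, which is why a large DD corresponds to a small relative mutation risk.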
More than four decades have elapsed since the genetic studies in Japan were initiated. In 1990, the final results of those studies were published. They show (as earlier reports published from time to time over the intervening years showed) that there are no statistically significant adverse effects detectable in the children of exposed survivors, indicating that at the relatively low doses sustained by survivors (of the order of about 400 mSv or less), the genetic risks, as measured by the indicators mentioned earlier, are very low. Other, mostly small-scale studies of the children of those exposed to high doses of radiation for radiotherapy of cancers have also shown no detectable increases in the frequencies of genetic diseases.
During the past 10 years, major advances have occurred in our understanding of the molecular nature and mechanisms underlying naturally occurring genetic diseases and radiation-induced mutations in experimental organisms including the mouse. These advances have shed light on the relationships between spontaneous mutations and naturally occurring genetic diseases and have provided a firmer scientific basis for inferences on the relationships between induced mutations and diseases. The risk estimates presented in this report have incorporated all of these advances. They show that at low or chronic doses of low-LET irradiation, the genetic risks are very small compared to the baseline frequencies of genetic diseases in the population. Additionally, they are consistent with the lack of significant adverse effects in the Japanese studies based on about 30,000 children of exposed survivors. In other words, given the BEIR VII estimates, one would not expect to see an excess of adverse hereditary effects in a sample of about 30,000 children (the number of children evaluated in Hiroshima and Nagasaki). One reason that genetic risks are low is that only those genetic changes compatible with embryonic development and viability will be recovered in live births.
RESEARCH REVIEWED BY THE COMMITTEE
The committee and staff ensured that the conclusions of this report were informed by a thorough review of published, peer-reviewed materials relevant to the committee’s formal Statement of Task. Specifically, the sponsors of this study asked for a comprehensive review of all relevant epidemiologic data (i.e., data from studies of disease in populations) related to health effects of low doses of ionizing radiation. In addition, the committee was asked to review all relevant biological information important to the understanding or modeling of those health effects. Along with the review of these bodies of literature and drawing on the accumulated knowledge of its members, the committee and staff also considered mailings, publications, and e-mails sent to them. Data on cancer mortality and incidence from the Life Span Study cohort of atomic bomb survivors in Hiroshima and Nagasaki, based on improved dose estimates, were used by the committee. The committee also considered radiation risk information from studies of persons exposed for medical, occupational, and environmental reasons. Models for breast and thyroid cancer drew directly on medical studies. Further information was gathered in open sessions of the committee held at meetings in Washington, D.C., and Irvine, California. Questions and concerns raised in open sessions were considered by committee members in writing this report.
Why Has the Committee Not Accepted the View That Low Doses Are Substantially More Harmful Than Estimated by the Linear No-Threshold Model?
Some of the materials the committee reviewed included arguments that low doses of radiation are more harmful than an LNT (linear no-threshold) model of effects would suggest. The BEIR VII committee has concluded that radiation health effects research, taken as a whole, does not support this view. In essence, the committee concludes that the higher the dose, the greater the risk; the lower the dose, the lower the likelihood of harm to human health. There are several intuitive ways to think about the reasons for this conclusion. First, any single track of ionizing radiation has the potential to cause cellular damage. However, if only one ionizing particle passes through a cell’s DNA, the chances of damage to the cell’s DNA are proportionately lower than if there are 10, 100, or 1000 such ionizing particles passing through it. There is no reason to expect a greater effect at lower doses from the physical interaction of the radiation with the cell’s DNA.
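The track-counting intuition above can be sketched numerically. Assuming, purely for illustration, that each radiation track independently carries a small fixed chance of damaging a cell’s DNA (the function name and all parameter values below are hypothetical, not measured quantities), the probability of at least one damaging event is almost exactly proportional to dose at low doses:

```python
import math

def p_damage(dose_mgy, p_per_track=1e-4, tracks_per_mgy=1.0):
    """Illustrative probability that at least one track damages a cell's
    DNA, modeling damaging events as Poisson with a mean that scales
    linearly with dose. All parameter values are hypothetical."""
    mean_events = p_per_track * tracks_per_mgy * dose_mgy
    return 1.0 - math.exp(-mean_events)

# Tenfold dose -> very nearly tenfold chance of damage at low doses,
# which is the proportionality the LNT model embodies.
print(p_damage(10.0) / p_damage(1.0))  # close to 10
```

This is only a toy model of the physical-interaction argument in the text, not a dosimetric calculation.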
New evidence from biology suggests that cells do not necessarily have to be hit directly by a radiation track for the cell to be affected. Some speculate that hit cells communicate with nonhit cells by chemical signals or other means. To some, this suggests that at very low radiation doses, where not all of the cells in the body are hit, “bystander” cells may be adversely affected, resulting in a greater health effect at low doses than would be predicted by extrapolating the observed response at high doses. Others believe that increased cell death caused by so-called bystander effects might lower the risk of cancer by eliminating cells at risk for cancer from the irradiated cell population. Although additional research on this subject is needed, it is unclear at this time whether the bystander effect would have a net positive or net negative effect on the health of an irradiated person.
In sum, the total body of relevant research for the assessment of radiation health effects provides compelling reasons to believe that the risks associated with low doses of low-LET radiation are no greater than expected on the basis of the LNT model.
Why Has the Committee Not Accepted the View That Low Doses Are Substantially Less Harmful Than Estimated by the Linear No-Threshold Model?
In contrast to the previous section’s subject, some materials provided to the committee suggest that the LNT model exaggerates the health effects of low levels of ionizing radiation. They say that the risks are lower than predicted by the
LNT, that they are nonexistent, or that low doses of radiation may even be beneficial. The committee also does not accept this hypothesis. Instead, the committee concludes that the preponderance of information indicates that there will be some risk, even at low doses. As the simple risk calculations in this Public Summary show, the risk at low doses will be small. Nevertheless, the committee’s principal risk model for solid tumors predicts a linear decrease in cancer incidence with decreasing dose.
Before coming to this conclusion, the committee reviewed articles arguing that a threshold or decrease in effect does exist at low doses. Those reports claimed that at very low doses, ionizing radiation does not harm human health or may even be beneficial. The reports were found either to be based on ecologic studies or to cite findings not representative of the overall body of data.
Ecologic studies assess broad regional associations, and in some cases such studies have suggested that the incidence of cancer is much higher or lower than the numbers observed with more precise epidemiologic studies. When the complete body of research on this question is considered, a consensus emerges: the health risks of ionizing radiation, although small at low doses, are a function of dose.
Both the epidemiologic data and the biological data are consistent with a linear model at doses where associations can be measured. The main studies establishing the health effects of ionizing radiation are those analyzing survivors of the Hiroshima and Nagasaki atomic bombings in 1945. Sixty-five percent of these survivors received a low dose of radiation, that is, low according to the definition used in this report (equal to or less than 100 mSv). The arguments for thresholds or beneficial health effects are not supported by these data. Other work in epidemiology also supports the view that the harmfulness of ionizing radiation is a function of dose. Further, studies of cancer in children following exposure in utero or in early life indicate that radiation-induced cancers can occur at low doses. For example, the Oxford Survey of Childhood Cancer found a “40 percent increase in the cancer rate among children up to [age] 15.”20 This increase was detected at radiation doses in the range of 10 to 20 mSv.
There is also compelling support for the linearity view of how cancers form. Studies in radiation biology show that “a single radiation track (resulting in the lowest exposure possible) traversing the nucleus of an appropriate target cell has a low but finite probability of damaging the cell’s DNA.”21 Subsets of this damage, such as ionization “spurs” that can cause multiple damage in a short length of DNA, may be difficult for the cell to repair or may be repaired incorrectly. The committee has concluded that there is no compelling evidence to indicate a dose threshold below which the risk of tumor induction is zero.
Despite the challenges associated with understanding the health effects of low doses of low-LET radiation, current knowledge allows several conclusions. The BEIR VII committee concludes that current scientific evidence is consistent with the hypothesis that there is a linear dose-response relationship between exposure to ionizing radiation and the development of radiation-induced solid cancers in humans. The committee further judges it unlikely that a threshold exists for the induction of cancers but notes that the occurrence of radiation-induced cancers at low doses will be small. The committee maintains that other health effects (such as heart disease and stroke) occur at high radiation doses, but additional data must be gathered before an assessment can be made of any possible connection between low doses of radiation and noncancer health effects. Additionally, the committee concludes that although adverse health effects in children of exposed parents (attributable to radiation-induced mutations) have not been found, there are extensive data on radiation-induced transmissible mutations in mice and other organisms. Thus, there is no reason to believe that humans would be immune to this sort of harm.
This report, prepared by the National Research Council’s Committee on the Biological Effects of Ionizing Radiation (BEIR), is the seventh in a series that addresses the health effects of exposure of human populations to low-dose, low-LET (linear energy transfer) ionizing radiation. The current report focuses on new information available since the 1990 BEIR V report on low-dose, low-LET health effects.
Ionizing radiation arises from both natural and man-made sources and at very high doses can produce damaging effects in tissues that can be evident within days after exposure. At the low-dose exposures that are the focus of this report, so-called late effects, such as cancer, are produced many years after the initial exposure. In this report, the committee has defined low doses as those in the range of near 0 up to about 100 milligray (mGy) of low-LET radiation, with emphasis on the lowest doses for which meaningful effects have been found. Additionally, effects that may occur as a result of chronic exposures over months to a lifetime at dose rates below 0.1 mGy/min, irrespective of total dose, are thought to be most relevant. Medium doses are defined as doses in excess of 100 mGy up to 1 Gy, and high doses encompass doses of 1 Gy or more, including the very high total doses used in radiotherapy (of the order of 20 to 60 Gy).
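The dose categories defined above can be summarized in a small helper. This is a sketch only; the handling of the exact boundary values follows one plain reading of the text (“about 100 mGy” counted as low, “1 Gy or more” as high):

```python
def dose_category(dose_gy):
    """Classify a low-LET dose using the report's definitions:
    low: near 0 up to about 100 mGy (0.1 Gy);
    medium: in excess of 100 mGy up to 1 Gy;
    high: 1 Gy or more (radiotherapy totals run ~20-60 Gy)."""
    if dose_gy <= 0.1:
        return "low"
    if dose_gy < 1.0:
        return "medium"
    return "high"

print(dose_category(0.05))  # low
print(dose_category(0.5))   # medium
print(dose_category(30.0))  # high
```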
Well-demonstrated late effects of radiation exposure include the induction of cancer and some degenerative diseases (e.g., cataracts). Also, the induction of mutations in the DNA of germ cells that, when transmitted, have the potential to cause adverse health effects in offspring has been demonstrated in animal studies.
EVIDENCE FROM BIOLOGY
There is an intimate relationship between responses to DNA damage, the appearance of gene or chromosomal mutations, and multistage cancer development. Molecular and cytogenetic studies of radiation-associated animal cancers and more limited human data are consistent with the induction of a multistage process of cancer development. This process does not appear to differ from that which applies to spontaneous cancer or to cancers associated with exposure to other carcinogens.
Animal data support the view that low-dose radiation acts principally on the early stages of tumorigenesis (initiation). High-dose effects on later stages (promotion or progression) are also likely. Although data are limited, the loss of specific genes whose absence might result in animal tumor initiation has been demonstrated in irradiated animals and cells.
Adaptation, low-dose hypersensitivity, bystander effect, hormesis, and genomic instability are based mainly on phenomenological data with little mechanistic information. The data suggest enhancement or reduction in radiation effects and in some cases appear to be restricted to special experimental circumstances.
Radiation-Induced Cancer: Mechanisms, Quantitative Experimental Studies, and the Role of Molecular Genetics
A critical conclusion about mechanisms of radiation tumorigenesis is that the data reviewed greatly strengthen the view that there are intimate links between the dose-dependent induction of DNA damage in cells, the appearance of gene or chromosomal mutations through DNA damage misrepair, and the development of cancer. Although less well established, the available data point toward a single-cell (monoclonal) origin of induced tumors. These data also provide some evidence on candidate radiation-associated mutations in tumors. These mutations include loss-of-function DNA deletions, some of which have been shown to be multigene deletions. Certain point mutations and gene amplifications have also been characterized in radiation-associated tumors, but their origins and status are uncertain.
One mechanistic caveat explored was that novel forms of cellular damage response, collectively termed induced genomic instability, might contribute significantly to radiation
cancer risk. The cellular data reviewed in this report identified uncertainties and some inconsistencies in the expression of this multifaceted phenomenon. However, telomere-associated mechanisms1 did provide a coherent explanation for some in vitro manifestations of induced genomic instability. The data did not reveal consistent evidence for the involvement of induced genomic instability in radiation tumorigenesis, although telomere-associated processes may account for some tumorigenic phenotypes.
Quantitative animal data on dose-response relationships provide a complex picture of low-LET radiation, with some tumor types showing linear or linear-quadratic relationships, while studies of other tumor types are suggestive of a low-dose threshold, particularly for thymic lymphoma and ovarian cancer. However, the induction or development of these two cancer types is believed to proceed via atypical mechanisms involving cell killing; therefore it was judged that the threshold-like responses observed should not be generalized. Adaptive responses for radiation tumorigenesis have been investigated in quantitative animal studies, and recent information is suggestive of adaptive processes that increase tumor latency but do not affect lifetime risk.
The review of cellular, animal, and epidemiologic or clinical studies of the role of genetic factors in radiation tumorigenesis suggests that many of the known, strongly expressing, cancer-prone human genetic disorders are likely to show an elevated risk of radiation-induced cancer, probably with a high degree of organ specificity. Cellular and animal studies suggest that the molecular mechanisms that underlie these genetically determined radiation effects largely mirror those that apply to spontaneous tumorigenesis and are consistent with the knowledge of somatic mechanisms of tumorigenesis. In particular, evidence has been obtained that major deficiencies in DNA damage response and tumor-suppressor-type genes can serve to elevate radiation cancer risk.
A major theme developing in the study of cancer genetics is the interaction and potential impact of more weakly expressing variant cancer genes that may be relatively common in human populations. Knowledge of such gene-gene and gene-environment interactions, although at an early stage, is developing rapidly. The animal genetic data provide proof-of-principle evidence of how such variant genes with functional polymorphisms can influence cancer risk, including limited data on radiation tumorigenesis.
Given that the functional gene polymorphisms associated with cancer risk may be relatively common, the potential for significant distortion of population-based risk was explored with emphasis on the organ specificity of genes of interest. A preliminary conclusion is that common polymorphisms of DNA damage response genes associated with organ-wide radiation cancer risk would be the most likely source of major interindividual differences in radiation response.
ESTIMATION OF HERITABLE GENETIC EFFECTS OF RADIATION IN HUMAN POPULATIONS
In addition to the induction of cancers in humans by radiation, there is evidence for the heritable genetic effects of radiation from animal experiments. It is now possible to estimate risks for all classes of genetic diseases. The advances that deserve particular attention are the following: (1) introduction of a conceptual change for calculating the doubling dose (from the use of mouse data for both spontaneous and induced mutation rates in 1990 to the use of human data on spontaneous mutation rates and mouse data on induced mutation rates now; the latter was the procedure used in the 1972 BEIR report); (2) elaboration of methods to estimate mutation component (i.e., the relative increase in disease frequency per unit relative increase in mutation rate) and use of estimates obtained through these methods to assess the impact of induced mutations on the incidence of Mendelian and chronic multifactorial diseases; (3) introduction of an additional factor, the “potential recoverability correction factor,” in the risk equation to bridge the gap between the rates of radiation-induced mutations estimated from mouse data and the predicted risk of radiation-inducible heritable diseases in humans; and (4) introduction of the concept that multisystem developmental abnormalities are likely to be among the principal phenotypes of radiation-induced genetic damage in humans.
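The quantities in advances (1)–(3) combine multiplicatively in the doubling-dose approach: the baseline disease frequency is scaled by the relative mutation-rate increase per gray (1/DD), the mutation component, and the potential recoverability correction factor. A minimal sketch of that structure, with placeholder parameter values rather than the committee’s fitted ones:

```python
def genetic_risk_per_gy(baseline_per_million, doubling_dose_gy,
                        mutation_component, prcf):
    """Doubling-dose method: risk = P * (1/DD) * MC * PRCF, where
    P is the baseline disease frequency, DD the doubling dose,
    MC the mutation component, and PRCF the potential recoverability
    correction factor. All inputs below are illustrative placeholders."""
    return (baseline_per_million * (1.0 / doubling_dose_gy)
            * mutation_component * prcf)

# Hypothetical disease class: baseline 16,500 per million live births,
# DD = 1 Gy, MC = 0.3, PRCF = 0.2:
print(genetic_risk_per_gy(16_500, 1.0, 0.3, 0.2))  # ~990 cases/million/Gy
```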
The risk estimates presented in this report incorporate all of the above advances. They show that at low or chronic doses of low-LET irradiation, the genetic risks are very small compared to the baseline frequencies of genetic diseases in the population.
The total risk for all classes of genetic diseases estimated in this report is about 3000 to 4700 cases per million first-generation progeny per gray. These figures are about 0.4 to 0.6% of the baseline risk of 738,000 cases per million (of which chronic diseases constitute the predominant component—namely, 650,000 cases per million). The BEIR V risk estimates (which did not include chronic diseases) were <2400 to 5300 cases per million first-generation progeny per gray. Those figures were about 5 to 14% of the baseline risk of 37,300 to 47,300 cases per million.
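The percentages quoted above follow directly from the figures in the text, as a quick arithmetic check confirms:

```python
# Figures from the text: BEIR VII first-generation genetic risk range
# versus the baseline frequency of genetic disease.
beir7_range = (3000, 4700)   # cases per million first-generation progeny per Gy
baseline = 738_000           # baseline cases per million

for cases in beir7_range:
    print(f"{100 * cases / baseline:.1f}% of baseline")  # 0.4% ... 0.6%
```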
EVIDENCE FROM EPIDEMIOLOGY
Studies of Atomic Bomb Survivors
The Life Span Study (LSS) cohort of survivors of the atomic bombings in Hiroshima and Nagasaki continues to serve as a major source of information for evaluating health risks from exposure to ionizing radiation and particularly for developing quantitative estimates of risk. The advantages of
this population include its large size (slightly less than half of the survivors were alive in 2000); the inclusion of both sexes and all ages; a wide range of doses that have been estimated for individual subjects; and high-quality mortality and cancer incidence data. In addition, the whole-body exposure received by this cohort offers the opportunity to assess risks for cancers of a large number of specific sites and to evaluate the comparability of site-specific risks. Special studies of subgroups of the LSS have provided clinical data, biological measurements, and information on potential confounders or modifiers.
Mortality data for the period 1950–1997 have been evaluated in detail. Importantly, cancer incidence data from both the Hiroshima and the Nagasaki tumor registries became available for the first time in the 1990s. These data not only include nonfatal cancers, but also offer diagnostic information that is of higher quality than that based on death certificates, which is especially important when evaluating site-specific cancers. The more extensive data on solid cancer that are now available have allowed more detailed evaluation of several issues pertinent to radiation risk assessment. Analyses evaluating the shape of the dose-response and focusing on the large number of survivors with relatively low doses (less than 0.5 Sv) generally confirm the appropriateness of linear functions to describe solid cancer risks. Both excess relative risk and excess absolute risk models have been used to evaluate the modifying effects of sex, age at exposure, and attained age.
Health end points other than cancer have been linked with radiation exposure in the LSS cohort. Of particular note, a dose-response relationship to mortality from nonneoplastic disease has been demonstrated with statistically significant associations for the categories of heart disease; stroke; and diseases of the digestive, respiratory, and hematopoietic systems. However, noncancer risks at the low doses of interest for this report are especially uncertain, and the committee has not modeled the dose-response for nonneoplastic diseases, or developed risk estimates for these diseases.
Medical Radiation Studies
Published studies on the health effects of medical exposures were reviewed to identify those that provide information for quantitative risk estimation. Particular attention was focused on estimating risks of leukemia and of lung, breast, thyroid, and stomach cancer in relation to radiation dose for comparison with the estimates derived from other exposed populations, in particular atomic bomb survivors.
For lung cancer, estimates of the excess relative risk (ERR)2 per gray from studies of acute or fractionated high-dose-rate exposures are statistically compatible and lie in the range 0.1–0.4 per gray. For breast cancer, both the ERR and the excess absolute risk (EAR) appear to be quite variable across studies. A pooled analysis of A-bomb survivors and selected medically exposed cohorts indicated that the EAR for breast cancer was similar (about 10 per 10⁴ person-years (PY) per gray at age 50) following acute and fractionated moderate- to high-dose-rate exposure, despite differences in baseline risks and dose rate. Women treated for benign breast conditions appeared to be at higher risk, whereas the risk was lower following protracted low-dose-rate exposures in hemangioma cohorts.
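The ERR and EAR coefficients quoted in this section translate into expected excess cases in a straightforward way. The helpers below are a sketch (the cohort size and baseline rate are invented for illustration), not a method taken from the report:

```python
def excess_cases_from_err(baseline_rate_per_py, err_per_gy, dose_gy, person_years):
    """Excess cases implied by an excess relative risk coefficient:
    the excess rate is baseline rate * (ERR per Gy) * dose."""
    return baseline_rate_per_py * err_per_gy * dose_gy * person_years

def excess_cases_from_ear(ear_per_1e4_py_gy, dose_gy, person_years):
    """Excess cases implied by an excess absolute risk coefficient
    expressed, as in the text, per 10^4 person-years per gray."""
    return ear_per_1e4_py_gy * dose_gy * person_years / 1e4

# Hypothetical cohort of 10^5 person-years at 1 Gy, using the pooled
# breast cancer EAR of about 10 per 10^4 PY per gray quoted above:
print(excess_cases_from_ear(10.0, 1.0, 1e5))  # 100.0 excess cases
```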
For thyroid cancer, all of the studies providing quantitative information about risks are studies of children who received radiotherapy for benign conditions. For subjects exposed below the age of 15, a linear dose-response was seen, with a leveling or decrease in risk at the higher doses used for cancer therapy (10+ Gy). An ERR of 7.7 per gray and an EAR of 4.4 per 10⁴ PY per gray were derived from pooled analyses of data from medical exposures and atomic bomb survivors. Both estimates were significantly affected by age at exposure, with a strong decrease in risk with increasing age at exposure and little apparent risk for exposures after age 20. The ERR appeared to decline over time about 30 years after exposure but was still elevated at 40 years. Little information on thyroid cancer risk in relation to medical iodine-131 (131I) exposure in childhood was available. Studies of the effects of 131I exposure later in life provide little evidence of an increased risk of thyroid cancer.
For leukemia, ERR estimates from studies with average doses ranging from 0.1 to 2 Gy are relatively close, in the range 1.9 to 5 per gray, and are statistically compatible. Estimates of EAR are also similar across studies, ranging from 1 to 2.6 per 10⁴ PY per gray. Little information is available on the effects of age at exposure or of exposure protraction.
For stomach cancer, the estimates of ERR per gray range from negative to 1.3. The confidence intervals are wide, however, and they all overlap, indicating that these estimates are statistically compatible. Finally, studies of patients having undergone radiotherapy for Hodgkin’s disease or breast cancer suggest that there may be some risk of cardiovascular morbidity and mortality for very high doses and high-dose-rate exposures. The magnitude of the radiation risk and the shape of the dose-response curve for these outcomes are uncertain.
Occupational Radiation Studies
Numerous studies have considered the mortality and incidence of cancer among various occupationally exposed groups in the medical, manufacturing, nuclear, research, and aviation industries.
The most informative studies are those of nuclear industry workers (including the workers of Mayak in the former Soviet Union), for whom individual real-time estimates of
doses have been collected over time with the use of personal dosimeters. More than 1 million workers have been employed in this industry since its beginning in the early 1940s. Studies of individual worker cohorts are limited, however, in their ability to estimate precisely the potentially small risks associated with low levels of exposure.
Combined analyses of data from multiple cohorts offer an opportunity to increase the sensitivity of such studies and provide direct estimates of the effects of long-term, low-dose, low-LET radiation. The most comprehensive and precise estimates to date are those derived from the UK National Registry of Radiation Workers and the Three-Country Study (Canada-United Kingdom-United States), which have provided estimates of leukemia and all cancer risks. In these studies, the leukemia risk estimates are intermediate between those derived using linear and linear-quadratic extrapolations from the A-bomb survivors’ study. The estimate for all cancers is smaller, but the confidence intervals are wide and consistent both with no risk and with risks up to twice the linear extrapolation from atomic bomb survivors.
Because of the remaining uncertainty in occupational risk estimates and the fact that errors in doses have not formally been taken into account in these studies, the committee concluded that the risk estimates from occupational studies, although directly relevant to the estimation of effects of low-dose protracted exposures, are not sufficiently precise to form the sole basis for radiation risk estimates.
Ecological studies of populations living around nuclear facilities and of other environmentally exposed populations do not contain individual estimates of radiation dose or provide a direct quantitative estimate of risk in relation to dose. This limits the interpretation of such data. Several cohort studies have reported health outcomes among persons exposed to environmental radiation. No consistent or generalizable information is contained in these studies.
Results from environmental exposures to 131I have been inconsistent. The most informative findings are from studies of individuals exposed to radiation after the Chernobyl accident. Recent evidence indicates that exposure to radiation from Chernobyl is associated with an increased risk of thyroid cancer and that the relationship is dose dependent. The quantitative estimate of excess thyroid cancer risk is generally consistent with estimates from other radiation-exposed populations and is observed in both males and females. Iodine deficiency appears to be an important modifier of risk, enhancing the risk of thyroid cancer following radiation exposure.
INTEGRATION OF BIOLOGY AND EPIDEMIOLOGY
The principal conclusions from this work are the following:
Current knowledge of cellular or molecular mechanisms of radiation tumorigenesis tends to support the application of models that incorporate the excess relative risk projection over time.
The choice of models for the transport of cancer risk from Japanese A-bomb survivors to the U.S. population is influenced by mechanistic knowledge and information on the etiology of different cancer types.
A combined Bayesian analysis of A-bomb epidemiologic information and experimental data has been developed to provide an estimation of the dose and dose-rate effectiveness factor (DDREF) for cancer risk estimates reported in this study.
Knowledge of adaptive responses, genomic instability, and bystander signaling among cells that may act to alter radiation cancer risk was judged to be insufficient to be incorporated in a meaningful way into the modeling of epidemiologic data.
Genetic variation in the population is a potentially important factor in the estimation of radiation cancer risk. Modeling studies suggest that strongly expressing mutations that predispose humans to cancer are too rare to distort appreciably population-based estimates of risk, but are a significant issue in some medical radiation settings.
Estimation of the heritable effects of radiation takes advantage of new information on human genetic disease and on mechanisms of radiation-induced germline mutation. The application of a new approach to genetic risk estimation leads the committee to conclude that low-dose induced genetic risks are very small when compared to baseline risks in the population.
The committee judges that the balance of evidence from epidemiologic, animal, and mechanistic studies tends to favor a simple proportionate relationship at low doses between radiation dose and cancer risk. Uncertainties in this judgment are recognized and noted.
Each of the above points contributes to refining earlier risk estimates, but none leads to a major change in the overall evaluation of the relation between exposure to ionizing radiation and human health effects.
ESTIMATING CANCER RISKS
As in past risk assessments, the LSS cohort of survivors of the atomic bombings in Hiroshima and Nagasaki plays a principal role in the committee’s development of cancer risk estimates. Risk models were developed primarily from cancer incidence data for the period 1958–1998 and based on DS02 (Dosimetry System 2002) dosimetry, the result of a major international effort to reassess and improve survivor dose estimates. Data from studies involving medical and occupational exposure were also evaluated. Models for estimating risks of breast and thyroid cancer were based on pooled analyses that included data on both the LSS cohort and medically exposed persons.
To use models developed primarily from the LSS cohort for the estimation of lifetime risks for the U.S. population, it was necessary to make several assumptions that involve uncertainty. Two important sources of uncertainty are (1) the possible reduction in risk for exposure at low doses and dose rates (i.e., the DDREF) and (2) the use of risk estimates based on Japanese atomic bomb survivors for estimating risks for the U.S. population.
The committee has developed and presented its best possible risk estimates for exposure to low-dose, low-LET radiation in human subjects. As an example, Table ES-1 shows the estimated number of incident cancer cases and deaths that would be expected to result if each individual in a population of 100,000 persons with an age distribution similar to that of the entire U.S. population were exposed to a single dose of 0.1 Gy, and also shows the numbers that would be expected in the absence of exposure. Results for solid cancers are based on linear models and reduced by a DDREF of 1.5. Results for leukemia are based on a linear-quadratic model.
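The structure of such an estimate can be sketched as follows. The only numbers taken from the text are the 0.1-Gy dose, the population of 100,000, and the DDREF of 1.5; the linear risk coefficient is a made-up placeholder, not the committee’s fitted value:

```python
def expected_excess_cases(risk_per_gy_per_100k, dose_gy, ddref, persons=100_000):
    """A linear high-dose solid-cancer risk coefficient (excess cases per
    Gy per 100,000 persons) is divided by the DDREF before being applied
    to a low acute dose. The coefficient passed in is hypothetical."""
    return (risk_per_gy_per_100k / ddref) * dose_gy * (persons / 100_000)

# Hypothetical coefficient of 12,000 excess cases per 100,000 per Gy,
# DDREF = 1.5, single dose of 0.1 Gy to 100,000 persons:
print(expected_excess_cases(12_000, 0.1, 1.5))  # about 800 excess cases
```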
The estimates are accompanied by 95% subjective confidence intervals (i.e., random as well as judgmental) that reflect the most important sources of uncertainty—namely, statistical variation, uncertainty in the factor used to adjust risk estimates for exposure at low doses and dose rates, and uncertainty in the method of transport. In this report the committee also presents example estimates for each of several specific cancer sites and other exposure scenarios, although they are not shown here.
In general the magnitude of estimated risks for total cancer mortality or leukemia has not changed greatly from estimates in past reports such as BEIR V and recent reports of the United Nations Scientific Committee on the Effects of Atomic Radiation and the International Commission on Radiological Protection. New data and analyses have reduced sampling uncertainty, but uncertainties related to estimating risk for exposure at low doses and dose rates and to transporting risks from Japanese A-bomb survivors to the U.S. population remain large. Uncertainties in estimating risks of site-specific cancers are especially large.
As an illustration, Figure ES-1 shows estimated excess relative risks of solid cancer versus dose (averaged over sex and standardized to represent individuals exposed at age 30 who have attained age 60) for atomic bomb survivors, with doses in each of 10 dose intervals less than 2.0 Sv. The insert shows the ERR versus dose for leukemia. This plot conveys the overall dose-response relationship for the LSS cohort and its role in low-dose risk estimation. It is important to note that the difference between the linear and linear-quadratic models in the low-dose range is small relative to the error bars, and hence small relative to the uncertainty in the risk estimates produced from either model. For solid cancer incidence the linear-quadratic model did not offer a statistically significant improvement in fit, so the linear model was used. For leukemia, a linear-quadratic model (insert in Figure ES-1) was used since it fitted the data significantly better than the linear model.
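The point that the two model forms diverge little at low doses can be illustrated directly. The coefficients below are arbitrary, chosen only so that the curves agree at 1 Sv; they are not the committee’s fitted values:

```python
def err_linear(d_sv, beta=0.5):
    """Linear model: ERR = beta * D."""
    return beta * d_sv

def err_linear_quadratic(d_sv, alpha=0.3, theta=0.2):
    """Linear-quadratic model: ERR = alpha * D + theta * D**2."""
    return alpha * d_sv + theta * d_sv ** 2

# By construction the two fits agree at 1 Sv; below ~0.5 Sv their gap,
# theta * d * (1 - d) with these coefficients, shrinks toward zero.
for d in (0.5, 0.1, 0.05, 0.01):
    print(d, abs(err_linear(d) - err_linear_quadratic(d)))
```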
The committee concludes that current scientific evidence is consistent with the hypothesis that there is a linear, no-threshold dose-response relationship between exposure to ionizing radiation and the development of cancer in humans.
RECOMMENDED RESEARCH NEEDS
A more detailed listing of the BEIR VII recommended research needs can be found at the end of Chapter 13.
Research Need 1: Determination of the level of various molecular markers of DNA damage as a function of low-dose ionizing radiation
Currently identified molecular markers of DNA damage and other biomarkers that can be identified in the future should be used to quantify low levels of DNA damage and to identify the chemical nature and repair characteristics of the damage to the DNA molecule.
TABLE ES-1 The Committee’s Preferred Estimates of the Lifetime Attributable Risk of Incidence and Mortality for All Solid Cancers and for Leukemia
Research Need 2: Determination of DNA repair fidelity, especially with regard to double and multiple strand breaks at low doses, and whether repair capacity is independent of dose
Repair capacity at low levels of damage should be investigated, especially in light of conflicting evidence for stimulation of repair at low doses. In these studies, the accuracy of DNA sequences rejoined by these repair pathways must be determined, and the mechanisms of error-prone repair of radiation lesions must be elucidated.
Research Need 3: Evaluation of the relevance of adaptation, low-dose hypersensitivity, bystander effect, hormesis, and genomic instability for radiation carcinogenesis
Mechanistic data are needed to establish the relevance of these processes to low-dose radiation exposure (i.e., <100 mGy). Relevant end points should include not only chromosomal aberrations and mutations but also genomic instability and induction of cancer. In vitro and in vivo data are needed for delivery of low doses over several weeks or months at very low dose rates or with fractionated exposures. The cumulative effect of multiple low doses of less than 10 mGy delivered over extended periods has to be explored further. The development of in vitro transformation assays utilizing nontransformed human diploid cells is judged to be of special importance.
Research Need 4: Identification of molecular mechanisms for postulated hormetic effects at low doses
Definitive experiments that identify molecular mechanisms are necessary to establish whether hormetic effects exist for radiation-induced carcinogenesis.
Research Need 5: Tumorigenic mechanisms
Further cytogenetic and molecular genetic studies are necessary to reduce current uncertainties about the specific role of radiation in multistage tumorigenesis.
Research Need 6: Genetic factors in radiation cancer risk
Further work is needed in humans and mice on gene mutations and functional polymorphisms that influence radiation response and cancer risk.
Research Need 7: Heritable genetic effects of radiation
Further work should be done to establish (1) the potential roles of DNA double-strand break repair processes in the origin of deletions in irradiated stem cell spermatogonia and oocytes (the germ cell stages of importance in risk estimation) in mice and humans and (2) the extent to which large radiation-induced deletions in mice are associated with multisystem development defects. In humans, the problem can be explored using genomic databases and knowledge of mechanisms of origin of radiation-induced deletions to predict regions that may be particularly prone to radiation-inducible deletions.
With respect to epidemiology, studies on the genetic effects of radiotherapy for childhood cancer should be encouraged, especially when they can be coupled with modern molecular techniques (such as array-based comparative genomic hybridization).
Research Need 8: Future medical radiation studies
Most studies of medical radiation, whether cohort studies or nested case-control studies, should rely on exposure information collected prospectively. Future studies should continue to include individual dose estimation for the site of interest, as well as an evaluation of the uncertainty in dose estimation.
Studies of populations with high- and moderate-dose medical exposures are particularly important for the study of modifiers of radiation risks. Because of the high level of radiation exposure in these populations, they are also ideally suited to study the effects of gene-radiation interactions, which may render particular subsets of the population more sensitive to radiation-induced cancer. Genes of particular interest include BRCA1, BRCA2, ATM, CHEK2, NBS1, XRCC1, and XRCC3.
Of concern for radiological protection is the increasing use of computed tomography (CT) scans and diagnostic X-rays. Epidemiologic studies of the following exposed populations, if feasible, would be particularly useful: (1) followup studies of persons receiving CT scans, especially children; and (2) studies of infants who experience diagnostic exposures related to cardiac catheterization, those who have recurrent exposures to follow their clinical status, and premature babies monitored for pulmonary development with repeated X-rays.
There is a need to organize worldwide consortia that would use similar methods in data collection and follow-up. These consortia should record delivered doses and technical data from all X-ray or isotope-based imaging approaches including CT, positron emission tomography, and single photon emission computed tomography.
Research Need 9: Future occupational radiation studies
Studies of occupational radiation exposures, in particular among nuclear industry workers, including nuclear power plant workers, are well suited for direct assessment of the carcinogenic effects of long-term, low-level radiation exposure in humans. Ideally, studies of occupational radiation should be prospective in nature and rely on individual real-time estimates of radiation doses. Where possible, national registries of radiation exposure of workers should be established and updated as additional radiation exposure is accumulated and as workers change employers. These registries should include at least annual estimates of whole-body radiation dose from external photon exposure. These exposure registries should be linked with mortality registries and, where they exist, national tumor (and other disease) registries. It is also important to continue follow-up of workers exposed to relatively high doses, that is, workers at the Mayak nuclear facility and workers involved in the Chernobyl cleanup.
Research Need 10: Future environmental radiation studies
In general, additional ecological studies of persons exposed to low levels of radiation from environmental sources are not recommended. However, if there are disasters in which a local population is exposed to unusually high levels of radiation, it is important that there be a rapid response not only for the prevention of further exposure but also for scientific evaluation of possible effects of the exposure. The data collected should include basic demographic information on individuals, estimates of acute and possible continuing exposure, the nature of the ionizing radiation, and the means of following these individuals for many years. The possibility of enrolling a comparable nonexposed population should be considered. Studies of persons exposed environmentally as a result of the Chernobyl disaster or as a result of releases from the Mayak nuclear facility should continue.
Research Need 11: Japanese atomic bomb survivor studies
The LSS cohort of Japanese A-bomb survivors has played a central role in BEIR VII and in past risk assessments. It is important that follow-up for mortality and cancer incidence continue for the 45% of the cohort who remained alive at the end of 2000.
In the near future, an uncertainty evaluation of the DS02 dosimetry system is expected to become available. Dose-response analyses that make use of this evaluation should thus be conducted to account for dosimetry uncertainties.
Development and application of analytic methods that allow more reliable estimation of site-specific risks is also needed. Specifically, methods that draw on both data for the specific site and data for broader cancer categories could be useful.
Research Need 12: Epidemiologic studies in general
Data from the LSS cohort of A-bomb survivors should be supplemented with data on populations exposed to low doses and/or dose rates, especially those with large enough doses to allow risks to be estimated with reasonable precision. Studies of nuclear industry workers and careful studies of persons exposed in countries of the former Soviet Union are particularly important in this regard.