Historians of science and technology have argued over the extent to which push factors—the drivers of new scientific knowledge and human inventiveness—determine the directions of technological development. Some, the hard-line technological determinists, believe that push factors alone ultimately determine which technologies will be adopted and what social changes will follow. Others, however, are less convinced that the matter is so simple. Their position, sometimes called soft determinism, holds that technological development and adoption involve a more interactive process. Demand in the economic sense, based on consumer preferences, is considered very important in determining whether a technology will succeed. The AT&T Picturephone is a classic instance of a technological capability that consumers were not interested in having, partly because of its high price but also because of its impact on privacy. The societal context of an age also influences the scientific questions that are asked, and values, biases, needs, markets, legal structures, and other circumstances determine what research will be supported and which projects will turn out to be acceptable or—even more important—viable.
The committee is in the latter camp and believes that, in the next decade, societal (or pull) factors will clearly have an important influence. Among the factors that will place new demands on U.S. science and technology and offer new opportunities for U.S. industry are the following:
The need for greater efficiency and effectiveness of supply chains and of services,
Aging of the population,
Rapidly accelerating urbanization worldwide,
The revolution in military affairs,
The quest for alternative energy sources,
Widespread global poverty,
The quest for global sustainability, and
Protection from natural and human-caused disasters (terrorism and war).
Our ever-increasing ability to alter biological forms and to change what many would view as the natural course of life is challenging personal beliefs, raising deep ethical questions, and leading to society-wide discussions about whether we should do all the things we can do. Our ability to gather, store, manipulate, and communicate digitized information is shifting power between and among government institutions, nongovernment organizations, the private sector, and individuals in ways that can alter the political and legal structure of the society. Many of these capabilities impinge on individual rights or violate strongly held beliefs, and as the public becomes more and more aware of the potential consequences, it is pressing for a voice in determining just which technology-driven changes are valuable and should be allowed to go forward and which should be proscribed or controlled by appropriate legislation.
The market, of course, continues to be an important factor in what innovations succeed, but it is driven by what people value. There is evidence, for example (discussed in more detail below), that what people value in health care is changing and that this will have an effect on which medical technologies are acceptable. The explosion of interest in and concern about the environment, a fringe issue just 30 years ago, appears likely to continue and to affect what people look for and insist upon in technological products and processes. Indeed, beyond market factors, there is every indication that there will be more, rather than less, environmental regulation in the future and this, too, will affect the direction of technological development.
At the international level, the United States faces several challenges that will demand contributions from science and technology. They include the asymmetric threat of terrorism and the globalization of economic activities, including R&D.
The United States must provide for its security in the face of challenges quite different from those of the Cold War, a situation reinforced by the terrorism acts of September 11, 2001, which expanded national security needs to include measures for homeland security to an unprecedented degree. Short-term problems include improved security for the mail system, airlines, nuclear power plants and
other key infrastructure facilities, and prominent buildings. The new security requirements cry out for better technologies for measurement and analysis, needed to improve detection and identification of biological, chemical, radiological, and nuclear threats, and for better ways to destroy or neutralize dangerous biological and other substances used as weapons. For example, “labs on a chip” and biochips could be developed that identify known chemical and biological warfare agents within seconds. Miniature robots with sensors and scanners could sweep sites to detect chemical, biological, and nuclear hazards or to locate terrorists or victims in collapsed buildings. Also needed are better methods of identification, such as national identification cards and biometric identification technologies.
Over the long run, there will be a need to improve the protection of key national infrastructures, including computer networks and other communications systems, power grids, and transportation systems, and the ability of buildings and other facilities to withstand structural damage from attacks. Antiterrorism products purchased by state and local governments as well as the private sector will need to be tested and validated for effectiveness and reliability, and standards for interoperability will need to be developed. All this will have to be achieved in the face of changes in the conduct of research itself: improved security measures for classified work, increased scrutiny of science and engineering students and researchers (especially foreign students and researchers), and limitations on the sharing of some scientific information with scientists in other countries. All of these measures impinge on the tradition of open dialogue among scientists, which has always hastened progress.
The United States must also preserve its economic strength in the face of increasing global competition, particularly from developed countries. At the same time, in the interest of preserving global stability, on which its own security and well-being rest, the nation must do what it can to promote and assist economic growth in the developing world. Technology, of course, figures prominently in the strategies for achieving U.S. goals in all these areas, so it is to be expected that the government will use its market and regulatory powers to encourage technological development along trajectories that serve these goals.
In addition to the globalization of trade, there is the globalization of R&D. Companies around the world, including U.S. companies, have been locating more R&D abroad, to help them meet the needs of foreign customers, keep abreast of advances in science and technology, employ the talents of foreign scientists and engineers, and work with foreign R&D laboratories. This trend raises a number of issues, not least the appropriate role of the U.S. government not only in supporting industrial research at companies that are becoming more international than national in ownership and location, but also in setting industrial standards and regulations for them.
In the following sections, the potential influence of these pull factors is discussed.
BIOLOGICAL SCIENCE AND ENGINEERING
It is not unusual for science and technology to raise value issues for people. But the modern biological sciences are unique in that, for many people, our ever-increasing capacity to understand the structure and function of living systems at the most fundamental level, and to manipulate and modify both structure and function, raises questions about the meaning of being human. It is one thing to cure a disease; it is quite another to replace an organ actively involved in maintaining life. And it is qualitatively different still to modify the building blocks and processes of a living system.
The issue is particularly complicated because it goes beyond the relatively narrow questions of just how much the biological system is being altered or what broad effect the alterations might have on the nature of the organism—more or less factual matters. For many, the important thing is the symbolic meaning of these new technologies, such as cloning, and what our willingness to use them says about our values.
For others, the issue relates either directly or indirectly to religious beliefs about the beginning and the end of life—and also raises the question of what those terms mean (a question that has been debated for a few decades now). And even when the goals of research are not themselves controversial, as was the case for research using fetal tissue, a possible relation to an issue of great concern—abortion—is sufficient to enmesh the research with that other issue.
Still another complicating factor, particularly in Europe but also in the United States, is the uncomfortable history of the eugenics movement. It has led a number of people to be concerned that genetic engineering will reinforce unhealthy normative notions of human perfection, leading to the view that people with disabilities or whose physical appearance does not conform to those norms are inferior. It is, therefore, no surprise that organizations representing people with disabilities have been particularly vocal in opposing most aspects of genetic engineering.
It is the view of the committee that the great promise of molecular and cellular biology will overcome these objections, but there are some significant caveats. The development of clinical therapies based on molecular and cellular manipulation will be subject to extremely close scrutiny and held to an evidentiary standard and a demonstration of efficacy even more rigorous than usual in medicine. Indeed, there is much work to be done to convince the public that genetic alterations have only the intended effects and cannot lead to inadvertent alterations. Our relatively complete knowledge of the human genome (and the genomes of other species) should help in that respect.
It is certainly likely that standing advisory committees with significant representation from the public and the bioethics community will exercise major influence in determining which lines of research are allowed and which are proscribed. For example, it is very unlikely that any therapies based on germ cell modification will be allowed, even if scientific research leads to greater confidence that such alterations can be carefully prescribed and controlled. Of course, the ability of the government to impose research standards and to regulate research is much greater when it supports the research, as was made clear recently in the case of embryonic stem cells. It seems likely that this will provide a strong incentive for the government to stay involved in such research.
Genetically modified organisms (GMOs) in agriculture are likely to encounter even stronger resistance, both domestically and internationally, than genetic manipulation in humans, primarily because the benefits to society are seen as less important than advances in curing human diseases. Thus, stronger weight will be given to arguments against the genetic modification of plants. The greatest challenges are to the use of genetic manipulation to build “nonnatural” genes into plants—for example, to program plants to produce their own pesticides or to tolerate herbicides. Politically, these changes give rise to a coalition between environmental activists and those who oppose genetically modified foods. Our ability to develop and agree on approaches to field testing and ecological assessment will clearly be very important, as recent examples of the spread of engineered genes from Bt corn (a type of GMO) to nonbiotech corn plants in the field indicate. The related issue of the mixing of a GMO (StarLink corn) into snack foods is another area with implications for public support of technology development in this area. Continued growth of agricultural biotechnology will depend on satisfactory solutions to these challenges.
The use of genetic engineering techniques as a more efficient approach to traditional plant breeding for optimal characteristics will be somewhat easier to sell. Of course, a significant fraction of U.S. crops has already been developed using these techniques. They should become even more widely acceptable, at least in this country, as our knowledge of each plant’s genome allows us to ensure that no changes but the intended changes have taken place.
At the international level, the problem is more difficult because it is hard to separate legitimate scientific differences between countries from trade issues. If genetic alteration does not take hold in other parts of the developed world, as it has not to any great extent so far, there will certainly be great pressure in other countries to exclude American GMOs and to cast the issue as one of public health standards. This reinforces the need for objective procedures for establishing scientific fact at the international level when important public policy rests on the outcome.
There are similar differences with respect to medical technologies—namely, the United States and the European Union have markedly different approaches to regulating medical devices. Generally speaking, and in strong contrast to its views on genetic manipulation, the European Union has been much more receptive to new medical technologies and, in the view of most medical device companies, more successful than the United States in developing protocols for device assessment that are both effective and efficient. The early indications are that medical device companies are willing to move research and development operations to Europe and to introduce new technologies there at a much earlier time. This suggests that there will be pressure from the U.S. public in the next decade for the United States to move closer to the European Union in this matter so that Americans do not lose their lead in medical devices or the health benefits that derive from them.
Two other issues related to medical technologies will grow in importance in the next decade. First, driven in part by their general discomfort with what might be termed the technological invasion of their bodies, the public will look for technologies that increase their personal control over their medical care rather than those that make them more dependent on physicians and technicians. Computer-aided devices that they are able to regulate without the intervention of a physician, diagnostic kits that they can use on their own, and interactive medical sites on the Web that can help them to make informed decisions about their own treatment: all are more likely to find wide acceptance. There will be a premium on technologies whose sophistication is applied to create simplicity for the patient.
The other pressing issue is the growing cost of health care in the United States—it now exceeds $1 trillion per year, more than 14 percent of GDP—and the fact that more than 40 million Americans lack health insurance. The introduction of medical technology has often been cited as one cause of this increase in medical costs. How much of the increase is due to technology and whether that increase is cost-effective (that is, whether it cures particular diseases or extends human longevity at a lower cost than existing treatments—if such treatments exist) are matters of continuing debate. But the concern about total cost and the fact that there is likely to be differential access to new technologies because health insurance is not universal present difficult social problems that will undoubtedly affect the rate of adoption of medical technologies in the next decade. For example, if chip diagnostic manufacturers want to compete in the $19 billion diagnostic market, the assays they develop will have to be inexpensive.
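The cost-effectiveness question raised above is commonly framed as an incremental cost-effectiveness ratio (ICER): the extra cost of a new treatment divided by the extra health benefit it delivers, often measured in quality-adjusted life years (QALYs). A minimal sketch of the calculation follows; the treatments, costs, and benefits in it are hypothetical, chosen only to illustrate the arithmetic.

```python
# Incremental cost-effectiveness ratio (ICER): a standard way to ask whether
# a new medical technology delivers its benefit at acceptable extra cost.
# All numbers below are hypothetical, for illustration only.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Extra dollars spent per extra unit of health benefit
    (e.g., per quality-adjusted life year, QALY, gained)."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect <= 0:
        raise ValueError("new treatment must add benefit for an ICER")
    return delta_cost / delta_effect

# Hypothetical: a new device costs $60,000 versus $20,000 for standard
# care, and yields 4.0 versus 2.0 QALYs.
ratio = icer(60_000, 20_000, 4.0, 2.0)
print(f"${ratio:,.0f} per QALY gained")  # $20,000 per QALY gained
```

Whether a given ratio is "worth it" is, of course, exactly the kind of societal judgment the text describes, not something the arithmetic can settle.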
COMPUTER AND INFORMATION SCIENCE AND TECHNOLOGY
Biomedical trends help to illustrate a key issue in the computer and information field—privacy. Health care and information technology intersect in a number of ways, but the area of patient records is a particularly noteworthy synergy because our ability to get more data about an individual is growing as rapidly as our ability to store, analyze, and transmit those data. Obviously, the richer the information the health care system can gather about a person’s medical condition,
and the more it can analyze and synthesize that information, the more effectively it can preserve the health of the individual.
However, others could use this same information to the disadvantage of the individual. For example, if a prospective employer or an insurance company were aware of a person’s genetic predisposition to heart disease or some types of cancer, it might refuse to employ the person or to provide coverage.
This hypothetical example raises two major issues. First, because this kind of adverse selection would be applied to a person who has no more than the potential for a certain disease,1 it would significantly extend the kind of adverse selection traditionally applied by insurance companies to people with preexisting conditions. Many would argue that such deselection would compromise an individual’s entitlement to equal protection under the law. Second, if information about a person were obtained from easily accessible electronic records without the person’s permission, it would be a significant violation of his or her privacy.
Both issues are important, but it is the latter—the privacy issue—that is likely to present the toughest policy difficulties during the next decade and to have the most impact on the development and use of information technologies, in areas well beyond health care. Privacy must be understood not so much as the confidentiality of information about an individual as the right of the individual to control information about himself or herself, including being able to decide to keep it confidential or to share it selectively.
At the moment, U.S. law concerning privacy is in a state of flux. There is no one legislative authority, jurisdiction, or law governing privacy. There are both state and federal laws, but only for certain aspects of privacy. There is a law protecting the privacy of an individual’s record of videotape rental, but none covering his or her Web site visits. There is discussion about not allowing a person’s genetic data to be used in reaching decisions about insurance coverage but no similar discussion about using family history for the same purpose. There are regulations concerning the release of data held by the federal government about individuals, but much more limited regulation (at least in the United States) covering what private entities are allowed to do with such data.
It is clear to many people that sharing data about oneself may often be in one’s interest—and the technologies being developed will vastly increase that advantage and the services made available by the sharing of data. Emergency medical information, global positioning systems, personal preferences in books and music, information mining services, entertainment, interest group linkages: all hold out the promise of leveraging technology for personal benefit. However, unless we, as a society, can develop an orderly and rational approach to the protection of privacy (and the limitations of that protection), technology development will be hampered by legal uncertainties, and individuals will see its threats more clearly than its promises.
In reaching some general understanding of the social issue involved, it is useful to frame the problem as a discussion of the appropriate boundary between public and private spaces. Clearly, there should be some rational boundary. People are entitled to control their own lives (and information about themselves), but in return for membership in the society and all that such membership entails, they take on obligations that may place limits on their privacy. That is part of what may be thought of as an implied social contract, given immediacy by the events of September 11.
Unfortunately, our decisions on where that boundary properly belongs have either been based on the political or security exigencies of the moment or been driven by the current state of the relevant technology. For example, as stronger encryption technologies become available to individuals, private space widens because individuals are better able to shield information or transactions from government or other entities. As government signal-capture and computational capacity increase, private space narrows.
Privacy is only one of the many social, political, and legal issues that will affect how Web-based information technologies develop in the next decade. Free speech will conflict with the desire to protect minors and with concerns about hate speech. The historically libertarian nature of the Internet will conflict with the desire of governments to hold Internet hosts or other providers accountable for enforcing local ordinances. The Internet’s openness and its historical and deliberate lack of governance structure will also conflict with the desire of governments to provide consumer protection for those doing business on the Internet, to enforce intellectual property laws, to deal with taxation issues, or to resolve jurisdictional issues.
Because the Internet is global, it is to be expected that issues of harmonization will be very important in both governmental and commercial terms. For example, as the Internet penetrates other societies, particularly those in less-developed nations, the issue of software language will become increasingly important, and it is likely that there will be pressure on almost all software developers to have source code that is compatible with a variety of front-end language systems. TCP/IP is, of course, established throughout the world as the standard protocol suite for packet-switched networking, and it is likely to continue as such through the decade. However, although the routing of messages is, in principle, decentralized, the network hardware architecture is such that most messages actually pass through the United States. That situation is likely to be increasingly uncomfortable for many nations and may bring pressure to diminish the de facto hub dominance of the United States.
THE ENVIRONMENT

Interest in the environment has grown enormously since the United Nations Conference on the Human Environment, held at Stockholm in 1972. What was once a peripheral issue of concern to only a few groups has become a mainstream issue in all countries of the developed world, including the United States, and the list of concerns included in its compass has vastly expanded. Questions remain as to how deeply committed the public is to taking action to protect the environment, but several factors suggest that the next decade will see increasing concern and a deepening commitment to giving environmental protection a high priority.
First, the growing interest has given rise to new scientific investigations (and even relatively new sciences, such as ecology), which have reinforced public concerns. Observational data on the effects of chlorofluorocarbons on stratospheric ozone led in almost record time to an international treaty to eliminate their use and simultaneously stimulated the development of effective substitutes for them. Indeed, work continues on still better materials than the hydrofluorocarbons now being used. The work of the Intergovernmental Panel on Climate Change and the National Academy of Sciences points increasingly to anthropogenic causes of global warming and the seriousness of the consequences.
Second, improved instrumentation has led to orders-of-magnitude increases in our ability to detect trace compounds in the environment. This, in turn, has given rise to pressures to remove those compounds, to prevent them from entering the environment in the first place, or to improve our understanding of dose/response effects for various potential toxins in order to establish a rational basis for setting tolerance limits.
Third, the correlations among gross world product, energy use, and environmental stress create significant tensions between the justifiable desire to improve the income and standard of living of people all over the world and the equally justifiable desire to protect the environment. Some argue that technology is part of the problem, but the committee believes that it is more likely to be part of the solution—and the pressure to make it so will help to guide technology development over the next decade. Technologies will be encouraged that accomplish the following: decrease energy intensity (the energy used per unit of gross product); move us away from fossil fuel-based primary energy production and/or increase the efficiency of primary energy production, transmission, and distribution; reduce the concentrations of greenhouse gases in the atmosphere; cut down on the material flows associated with production; prevent the dispersion of toxic wastes from point sources; reduce the need for water, fertilizers, herbicides, and pesticides in agriculture; or provide for the economical remediation of polluted land and bodies of water.
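Energy intensity, as defined above (energy used per unit of gross product), is a simple ratio, and the direction technology is being pushed is toward driving that ratio down. A minimal sketch of the calculation, using hypothetical national figures (the quantities below are illustrative only):

```python
# Energy intensity = primary energy consumed / gross product.
# A falling ratio means the economy produces more value per unit of energy.
# The figures below are hypothetical, for illustration only.

def energy_intensity(energy_joules, gdp_dollars):
    """Energy used per dollar of gross product (joules per dollar)."""
    return energy_joules / gdp_dollars

# A hypothetical economy in two successive years:
year1 = energy_intensity(100e18, 10e12)   # 100 EJ consumed, $10 trillion GDP
year2 = energy_intensity(102e18, 11e12)   # energy up 2%, output up 10%
print(f"intensity fell {100 * (1 - year2 / year1):.1f}%")
```

The point of the sketch is that intensity can fall even while absolute energy use rises, which is why the text treats decreasing energy intensity and reducing greenhouse gas concentrations as separate goals.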
The increasing pressure to incorporate technologies that protect the environment into all production activities will give large companies a competitive advantage because they are much more able to conduct R&D across a broad spectrum of fields. Small companies, which—as noted earlier—are often the source of innovations, have fewer financial resources and must choose between spending their R&D dollars on current products or on innovation. Thus, absent significant government investment in environmental technologies, the pressure for environmental protection will generally work against small and medium-size companies unless their business is environmental technologies.
On the other hand, raising the standards for environmental protection in the United States—given the ability of the country’s R&D establishment to respond to such a stimulus with increased innovation in environmental technologies— may well give U.S. companies a comparative advantage internationally and lead them to push the government to negotiate the harmonization and tightening of environmental standards throughout the world. Thus, the corporate sector itself may become a force for increased environmental protection.
All of the fields identified in this report have already had significant impact on environmental issues and are likely to figure heavily in future efforts to protect the environment. Computer technology has been a key to the global modeling studies that are confirming society’s contribution to the greenhouse effect. Remote satellite observations have provided an enormous amount of retrospective and prospective data on the consequences of warming and the spread and effect of various pollutants. Advances in sensors as well as in remote imaging have played an important role in monitoring compliance with international treaties such as the treaty governing transboundary air pollution. Microbial approaches to cleaning up oil spills were shown to be effective in a number of highly publicized incidents. New materials have decreased our dependence on natural resources; for example, fiber optics has limited the rate of increase in demand for copper, the mining of which has severe environmental consequences. Strong lightweight materials used in transportation vehicles have helped to limit the increase in energy consumption in that sector.
In the production of primary energy, computer design and control of windmill blades have so lowered the cost of wind-generated energy that it is competitive with fossil fuels in many parts of the country. Stationary fuel cells are also in use and are perhaps the most promising new technology both for primary energy generation and for transportation. Both technologies are likely to grow in importance in the next decade.
This record of success, coupled with the likely increase in emphasis on environmental protection in the next decade, suggests that there will be strong pressures—and rewards—for companies to exploit these fields where scientific knowledge is rapidly changing to develop innovative technologies for protecting the environment.
INTERNATIONAL SECURITY AND THE GLOBAL ECONOMY
In recent decades, the once predominant concept of a linear progression from basic research to innovation has given way to the realization that there is often significant interaction between basic research and technological development. It has become more and more difficult to distinguish basic from applied research; the time frame from new idea to application has been foreshortened; feedback loops have been created, with technological development providing data and insights that inform new rounds of basic research; and cross-fertilization between different fields of science has made it all but impossible to judge what field of research will provide the breakthrough for a desired innovation. In short, the linear model of R&D has given way to a matrix-feedback model.
The shift is important for several reasons. First, the strategic management of R&D has become significantly more difficult, because it is much harder to determine what research needs to be done in order to achieve some technological goal. Second, the clear demarcation between activities whose support is the appropriate responsibility of government and those that should be funded by the corporate sector has blurred, as has the line between precompetitive and competitive research. Third, in a world marked by the globalization of both research and technological development and the linking of economies, nations need to cooperate in some aspects of R&D and compete in others. The patterns of R&D development make it increasingly difficult to say which activities should be cooperative and which competitive.
The international cooperation/competition dichotomy could have important consequences for research because it can lead to either underinvestment or overinvestment in particular research areas. If cooperation amounts to little more than the open availability of new scientific knowledge to the entire global community, nations may assume that they will be able to benefit from advances anywhere in the world and need not make any investment themselves. Economists point out that this free rider phenomenon can lead to global underinvestment in basic research. On the other hand, if competition is the dominant theme even at an early stage in the R&D cycle, overinvestment could result: Nations would compete for early market advantage by embarking separately on research, thereby duplicating one another’s efforts. Since only one nation will ultimately gain the advantage in a particular area, the investments by other nations will have been wasted.
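The underinvestment/overinvestment asymmetry described above can be made concrete with a stylized calculation. The winner-take-all assumption and all figures below are hypothetical, chosen only to illustrate the argument, not drawn from the text:

```python
# Stylized illustration of the R&D cooperation/competition trade-off.
# Assumptions (hypothetical): each of n nations can fund an equivalent
# research program at a fixed cost, and under winner-take-all competition
# only one nation's program ultimately yields a market advantage.

def total_spending(n_nations, program_cost):
    """Total outlay when every nation runs its own parallel program."""
    return n_nations * program_cost

def duplicated_spending(n_nations, program_cost):
    """Outlay by the nations whose parallel effort wins no advantage."""
    return (n_nations - 1) * program_cost

# Five nations each spend $2 billion on parallel programs:
total = total_spending(5, 2.0)        # $10 billion in total
waste = duplicated_spending(5, 2.0)   # $8 billion duplicating one result
print(f"total ${total:.0f}B, of which ${waste:.0f}B is duplicated effort")
```

The free-rider case is the mirror image: if every nation expects to benefit from the one funded program, each has an incentive to spend nothing, and total investment collapses toward zero.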
The growth of scientific and technological capacity throughout the developed world will certainly continue in the next decade, and slow but steady growth can also be expected in certain of the newly industrialized nations. Therefore, we will continually need to find ways to share the costs of R&D and to share technical talent even as we work to achieve our national economic goals.
It seems more likely that, if we err, it will be on the side of failing to exploit
the potential for cooperation. Although the United States continues to be the foremost contributor to the world’s research literature, with about 35 percent of published research papers having American authorship, other countries now contribute the majority. The long experience of this country as the dominant player in R&D has resulted in an underemphasis on developing the language skills and the perspective necessary to learn from others. Added to that problem, political concerns about losing technological advantage by allowing others to ride free on our R&D advances will probably continue to lead to sporadic efforts to protect our scientific and technological knowledge base. This has been manifest in occasional efforts to limit the participation of foreign corporations in consortia or to proscribe the participation of foreign students in government-sponsored research projects. It remains to be seen whether increased efforts to facilitate technology transfer between universities and industry will exacerbate that problem, but it is not unreasonable to expect that they might.
The emergence of “big science” in astronomy, astrophysics, and particle physics has added impetus to international cooperation as a way to share the cost burden, but big science, too, will continue to encounter obstacles, some of which originate with scientists themselves. Since a facility must be sited somewhere, scientists as well as governments are usually eager to have it in their own country for reasons of pride and convenience, and siting it elsewhere is considered a negotiating loss. It will be interesting to see whether advances in information technology in the next decade reduce the value and cachet of a country’s possessing large experimental facilities, encouraging more cooperative big science projects. It will also be interesting to see whether the inability of the U.S. Superconducting Super Collider to attract international sponsorship will lead us to find better ways of seeking international cooperation early in the planning of large facilities in order to create an atmosphere of partnership.
A complication related to this last point is that mechanisms for international financing of research are not well developed, particularly where multiyear efforts are involved. The United States, for example, has no easy way of guaranteeing multiyear support for projects because its budget cycle is annual. Two frequently mentioned solutions are (1) to use treaties to formalize research arrangements since treaties are one of the few ways for the U.S. government to obligate future funds and (2) to have the U.S. government delegate authority and responsibility for multinational research to quasi-governmental institutions.
Finally, there is the question of the relationship between technological development and national security, an issue given great urgency and prominence by the events of September 11. It is a problem made considerably more difficult by the emergence of dual-use technologies and the shift from “spin off” to “spin on” in the relationship between military and civilian technologies.2
Much has been written about the shift from spin off to spin on in technology development. For much of the Cold War, the most sophisticated modern technologies were developed for military applications with support from the government. One of the justifications for the investments was that the technological innovations were ultimately spun off to civilian applications that improved the lives of the general citizenry. The Internet, the Global Positioning System, satellite imagery, composite materials, even Gatorade, came from what were once military applications.
But the very success of these examples of technological diffusion created huge markets and industries that stimulated civilian R&D and led to the current situation, in which the most sophisticated technologies are being developed in the civilian sector and subsequently adapted for military use. This has required a sea change in the culture of the military technology establishment, which is well under way. It has required rethinking the issue of technical specifications to better utilize off-the-shelf technologies. It has somewhat altered the traditional prime contractor system of the Department of Defense. It has led to a heavier reliance on civilian infrastructure in communications. And it has increased reliance on non-American suppliers (along with altered strategies to ensure the stability of supplies under a variety of circumstances).
There is much that is positive about these changes, but the emergence of technologies that can serve both military and civilian purposes—dual-use technologies—has also created difficulties. The global diffusion of technologies for civilian purposes also makes those technologies available for military purposes. The early strategy for dealing with the problem was the negotiation (with our trading partners) of a list of technologies that were not to be exported to designated countries—the so-called COCOM list.3 Heavily represented on the list were computer and other information technologies, but also certain advanced materials and other emerging technologies.
To many in the corporate sector, the restrictions created a disadvantage for U.S. industry, so with the end of the Cold War, the list of prohibited items was renegotiated and considerably reduced in scope. Greater emphasis was placed on restricting the distribution of systems used mainly, if not exclusively, by the military, accompanied by a loosening of restrictions on the underlying technologies. Insofar as this has made it considerably easier to develop global markets in information-related technologies and in advanced materials, the U.S. economy has benefited.
In the current situation, however, the threats the United States and its allies face are less from sophisticated technologies adopted and used against us by state entities than they are from clandestine groups using technologies of modest sophistication. Under these circumstances, distinguishing equipment and technology of military use from those with a legitimate civilian purpose is much more difficult, particularly where biological systems are involved. The fermenters for growing useful microbes and pharmaceuticals look much the same as those for growing dangerous species.
Moreover, the threat posed by the possible misuse of genetic engineering to create highly virulent species is likely to increase public distrust of the use of those same techniques to improve the quality of food, the hardiness of plants, and the yield of crops. Therefore, if the threat of biological terrorism grows in the next decade, the application of genetically modified organisms for other purposes is likely to be severely restricted.
In sum, the challenge posed by dual-use technologies in the next decade is likely to be considerably different from the challenge in the past. The threat remains that sophisticated weaponry technology—for example, long-range, remote-guided missiles carrying nuclear warheads—will be misused; but recent acts of terrorism remind us of the need to develop better defenses against low-tech, poor man’s weapons, including conventional explosives as well as biological and chemical weapons. For the scientific community, the challenge will be to develop approaches for recognizing, preventing, and combating the misuse of these dual-use technologies. Indeed, the ability to do so may turn out to be a necessary element in their widespread adoption.
There have already been some rudimentary efforts in this direction. For example, some experts have studied means of ascertaining the origin of chemical or biological agents to help trace material, others are working on new sensors for detecting recent shifts in the use of fermenters, and political and economic programs have been initiated to minimize the likelihood that underfunded or underemployed scientists in the former Soviet Union will be tempted to cooperate with terrorist groups. More needs to be done, and it is likely that rather than separating military research from civilian research, a dual perspective will become a normal and continuing requirement for those working on a wide range of technological applications.