The Pursuit of Science in the National Interest
Albert Narath, Sandia Corporation
P.A. Fleury and R.J. Eagan
Sandia National Laboratories
The Social Contract
Few would dispute the assertion that science has provided the foundation upon which have been built the extraordinary improvements in the human condition that characterize this century. Unlike the arts, science is thus not merely a reflection of human progress, but rather an essential contributor. It is for this reason that our nation has supported the pursuit of scientific knowledge during the latter half of this century at a sustained level never afforded other intellectual pursuits. Society has always supported those efforts that it perceived to provide commensurate useful return. Unfortunately, public confidence in the benefit of scientific research has recently eroded. The success of foreign competitors in challenging the economic dominance of the United States has made it clear that the correlation between a nation's scientific excellence and its prosperity is not as direct as the public had once believed. Thus, as we approach the turn of the century, the relevance of science and its priority in competing for federal funding have suddenly become the subject of intense debate.
The thesis of this paper is that science will continue to be an essential element in enhancing national security broadly defined to include not only military security, but energy security, environmental integrity, and economic competitiveness, as well. Indeed, the opportunity for science to improve the well-being of our citizens has never been greater. Moreover, the components of national security, defined in this manner, are tightly linked by a science and technology base that has a high degree of commonality. For science to regain its favored status, however, greater accountability will be required from its scientists. Emphasis will have to shift from concern for the health of science to greater concern for the well-being of the nation. Scientists have always been driven by intellectual curiosity and a desire to understand. The motivation to link scientific advances to tangible societal benefits, such as economic growth, must become equally strong. The question is not whether basic science can thrive in such a critical environment—it can!—but rather, how to balance federal support for basic research with more applied, goal-oriented efforts to achieve maximum benefit (i.e., utility) in the national interest, both in the short term and the long term. By “basic science” is meant the generation of new knowledge whose practical value is not predictable in advance. This includes, for example, most aspects of high-energy physics, many topics in the physical and biological sciences, and mathematics but excludes most biomedical and engineering research.
Over the past 40 years, the size and structure of the scientific enterprise have changed dramatically. Prior to World War II, fundamental, knowledge-creating research was done in universities (mostly in Europe) by a relative handful of the intellectually elite. Support for scientific research was scarce, and only the best and most dedicated professors succeeded. World War II brought about a fundamental change in the relationship between science and society. Most notably, the Manhattan Project joined basic and applied science in an unprecedented way and in a very short time achieved spectacular technological results that were highly visible to the public. The intellectual challenge met by scientists and engineers in achieving a nuclear-fission explosion had a profound impact on public attitudes toward science. It was an accomplishment that played a major role in the formulation of a new social contract in which science (physics, in particular) was elevated to a level of very high national priority. Following the war, the United States rapidly expanded its investment in basic science in both the public and private sectors. The new “love affair” with science facilitated the formation of national laboratories, industrial research laboratories (many in attractive, rural locations), and university research parks. These developments occurred without much understanding, however, of the links between effort and resulting utility (investment and return). Rather, a mixture of awe and faith, sustained by breakthrough discoveries that led to other examples of utility (antibiotics, the transistor, lasers, and computers, among others), kept the social contract intact.
Even today, most universities in the United States aspire to be “research universities,” and a large fraction of new science PhDs still aspire to a government-supported career in basic research at a university, at a federal laboratory, or in one of the few industrial laboratories still conducting such research.
The image of science as a sure-fire source of social utility (now expanded to include “high-tech” consumer products, medical diagnosis and treatment, high-speed transportation, and so on) persisted in the public mind for three decades with little questioning. However, by the 1980s several events had brought substantial change. Industry was withdrawing from basic research to reduce costs in the face of growing foreign competition. At the same time, increasing federal budget deficits, social problems, disclosures of environmental abuses, distrust of major institutions (especially the defense establishment), and the rising costs of military systems and mega-science projects were causing a growing, if largely unarticulated, unease with the social contract. The demise of the Soviet Union and the end of the Cold War accelerated this process by opening to scrutiny all forms of government-supported research. Among other consequences, this led many to suggest that our expensive defense research laboratories be dismantled and that the dollars thus saved be redeployed to address pressing near-term social and economic problems.
Finally, the realization that the real threat to national security is an economic struggle in which many foreign nations have achieved startling successes without sizable investments in basic science is causing a critical reassessment of national research priorities. The current situation, in which other countries are converting U.S. scientific discoveries and technological innovations into competitive advantage, is reminiscent of U.S. successes during and after World War II that were based in large part on European science of the 1920s and 1930s. This phenomenon has led to the realization that economic success in a global economy depends more on the speed and efficiency with which a nation transforms scientific and technological advances into superior products and services than on its contributions to fundamental scientific understanding.
Can basic science thrive in an environment of accountability for the utility of research that the public, as represented by the Congress, appears to be demanding? If by “science thriving” we mean automatic federal research support for every PhD in math and science, the answer is certainly “no.” If by thriving we mean sustaining a sufficient base of core curiosity-driven research needed to fuel the engines of social utility on the one hand, and to challenge the intellectual frontiers of understanding the physical universe on the other, the answer is “yes.” But how?
The Nature of Accountability
It is generally true that scientists are motivated principally by discovery and understanding, while society expects demonstrated utility from all its major investments. The usual answer to resolving this tension is to “move science closer to the application,” that is, to utility. But this will only work for the applied sciences, which the purists regard as engineering anyway. If pushed too far, such a shift in basic-science emphasis would distort or destroy the scientific foundations in many areas, particularly those at the forefronts of energy, scale, and complexity. How then can we have both accountability and thriving? Effectiveness and excellence? What is the proper basis for a “new social contract”?
The research discoveries in quantum mechanics and nuclear physics that made possible the bomb and subsequent revolutionary technological advances were not produced on demand in the Manhattan Project. They were the product of an intellectual need to understand the laws that govern the behavior of the physical universe. Nevertheless, society perceived the relationship between the work of brilliant physicists and the “atomic” bomb as continuous and immediate—a single step. Physicists have not done much since to set perceptions right. Thus the answer to the questions posed above becomes clearer by parsing basic-science accountability into two parts: we define these as the “discovery” phase and the “utility” phase. The former is motivated primarily by intellectual curiosity; the latter is motivated by a defined, practical need.
Although not universally recognized, the two-stage process has always existed. In point of fact, neither can remain productive for long without the other. They usually operate concurrently rather than sequentially, and at times utility leads to discovery. In this admittedly simplistic but nonetheless useful model, increasing the return on investment does not mean “moving science closer to application.” Rather, it means concentrating sufficient effort in the utility phase, with appropriate emphasis on areas of highest practical potential, to support national economic objectives, while at the same time safeguarding adequate investments to sustain the national basic-science base.
For the discovery phase of accountability, including pure mathematics and high-energy physics, we should judge effectiveness on the basis of the cost of significant discovery. We already know how to determine whether a basic research effort advances a field significantly. The product is “new knowledge.” We can even judge the value of new knowledge in terms of its intellectual impact on other fields. The basic-science enterprise can surely be held accountable in this conceptually simple way. That this approach is not widely or stringently applied today is evidenced by the large body of derivative research that neither challenges the frontiers of fundamental understanding nor aims at any specific practical utility. It should be noted that new discoveries in some scientific fields, such as atomic physics, materials science, or biochemistry, appear to have greater potential for utility than discoveries in fields like astrophysics or particle physics. It might appear self-evident that this difference should be a significant factor in establishing basic-science priorities. However, such an approach must be followed with considerable caution, because the paths leading from discovery to eventual utility are rarely straight and in no case predictable with certainty.
The utility phase of accountability involves the transformation of discoveries into utility. It is demonstrable success in this phase, as measured, for example, by economic growth and job creation, that will ultimately ensure adequate public support for science as a whole.
The New Social Contract
Selling science to society in a one-step mode is oversimplified and misleading. It has led to unrealistic expectations and widespread disillusionment with “science.” Witness the confusion over the proposed space station as a “science” project and the emergence of the “Jurassic Park” syndrome (i.e., distrust of science and technology). A requirement for science to thrive is a new social contract that supports the discovery phase not only because it is the social responsibility of a wealthy nation to enrich the human spirit and challenge human intellect, but also because it recognizes the critical dependence of continued human progress on a steady flow of new ideas. Curiosity-driven research is the real basis of the new ideas upon which future utility-phase accomplishments will be built. It follows that greater accountability of the appropriate kind in the discovery phase needs to be applied, both within subfields and among subfields of science. A good place to start is for the science community to strongly and unequivocally oppose earmarked or pork barrel projects in the federal research funding portfolio. A second need is for priority setting among different fields of science, a challenge that remains unmet today. Finally, every effort must be made to remove barriers that impede constructive interactions between discovery and utility phases.
A problem in science priority setting has been the absence of clearly understood criteria. Separating accountability into two phases would make it easier to compare “apples” to “apples” (i.e., the potential for frontier-breaking discoveries in one subfield vs. those in another). Of course, this approach immediately raises a more serious question. How should such comparisons for basic research be weighted when combined with the likelihood of converting discoveries into utility? Society obviously needs to consider both factors in any rational decision-making process. The danger of overly conservative, linear thinking in such an assessment is self-evident. This must be guarded against, for it could easily lead to an overly narrow focus on those science subfields that appear to have the strongest near-term connections to utility. History tells us that the most revolutionary practical advances often arise from apparently unrelated discoveries. Similar attention must be paid to continuity of support for, and understanding of, the long-term nature of most scientific research. In short, the unavoidable risk in the proposed prioritization process must be carefully managed. Excessive emphasis on “bottom-line,” short-term accountability could easily destroy the basic-science base, as has already happened in several of U.S. industry's finest R&D laboratories. What is essential is to maintain adequate balance. Since federal support for the basic sciences (i.e., discovery-phase research) amounts to only a modest fraction of total federal R&D investments, it should be possible to sustain a broad-based program.
Ambiguities in the assignment of responsibility for basic-science support among several government agencies also need to be clarified. The need for “accountability” should extend to the federal agencies themselves. Responsibilities for the support of basic science have not always been clearly delegated. Even the National Science Foundation—arguably the most appropriate supporter of basic-science research at U.S. universities—has seen its missions expanded without the resources to address them fully. Government policies to encourage regrowth of our industrial research base are also sorely needed. As difficult as these problems are, they appear manageable. The increased candor and understanding between scientists and society that would result would put science support on a more sustainable foundation.
The much larger challenge lies in devising and improving accountability for the second phase—the successful blending of scientific discovery and technological innovation in support of national needs, on a time scale set by global competitive forces. Science investments aimed at this broad objective must strive for an appropriate balance among a number of important areas competing for federal support. These include:
Development of a sound base of knowledge to provide rational guidance for regulatory and legislative actions in contentious areas (e.g., global warming);
Stimulation of technological breakthroughs leading to new “high-tech” industries;
Establishment of an information infrastructure to assist small- to medium-sized enterprises in acquiring advanced technologies; and
Creation of cost-shared technology partnerships with established major industries.
Partnerships addressing major, national technology goals that are market driven, industry defined, and precompetitive in nature appear especially promising. The problems we face today are too complex for any one sector of society to solve in isolation. What is called for is a greater commitment to enhanced communication and cooperation and the building of bridges between different sectors. But can industries, universities, and government agencies and their laboratories provide answers collectively to problems involving complex sets of technologies, systems, and policies? The answer is not at all obvious, although it is clear that more effective mechanisms of public and private partnership are needed in which industry, federal laboratories, and universities can leverage existing resources to speed the movement of knowledge from research discovery into practical applications.
This is not a surprising or new idea. But what might be surprising is that such technology bridges already exist—for example, through joint R&D activities between many federal laboratories and high-technology industries. We are also beginning to see the emergence of large-scale industrial alliances based on precompetitive technology road maps. Examples include the semiconductor industry, textile industry, auto industry, and others. Realizing the full potential of federal participation in addressing such national technology goals will, however, require an unprecedented degree of intra- and inter-agency cooperation that has yet to be demonstrated and that has not before been deemed necessary or even desirable. However, there are encouraging signs, such as the early accomplishments of the High Performance Computing and Communications initiative. Enhanced cooperation at the more fundamental scientific level needs to be encouraged, as well. Teaming (including active international cooperation in large-scale, capital-intensive, basic-science ventures) will deliver greater return on public investment.
Past practices have had the effect of isolating research from the customer—U.S. industry—especially in those cases where universities and federally funded laboratories were the predominant performers of research. The transformation from discovery to utility has tended to be a stepwise, sequential process (research to development to application) rather than a continuous, integrated process. It is not surprising, therefore, that product realization has often been substantially slower in the United States than in countries that emphasize greater vertical integration in their product realization efforts.
There has been a recent trend toward more effective coupling of research to applications, especially in the federally funded laboratories. Many laboratory-industry partnerships now recognize that effective product realization involves continuous feedback. However, this process is still not balanced. When it is balanced, the strengths of research, development, and application are similar, the linkages among the three elements are strong, and the flow is bidirectional. Ideally, the coupling is so strong that simultaneous, complementary operations occur, as in concurrent engineering.
Generalizing the “Utility” Assurance Paradigm
We have sketched in some detail the elements, relationships, and accountabilities pertaining to utility as defined earlier. With some variations, other important “utilities” supporting different societal goals, such as improved health, increased personal safety, restoration of the environment, and preservation of species, can be brought within the same accountability framework across the entire spectrum of science-based advances.
Only by invoking the accountability chain from scientific discovery to utility, and strengthening the linkages along that chain, can the scientific community expect a generous and sustainable social contract. As in any chain, every link is important. Without a broad, curiosity-driven, excellence-demanding, and affordably balanced basic-science base, there will eventually be no discoveries to transform into utility. Yet without improved efficiency and accountability in transforming discoveries into utility, the tangible societal benefits will be insufficient to argue convincingly for sustained support of science at levels much higher than our society has traditionally deemed appropriate for purely intellectual endeavors like poetry or music.
The simple, inescapable truth is that science needs society in order to thrive, and society needs science if its dreams of a better world for future generations are to be realized.