
1
The Purpose of the Study

The U.S. federal government supports scientific and technological research to address a broad range of national needs and objectives and to gain fundamental understanding of the processes that shape the world in which people live. Each federal science agency promotes scientific progress toward these objectives in the areas of its mission responsibilities. This is done most obviously by providing funds for research and its infrastructure, including the education and training of succeeding cohorts of researchers; by organizing and setting rules for the external groups that advise on worthy research investments; and by setting research priorities and making choices among specific research programs and projects. Each agency also does so by recruiting, training, and evaluating research managers for their scientific expertise and managerial skills. Agencies redirect support among lines of research when opportunities arise to open new and exciting paths to knowledge and societal benefit, when changes occur in the relative importance of the results from past investments in research, and when specific lines of inquiry or modes of research support are deemed no longer to be productive.

Historically, federal government support has been instrumental in the development of important new fields of science and technology, such as materials science and computer science. Less well understood is how closely the rise or demise of a research field may be tied to federal support. At any point in time, numerous research fields are in an embryonic state and are potential candidates for further maturation. Not all flourish, however, beyond the involvement of small cadres of researchers subsumed within larger subspecialties and disciplines. Support from federal government agencies can
make the difference between development and stagnation for embryonic or fledgling research fields. Priority-setting decisions by federal science agencies thus affect the vitality of existing fields of research, although the strength of this effect is not well known.

Because society has limited resources to support scientific activities, assessing scientific progress and setting priorities are perennial practical components of national science policy decision making. Perennial questions arise, too, about matching agency funding practices with the conditions perceived to be most likely to lead to program or project success in terms of contributions to scientific knowledge and societal objectives. Which objectives most deserve support for science—defense, economic competitiveness, energy, health, the environment, or some other? How should funds be distributed across agencies? Which fields of science most require or merit public-sector support? And which modes of research support are most productive? These questions raise issues of outcomes—assessing the likely benefits1 of the scientific work being supported—and processes—ensuring that decisions are made in ways that satisfy the criteria of equity, transparency, expertise, and accountability normally demanded of public decisions in a democratic society.

Concerns about the processes and outcomes of priority setting in U.S. government research agencies have intensified in recent years because of a confluence of several clusters of influences. One of these relates to increased demands for accountability and documented performance throughout government, extending across all functional activities, including support of research. For federal government agencies, these demands are most visible in the requirements of the Government Performance and Results Act (GPRA) of 1993 and the Office of Management and Budget’s Program Assessment Rating Tool (PART) process. Demands for accountability, amounting in some accounts to an “audit explosion” (Power, 1997), have also become more acute because of a recent deceleration or reversal of research budget growth across many functional areas of federal government activities, except for defense and homeland security. In the specific case of the National Institutes of Health (NIH), the recent era of budget doubling has clearly ended. The success rates of research grant applications at NIH have declined rapidly over the past several years: for new applications, from around 20 percent in 1999-2002 to 9 percent in 2005; for renewal applications, from around 50 percent to 32.4 percent in the same period (Mandel and Vesell, 2006).

An important source of the renewed interest in reexamining agency priority-setting criteria and processes has been the efforts of federal agencies and their division directors and other managers to continuously improve the quality of their programs. In their roles as professional science managers, agency officials seek to direct agency resources to support
missions most effectively and efficiently. In doing this, they must simultaneously work to satisfy the priorities expressed by the administration and Congress via the budget process, adhere to the related legislative and administrative requirements on the allocation and expenditure of federal research support, respond to the communities of researchers currently active in relevant fields of inquiry, and in periods of scientific transformation attract and nurture researchers whose work connects to their agency’s mission—all this while being responsive to often unpredictable changes in the potential for progress along different paths of scientific inquiry. Science managers do this differently depending on the responsibilities, powers, and activities associated with their positions in their agencies (Seidman, 1998).2

Related to both these influences are increased demands for evidence-based decision making and particularly for decision making based on quantitative evidence. These demands are contained in the provisions of GPRA and PART. They also appear implicitly in the call by the president’s science adviser and the director of the Office of Science and Technology Policy, John Marburger (2005), for a “social science of science policy” that would, among other things, use econometrics and other social science methods to help examine the effectiveness of federal investments in science (American Sociological Association, 2006).

Yet another stimulus for renewed attention to priority setting has been the beliefs, latent in recent assessments of the state of U.S. science, that the decision processes of science agencies are unduly conservative in program and project selection and that they fall short in converting research findings into usable and useful applications. When coupled, these two propositions imply that the national investment in scientific research is not yielding its expected returns in improvements in the quality of life and suggest that the United States may lose its preeminence in world science.

THE COMMITTEE’S CHARGE

This national context sets the framework for this report, which responds to the specific needs of the Behavioral and Social Research Program (BSR) of the National Institute on Aging (NIA) to assess the progress and prospects of behavioral and social science research on the processes of aging at both the individual and societal levels. Specifically, BSR asked the National Academies to organize this study with two major goals: “to explore methodologies for assessing the progress and vitality of areas of behavioral and social science research on aging … [and] to identify the factors that contribute to the likelihood of discoveries in areas of aging research” (National Academies’ proposal to NIA, 2003).

Contained within these two major goals are several specific questions and subthemes, including the following: Given increasing pressures for
accountability and for research to have broader impacts, widening choices of research areas to support, and the resulting increased competition for funds, what information can research managers rely upon to guide their allocations of research resources? How can they more effectively advance scientific disciplines and other research areas and make important discoveries more likely? Can we measure or at least compare the progress in different disciplines and research areas? Can the vitality of research areas be defined and assessed? What indicators for fields, as well as for individuals, would be useful? Can progress be effectively tracked through discoveries? How would discoveries be determined and selected for this purpose? BSR has requested advice on methods for the retrospective assessment of scientific progress and for addressing the prospective problem of priority setting to promote the future progress of areas of research.3

We have addressed the two major issues for this study by reviewing and assessing relevant literatures and techniques. We also commissioned a special pilot project to assess the validity and feasibility of newly developed bibliometric methods for assessing research progress.
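
To give a concrete sense of what a bibliometric method involves in practice, the sketch below computes one common type of indicator, a field-normalized citation score for a set of publications. It is illustrative only: the publication records, field baselines, and citation window are hypothetical, and the sketch is not the methodology used in the committee’s pilot project.

```python
# Illustrative sketch only: a simple field-normalized citation indicator.
# This is not the committee's pilot methodology; all names, baselines, and
# records below are hypothetical.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Publication:
    field: str      # e.g., a BSR-supported research area
    year: int
    citations: int  # citations received within a fixed window after publication


# Hypothetical expected citation rates per field and publication year, used as
# the normalization baseline (in the spirit of field-normalized indicators).
FIELD_BASELINE = {
    ("cognitive aging", 2003): 6.2,
    ("cognitive aging", 2004): 5.8,
    ("demography of aging", 2003): 4.1,
    ("demography of aging", 2004): 3.9,
}


def normalized_citation_score(pubs):
    """Mean ratio of actual citations to the field-and-year baseline.

    A score of 1.0 means the publication set is cited at the baseline rate;
    higher values indicate above-baseline citation impact.
    """
    ratios = [
        p.citations / FIELD_BASELINE[(p.field, p.year)]
        for p in pubs
        if (p.field, p.year) in FIELD_BASELINE
    ]
    return mean(ratios) if ratios else 0.0


portfolio = [
    Publication("cognitive aging", 2003, 12),
    Publication("cognitive aging", 2004, 3),
    Publication("demography of aging", 2003, 8),
]
print(f"Normalized citation score: {normalized_citation_score(portfolio):.2f}")
```

Even this simple measure depends on contestable choices, such as how fields are delineated, which citation window is used, and what baseline is appropriate, which is one reason the report treats such indicators as inputs to expert judgment rather than substitutes for it.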

This study is being conducted at a time of considerable ferment and disagreement about the optimal portfolio of research funding mechanisms. In a stylized (and at times unduly polarized) manner, the choice is presented as between single investigator-initiated, discipline-based proposals and multidisciplinary, team-based proposals prepared in response to an agency request, referred to occasionally as Mode I and Mode II forms of research (Gibbons et al., 1994). As detailed in the following chapters, endorsements of the merits of each approach (and implicit or explicit criticism of other funding approaches), as well as advocacy of numerous intermediate arrangements, are easy to find in current statements on science policy. Systematic empirical work that would permit more evidence-based assessments of policy options, however, is not easy to find, either generally across areas of federally funded research or specifically with respect to aging research.

Both conceptually and empirically, the two components of our charge overlap. For example, to determine the factors that contribute to scientific discoveries, one must rely on the same set of methods (e.g., peer review, bibliometrics) that are used for assessing the progress and potential of these fields and for informing research policy decisions about them. We have therefore collapsed much of the discussion of both elements of the charge into the treatment of assessment methodologies, while separately discussing bodies of knowledge specifically addressed to factors deemed to contribute to scientific advances, especially as they may pertain to research on aging. As those discussions make clear, there are significant gaps and uncertainties in knowledge about the factors that contribute to scientific advances, so that considerable interpretation and judgment are necessary in evaluating the past progress of science or projecting future prospects.

TECHNIQUES AND PROCESSES FOR SCIENCE ASSESSMENT

Interest in methods for setting science priorities on the part of Congress, executive branch units, and agency science program managers has a long history (early contributions include Scherer, 1965; Nelson, 1959; National Academy of Sciences, 1965; Rettig et al., 1974; U.S. Congress, Office of Technology Assessment, 1986; a more recent effort is National Science and Technology Council, 1996). Interest persists because of the continued salience of the underlying questions and a widespread belief that, for all their sophistication, the existing studies do not fully satisfy the needs of science program managers for reliable and defensible methods for making priority-setting decisions.

Interest in identifying the conditions that lead to advances in the social sciences also has a long, distinguished pedigree. Antecedents may be found in the 1933 President’s Research Committee on Social Trends (see Gerstein, 1986; Smelser, 1986). A more recent inquiry along these lines was the work of Deutsch et al. (1971), who identified and analyzed the conditions that underlay 62 “major advances in the social sciences.” This work stimulated a continuing line of inquiry that has sought to distill the relative influence on the conduct of research of such factors as whether the research was conducted by individuals or teams; whether it focused on theory, method, or empirical study; the age of the researcher; and the use of capital equipment. This line of research has confronted but not resolved important methodological and conceptual issues, such as how to select “advances” for study and how to distinguish between “discovery” and “application” (see Smelser, 2005). Research on the conditions for scientific advances thus encompasses a large and diverse range of inquiry into the organization and performance of scientific endeavors.

The questions raised by our charge arise across the sciences. For example, many of the questions posed by BSR also arise in the conduct of industrially funded research, have been posed by industrial research and development managers (Industrial Research Institute, 1999), and are the subject of an extensive literature on research and development portfolio selection (e.g., Bretschneider, 1993). These questions are also commonplace in the science priority-setting processes of other countries, as evidenced prominently in the Foresight and related activities (described below) that enter into the formulation of the European Union’s Framework programs (Pichler, 2006). To review all the literature on these questions across the sciences and internationally would widen the scope of our inquiry to unmanageable dimensions.

Restricting our search only to literature dealing explicitly with behavioral and social science research on aging, however, would unacceptably narrow the scope of this study. If we used the narrower approach, we would fail to take advantage of research in other areas of science that has lessons
to offer. Thus, we have used our judgment in selecting sources of knowledge that seem to provide useful insight for the tasks facing BSR, providing passing coverage of some and omitting others we deemed less germane. We have concentrated on some widely applicable techniques for assessing the past performance and progress of scientific fields and the prospects for scientific progress, and on frequently discussed variants or alternatives to these techniques. We have also conducted a pilot study using some new and promising bibliometric techniques, customized to correspond to substantive areas of research supported by BSR.

This work leads to the conclusion that all available techniques for assessing the progress and prospects of scientific fields embody significant uncertainties and will continue to do so for the foreseeable future. By itself, this is neither a novel nor a surprising conclusion. It reaffirms similar conclusions offered by both older and more recent undertakings (e.g., National Research Council, 2005c). Similarly, our review of studies of the factors likely to contribute to scientific discoveries reveals broad consensus about major influences—“adequate funding,” for example—but uncertainty and indeed disagreement about their programmatic implications. For example, many sources point to the importance of interdisciplinary teams working in an open “collegial” environment. However, more systematic research is needed before it can be concluded that these lessons apply to the conduct of behavioral and social science research on aging.

Given the inconclusive and open-ended nature of current knowledge, the key practical problem for BSR is how to make wise choices when even the best techniques of analysis give uncertain information. BSR, and most likely other parts of NIH and other federal science agencies, need to establish processes for considering, interpreting, and using assessments offered by different parties—congressional and executive branch decision makers, researchers, user communities, program managers—of the potential contributions of different fields of science toward mission objectives. Good processes can integrate improved analytic techniques as they appear, while ensuring that imperfect measures do not trump good judgment.

Working from a focus on process, we propose a strategy that BSR can use for assessing and comparing the value of research across areas of inquiry. The strategy uses quantitative measures, indicators, and the like to inform judgment rather than to replace it.4 It treats analytic techniques, including the application of indicators, as useful for disciplining the judgments of expert groups and focusing their deliberations, but it emphasizes the essential contribution of expert deliberation for interpreting quantitative information and informing strategic decisions about research policy. While recognizing the limitations of existing and emerging methodologies, it sees value in experimenting with promising techniques.
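
As a rough, hypothetical illustration of how indicators might inform deliberation rather than replace it, the sketch below standardizes a quantitative indicator and a panel rating on a common scale and flags the research areas where the two diverge most; those areas would be candidates for focused discussion, not automatic reranking. The area names, scores, and threshold are invented for illustration.

```python
# Hypothetical sketch: indicators focus expert deliberation; they do not rank
# research areas automatically. All area names, values, and the threshold are
# invented for illustration.
from statistics import mean, pstdev

# A quantitative indicator per research area (e.g., a normalized citation score).
indicator_score = {
    "cognitive aging": 1.5,
    "economics of retirement": 0.9,
    "social networks and health": 1.1,
}

# Mean expert-panel rating of each area's prospects, on a 1-to-5 scale.
panel_rating = {
    "cognitive aging": 3.2,
    "economics of retirement": 4.4,
    "social networks and health": 3.3,
}


def standardize(scores):
    """Convert raw scores to z-scores so different scales can be compared."""
    m, s = mean(scores.values()), pstdev(scores.values()) or 1.0
    return {area: (v - m) / s for area, v in scores.items()}


def flag_for_deliberation(indicator, panel, threshold=1.0):
    """Return areas where the indicator and the panel diverge by more than
    `threshold` standard deviations; these are queued for discussion."""
    zi, zp = standardize(indicator), standardize(panel)
    return [area for area in indicator if abs(zi[area] - zp[area]) > threshold]


print(flag_for_deliberation(indicator_score, panel_rating))
# The flagged areas get focused deliberation; neither number decides by itself.
```

The point of such a screen is procedural: it directs expert attention to the places where the numbers and the judgments disagree, which is where deliberation has the most to add.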

We propose this strategy in recognition that an assessment strategy
should address both the internal needs for decision making—the agency’s specific mission and the methods it finds acceptable for formulating priorities and assessing progress—and the decision context. Context refers to the structures and procedures of the NIH, NIA, and BSR within which decisions are made and to the distribution of decision-making authority and influence among various actors, including program managers, other decision-making entities in the agency, and extramural scientific bodies and policy actors.5

Among the key contextual factors for BSR is the paradigmatic use of peer review and expert judgment mechanisms in NIH, as in other federal science agencies, such as the National Science Foundation (NSF) and the Department of Energy. The context also includes the accepted structures for priority setting and proposal selection in NIH (see Chapter 2). Over time, what is learned from this study and others with similar objectives (e.g., National Research Council, 2005c) may lead to broader understanding of the generic issues that may be useful for assessing science and setting research priorities in other domains and organizational contexts as well.

The tasks for this study involve both prospective and retrospective assessment. Priority setting has a prospective focus. Working in a decision-making context shaped by legislative and executive branch mandates, budget allocations, political imperatives, stakeholder interests, and inputs from the affected scientific communities, agency program managers consider how best to distribute their programs’ available resources among many possible lines of science to maximize attainment of the program’s goals, such as the advancement of scientific knowledge and human well-being.

Assessments are usually retrospective. They may be conducted for summative purposes—that is, to determine how well past funds have been spent—and for formative purposes—to generate lessons learned for future decisions. Thus, retrospective assessments can affect the allocation of future funds across research fields, types of funding mechanisms (such as between individual investigator awards and multidisciplinary centers), and types of recipients (individuals or organizations).

The connection of the past with the future is not always linear or predictive. This is especially the case during periods such as the present, when there is a widespread consensus in NIH, as indicated by its Roadmap initiatives (http://nihroadmap.nih.gov/initiatives.asp), and in other federal science agencies, as illustrated by the National Science Board’s 2020 Vision for the National Science Foundation (http://www.nsf.gov/pubs/2006/nsb05142/nsb05142.pdf), that transformational changes are occurring in the relationships among scientific and technological fields, and when increased attention is being given to the need to translate research findings into techniques, methods, and policies that enhance human health and well-being.

In this report, we propose a strategy for assessing the progress and prospects of science that embeds analytic techniques in a structured deliberative
process. We think this strategy will make sense both to science managers and to working scientists involved in BSR’s domain of responsibility and that it will allow discrepancies in judgment among individuals or groups to be deliberated in a more informed way than in the past. Over time, with examination and reflection on how the strategy is used in BSR’s advisory processes, it will be possible to continue to improve practice.

GOALS OF THE STUDY

The BSR Program of the NIA is the lead federal program assigned the mission of supporting behavioral and social science research related to aging. As described on the NIA web site (http://www.nia.nih.gov/ResearchInformation/ExtramuralPrograms/BehavioralAndSocialResearch/), BSR focuses its research support on the following topics:

  • How people change during the adult lifespan

  • Interrelationships between older people and social institutions

  • The societal impact of the changing age composition of the population

BSR support has emphasized “(1) the dynamic interplay between individuals’ aging; (2) their changing biomedical, social, and physical environments; and (3) multilevel interactions among psychological, physiological, social, and cultural levels.” In pursuit of its objectives, “BSR supports research, training, and the development of research resources and methodologies to produce a scientific knowledge base for maximizing active life and health expectancy. This knowledge base is required for informed and effective public policy, professional practice, and everyday life. BSR also encourages the translation of behavioral and social research into practical applications.” NIA expends the bulk of its funds on grants and contracts.

BSR is seeking to address the challenges of research assessment and priority setting explicitly and systematically. It seeks to develop valid and defensible procedures for making judgments about the progress and prospects of the scientific activities it supports at the level of lines or areas of research. It seeks to identify the factors that contribute to discovery so as to have a firmer basis for allocating and reallocating funding across types of funding instruments and types of recipients (e.g., grants for research projects versus programs; grants to individuals versus research groups; disciplinary versus interdisciplinary research teams).

It seeks improved procedures for assessing scientific progress and prospects and firmer rationales for allocating incremental research funds across areas on a basis other than a percentage-based formula and, as appropriate, for reallocating research funds from one area to another. By requesting this
study, BSR has offered itself as a test bed for addressing important generic priority-setting questions that arise in many areas of federal government science policy. One of these is how best to assess the performance of investments in science when some of the objectives of those investments are hard to quantify (e.g., improving knowledge, the quality of policy decisions, or human well-being). Another is how to compare the performance of different kinds of investments when the sponsoring agency has multiple goals and different lines of research contribute to different goals.

A third is how to assess the progress and prospects of scientific fields that differ systematically in their basic objectives, methods, and philosophical underpinnings. The social and behavioral sciences exemplify this issue well. Despite much-discussed trends toward consilience across fields of science and convergence and cross-fertilization among the behavioral and social sciences (e.g., behavioral economics), significant differences in philosophical underpinnings and methodologies remain among and even within these disciplines (see, e.g., Furner, 1975; Ross, 2003; Ash, 2003; Stigler, 1999).6 These differences underlie the historical division of the behavioral and social sciences into disciplines and subdisciplines, are unlikely to be easily resolved, and serve as the basis for competitive claims on the support provided by research sponsors, such as BSR.7

A fourth issue is the effects of priority-setting decisions by major research funding organizations on the competition among disciplines and departments in the contemporary American research university. Assessments of scientific fields at times become enmeshed in disciplinary rivalries. Indeed, our assessment highlights the challenge to BSR of disengaging its problem or mission focus on aging from the claims of different academic disciplines to “own” a particular facet of research on aging. The progress of disciplines, however measured, does not automatically translate into progress in the kinds of areas of inquiry of greatest interest to BSR or similarly mission-oriented science programs. In this report, we use such terms as research “areas” or “fields” flexibly to refer to topics or lines of inquiry that may be as appropriately defined by a problem as by a discipline or subdiscipline.

The questions that BSR is asking, especially about the comparisons among the several areas of behavioral and social science research it supports, have received surprisingly little systematic attention. Research agencies often engage in serious efforts at priority setting, but comparative assessments of lines of research within or across scientific fields are usually approached indirectly or implicitly. For example, the National Research Council has often been asked to advise federal agencies on criteria for making such assessments (e.g., Institute of Medicine, 1998, 2004; Committee on Science, Engineering, and Public Policy, 2004; National Research Council, 2005c) or to identify priority areas for research from among a broad range of possibilities in many disciplines (e.g., Institute of Medicine, 1991; National
Research Council, 2001b). The typical method for providing an answer involves creating an expert group and asking it, often after considering input solicited from the relevant research communities, to deliberate on the question at hand and arrive at a consensus judgment that is advisory to the relevant decision makers. Only occasionally have such groups been self-conscious about developing and applying explicit methods for comparing fields so as to set priorities among them (e.g., National Research Council, 2005a, 2005c).

Scholarly work on the assessment of science and the operation of scientific advisory panels has focused on somewhat different questions. For example, there has been considerable empirical research on the process of review for individual research proposals (e.g., Cole, Rubin, and Cole, 1978; Cole and Cole, 1981; Cole, Cole, and Simon, 1981; Abrams, 1991; Blank, 1991; Wessely, 1996; Lamont and Mallard, 2005), and some studies aimed at comparing larger scale activities of a single type, such as graduate departments in the same field (e.g., National Research Council, 2003) or research enterprises in a single field but in different countries (e.g., Committee on Science, Engineering, and Public Policy, 2000). Scientists and science policy analysts do sometimes make comparisons among research fields, but seldom in ways that would provide validated decision techniques to a research program manager. Members of scientific communities sometimes disagree about federal agency research priorities, as evidenced by disagreements concerning the budgetary priority that should be accorded to the Superconducting Super Collider, the relative emphasis in energy research between discovering new fuel sources and improving energy-saving technologies, and the relative priority of manned and unmanned space exploration. However, research communities typically do not try to resolve such disagreements by applying formal assessment methodologies, such as those of benefit-cost or decision analysis. When challenges are posed to the intellectual substance or vitality of lines of research, they typically are directed at newly emerging ones, particularly those whose conceptual or methodological underpinnings deviate markedly from mainstream fields—and they are focused on attributes of the field in question rather than on techniques for comparison.

One interesting recent exception to these observations is empirical research that is beginning to investigate the characteristics of “successful” interdisciplinary research programs in ways that could help build a knowledge base to inform systematic comparisons of substantively dissimilar activities or organizations (e.g., Hollingsworth, 2003; Mansilla and Gardner, 2004; National Research Council, 2005b; Bruun et al., 2005; Boix-Mansilla et al., 2006; Feller, 2006). Relatedly, as federal science agencies actively promote interdisciplinary research initiatives, as in the NIH Roadmap, they are beginning to experiment with new procedures for making comparative assessments of the quality of proposals from different fields,
including more deliberate attention to establishing review panels composed of experts from different disciplines (Boix-Mansilla et al., 2006). Although not directly intended as a means of assessing the scientific vitality of different fields or their projected contribution to important societal objectives, the deliberations and conclusions of such panels may provide insights into how to make comparative assessments across fields.

BSR is seeking more systematic methods for such assessments, in part because of a judgment that its interdisciplinary advisory panels have not produced comparative assessments that differentiate among research fields according to the likelihood of returns from research investments. When such differentiation is needed, BSR wants valid ways to justify its recommendations about program priorities and proposal selections to senior NIH officials, Congress, and affected stakeholder and research communities.

The primary focus of this report is on questions of comparative assessment at the level of areas or fields of scientific research. It is not concerned with the overall assessment of the BSR research portfolio in the larger context of NIA or other NIH institutes. Nor is it concerned with comparisons among individuals, research projects, or university programs, even though some of the methods we discuss have been applied at these levels of analysis. Also, the report’s focus is primarily on behavioral and social research, though its analysis and conclusions may be applicable to research in other sciences. Finally, the report’s focus is on the needs of an agency whose mission includes both the advancement of basic scientific knowledge and its application to a particular social goal: to improve the health and well-being of older people. An agency with such a twofold mission faces a more complex assessment problem than one whose mission is restricted either to pure science or to specific practical applications of science. In keeping with NIH’s overall mission and traditions, it needs both to adhere to and advance standards of the highest scientific merit and to assess fields of science, existing and embryonic, for their potential contributions to NIH’s overarching missions.

ORGANIZATION OF THE REPORT

This study has been asked to address several interrelated questions. We have centered our endeavors on what we consider to be the core questions, which concern ways to make defensible assessments of the progress and prospects of areas of scientific research for the purpose of setting priorities among public investments in science within the mission and organizational settings in which BSR functions. In addressing these core questions, we have addressed the remaining questions either explicitly or implicitly. In keeping with our emphasis on context, we begin with BSR’s activities.

Chapter 2 considers the BSR Program at NIA. It describes the strategic goals for research in that program and the parent institute, shows the kinds of research investments that have been made, and describes the ways in which the portfolio of research investments is currently evaluated.

Chapter 3 examines what is at stake in research assessment. It briefly reviews the history of federal science priority setting and the debates over priority setting and science assessment, focusing particularly on how concerns with accountability have supported pressures for quantification and the consequent debate over the strengths and limitations of quantitative and other methods for science assessment, particularly traditional peer review. Finally, it addresses the important question of the balance of power and influence between scientists and managers that underlies debates over quantification.

Chapter 4 presents an overview of theories of scientific progress, with special attention paid to the generic problem of the comparative assessment of research fields. It considers what is known about the nature and processes of scientific progress and about the links from scientific progress to societal benefit, the variety of kinds of progress that science makes, the factors that contribute to scientific discovery, and the implications of each of the above for priority setting among scientific fields.

Chapter 5 examines the major methods available for assessing scientific progress in a general framework that distinguishes methods that emphasize the use of quantitative measures (analytic techniques, such as the use of bibliometric indicators and the application of decision analysis), methods that rely heavily on deliberation in groups of experts (e.g., traditional peer review), and those that explicitly combine analysis and deliberation. It considers each method and the three general strategies in the context of NIA’s objectives, the needs for accountability and rational decision making in science policy, and current knowledge about how science progresses.

Chapter 6 presents the committee’s findings and recommendations. It describes our recommended strategy for assessing and comparing the progress and prospects of scientific fields and our specific recommendations for implementing that strategy for assessing the fields of behavioral and social science research supported by NIA.

NOTES

  

1. Benefits are usually considered in terms of two main kinds of values: expanded knowledge and societal gain. These values are made explicit in proposal review criteria. For example, the NSF identifies two review criteria: intellectual merit (e.g., importance to advancing knowledge and understanding, exploration of creative and original concepts) and broader impacts (e.g., promoting teaching, training, and learning; broadening the participation of underrepresented groups; enhancing the infrastructure for research and education; and benefiting society). The NIH lists five criteria for evaluating applications: significance (e.g.,
importance of the problem, likely effects on scientific knowledge or clinical practice), approach (e.g., adequacy of conceptual framework, research design), innovation (e.g., originality, challenging existing paradigms, testing innovative hypotheses), investigators (their training and suitability), and environment (suitability of the scientific environment for success) (see http://grants.nih.gov/grants/guide/notice-files/NOT-OD-05-002.html). Among these five, the benefits are listed under significance and innovation. In science policy, benefits are also judged against costs, that is, against alternative uses of the funds, and the efficiency and effectiveness with which funds are used.

  

2. We use “science manager” as a generic term to cover a variety of positions and titles found across federal agencies. Generally, these positions include responsibility for developing intra-agency program and budget plans; maintaining contact with relevant scientific communities; overseeing proposal review and selection processes; endorsing, modifying, or rejecting recommendations made by proposal review panels and justifying these choices to higher organizational levels; and identifying research initiatives.

  

3. The latter request is a perennial of U.S. science policy. Four decades ago, one of the two questions posed by the U.S. Congress to the National Academy of Sciences (National Academy of Sciences, 1965:1) read as follows: “What judgment can be reached on the balance of support now being given by the Federal Government to various fields of scientific endeavor, and on adjustments that should be considered, either within existing levels of overall support or under conditions of increased or decreased overall support?”

  

4. We accept van Raan’s (2004:22) definition of an indicator as “the result of a specific mathematical operation with data” designed to serve the purposes of (a) describing “the recent past in such a way that … can guide us, can inform us about the near future” and (b) contributing to testing “aspects of theories and models of scientific development and its interaction with society.”

  

5. The critical role of context in science policy decision making was expressed concisely by Harvey Brooks (1965:99), as follows: “criteria are considerably less important than who applies them …. [T]he fundamental problem of resource allocation within basic research is who makes the important decisions and how they are made.”

  

6. Deep philosophical differences exist even within single social science disciplines. Lamont (2004:8) has observed with reference to sociology that it “produces different types of knowledge … and that this diversity should be acknowledged in our definition of theoretical growth or vitality. To order sociological contributions within a single hierarchy or paradigm, as economists do … would be to weaken it by underestimating the contributions of its various strands…. It also would place our discipline very low on the totem pole of fields, which to my view would grossly misrepresent the many contributions of our paradigmatic discipline.”

  

7. For all the interest expressed by behavioral and social scientists in having a secure and stable home in NIH for basic behavioral science research and training, these communities have expressed little interest in changing the structure or functioning of existing basic behavioral and social science research programs across institutes (Association for Psychological Science, 2005).
