

5 Measuring Research Impacts and Quality
Pages 73-98

The Chapter Skim interface presents what has been algorithmically identified as the most significant passage from each page of the chapter.


From page 73...
... • Universities often use metrics to make the case for annual budgets without infrastructure to analyze research outcomes over time. Alternative measures focus on presenting the income earned from and expenditures devoted to technology transfer activities, tracking invention disclosures, reporting on equity investments, and tracking reimbursement of legal fees.
From page 74...
... Among the most useful was a recent study (Guthrie et al., 2013) by the RAND Corporation, Measuring Research: A Guide to Research Evaluation Frameworks and Tools, which is summarized in Appendix C and cited frequently in Chapter 4.
From page 75...
... notes additional barriers to assessing research impacts: research can have both positive and negative effects (e.g., the creation of chlorofluorocarbons reduced stratospheric ozone); the adoption of research findings depends on sociocultural factors; transformative innovations often depend on previous research; it is extremely difficult to assess the individual and collective impacts of multiple researchers who are tackling the same problem; and finally, it is difficult to assess the transferability of research findings to other, unintended problems.
From page 76...
... , however, the San Francisco Declaration on Research Assessment1 acknowledges the potential of journal impact factors to distort the evaluation of scientific research. Alberts asserts that the impact factor must not be used as "a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions." Serious consequences -- both positive and negative -- can occur when governments use metrics with the potential to change researchers' behavior.
From page 77...
... strongly urges research councils to take the lead on the knowledge transfer agenda, to influence the knowledge transfer behavior of universities and research institutes, to better engage user organizations, and to consider metrics that would better demonstrate the economic and social impacts of scientific research. These metrics, it is argued, should assess research excellence as well as the relevance of research findings to user needs, the propensity for economic benefits, and the quality of
From page 78...
... identifying several drivers of research impact, including networks of researchers, the involvement of users throughout the research process, and the supportiveness of the current policy environment. A subsequent ESRC report suggests a more comprehensive picture of the interactions between researchers and policy makers might aid efforts to track the policy impacts of research (Economic and Social Research Council, 2012)
From page 79...
... A second Australian study used a different methodology, comparing past research investments with projected future health benefits and basing the value of life on a metaanalysis of studies. In the United Kingdom, the Academy of Medical Sciences, the Medical Research Council, and the Wellcome Trust commissioned research
From page 80...
... and for the World Health Organization (Buxton et al., 2004), raise many issues concerning the valuation of research aimed at improving health: • Measuring the economic returns on research investments -- Approaches include using a benefit/cost ratio (ratio of the value of health benefits to the costs of research)
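The benefit/cost ratio described here is simple arithmetic. As a minimal illustrative sketch (the figures below are the DoE fossil energy program totals cited later in this chapter, used here only to show the computation, not as a health-research example):

```python
def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """Ratio of the value of benefits realized to the cost of the
    research that produced them. Both values must be in the same
    units (e.g., constant 1999 dollars); a ratio above 1.0 means
    estimated benefits exceeded research costs."""
    if costs <= 0:
        raise ValueError("research costs must be positive")
    return benefits / costs

# DoE fossil energy programs, 1986-2000 (1999 dollars): an
# estimated $7.4 billion in benefits against a $4.5 billion investment.
print(round(benefit_cost_ratio(7.4e9, 4.5e9), 2))  # 1.64
```

The same caveats raised in the text apply: the ratio is only as meaningful as the valuation of benefits and the attribution of those benefits to the research being assessed.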
From page 81...
... research and the health benefits that accrue to the United States from research in other countries, and determining how such international transfers of research knowledge should be accounted for. • Attribution -- It is difficult to disentangle how much of health improvement can be attributed to health care, as opposed to improved hygiene, diet, and other behaviors; to what extent behavior changes to improve health can be attributed to behavioral and social science research; and how the contributions of behavioral and social science research to improved health can be distinguished from those of medical research on therapeutics.
From page 82...
... . Thus, DoD funds technology platform research through the Defense Advanced Research Projects Agency, and DoE funds similar research through the Advanced Research Projects Agency-Energy.
From page 83...
... must be tested through economic analyses of industry investment patterns and the causes of any underinvestment trends determined. Such analysis leads to the design and implementation of policy responses, which are followed by periodic economic impact assessments.
From page 84...
... The attempt in the 2001 report to highlight the "options value" of technological advances points to another type of benefit that is difficult to capture in retrospective evaluations of R&D investments but is important nonetheless: in a world of great uncertainty about economic, climatic, and technological developments, there is value in having a broad array of technological options through which to respond to changes in the broader environment.3 The DoE programs examined in the 2001 study produced a number of innovations that were not introduced commercially simply because their characteristics and performance did not make them competitive with existing or other new technologies. But there is a definite value associated with the availability of these technological alternatives or options in the face of an uncertain future (see Box 5-1 for a discussion of shale oil extraction technologies, many of which benefited from DoE and other federal R&D but have been applied only in the past decade)
From page 85...
... . DoE fossil energy programs during 1986-2000, by contrast, accounted for an investment of $4.5 billion and yielded economic benefits estimated at $7.4 billion (again in 1999 dollars)
From page 86...
... supporting business units with innovative technologies for their product and service roadmaps, (3) exploring the science underlying IT, and (4)
From page 87...
... . That analysis, however, has not been universally accepted, in part because it examined only economic activity and not the impact on human health, and it attributed the economic returns to the government's investment when other factors, including private investments in genomics, have contributed (Brice, 2013; Wadman, 2013a)
From page 88...
... A 2009 study, Entrepreneurial Impact: The Role of MIT (Ewing Marion Kauffman Foundation, 2009), analyzes the economic impacts of companies started by MIT alumni.
From page 89...
... It should be noted as well that at least some metrics proposed or implemented for faculty evaluation at some universities, such as patenting, could have effects similar to the use of publication counts in China and other economies: if faculty perceive an incentive to obtain more patents, they are likely to file for more patents; however, the quality of these patents could well be low, and the legal fees paid by academic institutions to protect the rights to a larger flow of patent applications could increase. Moreover, the appropriateness of commonplace metrics depends largely on whether the goal of the university's technology transfer office is to increase the university's revenue through licensing, to assist university entrepreneurs, to support small firms, to support regional development, to attract and retain entrepreneurial faculty, or any number of other goals.
From page 90...
... . Alternative measures focus on identifying the income earned and expenditures devoted to technology transfer activities, tracking invention disclosures, and a different strategy and different forms of data gathering from those typically used by research funding agencies and program managers.
From page 91...
... study discussed in the text (Ewing Marion Kauffman Foundation, 2009) with the use of "outcome measures" such as those discussed in the previous paragraph.
From page 92...
... Programs that allocate funds among different research areas, such as NSF's Science and Technology Centers Program, are more difficult to evaluate than programs that allocate funds among researchers within a single research area, such as economics research supported by NSF. One reason for this greater difficulty is the large number of alternative research funding programs against which the program under consideration must be compared.
From page 93...
... The standard review mechanism for prospective evaluation of research grant and contract proposals is some form of peer review and assessment. Some have criticized peer review for discouraging the funding of high-risk research or radically new research approaches, but more recently, others have criticized it for the dilution of expertise in the NIH review process: Historically, study sections that review applications were composed largely of highly respected leaders in the field, and there was widespread trust in the fairness of the system.
From page 94...
... As an example, consider the NSF Science and Technology Centers Program, aimed at developing large-scale, long-term, potentially transformative research collaborations (National Science Foundation, 2014b)
From page 95...
... . Lal and colleagues set out to compare the effects of different funding programs using retrospective matching.
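Retrospective matching pairs each funded unit with the most comparable unfunded unit on pre-award characteristics, so that post-award outcomes can be compared across otherwise similar cases. A minimal sketch of the idea (the records, covariate names, and outcome below are hypothetical illustrations, not drawn from the Lal study):

```python
def match_and_compare(funded, unfunded, covariates, outcome):
    """Pair each funded unit with its nearest unfunded unit on the
    listed pre-award covariates (Euclidean distance), then return
    the mean difference in the post-award outcome."""
    def dist(a, b):
        return sum((a[k] - b[k]) ** 2 for k in covariates) ** 0.5

    diffs = [f[outcome] - min(unfunded, key=lambda u: dist(f, u))[outcome]
             for f in funded]
    return sum(diffs) / len(diffs)

# Hypothetical records: publication counts before and after the award cycle.
funded = [{"pubs_before": 10, "pubs_after": 18},
          {"pubs_before": 4, "pubs_after": 9}]
unfunded = [{"pubs_before": 9, "pubs_after": 12},
            {"pubs_before": 5, "pubs_after": 7}]
print(match_and_compare(funded, unfunded, ["pubs_before"], "pubs_after"))  # 4.0
```

Real applications use richer covariates and more careful matching (with replacement, calipers, propensity scores), but the core design choice is the same: construct a counterfactual comparison group from unfunded units that resemble the funded ones.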
From page 96...
... That program conducted a number of evaluations, including comparisons with firms that had not applied for an ATP grant and with applicants that had applied but not been funded (Advanced Technology Program, 2005; Kerwin and Campbell, 2007)
From page 97...
... This can be the charge of a government unit with the capability to systematically evaluate the research enterprise, assess its impact, and develop policy options for federally funded research. As noted, however, no federal agency or department currently is tasked with performing policy analysis for research.

