
4 The Usefulness and Limitations of Metrics in Measuring the Returns on Publicly Funded Research
Pages 51-72



From page 51...
... research enterprise to determine how all of its component parts interrelate, a theme that is explored in detail in Chapter 6. • Ongoing data collection efforts, including Science and Technology for America's Reinvestment: Measuring the Effect of Research on Innovation, Competitiveness and Science (STAR METRICS)
From page 52...
... It is also important to note the subtle difference between "measuring inputs to and outputs from the research enterprise" and "evaluating the impacts of the research enterprise": the former focuses on the measurement of external factors that modulate the process of research and on the measurement of intermediate research outputs, such as publications and patents; the latter focuses on how research ultimately affects society. This chapter reviews in turn existing measures; the uses and limitations of a commonly used input indicator and a commonly used output indicator; the challenges of data collection to inform measurement tools, with a focus on the STAR METRICS Program; the limitations of existing metrics; and the need to move beyond current indicators.
From page 53...
... This method depends greatly on access to and the quality of existing data. The next two sections describe indicators of the broader systems of research and innovation from two different perspectives -- research inputs and research outputs.
From page 54...
... It is a crude measure that allows for international comparisons of the levels of national investment in R&D, investments that are correlated with overall innovative performance. Nonetheless, like many widely used metrics for R&D investment, R&D/GDP ratios conceal a great deal of cross-national heterogeneity.
From page 55...
... Moreover, combining data on national investments in research and in development does not allow for cross-national comparisons of research investments alone, presenting a significant barrier to examining the effects of federal research investments.
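To make the aggregation concern concrete, the indicator and its composition can be written out explicitly. This is a stylized decomposition using the standard Frascati categories of basic research, applied research, and experimental development, not a formula taken from this report:

$$\mathrm{R\&D\ intensity} = \frac{\mathrm{GERD}}{\mathrm{GDP}}, \qquad \mathrm{GERD} = R_{\mathrm{basic}} + R_{\mathrm{applied}} + D,$$

where GERD is gross domestic expenditure on R&D. Two countries with identical ratios can have very different mixes of research ($R_{\mathrm{basic}} + R_{\mathrm{applied}}$) and development ($D$); that compositional information is precisely what the single aggregate ratio discards.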
From page 56...
... Work by Hall (2005) using the market value of private firms suggests that the appropriate private depreciation rate may be larger than 15 percent and will vary over time and sector, depending on competitive conditions.
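For readers unfamiliar with where a depreciation rate enters such calculations, R&D capital stocks are conventionally constructed by the perpetual inventory method. A generic textbook form, not the specific estimation in Hall (2005), is:

$$K_t = (1 - \delta)\, K_{t-1} + R_t,$$

where $K_t$ is the R&D stock at time $t$, $R_t$ is real R&D spending, and $\delta$ is the depreciation rate. A $\delta$ above 15 percent implies that the private value of past research erodes quickly, so measured stocks, and any rates of return computed from them, are sensitive to this single parameter.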
From page 57...
... An example is the improved understanding of scientific processes, useful for a firm's own product development, that is gained from reading scientific publications or attending scientific meetings. In addition to spillovers from public R&D, firms and others frequently benefit from observing the introduction of new products and processes by their competitors.
From page 58...
... The diffuse nature of the output of federal research has led some researchers to attempt measurement at the aggregate level by relating aggregate total factor productivity, or TFP, to various types of R&D spending across countries (Guellec and van Pottelsberghe de la Potterie, 2001; Westmore, 2013)
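A stylized version of such a specification, simplified from this cross-country literature rather than reproduced from either cited study, regresses log TFP on stocks of different types of R&D:

$$\ln \mathrm{TFP}_{it} = \alpha_i + \lambda_t + \beta_d \ln D_{it} + \beta_p \ln P_{it} + \beta_f \ln F_{it} + \varepsilon_{it},$$

where $D_{it}$, $P_{it}$, and $F_{it}$ are the domestic business, public, and foreign R&D stocks of country $i$ in year $t$, the $\alpha_i$ and $\lambda_t$ terms absorb country and year effects, and each $\beta$ is interpreted as the elasticity of TFP with respect to the corresponding R&D stock.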
From page 59...
... Programs currently under way include the Research Excellence Framework in the United Kingdom, which is intended to measure the performance of universities and determine funding allocation based on the wider nonacademic impacts of research; the Excellence in Research for Australia framework, which uses bibliometrics and other quantitative indicators to measure research performance for accountability and advocacy purposes, and potentially for allocation of funds; and the Canadian Academy of Health Sciences Payback Framework, which relies on several indicators of research impact and incorporates a logic model for health

9 For an example of how bibliometric measures can be used to help evaluate research impacts, see Lichtenberg (2013)
From page 60...
... quality of Australian research. Canadian Academy of Health Sciences Payback Framework, Canada: draws on the well-established 'payback' framework; five categories: advancing knowledge; capacity
From page 61...
... Data mining approach, automated. Level 1 rolled out to 80 universities. Feedback generally ... Potentially very wide, depending
From page 62...
... NOTES: CIHR = Canadian Institutes of Health Research; EC = European Commission; NIHR = National Institute for Health Research; RAE = Research Assessment Exercise; RQF = Research Quality Framework. SOURCE: Reprinted with permission from Guthrie et al.
From page 63...
... institutions. Phase II is currently gathering information on scientific activities from individual researchers, commercial publication databases, administrative data, and other sources.
From page 64...
... Finally, STAR METRICS data would be more useful if steps were taken to ensure that the data can be flexibly linked to other relevant data sources, including but not limited to those maintained by the federal statistical and science agencies, as well as proprietary data sources such as the Institute for Scientific Information's Science Citation Index, recognizing that data emanating from such databases have very different meanings from field to field. Creating a robust and linkable dataset may require the addition of individual and organizational identifiers to the current STAR METRICS data.
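As a purely illustrative sketch of why stable identifiers matter for such linkage, the pandas snippet below joins a hypothetical STAR METRICS-style award table to an external publication table on a shared researcher identifier. All field names (researcher_id, award_id, and so on) are invented for illustration and are not the program's actual schema.

```python
import pandas as pd

# Hypothetical award records in a STAR METRICS-like shape;
# these field names are invented for illustration only.
awards = pd.DataFrame({
    "researcher_id": ["R001", "R002", "R003"],
    "award_id": ["A-17", "A-42", "A-99"],
    "agency": ["NSF", "NIH", "NSF"],
})

# Hypothetical external source, e.g., a publication database that has
# been disambiguated down to the same person-level identifier.
pubs = pd.DataFrame({
    "researcher_id": ["R001", "R001", "R003"],
    "doi": ["10.1000/x1", "10.1000/x2", "10.1000/x3"],
    "citations": [12, 3, 40],
})

# With a shared identifier, linkage reduces to a simple join; without
# one, analysts must fall back on error-prone name matching.
linked = awards.merge(pubs, on="researcher_id", how="left")
print(linked)
```

The design point is the one the text makes: the join itself is trivial once individual and organizational identifiers exist in both datasets, and essentially unreliable otherwise.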
From page 65...
... Finally, the World RePORT database will plot information about NIH-funded projects onto a geographic map to facilitate greater coordination among public and private funders.

LIMITATIONS OF EXISTING METRICS

The committee's review of many current metrics for research inputs and outputs revealed them to be lacking.
From page 66...
... These components often are intangible, including opportunities and relationships that are not captured by most data collection programs and cannot be measured by any method available today. The challenge, which has yet to be met, is to capture and articulate how these intangible factors enable the success of the research enterprise.
From page 67...
... Strategic planning studies that examine the entire technology base in question can also identify gaps in the existing technology platforms and infratechnologies. Another crucial issue with the use of metrics to assess research quality and impacts, one stressed throughout this report, is that knowledge from basic research often underpins applied research.
From page 68...
... That acknowledgment was eventually expanded to include three earlier NSF grants that extend back to 1974 and span fields of science as seemingly abstruse as centrality measures, analyses of prominence in international article citation networks, and methods for crawling and cataloguing websites. Twenty research articles cited by Page, covering highly abstract topics such as hypertext link structures, information retrieval, databases, bibliometrics (citation analysis)
From page 69...
... These are the kinds of questions raised by the case studies in Boxes 4-1 and 4-2. Bibliometrics, for example, would not have flagged the supporting citations in the patent application for Page's Google search algorithm (see Box 4-1)
From page 70...
... Azoulay and colleagues standardized publication outputs from these two groups of researchers using statistical methods.12 They discovered that HHMI researchers produced 96 percent more high-impact papers and 35 percent more low-impact papers compared with NIH researchers. In addition, HHMI researchers were awarded six times as many grants and introduced more new keywords into their fields of science.
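The "statistical methods" referenced in the footnote are not detailed in this excerpt. A minimal sketch of one common normalization, ranking each paper's citations against its field-and-year peers before comparing funder groups, might look like the following; the data values, field names, and the 75th-percentile cutoff for "high impact" are all assumptions made for illustration.

```python
import pandas as pd

# Toy paper-level data: funder group, field, publication year, and
# citation counts. All values are invented for illustration.
papers = pd.DataFrame({
    "funder": ["HHMI", "HHMI", "NIH", "NIH", "NIH", "HHMI"],
    "field":  ["bio",  "bio",  "bio", "chem", "chem", "chem"],
    "year":   [2000,   2000,   2000,  2001,   2001,   2001],
    "cites":  [150,    8,      30,    5,      60,     90],
})

# Rank each paper against peers in the same field and year, so that
# "high impact" is defined relative to comparable papers.
papers["pct"] = papers.groupby(["field", "year"])["cites"].rank(pct=True)

# Flag a paper as high-impact if it falls in the top quarter of its
# cohort (the 0.75 cutoff is an arbitrary choice for this sketch).
papers["high_impact"] = papers["pct"] >= 0.75

# Compare the share of high-impact papers across funder groups.
print(papers.groupby("funder")["high_impact"].mean())
```

Normalizing within field and year before comparing groups matters because raw citation counts differ mechanically across disciplines and publication vintages; without it, a funder concentrated in highly cited fields would look more productive by construction.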
From page 71...
... They are most effective when their definitions and specific uses have been spelled out clearly in advance.

THE NEED TO MOVE BEYOND CURRENT INDICATORS

There are countless indicators of research performance.

