Because of the role of innovation as a driver of economic productivity and growth and as a mechanism for improving people’s well-being in other ways, understanding the nature, determinants, and impacts of innovation has become increasingly important to policy makers. To be effective, investment in innovation requires this understanding, which, in turn, requires measurement of the underlying inputs and subsequent outcomes of innovation processes.
In May 2016, at the request of the National Center for Science and Engineering Statistics (NCSES) of the National Science Foundation (NSF), the Committee on National Statistics (CNSTAT) of the National Academies of Sciences, Engineering, and Medicine convened a workshop—bringing together academic researchers, private- and public-sector experts, and representatives from public policy agencies—to develop strategies for broadening and modernizing innovation information systems. As described in the statement of task (see Box 1-1), the workshop was organized by a steering committee to assist NCSES in refining and prioritizing its work on innovation metrics to maximize the relevance and utility of its data collection programs and statistical products to users. A background paper (Robbins, 2016) was also prepared to help identify topics of interest to address. The focus of the workshop was on continuing the communities’ discussions and on conceptualizing innovation—its inputs, dynamics, outputs, and consequences—in a way that reveals which elements are being measured well and which are being measured inaccurately or not at all. Presenters and discussants were asked to take into
account the role of innovation not only as it affects economic growth and productivity, but also as a mechanism for creating greater public good and meeting social challenges that are often nonmarket in nature. Workshop participants were also asked to consider how new kinds of data can be used to complement more traditional survey and administrative sources in the construction of innovation metrics. Throughout the discussions, one objective was to identify questions that cannot be answered now but could be with additional data that have a reasonable chance of being collected.
During his introductory comments, workshop chair Scott Stern (Massachusetts Institute of Technology) outlined why the meeting was convened and why the topical coverage is timely. He pointed out that the term “innovation” is meant to identify phenomena that are themselves important, but that are often elusive. Nonetheless, in the context of the measurement of science and technology indicators, there is a growing consensus among practitioners and researchers who gather the data and report their results that innovation—both its inputs and potentially various elements of its outputs—is measurable and central to economic and social progress. This, Stern added, is a particularly important contention in an era when the United States and other countries are facing relatively low rates of productivity growth and periods of greater economic fluctuation. One purpose of the workshop, he said, was to bring together communities to talk about the linkage between measurement and policies that might help promote economic and social progress.
Stern identified the three major communities represented at the workshop: (1) government agencies and policy analysts who—in addition to developing large-scale statistical programs to measure various parts of the economy and society, including science and technology—are increasingly and intently focused on the measurement of the causes, consequences, and phenomena of innovation; (2) academic researchers who have developed conceptual frameworks and metrics that advance understanding of innovation; and (3) practitioners, the people who are doing the innovating. One consequence of the presence of three distinct communities is that people often use the word “innovation” to mean very different things, which can lead to an inability to establish standards that allow for clear and transparent communication. Innovation may be a complex phenomenon, and many statistics may be needed to capture its multiple dimensions. Yet it should be clear that, if communities are talking past each other, it becomes even more difficult for policy makers, practitioners, and researchers to take advantage of important new findings as they emerge.
Stern also noted the timeliness of the workshop. In addition to its long-range goals, NCSES is about to embark on a “medium-term” goal to influence the OECD and Eurostat’s revision of the Oslo Manual (described in Chapter 2) and to provide input to the OECD Blue Sky Forum in autumn 2016. These activities are committed to advancing and modernizing innovation statistical programs and setting a measurement agenda for the coming decade. The workshop was intended to present broader lessons that may enhance the capabilities of NSF and other statistical agencies to continue on their path of developing a robust innovation measurement program.
Speaking on behalf of NCSES, John Gawalt (NSF) noted that, as part of its mission to conduct data collections related to U.S. competitiveness, the agency has a focused interest in innovation and, consequently, in collecting and reporting science, technology, and innovation (STI)-related information. He outlined the goals of the workshop and described NCSES activities that relate to innovation measurement.1 He noted that the agency’s history of collaboration with CNSTAT has allowed it to address challenges associated with its mandate to cover a number of topics, mostly within the confines of the business sector and the nonprofit sector. Most recently, NCSES commissioned CNSTAT to produce the report Capturing Change in Science, Technology, and Innovation: Improving Indicators to Inform Policy (National Research Council, 2014), which provided an “assessment of the types of data, metrics, and indicators that would be particularly influential in evidentiary policy and decision making for the long term.” The authoring panel of that report was also charged with “recommending indicators that would reflect the fundamental and rapid changes in the global STI system while having practical resonance with a broad base of users in the near, medium, and long terms” (p. 7). The report covered a broad spectrum of topics that fall within the purview of NCSES responsibilities, and included a chapter on the measurement of innovation. Gawalt noted that, in commissioning this workshop as a follow-up to the Capturing Change report, NCSES sought to tap into participants’ expertise to inform its data collection activities and to refine and prioritize its plan to improve the breadth of innovation measures. With input from the workshop and elsewhere, he said the agency hopes to shape its innovation measurement agenda for the future in a way that will benefit the user community.
1The agency conducts and supports large-scale surveys on (1) the science and engineering workforce and the progress of STEM education and (2) R&D funding and performance across all sectors of the economy—business, academia, and, in the near future, the nonprofit sector; the agency also collects analogous information on the federal and state governments. NCSES publishes information about the academic research infrastructure and about public understanding and attitudes toward science. Under the guidance of the National Science Board, the agency produces a biennial report to Congress, Science and Engineering Indicators, which publishes high-quality data on the science and engineering enterprise overall, where possible in an international context.
As referenced in the prospectus for the workshop,2 the Oslo Manual (OECD-Eurostat, 2005, p. 46) defines innovation as “. . . the implementation of a new or significantly improved product (good or service), or process, a new marketing method, or a new organizational method in business practices, workplace organization or external relations.” Inputs to innovation such as research and development (R&D), capital expenditures, and training are defined by the Oslo Manual as innovation activities that, in principle, are “all scientific, technological, organizational, financial, and commercial steps which actually, or are intended to, lead to the implementation of innovations. Some of these activities are themselves innovative, while others are not novel but play a role in the implementation of innovations” (p. 18).
Definitions in the Oslo Manual are oriented toward measurement of innovation by business enterprises at the firm level and in a way that facilitates international comparability. In examining the purpose of innovation measurement, Robbins (2016, p. 2) raised a set of questions about the appropriate scope of data collection objectives:
2This unpublished document was circulated to the workshop steering committee and to participants to communicate the goals and provide background information for the activity.
- What concepts identify the outputs of an innovation process, resulting from the activities of actors interacting with other inputs in a particular environment?
- What concepts explain the effect of different kinds of innovation over time, as with incremental innovation and more transformative or disruptive innovation, such as general purpose technologies?
- What concepts for innovation output explain industry and technology variations?
- What measures of innovation reflect the broader impact of innovation on economic growth and social outcomes?
As described in the workshop prospectus, researchers are seeking greater conceptual flexibility and bringing into scope how the successful exploitation of ideas affects the well-being of society more broadly, beyond the contribution to efficiency, effectiveness, or quality in the production of market goods and services. Knowledge creation has the capacity to influence nonmarket outcomes in areas such as health, environmental sustainability, and education.3 Similarly, while advocating for measurement of the transmission mechanisms of new knowledge and its impacts on economic development, OECD (2010) also highlighted questions of how innovation affects the workplace, communities, and social habits.
Regardless of scope, many participants noted that measurement of innovation is difficult. Many kinds of innovations occur, which means that a simple aggregation of cases cannot be expected to be very useful for assessing (or predicting) impacts on economic growth or on other outcomes. Some innovations have a huge impact, while others are useful but minor. Additionally, not all innovation inputs and outputs can be easily quantified. Sources such as NCSES human resource data provide information necessary to measure “education, skills, and other dimensions of human capital that are used in the process of innovation” (Robbins, 2016). However, the process of R&D, for example, also involves a continuous flow of idea creation and should be measured as such. As highlighted in the workshop session on “measuring public-sector innovation and social progress,” connecting measures of innovation to economic and social outcomes is often even more challenging than quantifying innovation or its inputs.
The emphasis of this workshop—on conceptual bases for measuring innovation—was motivated in part by a key message that emerged from the OECD Blue Sky Forum (2006) a decade ago: that research on innovation is highly fragmented, which undermines its effectiveness. This observation led to calls for the development of general analytic frameworks and greater coordination of research efforts.
3For example, Cutler and McClellan (2001) found that innovations in the treatment of heart attacks, depression, cataracts, and other conditions have led to increased longevity and less absenteeism from the workplace.
In part due to the multiple objectives of innovation measurement, workshop participants discussed at length the activities of individuals as well as those of institutions and organizations. Firm-based innovation has been carefully defined in the Oslo Manual (OECD-Eurostat, 2005) as “the interplay of institutions and the interactive processes at work in the creation of knowledge and in its diffusion and application.” Less attention has been given to the role of individuals in innovative processes. In his address to the 2006 Blue Sky II conference (OECD, 2007), John Marburger called for new work in a way that implicitly recognized the importance of this comparatively neglected front: “We need models—economists would call them microeconomic models—that simulate social behaviors and that feed into macroeconomic models that we can exercise to make intelligent guesses at what we might expect the future to bring and how to prepare for it.”
In summarizing the workshop, the rest of this report is organized around the topical issues presented and discussed:
- Assessing innovation measurement: How accurately are innovation processes and resultant societal and economic outcomes measured? (Chapter 2)
- What is the nature of innovation that takes place beyond R&D, and how well is it measured? (Chapter 3)
- What roles do individuals (and networks of individuals) play in innovation, and how well are they measured? (Chapter 4)
- How can public-sector innovation and innovation resulting in social progress be measured? (Chapter 5)
- What do regional innovation models tell us about innovation processes, and what are the data needs for improving measurement at the subnational level? (Chapter 6)
- How best can innovation measurement agendas of the future be shaped? (Chapter 7)
Finally, key themes and possible future developments discussed in the closing sessions of the workshop are summarized in Chapter 8.
This report has been prepared by the workshop rapporteur as a factual summary of what occurred at the workshop. The planning committee’s role involved planning and convening the workshop. The views contained in the report are those of individual workshop participants and do not necessarily represent the views of all workshop participants, the planning committee, or the National Academies of Sciences, Engineering, and Medicine.