
5 Evaluating SARP
Pages 58-73

The Chapter Skim interface presents the passage algorithmically identified as the most significant chunk of text on each page of the chapter.


From page 58...
... For program managers, evaluation can be a source of organizational learning and improvement. Stakeholders care greatly about what a program produces.
From page 59...
... The data for the assessment would include valid and reliable quantitative measures of the desired outcomes. For programs aimed at achieving a variety of results, metrics could be included for all of them.
From page 60...
... The research value mapping initiative aims to evaluate both the output produced by such a program and the capacity it generates -- the scientific and human capital.
From page 61...
... Similarly, evaluations of the Sea Grant Program have benefited from an extensive time horizon going back to 1967 (e.g., National Research Council, 1994).
From page 62...
... and promote partnerships. Input Metrics (measure tangible quantities put into a process to achieve a goal)
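
The parenthetical above is one entry in the chapter's taxonomy of metric types; output metrics for science appear later in the skim (page 72). As a purely illustrative sketch, that taxonomy could be encoded as below; the definitions given for output and outcome metrics are assumed standard evaluation usage, not quotations from the report:

    from enum import Enum

    class MetricType(Enum):
        # Definition quoted from page 62 of the report.
        INPUT = "tangible quantities put into a process to achieve a goal"
        # The two definitions below are assumed standard usage, not from the report.
        OUTPUT = "direct products of program activities"
        OUTCOME = "resulting changes in behavior, capacity, or conditions"
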
From page 63...
... 3. Appropriate stakeholders judge these results to be sufficient to address scientific questions and/or to inform management and policy decisions.
From page 64...
... Given the newness of SARP and the uncertainties about the nature of its possible benefits, such learning is an important objective for any evaluation of SARP.

A MONITORING APPROACH TO EVALUATION

Because the standard evaluation approaches are not appropriate for the Sectoral Applications Research Program, we recommend that ...
From page 65...
... As detailed in Chapters 3 and 4, we recommend three lines of activity for SARP: a limited program of use-inspired social and behavioral science research to inform climate-related decisions in sectors defined by resources or decision arenas; workshops; and, at some point following the first year of workshops, one or more multiyear pilot projects aimed at facilitating existing networks or initiating new ones, to support and study the evolution of sectoral knowledge-action networks of decision makers and scientists. All three activities share similar long-term outcome objectives: to induce decision makers to consider and use climate information in their decisions, and to do so appropriately.
From page 66...
... Pilot projects can reasonably be expected to change the actual information-collecting and information-using behavior of participating decision makers, and perhaps the information-collecting behaviors of participating scientists. Workshops may lead to establishing better communication between the producers and users of climate information, but other behavioral changes may occur only after effective communication has been in place for a while.
From page 67...
... Among the metrics that could be recorded fairly easily and regularly, and that can be captured by minor modifications or additions to existing data systems, five stand out: the number of new partners receiving climate-related information; the variety of users; the number of new decision areas in which climate-related information is involved; the number of existing models, maps, texts, documents, assessments, and decision routines modified to integrate climate-change information, and the extent of those modifications; and the judgments of target audiences.
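
These five indicators lend themselves to routine structured record-keeping. The sketch below is illustrative only; the report describes no such system, and every name in it is hypothetical. It shows one way a program office might log the indicators for each reporting period:

    from dataclasses import dataclass, field

    @dataclass
    class MonitoringRecord:
        """Hypothetical per-period log of the five monitoring indicators."""
        period: str                                          # reporting period label, e.g. "year 1"
        new_partners: int = 0                                # new partners receiving climate-related information
        user_types: set = field(default_factory=set)         # variety of users, as sector labels
        new_decision_areas: int = 0                          # new decision areas involving climate information
        artifacts_modified: int = 0                          # models, maps, documents, routines updated
        judgment_scores: list = field(default_factory=list)  # target-audience ratings, e.g. on a 1-5 scale

        def summary(self) -> dict:
            """Roll the raw entries up into the five reportable indicators."""
            mean_judgment = (sum(self.judgment_scores) / len(self.judgment_scores)
                             if self.judgment_scores else None)
            return {
                "period": self.period,
                "new_partners": self.new_partners,
                "user_variety": len(self.user_types),
                "new_decision_areas": self.new_decision_areas,
                "artifacts_modified": self.artifacts_modified,
                "mean_judgment": mean_judgment,
            }

    # Example use; all values are invented for illustration.
    record = MonitoringRecord(period="year 1", new_partners=7,
                              new_decision_areas=2, artifacts_modified=4)
    record.user_types.update({"water utilities", "coastal planners"})
    record.judgment_scores.extend([4, 5, 3])
    print(record.summary())
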
From page 68...
... Finally, the judgments of target audiences are a useful metric. Potential users of climate science information can themselves provide valuable information regarding the extent to which their decision-making context has been altered in relevant ways, the kinds of information available and used in making decisions, and the extent to which they are aware of climate information and believe it to be relevant to their decisions.
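
The judgment metric could be gathered with a short recurring survey of target audiences. A minimal sketch follows, assuming hypothetical survey fields that match the three dimensions just named: whether the decision-making context has changed, what information is used, and awareness and perceived relevance. None of these field names come from the report:

    from dataclasses import dataclass

    @dataclass
    class AudienceJudgment:
        """Hypothetical one-respondent survey record for the judgment metric."""
        sector: str               # respondent's decision arena, e.g. "water management"
        context_altered: int      # 1-5: decision-making context changed in relevant ways
        info_kinds_used: tuple    # kinds of climate information used in recent decisions
        aware_of_info: bool       # aware of available climate information
        perceived_relevance: int  # 1-5: believes the information relevant to decisions

    # Share of respondents who both know of and use climate information.
    def usage_rate(responses: list) -> float:
        users = [r for r in responses if r.aware_of_info and r.info_kinds_used]
        return len(users) / len(responses) if responses else 0.0
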
From page 69...
... As SARP moves forward, and especially to the extent that the program chooses to emphasize capacity-building approaches, one should expect some linkages to dissolve while others develop, persist, and stimulate still more connections -- and thus additional capacity building. One way the use of climate science information becomes institutionalized is through the creation of new organizations or organizational roles to fulfill intermediary functions between climate information producers and users.
From page 70...
... 5. Changes in Communication: Increased efforts by scientists involved in the workshop to discuss research results with users involved in the workshop and with other users in the same decision arena, increased efforts ...
From page 71...
... However, workshops involve a lower level of program investment over a shorter period of time, and they occur when less is known about the relationship between available climate information and users' needs and about how to link science producers and users most effectively. Therefore, workshops should be considered as an early phase in the social innovation process, and expectations should be set accordingly.
From page 72...
... The scientific contributions should also be assessed against relevant output metrics for science, such as producing peer-reviewed and accessible results, developing a research community and associated infrastructure to support continued development and dissemination of the use-inspired work generated by the program, and developing institutional and human capacities to address related research issues. The key scientific outcomes from the research are likely to be improved understanding of the processes by which climate-related information comes to be produced in a use-inspired way and the means by which such information comes to be used or not used by those it can benefit.
From page 73...
... CONCLUSION

Textbook program evaluations can be very valuable. However, given the small size of SARP, the expectation that desired outcomes will take at least several years to achieve, the multiple types and levels of decisions that could be influenced by climate information, the variety of relevant decision makers, and the multiplicity of programmatic approaches to shape decision support systems, such an evaluation approach is not appropriate for SARP.

