
Improving Evaluation of Anticrime Programs (2005)

Suggested Citation:"6 What Organizational Infrastructure and Procedures Support High-Quality Evaluation?." National Research Council. 2005. Improving Evaluation of Anticrime Programs. Washington, DC: The National Academies Press. doi: 10.17226/11337.

6
What Organizational Infrastructure and Procedures Support High-Quality Evaluation?

Adequate funding is a prerequisite for sustaining a critical mass of timely and high-quality impact evaluations distributed over the criminal justice programs of national and regional policy interest. Relative to the resources devoted to studying the effectiveness of interventions in health and education, those available from all sources for evaluation of criminal justice programs are meager (Sherman, 2004). This limitation constrains the potential quantity of criminal justice program evaluation and inhibits allocation of sufficient funding for high-quality research in any given evaluation project. The reality of this constraint makes it especially important for any agency funding criminal justice evaluation to prioritize evaluation projects in ways that provide the greatest amount of credible and useful information for each investment.

Effective prioritizing, in turn, requires a funding agency to maintain a strategic planning function designed to focus evaluation resources where they will make the most difference. Such planning must include an ongoing effort to scan the horizon for pertinent policy issues and identify emerging information needs, survey the field, and assess prospects for evaluation. It is not sufficient, however, only to monitor the state of the science and literature in criminal justice. The evolving political agenda must be understood as well so that policy makers’ need for information about criminal justice programs can be anticipated to the extent possible.

One important organizational implication of this circumstance is that agencies supporting evaluation research must have effective ongoing mechanisms for obtaining input from practitioners, policy makers, and researchers about priorities for program evaluation. Typical procedures for accomplishing this include scanning of relevant information sources and interaction with networks of key informants by knowledgeable program staff, consultation via advisory boards or study groups, and strategic planning studies.

As mentioned in the previous chapter, it may be problematic to combine the functions of setting priorities for program evaluation with those of reviewing proposals for evaluation of specific programs. Practitioner and policy maker perspectives are critical to setting priorities that advance practice and policy, but of limited value for assessing the quality of proposed evaluation research. Conversely, researchers' command of the current state of research evidence about criminal justice programs, especially emerging and innovative ideas, is relevant to strategic planning for evaluation, but their perspective on what best serves practice and policy is generally limited.

Obtaining well-informed and thoughtful input from practitioners, policy makers, and researchers in their respective areas of expertise requires that an agency have ready access to quality consultants and reviewers. Moreover, those consultants and reviewers must be willing to serve on advisory boards, review panels, and the like. It follows that an agency that wishes to set effective priorities and sponsor high-quality program evaluation must include personnel who maintain networks of contacts with outside experts and attend to the incentives that encourage such persons to participate in the pertinent agency processes. Correspondingly, the relevant staff must be supported with opportunities for participation in conferences and similar events that allow personal interactions and monitoring of developments in the field. They must also have time within the scope of their official duties to monitor and assimilate information from the respective research, practitioner, and policy literatures.

AGENCY STAFF RESPONSIBLE FOR EVALUATION

Given well-developed priorities for evaluation, the functions related to developing and supporting quality evaluations include more than the ability to assemble and work with qualified review panels. As discussed in the previous chapter, formulation of an RFP that provides clear and detailed guidance for development of strong evaluation proposals, and the preliminary site visits, feasibility studies, or evaluability assessments that may be necessary to do that well can also be significant to the ultimate quality and successful implementation of impact evaluations. After an evaluation is commissioned, knowledgeable participation in the monitoring process is also an important function for the responsible agency personnel. In addition, such personnel may be expected to respond to questions from policy makers and practitioners about research evidence for the effectiveness of the programs evaluated. For instance, staff may be asked to provide an assessment of what interventions are thought to work and what promising new interventions are on the horizon.

These various functions are best undertaken by staff members who understand research methodology and the underlying principles of the interventions. Moreover, given the diverse methods applicable to the evaluation of criminal justice programs, it would be an advantage for the responsible staff members to have broad research training and not be strongly identified with any particular methodological camp. The selection of personnel for these positions is an important agency function. Opportunities for appropriate professional development, such as further methodological training or short-term placement in other funding agencies, may also be beneficial to enable staff to stay current with methodological and conceptual advances in the field. Other ways of enhancing the evaluation and program expertise resident in the agency include hosting outside experts as visiting fellows, supporting advanced graduate student interns, and engaging regularly with a standing advisory board.

High-quality evaluation research occurs most readily in an organizational context in which the culture and leadership clearly value and nurture such research and the associated concept of evidence-based decision-making (GAO, 2003b; Garner and Visher, 2003; Palmer and Petrosino, 2003). This support includes attracting and retaining well-qualified professional staff, encouraging the sharing and use of information, and proactively identifying opportunities to push the evidence base in the direction of decision-making priorities. These considerations, and those discussed above, suggest that sound evaluation will be best developed and administered through a designated evaluation unit with clear responsibility for the quality of the resulting projects. To function effectively in this role, such a unit needs a dedicated budget and relative independence from program and political influence that might compromise the integrity of the evaluation research. Such a unit would also require staff with research backgrounds as well as practical experience and sufficient continuity to develop expertise in the essential functions particular to the programs and evaluations of the agency.

RELATIONSHIPS WITH OTHER AGENCIES AND EVALUATION OPPORTUNITIES

Given limited resources for evaluating criminal justice programs and policies, opportunities for agencies to leverage resources through collaborative relationships with other organizations offer potential advantages. One direct approach is through partnerships for sponsoring evaluation with organizations that share those interests. Many criminal justice topics, such as substance abuse and violence, are of interest to federal agencies and foundations outside the ambit of the National Institute of Justice, the major federal funder of criminal justice evaluation research. Other organizations, such as the Campbell Collaboration, engage in evaluation activities that routinely involve networks of prominent researchers and relevant organizations.

An especially productive form of collaboration occurs when a high-quality evaluation can “piggyback” on funding for a criminal justice service program. Funding for service programs often includes support for evaluation and data collection, and may even require it. Supplements that enhance the quality and utility of these embedded evaluations in selected circumstances are a cost-effective strategy for maximizing the value of research dollars. These opportunities can be developed by building collaborative relationships with agencies and units that fund service programs and may have the additional advantage of helping promote evaluation as a standard practice rather than a unique event. It should be noted that such interaction between service funding and evaluation implementation is in keeping with the increased advocacy for evidence-based policy that has occurred in recent years.

Impact evaluations frequently involve collaboration with the criminal justice programs being evaluated. However, the programs are often not enthusiastic collaborators and, in many instances, evaluators must seek programs willing to volunteer to participate in the evaluation. Difficulty in recruiting such reluctant volunteers, as noted earlier, is one of the recurring problems of implementation for impact evaluations. In this context, a critical function for an agency sponsoring impact evaluation is finding ways to ensure the participation of the programs for which evaluation is desired. The most effective procedure is to make agreement to participate in an external evaluation a condition of program funding, even if that option is not always exercised by the evaluation sponsor. Programs that accept external funding but are not willing to be evaluated or, perhaps, even actively resist any such attempt, undermine both the development of knowledge about effective programs and the principle of accountability for programs that receive outside funding.

A relevant function for major funders of criminal justice evaluations, therefore, is to exercise what influence and advocacy they can to encourage agencies that fund programs, including their own, to require participation in evaluation when asked unless there are compelling reasons to the contrary. A related function is to facilitate participation by offering effective incentives to the candidate programs and supporting them in ways that help minimize any disruption or inconvenience associated with participation in an impact evaluation.


EFFECTIVE USE OF EVALUATION RESULTS

To influence policy and practice in constructive ways, the findings of impact evaluations must be disseminated in an accessible manner to policy makers and practitioners. A less obvious function, however, is the integration of the findings into the cumulative body of evaluation research in a way that facilitates program improvement and broader knowledge about program effectiveness. This function has several different aspects. Most fundamentally, agencies that sponsor evaluation research must make the results available, with full technical details, to the research community in a timely manner. Those results may garner praise but, especially for important programs and policies, are at least equally likely to attract criticism. This response may not be gratifying to the sponsoring agency, but the importance of review and discussion of evaluation studies by a critical scientific community cannot be overestimated for purposes of improving evaluation methods and practice as the field evolves.

Potentially encompassed in critical reviews are reanalyses of the data using different models or assumptions and attempts to reconcile divergent findings across evaluation studies. Scrutiny at this level of detail, and the value of what can be learned from that endeavor, of course, are dependent upon access to the data collected in the evaluation. Making such data freely available at an appropriate time and encouraging reanalysis and critique will, in the long run, improve both the evaluations commissioned by the sponsoring agency and general practice in the field. It has the additional value of providing a second (and sometimes third and fourth) opinion about the credibility and utility of evaluation findings that might significantly influence policy or practice. As such, it can reduce the potential for inappropriate use of misleading results.

The value of close review of impact evaluation studies is not confined to those that are successfully implemented and completed. As discussed in Chapter 1, many evaluations fail for reasons of poor design or inadequate implementation. The sponsoring agency, and the evaluation field more generally, can learn much of value for future practice by investigating the circumstances associated with failed evaluations and the problems that led to that failure. For these reasons, it will be useful for an agency to routinely conduct “post-mortems” on unsuccessful projects so that the reasons for failure can be better understood and integrated into the selection and planning of future evaluation projects. To allow comparison and better identification of distinctive sources of problems, similar reviews could be conducted on successful projects as well.

Another consideration regarding the use of evaluation studies has to do with the limitations of individual studies that were discussed in Chapter 4. Impact evaluations, by their nature, are focused on assessing the effects of a particular program at a particular time on particular participants. Any given evaluation thus has limited inherent generalizability. It is for this reason that evaluation researchers and policy makers are increasingly turning to the systematic synthesis or meta-analysis of multiple impact studies of a type of program for robust and generalizable indications of program effectiveness (Petrosino et al., 2003b; Sherman et al., 1997). Contributing studies to such synthesis activities, and providing support to those activities, therefore, are relevant functions for an agency that sponsors significant amounts of impact evaluation research. Indeed, a promising model for managing evaluation research is to combine ongoing research synthesis and meta-analysis by agency staff or contractors, funding of studies in identified gaps in the knowledge base, and occasional larger scale studies in areas where resolving uncertainty is of high value.
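
The inverse-variance pooling at the heart of such a meta-analytic synthesis can be sketched in a few lines. The sketch below is illustrative only; the effect sizes and variances are hypothetical and do not come from any study cited in this report.

```python
# Minimal fixed-effect meta-analysis sketch: pool effect sizes from several
# impact evaluations using inverse-variance weights. All numbers are invented.
import math

def pool_fixed_effect(effects, variances):
    """Return the inverse-variance weighted mean effect and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Hypothetical standardized mean differences from three program sites
effects = [0.30, 0.10, 0.22]
variances = [0.02, 0.05, 0.04]

est, var = pool_fixed_effect(effects, variances)
half_width = 1.96 * math.sqrt(var)
print(f"pooled effect = {est:.3f}, "
      f"95% CI = ({est - half_width:.3f}, {est + half_width:.3f})")
```

A fixed-effect model like this assumes the sites share a single true effect; when effects plausibly vary across sites, a random-effects model would be the more defensible choice.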

DEVELOPING AND SUPPORTING THE TOOLS FOR EVALUATION

Conducting high-quality impact evaluations of criminal justice programs is often hampered by methodological limitations. No one with experience conducting such evaluations would argue that available methods are as fully developed and useful as they could be, and even those that are generally well developed, such as randomized experiments, are often difficult to adapt without compromise when applied to operational programs in the field. Moreover, improvements and useful new techniques in criminal justice evaluation methods are inhibited by limited support for methodological development. A relevant function for any major agency that sponsors impact evaluation, therefore, is to contribute to the improvement of evaluation methods.

There are at least two readily identifiable domains of methodological problems in criminal justice evaluation. One has to do with the availability and adequacy of data for relevant indicators of program outcomes. For criminal justice programs, the outcomes of interest generally have to do with the prevalence of criminal or delinquent offenses or, conversely, victimization. For local data collections, there is little standardization for how such outcomes should be measured and little empirical work to examine how different approaches affect the results. Thus different studies measure recidivism in different ways and over different time periods, and varying self-report instruments are used to assess victimization. For evaluation projects that rely on pre-existing data, e.g., crime data from the Uniform Crime Reports (UCR), it is often difficult to find variables that match the specific outcomes of interest and to disaggregate the data to the relevant program site. Multisite studies, in turn, require a common core of data to permit comparison of results across sites, but these must usually be developed ad hoc because there are few standards and little basis for identifying the most relevant measures.
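
The sensitivity of recidivism measures to the follow-up window, noted above, is easy to demonstrate. The cohort below is invented for illustration; the point is only that the same data yield different "recidivism rates" depending on the window chosen.

```python
# Illustration with hypothetical data: the measured recidivism rate for a
# single cohort differs at 6-, 12-, and 24-month follow-up windows.

def recidivism_rate(days_to_rearrest, window_days):
    """Share of the cohort rearrested within window_days of release.
    None means no rearrest was observed for that person."""
    hits = sum(1 for d in days_to_rearrest
               if d is not None and d <= window_days)
    return hits / len(days_to_rearrest)

# Hypothetical days from release to first rearrest for ten releasees
cohort = [45, 130, 200, 400, 500, 700, None, None, None, None]

for months in (6, 12, 24):
    rate = recidivism_rate(cohort, months * 30)
    print(f"{months:2d}-month recidivism: {rate:.0%}")
```

Two evaluations of the same program could thus report quite different "results" simply by choosing different windows, which is one argument for the standardized outcome measures discussed below.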

There is much that the agencies that sponsor criminal justice evaluations might do to help alleviate these problems. Most directly, work should be supported on outcome measurement aimed at improving program evaluation and establishing cross-project comparability when possible. It would be especially valuable for evaluation projects if a compendium of scales and items for measuring criminal justice outcomes and the intermediate variables frequently used in criminal justice evaluations could be developed or identified and promoted for general use. Grantees could be asked to select measures from this compendium when appropriate to the evaluation issues. Also, public-use dataset delivery could be incorporated into grant and contract requirements and existing datasets could be expanded to include replication at other sites. Small-scale data augmentation and measurement development projects could be added to large evaluation projects.

The other area in which significant methodological development is needed relates to the research design component of impact evaluations. For the crucial issue of estimating program effects, randomized designs can be difficult to use in many applications and impossible in some, while observational studies depend heavily on statistical modeling and assumptions about the influence of uncontrolled variables. Improvements are possible on both fronts. Creative adaptations of randomized designs to operational programs and fuller development of strong quasi-experimental designs, such as regression discontinuity, hold the potential to greatly improve the quality of impact evaluations. Similarly, improvements in statistical modeling and the related area of selection modeling for nonrandomized quasi-experiments could significantly advance evaluation practice in criminal justice.
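
The regression-discontinuity idea mentioned above can be illustrated with simulated data: treatment is assigned by a cutoff on an eligibility score, and the program effect is estimated as the jump between regression lines fitted on either side of the cutoff. The data, cutoff, and effect size below are contrived; a real analysis would typically use local fits near the cutoff rather than the global fits shown here.

```python
# Sketch of a sharp regression-discontinuity estimate on simulated data.
# Everything here is contrived for illustration.
import random

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def rd_estimate(scores, outcomes, cutoff):
    """Jump between the two fitted lines, evaluated at the cutoff."""
    left = [(s, y) for s, y in zip(scores, outcomes) if s < cutoff]
    right = [(s, y) for s, y in zip(scores, outcomes) if s >= cutoff]
    bl, al = linear_fit([s for s, _ in left], [y for _, y in left])
    br, ar = linear_fit([s for s, _ in right], [y for _, y in right])
    return (ar + br * cutoff) - (al + bl * cutoff)

random.seed(0)
scores = [random.uniform(0, 100) for _ in range(400)]
true_effect = -5.0  # the program lowers the outcome by 5 units (contrived)
outcomes = [0.2 * s + (true_effect if s >= 50 else 0.0) + random.gauss(0, 1)
            for s in scores]
print(f"estimated effect at cutoff: {rd_estimate(scores, outcomes, 50):.2f}")
```

The design's appeal for criminal justice settings is that assignment by an explicit score and cutoff (a risk score, for example) is administratively common, so no case need be randomized away from the program.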

As with measurement issues, there is much that agencies interested in high-quality impact evaluations could do to advance methodological improvement in evaluation design, and at relatively modest cost. Side studies on design could be added to large evaluation projects: for instance, small quasi-experimental control groups of different sorts to compare with randomized controls, or supplementary data collections that allow exploration of potentially important control variables for statistical modeling. Where small-scale or pilot evaluation studies are appropriate, innovative designs could be tried out to build more experience and better understanding of them. Secondary analysis of existing data and simulations with contrived data could also be supported to explore certain critical design issues. In a similar spirit, meta-analysis of existing studies could be undertaken with a focus on methodological influences in contrast to the typical meta-analytic orientation to program effects.
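
Simulations with contrived data of the kind suggested here can be quite simple. The sketch below, with invented parameters, shows one such exercise: when an uncontrolled variable drives both program entry and outcomes, a naive comparison of group means is biased, while randomized assignment recovers the true effect.

```python
# Contrived simulation: selection bias in an observational comparison versus
# a randomized one. All parameters are invented for illustration.
import random

random.seed(1)
TRUE_EFFECT = -2.0  # the program lowers the outcome by 2 units (contrived)
N = 20000

def outcome(treated, risk):
    """Outcome depends on underlying risk, treatment, and noise."""
    return 3.0 * risk + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)

risks = [random.random() for _ in range(N)]

# Self-selection: higher-risk individuals are more likely to enter the program
obs_treat = [random.random() < r for r in risks]
obs_y = [outcome(t, r) for t, r in zip(obs_treat, risks)]

# Randomized assignment ignores risk entirely
rnd_treat = [random.random() < 0.5 for _ in range(N)]
rnd_y = [outcome(t, r) for t, r in zip(rnd_treat, risks)]

def diff_in_means(treat, y):
    t = [v for v, f in zip(y, treat) if f]
    c = [v for v, f in zip(y, treat) if not f]
    return sum(t) / len(t) - sum(c) / len(c)

print("naive observational estimate:", round(diff_in_means(obs_treat, obs_y), 2))
print("randomized estimate:        ", round(diff_in_means(rnd_treat, rnd_y), 2))
```

In this contrived setup the naive comparison is biased toward zero (the program looks less effective than it is, because treated cases were riskier to begin with); exercises like this can quantify how badly a proposed quasi-experimental design might mislead before it is fielded.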
