6 Implementing Impact Evaluations in the Field
Pages 151-176



From page 151...
... Thus, even if one is willing to accept in principle the desirability of adopting the methodology of randomized evaluation, it is reasonable to wonder how readily it can be applied to the sorts of programs that USAID missions in the field regularly undertake. To find out, the committee commissioned three expert teams to visit USAID missions overseas to assess the viability of impact evaluations for past and present DG programming.
From page 152...
... The second part provides responses to the most commonly raised objections that the committee and its field teams heard expressed about the use of randomized evaluations in DG programs. Before turning to the details of what the field teams found, it is important to highlight a clear and consistent message that came through from all three field visits.
From page 153...
... The teams focused on local government/decentralization in Albania and Peru and support for multiparty democracy in Uganda. Each field team was composed of methodological consultants (academic or other experts with relevant experience in research design or program evaluation, DG issues, and country or regional expertise); a Washington-based USAID staff member who was familiar with the mission, the committee's work, and USAID policies and practices; and National Research Council professional staff, who assisted the consultants in meeting the team's objectives and coordinated the logistics of the field visits. In evaluating the findings of the three field teams, it is important to keep in mind that they visited missions that had expressed an interest in improving their evaluation strategies.
From page 154...
... We begin with a decentralization project in Peru that has already been implemented, outlining how the project monitoring strategy that was employed could have been adjusted to accommodate a randomized component that would have made it an impact evaluation design and showing how such an adjustment would have permitted the mission to generate much stronger inferences about project impact. Then a planned multipronged effort to support multiparty democracy in Uganda is described, emphasizing how pieces of the existing project might be amenable to randomized evaluation and showing how adopting such an evaluation method would improve USAID's ability to assess the project's effects. The committee's goal is to use these projects as illustrations of the potential payoffs that could accrue from improved evaluation strategies. The discussion here of decentralization in Peru is drawn from the report of a field team led by Thad Dunning, assistant professor of political science, Yale University.
From page 155...
... program was intended to:
• support the implementation of mechanisms for citizen participation with subnational governments (such as "participatory budgeting"),
• strengthen the management skills of subnational governments in selected regions of Peru, and
• increase the capacity of nongovernmental organizations in these same regions to interact with their local government (USAID/Peru 2002)
From page 156...
... As mentioned, the decentralization project sought to foster citizen participation, transparency, and accountability at the local level, with the ultimate objective of promoting "increased responsiveness of subnational elected governments to citizens." Though some of these outcomes are potentially, albeit imperfectly, measurable, indicators gathered at the local level related almost exclusively to outputs rather than outcomes. For example, the indicators gathered included the percentage of municipalities that signed "participation agreements" with local contractors; the percentage of participating municipalities from which at least two individuals (local authorities or representatives of CSOs)
From page 157...
... USAID/Peru's implementer was tasked with carrying out the decentralization project in all 536 districts of the seven selected regions. Once the rollout of interventions in all municipalities had been completed, no untreated municipalities remained available in the selected regions.
From page 158...
... The exception is the 2006 commissioned survey taken as a part of the Latin American Public Opinion Project (LAPOP), which administered a questionnaire to a nationwide probability sample of adults, including an oversample of residents in the seven regions in which USAID works (Carrión et al. 2007)
From page 159...
... As just one example, many municipalities in the seven regions had been ravaged by the conflict with the Shining Path during the 1980s and 1990s. Investment and population return have picked up in some areas during the past decade, especially the past five years; at least some of this upturn must be due to the end of the war and other factors. Improvements in measured municipal capacity or in citizens' perceptions of local government responsiveness during the life of the program may, therefore, not be readily attributable to USAID support for decentralization.
From page 160...
... Survey evidence on citizens' perceptions of local government responsiveness would be useful, as would information on participation in local government and evaluations of municipal governance capacity taken across all municipalities in the seven regions (both treated and untreated)
From page 161...
... However, another possibility discussed below is to implement a more complex design in which different municipalities would be randomized to different bundles of interventions. USAID/Peru is preparing to roll out a second five-year phase of the decentralization project, possibly again in the seven regions in which it typically works.
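To illustrate what randomizing municipalities to different bundles of interventions might look like in practice, here is a minimal sketch in Python; the intervention components, the full-factorial bundle structure, the municipality identifiers, and the round-robin allocation are illustrative assumptions rather than the mission's actual plan.

```python
import itertools
import random

# Hypothetical intervention components that could be bundled together.
COMPONENTS = ["participatory budgeting support", "management training", "CSO strengthening"]

# Every possible bundle, from "no components" (pure control) up to all three
# components together: a 2x2x2 factorial structure.
bundles = [tuple(c for c, on in zip(COMPONENTS, bits) if on)
           for bits in itertools.product([0, 1], repeat=len(COMPONENTS))]

municipalities = [f"municipality_{i:03d}" for i in range(1, 41)]  # illustrative units

rng = random.Random(123)  # fixed seed for a reproducible, auditable assignment
rng.shuffle(municipalities)

# Deal the shuffled municipalities out evenly across bundles, round-robin style.
assignment = {m: bundles[i % len(bundles)] for i, m in enumerate(municipalities)}
print(assignment["municipality_001"])
```

Because each component appears in half of the bundles, such a design would, in principle, allow the separate and combined effects of the components to be estimated.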
From page 162...
... Current monitoring efforts do not give USAID evidence about the impact of investments in local government, yet such decentralization and local government strengthening projects are a staple in the USAID DG toolbox. The good news is that the committee's field team concluded that a randomized evaluation of key aspects of the Peru decentralization project would be feasible with only modest adjustments in project design.
From page 163...
... Support for CSOs
One of the core activities envisioned in the Linkages project is a capacity-building program with grants to CSOs to enable them to monitor local governments and help improve representation and service delivery at the local level (USAID/Uganda 2007a)
From page 164...
... to evaluate the total impact of awarding a grant. Carefully matched groups of three subcounties would be selected purposively so that the subcounties in each group are similar along a number of dimensions that are measurable and likely to be associated with CSO capacity and government service delivery for HIV/AIDS programs.
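As a rough sketch of how such a matched design could be operationalized (the covariates, subcounty labels, treatment-arm names, and greedy matching rule below are all hypothetical, not the field team's protocol), subcounties could first be grouped into triplets of similar units and treatment then assigned at random within each triplet:

```python
import random

# Hypothetical covariates for each subcounty (illustrative values only):
# (number of active CSOs, HIV/AIDS service-delivery score).
subcounties = {
    "A": (4, 0.52), "B": (5, 0.55), "C": (4, 0.50),
    "D": (12, 0.80), "E": (11, 0.78), "F": (13, 0.83),
    "G": (7, 0.61), "H": (8, 0.66), "I": (7, 0.63),
}

def distance(x, y):
    """Euclidean distance between two covariate profiles."""
    return ((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) ** 0.5

def form_triplets(units):
    """Greedily group subcounties into matched triplets of similar units."""
    remaining = dict(units)
    triplets = []
    while len(remaining) >= 3:
        name, cov = next(iter(remaining.items()))
        del remaining[name]
        # Match with the two closest remaining subcounties.
        matches = sorted(remaining, key=lambda m: distance(cov, remaining[m]))[:2]
        for m in matches:
            del remaining[m]
        triplets.append([name] + matches)
    return triplets

# Illustrative arm labels; the actual arms would follow the program design.
ARMS = ["CSO grant", "alternative support", "control"]

random.seed(2008)  # fixed seed makes the assignment reproducible and auditable
for triplet in form_triplets(subcounties):
    for subcounty, arm in zip(triplet, random.sample(ARMS, k=len(ARMS))):
        print(f"{subcounty}: {arm}")
```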
From page 165...
... To evaluate the effect of CSO grants on the delivery of government services, data could be collected on HIV/AIDS services and outcomes within each subcounty. Many of these data may already be collected by the government (such as the periodic National Service Delivery Survey conducted by the Uganda Bureau of Statistics -- though perhaps USAID would need to fund an oversampling of respondents in treatment and control subcounties)
From page 166...
... Ideally, baseline data would be collected before implementation of the program and then again during and after. USAID could also investigate the possibility of contributing to ongoing data collection efforts by the government or other agencies (such as the yearly school census, the service delivery survey, the Afrobarometer public opinion survey, and public expenditure tracking surveys)
From page 167...
... Randomized evaluation offers a powerful tool for assessing the impact of interparty dialogues. Five voting precincts could be randomly selected to be in the treatment group for each of 14 different parliamentary constituencies.
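A minimal sketch of that stratified assignment, treating each parliamentary constituency as a stratum and drawing five treatment precincts within it (the precinct identifiers and helper function below are hypothetical):

```python
import random

def assign_treatment_precincts(precincts_by_constituency, n_treated=5, seed=42):
    """Randomly pick n_treated precincts per constituency for the treatment group.

    `precincts_by_constituency` maps a constituency name to the list of its
    voting precincts; precincts not selected serve as the control group.
    """
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    treatment, control = {}, {}
    for constituency, precincts in precincts_by_constituency.items():
        treated = rng.sample(precincts, k=n_treated)
        treatment[constituency] = sorted(treated)
        control[constituency] = sorted(set(precincts) - set(treated))
    return treatment, control

# Hypothetical example: 14 constituencies with 20 precincts each.
precincts = {f"constituency_{i:02d}": [f"precinct_{i:02d}_{j:02d}" for j in range(20)]
             for i in range(1, 15)}
treated, untreated = assign_treatment_precincts(precincts)
print(treated["constituency_01"])  # the 5 precincts assigned to the treatment group
```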
From page 168...
... Although these evaluation models do not cover every planned intervention currently under consideration by the Uganda mission, if implemented, they would provide substantial new evidence about the efficacy of USAID DG programming in Uganda.
Challenges in Applying Randomized Evaluation to DG Programs
The evaluation designs described above are the basis for the unanimous conclusion of the field teams that randomized evaluations, apart from being valuable where they can be successfully applied, are also feasible designs for measuring the impact of (at least some)
From page 169...
... For some USAID staff and implementers with whom the committee spoke, this was a major reason to resist adoption of randomized evaluations. It was pointed out, for example, that in many situations USAID and its implementers can only work with local authorities that accept their help.
From page 170...
... Continuing to channel scarce resources to projects that, once properly evaluated, turn out to have no positive impact is wasteful, particularly when properly executed randomized evaluations could put USAID in a position to identify projects that do work and whose reach and impact could usefully be expanded with a shift in resources from those that have been found to be underperforming. A second defense of randomized assignments against the criticism that some units will go untreated is that, in any project being implemented across a large number of potential units, there will virtually always be untreated units.
From page 171...
... Taking advantage of the fact that the treatment is randomly assigned across space, they estimate the size of these spillover effects and then use the estimates to calculate the true effects of the deworming program, which they find to be positive once the spillover effects are accounted for. Their study underscores that not just minimizing but also measuring contamination must be a core aspect of any well-designed randomized evaluation.
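The general logic can be sketched with simulated data (this is an illustration of the idea, not Miguel and Kremer's actual specification): because a unit's own treatment is randomized across space, the share of its neighbors that are treated is also as good as random and can be included as a regressor to estimate the spillover effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Simulated spatial experiment: each unit's own treatment is randomized,
# so the share of its neighbors that are treated is also as good as random.
own_treatment = rng.integers(0, 2, size=n)
treated_neighbor_share = rng.uniform(0, 1, size=n)

# Illustrative "true" effects: a direct effect of 2.0 and a positive
# spillover of 1.0 per unit increase in the treated-neighbor share.
outcome = (2.0 * own_treatment
           + 1.0 * treated_neighbor_share
           + rng.normal(0, 1, size=n))

# OLS of the outcome on own treatment and the treated-neighbor share.
X = np.column_stack([np.ones(n), own_treatment, treated_neighbor_share])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
intercept, direct_effect, spillover_effect = coef

# A naive treated-vs-untreated comparison captures only the direct effect;
# including the spillover term recovers both components of the program's
# total effect, which is the point the deworming study makes.
print(f"estimated direct effect:    {direct_effect:.2f}")
print(f"estimated spillover effect: {spillover_effect:.2f}")
```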
From page 172...
... A common concern the field teams heard was that randomized evaluations are insufficiently flexible to be practical. As a political officer at the U.S.
From page 173...
... While the idea of randomized evaluation is intuitive and easy to understand, the design of high-quality randomized evaluations requires additional academic training, specialized expertise, and good instincts for research design. It is likely that many (or most)
From page 174...
... It will cost too much to conduct randomized evaluations. Perhaps the most important objection the committee encountered in the field is that randomized evaluation will cost too much.
From page 175...
... The real peril lies in believing wrongly that the consequences of a program are, in fact, known and allocating resources on that basis when the hypotheses behind a program have not been tested by impact evaluations.
Conclusions
The committee's consultants believed they had demonstrated that at least some of the types of projects USAID is now undertaking could be subject to the most powerful impact evaluation designs -- large N randomized evaluations -- within the normal parameters of the project design.
From page 176...
... It is, therefore, recommended in Chapter 9, as part of a broader effort to improve evaluations and learning regarding DG programs at USAID, that USAID begin with a limited but high-visibility initiative to provide a test of the feasibility and value of applying impact evaluation methods to a select number of its DG projects.
REFERENCES
Carrión, J.F., Zárate, P., and Seligson, M.A.

