

3 Producing High-Quality Economic Evidence to Inform Investments in Children, Youth, and Families
Pages 83-158



From page 83...
... High-quality economic evidence can be difficult to derive because economic evaluation methods are complex and entail many assumptions (Crowley et al., 2014; Lee and Aos, 2011; Vining and Weimer, 2009a)
From page 84...
... In addition, the best practices for producing high-quality evidence recommended at the end of the chapter are divided into those that can be viewed as "core" and readily implemented in most circumstances and those the committee characterizes as "advancing," to be pursued when feasible. The focus of this chapter extends to highlighting best practices for re
2 In this chapter, the committee discusses at some length both the strengths and limitations of economic evaluations and the economic evidence produced.
From page 85...
... Finally, although much of this chapter is directed at producers of economic evidence, its content should also be of interest to consumers of the evidence. Consumers can benefit from understanding the analytic issues associated with planning for and conducting economic evaluations, the best practices for the production and reporting of economic evidence, and the limitations of economic evaluation methods.
From page 86...
... of interest and the information available. This section highlights the requirements for undertaking a high-quality economic evaluation, beginning with the most general requirements and then focusing on those that are specific to different economic evaluation methods.
From page 87...
... BCA = benefit-cost analysis, BIA = budgetary impact analysis, CA = cost analysis, CEA = cost-effectiveness analysis, DALY = disability-adjusted life year, QALY = quality-adjusted life year, ROI = return on investment.
From page 88...
... Without a clear understanding of the base case, the counterfactual, and other contextual factors -- "what is delivered for whom, under what conditions, and relative to what alternative" -- interpretation of the results of economic evaluation will be muddy.

Other Requirements for Economic Evaluation

Provided that an intervention is well defined and the counterfactual and other contextual factors can be specified, a CA can be performed to understand the economic cost of the resources required for implementation or to provide the foundation for a CEA or BCA.
From page 89...
... These include the perspective for the analysis, the time horizon and discount rate, and several other analytic features.

Perspective

The perspective for an economic evaluation is determined by the question(s)
From page 90...
... For some economic evaluations, the primary focus may reflect mainly or solely a government perspective, which is just one component of the societal perspective. As noted in Chapter 2, cost-savings analysis is a BCA from the government perspective (Figure 3-1)
From page 91...
... For economic evaluations focused on children, youth, and families, the social discount rate is appropriate (Boardman and Greenberg, 1998)
From page 92...
... A discount rate also may vary by whether one uses a risky or riskless return. Discount rates used in economic evaluation have varied widely, although recommendations in recent years appear to be settling in the range of 3-7 percent (Drummond et al., 2005; Gold et al., 1996; Haddix et al., 2003; Hunink et al., 2001; Office of Management and Budget, 2003; Washington State Institute for Public Policy, 2015)
From page 93...
... Even using a low discount rate, outcomes just a century away would have present values so low that an economic evaluation would likely favor investments that would avoid even small sacrifices in the present, at the cost of potentially significant harm for future generations. Part of the complication here is that discount rates assume investments apply at the margin, so that an extra benefit (valued in dollar terms)
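To make the arithmetic concrete, here is a minimal Python sketch (all dollar amounts hypothetical) of how present value shrinks for an outcome realized a century from now under discount rates in the commonly recommended 3-7 percent range.

```python
def present_value(future_value, discount_rate, years):
    """Discount a future dollar amount back to its value today."""
    return future_value / (1.0 + discount_rate) ** years

# Hypothetical benefit of $1 million realized 100 years from now.
benefit = 1_000_000
for rate in (0.03, 0.05, 0.07):
    pv = present_value(benefit, rate, years=100)
    print(f"discount rate {rate:.0%}: present value = ${pv:,.0f}")

# Even at 3 percent, the present value is only about $52,000; at 7 percent it
# falls below $1,200, which is why distant outcomes carry little weight.
```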
From page 94...
... (1) the monetary unit and year in which all economic values will be denominated, and (2) whether to account for the deadweight cost of taxation.6 For the United States, economic evaluations typically use dollars as the currency measure, but any currency is feasible provided resources used and the value of intervention outcomes can be denominated in that currency.
From page 95...
... While economic evaluations often assume no deadweight loss, a few recent evaluations have produced results assuming different levels of deadweight loss as part of a sensitivity analysis (e.g., as in Heckman et al.
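As a minimal sketch of that kind of sensitivity analysis, assuming hypothetical cost and benefit figures, the example below inflates tax-financed intervention costs by alternative deadweight-loss rates before recomputing net benefits.

```python
def net_benefits(benefits, public_costs, deadweight_rate):
    """Net benefits when each tax dollar raised carries an added excess burden."""
    adjusted_costs = public_costs * (1.0 + deadweight_rate)
    return benefits - adjusted_costs

benefits = 250_000      # hypothetical discounted benefits
public_costs = 180_000  # hypothetical tax-financed intervention costs

for rate in (0.0, 0.15, 0.30):  # assumed deadweight-loss rates for the sensitivity analysis
    print(f"deadweight loss {rate:.0%}: net benefits = ${net_benefits(benefits, public_costs, rate):,.0f}")
```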
From page 96...
... The discussion of best practices in this section draws on a review and synthesis of guidelines for conducting CA in the literature. In so doing, it provides additional support for practices discussed earlier in the chapter that are relevant to economic evaluation methods in general, such as defining the purpose and scope of the analysis and the intervention to be analyzed.
From page 97...
... , and almost always recommend this perspective for BCAs (Calculating the Costs of Child Welfare Services Workgroup, 2013; Commonwealth of Australia, 2006; European Regional Development Fund, 2013; Treasury Board of Canada Secretariat, 2007; World Health Organization, 2006)
From page 98...
... .

CONCLUSION: The societal perspective is the most commonly recommended perspective for researchers conducting cost analysis (CA)
From page 99...
... The first is a macro, top-down approach that uses total public spending (or individual site budget or expenditure) data to provide gross average estimates of intervention costs.8 The other is a bottom-up approach known as micro costing that relies on identifying all resources required to implement an intervention and then valuing those resources in monetary units to estimate intervention costs.
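The contrast between the two approaches can be illustrated with a short Python sketch using entirely hypothetical figures: the top-down estimate divides total spending by the number of participants served, while the micro-costing estimate identifies each resource and multiplies its quantity by a unit price.

```python
# Top-down (macro) approach: gross average cost from total public spending.
total_spending = 1_200_000   # hypothetical annual program budget
participants = 400
top_down_cost = total_spending / participants

# Bottom-up (micro-costing) approach: value every resource required.
resources = [
    # (resource, quantity, unit price) -- all values hypothetical
    ("caseworker hours", 15_000, 45.00),
    ("supervisor hours", 2_000, 65.00),
    ("facility space, sq ft-months", 12_000, 2.50),
    ("materials, per family kit", 400, 120.00),
]
micro_total = sum(quantity * price for _, quantity, price in resources)
micro_cost = micro_total / participants

print(f"top-down estimate:      ${top_down_cost:,.0f} per participant")
print(f"micro-costing estimate: ${micro_cost:,.0f} per participant")
```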
From page 100...
... Often these costs are shared by more than one intervention or used to create more than one output, or may be defined as expenses that directly benefit the agency (American Humane Association, 2009; Calculating the Costs of Child Welfare Services Workgroup, 2013; Capri et al., 2001; Chatterji et al., 2001; Cisler et al., 1998; Derzon et al., 2005; European Regional Development Fund, 2013; Federal Accounting Standards Advisory Board, 2014; Foster et al., 2003; Graf von der Schulenburg and Hoffmann, 2000; Haute Autorité de Santé, 2012; Institute for Quality and Efficiency in Health Care, 2009; Leonard, 2009; National Center for Environmental Economics, 2010; Pritchard and Sculpher, 2000; Suter, 2010; Task Force on Community Preventive Services, 2005; Treasury Board of Canada Secretariat, 2007)
From page 101...
... . However, it is important to recognize that not all management tasks are indirect costs (Calculating the Costs of Child Welfare Services Workgroup, 2013)
From page 102...
... . Examples of the use of shadow prices are the shadow wage rate for adjusting labor prices to account for distortions in the labor market and the shadow price of capital, which is used to adjust the valuation of costs for the effects of government projects on resource allocation in the private sector (European Commission, 2008; National Center for Environmental Economics, 2010; Office of Management and Budget, 1992; World Health Organization, 2006)
From page 103...
... . Recommendations on sensitivity analyses are usually generic and often are centered on the discussion of discount rates.
From page 104...
... Also important is acknowledging that some unit cost estimates are more robust than others. Specifying where data are limited sets the stage for sensitivity analysis of cost estimates based on those variables and creates a research agenda for those implementing the intervention in the future.
From page 105...
... Other issues common in economic evaluations of interventions serving children, youth, and families include how to combine evidence when multiple evaluations of a given intervention exist, which effects to include in a CEA or BCA, and how to handle uncertainty in intervention effect sizes -- all addressed in subsequent sections of this chapter.

Research Designs and Evidence of Intervention Impact

An underlying premise of CEA and BCA is that the outcomes being subjected to economic evaluation were caused by the intervention.13 Logic models can help articulate an intervention's putative causal mechanisms, but evidence that the intervention caused an outcome comes from certain research designs used in program evaluation.
From page 106...
... 17 As is noted throughout this report, describing the research design and sample on which an economic evaluation is based aids in accurate interpretation of the evaluation results. For regression discontinuity designs, this is particularly important as impacts apply to participants
From page 107...
... Ultimately, analysts conducting economic evaluations need to assess and describe the overall quality of the impact evaluation evidence that forms the basis for the economic evaluation, whether that evidence comes from experimental or quasi-experimental designs or both. For some interventions, strong evidence will come from experimental designs.
From page 108...
... Third, when available evidence is based on nonoptimal designs or impact estimates are likely to be biased, it may be possible to proceed with caution, including acknowledging possible bias and conducting sensitivity analyses on economic findings. Finally, if economic evaluations appear too speculative to be valid, the next best step may be to attempt a higher-quality efficacy analysis that produces higher-quality evidence of impact.
From page 109...
... When the value of the outcomes exceeds the intervention costs (after both have been adjusted as necessary for inflation and discounting), an intervention can be said to be "cost-beneficial." For example, a BCA of an obesity prevention intervention would involve estimating the economic gains, or benefits, from reducing obesity and determining whether they exceeded intervention investments (after both had been adjusted for inflation and discounting)
From page 110...
... Haddix and colleagues (2003) suggest this is why CEA became the dominant analytic method for economic evaluation in health care after the 1980s.
From page 111...
... and are important to the economic evaluation of social programs. 21 Double counting also becomes an issue if some outcomes are inputs for other outcomes.
From page 112...
... A related concern is that randomized controlled trials of interventions by the interventions' developers often collect longitudinal data on scores of outcomes indicated by program logic models, test significance by time period and subgroup, and then produce confidence intervals by time period-subgroup combination. Statistical test results may or may not be corrected for multiple comparisons and the possibility of chance findings.
From page 113...
... Linked economic impacts are not measured directly in the impact evaluation but are caused by other, direct intervention impacts.24 If a truancy prevention program causes lower absenteeism and higher graduation rates, for example, its linked impacts may be increased college enrollment, better-paying jobs, etc. As another example, expanding Medicaid and the State Children's Health Insurance Program is associated with increased tax receipts (Brown et al., 2015)
From page 114...
... Table excerpt comparing three economic evaluations: Casey Family Programs intensive foster care, the Perry Preschool Program, and the Communities That Care (CTC) prevention system.

Evaluation focus:
• Casey Family Programs -- children who received private intensive foster care services compared with children who received public foster care services
• Perry Preschool Program -- children who participated in the Perry Preschool Program compared with children who did not
• Communities That Care (CTC) -- effects of the prevention system on grade 12 outcomes observed in a panel of youth involved in a community randomized trial of CTC efficacy

Treatment, Control, or Comparison Group:
• Casey Family Programs -- Quasi-experimental design: Treatment -- children placed in Casey Family Programs intensive foster care services; Control -- children placed in a public program (matched using propensity scoring methods)
• Perry Preschool Program -- Randomized controlled trial: Treatment -- children received 1-2 years of intensive preschool (center-based program, home visits, parent groups); Control -- children did not
• Communities That Care (CTC) -- Community-randomized trial: Treatment -- panel of youth from 12 communities trained to implement the CTC prevention system; Control -- panel of youth from 12 communities matched in pairs within
From page 115...
... More positive relationships with relatives -- not monetized

Program Outcomes with Direct Economic Impacts:
• Casey Family Programs -- physical disorders; mental health disorders
• Perry Preschool Program -- education; earnings and taxes; crime; welfare
• Communities That Care (CTC) -- delinquency

Linked Economic Impacts:
• Casey Family Programs -- lifetime earnings and taxes, from higher educational attainment
• Perry Preschool Program -- lifetime earnings and taxes, from higher educational attainment
• Communities That Care (CTC) -- lifetime alcohol disorder and heavy regular smoking, from lower onset of alcohol use and cigarette smoking; educational attainment, from lower onset of delinquency

Actual or Estimated Economic Impacts:
• Casey Family Programs -- combination
• Perry Preschool Program -- estimated
• Communities That Care (CTC) -- estimated
From page 116...
... In other cases, projections are more complex. For example, the Washington State Institute for Public Policy has developed a sophisticated model for estimating the impact of reducing delinquency and crime that involves the marginal and operating costs for different types of crime, recidivism rates and related costs, victimization costs related to dif
From page 117...
... Valuing Outcomes for Economic Evaluation

To estimate the economic benefits of an intervention (e.g., avoided costs, increased income, increased tax revenues), the analyst must assign prices to its impacts, a process that includes consideration of price changes over time.
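One routine piece of that process is restating prices observed in different years in a common base year. The sketch below is a minimal illustration with a made-up price index; a real analysis would use a published deflator such as the CPI, and the choice of index is itself an assumption to report.

```python
# Hypothetical price index by calendar year (illustrative values only).
price_index = {2010: 100.0, 2015: 110.0, 2020: 121.0}

def to_base_year(amount, observed_year, base_year=2020):
    """Restate a dollar amount from the year it was observed into base-year dollars."""
    return amount * price_index[base_year] / price_index[observed_year]

# Hypothetical per-participant values observed in different years.
for year, amount in [(2010, 1_500.0), (2015, 2_200.0)]:
    print(f"{year}: ${amount:,.0f} -> ${to_base_year(amount, year):,.2f} in 2020 dollars")
```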
From page 118...
... For example, although there is not a direct market for educational attainment, its shadow price is routinely captured in BCAs through higher wages associated with higher levels of attainment (Oreopoulos and Petronijevic, 2013; Washington State Institute for Public Policy, 2015)
From page 119...
... . Among stated preference methods, the discrete choice approach has "become the most frequently applied approach in healthcare" (Johnson et al., 2013)
From page 120...
... . Adolescents consequently take irrational risks, with both their revealed and stated preferences for risk being skewed downward from their preferences at maturity.
From page 121...
... Indirect economic values, or shadow prices, are used to capture economic values using various methods, such as linking the outcome of interest to another outcome that can be valued. Revealed and stated preference methods can be used to estimate willingness to pay, potentially enabling both tangible and intangible outcomes to be valued.
From page 122...
... In addition, if the value of quality-of-life impacts is omitted, it is difficult to compare the economic returns on investments in different sectors, such as health interventions versus prevention or other social interventions. Nonetheless, the difficulty of measuring and valuing quality-of-life impacts is a long-standing concern regarding economic evaluations of interventions serving children, youth, and families.
From page 123...
... . To better inform those choices, more than a third of recent cost-utility studies conducted a sensitivity analysis using a cost-effectiveness acceptability curve (CEAC)
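A cost-effectiveness acceptability curve can be traced from simulated incremental costs and effects: at each candidate willingness-to-pay threshold it reports the share of simulation draws in which the intervention's net monetary benefit is positive. The sketch below uses made-up normal distributions and is an illustration of the idea, not a reproduction of any study's model.

```python
import random

random.seed(1)

# Hypothetical uncertainty in incremental cost (dollars) and incremental effect
# (e.g., QALYs gained) per participant, drawn from assumed normal distributions.
draws = [(random.gauss(3_000, 600), random.gauss(0.05, 0.02)) for _ in range(10_000)]

def probability_cost_effective(threshold):
    """Share of draws with positive net monetary benefit at a given threshold."""
    favorable = sum(1 for cost, effect in draws if threshold * effect - cost > 0)
    return favorable / len(draws)

for wtp in (20_000, 50_000, 100_000, 150_000):  # willingness to pay per unit of effect
    print(f"WTP ${wtp:,}: probability cost-effective = {probability_cost_effective(wtp):.2f}")
```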
From page 124...
... Several generic QALY or DALY scales are commonly used in economic evaluation of health services (McDowell, 2006; Miller, 2000)
From page 125...
... Approaches for this purpose have been developed for use in economic evaluations in other policy areas, such as health and the environment, which may provide promising strategies for use in economic evaluations of social programs. A quality-of-life measure embedded in a willingness-to-pay estimate, or one that can be compared with the quality-adjusted life year/disability-adjusted life year measures used in economic evaluations of health interventions, would facilitate comparison across sectors.
From page 126...
... Summary Measures: Cost Analysis

Total costs of an intervention are its aggregate costs, calculated by multiplying all resources used by their unit costs and then summing these totals. As noted earlier, if costs are incurred over time, decisions about inflation and discounting are applied.
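Stated as code, the total-cost calculation with discounting might look like the following minimal sketch, where the multi-year cost stream and participant count are hypothetical and inflation adjustment to a common price year is assumed to have been done already.

```python
def discounted_total_cost(annual_costs, discount_rate=0.03):
    """Sum a multi-year cost stream, discounting each year back to year 0.

    annual_costs: nominal costs by program year, with year 0 listed first
    (already restated in a common price year).
    """
    return sum(cost / (1.0 + discount_rate) ** year
               for year, cost in enumerate(annual_costs))

# Hypothetical three-year implementation: start-up heavy, then steady state.
annual_costs = [500_000, 350_000, 350_000]
total = discounted_total_cost(annual_costs)
participants = 600
print(f"total cost: ${total:,.0f}; cost per participant: ${total / participants:,.0f}")
```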
From page 127...
... Summary Measures: Cost-Effectiveness Analysis

The main summary measure in a CEA is the cost-effectiveness (CE) ratio, derived by dividing the intervention costs (discounted and adjusted for inflation, as appropriate)
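A minimal sketch of the CE ratio just described, with hypothetical numbers: the incremental (discounted, inflation-adjusted) cost of the intervention relative to its comparator, divided by the incremental effect.

```python
def cost_effectiveness_ratio(incremental_cost, incremental_effect):
    """Cost per additional unit of outcome relative to the comparison condition."""
    if incremental_effect == 0:
        raise ValueError("CE ratio is undefined when there is no incremental effect")
    return incremental_cost / incremental_effect

# Hypothetical example: the intervention costs $1,200 more per participant and
# prevents 0.04 additional cases of the targeted outcome per participant.
print(f"${cost_effectiveness_ratio(1_200, 0.04):,.0f} per case prevented")
```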
From page 128...
... Box 3-1, presented earlier, includes discussion of the lack of theoretical justification for any threshold for a maximum acceptable cost per QALY or DALY and the ad hoc approaches used to handle the question.

Summary Measures: Benefit-Cost Analysis

The preferred summary measure for BCA is net present value (NPV)
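The NPV calculation can be sketched as follows, with hypothetical per-participant cost and benefit streams: both streams are discounted to the present, and NPV is the difference; a benefit-cost ratio can be reported alongside it.

```python
def present_value(stream, rate):
    """Discount a list of annual amounts (year 0 first) back to year 0."""
    return sum(amount / (1.0 + rate) ** year for year, amount in enumerate(stream))

# Hypothetical per-participant streams: costs incurred early, benefits accruing later.
costs = [4_000, 1_000, 0, 0, 0, 0]
benefits = [0, 500, 1_500, 2_000, 2_000, 2_000]
rate = 0.03

pv_costs = present_value(costs, rate)
pv_benefits = present_value(benefits, rate)
print(f"NPV = ${pv_benefits - pv_costs:,.0f}; benefit-cost ratio = {pv_benefits / pv_costs:.2f}")
```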
From page 129...
... Economic evaluations producing summary measures based on nonlocal samples and prices may not inform local decisions. Understanding how investments yield benefits depends critically on preintervention problem levels as well as an intervention's reach; that is, context matters (see Chapter 4)
From page 130...
... Together, such estimates can prevent consumers of the evidence derived from BCAs, in particular, from assuming that the intervention with the lowest cost or highest total savings is inherently the better choice.

CONCLUSION: The literature supports a number of summary measures for economic evaluation:
• Cost analysis -- In addition to total cost, informative summary measures include the unit cost of the investment (e.g., cost per participant or average cost)
From page 131...
... found that only 4 of them reported the standard errors of their results. Sensitivity analysis, as alluded to earlier, is used to address uncertainty in an economic evaluation.
From page 132...
... Such analyses, for example, can demonstrate the effects on economic evaluation results of smaller outcomes expected in intervention replication, a shorter time horizon for benefits, omission of an outcome with weak evidence, or an alternative discount rate. One-way sensitivity analyses also can provide information that facilitates comparisons across analyses.
From page 133...
... The literature supports the following practices for addressing uncertainty in high-quality economic evaluations:
• An emerging best practice for providing a comprehensive assessment of the implications of multiple sources of uncertainty is the use of Monte Carlo methods -- either alone or in combination with one-way sensitivity analyses.
• In the case of benefit-cost analyses, a recommended summary measure from Monte Carlo simulations is the proportion of trials with positive net benefits.
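A minimal Monte Carlo sketch of that summary measure, with assumed (hypothetical) input distributions: each trial draws one value of each uncertain input, and the reported statistic is the proportion of trials in which net benefits are positive.

```python
import random

random.seed(42)

def simulate_net_benefit():
    """One Monte Carlo trial using hypothetical input distributions."""
    cost = random.gauss(5_000, 500)                # per-participant cost
    effect = random.gauss(0.08, 0.03)              # impact on the targeted outcome
    value_per_unit = random.gauss(60_000, 10_000)  # monetized value of one unit of impact
    return effect * value_per_unit - cost

trials = [simulate_net_benefit() for _ in range(20_000)]
share_positive = sum(1 for nb in trials if nb > 0) / len(trials)
print(f"proportion of trials with positive net benefits: {share_positive:.2f}")
```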
From page 134...
... Accordingly, some economic evaluations assign differential weights to the costs and benefits accruing to different subgroups affected by an intervention based on the differential means or needs of the subgroups or variation in other socially relevant characteristics. The challenge in defining and applying such weights is to ascertain the appropriate weights to use, as they may vary across different members of the target audience for an economic evaluation.
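To make the weighting step explicit, here is a minimal sketch with hypothetical subgroup results: net benefits are computed per subgroup and then combined using distributional weights chosen by the analyst, a normative assumption that should itself be reported and varied.

```python
# Hypothetical per-participant net benefits and sample sizes by subgroup.
subgroup_results = {
    "lower-income families":  {"net_benefit": 2_500, "n": 300},
    "higher-income families": {"net_benefit": 1_800, "n": 300},
}

# Distributional weights reflecting differential means or needs -- an explicit
# normative choice, not an output of the analysis.
weights = {"lower-income families": 1.5, "higher-income families": 1.0}

unweighted = sum(r["net_benefit"] * r["n"] for r in subgroup_results.values())
weighted = sum(weights[g] * r["net_benefit"] * r["n"] for g, r in subgroup_results.items())
print(f"unweighted total net benefits:     ${unweighted:,.0f}")
print(f"equity-weighted total net benefits: ${weighted:,.0f}")
```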
From page 135...
... . CONCLUSION: Acknowledging equity concerns can enhance the quality and usefulness of economic evaluations.
From page 136...
... It also offers recommended best practices for reporting the results of economic evaluations so as to achieve transparency, consistency, and usefulness to decision makers. Although this discussion is geared toward those producing economic evidence, it also should be helpful to consumers of the evidence, particularly with respect to assessing the quality and completeness of the evidence presented to them.
From page 137...
... However, the core practices will not fully resolve limits on study comparability because of the many possible sources of difference among interventions and economic evaluation methods and assumptions. In other areas, the literature and committee members' views were not as clear-cut.
From page 138...
... and the impacts used for the CEA or BCA.
-- Determine the scope of the economic evaluation, including the type of method to be used and the perspective (and any subperspectives)
From page 139...
... Core Practices:
-- Determine an explicit rationale for including intervention impacts
From page 140...
... :
-- Conduct CEA only when an intervention has been evaluated using research designs that can produce unbiased causal estimates of impact.
-- Conduct CEA from a societal perspective to produce the most comprehensive economic estimates.
From page 141...
... Recommendation 2 includes best practices for reporting the results of economic evaluations. As in the best practices under Recom
From page 142...
... To avoid overwhelming users with analytic details that could obscure the bottom line, it may be helpful to prepare a brief summary report along with a separate technical appendix detailing assumptions and methods. Producing clear and comprehensive reports will strengthen the credibility of the evidence derived from economic evaluation for users and facilitate its appropriate use by decision makers.
From page 143...
... to which discounted
-- Summary measures of the economic evaluation results (see below for each specific method)
-- When relevant, results disaggregated by stakeholder
-- The approach for addressing uncertainty, details on how the method was implemented, and the associated standard errors or confidence intervals for all summary measures
-- Sensitivity analyses performed and associated results*
From page 144...
... -- Fixed, variable, and marginal costs
-- The implications of methods (e.g., omission of resources, prices applied) for under- or overestimating intervention costs

In Addition to the Elements for All Methods and for CA, for a CEA Also Report:
-- Which impacts measured in the evaluation are valued in the CEA and which are not*
From page 145...
... Olympia: Washington State Institute for Public Policy. Argyle, M
From page 146...
... . Federal Agency Benefit-Cost Analysis Principles and Standards for Social Programs.
From page 147...
... . Engaging program staff in economic evaluation: Lessons learned and recommendations for practice.
From page 148...
... . Methods for Incorporating Equity into Economic Evaluation of Social Investments.
From page 149...
... . Common methodological flaws in economic evaluations.
From page 150...
... . Issues in the economic evaluation of prevention programs.
From page 151...
... . Choices in Methods for Economic Evaluation.
From page 152...
... Tentative guidelines for using clinical and economic evaluations. Canadian Medical Association Journal, 146(4)
From page 153...
... . Economic Evaluation of Public Health Laws and Their Enforcement.
From page 154...
... . Circular A-94: Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs.
From page 155...
... . General Guidelines for Economic Evaluations from the Pharmaceutical Benefits Board.
From page 156...
... . Economic Evaluation of a Community-Based, Family-Skills Prevention Program.
From page 157...
... . Guidelines on Health Economic Evaluation.
From page 158...
... . Workbook 8: Economic Evaluations.

