
4 Standards for Synthesizing the Body of Evidence
Pages 155-194

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 155...
... should use prespecified methods; include a qualitative synthesis based on essential characteristics of study quality (risk of bias, consistency, precision, directness, reporting bias, and for observational studies, dose–response association, plausible confounding that would change an observed effect, and strength of association); and make an explicit judgment of whether a meta-analysis is appropriate.
From page 156...
... As it did elsewhere in this report, the committee developed this chapter's standards and elements of performance based on available evidence and expert guidance from the Agency for Healthcare Research and Quality (AHRQ) Effective Health Care Program, the Centre for Reviews and Dissemination (CRD, part of the University of York, UK)
From page 157...
... Moreover, the overall assessment of the body of evidence cannot be done until the syntheses are complete. In the context of CER, SRs are produced to help consumers, clinicians, developers of clinical practice guidelines, purchasers, and policy makers to make informed healthcare decisions (Federal Coordinating Council for Comparative Effectiveness Research, 2009; IOM, 2009)
From page 158...
... Because this report focuses on SRs for the purposes of CER and clinical decision making, the committee uses the term "quality of the body of evidence" to describe the extent to which one can be confident that the estimate of an intervention's effectiveness is correct. This terminology is designed to support clinical decision making and is similar to that used by GRADE and adopted by the Cochrane Collaboration and other organizations for the same purpose (Guyatt et al., 2010; Schünemann et al., 2008, 2009)
From page 159...
... Required element: 4.3.1 Explain why a pooled estimate might be useful to decision makers
Standard 4.4 If conducting a meta-analysis, then do the following:
Required elements:
4.4.1 Use expert methodologists to develop, execute, and peer review the meta-analyses
4.4.2 Address the heterogeneity among study effects
4.4.3 Accompany all estimates with measures of statistical uncertainty
4.4.4 Assess the sensitivity of conclusions to changes in the protocol, assumptions, and study selection (sensitivity analysis)
NOTE: The order of the standards does not indicate the sequence in which they are carried out.
From page 160...
... As research documented the variable quality of trials and widespread reporting bias in the publication of trial findings, it became clear that such hierarchies are too simplistic because they do not assess the extent to which the design and implementation of RCTs (or other study designs) avoid biases that may reduce confidence in the measures of effectiveness (Atkins et al., 2004b; Coleman et al., 2009; Harris et al., 2001)
From page 161...
... High (American College of ...): Randomized controlled trials (RCTs) without important limitations or overwhelming evidence from observational studies.
From page 162...
... of randomized trials or n-of-1 trial. For rare harms: SR of case-control studies, or studies revealing dramatic effects.
From page 163...
... High-quality case-control or cohort studies with a very low risk of confounding or bias and a high probability that the relationship is causal.
2− Case-control or cohort studies with a high risk of confounding or bias and a significant risk that the relationship is not causal.
From page 164...
... These characteristics -- risk of bias, consistency, precision, directness, and reporting bias, and for observational studies, dose–response association, plausible confounding that would change an observed effect, and strength of association -- are used by GRADE; the Cochrane Collaboration, which has adopted the GRADE approach; and the AHRQ Effective Health Care Program, which adopted a modified version of the GRADE approach (Owens et al., 2010; Balshem et al., 2011; Falck-Ytter et al., 2010; Schünemann et al., 2008)
From page 165...
... Five characteristics can lower the quality rating for the body of evidence:
• Limitations in study design and conduct
• Inconsistent results across studies
• Indirectness of evidence with respect to the study design, populations, interventions, comparisons, or outcomes
• Imprecision of the estimates of effect
• Publication bias
Three factors can increase the quality rating for the body of evidence because they raise confidence in the certainty of estimates (particularly for observational studies):
• Large magnitude of effect
• Plausible confounding that would reduce the demonstrated effect
• Dose–response gradient
SOURCES: Atkins et al.
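To illustrate how such factors might combine, the following Python sketch walks a body of evidence through a simplified, hypothetical GRADE-style rating: bodies of RCTs start at "high" and observational bodies at "low," each limiting factor lowers the rating one level, and each strengthening factor raises it one level. The starting levels, one-level steps, and function name are assumptions for illustration only, not the official GRADE algorithm.

# Simplified, hypothetical GRADE-style rating sketch (not the official algorithm).
LEVELS = ["very low", "low", "moderate", "high"]

def rate_body_of_evidence(randomized, downgrades, upgrades):
    """randomized: True for a body of RCTs, False for observational studies.
    downgrades: limiting factors present (e.g., "risk of bias", "imprecision").
    upgrades: strengthening factors present (e.g., "large effect")."""
    level = LEVELS.index("high") if randomized else LEVELS.index("low")
    level -= len(downgrades)   # each limitation lowers the rating one level
    level += len(upgrades)     # each strengthening factor raises it one level
    return LEVELS[max(0, min(level, len(LEVELS) - 1))]

# RCTs with inconsistent results and imprecise estimates -> "low"
print(rate_body_of_evidence(True, ["inconsistency", "imprecision"], []))
# Observational studies with a large effect and a dose-response gradient -> "high"
print(rate_body_of_evidence(False, [], ["large effect", "dose-response gradient"]))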
From page 166...
... Evaluation components in all systematic reviews:
• Risk of bias in the design and conduct of studies
• Consistency in the estimates of effect across studies
• Directness of the evidence in linking interventions to health outcomes
• Precision or degree of certainty about an estimate of effect for an outcome
• Applicability of the evidence to specific contexts and populations
Other considerations (particularly with respect to observational studies):
• Dose–response association
• Publication bias
• Presence of confounders that would diminish an observed effect
• Strength of association (magnitude of effect)
From page 167...
... When differences in estimates across studies reflect true differences in a treatment's effect, then inconsistency provides the opportunity to understand and characterize those differences, which may have important implications for clinical practice. If the inconsistency results from biases in study design or improper study execution, then a thorough assessment of these differences may inform future study design.
From page 168...
... This is especially important because numerous clinically relevant factors distinguish clinical trial participants from most patients, such as health status and comorbidities as well as age, gender, race, and ethnicity (Pham et al., 2007; Slone Survey, 2006; Vogeli et al., 2007)
From page 169...
... Plausible Confounding That Would Change an Observed Effect Although controlled trials generally minimize confounding by randomizing subjects to intervention and control groups, obser
From page 170...
... Generally, confounding results in effect sizes that are overestimated. However, sometimes, particularly in observational studies, confounding factors may lead to an underestimation of the effect of an intervention.
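A small numeric illustration in Python (with made-up counts) shows how confounding by disease severity can distort a crude estimate: within each severity stratum the treatment has no effect, yet the crude risk ratio suggests a strong benefit simply because healthier patients were more likely to receive the treatment. The counts and stratum labels are hypothetical.

# Hypothetical example of confounding by disease severity (invented counts).
# Each stratum: (treated events, treated n, control events, control n)
healthy = (40, 800, 10, 200)    # baseline risk 5% in both arms
sick    = (40, 200, 160, 800)   # baseline risk 20% in both arms

def risk_ratio(events_t, n_t, events_c, n_c):
    return (events_t / n_t) / (events_c / n_c)

print(risk_ratio(*healthy))   # 1.0: no effect among healthier patients
print(risk_ratio(*sick))      # 1.0: no effect among sicker patients

# Crude analysis ignoring severity: the treatment looks strongly protective.
crude = risk_ratio(40 + 40, 800 + 200, 10 + 160, 200 + 800)
print(round(crude, 2))        # ~0.47, an artifact of who received treatment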
From page 171...
... and greater risk of bias compared to controlled trials, the design, execution, and statistical analyses in each study should be assessed carefully to determine the influence of potential confounding factors on the observed effect. Strength of association refers to the likelihood that a large observed effect in an observational study is not due to bias from potential confounding factors.
From page 172...
... Standard 4.1 is presented first to reflect the committee's recommendation that the SR specify its methods a priori in the research protocol.
Standard 4.1 -- Use a prespecified method to evaluate the body of evidence
Required elements:
4.1.1 For each outcome, systematically assess the following characteristics of the body of evidence:
• Risk of bias
• Consistency
• Precision
• Directness
• Reporting bias
4.1.2 For bodies of evidence that include observational research, also systematically assess the following characteristics for each outcome:
• Dose–response association
• Plausible confounding that would change the observed effect
• Strength of association
4.1.3 For each outcome specified in the protocol, use consistent language to characterize the level of confidence in the estimates of the effect of an intervention
Rationale
If an SR is to be objective, it should use prespecified, analytic methods. If the SR's assessment of the quality of a body of evidence is to be credible and true to scientific principles, it should be based on agreed-on concepts of study quality.
From page 173...
... The committee uses the term "qualitative synthesis" to refer to an assessment of the body of evidence that goes beyond factual descriptions or tables that, for example, simply detail how many studies were assessed, the reasons for excluding other studies, the range of study sizes and treatments compared, or quality scores of each study as measured by a risk of bias tool. While an accurate description of the body of evidence is essential, it is not sufficient (Atkins, 2007; Mulrow and Lohr, 2001)
From page 174...
... To describe what actually happened to subjects during the study: A description of the actual care and experience of the study participants (in contrast with the original study protocol)
From page 175...
... To describe how the SR findings contrast with conventional wisdom: Sometimes commonly held notions about an intervention or a type of study design are not supported by the body of evidence. If this occurs, the qualitative synthesis should clearly explain how the SR findings differ from the conventional wisdom.
From page 176...
... RECOMMENDED STANDARDS FOR QUALITATIVE SYNTHESIS
The committee recommends the following standard and elements of performance for conducting the qualitative synthesis.
Standard 4.2 -- Conduct a qualitative synthesis
Required elements:
4.2.1 Describe the clinical and methodological characteristics of the included studies, including their size, inclusion or exclusion of important subgroups, timeliness, and other relevant factors
From page 177...
... To give readers a clear understanding of how the evidence applies to real-world clinical circumstances and specific patient populations, SRs should describe -- in easy-to-understand language -- the clinical and methodological characteristics of the individual studies, including their strengths and weaknesses and their relevance to particular populations and clinical settings. META-ANALYSIS This section of the chapter presents the background and rationale for the committee's recommended standards for conducting a meta-analysis: first, considering the issues that determine whether a meta-analysis is appropriate, and second, exploring the fundamental considerations in undertaking a meta-analysis.
From page 178...
... . Fundamentally, a meta-analysis provides a weighted average of treatment effects from the studies in the SR.
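To make the "weighted average" concrete, here is a minimal inverse-variance, fixed-effect pooling sketch in Python. The effect sizes and standard errors are hypothetical, and a real meta-analysis would typically use dedicated software and consider a random-effects model where appropriate.

import numpy as np

# Hypothetical log risk ratios and their standard errors from five studies.
effects = np.array([-0.30, -0.10, -0.25, 0.05, -0.20])
se      = np.array([ 0.12,  0.20,  0.15, 0.25,  0.10])

weights = 1.0 / se**2                        # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))   # standard error of the pooled estimate

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled log RR = {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
print(f"pooled RR = {np.exp(pooled):.2f}")

Each study's weight is the reciprocal of its variance, so larger, more precise studies contribute more to the pooled estimate.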
From page 179...
... In this role, all the potential decision-making errors in clinical trials (e.g., Type 1 and Type 2 errors or excessive subgroup analyses) apply to meta-analyses as well.
From page 180...
... Second, are the studies alike methodologically in study design, conduct, and quality? Third, are the observed treatment effects statistically similar?
From page 181...
... . This situation may occur in studies comparing the effect of an intervention on a variety of important patient outcomes such as pain, mental health status, or pulmonary function.
From page 182...
... Methodological diversity encompasses variability in study design, conduct, and quality, such as blinding and concealment of allocation. Statistical heterogeneity, relating to the variability in observed treatment effects across studies, may occur because of random chance, but may also arise from real clinical and methodological diversity and bias.
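Statistical heterogeneity is commonly quantified with Cochran's Q and the I-squared statistic. The short Python sketch below computes both for a set of hypothetical study effects and standard errors; the numbers are invented for illustration.

import numpy as np

# Hypothetical log risk ratios and standard errors from five studies.
effects = np.array([-0.30, -0.10, -0.25, 0.05, -0.20])
se      = np.array([ 0.12,  0.20,  0.15, 0.25,  0.10])

weights = 1.0 / se**2
pooled = np.sum(weights * effects) / np.sum(weights)

# Cochran's Q: weighted squared deviations of study effects from the pooled effect.
Q = np.sum(weights * (effects - pooled) ** 2)
df = len(effects) - 1

# I^2: approximate share of total variability due to between-study heterogeneity.
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.0f}%")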
From page 183...
... FIGURE 4-1 Forest plot. The horizontal axis (0.2, 0.5, 1, 2) indicates the scale and direction of effect, with values left of 1 favoring the ICD and values right of 1 favoring the control. For each study, the central diamond indicates the mean effect, the line represents the 95% CI, and the grey square reflects that study's weight in the pooling. The figure also labels the method of combining studies and the test for heterogeneity. NOTE: Weights are from a random-effects analysis. SOURCE: Schriger et al.
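For readers who want to reproduce this kind of display, a minimal Python/matplotlib sketch of a forest plot follows. The study labels, risk ratios, and confidence intervals are invented for illustration and do not reproduce the data in Figure 4-1.

import matplotlib.pyplot as plt
import numpy as np

# Hypothetical risk ratios with 95% CIs for four studies plus a pooled estimate.
labels = ["Study A", "Study B", "Study C", "Study D", "Pooled"]
rr     = np.array([0.74, 0.90, 0.78, 1.05, 0.82])
lower  = np.array([0.58, 0.61, 0.60, 0.64, 0.71])
upper  = np.array([0.94, 1.33, 1.01, 1.72, 0.95])

y = np.arange(len(labels))[::-1]             # list studies top to bottom
err = np.vstack([rr - lower, upper - rr])    # asymmetric CI half-widths
plt.errorbar(rr, y, xerr=err, fmt="s", color="grey", ecolor="black", capsize=3)
plt.axvline(1.0, linestyle="--", color="black")   # line of no effect
plt.xscale("log")
plt.xticks([0.2, 0.5, 1, 2], ["0.2", "0.5", "1", "2"])
plt.yticks(y, labels)
plt.xlabel("Risk ratio (log scale): <1 favors intervention, >1 favors control")
plt.tight_layout()
plt.show()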
From page 184...
... . Instead of eliminating heterogeneity by restricting study inclusion criteria or scope, which can limit the utility of the review, heterogeneity of effect sizes can be quantified, and related to aspects of study populations or design features through statistical techniques such as meta-regression, which associates the size of treatment effects with effect modifiers.
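A bare-bones illustration of meta-regression in Python: each study's effect size is regressed on a study-level covariate (here, hypothetical mean participant age), with studies weighted by the inverse of their variance. The data are invented, and a real analysis would use purpose-built routines that also model residual between-study variance.

import numpy as np

# Hypothetical per-study data: log risk ratio, its standard error, and mean age.
effects = np.array([-0.35, -0.25, -0.15, -0.05, 0.05])
se      = np.array([ 0.10,  0.12,  0.15,  0.12, 0.10])
age     = np.array([ 45.0,  52.0,  58.0,  63.0, 70.0])

W = np.diag(1.0 / se**2)                          # inverse-variance weights
X = np.column_stack([np.ones_like(age), age])     # intercept + effect modifier

# Weighted least squares: solve (X' W X) b = X' W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
print(f"intercept = {beta[0]:.3f}, slope per year of age = {beta[1]:.4f}")
# A nonzero slope suggests the treatment effect varies with the effect modifier.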
From page 185...
... Box 4-4 describes some of the research trends in meta-analysis and provides relevant references for the interested reader. Sensitivity of Conclusions Meta-analysis entails combining information from different studies; thus, the data may come from very different study designs.
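One common sensitivity check is a leave-one-out analysis: the pooled effect is re-estimated with each study omitted in turn to see whether any single study drives the conclusion. A minimal Python sketch, using hypothetical data and fixed-effect pooling, follows.

import numpy as np

# Hypothetical log risk ratios and standard errors from five studies.
effects = np.array([-0.30, -0.10, -0.25, 0.05, -0.20])
se      = np.array([ 0.12,  0.20,  0.15, 0.25,  0.10])

def pooled(effects, se):
    w = 1.0 / se**2
    return np.sum(w * effects) / np.sum(w)

print(f"all studies: {pooled(effects, se):.3f}")
for i in range(len(effects)):
    keep = np.arange(len(effects)) != i
    print(f"omitting study {i + 1}: {pooled(effects[keep], se[keep]):.3f}")
# Large swings when one study is omitted signal fragile conclusions.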
From page 186...
... See for example: Berlin and Ghersi, 2004, 2005; Ghersi et al., 2008; The Cochrane Collaboration, 2010. Meta-regression -- In this method, potential sources of heterogeneity are represented as predictors in a regression model, thereby enabling estimation of their relationship with treatment effects.
From page 187...
... Standard 4.3 -- Decide if, in addition to a qualitative analysis, the systematic review will include a quantitative analysis (meta-analysis)
Required element:
4.3.1 Explain why a pooled estimate might be useful to decision makers
Standard 4.4 -- If conducting a meta-analysis, then do the following:
Required elements:
4.4.1 Use expert methodologists to develop, execute, and peer review the meta-analyses
4.4.2 Address heterogeneity among study effects
4.4.3 Accompany all estimates with measures of statistical uncertainty
4.4.4 Assess the sensitivity of conclusions to changes in the protocol, assumptions, and study selection (sensitivity analysis)
From page 188...
... 2007. Creating and synthesizing evidence with decision makers in mind: Integrating evidence from clinical trials and other study designs.
From page 189...
... 2009. Grading quality of evidence and strength of recommendations in clinical practice guidelines: Part 1 of 3.
From page 190...
... 2008. Systematic review of the empirical evidence of study publication bias and outcome reporting bias.
From page 191...
... Green, Cochrane handbook for systematic reviews of interventions. Chichester, UK: John Wiley & Sons.
From page 192...
... 2010. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews.
From page 193...
... 2010. Forest plots in reports of systematic reviews: A cross-sectional study reviewing current practice.
From page 194...
... 2010. Comparative effectiveness review methods: Clinical heterogeneity.

