

4 Systematic Reviews: The Central Link Between Evidence and Clinical Decision Making
Pages 81-120

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text on each page of the chapter.


From page 81...
... Iain Chalmers, "Academia's Failure to Support Systematic Reviews" (Chalmers, 2005)

Abstract: This chapter provides the committee's findings and recommendations for conducting systematic evidence reviews under the aegis of a proposed national clinical effectiveness assessment program ("the Program")
From page 82...
... In a series of meta-analyses examining the treatment of myocardial infarction, the researchers concluded that clinicians need better access to syntheses of the results of existing studies to formulate clinical recommendations. Today, systematic reviews of the available evidence remain an often undervalued scientific discipline.
From page 83...
... The ultimate purposes of systematic reviews vary and include health coverage decisions, practice guidelines, regulatory approval of new pharmaceuticals or medical devices, and clinical research or program planning.
From page 84...
... At Hayes, Inc., for example, subscriptions range from $10,000 to $300,000, depending on the size of the subscribing organizations and the types of products licensed. The Cochrane Collaboration is an international effort that produces systematic reviews of health interventions; 11 percent (nearly 1,700 individuals) of its active contributors are in the United States (Allen and Clarke, 2007)
From page 85...
... first used the term "meta-analysis" to describe what is now referred to as systematic review. Textbooks describing the concept and methods of systematic reviews (Cooper and Rosenthal, 1980; Glass et al., 1981; Hedges and Olkin, 1985; Light and Pillemer, 1984; Rosenthal, 1978; Sutton et al., 2000)
From page 86...
... . The early implementers of systematic reviews were those who conducted clinical trials and who saw the need to summarize data from multiple effectiveness trials, many of them with very small sample sizes (Yusuf et al., 1985)
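The pooling these early trialists performed is, at its core, a precision-weighted average of per-study effect estimates. The following is a minimal sketch of fixed-effect inverse-variance pooling of log odds ratios; it is a generic illustration with hypothetical study values, not a method or data taken from the chapter.

```python
# Minimal sketch of fixed-effect inverse-variance pooling of study effects.
# The effect sizes below are hypothetical log odds ratios with their variances.
import math

studies = [
    {"name": "Trial A", "log_or": -0.35, "var": 0.040},
    {"name": "Trial B", "log_or": -0.10, "var": 0.090},
    {"name": "Trial C", "log_or": -0.25, "var": 0.025},
]

# Each study is weighted by the inverse of its variance, so larger,
# more precise trials contribute more to the pooled estimate.
weights = [1.0 / s["var"] for s in studies]
pooled_log_or = sum(w * s["log_or"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

ci_low = pooled_log_or - 1.96 * pooled_se
ci_high = pooled_log_or + 1.96 * pooled_se

print(f"Pooled OR: {math.exp(pooled_log_or):.2f} "
      f"(95% CI {math.exp(ci_low):.2f} to {math.exp(ci_high):.2f})")
```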
From page 87...
... The following sections of the chapter describe the fundamentals of conducting a scientifically rigorous systematic review and then provide the committee's findings on current efforts.

FUNDAMENTALS OF A SYSTEMATIC REVIEW
Although researchers use a variety of terms to describe the building blocks of a systematic review, the fundamentals are well established (AHRQ EPC Program, 2007; Counsell, 1997; EPC Coordinating Center, 2005; Haynes et al., 2006; Higgins and Green, 2006; Khan and Kleijnen, 2001; Khan et al., 2001a,b; West et al., 2002)
From page 88...
... Imaging tests for Alzheimer's disease may lead to the early diagnosis of the condition, but patients and the patients' caregivers may be particularly interested in whether an early diagnosis improves cognitive outcomes or quality of life. Many researchers suggest that decision makers be directly involved in formulating the question to ensure that the systematic review is relevant and can inform decision making (Lavis et al., 2005; Schünemann et al., 2006)
From page 89...
... Preventive Services Task Force. NOTE: Generic analytic framework for screening topics.
From page 90...
... , preventive services, diagnostic or therapeutic interventions, and intermediate and health outcomes to be considered (Harris et al., 2001)
From page 91...
... . In these cases, the systematic reviews consider observational studies because, at a minimum, noting the available evidence helps to delineate what is known and what is not known about the effectiveness of the intervention in question.
From page 92...
... control studies
a. Systematic reviews of the "best" evidence are more reliable than evidence from a single study, regardless of the clinical question being asked.
b. PET = positron emission tomography.
From page 93...
... . Indeed, the evidence base on the effectiveness of most health services is sparse (BCBSA, 2007; Congressional Budget Office, 2007; The Health Industry Forum, 2006; IOM, 2007; Medicare Payment Advisory Commission, 2007; Wilensky, 2006)
From page 94...
... "Fair" indicates sufficient evidence to determine effect on health outcomes but the evidence is limited by the number, quality, or consistency of the individual studies. "Poor" indicates insufficient evidence on effects on health outcomes because of a limited number of studies or the weak power of the studies, flaws in study design or conduct, or lack of information on important health outcomes.
From page 95...
... Step 3: Conduct a Comprehensive Search for Evidence
The search for the evidence is arguably the most important step in conducting a high-quality systematic review. In a human research study, selection of the appropriate group to be studied is widely understood to be
From page 96...
... BOX 4-2 Prevention Topics with Insufficient Evidence for One or More Population Subgroups

• Behavioral counseling in primary care to promote a healthy diet
• Behavioral counseling in primary care to promote physical activity
• Breast-feeding
• Counseling to prevent skin cancer
• Counseling to prevent tobacco use and tobacco-caused disease
• Interventions in primary care to reduce alcohol misuse
• Lung cancer screening
• Newborn hearing screening
• Prevention of dental caries in preschool-age children
• Primary care interventions to prevent low back pain in adults
• Routine vitamin supplementation to prevent cancer and cardiovascular disease
• Screening and behavioral counseling
• Screening and interventions for overweight in children and adolescents
• Screening for bacterial vaginosis in pregnancy
• Screening for breast cancer
• Screening for cervical cancer
• Screening for chlamydial infection
• Screening for coronary heart disease
• Screening for dementia
• Screening for depression
• Screening for family and intimate partner violence
• Screening for gestational diabetes mellitus
• Screening for glaucoma
• Screening for gonorrhea
• Screening for hepatitis C in adults
• Screening for high blood pressure
• Screening for lipid disorders in adults
• Screening for obesity in adults
• Screening for oral cancer
• Screening for prostate cancer
• Screening for skin cancer
• Screening for suicide risk
• Screening for thyroid disease
• Screening for Type II diabetes mellitus in adults

NOTE: Each clinical topic or preventive service that the USPSTF has reviewed may lead to one or more separate population-specific recommendations. The USPSTF rates the strength of its recommendations as "I" for "insufficient" when evidence on whether the service is effective is lacking, of poor quality, or conflicting and the balance of benefits and harms cannot be determined.
From page 97...
... Publication biases also relate to where a study is published, as some sources are more accessible than others. Some systematic reviewers find it difficult to readily identify studies published in non-English-language journals, the gray literature, and certain specialty journals.
From page 98...
... The key types of bias that may affect the validity of a systematic review are as follows:
• Reporting bias -- systematic differences may exist between reported and nonreported studies (e.g., a higher proportion of studies with positive findings than studies with null or negative findings may be published ["publication bias"]
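Reviewers often probe for this kind of bias by checking whether small, imprecise studies report systematically larger effects than precise ones, for example with an Egger-style regression of standardized effect on precision. The sketch below is a generic illustration with made-up study values, not a procedure described in the chapter.

```python
# Sketch of an Egger-style asymmetry check: regress standardized effect
# (effect / SE) on precision (1 / SE); an intercept far from zero suggests
# that small studies report systematically different effects (possible
# publication or small-study bias). Study values are hypothetical.
import numpy as np

effects = np.array([-0.42, -0.31, -0.55, -0.12, -0.60])  # log odds ratios
ses = np.array([0.10, 0.15, 0.30, 0.12, 0.35])            # standard errors

y = effects / ses   # standardized effects
x = 1.0 / ses       # precision

# Ordinary least squares fit: y = slope * x + intercept
slope, intercept = np.polyfit(x, y, 1)
print(f"Egger intercept: {intercept:.2f} (values far from 0 suggest asymmetry)")
```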
From page 99...
... . Researchers at McMaster University and elsewhere have extensively tested search strategies to determine those strategies that are optimal for detecting reports on RCTs and other types of studies used in systematic reviews (Wieland and Dickersin, 2005; Wilczynski et al., 2005)
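In practice, such validated filters are strings of indexing terms and text words combined with Boolean operators and run against MEDLINE/PubMed. The sketch below runs an abridged, illustrative approximation of a sensitivity-maximizing RCT filter through NCBI's public E-utilities interface; the filter text and the example topic are assumptions for illustration, not the specific strategies evaluated in the cited studies.

```python
# Sketch: run an abridged, illustrative RCT filter against PubMed via the
# public NCBI E-utilities esearch endpoint. The filter and topic below are
# illustrative approximations, not the published, validated strategies.
import requests

topic = '"myocardial infarction"[mh] AND "thrombolytic therapy"[mh]'  # example topic
rct_filter = (
    "(randomized controlled trial[pt] OR controlled clinical trial[pt] "
    "OR randomized[tiab] OR placebo[tiab] OR randomly[tiab] OR trial[ti]) "
    "NOT (animals[mh] NOT humans[mh])"
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={
        "db": "pubmed",
        "term": f"({topic}) AND ({rct_filter})",
        "retmode": "json",
        "retmax": 20,
    },
    timeout=30,
)
result = resp.json()["esearchresult"]
print("Records found:", result["count"])
print("First PMIDs:", result["idlist"])
```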
From page 100...
... In a recent systematic review of 34 studies comparing the sensitivity of hand searches with that of electronic searches, Hopewell and colleagues (2007a) found that hand searches identified 92 to 100 percent of the total number of reports of randomized trials.
From page 101...
... examined how 78 English-language systematic reviews analyzed the quality of the original observational studies. All the reviews were published in peer-reviewed journals from 2003 to 2004.
From page 102...
... It was unknown whether quality was assessed for three of the reviews. Investigators have also identified data extraction errors in many systematic reviews, including Cochrane Collaboration and other standards-based reviews (Gøtzsche et al., 2007; Jones et al., 2005)
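A common safeguard against such errors is independent double data extraction followed by reconciliation of disagreements. The sketch below is a generic illustration of flagging discrepancies between two extractors' records for adjudication; the field names and values are hypothetical, not data from the cited reviews.

```python
# Sketch: compare two reviewers' independently extracted data and flag
# any fields that disagree so they can be adjudicated. All field names
# and values are hypothetical.
extraction_a = {"n_randomized": 240, "n_events_treatment": 18, "n_events_control": 29}
extraction_b = {"n_randomized": 240, "n_events_treatment": 18, "n_events_control": 32}

discrepancies = {
    field: (extraction_a[field], extraction_b[field])
    for field in extraction_a
    if extraction_a[field] != extraction_b[field]
}

if discrepancies:
    print("Fields needing adjudication:", discrepancies)
else:
    print("Extractions agree on all fields.")
```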
From page 103...
... Class II = prospective matched group cohort study in a representative population with masked outcome assessment that meets all four Class I criteria (a to d), or an RCT in a representative population that lacks one of the Class I criteria

Veterans Health Administration
Level I: At least one properly conducted randomized controlled trial*
From page 104...
... . The AHRQ Effective Health Care Program is currently developing a methods manual for systematic reviews that focuses on comparative effectiveness (AHRQ, 2007a)
From page 105...
... Journal Standards for Reporting Systematic Reviews
In the past decade, researchers, clinicians, epidemiologists, statisticians, and editors have collaborated to develop standards for the reporting of findings from clinical trials and meta-analyses of randomized and nonrandomized studies in journals. The collaboration arose from concerns that study quality was poorly reflected in the manuscripts that present study findings.
From page 106...
... TABLE 4-6 Comparison of Reporting Standards in CONSORT, QUOROM, and MOOSE

CONSORT (trials)
Title and Abstract: For the abstract, identify the method of selection of participants
Introduction: Scientific background and explanation of rationale
Methods: Participants, interventions, objectives, outcomes, sample size, randomization, blinding, and statistical methods
Results: Participant flow, recruitment, baseline data, numbers analyzed, outcomes and estimation, ancillary analyses, and adverse events
Discussion: Interpretation, generalizability, overall evidence

QUOROM (meta-analyses of trials)
Title and Abstract: For the title, identify the report as a meta-analysis (or a systematic review) of RCTs; for the abstract, use a structured abstract format that describes the objectives, data sources, review methods, results, and conclusions
Introduction: Clinical problem, biological rationale for the intervention, and rationale for review
Methods: Searching, selection, validity assessment, data abstraction, study characteristics, quantitative data, and synthesis
Results: Trial flow, study characteristics, quantitative data synthesis
Discussion: Key findings, clinical inferences, interpretation of results, potential biases, and future research agenda

MOOSE (meta-analyses of observational studies)
Title and Abstract: For the abstract, identify the type of study and use structured abstract
Introduction: Clinical problem, hypothesis, outcome of study, exposure or intervention used, study design used, and study population
Methods: Search strategy, qualification of searchers, software and databases used, potential biases, rationale for studies and selection of data used, assessment of confounding, study quality and heterogeneity, and statistical methods
Results: Graph and table summarizing the results, sensitivity testing, and indication of the statistical uncertainty of the findings
Discussion: Assessment of bias, justification of exclusion, quality of studies included, alternative explanation, generalization, future research, and funding source
From page 107...
... Annals of Internal Medicine ✓ ✓ ✓ ✓
Archives of General Psychiatry ✓ ✓
British Medical Journal ✓ ✓ ✓ ✓
CANCER ✓ ✓
CHEST ✓ ✓
Circulation ✓ ✓ ✓
Diabetes ✓
Hypertension ✓ ✓
JAMA ✓ ✓ ✓ ✓
Journal of Clinical Oncology ✓ ✓
Journal of the National Cancer Institute ✓ ✓
Lancet ✓ ✓
New England Journal of Medicine ✓ ✓
Obstetrics and Gynecology ✓ ✓ ✓
Pediatrics ✓ ✓
Radiology ✓ ✓ ✓
Reviews in Clinical Gerontology
Spine ✓ ✓

NOTE: For systematic reviews, the International Committee of Medical Journal Editors (ICMJE)
From page 108...
... RECOMMENDATIONS
As noted earlier, this chapter's recommendations are intended to guide the conduct of systematic reviews produced under the aegis of a national clinical effectiveness assessment program ("the Program")
From page 109...
... Recommendation: The Program should assess the capacity of the research workforce to meet the Program's needs, and, if deemed appropriate, it should expand training opportunities in systematic review and comparative effectiveness research methods.

Research Workforce
It is not known how many researchers in the United States are adequately trained and qualified to conduct systematic reviews on the effectiveness of health care services.
From page 110...
... • Inclusion of interstudy variability into displays of results
• How best to display findings and their reliability for users
• Methods and validity of indirect comparisons

Interpreting Results
• Understanding why reviews on similar topics may yield different results
• Updating systematic reviews
• Frequency of updates

SOURCE: Cochrane Collaboration (2007); Higgins and Green (2006)
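Interstudy variability is commonly summarized with statistics such as Cochran's Q and I². The sketch below computes both from hypothetical study estimates as a generic illustration of the quantities these methodological questions concern; it is not code from the cited handbooks.

```python
# Sketch: quantify interstudy variability with Cochran's Q and I^2.
# Effect estimates (log odds ratios) and variances are hypothetical.
effects = [-0.35, -0.10, -0.55, -0.20]
variances = [0.04, 0.09, 0.06, 0.05]

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: the proportion of total variation attributable to heterogeneity.
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0
print(f"Q = {q:.2f} (df = {df}), I^2 = {100 * i_squared:.0f}%")
```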
From page 111...
... Thus, it appears that the failure to update systematic reviews and guidelines within a few years could easily result in patient care that is not evidence based and, worse, care that is less effective than it could be or even potentially dangerous.

New and Emerging Technologies
Although this chapter has focused on comprehensive, systematic reviews, the committee recognizes that some decision makers have a legitimate need for objective advisories on new and emerging technologies in order to respond to coverage requests when few, if any, high-quality studies or systematic reviews exist.
From page 112...
... Typically, the reviews include a brief description of an intervention; its relevance to clinical care; a short, preliminary list of the relevant research citations that have been identified; two- to three-paragraph summaries of selected research abstracts; and details on the methods used to search the literature. The Program should consider producing brief advisories on new and emerging technologies in addition to full systematic reviews.
From page 113...
... 2006. Single data extraction generated more errors than double data extraction in systematic reviews.
From page 114...
... 1998. The Cochrane Collaboration: Evaluation of health care and services using systematic reviews of the results of randomized controlled trials.
From page 115...
... 2001. Rationale, potentials, and promise of systematic reviews.
From page 116...
... 2005. High prevalence but low impact of data extraction and reporting errors were found in Cochrane systematic reviews.
From page 117...
... 2006. Quality assessment of observational studies is not commonplace in systematic reviews.
From page 118...
... 2005. Challenges in systematic reviews that evaluate drug efficacy or effectiveness.
From page 119...
... 2002. A comparison of the quality of Cochrane reviews and systematic reviews published in paper-based journals.
From page 120...
... 2005. A systematic review finds that diagnostic reviews fail to incorporate quality despite available tools.

