
Clinical Practice Guidelines We Can Trust (2011)


Suggested Citation:"6 Promoting Adoption of Clinical Practice Guidelines." Institute of Medicine. 2011. Clinical Practice Guidelines We Can Trust. Washington, DC: The National Academies Press. doi: 10.17226/13058.

6
Promoting Adoption of Clinical Practice Guidelines

Abstract: Promoting uptake and use of clinical practice guidelines (CPGs) at the point of care delivery represents a final translation hurdle in moving scientific findings into practice. Characteristics of the intended users and the context of practice are as important as guideline attributes in promoting adoption of CPG recommendations. The committee’s recommendations for individual and organizational interventions for CPG implementation are as follows: implementers should employ effective multifaceted implementation strategies targeting both individuals and healthcare systems to promote adherence to trustworthy CPGs. Increased adoption of electronic health records and clinical decision support (CDS) will open new opportunities to move CPGs rapidly to the patient encounter. The committee recommends that guideline developers and implementers take the following actions to advance this aim: guideline developers should structure the format, vocabulary, and content of CPGs (e.g., specific statements of evidence, the target population) to facilitate ready implementation of electronic CDS by end-users; and CPG developers, CPG implementers, and CDS designers should collaborate to align their respective needs. In considering legal issues affecting CPG implementation, the committee suggests clinicians will be more likely to adopt guidelines if they believe the guidelines offer protection from malpractice litigation. The committee also suggests courts will be more likely to adopt guidelines that are trustworthy, and urges courts, given their reliance on CPGs, to use those deemed trustworthy when available.

INTRODUCTION

Clinical practice guidelines (CPGs) draw on synthesized research findings to set forth recommendations for state-of-the-art care. Trustworthy CPGs are critical to improving quality of care, but many CPGs are not developed for ready use by clinicians. They are typically lengthy documents of written prose with graphical displays (e.g., decision trees or flow charts), making them difficult to use at the point of care delivery. Furthermore, recommendations from CPGs must be applied to patient-specific data to be useful, and often the data required for a given guideline either are not available or take too much time to ascertain in a useful form during a typical patient encounter (Mansouri and Lockyer, 2007). Passive dissemination (e.g., distribution) of CPGs has little effect on practitioner behaviors; thus, active implementation efforts (e.g., opinion leaders) are required.

Even with the exponential growth in publicly available CPGs (NGC, 2010), easy access to high-quality, timely CPGs is out of reach for many clinicians. Large gaps remain between recommended care and the care delivered to patients. A 2003 study by McGlynn et al. of adults living in 12 metropolitan areas of the United States found participants received recommended care 54.9 percent of the time. The proportion receiving recommended care varied only slightly among adults in need of preventive care (54.9 percent), acute care (53.5 percent), and care for chronic conditions (56.1 percent). Yet, when McGlynn et al. (2003) examined particular medical conditions, they found substantial differences in receipt of recommended care, ranging from 10.5 percent for alcohol dependence to 78.7 percent for senile cataract. In an observational study of 10 Dutch guidelines, Grol et al. concluded that general practitioners followed guideline recommendations in only 61 percent of relevant situations (Grol et al., 1998). Furthermore, in an analysis of 41 studies of the implementation of mental health CPGs—including depression, schizophrenia, and addiction—Bauer found that physicians adhered to guidelines only 27 percent of the time in both cross-sectional and pre-post studies and 67 percent of the time in controlled trials (Bauer, 2002; Francke et al., 2008). Of course, not all quality measures are valid and reliable, nor should all CPGs necessarily be adhered to; however, those CPGs that meet the standards proposed herein should be associated with high levels of adherence.

This chapter focuses on a variety of strategies to promote adoption of CPGs. The first section describes how adoption is affected by a variety of factors and presents several individual and organizational implementation strategies for developers and implementers. The second section discusses use of the electronic health record (EHR) and computer-aided decision support to promote use of CPGs in practice. The third section discusses legal issues related to CPGs that could affect their implementation.

STRATEGIES FOR IMPLEMENTATION OF CPG RECOMMENDATIONS

Promoting uptake and use of CPGs at the point of care delivery represents a final translation hurdle to move scientific findings into practice. The field of translation research is a relatively young science, and addressing this final step of bringing research findings into the mainstream of typical practice is an important challenge (Avorn, 2010). A body of knowledge in implementation science is growing and provides an empirical base for promoting adoption of CPGs (Bradley et al., 2004b; Brooks et al., 2009; Carter et al., 2006; Chin et al., 2004; Demakis et al., 2000; Eccles and Mittman, 2006; Feldman et al., 2005; Grimshaw et al., 2004c, 2006a; Horbar et al., 2004; Hysong et al., 2006; Irwin and Ozer, 2004; Jamtvedt et al., 2006b; Jones et al., 2004; Katz et al., 2004a; Levine et al., 2004; Loeb et al., 2004; McDonald et al., 2005; Murtaugh et al., 2005; Shiffman et al., 2005; Shojania and Grimshaw, 2005; Shojania et al., 2006; Solberg et al., 2000; Solomon et al., 2001; Stafford et al., 2010; Titler et al., 2009). An emerging principle for promoting adoption of CPGs is that attributes of the CPG (e.g., ease of use, strength of the evidence) as perceived by users and stakeholders are neither stable features nor isolated determinants of adoption. Rather it is the interaction among characteristics of the CPG (e.g., specificity, clarity), the intended users (physicians, nurses, pharmacists), and a particular context of practice (e.g., inpatient, ambulatory, long-term care setting) that determines the rate and extent of adoption (Greenhalgh et al., 2005b).

A number of conceptual models have been tested and are used to guide implementation of CPG recommendations (Damschroder et al., 2009; Davies et al., 2010; Dobbins et al., 2009; Rycroft-Malone and Bucknall, 2010). The Implementation Model, illustrated in Figure 6-1, is used here as an organizing framework in which the rate and extent of adoption of CPGs are influenced by the nature of the CPG (e.g., complexity, type, and strength of the evidence) and how it is communicated (e.g., academic detailing, audit and feedback) to users of the evidence-based practice (e.g., physicians, nurses, pharmacists) within a social system/context of practice (e.g., clinic, inpatient unit, health system) (Kozel et al., 2003; Titler and Everett, 2001; Titler et al., 2009). Although discussion of implementation strategies is organized by these four areas (nature of the CPG, communication, members, context), these categories are not independent of one another.

FIGURE 6-1 Implementation model.

NOTE: EBP = evidence-based practice.

SOURCE: Titler and Everett (2001).

CPG Characteristics

Characteristics of a CPG that influence the extent to which it can be implemented include clarity, specificity, strength of the evidence, perceived importance, relevance to practice, and simplicity versus complexity of the medical condition it is addressing. For example, CPGs on relatively simple healthcare practices (e.g., influenza vaccines for older adults) are more easily adopted in less time than those that are more complex (e.g., acute pain management for hospitalized older adults). To foster use of trustworthy CPGs, developers must consider organization of content, layout of key messages within the CPG, specificity of practice recommendations, and length of the CPG prose. Additionally, CPGs typically focus on one medical condition (e.g., heart failure), thereby making it challenging to use CPGs for patients with multiple comorbidities. (This topic is discussed further in Chapter 5.)

Implementation strategies that address the process of integrating essential content from CPGs into the local practice context and workflow include clinical reminders, quick reference guides, and decision aids (Balas et al., 2004; BootsMiller et al., 2004; Bradley et al., 2004b; Fung et al., 2004; Loeb et al., 2004; Wensing et al., 2006). One-page quick reference guides, depicted pictorially as flow diagrams or algorithms, are attractive from the busy provider’s perspective (Baars et al., 2010; Boivin et al., 2009; Chong et al., 2009). A number of one-page quick reference guides related to prevention and treatment of cardiovascular diseases have been published (Coronel and Krantz, 2007; Krantz et al., 2005; Smith et al., 2008), though data on their widespread acceptability and effectiveness warrant further study. Reminders have a small to moderate effect on adoption of CPGs when used alone or in association with other interventions, primarily regarding the use of preventive health care such as screening tests, immunizations, test ordering, and medication prescribing (Dexheimer et al., 2008; Grimshaw et al., 2004a; Shojania et al., 2009). Reminders are likely more effective for simple actions (e.g., ordering a lipid test [Mehler et al., 2005]) than complex ones.

Ultimately, incorporation of reminders and clinical care algorithms into electronic decision support systems holds great promise to promote use of CPGs and is discussed in further detail in the section on Electronic Interventions for CPG Implementation. Electronic decision support systems can also address adoption of recommendations from multiple CPGs in the care of individuals with multiple comorbidities.
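The kind of rule logic such electronic reminder systems encode can be illustrated with a brief sketch. This is a hypothetical example only; the rules, field names, and thresholds below are illustrative and are not drawn from any particular CPG:

```python
from datetime import date

# Hypothetical, illustrative reminder rules. Real CDS systems encode
# recommendations from specific CPGs with clinically validated logic.
def due_reminders(patient: dict, today: date) -> list[str]:
    """Return point-of-care reminders triggered by simple rule checks."""
    reminders = []
    # Illustrative rule: influenza vaccination for older adults,
    # if no shot is recorded for the current season/year.
    if patient["age"] >= 65 and patient.get("last_flu_shot_year") != today.year:
        reminders.append("Influenza vaccine due")
    # Illustrative rule: periodic lipid screening if the last test
    # is missing or more than roughly five years old.
    last_lipid = patient.get("last_lipid_test")
    if last_lipid is None or (today - last_lipid).days > 5 * 365:
        reminders.append("Order lipid panel")
    return reminders

patient = {"age": 70, "last_flu_shot_year": 2009, "last_lipid_test": None}
print(due_reminders(patient, date(2011, 1, 15)))
```

Even this toy example shows why the data-availability problem noted earlier matters: each rule fires only if the underlying patient data (age, vaccination history, test dates) are captured in structured, machine-readable form.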

Communication Strategies

Methods and channels of communication influence adoption of CPGs (Greenhalgh et al., 2005b). The implementation strategies discussed in this section are education and mass media, opinion leaders, and academic detailing.

Education and Mass Media

Printed educational materials are one of the most common means of communicating guidelines, through dissemination of complete guideline documents and abridged summaries or concise reference cards. Based on an evidence review of 23 studies, the impact of printed educational materials on changing processes of care is small (median absolute increase of 4.3 percent for categorical processes and 13.6 percent for continuous processes) when compared to no intervention (Farmer et al., 2008). Given the low cost and high feasibility of printed materials, it may be reasonable to consider them as one part of a multifaceted implementation intervention, given both gaps in adoption and the diversity of implementation barriers (e.g., for a brand new practice or a change in established practice).

Forsetlund and colleagues summarized 81 trials of continuing medical education (didactic lectures and workshops) and found consistent but small effects, with a mean 6 percent absolute increase in desired clinical practices from educational meetings used alone or as a component of multifaceted interventions (Forsetlund et al., 2009). Meta-regression results suggested educational interventions were more effective when attendance was higher, when interactive sessions were mixed with didactic ones, and when the clinical outcomes of intended actions were more serious. Education alone did not appear effective for more complex practice changes.

A review by Grilli et al. (2002) of 20 studies using interrupted time-series designs demonstrated that mass media (e.g., television, radio, newspapers, leaflets, posters, and pamphlets), targeted at the population level (providers, patients, and general public), has some effect on the use of health services for the targeted behavior (e.g., colorectal cancer screening), including providers’ use. These channels of communication have an important role in influencing use of healthcare interventions; those engaged in promoting uptake of research evidence in clinical practice should consider mass media as one of the tools that may encourage use of effective services and discourage those of unproven effectiveness. However, little empirical evidence is available to guide design of mass communication messages to achieve the intended change (Grilli et al., 2002).

Opinion Leaders

An opinion leader is a member of the local peer group who is viewed as a respected source of influence, is considered technically competent by colleagues, and is trusted to judge the fit between the evidence base of a practice and the local situation (Berner et al., 2003; Grimshaw et al., 2006b; Harvey et al., 2002; Soumerai et al., 1998). Opinion leadership is multifaceted and complex, with role functions varying by circumstances (e.g., nature of the CPG, clinical setting, clinician), but few successful projects to implement recommended practices in healthcare organizations have managed without the use of opinion leaders (Greenhalgh et al., 2005b; Kozel et al., 2003; Watson, 2004).

Several studies have demonstrated that opinion leaders are effective in changing behaviors of healthcare practitioners (Berner et al., 2003; Cullen, 2005; Dopson et al., 2001; Greenhalgh et al., 2005b; Irwin and Ozer, 2004; Locock et al., 2001; Redfern and Christian, 2003), especially when used in combination with academic detailing or performance feedback (discussed hereafter). A Cochrane Review summarized 12 studies engaging opinion leaders with or without other interventions (Doumit et al., 2007). Most studies focused on inpatient settings, with an absolute increase of 10 percent in desired behaviors. Challenges to application of these strategies include identification of opinion leaders and the high resource levels required for deployment.

Academic Detailing

Academic detailing, or educational outreach, as applied to CPGs, involves interactive face-to-face education of individual practitioners in their practice setting by an educator (usually a clinician) with expertise in a particular topic (e.g., cancer pain management), and is one means of changing practice to better align with CPG recommendations. Academic detailers are able to explain the research foundations of CPG recommendations and respond convincingly to specific questions, concerns, or challenges that a practitioner might raise. An academic detailer also might deliver feedback on provider or team performance with respect to a selected CPG (e.g., frequency of pain assessment) or CPG-based quality measure (Avorn, 2010; O’Brien et al., 2007).

Multiple studies have demonstrated that academic detailing promotes positive changes in practice behaviors of clinical practitioners (Avorn et al., 1992; Feldman et al., 2005; Greenhalgh et al., 2005a; Hendryx et al., 1998; Horbar et al., 2004; Jones et al., 2004; Loeb et al., 2004; McDonald et al., 2005; Murtaugh et al., 2005; O’Brien et al., 2007; Solomon et al., 2001; Titler et al., 2009). In a review of 69 studies, academic detailing was found to produce a median absolute increase in desired clinical practice of 6 percent. Improvements were highly consistent for prescribing (median absolute increase of 5 percent), and varied for other types of professional performance (median absolute increase of 4 to 16 percent). A few head-to-head studies also suggest academic detailing has a slightly larger impact than audit and feedback (O’Brien et al., 2007). Academic detailing is more costly than other interventions; one analysis found that it is cost-effective (Soumerai and Avorn, 1986) while a more recent analysis concluded that it was not (Shankaran et al., 2009).

Members of Social System (CPG Users)

Intended users of a CPG must be clearly delineated to promote use of CPG recommendations at the point of care delivery. CPGs are likely to impact the practice of multiple players and types of clinicians involved in delivery of care. Those promoting adoption of a CPG must understand the work and challenges of these multiple stakeholders. Members of a social system (e.g., nurses, physicians, clerical staff) influence how quickly and widely CPGs are adopted (Greenhalgh et al., 2005b). In addition to the communication strategies for implementation discussed in the previous section, implementation strategies targeted to users of a CPG include audit and feedback (A/F), performance gap assessment (PGA), and financial incentives. PGA and A/F have consistently shown positive effects on changing provider practice behavior (Bradley et al., 2004b; Horbar et al., 2004; Hysong et al., 2006; Jamtvedt et al., 2006a).

Performance Gap Assessment

PGA applies performance measures to provide information and discussion of current practices relative to recommended CPG practices at the beginning of a clinical practice change (Horbar et al., 2004; Titler et al., 2009). This implementation strategy is used to engage clinicians in discussions of practice issues and formulation of steps or system-level strategies to promote alignment of their practices with CPG recommendations. Specific practice indicators selected for PGA are derived from CPG recommendations. Studies have shown improvements in performance when PGA is part of a multifaceted implementation intervention (Horbar et al., 2004; Titler et al., 2009), but use of this approach by itself is unlikely to result in improved adoption of CPG recommendations (Buetow and Roland, 1999). Yano (2008) discusses the essential nature of performance gap assessment in CPG implementation in the Veterans Affairs Quality Enhancement Research Initiative (VA QUERI) program (Yano, 2008).

Audit and Feedback

Audit and feedback is a continuous process of measuring performance (both process and outcome), aggregating data into reports, and discussing findings with practitioners (Greenhalgh et al., 2005b; Horbar et al., 2004; Jamtvedt et al., 2006a; Katz et al., 2004a,b; Titler et al., 2009). This strategy helps clinicians see how their efforts to improve care processes (e.g., pain assessment every 4 hours) and patient outcomes (e.g., lower pain intensity) are progressing. There is no clear empirical evidence for how best to provide audit and feedback, although findings from several studies and systematic reviews suggest that effects may be larger when clinicians are active participants in implementing change and discussing data audits rather than passive recipients of feedback reports (Hysong et al., 2006; Jamtvedt et al., 2006a; Kiefe et al., 2001).

A Cochrane review of 118 studies compared audit and feedback, alone or combined with other interventions (Jamtvedt et al., 2006a). Results of audit and feedback varied substantially, with a small median effect of a 5 percent absolute increase in performance. Audit and feedback seemed most effective when baseline performance was low and feedback was intensive. A meta-analysis of 19 studies demonstrated that specific suggestions for improving care, written feedback, and more frequent feedback strengthened the effect (Hysong, 2009). Qualitative studies provide some insight into the use of audit and feedback (Bradley et al., 2004a; Hysong et al., 2006). One study on the use of data feedback for improving treatment of acute myocardial infarction found that (1) feedback data must be perceived by physicians as important and valid; (2) the data source and timeliness of data feedback are critical to perceived validity; (3) it takes time to establish the credibility of data within a hospital; (4) benchmarking improves the validity of data feedback; and (5) physician leaders can enhance the effectiveness of data feedback. The literature also supports that data feedback profiling an individual physician’s practices can be effective but may be perceived as punitive; that data feedback must persist to sustain improved performance; and that the effectiveness of data feedback is intertwined with the organizational context, including physician leadership and organizational culture (Bradley et al., 2004a). Hysong and colleagues (2006) found that high-performing institutions provided timely, individualized, nonpunitive feedback to providers, whereas low performers were more variable in their timeliness and nonpunitiveness and relied more on standardized, facility-level reports. The concept of actionable feedback emerged as the core concept shared across timeliness, individualization, nonpunitiveness, and customizability.
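The audit step of audit and feedback is, at its core, an aggregation of encounter-level process measures into per-clinician adherence rates. The following is a minimal illustrative sketch; the data, names, and report format are hypothetical, not taken from any study cited here:

```python
from collections import defaultdict

def adherence_report(encounters):
    """Aggregate (clinician, measure_met) pairs into percent adherence.

    encounters: iterable of (clinician_name, bool) tuples, where the bool
    records whether a CPG-based process measure was met at that encounter.
    Returns {clinician_name: percent_adherent}.
    """
    met = defaultdict(int)
    total = defaultdict(int)
    for clinician, measure_met in encounters:
        total[clinician] += 1
        met[clinician] += int(measure_met)
    return {c: round(100 * met[c] / total[c], 1) for c in total}

# Hypothetical audit data for two clinicians.
audit = [("Dr. A", True), ("Dr. A", False), ("Dr. B", True), ("Dr. B", True)]
print(adherence_report(audit))  # {'Dr. A': 50.0, 'Dr. B': 100.0}
```

The qualitative findings above concern everything this sketch omits: how such numbers are benchmarked, how quickly and how often they are fed back, and whether clinicians perceive the underlying data as valid.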

Financial Incentives

Financial incentives have been evaluated for their impact on provider performance and quality-of-care measures, including appropriate prescribing for specific conditions such as heart failure and appropriate delivery of preventive services (Werner and Dudley, 2009). Medicare, other insurers, and integrated health plans have begun tying reimbursement rates to targets for performance or improvement. Many “pay for performance” interventions have targeted hospitals or physician groups, in part because of the need to have sufficient numbers to measure performance reliably. Integrated health plans have employed incentives targeting individual clinicians. The limited literature on individual-level incentives, targeting measures of preventive care, diabetes, asthma, and heart failure, suggests generally positive effects (Christianson et al., 2008; Giuffrida et al., 1999; Greene and Nash, 2009; Petersen et al., 2006). Petersen’s review reported that five of six studies of physician-level incentives and seven of nine studies of group-level incentives found partial or positive effects on quality-of-care process measures (e.g., cervical cancer screening, mammography, and hemoglobin A1c testing) (Petersen et al., 2006). Obstacles associated with incentives also have been documented: physicians may try to “game” measures by excluding certain patients; improvements may reflect better documentation rather than practice changes; and performance targets and payment strategies must be tailored to the goals of the incentive program and the performance variations of participating practices (Christianson et al., 2008; Werner and Dudley, 2009).

Social System/Context of Practice

Clearly, the social system or context of care delivery matters when implementing CPGs (Anderson et al., 2005; Batalden et al., 2003; Cummings et al., 2007; Estabrooks et al., 2008; Fleuren et al., 2004; Fraser, 2004; Greenhalgh et al., 2005a; Kirsh et al., 2008; Kochevar and Yano, 2006; Kothari et al., 2009; Litaker et al., 2008; Redfern and Christian, 2003; Rubenstein and Pugh, 2006; Scott-Findlay and Golden-Biddle, 2005; Scott et al., 2008; Stetler, 2003; Stetler et al., 2009; Titler et al., 2009; Yano, 2008). The implementation strategies described above are instituted within a system of care delivery. Strategies that focus on organizational factors alter the clinical practice environment by systematizing work processes and involving physicians and others (e.g., nurses, physical therapists) in guideline implementation. The underlying principle of organizational implementation strategies is creating systems of practice that make it easier to consistently adopt guideline recommendations.

Factors within and across healthcare systems that foster use of CPGs include overall size and complexity of the healthcare system, infrastructure support (e.g., absorptive capacity for new knowledge; assessing and structuring workflow), multihealth system collaboratives, and professional associations. Each is described briefly in the following sections.

Healthcare Systems

Type (e.g., public, private) and complexity of healthcare organizations influence adoption of CPG recommendations. For example, Vaughn et al. (2002) demonstrated that organizational resources, physician full-time equivalents per 1,000 patient visits, organizational size, and urbanicity affected use of evidence in the VA healthcare system. Aarons et al. (2009) demonstrated in a large multisite study that providers working in private organizations had more positive attitudes toward evidence-based practices and their organizations provided more support for implementing CPG recommendations (Aarons et al., 2009; Yano, 2008).

Large, mature, functionally differentiated organizations (e.g., those divided into semiautonomous departments and units) that are specialized, with concentrations of professional knowledge, resources available to channel into new projects, decentralized decision making, and low levels of formalization, will more readily adopt innovations such as new CPG-based practices. Larger organizations are generally more innovative because size increases the likelihood that other predictors of CPG adoption will be present, such as financial and human resources and role differentiation (Greenhalgh et al., 2005a; Yano, 2008). Establishing semiautonomous teams is associated with successful implementation of CPGs and thus should be considered in managing organizational units (Adler et al., 2003; Grumbach and Bodenheimer, 2004; Shojania et al., 2006; Shortell, 2004).

Infrastructure Support

Infrastructure support to promote use of CPG recommendations is defined in a variety of ways, but usually includes absorptive capacity, leadership, and a technology infrastructure (discussed in the section on Electronic Interventions for CPG Implementation) that supports application of CPG recommendations at the point of care delivery. Absorptive capacity is the knowledge and skill needed to enact CPG recommendations; strength of evidence alone will not promote adoption. An organization that is able to systematically identify, capture, interpret, share, reframe, and recodify new knowledge, and then use it appropriately, will be better able to assimilate CPG recommendations (BootsMiller et al., 2004; Ferlie et al., 2001; Stetler et al., 2009; Wensing et al., 2006). Variation in capacity for change affects sustained implementation of evidence-based preventive service delivery in community-based primary care practices (Litaker et al., 2008).

A learning culture and proactive leadership that promotes knowledge sharing are important components of building absorptive capacity for new knowledge (Estabrooks, 2003; Horbar et al., 2004; Lozano et al., 2004; Nelson et al., 2002). Components of a receptive context include strong leadership, clear strategic vision, good managerial relations, visionary staff in key positions, a climate conducive to experimentation and risk taking, and effective data capture systems. Leadership is critical in encouraging organizational members to break out of the convergent thinking and routines that are the norm in large, well-established organizations (Greenhalgh et al., 2005b; Hagedorn et al., 2006; Litaker et al., 2008; Stetler et al., 2006a; Ward et al., 2006).

An organization may be generally amenable to adopting new practices, but not ready or willing to assimilate particular CPG recommendations. Elements of system readiness include tension for change; CPG-system fit; assessment of implications, support and advocacy for a CPG; dedicated time and resources; and capacity to evaluate the impact of a CPG during and following implementation (Greenhalgh et al., 2005a; Hagedorn et al., 2006).

Structuring workflow to fit with CPG recommendations is an important component of fostering adoption. If implications of a CPG are fully assessed, anticipated, and planned, the recommendations are more likely to be adopted (Kochevar and Yano, 2006; Stetler et al., 2006b; Yano, 2008). If supporters of a specific CPG outnumber and are more strategically placed within the organizational power base than opponents, the CPG is more likely to be adopted by the organization (Bradley et al., 2004b; Hagedorn et al., 2006; Solberg, 2009).

Leadership support is important for promoting use of CPG recommendations (Cullen, 2005; Katz et al., 2004a,b; Scott-Findlay and Golden-Biddle, 2005; Solberg, 2009; Stetler et al., 2009). This support is expressed verbally and by providing necessary resources, materials, and time to fulfill assigned responsibilities. Senior leaders of health systems need to do the following tasks: (1) create an organizational mission and strategic plan that incorporates use of CPG recommendations; (2) implement staff performance expectations that include using CPG recommendations; (3) integrate the work of CPG implementation into the governance structure of the healthcare system; (4) demonstrate the value of CPGs through administrative behaviors; and (5) establish explicit expectations that leaders will create microsystems that value and support clinical inquiry (Cullen, 2005; Solberg, 2009; Titler et al., 2002).

A review of organizational interventions to implement CPGs examined five major modalities and suggests that revision of professional roles (changing the responsibilities and work of health professionals, such as expanding the roles of nurses and pharmacists) improved processes of care, but questions remain regarding effects on patient outcomes. Multidisciplinary teams (collaborations of physicians, nurses, and allied health professionals) resulted in improved patient outcomes, mostly in prevalent chronic diseases. Integrated care services (e.g., disease management and case management) resulted in improved patient outcomes and cost savings. Interventions aimed at knowledge management (optimal organization of knowledge within an organization, principally via use of technology to support patient care) resulted in improved adherence to CPG recommendations and patient outcomes. The last category, quality management, had the fewest studies available for analysis, leading to mixed findings of effectiveness. A number of organizational interventions were not included in this review (e.g., leadership, process redesign, organizational learning), and the authors note that the lack of a widely accepted taxonomy of organizational interventions hinders examination of effectiveness across investigations (Wensing et al., 2006).

An example of an effective organizational infrastructure for implementation is detailed in Hyatt and colleagues’ description of Kaiser Permanente of Southern California’s diabetes guideline intervention (Hyatt et al., 2002). Kaiser’s multicomponent intervention included the following:

  1. Development of an electronic registry and tracking system, automatically including and updating all clinical information about all patients with diabetes and organizing them into risk levels associated with specific guidelines

  2. Care management summary sheets sent to clinicians the day of a scheduled patient visit, providing organized over-time data with embedded guideline recommendations

  3. Outreach letters provided to patients regarding missing tests or immunizations, with the letters serving as orders

  4. Automated telephone reminders to patients

  5. Summary and detailed feedback reports, termed “Physician-specific panel reports,” available online and mailed to primary care physicians and diabetologists twice per year

  6. Standing orders for tests, immunizations, emergency department visits, and hospital discharge

  7. Pharmacist counseling

  8. Care management protocols for nurses

  9. Guideline-incorporated telephone patient reminders about diabetes and its care (e.g., test results and/or advice for follow-up care)

This comprehensive organizational strategy was associated with large changes in select relevant performance measures over time.

For example, microalbuminuria testing and lipid testing increased from 10 to 55 percent and from 44 to 65 percent, respectively, between 1994 and 2001. For both measures, gains were relatively modest in the years immediately after the guidelines were released, but accelerated in 1998, when implementation strategies such as patient outreach letters and computer-generated, physician-specific panel reports were enacted. Improvements were not detected over time across the other outcome measures (i.e., lipid control, lipid medication use, and HbA1c control), with the exception of hospitalization rates, which can serve as a proxy for morbidity (Hyatt et al., 2002). Although specific intervention strategies might have benefitted from organizational size, most have been implemented in much smaller clinic settings without electronic technology (Solberg et al., 2006). Leader and staff commitment to implementing change was a major contributor to success.

An organizational implementation strategy receiving more recent attention is the tailored intervention to overcome barriers to change (Baker et al., 2010; Hagedorn et al., 2006; Kochevar and Yano, 2006). This type of intervention focuses on assessing needs regarding factors contributing to gaps between current practices and CPG recommendations; discussions regarding behaviors and/or system mechanisms requiring change; discussions about organizational units and persons appropriate for inclusion; and identification of ways to facilitate change. This information is then used to tailor an intervention for the setting that will promote use of specified CPG recommendations. Based on a recent systematic review of 26 studies, the effectiveness of tailored implementation interventions is modest and shows wide variation across studies (Baker et al., 2010). The tailored implementation approach has not yet been developed to the point where there is wide agreement about the design and components of its constituent elements (Baker et al., 2010). There is insufficient empirical understanding of how to link barriers and facilitators of change to effective interventions (Baker et al., 2010; Wallin, 2009).

Collaboratives

Collaborations across health systems are another mechanism for implementation of CPGs (Graham et al., 2009). The work of the Institute for Clinical Systems Improvement (ICSI) is an illustrative example. ICSI was formed in 1993 in Minnesota to encourage cooperative development of evidence-based clinical guidelines by HealthPartners, Mayo Clinic, and Park Nicollet Health Services, and shortly moved to a CPG implementation focus, based largely on organizational change strategies (Allen, 2008; Farley et al., 2003). ICSI today is composed of 57 medical groups representing about 85 percent of Minnesota physicians (ICSI, 2010).

ICSI developed an organizational strategy to implement one of its earliest CPGs, for simple urinary tract infections (UTIs) in women. ICSI recommended treating uncomplicated cystitis in females ages 18–64 with selected antibiotics for 3 days, in the absence of a urine culture. The existing practice was to treat for 10 days after confirming infection with urine culture results that required several days to obtain. This guideline did not specify who should perform the recommendation, so several medical groups delegated UTI cases to a registered nurse (RN) to handle by telephone. Explicit guideline recommendations directed the RN to rule out more complex cases and triage them to a physician. O’Connor et al. (1996) studied this approach in 5 clinics, identifying 441 guideline-eligible patients, and found that adoption of a 3-day course of treatment increased from 28 to 52 percent while urine culture rates dropped from 70 to 37 percent. There was no evidence of clinical harm in guideline-treated cases, and cost of care declined by 35 percent per case. Notably, improved guideline adherence was found only for cases managed by nurses. Although RNs treated patients with cystitis symptoms, physician visits occurred if a patient insisted, or if appointment secretaries failed to elicit symptoms. O’Connor and colleagues found no significant change in the use of 3-day treatment or urine cultures when cystitis patients were managed by physicians (O’Connor et al., 1996).

Professional Associations

Guideline implementation is facilitated by many professional associations. For example, the American College of Cardiology developed the Guidelines Applied in Practice (GAP) project in 2000, starting with its guideline for management of patients with acute myocardial infarction (ACC, 2010). Like most large-scale implementation efforts, GAP focuses on specific organizational strategies, facilitated by a tool kit that includes a template for orders, a critical pathway, patient information, a discharge form, chart stickers, performance charts, and a pocket guide. The GAP quality improvement project measured implementation of improvement strategies in 10 acute care hospitals in southeast Michigan. Mehta and colleagues found that adherence to key treatments increased for administration of aspirin (81 percent vs. 87 percent; P = .02) and beta-blockers (65 percent vs. 74 percent; P = .04) at admission, and for use of aspirin (84 percent vs. 92 percent; P = .002) and smoking cessation counseling (53 percent vs. 65 percent; P = .02) at discharge. The authors observed nonsignificant but favorable trends toward adherence to treatment goals for the remaining indicators (Mehta et al., 2002). GAP’s counterpart, the American Heart Association’s Get With The Guidelines (GWTG), is based on a similar tool kit containing order sets, clinical pathways, web-based patient management tools, decision support tools, registries, regional workshops, teleconferences, and patient education aids (AHA, 2010). Fonarow et al.’s (2010) evaluation of GWTG programming from 2003 to 2009 in 1,256 hospitals concluded that treatment rates for ischemic stroke patients improved significantly over time, based on selected performance measures. Improvements were realized in all age groups, narrowing age-related treatment gaps.

In summary, multiple organizational factors influence implementation of CPG recommendations. While allowance for alternatives to CPG recommendations is necessary given patient variation and preferences as well as contrasting guideline implementation processes across clinical topics and actions, implementation strategies at the organizational level are critical.

Multifaceted Interventions

Multifaceted implementation strategies are needed to promote use of research evidence in clinical and administrative healthcare decision making (Bertoni et al., 2009; Feldman et al., 2005; Greenhalgh et al., 2005b; Katz et al., 2004a,b; Murtaugh et al., 2005; Nieva et al., 2005; Rubenstein and Pugh, 2006; Solberg et al., 2000; Titler et al., 2009). Grimshaw’s 2004 review of implementation interventions included 61 studies comparing various combinations of interventions to a control, most frequently printed materials or educational meetings (Grimshaw et al., 2004b). More intensive educational efforts (including outreach) appeared to be more effective than simpler ones, and the addition of reminders to educational interventions was more effective than educational measures alone. The Leeds Castle international roundtable and several other recent syntheses of systematic reviews are complementary in concluding that multifaceted implementation interventions are more effective than single modalities (Francke et al., 2008; Grimshaw et al., 2001, 2003; Gross, 2000; Gross et al., 2001; Prior et al., 2008). Given this evidence supporting the relative effectiveness of multifaceted intervention strategies and their dependence on organizations, it seems that implementation of CPGs requires multifaceted approaches that include both individual and organizational strategies (Sales et al., 2010). Fundamentally, for trustworthy guidelines to affect quality of care and patient outcomes, they must be implemented; hence, the committee offers the following recommendation:


RECOMMENDATION: INDIVIDUAL AND ORGANIZATIONAL INTERVENTIONS FOR CPG IMPLEMENTATION

Effective multifaceted implementation strategies targeting all relevant populations affected by CPGs should be employed by implementers to promote adherence to trustworthy CPGs.

ELECTRONIC INTERVENTIONS FOR CPG IMPLEMENTATION

Data and Systems’ Challenges

The federal government’s recent appropriation of $19 billion in the 2009 stimulus bill to promote adoption and use of health information technology, particularly electronic health records (Blumenthal, 2009), combined with the growing number of large, integrated delivery systems (e.g., Geisinger, Group Health Cooperative of Puget Sound, Kaiser) adopting multifunctional health information systems, has convinced many health policy professionals that guidelines must become electronically compatible to have any hope of influencing future practice. The following sections explore the current state of electronic clinical decision support (CDS) and directions for moving the digital application of CPGs forward.

Computer-aided clinical decision support, often based on translation of CPGs, should facilitate a more personalized and timely form of guideline-based care. Diagnostic decision support, preventive care reminders, disease management or protocols for bundles of reminders, and drug dosing and prescribing protocols are all examples of interactive, point-of-care CDS (Garg et al., 2005). Interactive, point-of-care CDS relies on input of structured patient data, which then are processed by knowledge-based rules or statistical algorithms to generate output in support of a clinical decision (Berg, 1997; Berner, 2009). Empirical support for guideline-based CDS interventions is mixed. Positive results include a 1999 evaluation of EHR-generated physician reminders to follow post-fracture osteoporosis guidelines at a Pacific Northwest health maintenance organization. At 6 months post-fracture, 51.9 percent of patients of physicians exposed to the electronic reminder intervention received CPG-recommended bone mineral density (BMD) measurement or osteoporosis medication, compared to 5.9 percent of patients of physician controls. The study also evaluated use of patient educational mailings in addition to the EHR physician advice, but found no statistical difference relative to provider EHR advice alone (Feldstein et al., 2006).

Another successful CDS intervention involved an Internet-based decision-support system for applying American Thoracic Society and Centers for Disease Control and Prevention guidelines for tuberculosis preventive therapy. The web tool offered patient-tailored recommendations based on patient-specific input data supplied by physicians. A randomized controlled trial (RCT) including general internal medicine residents found that 95.8 percent of those with access to the web tool correctly applied recommended therapy, compared with 56.6 percent of the group with access to only written resources (Dayton et al., 2000). Furthermore, a 2006 RCT evaluating AsthmaCritic, a guideline-based critiquing system, in 32 Dutch general practices found the system altered the way physicians monitored and, to a lesser extent, treated their patients, bringing practice more closely in line with asthma and chronic obstructive pulmonary disease guidelines (Kuilboer et al., 2006).

In 2007, Kaiser Permanente’s Southern California Region developed the Proactive Office Encounter (POE) program to improve consistency of preventive care and quality of care for chronic conditions. The POE sought to engage staff in both primary care and specialty departments to assist physicians by using standard work flows and electronic tools to identify gaps in patient care. POE was more comprehensive and successful than earlier attempts to address preventive and chronic care needs, such as the Care Management Summary Sheets mentioned earlier in this chapter: “Since its inception, POE has contributed to sharp improvement in the Southern California Region’s clinical quality performance, including double digit improvements in colorectal cancer screening, advice to quit smoking, and blood pressure control” (Kanter et al., 2010).

A 2008 Cochrane review evaluated 26 comparisons to assess whether computerized advice on drug dosing has beneficial effects on provider prescribing and dosing of drugs (Durieux et al., 2008). Findings showed that the computerized advice for drug dosage (1) increases the initial dose of drug and tends to increase serum concentrations; (2) leads to more rapid therapeutic control; (3) decreases hospital length of stay; and (4) decreases toxic drug levels, but has no effect on adverse reactions. A Cochrane review of 28 studies reporting 32 comparisons of on-screen, point-of-care computer reminders found that computerized reminders achieved small to modest improvements (< 10 percent) in provider behaviors. No specific reminder or contextual features were significantly associated with magnitude of effect (Shojania et al., 2010).

Reports of a small number of additional individual CDS interventions offer contrasting results. A randomized trial of electronic clinical reminders to improve diabetes and coronary artery disease (CAD) care among primary care physicians resulted in limited effectiveness. Although reminders increased the odds that participants followed recommended diabetes and CAD care, adherence to quality measures remained low and significant variability in practice persisted (Sequist et al., 2005). A 2004 German evaluation of a guideline-based computerized educational tool found no significant difference in guideline knowledge between physician groups with and without access to the tool (Butzlaff et al., 2004). And a 2002 English evaluation of the use of CDS to aid implementation of CPGs for the management of asthma and angina by primary care practitioners found that CDS had no significant effect on consultation rates, process of care measures (including prescribing), or any patient-reported outcomes for either condition (Eccles et al., 2002).

Explaining and Enhancing the State of the Art

An emergent literature sheds some light on possible underlying explanations for CDS successes and failures, and carries implications for enhancing the state of the art. Wright et al. offer a taxonomy for interactive, point-of-care CDS composed of four functional features: (1) triggers that cause decision support rules to be invoked (e.g., prescribing a drug); (2) input data elements used by a rule to make patient inferences (e.g., medication orders); (3) interventions, or the possible actions a decision support module can take (e.g., displaying a relevant medication guideline); and (4) offered choices, or the options available to a decision support user when a rule is invoked (e.g., change a medication order) (Wright et al., 2007). Table 6-1 elaborates on Wright’s functional features across several examples of guideline-based CDS.
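Wright’s four functional features map naturally onto a small rule structure. The following Python sketch is purely illustrative: the `Patient` fields, the drug-interaction rule, and all names are hypothetical, not drawn from any real CDS product or from the systems discussed in this chapter.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Patient:
    """Minimal stand-in for structured EHR input data."""
    age: int
    medications: List[str] = field(default_factory=list)

@dataclass
class CDSRule:
    """A point-of-care rule organized by Wright's four features."""
    trigger: str                           # (1) event that invokes the rule
    condition: Callable[[Patient], bool]   # (2) inference over input data elements
    intervention: str                      # (3) action the module takes
    offered_choices: List[str]             # (4) options shown when the rule fires

def fire_rules(event: str, patient: Patient,
               rules: List[CDSRule]) -> List[Tuple[str, List[str]]]:
    """Return the interventions (and choices) for rules matching this event."""
    return [(r.intervention, r.offered_choices)
            for r in rules
            if r.trigger == event and r.condition(patient)]

# Hypothetical rule: warn on an NSAID order when warfarin is already listed.
interaction_rule = CDSRule(
    trigger="medication_order:ibuprofen",
    condition=lambda p: "warfarin" in p.medications,
    intervention="Display drug-interaction guideline excerpt",
    offered_choices=["Change medication order", "Override with reason", "Cancel order"],
)

patient = Patient(age=70, medications=["warfarin"])
fired = fire_rules("medication_order:ibuprofen", patient, [interaction_rule])
```

The point of the sketch is that every element a rule needs — the trigger, the input data it reads, the intervention it takes, and the choices it offers — must be made explicit before a guideline can run as CDS.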

TABLE 6-1 CDS Types and Features

Osteoporosis CDS
  Goal of CDS: Improve guideline-recommended osteoporosis care using EHR reminders
  CDS specificity: Generic / Highly tailored
  Trigger: Search of electronic databases for patients meeting criteria for increased osteoporosis risk
  Input data: Demographic and diagnostic information from the EHR used to identify patients requiring management
  Intervention: Tailored inbox message in the EHR that links to the patient record
  Offered choices: Inbox message lists internal and external guideline resources that provide detailed information on osteoporosis evaluation and management

Academic information platform for CPG use in practice
  Goal of CDS: Improve physician knowledge of guidelines
  CDS specificity: Generic
  Trigger: Physician volition (i.e., no EHR-based trigger)
  Input data: None
  Intervention: Availability of web-based or CD-ROM–based access to text of guidelines for dementia, chronic heart failure, urinary tract infection, and colorectal carcinoma
  Offered choices: None; guidelines are read-only

Internet-based decision support for tuberculosis therapy
  Goal of CDS: Deliver patient-specific guideline advice to primary care physician via Electronic Health Record (EHR) message
  CDS specificity: Generic
  Trigger: Physician volition (i.e., no EHR-based trigger)
  Input data: Physician-provided data on patient characteristics and clinical reaction to diagnostic test
  Intervention: Web-based implementation of hierarchical decision tree for administering preventive therapy
  Offered choices: Guideline-based recommendations for treatment

Clinical reminders for diabetes, coronary heart disease
  Goal of CDS: Improve quality of care for diabetes and heart disease using EHR reminders
  CDS specificity: Generic / Highly tailored
  Trigger: Physician opens medical record
  Input data: EHR data (lab, radiology results, problem list, medication list, allergy list)
  Intervention: Reminders list in the EHR in the context of other patient data
  Offered choices: Care recommendation; reminders were actionable, but did not require acknowledgment or link to intervention

AsthmaCritic
  Goal of CDS: Provide patient-specific asthma treatment feedback using EHR data
  CDS specificity: Generic / Highly tailored
  Trigger: Automatic when record is open and asthma-specific data are entered
  Input data: Physician-entered data on diagnosis and treatment
  Intervention: On-screen, patient-specific comments presented to physician, tailored to current clinical situation
  Offered choices: Physician presented with “critiquing comments” related to treatment decisions; can drill down to view guidelines to understand reason for comment

SOURCE: Jones et al. (2010).

Problems arise with Wright’s framework in real-world situations. To be of value to clinicians and patients, CDS needs to be complemented by easily accessible patient input data, largely from the EHR. A recent study estimated that less than 10 percent of U.S. hospitals have a basic EHR (the ability to record patient demographic and health data and to manage prescription order entry and laboratory and imaging results) and less than 2 percent have a comprehensive EHR (adding order entry management and CDS capabilities). In the outpatient setting, a national survey placed the estimates at 17 and 4 percent for basic and comprehensive EHRs, respectively (DesRoches et al., 2008). Theoretically, a great volume of input data is available directly from patients. However, collection of patient-reported data (PRD) in routine practice has been limited by operational challenges (Jones et al., 2007). The emergence of web-based technologies may allow for greater capture and real-time use of structured PRD. Yet even when EHR systems are in place, input data may be poorly represented (e.g., the U.S. Preventive Services Task Force gonorrhea guideline’s requirement for sexual activity assessment). Fundamentally, PRD will not be useful in translating CPGs to practice unless data are captured in valid, reliable, and actionable form.

Furthermore, devising appropriate guideline-based CDS interventions poses other obstacles due to treatment diversity. For example, the Geisinger health system has developed an EHR-based CDS model (“eDiabetes”) for expert treatment guidance and management of HbA1c in diabetes. Four input variables are used to identify patient-specific treatment advice from 93 therapeutic alternatives. Notably, each additional input variable increases the accuracy of output and the specificity of advice offered, but exponentially inflates the size and complexity of the CDS database (Miller et al., 2001).
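The combinatorial pressure described here is easy to see: the number of distinct input combinations a rule base must cover is the product of the input variables’ cardinalities, so each added variable multiplies the table size. A small sketch follows; the variable names and level counts are invented for illustration and are not Geisinger’s actual model.

```python
from math import prod

# Hypothetical input variables and their number of levels (illustrative only).
input_variables = {
    "current_regimen": 6,
    "hba1c_band": 4,
    "renal_function_band": 3,
    "contraindication_class": 4,
}

def rule_table_size(variables: dict) -> int:
    """Worst-case number of input combinations the CDS must map to advice."""
    return prod(variables.values())

base = rule_table_size(input_variables)        # 6 * 4 * 3 * 4 = 288 combinations
input_variables["adherence_band"] = 5          # add one more 5-level variable
expanded = rule_table_size(input_variables)    # 288 * 5 = 1,440 combinations
```

Adding a single 5-level variable quintuples the space of cases to encode and maintain, which is why richer advice comes at a steep authoring cost.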

All in all, the current state of CDS is far from ideal, largely because the data necessary to support Wright’s four functional features are not easily obtainable, or systems lack the sophistication to handle them. Where guidelines have been applied, CDS interventions usually are idiosyncratic to a given healthcare setting. Initial implementation of an EHR is followed, often rapidly, by naïve attempts to implement rudimentary forms of CDS (e.g., alerts of potential drug–drug interactions). Many providers find these alerts interruptive, unhelpful, and unsatisfying, a phenomenon the literature terms “alert fatigue” (Sittig et al., 2009a; Wright et al., 2009). More robust forms of CDS require translation of “knowledge” (e.g., as embodied by guidelines) to a structured, computer-ready form before use in an EHR CDS protocol. Implementation of these CDS types is even more daunting. Lack of accepted standards for clinical vocabularies, CDS formats, clinical workflow applications, and clinical and patient-reported data further limits electronic use of CPGs.

Furthermore, despite numerous attempts, there is no universally accepted means of translating guidelines into CDS-related protocols. A number of guideline representation approaches, allowing for translation of CPG knowledge to a structured form prior to use in an EHR CDS protocol, are actively being developed; a few (e.g., Arden Syntax, the Guideline Interchange Format, the Guideline Elements Model) have been accepted as standards by organizations such as the American Society for Testing and Materials (ASTM) and Health Level Seven (HL7), but such acceptance is not indicative of use in practice outside of research settings (Open Clinical, 2010).

Generalizability Challenges

Increased adoption of EHRs and CDS will offer unique opportunities to rapidly move clinical knowledge from the scientific literature to the patient encounter. Earlier we discussed many data- and systems-driven challenges inherent to CDS and its application to guideline implementation. As alluded to above, there is yet another realm in which substantial advances are required before CDS-based implementation can be realized: standardization and codification of CPGs for uniform adoption across the diversity of care settings.

In an effort to derive generalizable principles for CPG implementation via CDS, the CDS Consortium (CDSC), funded by the Agency for Healthcare Research and Quality, has studied CDS practices at five institutions with both commercially developed and internally developed EHR and CDS systems: Partners HealthCare in Boston; Wishard Health System/Regenstrief Institute and Roudebush VA Medical Center in Indianapolis; the Mid-Valley Independent Physicians Association in Salem, Oregon; and the University of Medicine and Dentistry of New Jersey in New Brunswick, New Jersey. From this effort arose guidance for enhancing CDS-driven CPG implementation, founded on locally extant knowledge and systems yet applicable to the universe of clinical practice environments. Overall, CDSC emphasizes that to be more actionable in a digital environment, CPG structure (format, lexicon, and content) should facilitate simple and efficient adoption by health systems organizations. Specifically, CPGs will be of greater use if they are structured to identify clinical and administrative data triggers according to Wright’s model (i.e., define relevant patient subgroup triggers and/or input data, intervention options, and offered choices) and guide physicians and patients in making optimal, evidence-based decisions. Furthermore, guideline developers should minimize the ambiguity of their recommendations to facilitate incorporation in a computer-executable form. Whenever possible, guidelines should state explicitly when particular CDS rules apply in a clinical context. For example, allowing rules to be “turned off” when not warranted will assist in reducing “alert fatigue.”
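As a sketch of what an unambiguously encoded, selectively filterable recommendation might look like, consider the following. The rule name, the systolic threshold of 140, and the context labels are hypothetical illustrations, not taken from any published guideline.

```python
from dataclasses import dataclass

@dataclass
class ComputableRecommendation:
    """A guideline statement with explicit, computer-executable logic."""
    name: str
    systolic_threshold: float   # explicit value, not "if blood pressure is high"
    applies_to: set             # contexts where the rule is warranted
    enabled: bool = True        # sites may turn the rule off to curb alert fatigue

    def fires(self, systolic_bp: float, context: str) -> bool:
        """Fire only when enabled, in scope, and over the stated threshold."""
        return (self.enabled
                and context in self.applies_to
                and systolic_bp >= self.systolic_threshold)

# Hypothetical encoding of a hypertension follow-up rule.
htn_rule = ComputableRecommendation(
    name="hypertension-follow-up",
    systolic_threshold=140.0,
    applies_to={"primary_care", "internal_medicine"},
)

in_scope = htn_rule.fires(152, "primary_care")   # explicit threshold met, in scope
filtered = htn_rule.fires(152, "oncology")       # selective filtering by context
htn_rule.enabled = False                         # local site-level "turn off"
silenced = htn_rule.fires(152, "primary_care")   # disabled rule never fires
```

Because the threshold, scope, and on/off state are all explicit data rather than prose, a local implementer can adopt, filter, or silence the rule without reinterpreting the guideline text.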

CDS protocols need to accommodate the needs of end-users, the designation of appropriate personnel, and insertion points in the clinical workflow. Physicians are trained to complete cognitively demanding tasks, process complex information, and make judgments in the face of uncertainty. Accordingly, they may not be effective or efficient in performing rudimentary tasks better suited to less skilled staff or to complete automation. Where strong evidence indicates when and for whom a care process or treatment should be implemented (e.g., Pneumovax in older patients), it may be sensible to prompt the clinical action 100 percent of the time (Dexter et al., 2001). For example, all Type II diabetics without a recent HbA1c should have this laboratory test completed at appropriate intervals, but neither the decision nor the completion of the test requires physician involvement. Thus, where the strength of CPG recommendations is high, related actions can be implemented easily (e.g., management of hypertension, hyperlipidemia, etc.), and when risks are low, as much care oversight as possible should be shifted to nonphysicians, within limits of common sense. Where the risks of confusion and of making the “wrong decision” increase (e.g., as decision complexity increases), decision support tools may become increasingly important for providers involved in care processes at all levels. Further detailed advice extending from CDSC’s research is provided in Table 6-2.
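The division of labor described here can be expressed as simple routing logic. The testing interval, labels, and function names below are illustrative assumptions for a sketch, not an implemented system.

```python
from datetime import date, timedelta

HBA1C_INTERVAL = timedelta(days=180)   # hypothetical testing interval

def route_action(strength: str, risk: str) -> str:
    """Choose a workflow insertion point for a guideline-driven action."""
    if strength == "high" and risk == "low":
        return "standing_order_queue"   # delegated; no physician involvement needed
    return "physician_with_cds"         # complex judgment: route to provider with decision support

def hba1c_overdue(last_test: date, today: date) -> bool:
    """Trigger for a routine lab action that can be fully delegated."""
    return today - last_test > HBA1C_INTERVAL

# A Type II diabetic whose last HbA1c was 9 months ago: strong evidence, low risk.
destination = "none"
if hba1c_overdue(last_test=date(2010, 6, 1), today=date(2011, 3, 1)):
    destination = route_action(strength="high", risk="low")
```

Strongly supported, low-risk actions land in a standing-order queue that nursing or laboratory staff can work directly, while everything else routes to the physician alongside decision support.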

If clinical guideline developers adopt this counsel, the CDSC believes a greater number of healthcare organizations could develop and implement the basic CDS features necessary to transfer clinical knowledge from the literature to the point of care, and begin to radically transform the quality and safety of the current health system (Sittig et al., 2009b).

Over the next several years, the CDSC anticipates new insights regarding CDS–CPG interrelationships applicable to the universe of clinical practice, extending from a number of demonstration projects, including

  1. examination of more than 50 different CDS intervention types to elucidate factors important to their integration within existing Electronic Medical Records (EMR) systems;

  2. development of a service-oriented approach to creating CDS interventions that can be used across existing EHR systems; and

  3. formulation of a “starter set” of CDS interventions to be shared among members of the CDS consortium (Sittig et al., 2009b).

TABLE 6-2 CDSC Guidance for CPG Development Activities

1. Identify standard data triggers
   Description: Guidelines should explicitly identify the clinical or administrative data required to initiate any of the electronic clinical decision support (CDS) interventions included in the guideline.
   Rationale: Required data need to be captured and stored in structured, coded fields so they can be used by CDS systems.

1.1. Review access to existing input data
   Description: Commonly available input data for use by CDS logic (e.g., for alerts) include laboratory test results, patient demographics, and the problem list; CPGs should specify only coded data types that are currently, or soon will be, available in certified EHRs.
   Rationale: Input data that are not available in certified Electronic Health Records (EHRs) will result in guidelines that cannot be incorporated in a computable manner within EHRs.

2. Work on increasing the clarity and internal consistency of all clinical logic included in guidelines
   Description: CPGs should minimize the ambiguity of their recommendations (e.g., include threshold values for blood pressure rather than stating “if the patient’s blood pressure is high then…”).
   Rationale: Logic in CPGs must be able to be incorporated in a computer-executable form.

3. Suggest appropriate personnel and the best insertion points in the clinical workflow for delivering CDS interventions
   Description: CPGs should specify how the EHR can route recommended actions to the appropriate person or role, at the right time and in the right place, based on logic included with the CDS intervention.
   Rationale: Increases CDS utility, efficiency, and integration with clinic workflows.

4. Guidelines should facilitate selective filtering or tailoring of rules
   Description: Specify explicitly in the rule’s logic description when particular rules apply or do not apply.
   Rationale: Allows rules to be turned off when they do not apply to a clinical context (e.g., specific practices, physicians, specialties, or clinical situations).

5. Guidelines should support the Health Level Seven (HL7) Infobutton standard
   Description: Items such as clinical problems, medications, and laboratory tests should be clearly defined using standardized data types.
   Rationale: Allows EHRs to link to specific sections of a guideline and provide context-sensitive explanations.

6. Composition of guideline development groups
   Description: CPG development groups/committees should include well-trained and experienced clinical informaticians.
   Rationale: CPGs will be easier to transform into computer-executable forms.

SOURCE: Jones et al. (2010).
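The guidance in Table 6-2 can be illustrated with a minimal, hypothetical sketch of a computable rule: explicit coded data triggers (recommendation 1), an unambiguous numeric threshold rather than the word “high” (recommendation 2), and a context filter that lets the rule be switched off where it does not apply (recommendation 4). The threshold value, the context labels, and the function name are illustrative assumptions, not part of any certified EHR or published guideline; LOINC 8480-6 is the standard code for systolic blood pressure.

```python
# Hypothetical computable CDS rule sketch (CDSC recommendations 1, 2, and 4):
# explicit coded input data, a numeric threshold instead of "high blood
# pressure," and a context filter so the rule can be disabled where it
# does not apply. All values are illustrative.

# Recommendation 1/1.1: the rule declares the coded inputs it requires.
REQUIRED_INPUTS = {"loinc:8480-6"}  # systolic blood pressure

# Recommendation 2: explicit threshold, not "if blood pressure is high."
SYSTOLIC_ALERT_THRESHOLD_MMHG = 140

# Recommendation 4: clinical contexts in which the rule is switched off.
EXCLUDED_CONTEXTS = {"dialysis-clinic"}

def hypertension_alert(coded_results: dict, context: str):
    """Return an alert string, or None if the rule does not fire."""
    if context in EXCLUDED_CONTEXTS:
        return None  # rule filtered out for this clinical context
    if not REQUIRED_INPUTS <= coded_results.keys():
        return None  # required structured, coded data were not captured
    systolic = coded_results["loinc:8480-6"]
    if systolic >= SYSTOLIC_ALERT_THRESHOLD_MMHG:
        return f"Systolic BP {systolic} mmHg at or above threshold"
    return None

print(hypertension_alert({"loinc:8480-6": 152}, context="primary-care"))
```

Note that the rule fires only when its declared input is present as structured, coded data, which is exactly why recommendation 1.1 asks CPGs to specify only data types available in certified EHRs.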


The committee recommends the following for advancing electronic methods for CPG implementation:


RECOMMENDATION: ELECTRONIC INTERVENTIONS FOR CPG IMPLEMENTATION

  • Guideline developers should structure the format, vocabulary, and content of CPGs (e.g., specific statements of evidence, the target population) to facilitate ready implementation of computer-aided clinical decision support (CDS) by end-users.

  • CPG developers, CPG implementers, and CDS designers should collaborate in an effort to align their needs with one another.

DECISION ANALYTIC MODELING AND CPG IMPLEMENTATION

A frontier of evidence-based medicine is the use of decision analytic modeling to assess health care alternatives. Through discussions with leaders in the field, David Eddy and Wiley Chan, the committee explored potential applications of decision analysis to the development and implementation of CPGs. The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Health Sciences Committee Taskforce on Decision Analytic Modeling wrote, “The purpose of modeling is to structure evidence on clinical and economic outcomes in a form that can help to inform decisions about clinical practices and healthcare resource allocations. Models synthesize evidence on health consequences and costs from many different sources, including data from clinical trials, observational studies, insurance claim databases, case registries, public health statistics, and preference surveys” (Weinstein et al., 2003, pp. 9–10). Though the ISPOR Taskforce found model-based evaluations to be a valuable resource for health care decision makers, it cautioned that models are to be taken as aids to decision making rather than scientific fact. The Taskforce also advocated continual assessment of models against real scientific data, and encouraged modelers to communicate clearly that their conclusions are always conditional, resting on assumptions and secondary data; hence, any flaws in the original studies will necessarily transfer to the model’s evaluations (Weinstein et al., 2003). Although the field is currently fraught with controversy, the committee acknowledges it as exciting and potentially promising; however, it decided the state of the art is not yet ready for direct comment.
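As a minimal illustration of the kind of synthesis such models perform, the sketch below computes the expected outcome of two hypothetical treatment strategies from assumed probabilities and utilities. Every number is an assumption standing in for the trial, registry, and survey data a real model would draw on, which is precisely why the ISPOR Taskforce stresses that model conclusions are conditional on their inputs.

```python
# Minimal decision-tree sketch: expected utility of two hypothetical
# treatment strategies. All probabilities and utilities are illustrative
# assumptions, not data from any study.

def expected_utility(branches):
    """branches: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * u for p, u in branches)

# Strategy A: higher cure rate but some chance of a serious complication.
strategy_a = [(0.70, 1.0),   # cured, full-health utility
              (0.10, 0.3),   # serious complication
              (0.20, 0.6)]   # no cure, chronic condition

# Strategy B: safer but less effective.
strategy_b = [(0.55, 1.0),
              (0.02, 0.3),
              (0.43, 0.6)]

eu_a = expected_utility(strategy_a)  # 0.70 + 0.03 + 0.12 = 0.85
eu_b = expected_utility(strategy_b)  # 0.55 + 0.006 + 0.258 = 0.814
print(f"Strategy A: {eu_a:.3f}, Strategy B: {eu_b:.3f}")
```

Changing any assumed probability or utility changes the ranking of strategies, which is the sense in which a model’s conclusions inherit the flaws of its source studies.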


LEGAL ISSUES AFFECTING CPG IMPLEMENTATION

Medical malpractice is a pervasive issue in health care, one that both is influenced by the use of CPGs and could influence their future use. Product liability suits, and disputes over what is or should be covered by insurance policies and how to interpret “medical necessity,” can also involve CPGs. The following section discusses some of the legal issues related to use or nonuse of guidelines.

All physicians are affected by medical malpractice, whether or not they have been sued by a patient, through the insurance premiums they pay. Although the costs of malpractice and defensive medicine are difficult to calculate, and past estimates have varied with study methods, the most recent study of the U.S. medical liability system estimates costs of $55.6 billion in 2008 dollars, including the cost of defensive medicine (Mello, 2001). Because total health spending was $2.3 trillion, malpractice accounts for an estimated 2.4 percent of the healthcare dollar. Data for estimating the cost of defensive medicine were extremely limited, and the authors relied heavily on older studies, assumptions, and extrapolations to conservatively estimate a total of $45.6 billion in hospital, physician, and clinic services. Another study of defensive medicine (Thomas et al., 2010) based its estimate of defensive medicine costs and potential savings from tort reform on an analysis of 400 million paid medical and pharmaceutical claims, from cases in selected specialties, submitted to CIGNA HealthCare from 2004 to 2006. The authors concluded that “the magnitude of savings that could be realized [from a 10 percent reduction in malpractice premiums] is small, accounting for less than 1 percent of all medical care costs in every specialty” (Thomas et al., 2010, p. 1582).
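The 2.4 percent figure cited above follows directly from the two totals; a quick arithmetic check, using only the figures quoted from the study:

```python
# Check of the share of total health spending attributed to the medical
# liability system, using the figures quoted above ($55.6 billion of
# $2.3 trillion, in 2008 dollars).
liability_cost = 55.6e9
total_health_spending = 2.3e12
share = liability_cost / total_health_spending
print(f"{share:.1%}")  # prints 2.4%
```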

In addition to costing the health system, medical liability is a cost and concern for most physicians, and defensive medicine is a cost and quality concern for health insurers and policy makers. In 2009, a national survey was mailed to a random sample of 2,416 eligible physicians, drawn from the American Medical Association’s Physician Master File, in primary care, nonsurgical (medical), surgical, and other specialties; it produced a 50 percent response rate. Of the respondents, 91 percent agreed that “Doctors order more tests and procedures than patients need to protect themselves against malpractice suits,” and 90.7 percent agreed that “Unnecessary use of diagnostic tests will not decrease without protections for physicians against unwarranted malpractice suits” (Bishop et al., 2010, p. 1081). The authors interpret their findings to “suggest that proposals to promote cost-effective care, such as the promulgation of guidelines from a national comparative effectiveness center, could be limited by physicians’ fears of malpractice unless such protections are ensured. Malpractice reforms should focus on ways of offering assurance to physicians that they will have protection against malpractice if they competently practice the standard of care” (Bishop et al., 2010, pp. 1081–1082). Another recent study, based on a 2008 Health Tracking Survey of 4,720 physicians (62 percent response rate) by the Center for Studying Health System Change, highlights the need for that assurance. The survey asked physicians about their level of concern about malpractice litigation and whether they used certain defensive practices (Carrier et al., 2010). Comparing those data to specialty and state liability environments, the authors found that physicians had high levels of concern about malpractice risk across specialties, practice settings (fee-for-service or health maintenance organization), and geographic areas; a high level of concern was expressed even by physicians in relatively low-risk malpractice environments.

It has been suggested that CPGs could be used as a “liability shield” defining a national standard of care, rather than local customary practice, and protecting physicians who follow it. Alternatively, CPGs could be used as a “liability sword” against physicians who, by not following the appropriate CPG, commit errors of misuse, underuse, or overuse with resultant complications (Rosoff, 2001). The evidentiary acceptability of CPGs is an issue: expert witnesses can introduce CPGs as legal evidence, but direct introduction of written CPGs is limited by hearsay rules. Another limiting factor is that most malpractice litigation occurs in state courts, not federal ones. Currently, most states permit defendants to escape liability if their procedure reflects customary care, even if it is not necessarily optimal care (Avraham and Sage, 2010). CPGs attempting to establish a new standard of practice might reflect the latest evidence, but not the lagging customary care in the community; thus, CPGs might not be fully used by the courts. That might also be attributed to uncertainty about what CPGs represent. According to Mello, “judicial and academic statements of what CPGs are meant to represent are characterized by confusion and overgeneralization. There exists little agreement as to whether CPGs represent a minimum baseline, a not-yet-attained ideal, or a customary practice that lies somewhere in between these two extremes” (Mello, 2001, p. 19). In fact, courts seldom even acknowledge the distinction between evidence-based and consensus-based CPGs (Avraham and Sage, 2010).

Because information on specifically how courts and lawyers use CPGs is limited, authors have recently updated a 1995 study surveying case law (Hyams et al., 1995). The original study identified 37 published cases involving CPGs. Of these cases, 28 used CPGs successfully, 22 as swords (inculpatory) and 6 as shields (exculpatory). The Avraham and Sage (2010) update reviewed judicial decisions published between January 2000 and March 2010 and found that courts continue to use guidelines only occasionally and largely conservatively. Of 28 new cases found with parties employing guidelines in some form, 16 (57 percent) involved use by plaintiffs (as “swords”), compared with 78 percent in the Hyams et al. study, and 12 (43 percent) involved use by defendants (as “shields”), compared with 22 percent in the Hyams et al. study (Avraham and Sage, 2010). The success rates of guideline users were lower than in the Hyams report.

Avraham and Sage (2010) also cite historical experiments related to malpractice reform and guideline use in Maine, Florida, and Minnesota1 during the 1990s (Florida Agency for Health Care Administration, 1998; LeCraw, 2007). Although the structure of each project differed and the link of guidelines to malpractice protection also varied, none of the projects showed a substantial positive impact on physician practice behavior and professional liability claims, settlement costs, or malpractice premiums, or they failed before an impact could be recorded (Avraham and Sage, 2010).

Overall, the application of CPGs to medical malpractice has had varying practical influence. From a larger policy view, reliance on CPGs in medical malpractice carries both a potential advantage, enhanced efficiency in establishing the standard of care, and a potential disadvantage, inordinate CPG authority over physicians’ decision-making discretion (LeCraw, 2007). Further, some CPG proponents worry that if courts use guidelines as standards of care in malpractice suits, CPG developers may become more reluctant to write strong clinical recommendations, instead watering them down with weasel words and disclaimers, for fear of legal repercussions. Yet, given an emergent trend to apply CPGs in the courts, the notion of trustworthiness may be increasingly relevant to that setting. However, mandating courts to rely on CPGs, or some other enforcement mechanism, is well beyond the scope of this committee and would be more appropriately considered in the context of major malpractice reform.

1 State of Minnesota, 1995. Minnesota Care Act of 1992, Chapter 549 (HF No. 2800).


REFERENCES

Aarons, G. A., D. H. Sommerfeld, and C. M. Walrath-Greene. 2009. Evidence-based practice implementation: The impact of public versus private sector organization type on organizational support, provider attitudes, and adoption of evidence-based practice. Implementation Science 4:83.

ACC (American College of Cardiology). 2010. CardioSource. http://www.cardiosource.org/ (accessed July 8, 2010).

Adler, P. S., S.-W. Kwon, and J. M. K. Singer. 2003. The “six-west” problem: Professionals and the intraorganizational diffusion of innovations, with particular reference to the case of hospitals. In Working paper 3–15. Los Angeles, CA: Marshall School of Business, University of Southern California.

AHA (American Heart Association). 2010. GWTG supporting guidelines. http://www.americanheart.org/presenter.jhtml?identifier=3043013 (accessed July 8, 2010).

Allen, J. 2008. Crossing the quality chasm: Taking the lead as ICSI turns 15. Minnesota Physician XXII(2)(1):12–13.

Anderson, R. A., B. F. Crabtree, D. J. Steele, and R. R. McDaniel, Jr. 2005. Case study research: The view from complexity science. Qualitative Health Research 15(5):669–685.

Avorn, J. 2010. Transforming trial results into practice change: The final translational hurdle: Comment on “Impact of the ALLHAT/JNC7 Dissemination Project on thiazide-type diuretic use.” Archives of Internal Medicine 170(10):858–860.

Avorn, J., S. B. Soumerai, D. E. Everitt, D. Ross-Degnan, M. H. Beers, D. Sherman, S. R. Salem-Schatz, and D. Fields. 1992. A randomized trial of a program to reduce the use of psychoactive drugs in nursing homes. New England Journal of Medicine 327(3):168–173.

Avraham, R., and W. M. Sage. 2010. Legal models for assuring quality of CPGs. In Committee on Standards for Trustworthy Clinical Practice Guidelines commissioned paper.

Baars, J. E., T. Markus, E. J. Kuipers, and C. J. van der Woude. 2010. Patients’ preferences regarding shared decision-making in the treatment of inflammatory bowel disease: Results from a patient-empowerment study. Digestion 81(2):113–119.

Baker, R., J. Camosso-Stefinovic, C. Gillies, E. J. Shaw, F. Cheater, S. Flottorp, and N. Robertson. 2010. Tailored interventions to overcome identified barriers to change: Effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews 3:CD005470.

Balas, E. A., S. Krishna, R. A. Kretschmer, T. R. Cheek, D. F. Lobach, and S. A. Boren. 2004. Computerized knowledge management in diabetes care. Medical Care 42(6):610–621.

Batalden, P. B., E. C. Nelson, W. H. Edwards, M. M. Godfrey, and J. J. Mohr. 2003. Microsystems in health care: Developing small clinical units to attain peak performance. Joint Commission Journal on Quality and Safety 29(11):575–585.

Bauer, M. S. 2002. A review of quantitative studies of adherence to mental health clinical practice guidelines. Harvard Review of Psychiatry 10(3):138–153.

Berg, M. 1997. Rationalizing medical work: Decision-support techniques and medical practices. Cambridge, MA: MIT Press.

Berner, E. S. 2009. Clinical decision support systems: State of the art. Rockville, MD: Agency for Healthcare Research and Quality.

Berner, E. S., C. S. Baker, E. Funkhouser, G. R. Heudebert, J. J. Allison, C. A. Fargason, Jr., Q. Li, S. D. Person, and C. I. Kiefe. 2003. Do local opinion leaders augment hospital quality improvement efforts? A randomized trial to promote adherence to unstable angina guidelines. Medical Care 41(3):420–431.


Bertoni, A. G., D. E. Bonds, H. Chen, P. Hogan, L. Crago, E. Rosenberger, A. H. Barham, C. R. Clinch, and D. C. Goff, Jr. 2009. Impact of a multifaceted intervention on cholesterol management in primary care practices: Guideline adherence for heart health randomized trial. Archives of Internal Medicine 169(7):678–686.

Bishop, T. F., A. D. Federman, and S. Keyhani. 2010. Physicians’ views on defensive medicine: A national survey. Archives of Internal Medicine 170(12):1081–1083.

Blumenthal, D. 2009. Stimulating the adoption of health information technology. New England Journal of Medicine 360(15):1477–1479.

Boivin, A., J. Green, J. van der Meulen, F. Legare, and E. Nolte. 2009. Why consider patients’ preferences?: A discourse analysis of clinical practice guideline developers. Medical Care 47(8):908–915.

BootsMiller, B. J., J. W. Yankey, S. D. Flach, M. M. Ward, T. E. Vaughn, K. F. Welke, and B. N. Doebbeling. 2004. Classifying the effectiveness of Veterans Affairs guideline implementation approaches. American Journal of Medical Quality 19(6):248–254.

Bradley, E. H., E. S. Holmboe, J. A. Mattera, S. A. Roumanis, M. J. Radford, and H. M. Krumholz. 2004a. Data feedback efforts in quality improvement: lessons learned from U.S. hospitals. Quality and Safety in Health Care 13(1):26–31.

Bradley, E. H., M. Schlesinger, T. R. Webster, D. Baker, and S. K. Inouye. 2004b. Translating research into clinical practice: Making change happen. Journal of the American Geriatrics Society 52(11):1875–1882.

Brooks, J. M., M. G. Titler, G. Ardery, and K. Herr. 2009. Effect of evidence-based acute pain management practices on inpatient costs. Health Services Research 44(1):245–263.

Buetow, S. A., and M. Roland. 1999. Clinical governance: Bridging the gap between managerial and clinical approaches to quality of care. Quality Health Care 8(3):184–190.

Butzlaff, M., H. Vollmar, B. Floer, N. Koneczny, J. Isfort, and S. Lange. 2004. Learning with computerized guidelines in general practice?: A randomized controlled trial. Family Practice 21(2):183–188.

Carrier, E. R., J. D. Reschovsky, M. M. Mello, R. C. Mayrell, and D. Katz. 2010. Physicians’ fears of malpractice lawsuits are not assuaged by tort reforms. Health Affairs 29(9):1585–1592.

Carter, B. L., A. Hartz, G. Bergus, J. D. Dawson, W. R. Doucette, J. J. Stewart, and Y. Xu. 2006. Relationship between physician knowledge of hypertension and blood pressure control. Journal of Clinical Hypertension (Greenwich) 8(7):481–486.

Chin, M. H., S. Cook, M. L. Drum, L. Jin, M. Guillen, C. A. Humikowski, J. Koppert, J. F. Harrison, S. Lippold, and C. T. Schaefer. 2004. Improving diabetes care in midwest community health centers with the health disparities collaborative. Diabetes Care 27(1):2–8.

Chong, C. A., I. J. Chen, G. Naglie, and M. D. Krahn. 2009. How well do guidelines incorporate evidence on patient preferences? Journal of General Internal Medicine 24(8):977–982.

Christianson, J. B., S. Leatherman, and K. Sutherland. 2008. Lessons from evaluations of purchaser Pay-for-Performance programs: A review of the evidence. Medical Care Research Review 65(6 Suppl):5S–35S.

Coronel, S., and M. J. Krantz. 2007. Medical therapy for symptomatic heart failure: A contemporary treatment algorithm. Critical Pathways in Cardiology: A Journal of Evidence-Based Medicine 6(1):15–17.

Cullen, L. 2005. Evidence-based practice: Strategies for nursing leaders. In Leadership and nursing care management, 3rd ed., edited by D. Huber. Philadelphia, PA: Elsevier. Pp. 461–478.


Cummings, G. G., C. A. Estabrooks, W. K. Midodzi, L. Wallin, and L. Hayduk. 2007. Influence of organizational characteristics and context on research utilization. Nursing Research 56(4 Suppl):S24–S39.

Damschroder, L., D. Aron, R. Keith, S. Kirsh, J. Alexander, and J. Lowery. 2009. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science 4(1):50.

Davies, P., A. Walker, and J. Grimshaw. 2010. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implementation Science 5(1):14.

Dayton, C. S., J. Scott Ferguson, D. B. Hornick, and M. W. Peterson. 2000. Evaluation of an Internet-based decision-support system for applying the ATS/CDC guidelines for tuberculosis preventive therapy. Medical Decision Making 20(1):1–6.

Demakis, J. G., L. McQueen, K. W. Kizer, and J. R. Feussner. 2000. Quality Enhancement Research Initiative (QUERI): A collaboration between research and clinical practice. Medical Care 38(6 Suppl 1):I17–I25.

DesRoches, C. M., E. G. Campbell, S. R. Rao, K. Donelan, T. G. Ferris, A. Jha, R. Kaushal, D. E. Levy, S. Rosenbaum, A. E. Shields, and D. Blumenthal. 2008. Electronic health records in ambulatory care—A national survey of physicians. New England Journal of Medicine 359(1):50–60.

Dexheimer, J. W., T. R. Talbot, D. L. Sanders, S. T. Rosenbloom, and D. Aronsky. 2008. Prompting clinicians about preventive care measures: A systematic review of randomized controlled trials. Journal of the American Medical Informatics Association 15(3):311–320.

Dexter, P. R., S. Perkins, J. M. Overhage, K. Maharry, R. B. Kohler, and C. J. McDonald. 2001. A computerized reminder system to increase the use of preventive care for hospitalized patients. New England Journal of Medicine 345(13):965–970.

Dobbins, M., S. E. Hanna, D. Ciliska, S. Manske, R. Cameron, S. L. Mercer, L. O’Mara, K. DeCorby, and P. Robeson. 2009. A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implementation Science 4:61.

Dopson, S., L. Locock, D. Chambers, and J. Gabbay. 2001. Implementation of evidence-based medicine: Evaluation of the Promoting Action on Clinical Effectiveness programme. Journal of Health Services Research and Policy 6(1):23–31.

Doumit, G., M. Gattellari, J. Grimshaw, and M. A. O’Brien. 2007. Local opinion leaders: Effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews (1):CD000125.

Durieux, P., L. Trinquart, I. Colombet, J. Nies, R. Walton, A. Rajeswaran, M. Rege Walther, E. Harvey, and B. Burnand. 2008. Computerized advice on drug dosage to improve prescribing practice. Cochrane Database of Systematic Reviews (3):CD002894.

Eccles, M. P., and B. S. Mittman. 2006. Welcome to implementation science. Implementation Science 1:7:1–6.

Eccles, M., E. McColl, N. Steen, N. Rousseau, J. Grimshaw, and D. Parkin. 2002. Effect of computerised evidence-based guidelines on management of asthma and angina in adults in primary care: Cluster randomised controlled trial. BMJ 325:941–948.

Estabrooks, C. A. 2003. Translating research into practice: Implications for organizations and administrators. Canadian Journal of Nursing Research 35(3):53–68.


Estabrooks, C. A., L. Derksen, C. Winther, J. N. Lavis, S. D. Scott, L. Wallin, and J. Profetto-McGrath. 2008. The intellectual structure and substance of the knowledge utilization field: A longitudinal author co-citation analysis, 1945 to 2004. Implementation Science 3:49.

Farley, D. O., M. C. Haims, D. J. Keyser, S. S. Olmsted, S. V. Curry, and M. Sorbero. 2003. Regional health quality improvement coalitions: Lessons across the life cycle. Santa Monica, CA: RAND Health.

Farmer, A. P., F. Legare, L. Turcot, J. Grimshaw, E. Harvey, J. L. McGowan, and F. Wolf. 2008. Printed educational materials: Effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews (3):CD004398.

Feldman, P. H., C. M. Murtaugh, L. E. Pezzin, M. V. McDonald, and T. R. Peng. 2005. Just-in-time evidence-based e-mail “reminders” in home health care: Impact on patient outcomes. Health Services Research 40(3):865–885.

Feldstein, A., P. J. Elmer, D. H. Smith, M. Herson, E. Orwoll, C. Chen, M. Aickin, and M. C. Swain. 2006. Electronic medical record reminder improves osteoporosis management after a fracture: A randomized, controlled trial. Journal of the American Geriatrics Society 54(3):450–457.

Ferlie, E., J. Gabbay, L. Fitzgerald, L. Locock, and S. Dopson. 2001. Evidence-based medicine and organisational change: An overview of some recent qualitative research. In Organisational behavior and organisational studies in health care: Reflections on the future, edited by L. Ashburner. Basingstoke: Palgrave.

Fleuren, M., K. Wiefferink, and T. Paulussen. 2004. Determinants of innovation within health care organizations: Literature review and Delphi study. International Journal of Quality Health Care 16(2):107–123.

Florida Agency for Health Care Administration. 1998. Practice guidelines as affirmative defense: The Cesarean Demonstration Project report.

Fonarow, G. C., M. J. Reeves, X. Zhao, D. M. Olson, E. E. Smith, J. L. Saver, and L. H. Schwamm. 2010. Age-related differences in characteristics, performance measures, treatment trends, and outcomes in patients with ischemic stroke. Circulation 121(7):879–891.

Forsetlund, L., A. Bjorndal, A. Rashidian, G. Jamtvedt, M. A. O’Brien, F. Wolf, D. Davis, J. Odgaard-Jensen, and A. D. Oxman. 2009. Continuing education meetings and workshops: Effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews (2):CD003030.

Francke, A. L., M. C. Smit, A. J. de Veer, and P. Mistiaen. 2008. Factors influencing the implementation of clinical guidelines for health care professionals: A systematic meta-review. BMC Medical Informatics and Decision Making 8:38.

Fraser, I. 2004. Organizational research with impact: Working backwards. Worldviews Evidence Based Nursing 1(Suppl 1):S52–S59.

Fung, C. H., J. N. Woods, S. M. Asch, P. Glassman, and B. N. Doebbeling. 2004. Variation in implementation and use of computerized clinical reminders in an integrated healthcare system. American Journal of Managed Care 10(11 Pt 2):878–885.

Garg, A. X., N. K. J. Adhikari, H. McDonald, M. P. Rosas-Arellano, P. J. Devereaux, J. Beyene, J. Sam, and R. B. Haynes. 2005. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: A systematic review. JAMA 293(10):1223–1238.

Giuffrida, A., H. Gravelle, and M. Roland. 1999. Measuring quality of care with routine data: Avoiding confusion between performance indicators and health outcomes. BMJ 319(7202):94–98.


Graham, I. D., J. Tetroe, and M. Gagnon. 2009. Lost in translation: Just lost or beginning to find our way? Annals of Emergency Medicine 54(2):313–314; discussion 314.

Greene, S. E., and D. B. Nash. 2009. Pay for Performance: An overview of the literature. American Journal of Medical Quality 24(2):140–163.

Greenhalgh, T., A. Collard, and N. Begum. 2005a. Sharing stories: Complex intervention for diabetes education in minority ethnic groups who do not speak English. BMJ 330(7492):628.

Greenhalgh, T., G. Robert, P. Bate, F. Macfarlane, and O. Kyriakidou. 2005b. Diffusion of innovations in health service organisations: A systematic literature review. Malden, MA: Blackwell Publishing Ltd.

Grilli, R., C. Ramsay, and S. Minozzi. 2002. Mass media interventions: Effects on health services utilisation. Cochrane Database of Systematic Reviews (1):CD000389.

Grimshaw, J. M., L. Shirran, R. Thomas, G. Mowatt, C. Fraser, L. Bero, R. Grilli, E. Harvey, A. Oxman, and M. A. O’Brien. 2001. Changing provider behavior: An overview of systematic reviews of interventions. Medical Care 39(8 Suppl 2): II2–II45.

Grimshaw, J., L. M. McAuley, L. A. Bero, R. Grilli, A. D. Oxman, C. Ramsay, L. Vale, and M. Zwarenstein. 2003. Systematic reviews of the effectiveness of quality improvement strategies and programmes. Quality and Safety in Health Care 12(4):298–303.

Grimshaw, J., M. Eccles, and J. Tetroe. 2004a. Implementing clinical guidelines: Current evidence and future implications. The Journal of Continuing Education in the Health Professions 24(Suppl 1):S31–S37.

Grimshaw, J., R. Thomas, G. MacLennan, C. Fraser, C. Ramsay, L. Vale, P. Whitty, M. Eccles, L. Matowe, L. Shirran, M. Wensing, R. Dijkstra, and C. Donaldson. 2004b. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technology Assessment 8(6):1–72.

Grimshaw, J. M., R. E. Thomas, G. MacLennan, C. Fraser, C. R. Ramsay, L. Vale, P. Whitty, M. P. Eccles, L. Matowe, L. Shirran, M. Wensing, R. Dijkstra, and C. Donaldson. 2004c. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technology Assessment 8(6):iii–iv, 1–72.

Grimshaw, J., M. Eccles, R. Thomas, G. MacLennan, C. Ramsay, C. Fraser, and L. Vale. 2006a. Toward evidence-based quality improvement. Journal of General Internal Medicine 21(Suppl 2):S14–S20.

Grimshaw, J. M., M. P. Eccles, J. Greener, G. Maclennan, T. Ibbotson, J. P. Kahan, and F. Sullivan. 2006b. Is the involvement of opinion leaders in the implementation of research findings a feasible strategy? Implementation Science 1:3.

Grol, R., J. Dalhuijsen, S. Thomas, C. Veld, G. Rutten, and H. Mokkink. 1998. Attributes of clinical guidelines that influence use of guidelines in general practice: Observational study. BMJ 317(7162):858–861.

Gross, P. A. 2000. Implementing evidence-based recommendations for health care: A roundtable comparing European and American experiences. Joint Commission Journal on Quality and Patient Safety 26:547–553.

Gross, P. A., S. Greenfield, S. Cretin, J. Ferguson, J. Grimshaw, R. Grol, N. Klazinga, W. Lorenz, G. S. Meyer, C. Riccobono, S. C. Schoenbaum, P. Schyve, and C. Shaw. 2001. Optimal methods for guideline implementation: Conclusions from Leeds Castle meeting. Medical Care 39(8 Suppl 2):II85–II92.

Grumbach, K., and T. Bodenheimer. 2004. Can health care teams improve primary care practice? JAMA 291(10):1246–1251.


Hagedorn, H., M. Hogan, J. Smith, C. Bowman, G. Curran, D. Espadas, B. Kimmel, L. Kochevar, M. Legro, and A. Sales. 2006. Lessons learned about implementing research evidence into clinical practice. Journal of General Internal Medicine 21(0):S21–S24.

Harvey, G., A. Loftus-Hills, J. Rycroft-Malone, A. Titchen, A. Kitson, B. McCormack, and K. Seers. 2002. Getting evidence into practice: The role and function of facilitation. Journal of Advanced Nursing 37(6):577–588.

Hendryx, M. S., J. F. Fieselmann, M. J. Bock, D. S. Wakefield, C. M. Helms, and S. E. Bentler. 1998. Outreach education to improve quality of rural ICU care. Results of a randomized trial. American Journal of Respiratory and Critical Care Medicine 158(2):418–423.

Horbar, J. D., R. F. Soll, G. Suresh, J. Buzas, M. B. Bracken, and P. E. Plsek. 2004. Evidence-based surfactant therapy for preterm infants. In Final progress report to AHRQ. Burlington: University of Vermont.

Hyams, A. L., J. A. Brandenburg, S. R. Lipsitz, D. W. Shapiro, and T. A. Brennan. 1995. Practice guidelines and malpractice litigation: A two-way street. Annals of Internal Medicine 122(6):450–455.

Hyatt, J. D., R. P. Benton, and S. F. Derose. 2002. A multifaceted model for implementing clinical practice guidelines across the continuum of care. Journal of Clinical Outcomes Management 9(4):199–206.

Hysong, S. J. 2009. Meta-analysis: Audit and feedback features impact effectiveness on care quality. Medical Care 47(3):356–363.

Hysong, S., R. Best, and J. Pugh. 2006. Audit and feedback and clinical practice guideline adherence: Making feedback actionable. Implementation Science 1(1):9.

ICSI (Institute for Clinical Systems Improvement). 2010. ICSI history. http://www.icsi.org/about/icsi_history/ (accessed July 8, 2010).

Irwin, C., and E. M. Ozer. 2004. Implementing adolescent preventive guidelines. In Final progress report to AHRQ. San Francisco: University of California–San Francisco Division of Adolescent Medicine.

Jamtvedt, G., J. M. Young, D. T. Kristoffersen, M. A. O’Brien, and A. D. Oxman. 2006a. Audit and feedback: Effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews (2):CD000259.

Jamtvedt, G., J. M. Young, D. T. Kristoffersen, M. A. O’Brien, and A. D. Oxman. 2006b. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Quality and Safety in Health Care 15(6):433–436.

Jones, K. R., R. Fink, C. Vojir, G. Pepper, E. Hutt, L. Clark, J. Scott, R. Martinez, D. Vincent, and B. K. Mellis. 2004. Translation research in long-term care: Improving pain management in nursing homes. Worldviews on Evidence-Based Nursing 1(Suppl 1):S13–S20.

Jones, J., C. Snyder, and A. Wu. 2007. Issues in the design of Internet-based systems for collecting patient-reported outcomes. Quality of Life Research 16(8):1407–1417.

Jones, J. B., W. F. Stewart, J. Darer, and D. F. Sittig. 2010. Beyond the threshold: Real time use of evidence in practice. In Committee on Standards for Trustworthy Clinical Practice Guidelines commissioned paper.

Kanter, M., O. Martinez, G. Lindsay, K. Andrews, and C. Denver. 2010. Proactive office encounter: A systematic approach to preventive and chronic care at every patient encounter. The Permanente Journal 14(3):38–43.

Katz, D. A., R. B. Brown, D. R. Muehlenbruch, M. C. Fiore, and T. B. Baker. 2004a. Implementing guidelines for smoking cessation: Comparing the efforts of nurses and medical assistants. American Journal of Preventive Medicine 27(5):411–416.


Katz, D. A., D. R. Muehlenbruch, R. L. Brown, M. C. Fiore, and T. B. Baker. 2004b. Effectiveness of implementing the Agency for Healthcare Research and Quality smoking cessation clinical practice guideline: A randomized, controlled trial. Journal of the National Cancer Institute 96(8):594–603.

Kiefe, C. I., J. J. Allison, O. D. Williams, S. D. Person, M. T. Weaver, and N. W. Weissman. 2001. Improving quality improvement using achievable benchmarks for physician feedback: A randomized controlled trial. JAMA 285(22):2871–2879.

Kirsh, S. R., R. H. Lawrence, and D. C. Aron. 2008. Tailoring an intervention to the context and system redesign related to the intervention: A case study of implementing shared medical appointments for diabetes. Implementation Science 3:34.

Kochevar, L. K., and E. M. Yano. 2006. Understanding health care organization needs and context. Beyond performance gaps. Journal of General Internal Medicine 21(Suppl 2):S25–S29.

Kothari, A., N. Edwards, N. Hamel, and M. Judd. 2009. Is research working for you? Validating a tool to examine the capacity of health organizations to use research. Implementation Science 4:46.

Kozel, C. T., W. M. Kane, E. M. Rogers, J. E. Brandon, M. T. Hatcher, M. J. Hammes, and R. E. Operhall. 2003. Exploring health promotion agenda-setting in New Mexico: Reshaping health promotion leadership. Promotion & Education 10(4):171–177, 198, 209.

Krantz, M. J., S. Coronel, and W. R. Hiatt. 2005. Use of ankle brachial index screening for selecting patients for antiplatelet drug therapy. Pharmacotherapy 25(12):1826–1828.

Kuilboer, M. M., M. A. van Wijk, M. Mosseveld, E. van der Does, J. C. de Jongste, S. E. Overbeek, B. Ponsioen, and J. van der Lei. 2006. Computed critiquing integrated into daily clinical practice affects physicians’ behavior: A randomized clinical trial with AsthmaCritic. Methods of Information in Medicine 45(5):431–437.

LeCraw, L. L. 2007. Use of clinical practice guidelines in medical malpractice litigation. Journal of Oncology Practice 3:254.

Levine, R. S., B. A. Husaini, N. Briggs, V. Cain, T. Cantrell, and C. Craun. 2004. Translating prevention research into practice. In Final progress report to AHRQ. Nashville: Meharry Medical College/Tennessee State University.

Litaker, D., M. Ruhe, S. Weyer, and K. Stange. 2008. Association of intervention outcomes with practice capacity for change: Subgroup analysis from a group randomized trial. Implementation Science 3(1):25.

Locock, L., S. Dopson, D. Chambers, and J. Gabbay. 2001. Understanding the role of opinion leaders in improving clinical effectiveness. Social Science and Medicine 53(6):745–757.

Loeb, M., K. Brazil, A. McGeer, K. Stevenson, S. D. Walter, and L. Lohfeld. 2004. Optimizing antibiotic use in long term care. In Final progress report to AHRQ. Hamilton, Ontario, Canada: McMaster University.

Lozano, P., J. A. Finkelstein, V. J. Carey, E. H. Wagner, T. S. Inui, A. L. Fuhlbrigge, S. B. Soumerai, S. D. Sullivan, S. T. Weiss, and K. B. Weiss. 2004. A multisite randomized trial of the effects of physician education and organizational change in chronic-asthma care: Health outcomes of the Pediatric Asthma Care Patient Outcomes Research Team II Study. Archives of Pediatrics & Adolescent Medicine 158(9):875–883.

Mansouri, M., and J. Lockyer. 2007. A meta-analysis of continuing medical education effectiveness. Journal of Continuing Education in the Health Professions 27(1):6–15.


McDonald, M. V., L. E. Pezzin, P. H. Feldman, C. M. Murtaugh, and T. R. Peng. 2005. Can just-in-time, evidence-based “reminders” improve pain management among home health care nurses and their patients? Journal of Pain and Symptom Management 29(5):474–488.

McGlynn, E. A., S. M. Asch, J. Adams, J. Keesey, J. Hicks, A. DeCristofaro, and E. A. Kerr. 2003. The quality of health care delivered to adults in the United States. New England Journal of Medicine 348(26):2635–2645.

Mehler, P. S., M. J. Krantz, R. A. Lundgren, R. O. Estacio, T. D. MacKenzie, L. Petralia, and W. R. Hiatt. 2005. Bridging the quality gap in diabetic hyperlipidemia: A practice-based intervention. American Journal of Medicine 118(12):1414.

Mehta, R. H., C. K. Montoye, M. Gallogly, P. Baker, A. Blount, J. Faul, C. Roychoudhury, S. Borzak, S. Fox, M. Franklin, M. Freundl, E. Kline-Rogers, T. LaLonde, M. Orza, R. Parrish, M. Satwicz, M. J. Smith, P. Sobotka, S. Winston, A. A. Riba, and K. A. Eagle. 2002. Improving quality of care for acute myocardial infarction: The Guidelines Applied in Practice (GAP) initiative. JAMA 287(10):1269–1276.

Mello, M. M. 2001. Of swords and shields: The role of clinical practice guidelines in malpractice litigation. University of Pennsylvania Law Review 149:645.

Miller, P. L., S. J. Frawley, and F. G. Sayward. 2001. Maintaining and incrementally revalidating a computer-based clinical guideline: A case study. Journal of Biomedical Informatics 34(2):99–111.

Murtaugh, C. M., L. E. Pezzin, M. V. McDonald, P. H. Feldman, and T. R. Peng. 2005. Just-in-time evidence-based e-mail “reminders” in home health care: Impact on nurse practices. Health Services Research 40(3):849–864.

Nelson, E. C., P. B. Batalden, T. P. Huber, J. J. Mohr, M. M. Godfrey, L. A. Headrick, and J. H. Wasson. 2002. Microsystems in health care: Learning from high-performing front-line clinical units. Joint Commission Journal on Quality Improvement 28(9):472–493.

NGC (National Guideline Clearinghouse). 2010. National Guideline Clearinghouse. http://www.guideline.gov/ (accessed April 7, 2010).

Nieva, V., R. Murphy, N. Ridley, N. Donaldson, J. Combes, and P. Mitchell. 2005. From science to service: A framework for the transfer of patient safety research into practice. In Advances in patient safety: From research to implementation. Rockville, MD: Agency for Healthcare Research and Quality.

O’Brien, M. A., S. Rogers, G. Jamtvedt, A. D. Oxman, J. Odgaard-Jensen, D. T. Kristoffersen, L. Forsetlund, D. Bainbridge, N. Freemantle, D. A. Davis, R. B. Haynes, and E. L. Harvey. 2007. Educational outreach visits: Effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews (4):CD000409.

O’Connor, P. J., L. I. Solberg, J. Christianson, G. Amundson, and G. Mosser. 1996. Mechanism of action and impact of a cystitis clinical practice guideline on outcomes and costs of care in an HMO. Joint Commission Journal on Quality Improvement 22(10):673–682.

Open Clinical. 2010. Guideline modeling methods summaries. http://www.openclinical.org/gmmsummaries.html (accessed July 8, 2010).

Petersen, L. A., L. D. Woodard, T. Urech, C. Daw, and S. Sookanan. 2006. Does pay-for-performance improve the quality of health care? Annals of Internal Medicine 145(4):265–272.

Prior, M., M. Guerin, and K. Grimmer-Somers. 2008. The effectiveness of clinical guideline implementation strategies—A synthesis of systematic review findings. Journal of Evaluation in Clinical Practice 14(5):888–897.


Redfern, S., and S. Christian. 2003. Achieving change in health care practice. Journal of Evaluation in Clinical Practice 9(2):225–238.

Rosoff, A. J. 2001. Evidence-based medicine and the law: The courts confront clinical practice guidelines. Journal of Health Politics, Policy & Law 26(2):327–368.

Rubenstein, L. V., and J. Pugh. 2006. Strategies for promoting organizational and practice change by advancing implementation research. Journal of General Internal Medicine 21(Suppl 2):S58–S64.

Rycroft-Malone, J., and T. Bucknall. 2010. Models and frameworks for implementing evidence-based practice: Linking evidence to action. Evidence-based Nursing Series. Chichester, West Sussex, UK, and Ames, IA: Wiley-Blackwell.

Sales, A., D. Atkins, M. J. Krantz, and L. Solberg. 2010. Issues in implementation of trusted clinical practice guidelines. In Committee on Standards for Developing Trustworthy Clinical Practice Guidelines commissioned paper.

Scott, S. D., R. C. Plotnikoff, N. Karunamuni, R. Bize, and W. Rodgers. 2008. Factors influencing the adoption of an innovation: An examination of the uptake of the Canadian Heart Health Kit (HHK). Implementation Science 3:41.

Scott-Findlay, S., and K. Golden-Biddle. 2005. Understanding how organizational culture shapes research use. Journal of Nursing Administration 35(7–8):359–365.

Sequist, T. D., T. K. Gandhi, A. S. Karson, J. M. Fiskio, D. Bugbee, M. Sperling, E. F. Cook, E. J. Orav, D. G. Fairchild, and D. W. Bates. 2005. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. Journal of the American Medical Informatics Association 12(4):431–437.

Shankaran, V., T. H. Luu, N. Nonzee, E. Richey, J. M. McKoy, J. Graff Zivin, A. Ashford, R. Lantigua, H. Frucht, M. Scoppettone, C. L. Bennett, and S. Sheinfeld Gorin. 2009. Costs and cost effectiveness of a health care provider-directed intervention to promote colorectal cancer screening. Journal of Clinical Oncology 27(32):5370–5375.

Shiffman, R., J. Dixon, C. Brandt, A. Essaihi, A. Hsiao, G. Michel, and R. O’Connell. 2005. The guideline implementability appraisal (GLIA): Development of an instrument to identify obstacles to guideline implementation. BMC Medical Informatics and Decision Making 5(1):23.

Shojania, K. G., and J. M. Grimshaw. 2005. Evidence-based quality improvement: The state of the science. Health Affairs 24(1):138–150.

Shojania, K. G., S. R. Ranji, K. M. McDonald, J. M. Grimshaw, V. Sundaram, R. J. Rushakoff, and D. K. Owens. 2006. Effects of quality improvement strategies for Type 2 diabetes on glycemic control: A meta-regression analysis. JAMA 296(4):427–440.

Shojania, K. G., A. Jennings, A. Mayhew, C. R. Ramsay, M. P. Eccles, and J. Grimshaw. 2009. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database of Systematic Reviews (3):CD001096.

Shojania, K. G., A. Jennings, A. Mayhew, C. Ramsay, M. Eccles, and J. Grimshaw. 2010. Effect of point-of-care computer reminders on physician behaviour: A systematic review. Canadian Medical Association Journal 182(5):E216–E225.

Shortell, S. M. 2004. Increasing value: A research agenda for addressing the managerial and organizational challenges facing health care delivery in the United States. Medical Care Research and Review 61(3 Suppl):12S–30S.

Sittig, D., A. Wright, J. S. Ash, and B. Middleton. 2009a. A set of preliminary standards recommended for achieving a national repository of clinical decision support interventions. AMIA Annual Symposium Proceedings 2009: 614–618.


Sittig, D. F., A. Wright, J. S. Ash, and B. Middleton. 2009b. A set of preliminary standards recommended for achieving a national repository of clinical decision support interventions. Paper presented at AMIA 2009 Symposium, San Francisco, CA.

Smith, C. S., M. G. Harbrecht, S. M. Coronel, and M. J. Krantz. 2008. State consensus guideline for the prevention of cardiovascular disease in primary care settings. Critical Pathways in Cardiology: A Journal of Evidence-Based Medicine 7:122–125.

Solberg, L. 2009. Lessons for non-VA care delivery systems from the U.S. Department of Veterans Affairs Quality Enhancement Research Initiative: QUERI Series. Implementation Science 4(1):9.

Solberg, L. I., M. L. Brekke, C. J. Fazio, J. Fowles, D. N. Jacobsen, T. E. Kottke, G. Mosser, P. J. O’Connor, K. A. Ohnsorg, and S. J. Rolnick. 2000. Lessons from experienced guideline implementers: Attend to many factors and use multiple strategies. Joint Commission Journal on Quality Improvement 26(4):171–188.

Solberg, L. I., M. C. Hroscikoski, J. M. Sperl-Hillen, P. G. Harper, and B. F. Crabtree. 2006. Transforming medical care: Case study of an exemplary, small medical group. Annals of Family Medicine 4(2):109–116.

Solomon, D. H., L. Van Houten, R. J. Glynn, L. Baden, K. Curtis, H. Schrager, and J. Avorn. 2001. Academic detailing to improve use of broad-spectrum antibiotics at an academic medical center. Archives of Internal Medicine 161(15):1897–1902.

Soumerai, S. B., and J. Avorn. 1986. Economic and policy analysis of university-based drug “detailing.” Medical Care 24(4):313–331.

Soumerai, S. B., T. J. McLaughlin, J. H. Gurwitz, E. Guadagnoli, P. J. Hauptman, C. Borbas, N. Morris, B. McLaughlin, X. Gao, D. J. Willison, R. Asinger, and F. Gobel. 1998. Effect of local medical opinion leaders on quality of care for acute myocardial infarction: A randomized controlled trial. JAMA 279(17):1358–1363.

Stafford, R. S., L. K. Bartholomew, W. C. Cushman, J. A. Cutler, B. R. Davis, G. Dawson, P. T. Einhorn, C. D. Furberg, L. B. Piller, S. L. Pressel, and P. K. Whelton. 2010. Impact of the ALLHAT/JNC7 Dissemination Project on thiazide-type diuretic use. Archives of Internal Medicine 170(10):851–858.

Stetler, C. B. 2003. Role of the organization in translating research into evidence-based practice. Outcomes Management 7(3):97–103; quiz 104–105.

Stetler, C. B., M. W. Legro, J. Rycroft-Malone, C. Bowman, G. Curran, M. Guihan, H. Hagedorn, S. Pineros, and C. M. Wallace. 2006a. Role of “external facilitation” in implementation of research findings: A qualitative evaluation of facilitation experiences in the Veterans Health Administration. Implementation Science 1:23.

Stetler, C. B., M. W. Legro, C. M. Wallace, C. Bowman, M. Guihan, H. Hagedorn, B. Kimmel, N. D. Sharp, and J. L. Smith. 2006b. The role of formative evaluation in implementation research and the QUERI experience. Journal of General Internal Medicine 21(Suppl 2):S1–S8.

Stetler, C. B., J. A. Ritchie, J. Rycroft-Malone, A. A. Schultz, and M. P. Charns. 2009. Institutionalizing evidence-based practice: An organizational case study using a model of strategic change. Implementation Science 4:78.

Thomas, J. W., E. C. Ziller, and D. A. Thayer. 2010. Low costs of defensive medicine, small savings from tort reform. Health Affairs 29(9):1578–1584.

Titler, M. G., and L. Q. Everett. 2001. Translating research into practice. Considerations for critical care investigators. Critical Care Nursing Clinics of North America 13(4):587–604.

Titler, M. G., L. Cullen, and G. Ardery. 2002. Evidence-based practice: An administrative perspective. Reflections on Nursing Leadership 28(2):26–27, 45, 46.


Titler, M. G., K. Herr, J. M. Brooks, X. J. Xie, G. Ardery, M. L. Schilling, J. L. Marsh, L. Q. Everett, and W. R. Clarke. 2009. Translating research into practice intervention improves management of acute pain in older hip fracture patients. Health Services Research 44(1):264–287.

Vaughn, T. E., K. D. McCoy, B. J. BootsMiller, R. F. Woolson, B. Sorofman, T. Tripp-Reimer, J. Perlin, and B. N. Doebbeling. 2002. Organizational predictors of adherence to ambulatory care screening guidelines. Medical Care 40(12):1172–1185.

Wallin, L. 2009. Knowledge translation and implementation research in nursing. International Journal of Nursing Studies 46(4):576–587.

Ward, M. M., T. C. Evans, A. J. Spies, L. L. Roberts, and D. S. Wakefield. 2006. National Quality Forum 30 safe practices: Priority and progress in Iowa hospitals. American Journal of Medical Quality 21(2):101–108.

Watson, N. M. 2004. Advancing quality of urinary incontinence evaluation and treatment in nursing homes through translational research. Worldviews on Evidence-Based Nursing 1(Suppl 1):S21–S25.

Weinstein, M. C., B. O’Brien, J. Hornberger, J. Jackson, M. Johannesson, C. McCabe, and B. R. Luce. 2003. Principles of good practice for decision analytic modeling in health-care evaluation: Report of the ISPOR Task Force on Good Research Practices—Modeling Studies. Value in Health 6(1):9–17.

Wensing, M., H. Wollersheim, and R. Grol. 2006. Organizational interventions to implement improvements in patient care: A structured review of reviews. Implementation Science 1:2.

Werner, R. M., and R. A. Dudley. 2009. Making the “pay” matter in pay-for-performance: Implications for payment strategies. Health Affairs 28(5):1498–1508.

Wright, A., H. Goldberg, T. Hongsermeier, and B. Middleton. 2007. A description and functional taxonomy of rule-based decision support content at a large integrated delivery network. Journal of the American Medical Informatics Association 14(4):489–496.

Wright, A., D. F. Sittig, J. S. Ash, S. Sharma, J. E. Pang, and B. Middleton. 2009. Clinical decision support capabilities of commercially-available clinical information systems. Journal of the American Medical Informatics Association 16(5):637–644.

Yano, E. M. 2008. The role of organizational research in implementing evidence-based practice: QUERI Series. Implementation Science 3:29.

Advances in medical, biomedical and health services research have reduced the level of uncertainty in clinical practice. Clinical practice guidelines (CPGs) complement this progress by establishing standards of care backed by strong scientific evidence. CPGs are statements that include recommendations intended to optimize patient care. These statements are informed by a systematic review of evidence and an assessment of the benefits and costs of alternative care options. Clinical Practice Guidelines We Can Trust examines the current state of clinical practice guidelines and how they can be improved to enhance healthcare quality and patient outcomes.

Clinical practice guidelines now are ubiquitous in our healthcare system. The Guidelines International Network (GIN) database currently lists more than 3,700 guidelines from 39 countries. Developing guidelines presents a number of challenges including lack of transparent methodological practices, difficulty reconciling conflicting guidelines, and conflicts of interest. Clinical Practice Guidelines We Can Trust explores questions surrounding the quality of CPG development processes and the establishment of standards. It proposes eight standards for developing trustworthy clinical practice guidelines emphasizing transparency; management of conflict of interest ; systematic review—guideline development intersection; establishing evidence foundations for and rating strength of guideline recommendations; articulation of recommendations; external review; and updating.

Clinical Practice Guidelines We Can Trust shows how clinical practice guidelines can enhance clinician and patient decision making by translating complex scientific research findings into recommendations for clinical practice that are relevant to the individual patient encounter, rather than imposing a one-size-fits-all approach to patient care. This book bears directly on the work of the Agency for Healthcare Research and Quality (AHRQ), as well as that of Congressional staff and policymakers. It is a vital resource for medical specialty societies, disease advocacy groups, health professionals, private and international organizations that develop or use clinical practice guidelines, consumers, clinicians, and payers.
