
ACC/AHA Special Report: Clinical Practice Guideline Implementation Strategies: A Summary of Systematic Reviews by the NHLBI Implementation Science Work Group: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines

ACC/AHA Special Report

JACC, 69 (8) 1076–1092

Abstract

Background:

In 2008, the National Heart, Lung, and Blood Institute convened an Implementation Science Work Group to assess evidence-based strategies for effectively implementing clinical practice guidelines. This was part of a larger effort to update existing clinical practice guidelines on cholesterol, blood pressure, and overweight/obesity.

Objectives:

Review evidence from the published implementation science literature and identify effective or promising strategies to enhance the adoption and implementation of clinical practice guidelines.

Methods:

This systematic review addressed 4 critical questions, each focusing on the adoption and effectiveness of 4 intervention strategies: (1) reminders, (2) educational outreach visits, (3) audit and feedback, and (4) provider incentives. A scoping review of the Rx for Change database of systematic reviews was used to identify promising guideline implementation interventions aimed at providers. Inclusion and exclusion criteria were developed a priori for each question, and the published literature was initially searched up to 2012 and then updated with a supplemental search to 2015. Two independent reviewers screened the returned citations to identify relevant reviews and rated the quality of each included review.

Results:

Audit and feedback and educational outreach visits were generally effective in improving both process of care (15 of 21 reviews and 12 of 13 reviews, respectively) and clinical outcomes (7 of 12 reviews and 3 of 5 reviews, respectively). Provider incentives showed mixed effectiveness for improving both process of care (3 of 4 reviews) and clinical outcomes (3 reviews equally distributed between generally effective, mixed, and generally ineffective). Reminders showed mixed effectiveness for improving process of care outcomes (27 reviews with 11 mixed and 3 generally ineffective results) and were generally ineffective for clinical outcomes (18 reviews with 6 mixed and 9 generally ineffective results). Educational outreach visits (2 of 2 reviews), reminders (3 of 4 reviews), and provider incentives (1 of 1 review) were generally effective for cost reduction. Educational outreach visits (1 of 1 review) and provider incentives (1 of 1 review) were also generally effective for cost-effectiveness outcomes. Barriers to clinician adoption or adherence to guidelines included time constraints (8 reviews/overviews); limited staffing resources (2 overviews); timing (5 reviews/overviews); clinician skepticism (5 reviews/overviews); clinician knowledge of guidelines (4 reviews/overviews); and higher age of the clinician (1 overview). Facilitating factors included guideline characteristics such as format, resources, and end-user involvement (6 reviews/overviews); involving stakeholders (5 reviews/overviews); leadership support (5 reviews/overviews); scope of implementation (5 reviews/overviews); organizational culture such as multidisciplinary teams and low-baseline adherence (9 reviews/overviews); and electronic guidelines systems (3 reviews).

Conclusion:

The strategies of audit and feedback and educational outreach visits were generally effective in improving both process of care and clinical outcomes. Reminders and provider incentives showed mixed effectiveness, or were generally ineffective. No general conclusion could be reached about cost effectiveness, because of limitations in the evidence. Important gaps exist in the evidence on effectiveness of implementation interventions, especially regarding clinical outcomes, cost effectiveness and contextual issues affecting successful implementation.

Methodology Members

Graciela Castillo, MPH

Susan K.R. Heil, PhD

Jennifer Stephens, MPH

Julie C. Jacobson Vann, PhD, MS, RN

American Institutes for Research conducted the systematic review under a contract with the National Heart, Lung, and Blood Institute.

ACC/AHA Task Force Members

Jonathan L. Halperin, MD, FACC, FAHA, Chair

Glenn N. Levine, MD, FACC, FAHA, Chair-Elect

Sana M. Al-Khatib, MD, MHS, FACC, FAHA

Kim K. Birtcher, PharmD, MS, AACC

Biykem Bozkurt, MD, PhD, FACC, FAHA

Ralph G. Brindis, MD, MPH, MACC

Joaquin E. Cigarroa, MD, FACC

Lesley H. Curtis, PhD, FAHA

Lee A. Fleisher, MD, FACC, FAHA

Federico Gentile, MD, FACC

Samuel Gidding, MD, FAHA

Mark A. Hlatky, MD, FACC

John Ikonomidis, MD, PhD, FAHA

José Joglar, MD, FACC, FAHA

Susan J. Pressler, PhD, RN, FAHA

Duminda N. Wijeysundera, MD, PhD

Table of Contents

1. Introduction (page 1078)
2. Methods (page 1079)
   2.1. Critical Questions (page 1079)
   2.2. Inclusion and Exclusion Criteria (page 1080)
   2.3. The Process (page 1080)
   2.4. Data Analysis (page 1080)
3. Results (page 1080)
   3.1. Critical Question 1 (page 1081)
   3.2. Critical Question 2 (page 1081)
   3.3. Critical Question 3 (page 1081)
   3.4. Critical Question 4 (page 1082)
   3.5. Summary (page 1082)
4. Discussion (page 1082)
   4.1. Common Themes in the Evidence and Practice Implications (page 1082)
   4.2. Report Limitations (page 1082)
   4.3. Research Gaps (page 1083)
5. Perspectives (page 1084)
   5.1. Translation Outlook 1 (page 1084)
   5.2. Translation Outlook 2 (page 1084)
   5.3. Translation Outlook 3 (page 1084)
Figures and Tables (page 1086)
   Figure 1. Multilevel Model of CPG Implementation Strategies (page 1086)
   Figure 2. Selection of Articles for Inclusion in the Report (page 1087)
   Table 1. Definition of Intervention Strategies (page 1086)
   Table 2. Overall Effectiveness Across All Included Studies by Intervention, Type, and Outcome (page 1087)
   Table 3. CQ 1: Summary of Systematic Reviews by Intervention, Effectiveness Rating, and Quality (page 1088)
   Table 4. CQ 3: Contextual Factors That Appear to Hinder the Success of the Intervention Strategies (page 1088)
   Table 5. CQ 4: Contextual Factors That Appear to Facilitate the Success of the Intervention Strategies (page 1088)
   Table 6. Suggested Actions to Address Key Research Needs (page 1089)
Appendix 1. Author Relationships With Industry and Other Entities (page 1090)
Appendix 2. Reviewer Relationships With Industry and Other Entities (page 1092)
References (page 1084)

1 Introduction

The National Heart, Lung, and Blood Institute (NHLBI) began to sponsor development of clinical practice guidelines (CPGs) in the 1970s to promote application of research findings for prevention, detection, and treatment of cardiovascular, lung, and blood diseases. In 2008, the NHLBI established expert panels to update the guidelines for high blood cholesterol, high blood pressure, and overweight/obesity using rigorous, systematic evidence reviews. Concurrently, 3 crosscutting work groups were formed to address risk assessment, lifestyle, and implementation. In 2013, the NHLBI initiated collaboration with the American College of Cardiology (ACC) and American Heart Association (AHA) to work with other organizations to complete, publish, and widely disseminate these guidelines. Beginning in 2014, the ACC/AHA Task Force on Clinical Practice Guidelines began updating these guidelines with collaborating organizations as an ongoing process to incorporate emerging evidence.

The uneven implementation of evidence-based CPGs is widely recognized as a continuing challenge to improving public health (1,2). Consistent with the new collaborative partnership model for developing guidelines based on NHLBI-sponsored systematic evidence reviews (3), the Implementation Science Work Group (ISWG) systematically reviewed the evidence from translation research to identify strategies shown to be effective or promising for improving the delivery of evidence-based care. The ISWG focused on healthcare delivery at both clinician and systems levels, while considering various intervention approaches, settings, contexts, and barriers commonly seen in healthcare systems. Although patient adherence to guideline recommendations is essential to achieve meaningful clinical outcomes, in this report, the NHLBI focused on the critical first steps of provider adoption and adherence. The NHLBI commissioned this report to advance the field of implementation science and inform the knowledge translation process.

2 Methods

The ISWG developed a conceptual framework—based on the Multilevel Approaches Toward Community Health (MATCH) model (4)—to define 4 levels where guideline implementation strategies can be initiated: the policy level, clinical institution level, provider level, and patient level. This conceptual framework is illustrated in Figure 1. Superimposed onto the strategies derived from the MATCH model is the current taxonomy of interventions aimed at achieving practice change used by the Cochrane Effective Practice and Organisation of Care (EPOC) Group (5): Professional Interventions, Financial Interventions, Organizational Interventions (with subcategories for Provider-oriented, Patient-oriented, and Structural interventions), and Regulatory Interventions. In Figure 1, this taxonomy is denoted in parentheses next to extant elements of the model.

Figure 1.

Multilevel Model of CPG Implementation Strategies

Figure 1 presents strategies to implement guideline recommendations at 4 levels—policy, clinical institution, provider, and patient—to improve patient health. CPG indicates clinical practice guidelines; FI-P, financial intervention-provider; OI-P, organizational intervention-provider; OI-Pt, organizational intervention-patient; OI-S, organizational intervention-structural; PI, professional intervention; and RI, regulatory intervention.

The ISWG used the existing Rx for Change database of systematic reviews on healthcare intervention strategies, compiled by the Canadian Agency for Drugs and Technologies in Health (6), for its initial scoping review to identify promising guideline implementation interventions aimed at providers. The results clearly identified 3 intervention strategies aimed at providers with some evidence of effectiveness: academic detailing, audit and feedback, and provider reminders. A fourth intervention strategy—provider incentives—was also selected because of evidence of effectiveness in Europe and its increasing use in U.S. healthcare systems. Evaluation was limited to these 4 interventions because, beyond the intervention strategies themselves, the ISWG was keenly interested in cost effectiveness, effect on clinical outcomes, and contextual issues affecting the success of the interventions. Additionally, given the practical considerations (e.g., cost, time, training) associated with implementation interventions, the 4 strategies also likely vary in the resources and infrastructure required to make them both viable and successful in applied settings. Such considerations are likely to matter to stakeholders interested in supporting widespread adoption of the guidelines. The 4 strategies were mapped to their EPOC equivalents as defined in Table 1. Hereafter, the EPOC terminology will be used.

Table 1. Definition of Intervention Strategies

Intervention: Provider Reminders
EPOC equivalent: Reminders
EPOC definition: Patient- or encounter-specific information, provided verbally, on paper, or on a computer screen, which is designed or intended to prompt a health professional to recall information. This would usually be encountered through their general education, in the medical records, or through interactions with peers, and so remind them to perform or avoid some action to aid individual patient care. Computer-aided decision support and drug dosages are included.

Intervention: Academic Detailing
EPOC equivalent: Educational Outreach Visits
EPOC definition: Use of a trained person who met with providers in their practice settings to give information with the intent of changing the provider’s practice. The information given may have included feedback on the performance of the provider(s).

Intervention: Audit and Feedback
EPOC equivalent: Audit and Feedback
EPOC definition: Any summary of clinical performance of health care over a specified period of time. The summary may also have included recommendations for clinical action. The information may have been obtained from medical records, computerized databases, or observations from patients.

Intervention: Pay for Performance
EPOC equivalent: Provider Incentives
EPOC definition: Provider received a direct or indirect financial reward or benefit for doing a specific action. (Provider here means an individual. This is distinct from the EPOC term “institution incentives,” which is defined as: institution or group of providers received direct or indirect financial rewards or benefits for doing a specific action.)

EPOC indicates Effective Practice and Organisation of Care.

As shown in Figure 1, beyond the clinical institution, the clinician, and the patient, policy-level factors and the social, cultural, and physical environment influence guideline implementation. Three of the interventions that are the focus of this report are strategies classified by EPOC as Professional Interventions (i.e., educational outreach visits, audit and feedback, and reminders), all falling into the “clinical institution” box of the MATCH model. The fourth intervention—provider incentives—represents an EPOC Financial Intervention, but it too falls into the clinical institution box of the MATCH model. Thus, the scope of this report is limited to a subset of interventions intended to affect providers through the clinical institution. Most of the evidence assessed the impact of the interventions on “Clinician Intermediate Outcomes” (Figure 1), although several reviews reported “Patient Intermediate Outcomes” (particularly patient risk factors) and some “Patient Hard Outcomes” (e.g., mortality).
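For readers who prefer a compact restatement of this mapping, the sketch below encodes each of the 4 selected strategies with its EPOC equivalent from Table 1, its EPOC taxonomy category, and the MATCH level at which it acts. It is illustrative only and not part of the original report; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ImplementationStrategy:
    """One of the 4 selected provider-directed strategies (hypothetical structure)."""
    name: str             # terminology used in the initial scoping review
    epoc_equivalent: str  # EPOC term used hereafter in the report (Table 1)
    epoc_category: str    # EPOC taxonomy category
    match_level: str      # level of the MATCH-based framework where it acts

STRATEGIES = [
    ImplementationStrategy("Academic Detailing", "Educational Outreach Visits",
                           "Professional Intervention", "Clinical institution"),
    ImplementationStrategy("Audit and Feedback", "Audit and Feedback",
                           "Professional Intervention", "Clinical institution"),
    ImplementationStrategy("Provider Reminders", "Reminders",
                           "Professional Intervention", "Clinical institution"),
    ImplementationStrategy("Pay for Performance", "Provider Incentives",
                           "Financial Intervention", "Clinical institution"),
]

if __name__ == "__main__":
    for s in STRATEGIES:
        print(f"{s.name} -> {s.epoc_equivalent} ({s.epoc_category}, {s.match_level} level)")
```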

In early 2012, with the adult cardiovascular disease (CVD) risk reduction guidelines in the final stages and over budget, the NHLBI decided to use systematic reviews (SRs) instead of primary studies for the implementation science systematic report. Accordingly, the NHLBI contracted with the American Institutes for Research to conduct the initial SR, which included 48 reviews published through 2012. A supplemental search to 2015 identified 7 additional reviews. This evaluation of SRs and overviews of synthesized evidence focused primarily on 3 distinct outcome categories: process of care, clinical effectiveness, and cost effectiveness. Although less frequently reported, patient satisfaction and clinician satisfaction were also explored. The report focused on the 4 intervention strategies selected by the ISWG.

See the Online Data Supplement for additional details on the process and methods.

2.1 Critical Questions

Directed by the NHLBI, and with support from the SR contractor, the ISWG constructed critical questions (CQs) most relevant to identifying effective strategies to improve the delivery of evidence-based care. The 4 critical questions were:

CQ1.

Does the evidence support the effectiveness of the selected intervention strategies (i.e., educational outreach visits, reminders, audit and feedback, and provider incentives) in particular practice settings or for specific categories of health professionals?

CQ2.

What are the cost considerations of implementing the selected intervention strategies (i.e., educational outreach visits, reminders, audit and feedback, and provider incentives)?

CQ3.

What are the contextual barriers—financial, organizational, and regulatory—that hinder or limit clinician adherence to and the adoption of CPGs, as encouraged by the selected intervention strategies?

CQ4.

What policy or regulatory, organizational, and financial characteristics or factors influence the success of the selected clinical-institution level intervention strategies (i.e., educational outreach visits, reminders, audit and feedback, and provider incentives) in achieving the implementation of guidelines and affecting professional practice behaviors?

2.2 Inclusion and Exclusion Criteria

Inclusion and exclusion criteria were developed a priori for each CQ. Reviews were excluded if they did not focus on CPGs or on the implementation of a clinical practice that directly affected patient care. Reviews were also excluded if they did not include interventions aimed at clinicians or if they focused on the implementation of administrative practices.

For CQs 1 and 2, the ISWG selected SRs that focused on the implementation of CPGs or a clinical practice directly affecting patient care and aimed at clinicians. For CQs 3 and 4, we selected both SRs and overviews of SRs that focused on contextual issues affecting guideline implementation.

The ISWG included any health condition or disease, setting, outcome, or population. Studies could include process-of-care outcomes (e.g., medication ordering, lab ordering), clinical effectiveness outcomes (e.g., blood pressure reduction), or other types of outcomes (e.g., cost, utilization, and clinician satisfaction). Studies that focused solely on interventions targeting patients, such as those examining patient education or patient reminders, were excluded.

The search was limited to English-language resources.

2.3 The Process

The ISWG maintained a separation between the collection and compilation of the evidence and the drawing of final conclusions. The NHLBI contractor conducted the initial systematic search of the published literature up to 2012 from relevant bibliographic databases (i.e., the Cochrane Library, PubMed, and other National Library of Medicine sources, such as the Health Services and Technology Assessment Texts and research summaries, reviews, and reports from the Agency for Healthcare Research and Quality evidence-based practice centers) for each critical question. Two independent reviewers (G.C., J.S.) screened the returned citations to identify relevant SRs and overviews, and rigorous validation procedures were applied to ensure that the selected articles met the preestablished inclusion and exclusion criteria. Pairs of independent raters (G.C., J.S., J.J.V., and S.H.) determined the quality of each included SR using the Assessment of Multiple Systematic Reviews (AMSTAR) tool (7). With oversight from a paired senior researcher (G.C. or J.J.V.), 2 research analysts abstracted relevant information from the included SRs. A second senior researcher (J.S.) examined 20% of the abstractions to ensure consistency and quality. A senior researcher (G.C.) constructed summary evidence tables with review by a principal researcher (S.H.) for quality control. The tables display the evidence in a manageable format to answer specific parts of each CQ. The contractor also prepared a draft analytic report.

The supplemental search (2012–2015), study selection, and study quality rating were conducted by an independent contractor procured by the ACC and the AHA. The lead NHLBI staff member (G.C.B.) extracted relevant information from the included SRs and constructed summary evidence tables.

Using the draft report and summary evidence tables, the ISWG reviewed the consistency of the findings with the strength of the evidence and finalized the report.

2.4 Data Analysis

For CQs 1 and 2, the ISWG determined the effectiveness of the interventions in each SR based on a count of studies with positive outcomes, regardless of statistical significance (8). The following 3 categories were used to characterize the effectiveness of the interventions on each outcome in each review:

1.

Generally effective: More than two thirds of the reviewed studies had positive intervention effects.

2.

Mixed effectiveness: One third to two thirds of the reviewed studies showed positive intervention effects.

3.

Generally ineffective: Less than one third of the reviewed studies showed positive intervention effects.

The assessment of overall effectiveness was derived from the preponderance of effectiveness estimates in the individual reviews. Statistical significance of the effect is not implied in this categorization. This classification scheme is used to provide a sense of the proportion of studies showing a positive effect.
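As a minimal sketch of the thirds rule and the preponderance judgment described above (illustrative Python only; the function names and the example counts are hypothetical and are not data from the report):

```python
from collections import Counter

def rate_review(positive_studies: int, total_studies: int) -> str:
    """Classify one systematic review by the proportion of its studies
    with positive intervention effects (statistical significance not implied)."""
    share = positive_studies / total_studies
    if share > 2 / 3:
        return "Generally effective"
    if share >= 1 / 3:
        return "Mixed effectiveness"
    return "Generally ineffective"

def overall_assessment(review_ratings: list[str]) -> str:
    """Overall effectiveness taken as the preponderance (most common rating)
    across the individual reviews for one intervention and outcome type.
    Ties are resolved arbitrarily here; the ISWG made such calls qualitatively."""
    return Counter(review_ratings).most_common(1)[0][0]

# Hypothetical example: 4 reviews of one intervention, process-of-care outcomes
ratings = [rate_review(p, n) for p, n in [(9, 10), (5, 12), (2, 8), (8, 11)]]
print(ratings, "->", overall_assessment(ratings))
```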

For CQs 3 and 4, conclusions were drawn from the contractor’s qualitative coding of the included reviews, performed during article abstraction, against a set of categories of contextual factors identified a priori. Themes were identified and summarized in post hoc analyses to develop general observations about the contextual factors that might support or hinder the implementation of guidelines.

3 Results

Two independent reviewers screened 826 articles; 55 were selected and abstracted for this report, comprising 39 SRs and 16 overviews of SRs. The SRs were rated using the 11-point AMSTAR tool: 23 received a score of ≥8 and were considered good quality, and 16 received a score of 4 to 7 and were considered fair quality. Seven other SRs were rated poor, with scores ≤3, and were excluded from the evidence used to answer the critical questions. Figure 2 illustrates the selection process.
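A minimal sketch of the AMSTAR-based quality labels used above (illustrative Python; the function name is hypothetical, and AMSTAR scoring itself is defined in reference 7):

```python
def amstar_quality(score: int) -> str:
    """Map an 11-point AMSTAR score to the quality label used in this report."""
    if not 0 <= score <= 11:
        raise ValueError("AMSTAR scores range from 0 to 11")
    if score >= 8:
        return "good"   # retained for answering the critical questions
    if score >= 4:
        return "fair"   # also retained
    return "poor"       # excluded from the evidence synthesis

print([amstar_quality(s) for s in (10, 6, 2)])  # ['good', 'fair', 'poor']
```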

Figure 2.

Selection of Articles for Inclusion in the Report

Figure 2 presents the study selection process from the initial search returns through title and abstract review and full-text review to select the 55 systematic reviews and overviews used in this report. ISWG indicates Implementation Science Work Group.

3.1 Critical Question 1

Does the evidence support the effectiveness of the selected intervention strategies (i.e., educational outreach visits, reminders, audit and feedback, and provider incentives) in particular practice settings or for specific categories of health professionals?

SRs rated “good” and “fair” were used to answer CQ 1. Table 2 shows the classification of the overall effectiveness of each intervention for process-of-care outcomes and clinical effectiveness outcomes across the full set of included reviews. Table 3 provides expanded detail, summarizing the available information by study quality on the effectiveness of each intervention for process-of-care and clinical effectiveness outcomes.

Table 2. Overall Effectiveness Across All Included Studies by Intervention, Type, and Outcome

Intervention | Process-of-Care Outcomes | Clinical Effectiveness Outcomes
Educational outreach visits | Generally effective | Generally effective
Audit and feedback | Generally effective | Generally effective
Reminders | Mixed effectiveness | Generally ineffective
Provider incentives | Mixed effectiveness | Mixed effectiveness

Table 3. CQ 1: Summary of Systematic Reviews by Intervention, Effectiveness Rating, and Quality

Educational Outreach Visits
  Process of care
    Generally effective: Good (9,21,28,29); Fair (10,12,23,25,30–33)
    Mixed effectiveness: Fair (11)
    Generally ineffective: N/A
  Clinical effectiveness
    Generally effective: Good (9); Fair (12,25,33)
    Mixed effectiveness: Fair (10)
    Generally ineffective: Good (28)

Audit and Feedback
  Process of care
    Generally effective: Good (9,29,34–36); Fair (12,23–26,30–33,37)
    Mixed effectiveness: Good (28,38); Fair (10,11,13,39)
    Generally ineffective: N/A
  Clinical effectiveness
    Generally effective: Good (9,28,40); Fair (23,25,30,37)
    Mixed effectiveness: Good (36); Fair (13,26,33)
    Generally ineffective: Good (41)

Reminders
  Process of care
    Generally effective: Good (20,29,36,42); Fair (10,17,24–26,30,32,33,37)
    Mixed effectiveness: Good (9,15,18,22,35,38,43–45); Fair (13,14)
    Generally ineffective: Good (16,28); Fair (11)
  Clinical effectiveness
    Generally effective: Good (36); Fair (25,30)
    Mixed effectiveness: Good (9,20,45); Fair (13,26,37)
    Generally ineffective: Good (16,18,22,43,44); Fair (14,17,24,33)

Provider Incentives
  Process of care
    Generally effective: Good (19)
    Mixed effectiveness: Good (36,38,46)
    Generally ineffective: N/A
  Clinical effectiveness
    Generally effective: Good (19)
    Mixed effectiveness: Good (36)
    Generally ineffective: Good (9)

CQ indicates critical question; and N/A, not applicable. Good and Fair indicate the AMSTAR quality rating of the cited systematic reviews.

In summary, educational outreach visits showed general effectiveness in 12 of 13 SRs for process-of-care outcomes, particularly in prescribing behaviors. Five SRs reported clinical effectiveness outcomes for educational outreach visits. Three of 5 SRs and 14 of the 19 included studies showed clinical effectiveness. A good-quality SR on hypertension (9) found that educational outreach visits improved both process of care and clinical outcomes (reductions in median systolic and diastolic blood pressure). When only the included CVD risk reduction studies were considered, 1 fair-quality SR (10) showed general effectiveness, and 1 fair-quality SR (11) showed mixed effectiveness for process of care outcomes.

Audit and feedback interventions were considered in 23 SRs (9 good quality) and showed general effectiveness for both process-of-care outcomes, particularly in clinician adherence to guidelines, and for clinical outcomes. Audit and feedback showed improved process of care and clinical outcomes for the management of hypertension (9). Four fair-quality SRs also included some studies on CVD risk reduction and 3 of these reviews (10–12) showed general effectiveness for process of care. Conversely, the fourth fair-quality SR (13) showed general ineffectiveness in improving CVD process of care outcomes.

Reminders were considered in 27 SRs—15 were good quality. These SRs showed mixed effectiveness for process-of-care outcomes overall but general effectiveness for prescribing behaviors. However, reminders were generally ineffective for clinical outcomes. The results were similar when only the CVD risk reduction studies were considered in 8 SRs (9,11,13–18). However, reminders were generally effective in improving clinical outcomes for hypertension (9).

Provider incentive interventions were included in 5 good-quality SRs and showed mixed effectiveness for both process-of-care and clinical outcomes—most of the positive outcomes were related to diabetes mellitus and asthma. When CVD risk reduction studies were analyzed separately, 1 good-quality SR (19) found general effectiveness for both process of care and clinical outcomes. However, provider incentives were generally ineffective for improving clinical outcomes for hypertension in another good-quality SR (9).

3.2 Critical Question 2

What are the cost considerations of implementing the selected intervention strategies (i.e., educational outreach visits, reminders, audit and feedback, and provider incentives)?

SRs rated “good” and “fair” were also used to answer CQ 2. Cost considerations refer to cost reduction and cost-effectiveness outcomes based on utilization measures resulting from implementing the selected intervention strategies. The studies in the SRs differed in the way they examined cost. Some calculated the amount saved per physician, cost per prescription, prescribing costs, per-patient cost avoidance, patient out-of-pocket costs, and hospitals’ return on investment. The SRs also differed in the utilization measures they examined. Some measured length of stay, the use of preventive services, or visits to health professionals. Most of the cost-effectiveness assessments compared >1 intervention with a nonintervention control, or compared interventions with one another. In combination, all these factors made it difficult to reach conclusions about the cost effectiveness of different interventions.

Five good-quality SRs (18–22) and 3 fair-quality SRs (23–25) provided information about intervention costs or cost reductions. Four good-quality SRs (19–22) included studies that reported cost-effectiveness outcomes but none conducted a cost-effectiveness study as a main component of the review (often because of a lack of data).

Educational outreach visits were generally effective in reducing costs in 2 reviews (21,25) and showed cost effectiveness in 1 good-quality review (21). Two fair-quality reviews (23,25) reported cost-reduction findings (length of stay and lab costs) for audit and feedback interventions and the results showed mixed effectiveness. Reminders were generally effective in reducing cost in 3 reviews (18,24,25) and showed mixed effectiveness in another (20). Reminders were also cost effective in 1 review (22) and the results showed mixed effectiveness in another (20). Although based only on 1 good-quality review (19), provider incentive interventions reduced costs and were cost effective.

3.3 Critical Question 3

What are the contextual barriers—financial, organizational, and regulatory—that hinder or limit clinician adherence to and the adoption of CPGs, as encouraged by the selected intervention strategies?

Table 4 summarizes several barriers that were reported to influence clinician adoption or adherence to CPGs.

Table 4. CQ 3: Contextual Factors That Appear to Hinder the Success of the Intervention Strategies

Context | Key Barriers

Organizational
  Time (16,17,30,47–51)
  Human resources (48,49)

Clinician Knowledge, Attitudes, and Beliefs
  Skepticism—concern about evidence base of guidelines, lack of universal acceptance of recommendations, implied rationing of services, fear of litigation (24,47,49–51)
  Lack of knowledge of guidelines (24,32,49,50)
  Age—older or more experienced clinicians less inclined to use (48)

Workflow and Timing
  Timing and effectiveness—barrier to effectiveness if further away from point of decision making (42,52–55)

CQ indicates critical question.

3.4 Critical Question 4

What policy or regulatory, organizational, and financial characteristics or factors influence the success of the selected clinical-institution level intervention strategies (i.e., educational outreach visits, reminders, audit and feedback, and provider incentives) in achieving the implementation of guidelines and affecting professional practice behaviors?

Table 5 presents the evidence for several factors that appear to facilitate the success of the intervention strategies. Three reviews (21,24,26) assessed the effect of various interventions alone compared with combinations of interventions. These reviews concluded that multifaceted interventions are more likely to be effective than single interventions in influencing process of care outcomes.

Table 5. CQ 4: Contextual Factors That Appear to Facilitate the Success of the Intervention Strategies

Context | Key Facilitators

Characteristics of Guidelines
  Short and simple format (47)
  Provide patient pamphlets (47)
  Easy to understand and use (48)
  Minimal resources needed to implement (48)
  Involving end-users in guidelines development, implementation, and testing (15,48,50,52)
  Use of computerized guidelines in practice settings (15,16)

Involving Stakeholders
  Involvement in planning, developing, or leading interventions designed to influence practice patterns and clinical outcomes (19,30,34,40,56)

Leadership
  Leader’s social influence is recognized (30)
  Formal leadership (40)
  Local management support and enthusiasm (24,51)
  Adequate time to promote new practice (24)

Scope of Implementation
  Provider incentives—more broadly implemented in the United Kingdom with more consistent results than in the United States (19)
  Multifaceted interventions are more likely to be effective than single interventions (24,26,53,54)

Organizational Culture
  Multidisciplinary teams, coordination of care, pace of change, a blame-free culture, and a history of quality improvement (9,19,28,38)
  Low-baseline adherence (19,29,34,38,40,46,55)

Workflow and Timing
  Electronic guidelines systems:
    Integration with computers used in practice (16,17)
    Reminders automatic—clinicians not required to seek information (42)

CQ indicates critical question.

3.5 Summary

This summary of SRs and overviews of reviews found general effectiveness for 2 of the 4 selected implementation interventions (educational outreach visits and audit and feedback) for improving process of care and clinical outcomes. Regarding the characteristics of the interventions, multifaceted interventions appeared to be more effective than single interventions. However, the paucity of controlled head-to-head comparisons and limitations in the evidence allowed only an estimate of general effectiveness, without the ability to determine whether the overall effects of the interventions were statistically significant or, more importantly, clinically meaningful.

No conclusions can be drawn regarding the effectiveness of the intervention strategies to improve process of care and clinical outcomes related to the treatment of CVD risk factors, because most reviews did not focus on or include studies on these conditions. However, 1 good-quality review focused on hypertension, and 4 fair-quality reviews included some studies on hypertension and dyslipidemia. The results from these few reviews suggest that implementation interventions are potentially as effective in CVD risk reduction as in other areas.

No general conclusion could be reached about the cost of implementing the selected intervention strategies. Although good-quality reviews generally reported cost savings associated with an intervention, many of the interventions were multifaceted in nature; thus, the total cost associated with any component of an intervention was difficult to discern. Furthermore, cost effectiveness was not explicitly evaluated.

4 Discussion

4.1 Common Themes in the Evidence and Practice Implications

The evidence generally showed a greater increase in CPG adherence in practices with low-baseline adherence. Given the success of multifaceted interventions, and the beneficial impact of stakeholder involvement in developing the intervention and a priori assessment of local needs, implementation efforts should emphasize the need for implementers to understand their current practices and how their organizations’ practices may vary from forthcoming CPG recommendations. A self-assessment toolkit could be an important aid to practices when determining which of several implementation strategies might best suit their particular needs, context, and goals.

4.2 Report Limitations

First, data used in this report were not retrieved from the primary studies, thus limiting information on the details of the interventions and results to that reported by the review authors. Second, this report used a qualitative synthesis of the evidence, which does not allow an assessment of the size of any expected benefit from an implementation strategy. The report also relied heavily on the judgments of the authors of the SRs and on the quality of the reviews. Third, the analysis in this report is limited to 4 interventions aimed at providers and did not explore systems-level implementation. Other interventions might have shown effectiveness if they had been included. Fourth, the implementation of the 4 intervention strategies varied within reviews. Some reviews assessed single interventions, whereas others assessed multifaceted interventions. Fifth, many evaluations did not report sufficient contextual information (e.g., patient demographics, comorbid conditions, insurance coverage) to assess the potential influence of such factors on implementation efforts. Another major concern is that only a small number of the included studies provided information about clinical effectiveness and cost outcomes, and only a few provided comparisons of cost effectiveness.

Finally, in reviews of SRs, there is always the risk that an included study may appear in multiple reviews, and the overlap presents the potential for double counting the results from individual studies. The ISWG addressed this potential risk in answering CQ 1 (process and clinical outcomes) and CQ 2 (cost) primarily by using only SRs whose included studies were clearly referenced and could be checked across reviews, and by not including SRs that were updated by more recent reviews. For reviews with overlapping studies, the ISWG first considered whether counting or not counting the overlap would change the assessment of the effectiveness of the interventions in this report. If it would not change the effectiveness, we counted the study in both reviews. However, if counting the overlap would change the effectiveness, we first considered the quality of the reviews and, if the overlapping reviews were of equal quality, counted the study in the most recent review. For example, if a study appeared in a good-quality review and a fair-quality review, we counted the study in the good-quality review and not in the fair-quality review. For SRs that updated a component (e.g., interventions aimed at people with diabetes mellitus) of an earlier SR, we counted the studies from the latest review and the studies minus the updated component from the older SR. The overlap was substantial for CQ 3 (barriers) and CQ 4 (facilitators), where SRs were combined with overviews of SRs. However, this overlap was inconsequential because the findings for CQs 3 and 4 were not based on study counts.
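The overlap-handling rules described in the preceding paragraph can be summarized in a short sketch (illustrative Python only; the data structures and function name are hypothetical, and the actual decisions were made manually by the ISWG):

```python
def count_overlapping_study(review_a, review_b, changes_effectiveness: bool):
    """Decide where to count a study that appears in two included reviews.

    review_a / review_b: dicts with 'quality' ('good' or 'fair') and 'year'.
    changes_effectiveness: True if double counting would alter the
    effectiveness rating of the intervention in this report.
    """
    if not changes_effectiveness:
        return [review_a, review_b]            # count the study in both reviews
    qualities = {"good": 2, "fair": 1}
    if qualities[review_a["quality"]] != qualities[review_b["quality"]]:
        # prefer the higher-quality review
        return [max(review_a, review_b, key=lambda r: qualities[r["quality"]])]
    # equal quality: count the study only in the most recent review
    return [max(review_a, review_b, key=lambda r: r["year"])]

# Hypothetical example: a study shared by a good-quality and a fair-quality review,
# where double counting would change the effectiveness rating
a = {"id": "SR-good", "quality": "good", "year": 2010}
b = {"id": "SR-fair", "quality": "fair", "year": 2012}
print([r["id"] for r in count_overlapping_study(a, b, changes_effectiveness=True)])  # ['SR-good']
```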

4.3 Research Gaps

Future research in CPG implementation interventions should address important design limitations in current studies and key gaps in the evidence base (Table 6). An important design limitation is the lack of explicit declaration or standardized terminology for the implementer and target of the interventions. Evidence is sorely needed on more tangible outcomes, such as clinical outcomes and cost effectiveness, in addition to intermediate or process outcomes. Simply demonstrating an effective implementation in one setting is not a guarantee that the same results will be found in other settings. Thus, additional SRs and empirical research are needed to better understand the effectiveness of implementation strategies with differing characteristics, in a variety of settings, with different types of clinicians, and targeting specific types of diseases or conditions—especially the control of CVD risk factors. Although multifaceted interventions rather than single interventions appear to be effective strategies for increasing CPG implementation, identifying the combinations of strategies that are most effective and in which contexts is important.

Table 6. Suggested Actions to Address Key Research Needs

Suggested Actions | Research Needs

Address Study Design Issues
  Clear descriptions of study methods and the interventions
  Explicit implementer and target of intervention
  Standardized measures of outcomes and descriptions of practice settings

Conduct New Research to Test the Effectiveness of Interventions
  Effect on clinical outcomes, rather than intermediate outcomes
  Cost effectiveness
  Effect of multicomponent interventions, including specific combinations of interventions
  Effect of policy-level interventions, for example:
    Reimbursement
    Accreditation
    Publicly reported quality metrics
  Effect of interventions targeting varieties of:
    Settings, including baseline workflows
    Clinician types
    Types of diseases and conditions

Focus Evaluations on Contextual Factors
  Organizational and practice context
  Involvement of stakeholders and leadership
  Integration with workflow
  Implementation scope
  Duration
  Timing

Leverage EHR Data and Tools
  Mine data for observational studies
  Platform for pragmatic prospective studies
  Access longer-term data than RCTs
  Aggregate data and/or interventions by key factors, for example:
    Patient characteristics
    Clinician characteristics
    Clinic
    Healthcare delivery system

Conduct Qualitative and Observational Research
  Effectiveness in diverse populations
  Drivers of success in real-world implementations
  Contextual issues not amenable to RCTs

EHR indicates electronic health record; and RCTs, randomized controlled trials.

Innovative research methods and study designs are needed to leverage electronic health records (EHRs), which might bolster implementation science in many ways. Specifically, electronic clinical data may improve the ability to target patients (e.g., by diagnosis) for appropriate CPGs. Clinic and health system EHRs may be able to efficiently provide feedback on progress in achieving relevant CPG measures (e.g., biomarkers) for an entire clinic or healthcare system, not strictly at the patient or clinician level. For implementation research, EHRs may streamline the planning and conduct of other aspects of implementation trials (e.g., by more accurately determining event rates and identifying eligible patients). EHRs might also be able to follow patient health outcomes on a long-term basis, beyond the typical length of clinical trials. The evolution of EHRs will likely include the development and embedding of risk models capable of targeting people with specific risk profiles. The use of networks of EHRs, such as those in the PCORI (Patient-Centered Outcomes Research Institute) Clinical Data Research Networks, could provide remarkable opportunities to study implementation strategies or even exploit the natural variation in strategies across centers. With many large and diverse patient populations now receiving care that is documented in EHRs, large population-based studies are becoming increasingly practical. Such pragmatic studies have the advantage of including the general population of patients and not just a carefully selected set of participants in a randomized controlled trial. Although such studies may not have the precision of measurement commonly seen in rigorous trials, their benefit comes in the assessment of important clinical outcomes for entire populations of patients.

Finally, the good-quality reviews in this report are largely based on evidence from randomized controlled trial designs. The conditions of traditional randomized controlled trials are quite different from the contexts in which real-world implementation and behavior change occur. An observational, more qualitative approach may be needed to better understand how the preceding contextual issues and other drivers affect the success of an implementation intervention. An example of a qualitative approach is the Dartmouth Institute for Health Policy and Clinical Practice benchmarking study of how “best-in-class” health systems use clinical decision support (27). Such an outcomes-oriented approach would allow better evaluation of provider incentives, audit and feedback, educational outreach visits, reminders, and other interventions chosen to advance the implementation of CPGs.

In summary, there is some evidence that guideline implementation interventions are effective for both process of care and clinical outcomes. Limited evidence suggests that implementation interventions are generally effective at reducing costs and, on even more limited evidence, that they are cost effective. Qualitative analysis suggests recurring themes regarding barriers to and facilitators of success. Given the mixed results seen in many implementation studies, additional research focused on intervention effectiveness is needed, with special emphasis on improving methods and study designs, increasing the use of pragmatic trials, and determining how to enhance the utility of electronic clinical data. More studies are also needed on clinical outcomes, cost effectiveness, and the influence of contextual factors on the effectiveness of interventions. Studies done in real-world healthcare delivery systems and qualitative research may help address some of these important gaps in the current evidence.

5 Perspectives

5.1 Translation Outlook 1

Audit and feedback and educational outreach visits were generally effective for improving both process of care and clinical outcomes, whereas provider incentives showed mixed effectiveness. Reminders showed mixed effectiveness for process of care and were generally ineffective for improving clinical outcomes.

5.2 Translation Outlook 2

Multifaceted interventions were more effective than a single intervention strategy.

5.3 Translation Outlook 3

Additional research is needed on intervention effectiveness, with special emphasis on improving methods and study designs, increasing the use of pragmatic trials, leveraging electronic clinical data, and evaluating the cost effectiveness of interventions.

Presidents and Staff

American College of Cardiology

Richard A. Chazal, MD, FACC, President

Shalom Jacobovitz, Chief Executive Officer

William J. Oetgen, MD, MBA, FACC, Executive Vice President, Science, Education, Quality, and Publications

Amelia Scholtz, PhD, Publications Manager, Science, Education, Quality, and Publications

American College of Cardiology/American Heart Association

Lisa Bradfield, CAE, Director, Guideline Methodology and Policy

Abdul R. Abdullah, MD, Associate Science and Medicine Advisor

Allison Rabinowitz, MPH, Project Manager, Science and Clinical Policy

American Heart Association

Steven R. Houser, PhD, FAHA, President

Nancy Brown, Chief Executive Officer

Rose Marie Robertson, MD, FAHA, Chief Science and Medicine Officer

Gayle R. Whitman, PhD, RN, FAHA, FAAN, Senior Vice President, Office of Science Operations

Comilla Sasson, MD, PhD, FACEP, Vice President, Science and Medicine

Jody Hundley, Production Manager, Scientific Publications, Office of Science Operations

Appendix 1

Author Relationships With Industry and Other Entities (Comprehensive)—ACC/AHA Special Report: Clinical Practice Guideline Implementation Strategies: A Summary of Systematic Reviews by the NHLBI Implementation Science Work Group (2012-2016)

Committee Member | Employment | Consultant | Speakers Bureau | Ownership/Partnership/Principal | Personal Research | Institutional, Organizational, or Other Financial Benefit | Expert Witness
Wiley V. Chan (Co-Chair)Kaiser Permanente Northwest—Director of Guidelines and Evidence-Based Medicine (through 2014 and as a consultant currently)NoneNoneNone

PCORI

Community Clinics Health Network

None
Thomas A. Pearson (Co-Chair)University of Rochester Medical Center—Executive Vice President for Research and EducationNoneNoneNone

NCATS

NHLBI

PCORI

NoneNone
Glen C. BennettNHLBI—Program CoordinatorNoneNoneNoneNoneNoneNone
Graciela CastilloAmerican Institutes for Research—Senior ResearcherNoneNoneNoneNoneNoneNone
William C. CushmanThe University of Tennessee Health Science Center—Professor, Preventative MedicineNoneNoneNoneNoneNoneNone
Thomas A. GazianoHarvard Medical School—Assistant Professor; Brigham and Women’s Hospital—Associate Physician in Cardiovascular MedicineNoneNoneNone

NHLBI

NIA

NoneNone
Paul N. GormanOregon Health and Science University—Associate ProfessorNoneNoneNone

AHRQ

National Science Foundation

NoneNone
Joel HandlerSouthern California Permanente Medical Group—Staff PhysicianNoneNoneNoneNoneNoneNone
Susan K.R. HeilAmerican Institutes for Research—Health and Social DevelopmentNoneNoneNoneNoneNoneNone
Julie C. Jacobson VannThe University of North Carolina at Chapel Hill—Associate ProfessorNoneNoneNoneNoneNoneNone
Harlan M. KrumholzYale-New Haven Hospital—Director

United Healthcare

NoneNone

Medtronic

Johnson & Johnson

NoneNone
Robert F. KushnerNorthwestern University—Professor in Medicine

Novo Nordisk

Weight Watchers

Zafgen

Retrofit

Takeda

NoneNone

Aspire Bariatrics

NoneNone
Thomas D. MacKenzieDenver Health Foundation—Chief Quality OfficerNoneNoneNone

NCATS

NHLBI

PCORI

NoneNone
Ralph L. SaccoUniversity of Miami Health System—Chairman and Professor Department of Neurology

Boehringer Ingelheim

NoneNone

NINDS

AHA

Duke Clinical Research Institute (DSMB)

UCSF (DSMB)

NoneNone
Sidney C. Smith, Jr.University of North Carolina at Chapel Hill—Professor of MedicineNoneNoneNoneNoneNoneNone
Jennifer StephensJohns Hopkins University—Graduate StudentNoneNoneNoneNoneNoneNone
Victor J. StevensKaiser Permanente Center for Health Research—Research ScientistNoneNoneNone

NIH

NoneNone
Barbara L. WellsNational Institutes of Health—Health Scientist AdministratorNoneNoneNoneNoneNoneNone

This table reflects the healthcare-related relationships of authors with industry and other entities provided by the panel during the document development process (2012–2016). Both compensated and uncompensated relationships are reported. These relationships were reviewed and updated in conjunction with all meetings and conference calls of the panel during the document development process. To review the NHLBI comprehensive policies for managing relationships with industry and other entities, please refer to http://www.nhlbi.nih.gov/guidelines/cvd_adult/coi-rwi_policy.htm.

AHA indicates American Heart Association; AHRQ, Agency for Healthcare Research and Quality; DSMB, data safety monitoring board; NCATS, National Center for Advancing Translational Sciences; NHLBI, National Heart, Lung, and Blood Institute; NIA, National Institute on Aging; NIH, National Institutes of Health; NINDS, National Institute of Neurological Disorders and Stroke; PCORI, Patient-Centered Outcomes Research Institute; and UCSF, University of California, San Francisco.

Appendix 2

Reviewer Relationships With Industry and Other Entities (Comprehensive)—ACC/AHA Special Report: Clinical Practice Guideline Implementation Strategies: A Summary of Systematic Reviews by the NHLBI Implementation Science Work Group (March 2016)

Reviewer | Representation | Employment | Consultant | Speakers Bureau | Ownership/Partnership/Principal | Personal Research | Institutional, Organizational, or Other Financial Benefit | Expert Witness
Ralph G. BrindisOfficial Reviewer—ACC/AHA Task Force on Clinical Practice GuidelinesUniversity of California—Clinical Professor of Medicine

ACC

FDA CV Device Panel

NoneNone

State of California OSHPD

NoneNone
David C. Goff, Jr.Official Reviewer—AHAWake Forest University Public Health Sciences Internal Medicine—ProfessorNoneNoneNone

NIH

AHA

Colorado School of Public Health

None
Edward P. HavranekOfficial Reviewer—AHADenver Health Medical Center—Cardiologist

AHA

NHLBI

PCORI

NoneNoneNoneNoneNone
Robert C. HendelOfficial Reviewer—ACC Board of TrusteesUniversity of Miami Cardiac Imaging and Outpatient Services—Director

Adenosine Therapeutics

Astellas Pharma

NoneNoneNoneNoneNone
Srinivas MuraliOfficial Reviewer—ACC Board of GovernorsAllegheny Health Network—Director, Cardiovascular Institute

Actelion

Bayer

Actelion

None

Cardiokinetics

CVRx

Gilead

Actelion

Lung, LLC

Bayer

NoneNone
John A. SpertusOfficial Reviewer—AHASaint Luke’s Mid America Heart Institute—Director, Health Outcomes Research; University of Missouri-Kansas City—Professor

Amgen

Bayer

Copyright for SAQ, KCCQ, and PAQ

Janssen Pharmaceuticals

Novartis

Regeneron

United Healthcare (Scientific Advisory Board)

None

Health Outcomes Sciences

ACC

AHA

Lilly

Gilead Sciences

NoneNone

This table represents the relationships of reviewers with industry and other entities that were disclosed at the time of peer review, including those not deemed to be relevant to this document. The table does not necessarily reflect relationships with industry at the time of publication. A person is deemed to have a significant interest in a business if the interest represents ownership of ≥5% of the voting stock or share of the business entity, or ownership of ≥$5,000 of the fair market value of the business entity, or if funds received by the person from the business entity exceed 5% of the person’s gross income for the previous year. A relationship is considered to be modest if it is less than significant under the preceding definition. Relationships that exist with no financial benefit are also included for the purpose of transparency. Names are listed in alphabetical order.

ACC indicates American College of Cardiology; AHA, American Heart Association; CV, cardiovascular; FDA, U.S. Food and Drug Administration; KCCQ, Kansas City Cardiomyopathy Questionnaire; NHLBI, National Heart, Lung, and Blood Institute; NIH, National Institutes of Health; OSHPD, Office of Statewide Health Planning and Development; PAQ, Peripheral Artery Questionnaire; PCORI, Patient-Centered Outcomes Research Institute; SAQ, Seattle Angina Questionnaire.

∗ Significant relationship.

† No financial benefit.

  • 1. National Heart, Lung, and Blood Institute. NHLBI Cardiovascular Disease Thought Leaders Meeting Report: Research Translation, Dissemination, and Application - Moving Toward a New Vision and Strategic Framework. 2005. Available at: http://www.nhlbi.nih.gov/health-pro/resources/heart/cardiovascular-disease-thought-leaders-meeting-report. Accessed January 1, 2012.
  • 2. Cabana M.D., Rand C.S., Powe N.R., et al.: "Why don’t physicians follow clinical practice guidelines? A framework for improvement". JAMA 1999; 282: 1458.
  • 3. Gibbons G.H., Shurin S.B., Mensah G.A., et al.: "Refocusing the agenda on cardiovascular guidelines: an announcement from the National Heart, Lung, and Blood Institute". J Am Coll Cardiol 2013; 62: 1396.
  • 4. Simons-Morton D.G., Simons-Morton B.G., Parcel G.S., et al.: "Influencing personal and environmental conditions for community health: a multilevel intervention model". Fam Community Health 1988; 11: 25.
  • 5. Effective Practice and Organisation of Care (EPOC). EPOC Taxonomy. Available at: https://epoc.cochrane.org/epoc-taxonomy. Accessed March 23, 2012.
  • 6. Canadian Agency for Drugs and Technologies in Health. Rx for Change Database. Available at: http://www.cadth.ca/en/resources/rx-for-change. Accessed December 1, 2012.
  • 7. Shea B.J., Grimshaw J.M., Wells G.A., et al.: "Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews". BMC Med Res Methodol 2007; 7: 10.
  • 8. Cheung A., Weir M., Mayhew A., et al.: "Overview of systematic reviews of the effectiveness of reminders in improving healthcare professional behavior". Syst Rev 2012; 1: 36.
  • 9. Walsh J., McDonald K.M., Shojania K.G., et al.: Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 3: Hypertension Care). Technical Reviews, No. 9.3. Rockville, MD: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services; January 2005. Available at: http://www.ncbi.nlm.nih.gov/books/NBK43920/. Accessed December 1, 2012.
  • 10. Lu C.Y., Ross-Degnan D., Soumerai S.B., et al.: "Interventions designed to improve the quality and efficiency of medication use in managed care: a critical review of the literature - 2001-2007". BMC Health Serv Res 2008; 8: 75.
  • 11. Ostini R., Hegney D., Jackson C., et al.: "Systematic review of interventions to improve prescribing". Ann Pharmacother 2009; 43: 502.
  • 12. Pearson S.A., Ross-Degnan D., Payson A., et al.: "Changing medication use in managed care: a critical review of the available evidence". Am J Manag Care 2003; 9: 715.
  • 13. Weingarten S.R., Henning J.M., Badamgarav E., et al.: "Interventions used in disease management programmes for patients with chronic illness-which ones work? Meta-analysis of published reports". BMJ 2002; 325: 925.
  • 14. Bryan C. and Boren S.A.: "The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: a systematic review of the literature". Inform Prim Care 2008; 16: 79.
  • 15. Damiani G., Pinnarelli L., Colosimo S.C., et al.: "The effectiveness of computerized clinical guidelines in the process of care: a systematic review". BMC Health Serv Res 2010; 10: 2.
  • 16. Heselmans A., Van de Velde S., Donceel P., et al.: "Effectiveness of electronic guideline-based implementation systems in ambulatory care settings - a systematic review". Implement Sci 2009; 4: 82.
  • 17. Robertson J., Walkom E., Pearson S.A., et al.: "The impact of pharmacy computerised clinical decision support on prescribing, clinical and patient outcomes: a systematic review of the literature". Int J Pharm Pract 2010; 18: 69.
  • 18. Roshanov P.S., Misra S., Gerstein H.C., et al.: "Computerized clinical decision support systems for chronic disease management: a decision-maker-researcher partnership systematic review". Implement Sci 2011; 6: 92.
  • 19. Van Herck P., De Smedt D., Annemans L., et al.: "Systematic review: effects, design choices, and context of pay-for-performance in health care". BMC Health Serv Res 2010; 10: 247.
  • 20. Bright T.J., Wong A., Dhurjati R., et al.: "Effect of clinical decision-support systems: a systematic review". Ann Intern Med 2012; 157: 29.
  • 21. O’Brien M.A., Rogers S., Jamtvedt G., et al.: "Educational outreach visits: effects on professional practice and health care outcomes". Cochrane Database Syst Rev 2007: CD000409.
  • 22. Sahota N., Lloyd R., Ramakrishna A., et al.: "Computerized clinical decision support systems for acute care management: a decision-maker-researcher partnership systematic review of effects on process of care and patient outcomes". Implement Sci 2011; 6: 91.
  • 23. Siddiqi K., Newell J., and Robinson M.: "Getting evidence into practice: what works in developing countries?". Int J Qual Health Care 2005; 17: 447.
  • 24. Tooher R., Middleton P., Pham C., et al.: "A systematic review of strategies to improve prophylaxis for venous thromboembolism in hospitals". Ann Surg 2005; 241: 397.
  • 25. Cortoos P.J., Simoens S., Peetermans W., et al.: "Implementing a hospital guideline on pneumonia: a semi-quantitative review". Int J Qual Health Care 2007; 19: 358.
  • 26. Mauger B., Marbella A., Pines E., et al.: "Implementing quality improvement strategies to reduce healthcare-associated infections: a systematic review". Am J Infect Control 2014; 42: S274.
  • 27. Caruso D., Kerrigan C.L., Mastanduno M.P., et al.: Improving Value-Based Care and Outcomes of Clinical Populations in an Electronic Health Record System Environment: A Technical Report. The Dartmouth Institute for Health Policy and Clinical Practice; 2011.
  • 28. Bravata D.M., Sundaram V., Lewis R., et al.: Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 5: Asthma Care). Technical Reviews, No. 9.5. Rockville, MD: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services; 2007. Available at: http://www.ncbi.nlm.nih.gov/books/NBK43968/. Accessed September 18, 2016.
  • 29. Grimshaw J.M., Thomas R.E., MacLennan G., et al.: "Effectiveness and efficiency of guideline dissemination and implementation strategies". Health Technol Assess 2004; 8: iii, 1–72.
  • 30. Chaillet N., Dube E., Dugas M., et al.: "Evidence-based strategies for implementing guidelines in obstetrics: a systematic review". Obstet Gynecol 2006; 108: 1234.
  • 31. Lineker S.C. and Husted J.A.: "Educational interventions for implementation of arthritis clinical practice guidelines in primary care: effects on health professional behavior". J Rheumatol 2010; 37: 1562.
  • 32. Menon A., Korner-Bitensky N., Kastner M., et al.: "Strategies for rehabilitation professionals to move evidence-based knowledge into practice: a systematic review". J Rehabil Med 2009; 41: 1024.
  • 33. Gagliardi A.R. and Alhabib S.: "Trends in guideline implementation: a scoping systematic review". Implement Sci 2015; 10: 54.
  • 34. Ivers N., Jamtvedt G., Flottorp S., et al.: "Audit and feedback: effects on professional practice and healthcare outcomes". Cochrane Database Syst Rev 2012; 6: CD000259.
  • 35. Mansell G., Shapley M., Jordan J.L., et al.: "Interventions to reduce primary care delay in cancer referral: a systematic review". Br J Gen Pract 2011; 61: e821.
  • 36. Okelo S.O., Butz A.M., Sharma R., et al.: Interventions to Modify Health Care Provider Adherence to Asthma Guidelines. Comparative Effectiveness Reviews, No. 95. Rockville, MD: Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services; 2013. Available at: http://www.ncbi.nlm.nih.gov/books/NBK144097/. Accessed September 16, 2016.
  • 37. Mahan C.E. and Spyropoulos A.C.: "Venous thromboembolism prevention: a systematic review of methods to improve prophylaxis and decrease events in the hospitalized patient". Hosp Pract 2010; 38: 97.
  • 38. Stone E.G., Morton S.C., Hulscher M.E., et al.: "Interventions that increase use of adult immunization and cancer screening services: a meta-analysis". Ann Intern Med 2002; 136: 641.
  • 39. Collinsworth A.W., Priest E.L., Campbell C.R., et al.: "A review of multifaceted care approaches for the prevention and mitigation of delirium in intensive care units". J Intensive Care Med 2016; 31: 127.

    CrossrefMedlineGoogle Scholar
  • 40. Chaillet N. and Dumont A. : "Evidence-based strategies for reducing cesarean section rates: a meta-analysis". Birth 2007; 34: 53.

    CrossrefMedlineGoogle Scholar
  • 41. Khunpradit S., Tavender E., Lumbiganon P.et al. : "Non-clinical interventions for reducing unnecessary caesarean section". Cochrane Database Syst Rev 2011; : CD005528.

    MedlineGoogle Scholar
  • 42. Kawamoto K., Houlihan C.A., Balas E.A.et al. : "Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success". BMJ 2005; 330: 765.

    CrossrefMedlineGoogle Scholar
  • 43. Nieuwlaat R., Connolly S.J., Mackay J.A.et al. : "Computerized clinical decision support systems for therapeutic drug monitoring and dosing: a decision-maker-researcher partnership systematic review". Implement Sci 2011; 6: 90.

    CrossrefMedlineGoogle Scholar
  • 44. Tan K., Dear P.R. and Newell S.J. : "Clinical decision support systems for neonatal care". Cochrane Database Syst Rev 2005; 2: CD004211.

    Google Scholar
  • 45. Jeffery R., Iserman E. and Haynes R.B. : "CDSS Systematic Review Team. Can computerized clinical decision support systems improve diabetes management? A systematic review and meta-analysis". Diabet Med 2013; 30: 739.

    CrossrefMedlineGoogle Scholar
  • 46. Scott A., Sivey P., Ait Ouakrim D.et al. : "The effect of financial incentives on the quality of health care provided by primary care physicians". Cochrane Database Syst Rev 2011; 9: CD008451.

    Google Scholar
  • 47. Carlsen B., Glenton C. and Pope C. : "Thou shalt versus thou shalt not: a meta-synthesis of GPs’ attitudes to clinical practice guidelines". Br J Gen Pract 2007; 57: 971.

    CrossrefMedlineGoogle Scholar
  • 48. Francke A.L., Smit M.C., de Veer A.J.et al. : "Factors influencing the implementation of clinical guidelines for health care professionals: a systematic meta-review". BMC Med Inform Decis Mak 2008; 8: 38.

    CrossrefMedlineGoogle Scholar
  • 49. Schunemann H.J., Cook D., Grimshaw J.et al. : "Antithrombotic and thrombolytic therapy: from evidence to application: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy". Chest 2004; 126: 688S.

    CrossrefMedlineGoogle Scholar
  • 50. Simpson S.H., Marrie T.J. and Majumdar S.R. : "Do guidelines guide pneumonia practice? A systematic review of interventions and barriers to best practice in the management of community-acquired pneumonia". Respir Care Clin N Am 2005; 11: 1.

    CrossrefMedlineGoogle Scholar
  • 51. Salam R.A., Lassi Z.S., Das J.K.et al. : "Evidence from district level inputs to improve quality of care for maternal and newborn health: interventions and findings". Reprod Health 2014; 11: S3.

    MedlineGoogle Scholar
  • 52. Dulko D. : "Audit and feedback as a clinical practice guideline implementation strategy: a model for acute care nurse practitioners". Worldviews Evid Based Nurs 2007; 4: 200.

    CrossrefMedlineGoogle Scholar
  • 53. Grimshaw J.M., Shirran L., Thomas R.et al. : "Changing provider behavior: an overview of systematic reviews of interventions". Med Care 2001; 39: II2.

    CrossrefMedlineGoogle Scholar
  • 54. Grindrod K.A., Patel P. and Martin J.E. : "What interventions should pharmacists employ to impact health practitioners’ prescribing practices?". Ann Pharmacother 2006; 40: 1546.

    CrossrefMedlineGoogle Scholar
  • 55. Jamtvedt G., Young J.M., Kristoffersen D.T.et al. : "Audit and feedback: effects on professional practice and health care outcomes". Cochrane Database Syst Rev 2006; 2: CD000259.

    Google Scholar
  • 56. Brouwers M.C., Garcia K., Makarski J.et al. : "The landscape of knowledge translation interventions in cancer control: what do we know and where to next? A review of systematic reviews". Implement Sci 2011; 6: 130.

    CrossrefMedlineGoogle Scholar

Footnotes

This document was approved by the American College of Cardiology Board of Trustees and the American Heart Association Science Advisory and Coordinating Committee and Executive Committee in October 2016.

The American College of Cardiology requests that this document be cited as follows: Chan WV, Pearson TA, Bennett GC, Castillo G, Cushman WC, Gaziano TA, Gorman PN, Handler J, Heil SKR, Krumholz HM, Kushner RF, MacKenzie TD, Sacco RL, Smith SC Jr, Stephens J, Stevens VJ, Vann JCJ, Wells BL. ACC/AHA special report: clinical practice guideline implementation strategies: a summary of systematic reviews by the NHLBI implementation science work group: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Clinical Practice Guidelines. J Am Coll Cardiol. 2017;69:1076–92.

This article has been copublished in Circulation.

Copies: This document is available on the World Wide Web sites of the American College of Cardiology (www.acc.org) and the American Heart Association (professional.heart.org). For copies of this document, please contact the Elsevier Reprint Department via fax (212-633-3820) or e-mail ([email protected]).

Permissions: Multiple copies, modification, alteration, enhancement, and/or distribution of this document are not permitted without the express permission of the American College of Cardiology. Requests may be completed online via the Elsevier site (http://www.elsevier.com/about/policies/author-agreement/obtaining-permission).