Cadogan et al. Implementation Science (2015) 10:167 DOI 10.1186/s13012-015-0356-4

SYSTEMATIC REVIEW

Open Access

The effectiveness of interventions to improve laboratory requesting patterns among primary care physicians: a systematic review

Sharon L. Cadogan1*, John P. Browne1, Colin P. Bradley2 and Mary R. Cahill3

Abstract

Background: Laboratory testing is an integral part of day-to-day primary care practice, with approximately 30 % of patient encounters resulting in a request. However, research suggests that a large proportion of requests do not benefit patient care and are avoidable. The aim of this systematic review was to comprehensively search the literature for studies evaluating the effectiveness of interventions to improve primary care physicians' use of laboratory tests.

Methods: A search of PubMed, the Cochrane Library, Embase and Scopus (from inception to 9 February 2014) was conducted. The following study designs were considered: systematic reviews, randomised controlled trials (RCTs), controlled clinical trials (CCTs), controlled before and after studies (CBAs) and interrupted time series analyses (ITSs). Studies were quality appraised using a modified version of the Effective Practice and Organisation of Care (EPOC) checklist. The population of interest was primary care physicians. Interventions were considered if they aimed to improve laboratory testing in primary care. The outcome of interest was the volume of laboratory tests.

Results: In total, 6,166 titles and abstracts were reviewed, followed by 87 full texts. Of these, 11 papers were eligible for inclusion in the systematic review: four RCTs, six CBAs and one ITS study. The types of interventions examined included education, feedback, guidelines, education with feedback, feedback with guidelines and changing order forms. The quality of included studies varied, with seven studies deemed to have a low risk of bias, three an unclear risk of bias and one a high risk of bias. All but one study found significant reductions in the volume of tests following the intervention, with effect sizes ranging from 1.2 to 60 %. Due to heterogeneity, meta-analysis was not performed.

Conclusions: Interventions such as educational strategies, feedback and changing test order forms may improve the efficient use of laboratory tests in primary care; however, the level of evidence is quite low and the quality is poor. The reproducibility of findings across different laboratories is also difficult to ascertain from the literature. Some standardisation of both interventions and outcome measures is required to enable formal meta-analysis.

Keywords: Interventions, Primary care, Behaviour change, Healthcare interventions, Laboratory testing

* Correspondence: [email protected]
1 Department of Epidemiology and Public Health, University College Cork, Cork, Ireland
Full list of author information is available at the end of the article

Background

Laboratory testing is an integral part of day-to-day practice in medicine and supports approximately 70 % of diagnoses and treatment decisions [1]. Further, among primary care physicians, an estimated 30 % of patient visits result in a laboratory request [2]. Healthcare budgets worldwide are facing increasing pressure to reduce costs and remove inefficiencies, while maintaining quality and safety. Laboratory testing is a major component of healthcare budgets in absolute terms, and demand for testing is increasing faster than medical activity [3]. In the National Health Service (NHS) in England, for example, an estimated £2.5 billion per annum is spent on laboratory services, accounting for 3–4 % of the UK national health budget [4, 5]. Despite this relatively small proportion of healthcare budget



expenditure, laboratory testing often underpins more costly downstream care such as outpatient visits and radiology requests. The unnecessary use of laboratory services has been highlighted by a meta-analysis of 108 studies involving 1.6 million results from 46 of the 50 most commonly ordered laboratory tests in medicine [6]. This found that, on average, 30 % of all tests are likely to be unnecessary [6, 7]. With respect to primary care, US research has found that physicians order diagnostic laboratory tests for approximately 30 % of patient visits [8]. Authors reported that test-ordering factors, including unnecessary test requests, were responsible for 13 % of testing process errors in primary care [9].

The overuse of laboratory services can stem from the physician, the patient and the broader policy context. For example, some studies have found that many physicians report uncertainty over when to order tests and how to interpret test results [2]. Reasons given for this include lack of knowledge about indications, costs, insurance restrictions and inconsistent names for the same test [8]. Meanwhile, patients have high expectations that blood tests will be performed and have little understanding of the limitations of testing [7, 10]. Other factors include a lack of knowledge regarding the financial effect of laboratory testing on the healthcare system [11] and the increasing volume of laboratory tests available to physicians [5]. Furthermore, system-level factors associated with laboratory testing patterns have been identified, including limitations of laboratory and/or surgery information technology systems [5, 12]. As a result, it has been recommended that the theoretical and contextual factors responsible for changing primary care physician behaviour should also be considered when designing interventions [13–16].

A number of approaches for reducing unnecessary test ordering in primary care have been implemented. These comprise interventions aimed at tackling both the overutilisation and underutilisation of tests through strategies such as including cost displays on electronic order forms, facilitating educational workshops and providing feedback to physicians on their test-ordering patterns [17–19]. However, the effectiveness of these strategies varies, and to date, no systematic review has focused solely on studies evaluating the test-ordering behaviours of primary care physicians. Hence, the objective of this systematic review was to systematically and comprehensively search the literature for studies evaluating the effectiveness of interventions aimed at nudging primary care physicians' ordering practice further in a direction which will maximally impact patient care. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria guided reporting of the methods and findings (see Additional file 1).


Methods

Primary objective

The main objective of this systematic review was to synthesise the available published literature on interventions focused on improving the appropriateness of laboratory requesting patterns from primary care.

Primary outcomes

The outcome of interest in this review was objectively measured provider performance (request rates or appropriateness of requests).

Types of studies

Systematic reviews, randomised controlled trials (RCTs), non-randomised controlled trials (NRCTs), controlled before-after studies (CBAs) and interrupted time series analyses (ITSs) were considered for this review.

Types of interventions

The review focused on interventions to change laboratory requesting patterns or improve laboratory requesting appropriateness.

Data sources

The following databases were searched for potentially eligible studies: PubMed (1966 to Feb 9, 2014), the Cochrane Library (1993 to Feb 9, 2014), Embase (1974 to Feb 9, 2014) and Scopus (1960 to Feb 9, 2014). Updated searches of the electronic databases were performed in November 2014 to capture any relevant papers published after the original search.

Inclusion criteria

This review included interventions aimed at improving laboratory requesting patterns where objectively measured provider performance (requesting rates or appropriateness of requests) served as the dependent variable. Intervention studies were considered only if the participants were primary care physicians, defined as any medically qualified physician providing primary healthcare, including general practitioners, family doctors, family physicians and family practitioners.

Search strategy

PubMed was searched for potentially eligible studies by combining relevant medical subject headings (MeSH terms) with subheadings and text words (e.g. “utilisation”, “laboratory test(s)”). Only citations on human subjects were included. Search terms and search findings are provided in Additional file 2. For completeness, searches were repeated without subheadings and the results of these two searches were combined. The same methods were used for searching the Cochrane Library, Embase (Elsevier) and Scopus databases. Electronic searches were


supplemented by cross-checking the reference lists of all identified studies. Duplicate citations were identified and removed using the EndNote citation manager.

Data collection and analysis

SLC carried out the electronic database searches. The search strategy for the review can be found in Additional file 2. For each citation, two investigators (SLC and MRC) independently screened the titles and abstracts for potential relevance against the inclusion/exclusion criteria. The full-text article was obtained for all potentially eligible studies. Any disagreements between SLC and MRC were resolved through discussion with a third reviewer (JPB). Data were extracted from the included papers by a single reviewer (SLC), and a second reviewer (JPB) checked the data extraction sheets for errors. Information was extracted on study design, year of study, setting, participants, intervention characteristics and the reporting of results. A sample data extraction form can be found in Additional file 3. It was not deemed appropriate to conduct a meta-analysis due to the heterogeneity of interventions and outcomes across the included studies. Instead, the existing analyses reported in the reviewed articles were extracted and reported in a narrative format.

Quality assessment of included studies

Included studies were independently assessed for quality and risk of bias by SLC and JPB, with any disagreements

Fig. 1 Flow diagram of the search strategy for review


resolved by discussion. This was performed using a modified version of the Cochrane Effective Practice and Organisation of Care (EPOC) Data Collection Checklist and Quality Criteria for studies with a control group (RCTs, CCTs and CBAs) and for ITS studies [20]. The tool is specifically designed for interventions aiming to improve practice and provides a risk of bias assessment for each of the included study designs (RCT, CCT, CBA and ITS). It comprises nine quality standards for RCTs, CCTs and CBAs: generation of the allocation sequence, concealment of allocation, baseline outcome measurements, baseline characteristics, incomplete outcome data, blinding of the outcome assessor, protection against contamination, selective outcome reporting and other risks of bias. For ITS designs, three further quality standards were assessed: the independence of the intervention from other changes, the pre-specified shape of the intervention effect and whether the intervention was unlikely to affect data collection [20].

Results

Search results

In total, 6,166 records of papers were identified from the search of the literature (Fig. 1). Based on a title review, 5,276 records were excluded. A further 504 records were duplicates and also excluded. Of the 386 records remaining, 299 were excluded based on abstract review. Full texts were obtained for the remaining 87 records, of which 11 papers met the inclusion criteria and were included in the review (Table 1).

Table 1 Overview of intervention characteristics and results

Horn et al. [17] (USA; ITS; 215 primary care physicians in 5 group practices)
Intervention type: Changing order form. Intervention: cost displays within the electronic health record at the time of ordering (153 physicians). Comparator: control group with no cost information (62 physicians). Follow-up: 12 months pre- and 6 months post-intervention. Effect: difference-in-difference analysis; the cost displays resulted in a reduction of 0.4–5.6 laboratory orders per 1,000 visits per month (p < 0.001).

Kahan et al. [26] (Israel; CBA; number of physicians not disclosed)
Intervention type: Changing order form. Intervention: a new version of the electronic order form. Comparator: older version of the computerised order form. Follow-up: 6 months pre- and 4 months post-intervention. Effect: 31–41 % reduction relative to the pre-intervention month, with a 36–58 % reduction the following month; −2–3 % changes for control tests.

Shalev et al. [27] (Israel; CBA; 865 primary care physicians)
Intervention type: Changing order form. Intervention: changing the volume of tests on the order form (27 tests removed and 2 tests added, reducing the number of tests available on the check-box form from 51 to 26). Comparator: standard form prior to the intervention. Follow-up: 12 months pre- and 24 months post-intervention. Effect: for deleted tests, a 27 % and 19.2 % reduction 1 and 2 years after the intervention, respectively.

Zaat et al. [28] (Netherlands; CBA; 75 primary care physicians)
Intervention type: Changing order form. Intervention: volume of tests on the order form reduced, with a hand-written request required if a test was not displayed (47 physicians). Comparator: standard form (28 physicians). Follow-up: 5 months pre-intervention (control) and 12 months post-intervention. Effect: 18 % reduction in the number of tests requested monthly in the experimental group after the intervention, compared with the control doctors.

Baricchi et al. [25] (Italy; CBA; 44 primary care physicians)
Intervention type: Education. Intervention: pathology-specific laboratory algorithms for 7 common clinical scenarios, with education (8 training sessions) for physicians on the algorithms and their use (23 physicians). Comparator: current practice. Follow-up: 12 months pre- and 12 months post-intervention (data on test requests for a randomly selected 30 days in each period). Effect: 5 % reduction in the volume of tests requested by the intervention district 1 year after the intervention (retrospective audit), compared with a 1 % increase in the control district.

Larsson et al. [18] (Sweden; CBA; 63 primary care physicians in 19 practices)
Intervention type: Education. Intervention: an education programme (2-day lecture series). Comparator: current practice (2 practices). Follow-up: 5 months pre-intervention and 4 months post-intervention. Effect: of 7 test ratios recommended to decrease in volume, 5 did (p < 0.05); of 7 expected to increase in volume, 4 did (p < 0.05).

Verstappen et al. [21] (Netherlands; RCT; 174 primary care physicians in 26 practices)
Intervention type: Education. Intervention: a primary care physician-based strategy focused on clinical problems and associated tests (85 physicians in arm A and 89 physicians in arm B). Comparator: each group acted as a control for the other. Follow-up: 6 months pre- and 6 months post-intervention. Effect: 12 % reduction in the volume of total tests in the intervention group versus no change in the control arm; 16 % reduction in inappropriate tests for the intervention group.

van Wijk et al. [22] (Netherlands; RCT; 60 primary care physicians in 44 practices)
Intervention type: Guidelines. Intervention: guideline-based order form (29 GPs) versus restricted guideline-based electronic order form (31 GPs). Comparator: each group acted as a control for the other. Follow-up: study period 1 July 1994–30 June 1995. Effect: decision support based on guidelines was more effective in changing blood test ordering than decision support based on initially displaying a limited number of tests; primary care physicians who used BloodLink-Guideline requested on average 20 % fewer tests than practitioners who used BloodLink-Restricted (mean ± SD, 5.5 ± 0.9 versus 6.9 ± 1.6 tests; p = 0.003).

Baker et al. [23] (UK; RCT; 96 primary care physicians in 33 practices)
Intervention type: Guidelines and feedback. Intervention: 58 GPs (17 practices) received guidelines followed by feedback about the numbers of thyroid function tests, rheumatoid factor tests and urine cultures they ordered (quarterly for 1 year). Comparator: 38 GPs (16 practices) received guidelines then feedback about lipid and plasma viscosity tests (each group a control for the other). Follow-up: baseline and 1 year post-intervention. Effect: no effect; no change in the volume of tests requested per 1,000 patients in either study group for any of the tests.

Thomas et al. [19] (UK; RCT; 370 primary care physicians in 85 practices)
Intervention type: Feedback and education. Intervention: quarterly feedback of requesting rates and reminder messages; practices allocated to 1 of 4 groups: control (20 practices), enhanced feedback alone (22 practices), reminder messages alone (22 practices) or both enhanced feedback and reminder messages (21 practices). Comparator: current practice. Follow-up: 12 months pre- and post-intervention. Effect: 11 % reduction in requests for practices receiving enhanced feedback or reminder messages (OR 0.89, 95 % CI 0.83–0.93) compared with the control group.

Tomlin et al. [24] (New Zealand; CBA; 3,160, 3,140 and 3,335 primary care physicians)
Intervention type: Guidelines, feedback and education. Intervention: 3 marketing programmes (guidelines, individual feedback and professional development). Comparator: locum and other physicians not targeted by the programmes. Follow-up: 2 years pre- and post-intervention. Effect: 60 % reduction in the number of ESR tests by the intervention group following the intervention, versus an 18 % reduction among comparison doctors.

Table 2 Quality assessment of included studies

ITS criteria: independence of the intervention from other changes; knowledge of the allocated intervention; whether the intervention was unlikely to affect data collection; pre-specified shape of effect; attrition bias; selective reporting; other risk of bias; overall risk.

Horn et al. [17] (Changing order form; ITS)
Independent of other changes: low risk (the study had a control group). Knowledge of allocated intervention: high risk (participants knew which intervention they were receiving). Unlikely to affect data collection: low risk (sources and data collection methods were the same before and after the intervention). Shape of effect pre-specified: low risk (specified). Attrition bias: unclear risk (missing data unclear). Selective reporting: low risk (appropriate outcomes reported). Other risk of bias: low risk (no other potential bias). Overall: high risk.

Criteria for studies with a control group (RCTs, CCTs, CBAs): random sequence generation; allocation concealment; protection against contamination; blinding; attrition bias; selective reporting; similarity at baseline; overall risk.

Shalev et al. [27] (Changing order form; CBA)
Random sequence generation: high risk (CBA study). Allocation concealment: high risk (CBA study). Protection against contamination: N/A (no control group). Blinding: high risk (no blinding). Attrition bias: low risk (no missing data). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: N/A (no control group). Overall: high risk.

Kahan et al. [26] (Changing order form; CBA)
Random sequence generation: high risk (CBA study). Allocation concealment: high risk (CBA study). Protection against contamination: N/A (no control group). Blinding: high risk (no blinding). Attrition bias: low risk (no missing data). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: N/A (no control group). Overall: high risk.

Zaat et al. [28] (Changing order form; CBA)
Random sequence generation: high risk (CBA study). Allocation concealment: high risk (CBA study). Protection against contamination: unclear risk (no information in text). Blinding: high risk (participants knew what they had been allocated to). Attrition bias: low risk (no missing data). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: unclear risk (no information in text on baseline characteristics; baseline outcomes similar (Figs. 1 and 2)). Overall: high risk.

Baricchi et al. [25] (Education; CBA)
Random sequence generation: high risk (CBA study). Allocation concealment: high risk (CBA study). Protection against contamination: unclear risk (no information in text). Blinding: high risk (participants knew what they had been allocated to). Attrition bias: low risk (no missing data). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: unclear risk (no information in text). Overall: high risk.

Larsson et al. [18] (Education; CBA)
Random sequence generation: high risk (CBA study). Allocation concealment: high risk (CBA study). Protection against contamination: unclear risk (no information in text). Blinding: high risk (participants knew what they had been allocated to). Attrition bias: low risk (no missing data). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: unclear risk (no information in text). Overall: high risk.

Verstappen et al. [21] (Education; RCT)
Random sequence generation: low risk (blocked randomisation; cluster trial, allocation after recruitment completed). Allocation concealment: low risk. Protection against contamination: low risk (independent clinics). Blinding: low risk (controls were blinded, hence preventing the Hawthorne effect). Attrition bias: low risk (no missing data). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: low risk (outcomes measured at baseline; baseline characteristics reported and similar). Overall: low risk.

van Wijk et al. [22] (Guidelines; RCT)
Random sequence generation: low risk ("researcher not involved in the study… performed the randomisation using random-numbers table"). Allocation concealment: low risk ("each practice assigned by simple random allocation"). Protection against contamination: low risk (separate practices). Blinding: high risk (all participants knew what they had been allocated to; test ordering in controls may have been affected). Attrition bias: low risk (missing data similar between groups and all participants accounted for). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: low risk (similar baseline characteristics). Overall: high risk.

Baker et al. [23] (Guidelines and feedback; RCT)
Random sequence generation: low risk (random number table). Allocation concealment: low risk (cluster trial, allocation after recruitment completed). Protection against contamination: low risk (primary care physicians work separately). Blinding: high risk (participants knew what they had been allocated to; test ordering in controls may have been affected). Attrition bias: low risk (no sites lost to follow-up). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: high risk (participants in group 2 had fewer patients and GPs than group 1). Overall: high risk.

Thomas et al. [19] (Feedback and education; cluster RCT)
Random sequence generation: low risk ("cluster randomization… with a minimization procedure"; cluster trial, allocation after recruitment completed). Allocation concealment: low risk. Protection against contamination: low risk (separate practices). Blinding: high risk (participants knew what they had been allocated to; test ordering in controls may have been affected). Attrition bias: low risk (no missing data). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: low risk (similar baseline characteristics). Overall: high risk.

Tomlin et al. [24] (Guidelines, feedback and education; CBA)
Random sequence generation: high risk (CBA study). Allocation concealment: high risk (CBA study). Protection against contamination: high risk ("changes… might be explained by… contamination of the comparison group"). Blinding: high risk (participants knew what they had been allocated to; test ordering in controls may have been affected). Attrition bias: low risk (no missing data). Selective reporting: low risk (appropriate outcomes reported). Similar at baseline: high risk (intervention group included GPs only, while the control group also included locum GPs). Overall: high risk.


Risk of bias in included studies

Table 2 provides details of the quality assessment of the studies. Ten of the 11 studies were deemed to have a high risk of bias, while one study [21] was deemed to be at low risk. Randomisation and allocation concealment were adequately performed in the four included RCTs [19, 21–23]. The most common reason for a high risk of bias was that participants (primary care physicians) in the intervention groups could not be blinded. This introduces the risk of information bias, as physicians may have altered their requesting behaviours based on knowledge of being assessed. Only one of the studies adequately blinded participants [21]. A key limitation of the RCT by Thomas and colleagues [19] was the lack of power to detect interactions; hence, a much larger trial would be required to establish whether these interventions act independently. A key limitation of the included ITS study [17] was the physician-level design and the absence of individual patient characteristics.

Characteristics of studies included in the review

Characteristics and the key findings of the included studies are presented in Table 1. Four of the studies were RCTs [19, 21–23], six were controlled before-after studies [18, 24–28] and one was an ITS [17] with a parallel control group. The included studies were conducted in the Netherlands [21, 22, 28], the USA [17], the UK [19, 23], Italy [25], Israel [26, 27], Sweden [18] and New Zealand [24], with samples ranging in size from 44 to over 3,000 primary care physicians. The interventions covered by the review include education programmes [18, 21], laboratory profiles [25], clinical guidelines [22], guidelines and feedback combined [23], cost displays [17], the redesign of order forms [26–28] and the use of feedback and education strategies [19, 24].

Clinical guidelines and policy recommendations

In their RCT, van Wijk et al. [22] found that decision support based on guidelines integrated with patients' electronic records was more effective in changing blood test-ordering behaviour than decision support based on the limited testing offered in modified request forms. Primary care physicians who had access to the guideline-based system ordered 20 % fewer tests per form than did primary care physicians who had access to the restricted system (mean ± SD, 5.5 ± 0.9 versus 6.9 ± 1.6 tests, respectively; p = 0.003, Mann-Whitney test) [22]. Similar findings were obtained in the adjusted multivariate regression analysis [22]: controlling for practice characteristics and historical test-ordering behaviour, 19 % more tests were requested by primary care physicians with access to the restricted order form (RR 1.19, 95 % CI 1.10–1.29) [22]. The study also reported a difference in requesting patterns between the two groups for specific tests. For example, in the restricted group, 61.2 % of order forms included an


erythrocyte sedimentation rate test, compared with 44.1 % in the guideline group (p < 0.001) [22]. In a study using a CBA design and involving over 3,000 primary care physicians in New Zealand, Tomlin et al. [24] assessed the effect of three different marketing programmes promoting clinical guidelines. Each of these programmes involved written material advising of guideline recommendations; individual laboratory-test use feedback data were distributed to each practice, and professional development opportunities were provided. The study found that clear information marketed to primary care physicians improved the quality of laboratory test ordering [24]. Key findings included a 42 % reduction in erythrocyte sedimentation rate tests following the intervention (intervention physicians −60 %, comparison physicians −18 %, p < 0.01) [24].

Feedback and reminders

Baker et al. [23] evaluated the use of feedback following guidelines in their RCT. Both groups received guidelines followed by feedback about their use of selected tests. The first group of practices received feedback about their thyroid function, rheumatoid factor and urine culture requests, while the second group received feedback about their serum lipid and viscosity requests. Hence, both groups were intervention groups, but each acted as a control for the other. The authors reported no change in laboratory requests following quarterly feedback over a 1-year period for any of the five tests studied [23].

A multifaceted cluster RCT by Verstappen et al. [21] aimed to optimise primary care physicians' test-ordering behaviour by means of practice-based strategies targeting tests for specific clinical problems. Thirteen groups of primary care physicians underwent the strategy for three clinical problems (arm A: cardiovascular topics, upper and lower abdominal complaints), while 13 other groups underwent the strategy for three other clinical problems (arm B: chronic obstructive pulmonary disease and asthma, general complaints, degenerative joint complaints). The strategy consisted of personalised graphical feedback, including a comparison of each physician's own data with those of colleagues; dissemination of national, evidence-based guidelines; and regular small-group meetings on quality improvement [21]. Each arm of the trial acted as a control for the other. Physicians discussed personal feedback reports in small group meetings, related them to evidence-based clinical guidelines and made plans for change. The authors reported a 12 % reduction in the volume of total tests for arm A in the intervention group versus no change in the control arm (p < 0.001) [21]. However, for arm B of the trial, no statistically significant changes were identified (p = 0.22) [21].

In a third RCT, by Thomas et al. [19], the use of feedback combined with educational reminder interventions


was assessed. The feedback intervention involved a booklet containing graphical presentations of individual practice-level ordering for the targeted tests; each practice was compared with regional statistics over a 3-year period. Educational reminders were developed in conjunction with the primary care physicians and were included with test results. The study found that primary care practices receiving feedback, reminders or both had significantly reduced blood test utilisation (p < 0.001) [19]. The combined effect of the interventions resulted in a 22 % reduction in the total number of targeted tests ordered (OR = 0.78, 95 % CI 0.71–0.85) versus 13 % for reminders alone (OR = 0.87, 95 % CI 0.81–0.94, p =
