
Emergency Medicine Australasia (2014) 26, 113–124

doi: 10.1111/1742-6723.12195

REVIEW ARTICLE

Review article: What makes a good healthcare quality indicator? A systematic review and validation study

Peter JONES,1 Michael SHEPHERD,2 Susan WELLS,3 James LE FEVRE4 and Shanthi AMERATUNGA3

1Adult Emergency Department, Auckland City Hospital, Auckland, New Zealand, 2Children's Emergency Department, Auckland District Health Board, Auckland, New Zealand, 3Section of Epidemiology and Biostatistics, School of Population Health, University of Auckland, Auckland, New Zealand, and 4Adult Emergency Department, Auckland District Health Board, Auckland, New Zealand

Abstract
Indicators measuring aspects of performance to assess quality of care are often chosen arbitrarily. The present study aimed to determine what should be considered when selecting healthcare quality indicators, focusing particularly on the application to emergency medicine. Structured searches of electronic databases were supplemented by website searches of quality of care and benchmarking organisations, citation searches and discussions with experts. Candidate attributes of 'good' healthcare indicators were extracted independently by two authors. The validity of each attribute was independently assessed by 16 experts in quality of care and emergency medicine. Valid and reliable attributes were included in a critical appraisal tool for healthcare quality indicators, which was piloted by emergency medicine specialists. Twenty-three attributes were identified, and all were rated moderately to extremely important by an expert panel. The reliability was high: alpha = 0.98. Twelve existing tools explicitly stated a median (range) of 14 (8–17) attributes. A critical appraisal tool incorporating all the attributes was developed. This was piloted by four emergency medicine specialists who were asked to appraise and rank a set of six candidate indicators. Although using the tool took more time than implicit gestalt decision making (median [interquartile range] 190 [43–352] min versus 17.5 [3–34] min), their rankings changed after using the tool. To inform the appraisal of quality improvement indicators for emergency medicine, a comprehensive list of indicator attributes was identified, validated, developed into a tool and piloted. Although expert consensus is still required, this tool provides an explicit basis for discussions around indicator selection.

Key words: evidence-based emergency medicine, pilot study, quality indicator, review, validation study.

Background
With the advent of National Emergency Access Targets worldwide, there has been much recent interest in the best 'quality indicators' for EDs. The definition of healthcare quality will differ, depending on the particular lens through which the health system is viewed.1

Correspondence: Dr Peter Jones, Adult Emergency Department, Auckland City Hospital, Park Road, Grafton, Private Bag 92024, Auckland 1142, New Zealand. Email: [email protected] Peter Jones, MBChB, MSc (Oxon), FACEM, Director of Emergency Medicine Research; Michael Shepherd, MBChB, FRACP, Clinical Director Emergency Medicine; Susan Wells, MBChB, MPH (Hons), PhD, Public Health Physician, Senior Lecturer; James Le Fevre, MBChB, Research Fellow; Shanthi Ameratunga, MBChB, MPH (Dist), PhD, Public Health Physician, Deputy Head of Department. Accepted 26 November 2013

Key findings
• Healthcare quality indicator selection is important.
• Poor indicator selection and application may result in unintended consequences.
• This study highlights what needs to be considered when selecting healthcare quality indicators.
• Implicit decisions about indicator selection may be enhanced by use of a checklist tool.

Frameworks have been developed to consider the dimensions of healthcare quality in the context of different levels of the healthcare system, including frontline care, healthcare service, organisation or policy levels.2 Typically, the dimensions of quality stated in such frameworks are based on seminal work by the Institute of Medicine (an independent, non-profit organisation that is the health arm of the American National Academy of Sciences)3 and encompass Patient Centeredness, Access/Timeliness, Equity, Effectiveness, Efficiency and Safety. A quality indicator is a measure relating to aspects of the healthcare system, such as the resources required to provide care, how care is delivered or the outcomes of care. Indicators might therefore be classified both by the domains of quality encompassed and by how these relate to the structure, processes or outcomes of the healthcare system.4 With growing interest in measuring the quality of healthcare, it is important to take account of the validity of quality indicators for both the ED and



the wider healthcare system.5–8 The importance to the end user, the strength of evidence linking a proposed indicator to patient outcomes and the feasibility of data collection have been cited as relevant aspects influencing the choice of indicators.6,7,9 However, there are other attributes of healthcare quality indicators that might also need to be considered.10 The use of critical appraisal tools is standard practice for guideline developers and systematic reviewers11,12 and is recommended as best practice for all clinicians when making decisions on investigations, prognosis and interventions.13 The use of critical appraisal tools to assess quality of care indicators is less well established. Although such tools have been used in healthcare settings,14,15 some groups advocating particular indicators in the ED setting have either used limited tools6 or none.7 In recent systematic reviews considering ED crowding as a marker of quality of care, metrics for crowding were not critically appraised in a systematic way using a tool designed for this purpose.16–18 The present study aimed to identify the attributes that should be considered when selecting healthcare quality indicators. The secondary aims were to determine the usefulness or importance of each attribute to the selection of an indicator, and to develop and pilot a critical appraisal tool for healthcare quality indicators based on these attributes.

Method

Search strategy
Searches of the websites of the EQUATOR network, the Critical Appraisal Skills Programme (CASP), the Cochrane Effective Practice group, the University of Auckland's Effective Practice, Informatics and Quality Improvement group, the National Health Service (NHS), the National Institutes of Health and the Emergency Medicine Colleges (American, Australasian, British and Canadian) were undertaken in May and June 2011. These were followed by structured searches in Medline, Embase and CINAHL (for all indexed years up to June 2011) using

MeSH and free-text terms for ‘quality assurance or performance indicators’, and ‘critical appraisal tools’ (Appendix S1). A citation search was undertaken, and experts in the field were consulted. In addition, the websites for the Agency for Healthcare Research and Quality, the National Committee for Quality Assurance, RAND corporation, The Joint Commission, the Australian Council on Healthcare Standards, the Australian Commission on Safety and Quality in Healthcare, the Health Roundtable and the Institute of Medicine were also searched (October–November 2011). The definitions used were as follows. Quality indicator: ‘a measurable element of practice performance, for which there is evidence or consensus that it can be used to assess the quality, and hence the change in quality, of care provided’.19 Quality indicator critical appraisal (QICA) tool: a list of attributes, with provision to make judgements about the presence or absence of that attribute (e.g. checkboxes, space to summarise evidence), and/or a final decision about whether the indicator should be used.

Study selection
Titles and abstracts were screened, and potentially relevant articles were retrieved for full-text review without language restriction. Articles not describing attributes of a quality indicator were excluded. Quality indicator attributes were recorded for all studies and ordered by year of publication. Data were extracted by two authors independently, and differences were resolved by consensus. The point of saturation, where no subsequent studies suggested new attributes, was noted. The level of evidence was based on the Oxford Centre for Evidence Based Medicine levels of evidence table (2009):
Level 1: Systematic review of validating cohort studies
Level 2: Validating cohort study; validation independent
Level 3: Validating cohort study; validation not independent
Level 4: Exploratory cohort study; no validation/unstructured review
Level 5: Expert opinion

Validation of indicator attributes
An Internet survey (http://www.surveymonkey.com) was sent to 26 independent experts purposefully selected from the fields of quality of care and Emergency Medicine from Australasia, England and the USA. The attributes appeared in a random order for each panellist. The panel scored each attribute on its usefulness or importance (validity) to the process of indicator appraisal using a four-point scale (1 = not useful/important, 2 = slightly useful/important, 3 = moderately useful/important, 4 = extremely useful/important), with provision for free-text comments. The reliability of the scoring for each attribute was also assessed.
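As an illustration of how such four-point panel ratings can be summarised into the medians and IQRs reported later (Table 2), here is a minimal Python sketch; the attribute names and scores are hypothetical, not the study's data.

```python
# Summarising panel ratings as median (IQR) per attribute.
# Rows = panellists, columns = attributes, on the 4-point scale
# (1 = not useful ... 4 = extremely useful); data are illustrative only.
import numpy as np

attributes = ["importance/relevance", "true measure", "reliability", "SPO framework"]
ratings = np.array([
    [4, 4, 3, 2],
    [4, 3, 4, 3],
    [3, 4, 4, 2],
    [4, 4, 3, 3],
])

for name, col in zip(attributes, ratings.T):
    q1, med, q3 = np.percentile(col, [25, 50, 75])
    print(f"{name}: median {med:g} (IQR {q1:g}, {q3:g})")
```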

Development of a quality indicator critical appraisal tool
The attributes found to be valid and reliable by the expert panel were grouped into sections based on which aspects of indicator appraisal they concerned. The resulting tool was piloted twice. The first pilot was completed by two authors (PJ and JLF), who independently appraised and ranked eight indicators. The indicators had previously been shortlisted for inclusion as process or outcome measures for a study assessing the impact of a national emergency access target in New Zealand.20 In the second pilot, 13 senior ED clinicians with no prior experience using indicator appraisal tools independently ranked six different indicators. Once their responses were returned, they were asked to repeat the exercise using the QICA tool. These participants were volunteers from 102 attendees at the New Zealand Emergency Departments Conference in 2012. We sought to determine whether using the QICA tool would alter their views on potential quality indicators for the ED. The regional ethics committee assessed the present study as an audit not requiring ethical approval.


Statistics
Medians (interquartile range [IQR]), means (95% confidence intervals [CIs]) and proportions (95% CI) were used to describe the data as appropriate. Inter-observer agreement was tested with the kappa statistic (VassarStats 2012, Poughkeepsie, NY, USA, http://vassarstats.net/kappa.html). Cronbach's alpha was used to assess the reliability of the scoring of the attributes. Differences were tested with the paired samples t-test as appropriate, and agreement was assessed by correlation (Spearman's rho). Statistical significance was assessed at the 0.05 level, and all tests were two-tailed (SPSS v18.0.3, 2010, IBM Corporation, Armonk, NY, USA).
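For readers wanting to reproduce the reliability calculation, the sketch below implements Cronbach's alpha for a raters-by-items score matrix. The study itself used SPSS; the rating matrix here is illustrative.

```python
# Minimal sketch of Cronbach's alpha; the study computed this in SPSS.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Alpha = k/(k-1) * (1 - sum of item variances / variance of rater totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (attributes)
    item_vars = scores.var(axis=0, ddof=1)       # per-item variance across raters
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each rater's total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

ratings = np.array([   # rows = expert panellists, columns = attributes (1-4 scale)
    [4, 3, 4, 2],
    [4, 4, 3, 3],
    [3, 4, 4, 2],
    [4, 3, 3, 3],
    [4, 4, 4, 2],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```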

Results

Search strategy and identification of attributes
We identified 33 relevant references. Twenty-one were journal articles, and 12 were documents from the grey literature (Fig. 1). Eleven articles discussed attributes,4,10,21–29 three provided lists30–32 and seven either suggested an indicator appraisal tool33–35 or used tools.14,15,19,36 Of the 12 grey literature sources (reports, websites and an unpublished study), we found seven lists of indicator attributes37–43 and five indicator appraisal tools.6,44–47 Both authors (PJ and MS) found the same 20 attributes, with three additional attributes identified by PJ (Table 1). These were specificity of the indicator for a particular department/setting, the use of the tool to allow for comparisons between indicators (the 'bottom line') and stating which domain(s) of quality an indicator reflects. No new attributes were identified after 1997. The agreement between data extractors on whether each source described a particular attribute was fair to moderate (k = 0.39, 95% CI 0.3–0.48). Existing tools included 8 to 17 attributes explicitly; median (IQR) 14 (12–15), as shown in Figure 2.
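The two-extractor agreement just reported (k = 0.39) is a Cohen's kappa over binary mention/no-mention judgements. A minimal sketch of that calculation follows; the judgement vectors are illustrative, not the study's extraction data, which were analysed with VassarStats.

```python
# Cohen's kappa for two reviewers' binary judgements (1 = attribute mentioned).
def cohen_kappa(a: list, b: list) -> float:
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                 # each reviewer's 'yes' rate
    p_exp = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

reviewer_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # illustrative judgements
reviewer_2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]
print(f"kappa = {cohen_kappa(reviewer_1, reviewer_2):.2f}")
```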

Figure 1. Results of search strategy to find appraisal tools for healthcare quality indicators.

Validation of indicator attributes
Sixteen of 26 (62%, 95% CI 40–80%) experts responded to the web survey. Table 2 shows how they scored the attributes. All were considered at least moderately useful except for placing the indicator in the Donabedian 'Structure/Process/Outcome' framework,48 which was believed to be slightly useful. The reliability of the attribute scoring was good: Cronbach's alpha was 0.98 for all attributes considered together and ranged from 0.83 to 0.93 when the attributes were grouped.

Development and use of the quality indicator critical appraisal tool
The QICA tool (Fig. 3) was developed because no previous tool considered all important attributes of an indicator. The attributes were grouped into six sections according to which part of the indicator selection process they reflected. Sections A and B contain screening questions dealing with the importance and relevance of the indicator to the setting in which it will be used and whether it actually measures the outcome of interest. If these are not answered satisfactorily, then no further review is necessary and the indicator should not be used. If sections A and B are considered satisfactory, an assessor should go on to look at the other attributes. Section C considers the evidence for prior use of the indicator and its association with outcomes for patients. Section D explores the characteristics of the indicator, whereas section E looks at the practicalities of data collection and analysis. Section F allows the user to decide which domains of quality of care the indicator is likely to encompass and to make the final decision on whether the indicator should be used. As such decisions are often not clear-cut, a 100 mm VAS was arbitrarily chosen, with the dichotomous outcomes YES and NO anchoring each end, to enable the user to weight this decision and facilitate comparison between indicators.
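As an illustration of how section F's VAS markings might be collated to compare candidate indicators across several appraisers, here is a small sketch; the indicator names and millimetre readings are hypothetical, not values from the study.

```python
# Illustrative collation of section F VAS measurements (mm from the NO anchor).
from statistics import mean, stdev

vas_mm = {   # hypothetical appraiser readings per candidate indicator
    "time to analgesia for fractures": [82, 74, 90],
    "unplanned representations": [70, 77, 65],
    "missed sub-arachnoid haemorrhage": [31, 48, 25],
}

# Rank candidate indicators by mean VAS score, highest (closest to YES) first.
for name, scores in sorted(vas_mm.items(), key=lambda kv: -mean(kv[1])):
    print(f"{name}: mean {mean(scores):.0f} mm (SD {stdev(scores):.0f})")
```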


TABLE 1. Attributes of a quality indicator in published articles

Sources appraised (author,(reference) country, year, evidence level, format): Neuhauser,25 USA, 1914, 5, D; O'Malley,26 USA, 1950, 5, D; Sheps,28 USA, 1955, 5, D; Lembcke,30 USA, 1956, 5, L; Donabedian,4 USA, 1966, 5, D; Lohr,31 USA, 1990, 5, L; Brook,22 USA, 1996, 5, D; Eddy,29 USA, 1997, 5, D; McGlynn,10 USA, 1998, 5, D; McGlynn,35 USA, 1998, 5, T; McColl,36 UK, 1998, 4, T; Rhew,14 USA, 2001, 4, T; Rubin,27 USA, 2001, 5, D; Campbell,23 UK, 2002, 5, D; Mainz,24 Den, 2003, 5, D; Geraedts,34 Ger, 2003, 5, T; Gory,15 Fra, 2003, 4, T; Bird,21 UK, 2005, 5, D; Buchanan,33 NZ, 2006, 5, T; Perera,19 NZ, 2007, 4, T; Willis,32 Aus, 2007, 5, L.

Attributes recorded for each source: ethical conduct/reporting; specific to one department/area; originator conflict of interest; whole system indicator; policy relevance; equitable; understandable; target group identified; domain of quality being measured; responsive (actionable); reliability; power/precision; scoring system; potential for perverse outcomes; cost of collecting versus utility; bottom line; strength of evidence; current use/testing; comparable/bias addressed; measures what it says; description/definition; accessible/useable data; importance/relevance.

[The per-source Yes/I cell entries of Table 1 could not be recovered from the extracted layout.]

D, discussed; Den, Denmark; Fra, France; Ger, Germany; L, list; NZ, New Zealand; T, tool; UK, United Kingdom; USA, United States of America. Level of evidence: 5 = expert opinion, 4 = used without independent reference standard. Yes = mentioned explicitly in article, I = implied but not mentioned explicitly.


Figure 2. Attributes of a quality indicator explicitly stated in existing appraisal tools.

In the first pilot, the correlation between the two users' VAS scores was weakly positive for the eight indicators (rho = 0.21, P = 0.6). The tool provided an explicit basis for the study team to reach consensus around the final choice of indicators for the study. Four staff specialists in Emergency Medicine completed the second pilot. The interval between completing the two phases of the pilot was 1 to 2 months. Table 3 shows how their responses changed and the time taken to use the QICA tool. The time taken to assess indicators was about 30 min per indicator using the QICA tool compared with less than 5 min per indicator for gestalt decisions. Although they changed their ranking of indicators after using the tool, the users felt gestalt decisions were easier to make, and they were neutral about recommending the tool for indicator selection.
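The agreement statistic reported for the first pilot is a Spearman rank correlation between the two appraisers' VAS scores over the eight indicators. A brief sketch using scipy.stats.spearmanr follows, with placeholder scores rather than the pilot data.

```python
# Spearman correlation between two appraisers' VAS scores for eight
# candidate indicators; the scores below are placeholders, not pilot data.
from scipy.stats import spearmanr

appraiser_1 = [80, 65, 30, 90, 20, 55, 70, 40]
appraiser_2 = [60, 75, 45, 70, 35, 25, 85, 50]

rho, p = spearmanr(appraiser_1, appraiser_2)
print(f"rho = {rho:.2f}, P = {p:.2f}")
```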

Discussion
A comprehensive list of healthcare quality indicator attributes that were thought to be valid by independent experts was found. This list formed the basis of a new tool to assist the critical appraisal of quality indicators. The QICA tool was thought to have facilitated the final decision on which indicators would be measured as part of the Shorter Stays in ED (SSED) research project, although simply using the tool without discussion did not result in good correlation between users' choice of indicators. Subjectivity is inevitable when using appraisal tools, as the context and prior knowledge and experience of the appraisers will differ. Using the tool was also associated with a subtle change in ranking of indicators by emergency physicians, at the cost of being more difficult than a gestalt decision. The extra time and care required to assess an indicator with the QICA tool is likely to be beneficial because users are encouraged to think systematically and in more depth about the decision they are making. Users also become aware of all important elements of a quality indicator. It is generally accepted that use of appraisal tools for review of studies improves interpretation,11 and there is no reason to believe that the same is not true for assessing quality indicators, although there is a paucity of evidence other than expert opinion to support this.

Almost a century ago, Codman first discussed the attributes of a good quality indicator,25 and in 1956 Lembcke was the first to specifically list attributes.30 The point after which no new attributes were described was reached with a presentation to the Institute of Medicine in 1997 by Eddy,29 and the first tools to facilitate critical appraisal of a quality indicator were published in 1998 by McColl36 and McGlynn.35 These tools consider only half of the important indicator attributes found in the present review. The QICA tool builds on the foundation provided by this and other work, with a structure based on the CASP tools for appraising research studies49 and prior tools suggested by Buchanan,33 the NHS Good Indicators Guide45 and Perera et al.19

Compared with other published tools, the QICA tool has both similarities and differences. The Canadian Institute for Clinical Evaluative Sciences tool6 is the only one previously used to appraise ED quality indicators; however, it considers only half of the attributes. Gory et al. used the Joint Commission's framework to create a practical tool emphasising the specifications of a particular indicator.15 The Agency for Healthcare Research and Quality template46 provides scope for the user to comment on attributes, with the capacity to directly compare attributes between a maximum of three quality indicators online. This tool, along with Campbell's discussion23 and Buchanan's tool,33 provides succinct and clearly understood language around the important concepts of validity and reliability. The Outcomes Utility Index35 weighted seven attributes according to the author's view of their relative importance. In designing the QICA tool, attributes were not weighted, as their relative importance is likely to differ depending on the lens of the appraiser, the setting and the issue of interest.50 The Appraisal of Indicators through Research and Evaluation (AIRE) instrument47 was developed along similar lines to the QICA tool, using a structured literature review followed by validation by an expert panel.51 However, the AIRE is based on a tool designed to assess the rigor of the methods used to develop guidelines rather than one to appraise research studies.52 As such, it includes descriptions of who developed the indicator, whether all stakeholders were involved and whether the indicator has been formally endorsed. The AIRE also emphasises the reason an issue was selected and seeks more detail around the search for evidence. Conversely, less emphasis is placed on the availability and quality of data and the issues of equity, ethics, conflicts of interest and the potential for perverse outcomes. The QICA is designed to appraise a particular indicator, rather than to appraise the rigor of the indicator development process.


TABLE 2. Reliability and validity of indicator attributes

Indicator attributes and their appraisal prompts:
Name: what is the indicator called?
Issue: what is the issue of importance?
Setting: in what setting is the indicator to be measured/reflect quality of care?
Importance/relevance of indicator: What is the purpose of this indicator? Does it reflect an important healthcare issue (occurs frequently/carries a high burden of illness)? Is it financially or strategically important to stakeholders (patients, clinicians, managers, policy-makers)? Is there a demonstrated need for improvement in the quality, or reduction in the variability, of care?
True measure of the issue: Is the indicator directly related to the issue of interest? Is the indicator a true measure of the outcome of interest?
Evidence of association with outcomes (consider strength of evidence).
Prior/current use as an indicator (in which settings?).
Acceptable to end users: the indicator has a sound rationale and meaning for either consumers, providers or planners of healthcare.
Concordant with other measures: does the indicator correlate with other measures of the same issue in the same direction or to a similar magnitude?
Reliability: Is the measurement of the indicator free from observer bias? Would different observers obtain the same results from the same data? Would the same observer obtain the same results from the same data extracted at a different point in time?
Precision of the indicator: is the indicator sufficiently common to allow meaningful comparisons over time or between organisations?
Adjustable for case mix: Are the systems for measuring this indicator the same in different settings? Are the definitions, inclusions and exclusions the same? Has the case mix been adjusted for? If there is a need for adjustment for demographics (age, sex, ethnicity etc.) to ensure fair comparisons, can this be done?
Cost of measuring worth the effort: is the cost associated with collecting or extracting the data required to measure this indicator reasonable?
Reflects inequities: Is the indicator suitable for comparisons between subgroups of a population to look for disparity between groups? Is there likely to be equal explanatory power for the smaller groups?
Measuring and reporting is ethical: Will data collection and analysis conform to ethical principles? Will reporting the indicator compromise the confidentiality of the participants (patients, clinicians, managers, institutions) or performance of the system?
Unintended consequences considered: Could focus on this indicator or its application result in adverse outcomes or worse care in other areas (within your department, the hospital or primary care)? What is the potential for manipulation of results without improving care? What are the external pressures (financial or political) on compliance with this indicator?
Target population described: is the population to which the indicator applies stated explicitly and unambiguously (e.g. patients, clinicians, institution)?
Inclusions and exclusions described: Are appropriate inclusions and exclusions described? If the indicator is expressed as a proportion, are the numerator and denominator described clearly and are they appropriate?
Data available from existing sources: Are the requirements for data collection clearly defined? Are the data required to measure and report this indicator available and extractable?
Existing software sufficient for collection: the information technology resources are sufficient to allow collection and collation of data.
Unit of analysis clear: it is clear whether the indicator is measuring individuals, groups, organisations or systems.
Accuracy of data verifiable: can the accuracy of the data collection process be verified (standard data collection forms, dual data collection)?
Defined measurement/scoring system: Is there a threshold for performance or system of ranking, against which the department or institution will be judged for this indicator? What is good performance? What is the evidence for this scoring system?
Responsiveness: is it possible to act on the results of this indicator in real time to effect improvements or prevent adverse events?
Results understandable by end users: will the results be meaningful to the end users (patients, clinicians, managers, policy-makers)?
Reflects your department: to what extent is performance against this indicator under the control of your department?
Reflects whole system: Does the indicator reflect whole system performance? Does it give you information about how your department interacts with the rest of the system (e.g. primary care, wider hospital)?
Scoring the indicator on 100 mm VAS.
Structure/process/outcome framework: what type of indicator is this?
Institute of Medicine domains: which domains does this indicator reflect?
Conflicts of interest, proposer: Are the developers/promoters of the indicator clearly identified and their affiliations declared? Is there any commercial interest in the use of the indicator? Was its development sponsored or supported by industry?
Conflicts of interest, assessor: you should declare any potential conflict of interest that you might have with respect to appraising this indicator.

Reliability by section of the tool (number of items; Cronbach's alpha‡): Introduction (3; 0.85), Essential items (2; 0.85), Evidence (2; 0.83), Technical aspects (9; 0.93), Data collection and analysis (10; 0.88), Summary (3; 0.86), Quality framework (2; 0.91), Conflicts of interest (2; 0.87).

Validity: median (IQR)† expert ratings (n = 16) ranged from 2.5 (2, 3) to 4 (4, 4) across the attributes. [The column alignment linking individual medians to attributes could not be recovered from the extracted layout.]

†Median (interquartile range [IQR]) of four-point scale: 1 = not useful, 2 = slightly useful, 3 = moderately useful, 4 = extremely useful. ‡Cronbach's alpha.

TABLE 3. Effect of QICA tool on users' assessment of performance measures

Score, mean (SD), gestalt versus QICA; difference mean (95% CI); P:
Time to appropriate antibiotics for sepsis: 7.5 (2.1) vs 7 (2.9); 0.5 (−4.4, 5.4); P = 0.77
Unplanned representation 48 h %: 9 (0.82) vs 8 (0.82); 1 (†); P = †
Missed sub-arachnoid haemorrhage %: 2.3 (1.7) vs 5 (3.7); −2.8 (−10, 4.6); P = 0.32
Time to analgesia for acute fracture: 9.5 (0.58) vs 9 (0); 0.5 (−0.4, 1.4); P = 0.18
Stroke patients who get thrombolysis %: 2 (2.2) vs 1.8 (1.7); 0.25 (−5, 5.5); P = 0.89
Time to CT in acute head injury: 5.8 (2.6) vs 5.6 (4.2); 0.13 (−8.5, 8.8); P = 0.97

Rank, mean (SD), gestalt versus QICA; difference mean (95% CI); P:
Time to appropriate antibiotics for sepsis: 2.8 (0.5) vs 2.5 (1.3); 0.3 (−1.3, 1.7); P = 0.64
Unplanned representation 48 h %: 1.5 (0.6) vs 2.8 (0.5); −1.3 (−2.7, 0.3); P = 0.08
Missed sub-arachnoid haemorrhage %: 5.5 (1) vs 5 (0.82); 0.5 (−1.6, 2.6); P = 0.50
Time to analgesia for acute fracture: 1.8 (0.96) vs 1.8 (1.5); 0 (−1.3, 1.3); P = 1
Stroke patients who get thrombolysis %: 5.3 (5.5) vs 5.5 (1); −0.3 (−1.8, 1.3); P = 0.64
Time to CT in acute head injury: 4.3 (0.5) vs 3.5 (1.7); 0.8 (−2.5, 4); P = 0.52

Overall rank (1–6), gestalt: unplanned representations; time to analgesia for fractures; time to antibiotics for sepsis; time to CT for head injury; stroke thrombolysis; missed sub-arachnoid haemorrhage. Total time taken to assess six indicators, median (IQR): 17.5 (3–34) min.
Overall rank (1–6), QICA: time to analgesia for fractures; time to antibiotics for sepsis; unplanned representations; time to CT for head injury; missed sub-arachnoid haemorrhage; stroke thrombolysis. Total time taken to assess six indicators, median (IQR): 190 (43–352) min.**

†Unable to compute as the standard error of the difference = 0. **P = 0.07. CI, confidence interval; CT, computed tomography; IQR, interquartile range; QICA, Quality Indicator Critical Appraisal tool; SD, standard deviation.

It is essential that ethical principles are maintained when collecting data and reporting indicators, as this might involve the use of patient data without prior consent, and reports might identify organisations that are publicly accountable.21 No existing tools considered this issue. The value of a comprehensive list of indicator attributes is to inform those choosing quality indicators about all the potentially relevant aspects to consider. It also provides a framework that allows them to compare indicators within their particular context. Failing to consider all aspects might lead to poor indicator selection or application. For example, a range of ED indicators including 'missed subarachnoid haemorrhage' were recently recommended.6 Such infrequently occurring sentinel events will have considerable uncertainty around any point estimates of the proportion observed, which makes them imprecise. It is unlikely that meaningful comparisons could be made over time or between hospitals using an imprecise indicator. Similarly, the strength of evidence for an indicator is an important attribute, but evidence might change over time. For example, there is published evidence that time to antibiotics impacted on survival from community-acquired pneumonia.14,53,54 As a result, this indicator was believed to be useful,14 and its use was recommended by the Joint Commission. However, subsequent evidence was conflicting, with further studies suggesting no benefit and perhaps even harm.55 Furthermore, linking this indicator to 'pay for performance' resulted in perverse behaviours and unintended consequences.56
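To make the precision point concrete, the sketch below computes a 95% Wilson score interval for a rare sentinel event; the counts are hypothetical, not data from the study or the cited report.

```python
# A 95% Wilson score interval for a rare event proportion, showing how wide
# the uncertainty is at realistic ED volumes; counts are hypothetical.
from math import sqrt

def wilson_ci(events: int, n: int, z: float = 1.96):
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(2, 5000)  # e.g. 2 missed cases among 5000 presentations
print(f"rate {2/5000:.3%}, 95% CI {lo:.3%} to {hi:.3%}")
# The interval spans several multiples of the point estimate in each direction.
```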

Limitations of the present study
The search was performed by a single author, so there is potential for bias in the selection of articles. The search strategy might have missed tools, such as those of private benchmarking organisations and unpublished studies. There might also have been attributes of a quality indicator that were not recognised in the retrieved articles, although this was mitigated by two authors extracting data independently. Agreement between authors regarding which article contained which attributes was moderate, reflecting subjectivity in assessment of the source articles. However, no attribute we identified was excluded from the final list. The expert panel were English speakers, mostly from Australasia, who were purposefully selected, which might have led to bias because of culture or location. The panellists all had medical training but had varying degrees of prior knowledge of the indicator appraisal process. The authors were unaware of the experts' personal views on attributes and the indicator appraisal process prior to undertaking the web survey. Because of the 'before and after' nature of the pilots, any change in the ranking of indicators by participants might not be due to using QICA. Confounding factors, such as having more time to think about this process and increased alertness to evidence around the particular indicators selected in the intervening time, might have led to changes in choice regardless of whether QICA was used or not. When ranking a set of candidate indicators in the SSED study, all indicators had high face validity to the users20 and participants in both pilots knew their own prior assessments. Both factors are likely to bias against finding a difference using the QICA tool. The small sample sizes in the pilots meant that the observed correlations and differences in rankings of indicators were not statistically significant.

Conclusion
A comprehensive list of attributes to consider when assessing health service performance measures was identified, validated and brought together in the QICA tool. When piloted, the new tool facilitated indicator selection, and there were changes in the rankings of indicators. Future research should examine the utility of critical appraisal tools to assist selection of indicators for healthcare improvement initiatives.

Acknowledgement
SW is partly funded by the Stevenson Foundation.

Competing interests
PJ is a section editor for Emergency Medicine Australasia and is a member of the Quality Management subcommittee of the ACEM. No other author has any competing interest to declare.

Figure 3. The quality indicator critical appraisal (QICA) tool.

References


1. Seddon M, Effective Practice I Quality. Quality improvement in healthcare in New Zealand. Part 1: what would a high-quality healthcare system look like? N. Z. Med. J. 2006; 119: U2056.
2. Minister of Health. Improving Quality (IQ): a systems approach for the New Zealand health and disability sector. 2003.
3. Institute of Medicine, Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press, 2001.


4. Donabedian A. Evaluating the quality of medical care. Milbank Mem. Fund Q. 1966; 44: 166–203.
5. Jones P, Chalmers L, Wells S et al. Implementing performance improvement in New Zealand emergency departments: the six hour time target policy national research project protocol. BMC Health Serv. Res. 2012; 12: 45.
6. Schull MJ, Hatcher CM, Guttmann A et al. Development of a Consensus on Evidence-based Quality of Care Indicators for Canadian Emergency Departments. Toronto: Institute for Clinical Evaluative Sciences, 2010.
7. Cooke MW. A & E clinical quality indicators: implementation guidance: DH (UK); 2010. [Updated 17 Dec 2010; Cited 7 Sep 2013.] Available from URL: http://www.dh.gov.uk/publications
8. Chassin M, Loeb J, Schmaltz S, Wachter R. Accountability measures – using measurement to promote quality improvement. N. Engl. J. Med. 2010; 363: 683.
9. Guttmann A, Razzaq A, Lindsay P, Zagorski B, Anderson GM. Development of measures of the quality of emergency department care for children using a structured panel process. Pediatrics 2006; 118: 114–23.
10. McGlynn E, Asch S. Developing a clinical performance measure. Am. J. Prev. Med. 1998; 14 (3 Suppl 1): 14–21.
11. Byers JF, Beaudin CL. Critical appraisal tools facilitate the work of the quality professional. J. Healthc. Qual. 2001; 23: 35–8.
12. Guyatt G, Meade M, Jaeschke R, Cook D, Haynes R. Practitioners of evidence based care: not all clinicians need to appraise evidence from scratch but all need some skills. BMJ 2000; 320: 954–5.
13. Guyatt G, Rennie D, Meade M, Cook D. Users' Guides to the Medical Literature: A Manual for Evidence-based Clinical Practice, 2nd edn. Chicago, IL: American Medical Association, 2008. [Cited 5 May 2012.] Available from URL: http://jamaevidence.com/resource/520
14. Rhew DC, Goetz MB, Shekelle PG. Evaluating quality indicators for patients with community-acquired pneumonia. Jt Comm. J. Qual. Improv. 2001; 27: 575–90.
15. Gory I, Michel P, Phely-Peyronnaud C. The development of a core set of quality of care indicators in a psychiatric hospital [French]. Sante Publique 2003; 15: 99–113.
16. Hoot N, Zhou C, Jones I, Aronsky D. Measuring and forecasting emergency department crowding in real time. Ann. Emerg. Med. 2007; 49: 747–55.
17. Bernstein SL, Aronsky D, Duseja R et al. The effect of emergency department crowding on clinically oriented outcomes. Acad. Emerg. Med. 2009; 16: 1–10.
18. Hoot NR, Aronsky D. Systematic review of emergency department crowding: causes, effects, and solutions. Ann. Emerg. Med. 2008; 52: 126–36.
19. Perera R, Dowell T, Crampton P, Kearns R. Panning for gold: an evidence-based tool for assessment of performance indicators in primary health care. Health Policy 2007; 80: 314–27.
20. Jones P, Harper A, Wells S et al. Selection and validation of quality indicators for the Shorter Stays in Emergency Departments National Research Project. Emerg. Med. Australas. 2012; 24: 303–12.

21. Bird S, Cox D, Farewell V, Goldstein H, Holt T, Smith P. Performance indicators: good, bad, and ugly. J. R. Stat. Soc. Ser. A Stat. Soc. 2005; 168: 1–27.
22. Brook R, McGlynn E, Cleary P. Measuring quality of care. N. Engl. J. Med. 1996; 335: 966–70.
23. Campbell SM, Braspenning J, Hutchinson A, Marshall M. Research methods used in developing and applying quality indicators in primary care. Qual. Saf. Health Care 2002; 11: 358–64.
24. Mainz J. Developing evidence-based clinical indicators: a state of the art methods primer. Int. J. Qual. Health Care 2003; 15 (Suppl 1): i5–11.
25. Neuhauser D. Ernest Amory Codman, M.D., and end results of medical care. Int. J. Technol. Assess. Health Care 1990; 6: 307–25.
26. O'Malley M, Kossack C. A statistical study of factors influencing the quality of patient care in hospitals. Am. J. Public Health Nations Health 1950; 40: 1428–36.
27. Rubin H, Pronovost P, Diette G. The advantages and disadvantages of process-based measures of health care quality. Int. J. Qual. Health Care 2001; 13: 469–74.
28. Sheps M. Approaches to the quality of hospital care. Public Health Rep. 1955; 70: 877–86.
29. Eddy DM. Performance measurement: problems and solutions. Health Aff. (Millwood) 1998; 17: 7–25.
30. Lembcke PA. Medical auditing by scientific methods. JAMA 1956; 162: 646–55.
31. Lohr KN, Schroeder SA. A strategy for quality assurance in Medicare. Institute of Medicine. 1990. Report No.: 0028-4793.
32. Willis C, Gabbe B, Cameron P. Measuring quality in trauma care. Injury 2007; 38: 527–37.
33. Buchanan J, Pelkowitz A, Seddon M. Quality improvement in New Zealand healthcare. Part 4: achieving effective care through clinical indicators. N. Z. Med. J. 2006; 119: U2131.
34. Geraedts M, Selbmann HK, Ollenschlaeger G. Critical appraisal of clinical performance measures in Germany. Int. J. Qual. Health Care 2003; 15: 79–85.
35. McGlynn EA. The outcomes utility index: will outcomes data tell us what we want to know? Int. J. Qual. Health Care 1998; 10: 485–90.
36. McColl A, Roderick P, Gabbay J, Smith H, Moore M. Performance indicators for primary care groups: an evidence based approach. BMJ 1998; 317: 1354–60.


37. TJC. Improving America's Hospitals. The Joint Commission's annual report on quality and safety 2011. [Cited 7 Sep 2013.] Available from URL: http://www.jointcommission.org/performance_measurement.aspx
38. NQF. National quality forum 2011. [Cited 30 Nov 2011.] Available from URL: http://www.qualityforum.org/docs/measure_evaluation_criteria.aspx
39. ACHS. Australian council on healthcare standards: clinical indicators. Ultimo. 2007. [Cited 1 Dec 2011.] Available from URL: http://www.achs.org.au/ClinicalIndicators/
40. Brand C, Elkadi SO, Tropea J. Measurement for improvement toolkit. Australian council for safety and quality in health care, clinical epidemiology and health service evaluation unit TRMH. 2005. [Cited 7 Sep 2013.] Available from URL: http://www.health.gov.au/internet/safety/publishing.nsf/Content/F22384CCE74A9F01CA257483000D845E/$File/mtoolkit.pdf
41. Kelley E, Hurst J. Health care quality indicators project. Conceptual Framework Paper. OECD. 2006. [Cited 7 Sep 2013.] Available from URL: http://www.oecd-ilibrary.org/social-issues-migration-health/health-care-quality-indicators-project_440134737301
42. Romano P, Hussey P, Ritley D. Selecting Quality and Resource Use Measures: A Decision Guide for Community Quality Collaboratives. AHRQ Publication No. 09(10)-0073. Rockville: US Department of Health and Human Services, Agency for Healthcare Research and Quality, May 2010. [Cited 21 Apr 2011.] Available from URL: http://www.ahrq.gov/qual/perfmeasguide/
43. NCQA. National Committee for Quality Assurance 2011. [Cited 23 Nov 2011.] Available from URL: http://www.ncqa.org/Home.aspx
44. Collopy B, Campbell J, Williams J et al. Acute health clinical indicator project: final report. Victoria: ACHS Care Evaluation Program and Monash University Department of Epidemiology and Preventive Medicine. 1999. [Cited 7 Sep 2013.] Available from URL: http://www.health.vic.gov.au/archive/archive2004/clinical-indicators/
45. Pencheon D. The good indicators guide – understanding how to use and choose indicators. NHS Institute for Innovation and Improvement. 2010. [Cited 7 Sep 2013.] Available from URL: http://www.institute.nhs.uk/option,com_joomcart/Itemid,26/main_page,document_product_info/products_id,372.html
46. AHRQ. National quality measures clearinghouse: template of measure attributes. 2011. [Cited 23 Nov 2011.] Available from URL: http://www.qualitymeasures.ahrq.gov/about/template-of-attributes.aspx#DataSource
47. de Koning J, Burgers J, Klazinga NS. Appraisal of indicators through research and evaluation (AIRE). 2007 (Forthcoming).
48. Donabedian A. The quality of care. How can it be assessed? JAMA 1988; 260: 1743–8.
49. Critical Appraisal Skills Programme. [Cited 24 Mar 2013.] Available from URL: http://www.casp-uk.net
50. ACEM. History of the Australasian College for Emergency Medicine. [Cited 13 Sep 2011.] Available from URL: http://www.acem.org.au/about.aspx?docId=12
51. de Koning J, Burgers J, Klazinga N. Appraisal of indicators through research and evaluation (AIRE). Holland: Dutch Association of Medical Specialists; 2007.
52. AGREE Collaboration. Development and validation of an international appraisal instrument for assessing the quality of clinical practice guidelines: the AGREE project. Qual. Saf. Health Care 2003; 12: 18–23.
53. Houck PM, Bratzler DW, Nsa W, Ma A, Bartlett JG. Timing of antibiotic administration and outcomes for Medicare patients hospitalized with community-acquired pneumonia. Arch. Intern. Med. 2004; 164: 637–44.
54. Meehan TP, Fine MJ, Krumholz HM et al. Quality of care, process, and outcomes in elderly patients with pneumonia. JAMA 1997; 278: 2080–4.
55. Yu KT, Wyer PC. Evidence behind the 4-hour rule for initiation of antibiotic therapy in community-acquired pneumonia. Ann. Emerg. Med. 2008; 51: 651–62, 62 e1-2.
56. Pines JM. Time to first antibiotic dose measurement in community-acquired pneumonia: time for a change. Ann. Emerg. Med. 2009; 54: 312.

Supporting Information
Additional Supporting Information may be found in the online version of this article at the publisher's website:
Appendix S1. Search strategy in Medline.

