BMJ 2013;347:f5952 doi: 10.1136/bmj.f5952 (Published 25 October 2013)


EDITORIALS

Mortality indicators used to rank hospital performance
Should include deaths that occur after discharge

J Nicholl professor of health services research, R Jacques research fellow, M J Campbell professor of medical statistics

School of Health and Related Research (ScHARR), University of Sheffield, Sheffield S1 4DA, UK

There is considerable debate about the value of using hospital mortality rates adjusted for case mix as an indicator of the quality and safety of care provided by hospitals. A linked paper by Pouw and colleagues (doi:10.1136/bmj.f5913) investigates the inclusion of post-discharge deaths in these mortality indicators.1 The main doubts about their value are that standardisation for differences between hospitals in the characteristics of their patients (the case mix) doesn’t work, and that these indicators do not measure performance because they are not related to avoidable mortality. There is no doubt that the case mix adjustment is problematic. We know that different adjustment models lead to different results,2 and that important measures of case mix are missing from models based on routine data.3 We also know that these measures are at best weakly related to avoidable mortality—models show that they would begin to be useful for identifying poor quality of care only when at least 16% of hospital deaths are avoidable.4 Recent studies have shown that in the United Kingdom this figure is closer to 5%.5
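All of these measures rest on a comparison of observed with expected deaths after case mix adjustment. As a purely illustrative sketch of that calculation, the toy example below (in Python, with invented variables, coefficients, and synthetic data) fits a patient level risk model and divides each hospital's observed deaths by the sum of its patients' predicted risks; it is not the model behind the summary hospital mortality indicator or any of the measures cited here.

```python
# Illustrative sketch of an observed/expected standardised mortality ratio.
# Case-mix variables, coefficients, and data are invented for illustration;
# this is NOT the model behind the SHMI or any indicator cited in the text.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic admissions: age, an acuity score, and the admitting hospital.
n = 20_000
age = rng.normal(65, 15, n)
acuity = rng.normal(0, 1, n)
hospital = rng.integers(0, 5, n)

# Synthetic deaths driven by case mix only, so every hospital's true SMR is ~1.
logit = -5 + 0.04 * age + 0.8 * acuity
death = rng.random(n) < 1 / (1 + np.exp(-logit))

# Step 1: fit a patient-level risk model on case-mix variables (hospital excluded).
X = np.column_stack([age, acuity])
risk = LogisticRegression(max_iter=1000).fit(X, death).predict_proba(X)[:, 1]

# Step 2: expected deaths per hospital = sum of its patients' predicted risks;
# the standardised mortality ratio is observed deaths divided by expected deaths.
for h in range(5):
    mask = hospital == h
    observed = int(death[mask].sum())
    expected = risk[mask].sum()
    print(f"hospital {h}: SMR = {observed / expected:.2f} "
          f"({observed} observed vs {expected:.1f} expected deaths)")
```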

Nevertheless, as a result of considerable social, political, and media pressure, hospital standardised mortality ratios are being used to identify failing hospitals.6 We must therefore make the measures as robust as possible. The Department of Health in England has recently introduced a revised measure, the summary hospital mortality indicator.7 The main differences are that it includes nearly all conditions and that mortality is counted not only in hospital but up to 30 days after discharge.

Whether deaths after discharge should be included when calculating hospital mortality indicators has been debated for years. Studies that have compared the two approaches for specific clinical conditions have concluded that they give similar results overall but detect different statistical outliers.8 9 Recently, it was estimated that using mortality up to 30 days after admission, rather than in-hospital mortality, changed the quality rankings of only about 10% of hospitals, but that in-hospital measures are biased in favour of hospitals with shorter lengths of stay.10

Pouw and colleagues examined this question using data on more than one million admissions to 60 Dutch hospitals.1 They compared in-hospital mortality with mortality at 30 days after discharge and at 30 days after admission. They found that 20-30% of hospitals changed their quality ranking when post-discharge deaths were included, and they confirmed a substantial correlation between the in-hospital measure and hospitals' average length of stay. They concluded that in-hospital measures are subject to "discharge bias" and that post-discharge mortality should be included in hospital mortality indicators.

It is now clear that including post-discharge deaths changes the relative performance of some hospitals, and that short lengths of stay are associated with both low in-hospital mortality and a discharge bias, so it is not appropriate to use in-hospital mortality alone. But this leaves at least three questions unanswered.

Firstly, should a fixed time frame after admission or after discharge be used? The Department of Health chose the post-discharge option for the summary hospital mortality indicator because, with a 30 day post-admission window, part of the care of patients who stay in hospital longer than 30 days is not assessed; such a window might also lead hospitals to focus only on the quality of the first 30 days of care. However, fewer than 5% of patients stay longer than 30 days, and a post-discharge time frame still carries a bias in favour of hospitals with shorter lengths of stay, albeit a smaller and possibly negligible one compared with an in-hospital measure. Pouw and colleagues have not published the correlation between length of stay and 30 day post-discharge mortality, which would help us judge how important any such bias might be.

Secondly, how long should the time frame be? All the studies we know of have used 30 days after discharge or after admission, but why 30 days? Clearly, the longer the time after discharge, the smaller the influence of the quality of hospital care and the greater the influence of community care, or of care during any subsequent hospital admission. It follows that the time frame should be as short as is necessary to pick up all the effects of the quality of hospital care. English hospital episode statistics for 2005-10 show that, of all deaths occurring from admission to 30 days after discharge, 7% occur in the first week after discharge, then 5%, 4%, and 4% in weeks two to four. This levelling off suggests that a two week window after discharge might be more appropriate.
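As a rough back-of-the-envelope reading of these figures (and nothing more), the sketch below takes the weekly shares quoted above and compares what a two week post-discharge window would capture with the full 30 day window; treating weeks three and four as a background "plateau" is our illustration, not a published analysis.

```python
# Back-of-the-envelope reading of the weekly figures quoted above. The shares
# are percentages of all deaths from admission to 30 days after discharge;
# the plateau interpretation and window comparison are illustrative only.
weekly_share = {1: 7.0, 2: 5.0, 3: 4.0, 4: 4.0}  # week after discharge -> % of deaths

plateau = weekly_share[4]  # weeks three and four level off, consistent with a background rate

cumulative = 0.0
for week, share in weekly_share.items():
    cumulative += share
    excess = share - plateau  # share beyond the apparent background rate
    print(f"week {week}: {share:.0f}% of deaths, "
          f"cumulative {cumulative:.0f}%, excess over plateau {excess:.0f}%")

post_discharge = sum(weekly_share.values())   # ~20% of deaths occur after discharge
two_week = weekly_share[1] + weekly_share[2]  # share captured by a two week window
print(f"a two week window captures {two_week:.0f}% of the {post_discharge:.0f}% "
      f"of deaths occurring after discharge ({100 * two_week / post_discharge:.0f}% of them)")
```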

A third question is whether post-discharge mortality should be combined with in-hospital mortality at all. Deaths after discharge are an indicator of the quality of care during the stay in hospital, the appropriateness of the discharge decision, and the quality of care provided by post-discharge community services. English hospital episode statistics for 2005-10 show that deaths in the 30 days after discharge varied from 12% to 30% of all deaths from admission to 30 days after discharge. This suggests that the appropriateness of discharge decisions or follow-up care may vary greatly. It might therefore be better to have two indicators of performance: an in-hospital measure and a two week post-discharge one. This would enable hospitals and commissioners to identify any problems with discharge decisions and post-discharge care.

Competing interests: We have read and understood the BMJ Group policy on declaration of interests and declare the following interests: None.

Provenance and peer review: Commissioned; not externally peer reviewed.

1 Pouw M, Peelen L, Moons K, Kalkman C, Lingsma H. Including post-discharge mortality in calculation of hospital standardised mortality ratios: retrospective analysis of hospital episode statistics. BMJ 2013;347:f5913.


2 Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital-wide mortality rates. N Engl J Med 2010;363:2530-9.
3 Goodacre S, Wilson R, Shephard N, Nicholl J. Derivation and validation of a risk adjustment model for predicting seven day mortality in emergency medical admissions: mixed prospective and retrospective cohort study. BMJ 2012;344:e2904.
4 Girling AJ, Hofer TP, Wu J, Chilton PJ, Nicholl JP, Mohammed MA, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf 2012;21:1052-6.
5 Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review. BMJ Qual Saf 2012;21:737-45.
6 Kmietowicz Z. Health secretary puts 11 trusts in England into special measures. BMJ 2013;347:f4602.
7 Campbell MJ, Jacques R, Fotheringham J, Maheswaran R, Nicholl J. Developing a summary hospital mortality index: retrospective analysis in English hospitals over five years. BMJ 2012;344:e1001.
8 Borzecki AM, Christiansen CL, Chew P, Loveland S, Rosen AK. Comparison of in-hospital versus 30-day mortality assessments for selected medical conditions. Med Care 2010;48:1117-21.
9 Rosenthal GE, Baker DW, Norris DG, Way LE, Harper DL, Snow RJ. Relationships between in-hospital and 30-day standardised hospital mortality: implications for profiling hospitals. Health Serv Res 2000;34:1449-68.
10 Drye EE, Normand ST, Wang Y, Ross JS, Schreiner BS, Han L, et al. Comparison of hospital risk-standardized mortality rates calculated by using in-hospital and 30-day models: an observational study with implications for hospital profiling. Ann Intern Med 2012;156:19-26.

Cite this as: BMJ 2013;347:f5952 © BMJ Publishing Group Ltd 2013
