bs_bs_banner

Emergency Medicine Australasia (2015) 27, 300–306

doi: 10.1111/1742-6723.12425

ORIGINAL RESEARCH

Model to predict inpatient mortality from information gathered at presentation to an emergency department: The Triage Information Mortality Model (TIMM) David JO TEUBNER,1 Julie CONSIDINE,2 Paul HAKENDORF,3 Susan KIM4 and Andrew D BERSTEN5 1 Emergency Medicine and Prehospital Science, Flinders University, Adelaide, South Australia, Australia, 2School of Nursing and Midwifery/Centre for Quality and Patient Safety Research, Deakin University, Melbourne, Victoria, Australia, 3Flinders Medical Centre, Adelaide, South Australia, Australia, 4Flinders Centre for Epidemiology and Biostatistics, Flinders University, Adelaide, South Australia, Australia, and 5Intensive and Critical Care Unit, Flinders Medical Centre, Adelaide, South Australia, Australia

Abstract Objectives: To derive and validate a mortality prediction model from information available at ED triage. Methods: Multivariable logistic regression of variables from administrative datasets to predict inpatient mortality of patients admitted through an ED. Accuracy of the model was assessed using the receiver operating characteristic area under the curve (ROC-AUC) and calibration using the Hosmer–Lemeshow goodness of fit test. The model was derived, internally validated and externally validated. Derivation and internal validation were in a tertiary referral hospital and external validation was in an urban community hospital. Results: The ROC-AUC for the derivation set was 0.859 (95% CI 0.856– 0.865), for the internal validation set was 0.848 (95% CI 0.840–0.856) and for the external validation set was 0.837 (95% CI 0.823–0.851). Calibration assessed by the Hosmer– Lemeshow goodness of fit test was good. Conclusions: The model successfully predicts inpatient mortality from information available at the point of triage in the ED.

Key words: hospital mortality, mortality prediction, triage.

Introduction Mortality rates are used as a key performance indicator in medicine but must be compared with a standardised predictor to allow valid conclusions to be drawn. The present paper discusses a valid mortality prediction model for ED patients, which will provide a standardised baseline against which the whole of hospital function, including the ED, can be judged. A mortality prediction model based on the ED could be used to investigate an association between delays to admission or ambulance ramping on mortality. Mortality prediction models are integral to medical quality assurance management, particularly in the intensive care setting, where they have been integral to the function of units for many years. Intensive care models are used for comparing performance within and between units and for research. Models have also been developed for admissions to internal medicine units, with similar aims to the intensive care models. There is currently no accepted model available to predict in-

Correspondence: Dr David JO Teubner, Emergency, Flinders Medical Centre, Flinders Drive, Bedford Park, SA 5042, Australia. Email: [email protected] David JO Teubner, BMBS, FACEM, MClinEpi, Senior Lecturer; Julie Considine, RN, MNurs, PhD, Professor; Paul Hakendorf, BSc, MPH, Epidemiologist; Susan Kim, BSc (Hons), PhD, Epidemiologist; Andrew D Bersten, MBBS, FCICM, MD, Director. Accepted 3 May 2015

Key findings • We have developed an inpatient mortality prediction model based on information collected at triage and stored in administrative datasets. • The model includes all patients admitted to hospital from the ED. • The accuracy of the model is superior to previously published ED mortality prediction models.

hospital mortality in all patients presenting to the ED. Mortality prediction models have been used in the critical care setting for many years. One of them is the Acute Physiology and Chronic Health Evaluation (APACHE) methodology that was first described in 1981,1 then validated and published as APACHE II in 1985,2 and further refined as APACHE III in 19913 and APACHE IV in 2006.4 APACHE III uses a combination of demographic, premorbid, physiologic and diagnostic variables to produce a mortality prediction after the first 24 h of admission to a critical care unit. It is used by the Australian and New Zealand Intensive Care Society and the Australian Council of Healthcare Standards as the preferred model and benchmark.5 Attempts have been made to apply some APACHE methodology to ED patients, for example, the Rapid Acute Physiology Score (RAPS) and Rapid Emergency Medicine Score

© 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine

301

THE TRIAGE INFORMATION MORTALITY MODEL

(REMS). The RAPS6 uses pulse rate, blood pressure, respiratory rate and Glasgow Coma Score, and the Rapid Emergency Medicine Score (REMS),7 which is an improvement of RAPS, adds age and oxygen saturation obtained from pulse oximetry to the RAPS measures; however, neither the RAPS nor the REMS include diagnostic information. External validation of the REMS and RAPS scores 8 has shown them to be less than optimal, with receiver operating characteristics (ROCs) of 0.64 and 0.74, respectively. Our prediction model is more accurate in terms of ROC compared with REMS or RAPS. Use of a model, which predicts mortality at the point of triage in the ED, at the beginning of a patient’s journey through the hospital, would allow evaluation of the impact of ED function upon mortality. Areas for evaluation would include impact of: waits for inpatient bed availability, total time in the ED and overcrowding in the ED upon inpatient mortality. It is possible to use administrative datasets to produce mortality prediction models that perform similarly to detailed clinical models. 9 The Critical Care Outcome Prediction Equation (COPE) has been developed to produce a simple, robust, risk-adjusted mortality prediction tool for critical care patients.5 The COPE model uses just five data fields collected for administrative purposes (patient age, unplanned hospital admission, hospital category, mechanical ventilation and major diagnostic category). The present study used a methodology similar to that used in COPE applied to an ED administrative dataset to derive, internally validate and externally validate a mortality prediction model from information that is available from all adult patients at their first point of contact with an Australian tertiary ED and then internally and externally validate the mortality prediction model using administrative datasets from two different Australian tertiary EDs. Importantly, the derived model was not dependent on ongoing physiological monitoring and investigation results.

Methods

Model derivation and validation

In this retrospective database model building study, we used administrative data from ED in two hospitals. Hospital 1 is a tertiary referral hospital with approximately 60 000 ED visits annually, with an admission rate of 40%. Data from a unique ED information system (EDIS) is automatically captured into a Microsoft Access (Microsoft Corporation, Redmond, WA, USA) database. Information is available on this database for every ED visit from June 1993. The database contains basic demographic information about each patient as well as ED functional information, such as time of arrival, time of nursing treatment initiation, time of medical treatment initiation, time of admission to hospital and the time of ED disposition (whether discharged to home or admission to hospital). It also contains diagnostic/complaint information recorded at the point of triage in the form of 100 triage codes (e.g. chest pain, abdominal pain, back pain and asthma). Very similar triage complaint codes are used across EDs in South Australia and Victoria, and, given the small number of codes that were significant in the model, replication is straightforward. Hospital 2 is an urban community hospital in a different Australian state than Hospital 1. Hospital 2 has approximately 70 000 ED visits annually with an admission rate of 25%. Data are captured from the EDIS into a statewide emergency medicine dataset, which contains very similar information to Hospital 1’s database.

Statistical analysis was performed using STATA 12MP (Statacorp, College Station, TX, USA). The derivation set data were broken down into calendar years, with each of the 10 years analysed separately. Starting with the data from 2000, variables (age at presentation, gender, Australasian Triage Scale (ATS) score, transport to the ED by ambulance, referral to the ED by a doctor and triage complaint category – a total of 108 variables) were assessed as predictors of inpatient mortality. Multivariable logistic regression was then used to identify significant mortality predictors (at a P-value of 0.10). This process was then repeated for the other nine, 1 year datasets. From the ten 1 year models, a parsimonious model was created by selecting only those variables that were statistically significant (at the 0.1 level) on multivariable logistic regression in each and every year, so that variables not found in one or more years were discarded. This approach avoided the possibility that the large sample size would include small, clinically insignificant effects. These parsimonious model variables were used as predictors for a multivariate logistic regression applied to the entire dataset. A term for year was added to the final model, as a continuous variable, after testing for the linearity assumption. To avoid bias from multiple re-admissions, ‘clustered’ logistic regression was performed by clustering data by individual patient record numbers. The coefficients from the final model logistic regression were applied to the patients from the validation sets to produce a logistic linear predictor and a predicted mortality for each patient was generated from this predictor using the inverse logistic link function. Predicted mortality for patients was validated both internally within the Hospital 1 database and externally using the Hospital 2 database. The study was approved by the Human Research Ethics Committees at both Hospital 1 and Hospital 2.

Inclusion/exclusion criteria Every patient aged 16 years and over admitted to any inpatient service of Hospital 1 from the ED in the 10 year period between 1 January 2000 and 1 January 2010 was included for analysis for derivation of the model (the derivation set). The cut-off of 16 years was chosen to reflect local practice at Hospital 1. Patients who died in the ED before admission were not included.

© 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine

302

DJO TEUBNER ET AL.

Model validation Patients aged over 16 years admitted to the Hospital 1 from the ED between 1 January 2010 and 30 June 2013 (the internal validation set) were used to internally validate the model. Patients aged over 16 years admitted to the Hospital 2 from the ED between 30 June 2010 and 1 July 2012 (the external validation set) were used to externally validate the model. Patients who were sent home from the ED or transferred to other hospitals were excluded because outcome

TABLE 1.

Model performance The ability of the model to discriminate survivors from non-survivors was assessed using the ROC area under the curve (ROC-AUC). A ROC-AUC value of 1.0 indicates perfect outcome pre-

diction in all patients whereas a value of 0.5 means that the model is no better than chance. A ROC-AUC value of greater than 0.80 is desirable.5 The calibration of the model was assessed using the Hosmer– Lemeshow goodness of fit χ2 statistic. The standard Hosmer–Lemeshow approach of using 10 groups results in the test being very overpowered, with the result that small departures from the proposed model will be considered significant.10 Paul et al. recommend that, for large datasets, the number of groups should be least of:

Demographic data for the derivation, internal validation and external validation groups

Date range Number of presentations Number of discharges Number of transfers Number of admissions Number (%) of deaths Age, median (range) Male gender Arrived by ambulance

TABLE 2.

data were not available, as were paediatric patients (defined as those aged under 16 years). The primary outcome variable was inpatient mortality. The nature of the data sources meant that complete data were available for all patients.

Derivation group

Internal validation group

External validation group

1 January 2000 to 31 December 2009 424 316 232 850 28 142 163 324 4747 (2.90%) 62 (16–104) 77 742 (47.6%) 94 401 (57.8%)

1 January 2010 to 30 June 2013 179 082 95 856 9483 73 743 1658 (2.25%) 61 (16–104) 35 839 (48.6%) 42 845 (58.1%)

30 June 2010 to 1 July 2012

34 434 563 61 17 596 17 734

(1.64%) (16–103) (51.1%) (51.5%)

The Triage Information Mortality Model coefficients Odds ratio

Age 1.05 Male 1.34 Calender year 0.96 Arrived by ambulance 1.94 Triage complaint (odds ratios compared with other ED patients) Cardiac arrest 14.28 Syncope/collapse 1.57 Other cardiac 1.23 Sepsis 2.77 Other neurological 1.75 Stroke/transient ischaemic attack 3.01 Other respiratory 2.13 Malignancy 10.63 Malaise 3.27 Australasian Triage Scale (ATS) score (odds ratios compared with ATS 1) ATS 2 0.23 ATS 3 0.13 ATS 4 0.09 ATS 5 0.09 Constant

Coefficient

(OR 95% CI)

0.05 0.29 −0.046 0.66

1.05–1.06 1.26–1.42 0.94–0.97 1.78–2.12

2.67 0.45 0.21 1.02 0.56 1.10 0.76 2.36 1.18

11–18.9 1.41–1.75 1.08–1.40 2.10–3.66 1.50–2.04 2.69–3.37 1.95–2.35 9.06–12.46 2.62–4.09

−1.43 −2.01 −2.34 −2.50 −6.09

0.21–0.26 0.12–0.15 0.08–0.11 0.04–0.19

© 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine

303

THE TRIAGE INFORMATION MORTALITY MODEL

TABLE 3.

Performance and calibration of the derivation and both validation sets

Derivation set Internal validation External validation

ROC-AUC

ROC-AUC 95% CI

Number of HL groups

HL χ2

P-value for HL χ2

0.834 0.848 0.837

0.829 to 0.839 0.840 to 0.856 0.823 to 0.851

2373 829 281

2324 626.5 177

0.76 1.00 1.00

HL, Hosmer–Lemeshow; ROC-AUC, receiver operating characteristic area under the curve.

1. The number of events divided by two. 2. The difference between (the number of subjects and the number of events) divided by two. 3. Two plus eight times (number of subjects/1000) squared.10 For the derivation and both validation sets, the optimum number of groups for the Hosmer–Lemeshow test was the number of deaths divided by two. A well-calibrated model will have a low χ2 value and hence a high P-value (>0.05).

Results The derivation set consisted of 156 077 consecutive admissions of patients aged over 16 years to Hospital 1 representing all the admissions during this time period, of whom 4747 (3.04%) died. The internal validation set included 73 108 admissions of whom 1658 (2.27%) died. The external validation set included 34 434 consecutive admissions to Hospital 2 of whom 563 (1.64%) died. The demographic data for all three groups are summarised in Table 1. The final model produced by the parsimonious model-building methodology included patient age, gender, the year, whether or not the patient had arrived by ambulance, the ATS category, and nine triage complaint codes (cardiac arrest, syncope/collapse, other cardiac, sepsis, other neurological, stroke/transient ischaemic attack, other respiratory, malignancy and malaise) (Table 2). The model fitted well to the validation data according to the Hosmer– Lemeshow goodness of fit statistics (P > 0.1). The ROC-AUC values showed that the model discriminate survivors from non-survivors well: 0.859 (95% CI 0.856–0.865) for the derivation set, 0.848 (95% CI 0.840–

Figure 1. Receiver operating characteristic curve and Hosmer–Lemeshow for the ), predicted (proportion). derivation group. ( ), Observed (proportion); (

0.856) for the internal validation set and 0.837 (95% CI 0.823–0.851) for the external validation set, all greater than the recommended 0.8. These

results and the Hosmer–Lemeshow goodness of fit statistics are summarised in Table 3 and illustrated in Figures 1–3.

© 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine

304

DJO TEUBNER ET AL.

Figure 2. Receiver operating characteristic curve and Hosmer–Lemeshow for the ), predicted (proportion). internal validation group. ( ), Observed (proportion); (

Discussion The study found that a good mortality prediction model can be constructed using data collected at triage in EDs and stored in an administrative dataset. For an outcome prediction model to be useful, it should accurately predict outcomes in all patients in heterogeneous, large populations. 8 The present study shows that data collected for an administrative dataset can be used to create a mortality prediction model, which performs well in two quite different hospitals.

The accuracy of the model as assessed by the ROC-AUC was good (i.e. ROC-AUC > 0.8) in both validation datasets, significantly outperforming existing models like REMS and RAPS. The calibration of the model measured by the Hosmer–Lemeshow method shows that the model fits the data well in both derivation and validating datasets (P > 0.1). A number of other studies have attempted to derive a mortality prediction model on subsets of patients attending EDs. These subsets have included ambulance patients;8 patients admitted to a medical admis-

sion unit;11–14 patients admitted to an emergency care unit;15 and patients in a medical ED who had blood tests drawn.16 These studies have used a combination of demographic information, patient history, physiologic variables and blood tests to derive their models, and only two of them (the REMS17 and the Track and Trigger System13) have been internally and externally validated.18 The above models use various types of information, which can broadly be classified as demographic, diagnostic, physiologic or laboratory. The Triage Information Mortality Model (TIMM) does not rely on any physiologic or laboratory variables as these are either not recorded on the administrative dataset or are not available at the point of triage in the ED. Vital signs are not recorded in administrative databases but usually separately within the triage IT system. The specific vital signs are recorded according to the clinical judgment of the triage nurse and then, only the vital signs needed to make a triage decision are measured. Even if vital sign data could be linked with administrative data, the clinically justified variability in vital sign measurement limits their clinical utility. However, the model does include the triage complaint code, which serves some diagnostic function. It is clear from the literature that models that do include diagnostic information, such as APACHE,2–4 COPE5 and the Simple Clinical Score,11 tend to be more accurate than models that do not. Removing the triage complaint code from the TIMM significantly decreases its accuracy and the inclusion of this diagnostic information might explain why the model functions well in the absence of physiologic information. It is also possible that the ATS serves as a surrogate for physiological data as patients with physiological impairment tend to have lower ATS categories. A surprising finding was the fact that a number of non-specific triage codes, including ‘malaise’, ‘other respiratory’, ‘other cardiac’ and ‘other neurology’, were included in the model. The triage nurses use these codes when a more specific problem is not apparent. Why patients triaged to these

© 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine

305

THE TRIAGE INFORMATION MORTALITY MODEL

Conclusion The TIMM predicts inpatient mortality from information available at the point of triage in the ED. It should be possible to use this prediction to identify an association with inpatient mortality and events occurring subsequent to the point of triage. These events include delayed access to inpatient beds or access block. Further research is required to identify an association with access block and patient mortality and also the possible use of TIMM to predict the access block.

Competing interests None declared.

References

Figure 3. Receiver operating characteristic curve and Hosmer–Lemeshow for the ), predicted (proportion). external validation group. ( ), Observed (proportion); (

categories have a higher mortality than those with less diagnostic uncertainty is not clear and needs to be the subject of further investigation. However, it is possible that patients with diagnostic uncertainty might suffer from a delay to definitive care, unlike those who have an easily identified condition, with clearly defined management strategies (e.g. acute coronary syndrome). The study does have a number of limitations. Firstly, it did not analyse mortality on those patients

who died after being discharged from the ED. However, the number of such patients is small and should not affect the validity of the findings. The model uses mortality as its sole outcome measure, and no attempt was made to assess morbidity, failure to return to previous residence or hospital length of stay. Mortality is a crude, but commonly used marker of quality and remains in common use because it is easy to measure and is an outcome of interest to the community.

1. Knaus WA, Zimmerman JE, Wagner DP, Draper EA, Lawrence DE. APACHE – Acute Physiology and Chronic Health Evaluation: a physiologically based classification system. Crit. Care Med. 1981; 9: 591–7. 2. Knaus WA, Draper EA, Wagner DP, Zimmerman JE. APACHE II: a severity of disease classification system. Crit. Care Med. 1985; 13: 818–29. 3. Knaus WA, Wagner DP, Draper EA, et al. The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults. Chest 1991; 100: 1619– 36. 4. Zimmerman JE, Kramer AA, McNair DS, Malila FM. Acute Physiology and Chronic Health Evaluation (APACHE) IV: hospital mortality assessment for today’s critically ill patients. Crit. Care Med. 2006; 34: 1297–310. 5. Duke GJ, Santamaria J, Shann F et al. Critical Care Outcome Prediction Equation (COPE) for adult intensive care. Crit. Care Resusc. 2008; 10: 41. 6. Rhee KJ, Fisher CJ Jr, Willitis NH. The Rapid Acute Physiology Score. Am. J. Emerg. Med. 1987; 5: 278– 82. 7. Olsson T, Lind L. Comparison of the Rapid Emergency Medicine Score and APACHE II in nonsurgical emergency department patients. Acad. Emerg. Med. 2003; 10: 1040–8.

© 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine

306

DJO TEUBNER ET AL.

8. Goodacre S, Turner J, Nicholl J. Prediction of mortality among emergency medical admissions. Emerg. Med. J. 2011; 23: 372–5. 9. Aylin P, Bottle A, Majeed A. Use of administrative data or clinical databases as predictors of risk of death in hospital: comparison of models. BMJ 2007; 334: 1044. 10. Paul P, Pennell ML, Lemeshow S. Standardizing the power of the Hosmer–Lemeshow goodness of fit test in large data sets. Stat. Med. 2013; 32: 67–80. 11. Kellett J, Deane B. The Simple Clinical Score predicts mortality for 30 days after admission to an acute medical unit. Q. J. Med. 2006; 99: 771–81. 12. Kellett J, Deane B, Gleeson M. Derivation and validation of a score based

on Hypotension, Oxygen saturation, low Temperature, ECG changes and Loss of independence (HOTEL) that predicts early mortality between 15 min and 24 h after admission to an acute medical unit. Resuscitation 2008; 78: 52–8. 13. Smith GB, Prytherch DR, Schmidt PE, Featherstone PI. Review and performance evaluation of aggregate weighted ‘track and trigger’ systems. Resuscitation 2008; 77: 170–9. 14. Groarke JD, Gallagher J, Stack J et al. Use of an admission early warning score to predict patient morbidity and mortality and treatment success. Emerg. Med. J. 2008; 25: 803–6. 15. Duckitt RW, Buxton-Thomas R, Walker J et al. Worthing physiological scoring system: derivation and

validation of a physiological earlywarning system for medical admissions. An observational, populationbased single-centre study. Br. J. Anaesth. 2007; 98: 769–74. 16. Froom P, Shimoni Z. Prediction of hospital mortality rates by admission laboratory tests. Clin. Chem. 2006; 52: 325–8. 17. Olsson T, Terent A, Lind L. Rapid Emergency Medicine score: a new prognostic tool for in-hospital mortality in nonsurgical emergency department patients. J. Intern. Med. 2004; 255: 579–87. 18. Brabrand M, Folkestad L, Clausen NG, Knudsen T, Hallas J. Risk scoring systems for adults admitted to the emergency department: a systematic review. Scand. J. Trauma Resusc. Emerg. Med. 2010; 18: 8.

© 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine

Model to predict inpatient mortality from information gathered at presentation to an emergency department: The Triage Information Mortality Model (TIMM).

To derive and validate a mortality prediction model from information available at ED triage...
579KB Sizes 0 Downloads 8 Views