ORIGINAL ARTICLE

The Effectiveness of a Multicenter Quality Improvement Collaborative in Reducing Inpatient Mortality

Eugene Kroch, PhD, Michael Duan, MS, John Martin, MPH, Richard Bankowitz, MD, MBA, and Marla Kugel, MPH

Motivation and Background: This study examines the evidence that a particular quality improvement collaborative focused on Quality, Efficiency, Safety and Transparency (QUEST) was able to improve hospital performance.

Setting: The collaborative included a range of improvement vehicles, such as sharing customized comparative reports, conducting online best practices forums, using 90-day rapid-cycle initiatives to test specific interventions, and conducting face-to-face meetings and quarterly one-on-one coaching sessions to elucidate opportunities.

Methods: With these kinds of activities in mind, the objective was to test for the presence of an overall "QUEST effect" via statistical analysis of mortality results spanning 6 years (2006–2011) for more than 600 acute care hospitals from the Premier alliance.

Results: The existence of a QUEST effect was confirmed by complementary approaches, including comparison of matched samples (collaborative participants against controls) and multivariate analysis.

Conclusion: The study concludes with a discussion of the methods that were plausible reasons for the successes.

Key Words: hospital quality, collaborative improvement, inpatient mortality

(J Patient Saf 2015;11:67–72)

From Premier, Inc, Charlotte, NC, and the Leonard Davis Institute of the University of Pennsylvania, Philadelphia, PA.
Correspondence: Eugene Kroch, PhD, Premier, Inc, 113034 Ballantyne Corporate Place, Charlotte, NC 28277 (e-mail: [email protected]); or Leonard Davis Institute of the University of Pennsylvania, 3641 Locust Walk, Philadelphia, PA 19104 (e-mail: [email protected]).
The authors disclose no conflict of interest. This study had no external funding and was fully funded by the employer of all study authors, Premier Inc.
Copyright © 2015 Wolters Kluwer Health, Inc. All rights reserved. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License, where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially.

At present, there is growing interest in the use of improvement collaboratives as a means of more rapidly achieving positive change, and several large-scale multicenter projects have begun. The evidence for the effectiveness of collaboratives as a means of accelerating improvement, however, has been mixed.3,14 The reasons for the lack of uniformity may include the highly variable nature of the collaboratives and the assortment of methods and strategies used. It is essential, therefore, that we begin to understand whether and which collaborative methodologies are effective and in what contexts. This article presents the results of one such collaborative in reducing inpatient mortality, details the methods used, and discusses possible reasons for the successes.

On January 1, 2008, the Premier health care alliance launched a multiyear performance improvement collaborative focusing on Quality, Efficiency, Safety and Transparency (QUEST). The hypothesis was that, by using the power of collaboration, improvement would occur at a more rapid pace. The QUEST framework, through a strategic partnership with the Institute for Healthcare Improvement (IHI), used a specific improvement model8 that attributes lack of improvement in health care to "a failure of will, a failure of ideas, or a failure of execution." The framework also drew on experience gained in an earlier initiative, the Premier Hospital Quality Incentive Demonstration (HQID), a 6-year project with the Centers for Medicare and Medicaid Services.9 That demonstration was successful in achieving its primary goal, adherence to evidence-based medicine (EBM)1,11; however, an impact on mortality has yet to be demonstrated.5,16,17

In this article, we test the hypothesis that participants in a structured collaborative were more successful in reducing mortality than hospitals that did not participate. To do so, we start with a comparison of risk-adjusted mortality trends between 2 cohorts of hospitals: the initial "charter" members of QUEST and a group of non-QUEST hospitals that had access to the same software quality improvement tools. We also test the hypothesis using a multivariate model that isolates the effect of QUEST from other factors that may have affected mortality.4,6,7,13

METHODS

Collaborative Execution Framework


The methods used in the QUEST collaborative (described later) gave rise to the specific requirements for participation, namely, (1) commitment of senior leadership, including the CEO; (2) use of a standard set of data analytic products that enabled capture of measurement data; and (3) an agreement that all data would be transparent within the collaborative. The QUEST collaborative framework also required agreement on the specific measures and methods to be used, as well as on the definition of top performance targets.

With regard to the mortality improvement target, participants elected to study risk-adjusted, all-cause, hospital-wide mortality. The participants examined 3 potential methods for risk adjusting the data and chose the method initially developed by CareScience,10,15 which adjusts for palliative care and comorbid conditions, among other factors. A target performance of an observed-expected (O/E) mortality ratio of 0.82 was chosen because it represented the lowest (best) quartile of mortality in the baseline period.

Each participant received a quarterly report highlighting the institution's O/E value and distance from the goal, which also included a breakdown of the clinical subgroups that represented the highest areas of opportunity for improvement. Participants also had access to an analytical tool that allowed them to explore the data in great detail. Participants could see the performance of all fellow participants and could drill into the data to find top performers in any given area. The CEO of each institution was also provided with a yearly performance summary. In addition, Premier staff examined the pooled data to determine the greatest opportunities for mortality reduction (conditions where the number of deaths greatly exceeded the model prediction).
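As a concrete illustration of the target calculation, the sketch below computes hospital O/E mortality ratios and derives a performance target as the boundary of the lowest (best) quartile of baseline ratios. This is a simplified stand-in, not the CareScience risk-adjustment method; the function names and the baseline values are hypothetical.

```python
# Illustrative sketch (not the CareScience model): an O/E ratio is observed
# deaths divided by model-expected deaths, and the top-performance target is
# the 25th percentile of baseline O/E ratios across hospitals.

def oe_ratio(observed_deaths, expected_deaths):
    """Observed-expected (O/E) mortality ratio for one hospital."""
    return observed_deaths / expected_deaths

def percentile(values, pct):
    """Linear-interpolation percentile (pct in [0, 100])."""
    xs = sorted(values)
    k = (len(xs) - 1) * pct / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# Hypothetical baseline O/E ratios for a handful of hospitals
baseline_oe = [0.78, 0.82, 0.95, 1.02, 1.10, 0.88, 1.25, 0.82]

# Best (lowest) quartile boundary serves as the improvement target
target = percentile(baseline_oe, 25)
```

A ratio below 1.0 means fewer deaths than the risk model predicts; a hospital meets the target when its O/E ratio falls at or below the quartile boundary.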
Through another aspect of the collaborative, staff also became aware of substantial variation in the approach to and documentation of secondary diagnoses, in particular palliative care.




Multivariate Analysis

TABLE 1. Study Hospitals and QUEST Status

QUEST Status      Y2006   Y2007   Y2008   Y2009   Y2010   Y2011
Charter member      —       —      141     141     136     136
Class 2009          —       —       —       29      28      28
Class 2010          —       —       —       —       36      35
QUEST subtotal      0       0      141     170     200     199
Non-QUEST         366     373     321     324     392     424
Total             366     373     462     494     592     623

Some hospitals may not have consecutive data across the entire time frame.

To support QUEST hospitals in improving their performance in the delivery of evidence-based care, Premier provided several offerings, ranging from educational calls, customized action plans, and Web-based resources to "sprints" and "mini collaboratives." A sprint is a short-term, rapid-cycle improvement education series designed to drive and sustain change in specific indicators or processes of care. Mini collaboratives are more intensive 6- to 9-month improvement initiatives focused on a specific condition, disease state, or process of care.

Measuring the Collaborative's Impact

All observational information and data were derived from a database maintained by Premier, which includes a pooled hospital cross-section time-series sample consisting of approximately 36 million deidentified inpatient discharges from approximately 650 hospitals during a 6-year time frame. We took 2 approaches to measuring the impact of the QUEST collaborative on mortality.

The first approach, largely descriptive, tracked hospital mortality trends over the 4 years after the start of the collaborative, comparing QUEST participants with other Premier hospitals that had access to the same software tools but did not participate in the collaborative (the non-QUEST group). For this descriptive trend analysis, mortality was measured in each calendar quarter by comparing the observed mortality rate with the expected (risk-adjusted) mortality rate at the hospital level. The O/E ratios were tracked across a baseline period (third quarter of 2006 through second quarter of 2007) and a performance period (first quarter of 2008 through the last quarter of 2011).

The second approach was a multivariate analysis using a cross-section time-series regression model to isolate a QUEST effect from other factors that might explain both hospital effects and time trends. Table 1 summarizes the sample, giving the hospital counts by year, QUEST status, and cohort. Note that we have a full 2 years of non-QUEST data before the launch of QUEST in 2008. This formal inferential setting makes it possible to conduct hypothesis tests on the timing and strength of the QUEST effect.

Because coding practices can affect the expected mortality generated from the risk adjustment predictive model selected for QUEST, an analysis of coding practices was performed on the 2 cohorts to determine any factors that might be contributing to the observed differences.
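The arithmetic behind the descriptive trend analysis can be sketched as follows: per-quarter O/E ratios for a cohort, smoothed with a trailing 4-quarter moving average of the kind used to display the cohort trends. The quarterly counts below are hypothetical; this illustrates the calculation, not the study's actual data pipeline.

```python
# Sketch of the quarterly trend calculation on hypothetical cohort data.

def quarterly_oe(observed, expected):
    """Per-quarter O/E ratios from parallel lists of death counts."""
    return [o / e for o, e in zip(observed, expected)]

def moving_average(series, window=4):
    """Trailing moving average; early points average whatever is available."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

observed = [210, 205, 198, 190, 185, 180]   # deaths per quarter (hypothetical)
expected = [200, 201, 199, 200, 202, 204]   # model-expected deaths (hypothetical)

oe = quarterly_oe(observed, expected)
smoothed = moving_average(oe, window=4)     # 4-quarter moving average
```

Plotting `smoothed` for each cohort against calendar quarter reproduces the kind of trend comparison described here: a declining series indicates falling risk-adjusted mortality.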
Non-QUEST hospitals were matched on bed size, urban/rural location, teaching status, and geographic region, and we examined International Classification of Diseases, Ninth Revision, diagnosis data from the 141 hospitals in the QUEST cohort and the 141 hospitals in the matched non-QUEST cohort. Because the palliative care code is considered an important marker of mortality risk, we specifically examined the frequency of that code (V66.7).
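A minimal sketch of this matched-control construction, assuming greedy exact matching on the four hospital traits plus a simple code-frequency comparison; the record layout and helper names are hypothetical, not the study's actual matching procedure.

```python
# Sketch: pair each QUEST hospital with a non-QUEST hospital sharing the same
# bed-size band, urban/rural location, teaching status, and region, then
# compare palliative care (ICD-9 V66.7) coding frequency between cohorts.
from collections import defaultdict

def match_controls(quest, pool):
    """Greedy exact match on (beds, location, teaching, region)."""
    key = lambda h: (h["beds"], h["location"], h["teaching"], h["region"])
    buckets = defaultdict(list)
    for h in pool:
        buckets[key(h)].append(h)
    pairs = []
    for h in quest:
        candidates = buckets[key(h)]
        if candidates:                      # unmatched hospitals are dropped
            pairs.append((h, candidates.pop()))
    return pairs

def code_frequency(discharges, code="V66.7"):
    """Share of discharges carrying a given diagnosis code."""
    hits = sum(1 for d in discharges if code in d["dx"])
    return hits / len(discharges)
```

In practice, exact matching on coarse strata like these trades some sample size for comparability; a large difference in V66.7 frequency between matched cohorts would flag documentation practice, rather than case mix, as a driver of expected-mortality differences.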



In the multivariate analysis, the QUEST effect is inferred from a parametric estimation of a general regression model with the following functional form:

    y_hq = x_hq β + ε_hq,  for all h, q

where y_hq is the O-E difference (O-E Diff) mortality rate of hospital h at quarter q. The vector x_hq includes hospital characteristics, data evolving control factors, and QUEST indicators; β is the vector of marginal effects of the independent variables on the O-E Diff rate; and ε_hq is the random error component of the model. In the model, hospital characteristics include bed size, teaching status, rural or urban location, and geographic area location. Data control factors are specified as yearly and quarterly (for seasonality) dummy variables; they are intended to capture the effect of the general evolution of clinical practice and coding completeness during the study period. Descriptive statistics of the variables are in Table 2.

The QUEST effects on risk-adjusted mortality are modeled at 3 levels of parametric restriction. The first, most constrained model specification contains a fixed QUEST effect (specified as a binary QUEST flag) and a QUEST linear trend effect, represented by the number of quarters that have passed since a hospital joined QUEST. The flag is turned on for those quarters in which the hospital participated in QUEST. For example, a hospital that joined QUEST in the first quarter of 2010 has the flag turned on starting with that quarter and for all subsequent quarters of participation. If a hospital drops out of QUEST, the flag is turned off. In the second model specification, the linearity of the trend effect is relaxed by interacting the QUEST flag with annual time effects (yearly dummy variables). The third model specification is the least constrained, allowing for cohort effects as well as flexible time effects by interacting the QUEST flag with the cohort as well as the year. A hospital's cohort is determined by the year it joined QUEST, that is, charter membership (starting in 2008), the "class" that started in 2009, and the class of 2010.

Another variant of the model introduces full hospital effects, which effectively removes the influence of all potential latent effects, thereby isolating the timing effects. This variant is applied to each of the 3 aforementioned model specifications (version "b" as distinct from the original version "a" described earlier). Version b is a type of Heckman specification2 to remove selection bias. In this setting, all hospital effects are represented by dummy variables, which replace all control traits, such as size, teaching status, location, and the like. Finally, version "c" of the model introduces hospital random effects to account for the correlation of observations over time within a hospital and the correlation of patients within a hospital. Hence, we allow for nonindependence of hospital disturbances over time, which, if treated as fixed, could inflate the significance of the QUEST effect. If the QUEST effect were purely based on self-selection into the collaborative, the time effects would vanish in favor of sorting on hospital effects, whether fixed or random.

TABLE 2. Descriptive Statistics of the Regression Data

                                         QUEST     Non-QUEST   Overall
No. observations (hospital quarters)     2851      7749        10,600
Observed mortality rate, average         1.90%     2.03%       2.00%
Expected mortality rate, average         2.62%     2.35%       2.42%
O-E Diff mortality rate, average        −0.72%    −0.32%      −0.43%
Beds (0–99)                              14.5%     20.5%       19.2%
Beds (100–199)                           18.7%     21.3%       25.1%
Beds (200–399)                           37.9%     34.6%       30.7%
Beds (400+)                              28.8%     23.7%       25.1%
Teaching (COTH)                          16.6%     12.5%       13.6%
Rural location                           16.2%     26.6%       23.8%
Northeast                                15.6%     12.8%       13.6%
Midwest                                  29.1%     18.8%       21.5%
South                                    42.3%     43.1%       42.9%
West                                     13.0%     25.3%       22.0%
Y2006                                     0.0%      9.3%        6.8%
Y2007                                     0.0%     18.9%       13.8%
Y2008                                    19.8%     16.0%       17.1%
Y2009                                    23.9%     16.0%       18.1%
Y2010                                    28.5%     18.6%       21.3%
Y2011                                    27.8%     21.1%       22.9%

COTH, Council of Teaching Hospitals.
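The most constrained specification (a fixed QUEST flag plus a linear trend in quarters since joining) can be sketched as an ordinary least squares fit. This is a toy illustration on hypothetical hospital-quarter data: the study's full model also includes hospital traits and year/quarter dummies, and numpy's least squares routine stands in for the cross-section time-series estimator actually used.

```python
# Sketch of specification 1a: O-E Diff regressed on an intercept, a binary
# QUEST participation flag, and a linear trend in quarters since joining.
import numpy as np

def fit_quest_model(oe_diff, quest_flag, quest_quarters):
    """OLS of O-E Diff on [intercept, QUEST flag, quarters since joining]."""
    X = np.column_stack([
        np.ones(len(oe_diff)),              # intercept
        np.asarray(quest_flag, float),      # 1 while participating in QUEST
        np.asarray(quest_quarters, float),  # quarters elapsed since joining
    ])
    beta, *_ = np.linalg.lstsq(X, np.asarray(oe_diff, float), rcond=None)
    return beta  # [baseline level, fixed QUEST effect, per-quarter trend]
```

A negative coefficient on the flag indicates an immediate drop in risk-adjusted mortality upon joining, and a negative trend coefficient indicates continued improvement with each quarter of participation. Relaxing the linear trend (specifications 2 and 3) amounts to replacing the trend column with flag-by-year and flag-by-cohort interaction dummies.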

RESULTS

There were 141 charter members in QUEST. Not all QUEST hospitals license their data for inclusion in the Premier research database, and these were excluded. During the 4-year performance period, some charter members of QUEST dropped out of the collaborative. There were 136 QUEST charter members with at least one quarter of data between the third quarter of 2006 and the last quarter of 2011. There were 317 hospitals in the Premier database that did not join QUEST at any time during the baseline or performance period and had at least one quarter of data in the same period. QUEST charter member and non-QUEST Premier hospital characteristics are shown in Table 3.

The change in mortality during the baseline and performance periods for these 2 hospital cohorts is shown in Figure 1 as a 4-quarter moving average. The average O/E ratio for the baseline period for QUEST hospitals and non-QUEST hospitals was 0.98 and 1.07, respectively. By the end of the 4-year performance

TABLE 3. Hospital Characteristics of QUEST and Non-QUEST Cohorts
