EDITORIAL

Performance Measurement and Optimal Care for Surgical Patients

Frank G. Opelka, MD

Henneman et al,1 in their article on ranking hospitals, demonstrate that surgeons around the world face a common movement to transform surgical care and its business models through performance measurement applied across conditions and payment systems. The application of performance measurement to drive improvement and lower cost appears in various forms of national health policy around the globe. Using performance measurement to rank surgeons in clinical and business decisions has definite impacts on patient care. Patients will have to rely on the information and decisions linked to performance measures, and surgeons must be able to trust that the information is reliable and valid. Multiple stakeholders, not only patients and their surgeons, will feel the impact of these policies; still, no group is counting on these policies more than patients to deliver on their promise of guiding them to optimal quality at optimal cost. Other groups driving the changes include health plans, purchasers, and policy experts who insist on public accountability through reporting of clinical performance. As surgeons, we must play a key role in performance measurement and in the business decisions based on those measures to ensure optimal care for our patients.

Performance measures are intended both to inform care delivery systems and providers about variation in care and to stimulate their improvement efforts. As performance measures become more prevalent, surgeons seek to understand their own risk-adjusted clinical outcomes so that they can ensure optimal use of their resources to improve care. Risk adjustment is a complicated science that requires constant review and should be applied fit for purpose. Ultimately, everyone has to agree on which risk-adjustment instrument best fits the goal of accountability (public reporting), where it must be able to demonstrate true variability, and which method best fits the purpose of driving improvement.

It is important to recognize that performance measures do not serve only surgeons and their hospitals. Other stakeholders have an interest in accountable health care. Patients wish to understand performance measures to determine whether they can use them to make choices in their health care, including decisions about healthier lifestyles and the selection of their surgeons. Payers and health plans seek performance measures to define how to direct patients to higher performing, narrowed, and tiered networks of preferred providers. This means performance measures are guiding decisions about quality and cost that insurers use both to build networks for patients (access to care) and to set pricing of insurance premiums and co-pays (cost of care).

As I think about the numerous comments shared with me by surgical colleagues, it seems that in some instances health plans are already using performance measures to rank providers and delivery systems on the basis of cost. A cost basis is used because quality metrics in surgery are not readily available. The limited quality metrics in areas such as surgery should raise concern about measure adequacy and cause health plans to pause before allowing their metrics, based primarily on cost, to rank providers and narrow networks of care in a clinically meaningful way.
Patients should be informed that today's clinical networks created by health plans rely more heavily on cost measures, because quality measures are inadequate to define clinical expertise. Henneman et al display the limits of accurately ranking hospitals on clinical quality. Misinformation, misclassification, and misapplication of clinical quality, driven by measure inadequacy and the resulting poorly designed rankings of surgeons by health plans and government agencies, could be detrimental to safe, high-quality, and efficient care. Henneman et al have drawn attention to the limitations of performance measurement in an important condition, colorectal cancer, in which outcomes appear to vary. Their article makes the case that the fairly low rankability that emerges from their analyses implies that comparing hospitals on case-mix-adjusted 30-day mortality is problematic. The analyses assume that 30-day mortality is an important outcome for the purposes of hospital comparison but provide no rationale for that choice.

Given surgery for cancer, the ultimate outcome of interest perhaps incorporates both short-term mortality and long-term disease-free or overall survival. The outcome of interest would seem to be the long-term, discounted quality-adjusted life-year. It is impractical to compare hospitals directly on quality-adjusted life-years, so the question becomes: what is a good surrogate colorectal cancer outcome for quality-adjusted life-years? Assuming the surrogate is 30-day mortality, the authors have forewarned those interested in public accountability about the shortfalls of low rankability. Essentially, low rankability means that, after case-mix adjustment, there is more outcome variation within the typical hospital than there is among hospitals. Henneman et al describe these limitations with a serious note of caution about using current data to rank hospitals.

It is important to recognize that performance measurement for informing improvement efforts differs greatly from its use in public accountability. Public accountability in health care presents great challenges in performance measurement and in assigning a value from that measurement that optimally classifies and ranks the care delivery system for use by patients, purchasers, payers, and providers while limiting the risk and impact of misclassification. Public accountability of performance measures has a higher bar for reliability and validity than measurement for improvement purposes. Misclassification can lead to wrongful narrowing and tiering of provider networks, can misguide patients, and can lead to misuse of limited incentive resources.

Unanswered in the article is why the rankability is low. Is 30-day mortality not the best surrogate? Do very few true outcome differences remain after risk adjustment? Are case-mix adjusters missing? The authors presumably had hospital volume in their data source, and I wonder what impact its incorporation would have had. It is difficult to select a single analytic approach for both accountability and driving improvement.
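To make the variance intuition concrete: in the hierarchical-modeling literature from which the rankability concept derives, one common formulation (a sketch, not necessarily the exact estimator used by Henneman et al) expresses rankability as the share of between-hospital variance in the total variance of the adjusted outcome,

    rankability = tau^2 / (tau^2 + s^2),

where tau^2 is the variance of true hospital effects on case-mix-adjusted 30-day mortality and s^2 is the typical within-hospital sampling variance. When event counts per hospital are small, s^2 dominates, rankability is low, and observed differences in rank mostly reflect chance rather than true differences in care.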

Some would argue in favor of fixed effects over random effects, or vice versa, depending on the intended use of the measurement. Random effects tend to cause "shrinkage" of the data by pulling the bottom up and pushing the top down toward the middle (a brief numerical sketch of this shrinkage appears at the end of this editorial). They tend to fit choice or selection situations, in which a prospective patient needs surgery and wishes to choose a hospital for the operation. Fixed effects may fit better in reward or quality improvement situations. Still, these differences are only part of the process of creating an applied science of rankability.

Assessing the adequacy of quality measurement and rankability begins with the traditional Donabedian standards for performance measurement: measures of structure, process, and outcomes. Newer additions to these standards include measures of appropriateness, patient engagement and experience of care, and resource use (cost). Compliance with structural measures, such as certification by clinical boards or hospital agencies, demonstrates that standards are met. Process measures further define quality by demonstrating a level of compliance in day-to-day care. Risk-adjusted outcomes are the gold standard patients seek in surgical care, and combining them with other aspects of measurement may further strengthen rankability efforts. Appropriateness measures should be included in defining rankability as well, yet they are among the most complex and difficult measures to develop and implement. If measurement science hopes to be successful, patient experience of care must also be considered. Until we have reached competency in these future measurement systems, Henneman et al and others have defined the dangers to patients of public use of isolated performance measures in current ranking systems.
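As a purely illustrative aside, not drawn from Henneman et al's methods, the shrinkage behavior of a random-effects (empirical Bayes) approach can be sketched in a few lines of code: each hospital's observed mortality rate is pulled toward the overall mean in proportion to how noisy its own estimate is, so low-volume hospitals are pulled the furthest. All counts and the assumed between-hospital variance below are hypothetical.

    # Toy empirical-Bayes shrinkage of hospital 30-day mortality rates.
    # All numbers are hypothetical; tau2 (the between-hospital variance)
    # would normally come from a fitted hierarchical model.

    deaths = [2, 5, 12, 1, 9]        # 30-day deaths per hospital (hypothetical)
    cases = [40, 100, 150, 15, 90]   # resections per hospital (hypothetical)

    rates = [d / n for d, n in zip(deaths, cases)]
    overall = sum(deaths) / sum(cases)
    tau2 = 0.001                     # assumed between-hospital variance

    shrunken = []
    for rate, n in zip(rates, cases):
        sampling_var = rate * (1 - rate) / n      # within-hospital noise
        weight = tau2 / (tau2 + sampling_var)     # reliability of this hospital's own rate
        shrunken.append(weight * rate + (1 - weight) * overall)

    for raw, adj in zip(rates, shrunken):
        print(f"observed {raw:.3f} -> shrunken {adj:.3f}")

In this sketch, the 15-case hospital, whose own rate is the least reliable, is pulled almost all the way to the overall mean, whereas the 150-case hospital retains most of its observed difference. That is the "pull toward the middle" described above, and it is one reason low-volume hospitals rarely stand out in a random-effects ranking.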

REFERENCE
1. Henneman D, van Bommel ACM, Snijders A, et al. Ranking and rankability of hospital postoperative mortality rates in colorectal cancer surgery. Ann Surg. 2014;259:844–849.
