
Invited Commentary

Virtual Quality: The Failure of Public Reporting and Pay-for-Performance Programs

Lara Goitein, MD

Related article page 1904

Is there too much focus on measuring and reporting quality rather than on the conditions needed for improving it? The Centers for Medicare & Medicaid Services (CMS) and other organizations require physicians and hospitals to publicly report performance on quality measures, and the CMS and private payers are tying reimbursement partly to data from such measures in pay-for-performance programs. However, as the director of an intensive care unit performance improvement program, I know that it is difficult—and sometimes counterproductive—to try to improve a complex system simply by rewarding or penalizing the results.

Holding health care professionals and institutions accountable for quality metrics can backfire. For example, because reported quality measures are limited in number and reflect national rather than local priorities, they may divert attention from other, perhaps more important, problems in individual hospitals—a form of teaching to the test. Efforts to improve performance can also lead to gaming, through changes in documentation and coding, or even changes in clinical practice. As examples, health care professionals and institutions may avoid high-risk or nonadherent patients,1 base triage decisions on their effect on performance measures (such as choosing not to admit patients who are likely to be readmitted from the emergency department to reduce readmission rates), or omit screening that might identify conditions, such as hospital-acquired venous thromboembolism, that could reflect poorly on performance.2

What is less frequently discussed, but just as important, is that public reporting and pay-for-performance systems shift the focus of quality improvement to documentation. In so doing, these efforts take quality improvement out of the hands of clinicians and uncouple measurement from its clinical context.
The hope is that the required measurements will jump-start a cycle of continuous quality improvement in which data are used to hone practice. However, there is no guarantee that data will be so used. Indeed, the tasks of measurement and reporting fully occupy many hospital quality improvement departments, leaving few resources for actually improving medical practice. To ensure standardization, each measure generally requires a hefty manual to specify methods and sometimes its own information technology and specialized staff. This bureaucratic work usually falls to nonclinical (or nonpracticing) staff; they may have little understanding of, or authority over, processes on the wards. In practice, such staff may deal almost entirely with improvements based on building documentation into the flow of work or modifying coding, creating the illusion of improved performance.

Hospital hallways are full of displays of charts showing progress on various quality measures; hospital leaders meet to discuss quality metrics, and administrators send newsletters that congratulate staff on accomplishing quality goals. However, in many hospitals, patient care is largely unaffected. Busy physicians and nurses rush by hallway displays and do not read newsletters that report quality metrics. When they pay attention, they tend to regard the data with skepticism: after all, they do not perceive much change save perhaps for some additional requirements for documentation. Few clinicians sit on quality committees, and still fewer have a role in the actual implementation of quality improvement projects.

The findings of a study3 presented in this issue of JAMA Internal Medicine reinforce concerns about the unintended consequences of public reporting and pay for performance and also suggest a gap between quality improvement activities and patient care. Lindenauer et al3 surveyed hospital leaders (chief executive officers and executives responsible for quality) about publicly reported quality measures required by the CMS. Although most respondents said that they used the measures extensively, more than half were concerned that the measures encouraged teaching to the test, and almost half reported trying to maximize performance primarily through changes in documentation and coding. Also important is that half or more believed that the CMS measures did not meaningfully distinguish among hospitals or accurately reflect quality of care, even for conditions specifically targeted by the measures. In short, the study findings suggest that many hospital leaders doubt the clinical relevance of these measures. This skepticism is consistent with national data: studies of public reporting and pay-for-performance programs in the United States have failed to demonstrate a clear connection to improved quality.4,5 How can these results be explained?
The respondents may have understood that although publicly reported measures are highly influential, much of their effect does not reach the bedside. This may be clearest to those most closely involved in the mechanics of measurement and reporting. Executives specifically responsible for quality (eg, chief quality officers) were more than twice as likely as chief executive officers to believe that hospitals attempted to maximize performance on mortality and readmissions measures primarily by changing documentation and coding and much less likely to believe that the measures were clinically meaningful for differentiating among hospitals. There was generally less skepticism about the clinical relevance of measures of process and patient experience, such as use of venous thromboembolism prophylaxis and patient satisfaction, than about outcome measures, such as mortality and readmission rates. Most respondents agreed that process and patient experience measures accurately reflected quality of care (although close to half still disagreed that they meaningfully distinguished among hospitals). However, as the authors point out, the CMS has increasingly emphasized outcome measures over process measures, precisely on the grounds that they are most relevant to overall quality of care. The study findings suggest that in practice this may not be the case; outcome measures are more strongly influenced by case mix and other uncontrollable factors and are less likely to directly and specifically lead to changes in care.

In my view, current policies, although well intentioned, tend to make performance measurement an end in itself rather than a means to better care. The solution cannot be more or different measures: the problem is inherent to imposing performance measurement without regard to the context. For performance improvement programs to succeed, practicing clinicians should be actively engaged and the connection between measurement and improvement ensured. Data collection should be driven by specific clinical questions, with the intent to respond with modifications to practice. Although the relative merits of different approaches require study, performance improvement directed by practicing clinicians may be more likely to address areas of clinical weakness and have influence over what actually happens on hospital wards.

The CMS and other health insurers should shift their focus from the reporting of quality measures to the process of improving quality, as previously suggested by Werner and McNutt.6 Payers could provide incentives to hospitals to invest in performance improvement programs but with the latitude to set their own goals and design their own projects based on their specific needs. Such programs might have the following broad parameters: they should be led by clinicians with dedicated, paid time; be structured by clinical areas; have their own budget (perhaps based on a small percentage of the overall hospital operating budget); and be given centralized data collection, statistical, and information technology support. Instead of accountability for measures of performance, hospitals should be accountable for the support of robust performance improvement processes, with periodic audits of their programs and accomplishments. Some will argue that not every health care institution is experienced or adept in performance improvement and that my suggestions for a change in focus permit too much latitude. A new focus, however, would encourage physicians and hospitals to invest in actual performance improvement rather than in merely creating its appearance. We need real, not virtual, quality.

ARTICLE INFORMATION

Author Affiliation: Christus St Vincent Regional Medical Center, Santa Fe, New Mexico.

Corresponding Author: Lara Goitein, MD, Division of Pulmonary and Critical Care Medicine, Christus St Vincent Regional Medical Center, 455 St Michaels Dr, Santa Fe, NM 87505 ([email protected]).

Published Online: October 6, 2014. doi:10.1001/jamainternmed.2014.3403.

Conflict of Interest Disclosures: None reported.

REFERENCES

1. Friedberg MW, Safran DG, Coltin K, Dresser M, Schneider EC. Paying for performance in primary care: potential impact on practices and disparities. Health Aff (Millwood). 2010;29(5):926-932.

2. Haut ER, Pronovost PJ. Surveillance bias in outcomes reporting. JAMA. 2011;305(23):2462-2463.

3. Lindenauer PK, Lagu T, Ross JS, et al. Attitudes of hospital leaders toward publicly reported measures of health care quality [published online October 6, 2014]. JAMA Intern Med. doi:10.1001/jamainternmed.2014.5161.

4. Houle SK, McAlister FA, Jackevicius CA, Chuck AW, Tsuyuki RT. Does performance-based remuneration for individual health care practitioners affect patient care? a systematic review. Ann Intern Med. 2012;157(12):889-899.

5. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366(17):1606-1615.

6. Werner RM, McNutt R. A new strategy to improve quality: rewarding actions rather than measures. JAMA. 2009;301(13):1375-1377.


JAMA Internal Medicine December 2014 Volume 174, Number 12

Copyright 2014 American Medical Association. All rights reserved.
