Making quality improvement programs more effective

Yoku Shaw-Taylor
Georgia Scientific, Glenn Dale, Maryland, USA

Abstract

Received 7 February 2013; Revised 9 June 2013; Accepted 18 September 2013

Purpose – In the past 25 years, and as recently as 2011, all external evaluations of the Quality Improvement Organization (QIO) Program have found its impact to be small or difficult to discern. The QIO program costs about $200 million on average to administer each year to improve the quality of healthcare for people aged 65 years or older. The program was created to address questionable quality of care; QIOs review how care is provided based on performance measures. The paper aims to discuss these issues.

Design/methodology/approach – In 2012, the author supported the production of the program's quarterly reports and reviewed its internal monitoring and evaluation protocols. The task also required reviewing all previous program evaluations, and it involved many conversations about the complexities of the program, why impact is difficult to discern and possible ways for eventual improvement. Process flow charts were created to simulate the data life cycle, and discrete event models were created based on the sequence of data collection and reporting to identify gaps in data flow.

Findings – The internal evaluation uncovered data gaps within the program. The need for a system of specification rules for data conceptualization, collection, distribution, discovery, analysis and repurposing is clear. There were data inconsistencies and difficulty integrating data from one instance of measurement to the next. The lack of good and reliable data makes it difficult to discern true impact.

Practical implications – The prescription is a formal data policy or data governance structure that integrates and documents all aspects of the data life cycle. The specification rules for governance are exemplified by the Data Documentation Initiative and the requirements published by the Data Governance Institute. The elements are all in place for a solid foundation of the data governance structure. These recommendations will increase the value of program data.

Originality/value – The model specifies which agency units must be included in the governance authority and the data team, and it prescribes in detail a data governance model to address gaps in the life cycle. These prescriptive measures will allow the program to integrate all of its data. Without a formal data governance structure, the QIO program will be undermined by the persistent lack of good data for monitoring and evaluation.

Keywords Data management, Quality improvement, Process redesign
Paper type Case study

International Journal of Health Care Quality Assurance, Vol. 27 No. 4, 2014, pp. 264-270. © Emerald Group Publishing Limited, 0952-6862. DOI 10.1108/IJHCQA-02-2013-0017

Introduction

On average each year, the US Centers for Medicare and Medicaid Services (CMS) spend over $200 million on the Quality Improvement Organization (QIO) Program. The Medicare program covers approximately 49 million people. The Ninth scope-of-work (SOW), which began in 2008 and ended in 2011, had a budget of more than $1 billion. The program is the single largest public investment in medical quality improvement. It was created to address unnecessary and questionable service quality within a cost-based reimbursement system for Medicare beneficiaries – people who are 65 years or older. The intervention is conducted by QIO staff in each state, who monitor and report healthcare quality. The program has shifted from a purely regulatory/enforcement role to a collaborative one, assisting providers with quality assurance, best practices, workflow and efficient care. Focus areas have expanded from hospital care to nursing homes, home health agencies and Medicare Managed Care (Medicare Advantage). The QIO sentinel events timeline is presented in Figure 1.

[Figure 1. QIO program timeline: 1965 – the Medicare Program is created; 1970 – the Experimental Medical Care Review Organizations (EMCROs) are created; 1972 – Professional Standards Review Organizations (PSROs) are created; 1982 – the Utilization and Quality Control Peer Review Organizations (PROs) replace the PSROs; 1986 – the beneficiary complaint process is implemented; 2002 – the current name, Quality Improvement Organization Program, is adopted.]

When the Experimental Medical Care Review Organizations were created in 1970, their staff provided a mechanism for Medicare to examine services to its beneficiaries. Professional Standards Review Organization (PSRO) staff focussed on local care in their case reviews, and Medical Care Evaluation Studies were conducted to assess service quality. PSRO staff were vested with the authority to deny payments to providers if unnecessary care was detected. The PSROs were replaced by Peer Review Organizations in 1982. In 2002, the program adopted its current name.

Program impact

Over the past 25 years, external program evaluators have looked at program coverage, delivery and monitoring (Jencks et al., 2003; Hsia, 2003; Rollow et al., 2006). In 2012, the author helped to produce the program's quarterly reports and reviewed the program's internal monitoring and evaluation protocols. The task required attention to concerns about program vulnerabilities, program complexities and possible ways to improve services. The author analyzed historical data and relied on conversations with program experts to move the dialogue from diagnosis to discovery. Generally, all external program evaluations conducted by staff in eminent organizations found program impact to be small or difficult to discern. In 1987, staff in the General Accounting Office (GAO, now the Government Accountability Office) reported (United States General Accounting Office, 1987a, p. 1) that:

Our review indicates that developing effective methods to measure and monitor quality of care will require the resolution of certain technical problems related to the availability of methods and information. It will require, in addition, consideration of the basic intent and operation of quality assessment in the Medicare program.

The GAO report to the US House Subcommittee on Health was entitled "Preliminary Strategies for Assessing Quality of Care." A subsequent report released later in the same year cautioned that the program needed better internal controls for evaluations (United States General Accounting Office, 1987b). In 1995, staff in the US Department of Health and Human Services, Office of the Inspector General (Office of the Inspector General, US Department of Health and Human Services, 1995, pp. i-ii) concluded that:

As the Peer Review Organizations (PRO) program becomes increasingly committed to improving the overall practice of medicine, its ability to find and take action on poorly performing physicians and hospitals is questionable […] The PROs themselves find much that is positive about the current direction of the program. But some express reservations about its impact on protecting Medicare beneficiaries from poor performers.


About ten years after the Inspector General's report, the Institute of Medicine's Committee on Redesigning Health Insurance Performance Measures, Payment, and Performance Improvement Programs (Institute of Medicine, 2006, p. 58) concluded that:

Given the lack of consistent and conclusive evidence in scientific literature and the lack of strong findings from the committee's analyses, it is not possible to determine definitively the extent of the impact of the QIOs and the national QIO infrastructure on the quality of healthcare received by beneficiaries. Many confounding factors make it difficult to attribute the results thus far [to QIOs].

In 2007, evaluators from the University of Chicago's National Opinion Research Center (Sutton et al., 2007, p. iii) completed their assessment and concluded that:

The review of [the body of literature on the QIO program] did not yield a conclusive answer as to whether or not the QIO program or specific QIO-led interventions resulted in higher quality, lower quality or no change in any given provider setting. While several QIO interventions or collaboratives suggest that QIO-directed quality improvement activities have been effective at improving process and outcome measures, the statistical significance varied […] Most studies evaluating the effectiveness of the QIO program are fraught with methodological limitations – such as selection bias, confounding and attribution – that are inherent in the study designs.

In the latest assessment, concluded in 2011, Mathematica Policy Research (MPR) staff found that "QIOs' work led to improvement in four of the twelve targeted measures of quality." The caveat was that: "While the remaining eight quality improvement measures may have improved over the period, we could not attribute those improvements to QIO efforts" (Mathematica Policy Research (MPR), 2011, p. ix). The insights on data solutions presented here are based on program monitoring reviews, analyses of internal evaluation protocols and quarterly reports.

Gaps

The QIO program quarterly reports contain performance data based on several measures or aims, including:

(1) beneficiary and family-centered care;

(2) improved individual patient care: first, reducing healthcare-associated infections; second, reducing healthcare-acquired conditions in nursing homes; third, reducing adverse drug events; and fourth, quality reporting and improvement;

(3) integrated care for populations and communities; and

(4) improving health in populations and communities.

The measures are supported by: convening learning and action networks; providing technical assistance to providers, facilities and partners; and care reinvention through innovation spread. As process analysis progressed, data gaps became evident. The need for specification rules for data conceptualization, collection, distribution, discovery, analysis and archiving became clear during the quarterly report preparation. Issues identified include (a sketch of automated checks for such gaps follows this list):

(1) inconsistent data across quarters;

(2) inability or difficulty integrating some data elements from one instance (one quarter) to another;

(3) gaps in data availability for some measures;

(4) data reported for the quarterly report were based on the contract quarter, not the calendar quarter, for some measures;

(5) insufficient data were provided, making it impossible to rank-order provider performance;

(6) data were incomplete because not all providers submitted their performance information;

(7) data were aggregated six-monthly, not quarterly, and it was difficult to disaggregate the data to show performance for the quarter;

(8) data were not available for all performance sub-components; and

(9) there were no formal targets for some measures, so performance could not be compared with previous instances.
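To make such checks concrete, the following is a minimal sketch in Python of the kind of automated gap detection a data team might run before assembling a quarterly report. The record fields (measure, quarter, provider_id, value), the provider identifiers and the expected quarters are all hypothetical; nothing here reflects the QIO program's actual data systems.

```python
from collections import defaultdict

# Hypothetical quarterly submission records; all fields are illustrative only.
records = [
    {"measure": "adverse_drug_events", "quarter": "2012Q1", "provider_id": "P001", "value": 4.2},
    {"measure": "adverse_drug_events", "quarter": "2012Q2", "provider_id": "P001", "value": 3.9},
    {"measure": "hai_rate", "quarter": "2012Q1", "provider_id": "P002", "value": 1.1},
    # P002 never reports hai_rate for 2012Q2: a completeness gap (issue 6).
]

expected_quarters = ["2012Q1", "2012Q2"]
expected_providers = {"P001", "P002"}

def completeness_gaps(records):
    """Flag measure/quarter cells missing expected providers (issues 3 and 6)."""
    seen = defaultdict(set)
    for r in records:
        seen[(r["measure"], r["quarter"])].add(r["provider_id"])
    gaps = []
    for (measure, quarter), providers in seen.items():
        missing = expected_providers - providers
        if missing:
            gaps.append((measure, quarter, sorted(missing)))
    return gaps

def continuity_gaps(records):
    """Flag measures that cannot be chained quarter to quarter (issue 2)."""
    quarters_by_measure = defaultdict(set)
    for r in records:
        quarters_by_measure[r["measure"]].add(r["quarter"])
    return [m for m, qs in quarters_by_measure.items()
            if qs != set(expected_quarters)]

print("completeness gaps:", completeness_gaps(records))
print("continuity gaps:", continuity_gaps(records))
```

Checks like these catch issues (2), (3) and (6) mechanically; an issue such as the contract-versus-calendar quarter mismatch (issue 4) instead needs the explicit measure specifications discussed under Solution below.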

Given these gaps, regulatory compliance will be difficult, leading to uncertainty about the program's impact. QIO staff were reporting some successes in their case reviews and quality improvement efforts, but sometimes these successes were difficult to chart over time. The discrete event model showing data process flow revealed that data submissions were done through workarounds that bypassed extant data reporting systems such as the Case Review Management Information System, the centralized data repository for all case review activities conducted by QIO staff. Staff at the National Coordinating Centers (NCCs) and the National Coordinating Entities provided alternate pathways for data submission, aggregation and reporting. The program reviews revealed that the extant data problems within the Tenth SOW could also be traced to extensive revisions to work statements prompted by a previous CMS administrator. Certain Tenth SOW sections had to be modified in 2012 to align them with the performance criteria schedule. The data collection process flowchart and the data reporting discrete event model confirmed that the program needed an explicit and fully specified data governance strategy and data policy. MPR staff concluded as much: "All QIOs we visited discussed trouble with late timing of data, problems caused by errors and associated recalls, and lack of detail within data" (MPR, 2011, p. xx).
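A minimal sketch of such a discrete event model is shown below, in Python, assuming hypothetical timings, a hypothetical 90-day reporting cut-off and an invented share of submissions that take a workaround pathway. It merely illustrates how modelling the submission sequence can surface late arrivals of the kind MPR described; no duration or route is taken from the actual program.

```python
import heapq
import random

random.seed(1)

CUTOFF = 90  # hypothetical days from quarter start to the reporting cut-off

def simulate(n_submissions=100):
    """Toy discrete-event simulation of submissions flowing to the report."""
    events = []  # min-heap of (time, kind, submission_id)
    for i in range(n_submissions):
        start = random.uniform(0, 80)  # submission initiated during the quarter
        heapq.heappush(events, (start, "submit", i))
    late = 0
    while events:
        time, kind, sid = heapq.heappop(events)
        if kind == "submit":
            # Assume 30% of submissions take a workaround path (invented share),
            # which adds handling delay before reaching the central repository.
            slow = random.random() < 0.3
            delay = random.uniform(20, 40) if slow else random.uniform(2, 10)
            heapq.heappush(events, (time + delay, "arrive", sid))
        elif kind == "arrive" and time > CUTOFF:
            late += 1  # arrived after the quarterly reporting cut-off
    return late

print("late arrivals out of 100:", simulate())
```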

What prevails is akin to data chaos, with all the tell-tale signs: inconsistent data, incomplete data and reduced accuracy for measuring the program's true effect. This data chaos does not by itself explain why the program's impact is hard to demonstrate, but data governance is a necessary part of any solution to make the program more effective.


Solution

The clear prescription for this situation is a formal data governance structure to integrate and document all data life cycles. A data governance framework will increase the QIO program's value. The specification rules for governance are exemplified by the Data Documentation Initiative (1995) and the requirements published by the Data Governance Institute (2004). The elements are all in place for a solid data governance structure. The model comprises the data governance authority and the data team on the program side, and data consumers on the public side. The governance authority is responsible for:

(1) policies related to regulatory compliance, which include national quality strategy concepts such as clinical care, service coordination, community health, efficiency and cost reduction, safety, and a person- and care-centered experience; and

(2) developing measures or variables and data collection instances linked to quality (a hypothetical measure specification is sketched after this list).
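As an illustration of responsibility (2), a measure specification might pin down the reporting basis and target explicitly, so that calendar-quarter and contract-quarter data are never silently mixed. The sketch below is hypothetical: the measure, field names and values are invented for illustration, not drawn from the program.

```python
# Hypothetical measure specification; in practice this would live in a
# governed registry, not in code. All names and values are illustrative.
measure_spec = {
    "id": "ADE-01",
    "name": "Adverse drug event rate",
    "aim": "improved individual patient care",
    "definition": "ADEs per 1,000 beneficiary-months",
    "reporting_basis": "calendar_quarter",  # vs. "contract_quarter"
    "target": {"direction": "decrease", "value": 3.0},
    "data_source": "provider submissions via the central repository",
    "steward": "data team",
}

def comparable(spec_a, spec_b):
    """Two measure instances may be compared only on the same basis."""
    return spec_a["reporting_basis"] == spec_b["reporting_basis"]
```

Recording the reporting basis as data rather than as an unstated convention is what allows downstream checks to refuse an invalid quarter-to-quarter comparison.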

The authority comprises program leaders; government task leaders (GTLs), who provide technical leadership and contract oversight; the NCC staff, who support the QIOs through learning and action networks; the Office of Clinical Standards and Quality, which is responsible for quality measures; and the Quality Improvement Group (QIG), which leads the effort to integrate CMS's information systems. The data team members serve as data stewards; they are responsible for all activities related to data management (a sketch of the documentation they might carry with each dataset follows this list), including:

(1) deploying data collection systems;

(2) naming conventions and harmonizing all terminologies and measures;

(3) data cleaning;

(4) rectifying missing and incomplete data;

(5) archiving;

(6) access and security; and

(7) resolving all issues related to data quality.
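One lightweight way to discharge these stewardship duties is to attach documentation to every dataset as it moves through the life cycle, pairing metadata (what the data describe) with paradata (how the data were collected), as the Conclusion below notes. The sketch is illustrative only; the fields are invented and do not follow any formal DDI schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetRecord:
    """Illustrative metadata + paradata carried with each quarterly file."""
    dataset_id: str
    measure_id: str            # links back to the governed measure spec
    quarter: str
    variables: List[str]       # harmonized names per the naming conventions
    collection_mode: str       # paradata: how the data were gathered
    submission_path: str       # paradata: central repository or workaround
    steward: str
    version: int = 1
    cleaning_notes: List[str] = field(default_factory=list)

record = DatasetRecord(
    dataset_id="ade-2012q2",
    measure_id="ADE-01",
    quarter="2012Q2",
    variables=["provider_id", "ade_count", "beneficiary_months"],
    collection_mode="provider web submission",
    submission_path="central repository",
    steward="data team",
)
record.cleaning_notes.append("rectified 3 incomplete provider rows")
```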

Finally, data consumers provide feedback to the data team. Data consumers include: SOW program representatives; the Information Systems Group (ISG); the NCCs; and contracting officer's representatives (CORs), who assist in contract monitoring. Figure 2 shows the governance model.

Conclusion

These prescriptive measures allow health quality improvement program staff to integrate all data through paradata (how data are collected) and metadata (data description). Most important, a governance structure supports the analytics for monitoring and evaluation. Without it, the program will continue to be undermined by poor data. The QIO program's impact will remain difficult to discern and its effect will be sub-optimal unless the data cycle is carefully governed.

[Figure 2. Data governance model for the quality improvement organization program. The Data Governance Authority (the government task lead or GTL, program "Drivers" and "Aims" leaders, the Office of Clinical Standards and Quality, and the Quality Improvement Group) is responsible for concept approvals, setting requirements, defining data concepts and providing access control. The Data Team (subject matter experts and data specialists; data collectors, managers and a librarian) is responsible for data and document management, change management, data quality and data distribution. The Information Systems Group or ISG, the National Coordinating Centers and the Contracting Officer's Representative or COR connect to the model as data consumers. Tools include service contracts for data collection, version controls, data policies, and metadata and change reports.]

References

Data Documentation Initiative (1995), "What is DDI?", available at: www.ddialliance.org/ (accessed October 24, 2012).

Data Governance Institute (2004), "The DGI Data Governance Framework", available at: www.datagovernance.com/ (accessed October 24, 2012).

Hsia, D.C. (2003), "Medicare quality improvement: bad apples or bad systems?", Journal of the American Medical Association, Vol. 289 No. 3, pp. 354-356.

Institute of Medicine (2006), Medicare's Quality Improvement Organization Program: Maximizing Potential, National Academies Press, Washington, DC.

Jencks, S.F., Huff, E.D. and Cuerdon, T. (2003), "Change in the quality of care delivered to Medicare beneficiaries 1998-1999 to 2000-2001", Journal of the American Medical Association, Vol. 289 No. 3, pp. 305-312.

Mathematica Policy Research (2011), "Executive summary of the evaluation of the 9th Scope of Work of the Quality Improvement Organization Program", presented to the Centers for Medicare and Medicaid Services, Mathematica Policy Research, Washington, DC.

Office of the Inspector General, US Department of Health and Human Services (1995), The Medicare Peer Review Organizations' Role in Identifying and Responding to Poor Performers, US Department of Health and Human Services, Washington, DC.

Rollow, W., Lied, T.R., McGann, P., Poyer, J., LaVoie, L., Kambic, R.T., Bratzler, D.W., Ma, A., Huff, E.D. and Ramunno, L.D. (2006), "Assessment of the Medicare quality improvement organization program", Annals of Internal Medicine, Vol. 145 No. 5, pp. 342-353.

Sutton, J.P., Silver, L., Hammer, L. and Infante, A. (2007), "Toward an evaluation of the Quality Improvement Organization Program: beyond the 8th Scope of Work", presented to the Office of the Assistant Secretary for Planning and Evaluation, US Department of Health and Human Services, National Opinion Research Center at the University of Chicago, Washington, DC.

United States General Accounting Office (1987a), "Medicare: preliminary strategies for assessing quality of care", briefing report to the Chairman, Subcommittee on Health, Committee on Ways and Means, House of Representatives, US General Accounting Office, Washington, DC.

United States General Accounting Office (1987b), "Medicare: better controls needed for peer review organizations' evaluations", report to the Subcommittee on Health, Committee on Finance, US Senate, US General Accounting Office, Washington, DC.

Corresponding author
Dr Yoku Shaw-Taylor can be contacted at: [email protected]
