Author's Accepted Manuscript

Process Improvement in Surgery Christina A. Minami MD, Catherine R. Sheils BA, Karl Y. Bilimoria MD, MS, Julie K. Johnson PhD, Elizabeth R. Berger MD, Julia R. Berian MD, Michael J. Englesbe MD, Oscar Guillamondegui MD, FACS, Leonard H. Hines MD, FACS, Joseph B. Cofer MD, FACS, David R. Flum MD, MPH, Richard C. Thirlby MD, Hadiza S. Kazaure MD, Sherry M. Wren MD, Kevin J. O’Leary MD, Jessica L. Thurk BA, Gregory D. Kennedy MD, PhD, Sarah E. Tevis MD, Anthony Yang MD

www.elsevier.com/locate/cpsurg

PII: S0011-3840(15)00152-5
DOI: http://dx.doi.org/10.1067/j.cpsurg.2015.11.001
Reference: YMSG508

To appear in: Current Problems in Surgery


PROCESS IMPROVEMENT IN SURGERY Christina A. Minami, MD1,2; Catherine R. Sheils, BA1,3; Karl Y. Bilimoria, MD, MS1,2; Julie K. Johnson, PhD1,2; Elizabeth R. Berger, MD1,4; Julia R. Berian, MD1,5; Michael J. Englesbe, MD6; Oscar Guillamondegui, MD, FACS7; Leonard H. Hines, MD, FACS8; Joseph B. Cofer, MD, FACS9; David R. Flum, MD, MPH;10 Richard C. Thirlby, MD11; Hadiza S. Kazaure, MD12; Sherry M. Wren, MD12; Kevin J. O’Leary, MD13; Jessica L. Thurk, BA14; Gregory D. Kennedy, MD, PhD15; Sarah E. Tevis, MD15, Anthony Yang, MD1,2

1. Surgical Outcomes and Quality Improvement Center (SOQIC), Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL
2. Center for Healthcare Studies in the Institute for Public Health and Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL
3. University of Rochester School of Medicine, University of Rochester, Rochester, NY
4. Department of Surgery, Loyola University Medical Center, Maywood, IL
5. Department of Surgery, University of Chicago Medical Center, Chicago, IL
6. Department of Surgery, University of Michigan Health Systems, Ann Arbor, MI
7. Department of Surgery, Vanderbilt University School of Medicine, Nashville, TN
8. Department of Surgery, University of Tennessee College of Medicine, Knoxville, TN
9. Department of Surgery, University of Tennessee College of Medicine, Chattanooga, TN
10. Department of Surgery, University of Washington School of Medicine, Seattle, WA
11. Department of Surgery, Virginia Mason Medical Center, Seattle, WA
12. Department of Surgery, Stanford University School of Medicine, Palo Alto, CA
13. Division of Hospital Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL
14. Feinberg School of Medicine, Northwestern University, Chicago, IL
15. Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI


Corresponding Author:
Karl Bilimoria, MD MS
Surgical Outcomes and Quality Improvement Center (SOQIC)
Department of Surgery and Center for Healthcare Studies
Feinberg School of Medicine, Northwestern University
633 N St. Clair, 20th floor
Chicago, IL 60611
Telephone: 312-695-4853
Fax: 312-503-4401
[email protected]

INTRODUCTION

Over the last two decades, health care has faced steadily intensifying scrutiny of its safety, quality, and cost, resulting in increased interest in formal quality measurement and quality improvement (QI) programs.1-3 The quality of healthcare became a major focus when the Institute of Medicine (IOM) released its 1999 report, To Err is Human: Building a Safer Health System.4 This report addressed patient safety, emphasizing that "error" resulting in patient harm was not typically the consequence of a lack of competence, good intentions, or hard work on the part of health care professionals; rather, safety in healthcare is a direct consequence of the system of care. This report, along with the 2001 follow-up, Crossing the Quality Chasm: A New Health System for the 21st Century, illustrated a significant need for more formalized approaches to process and quality improvement within our healthcare system.5

Along with the subsequent growth in interest in formal QI programs came a new awareness of the significant knowledge gaps among clinicians regarding the implementation of quality and process improvement (PI) projects. Healthcare professionals work in complex, high-risk environments but have not been routinely taught how to evaluate, analyze, or improve the systems in which they play vital roles.6 As it became apparent that sustainable, formalized approaches to QI education would be necessary to prepare all practitioners to evaluate quality and outcomes data and implement QI initiatives, the IOM, the Accreditation Council for Graduate Medical Education (ACGME), the American Board of Medical Specialties (ABMS), and the Lucian Leape Institute at the National Patient Safety Foundation called for the integration of patient safety and quality improvement training into the formal education of U.S. physicians.7 Additionally, the Association of American Medical Colleges in 2012 recommended a set of competencies for faculty educators in QI and patient safety.8

In surgery, emphasis on quality is also manifested in mandates from different governing bodies. The American Board of Surgery requires surgeons to self-monitor performance metrics for maintenance of certification (a requirement that can be fulfilled by participation in a national, regional, or local surgical outcomes study or quality assessment program).7 In addition, the Joint Commission tracks surgical safety and monitors surgeon performance as part of its credentialing process.9 The importance of a formal approach to QI training has thus been highlighted in the surgical world.

The fundamentals of QI begin with an understanding of the underlying theoretical framework. Healthcare quality efforts commonly draw upon the Donabedian model of quality improvement.10 This model describes three approaches to assessing quality of care: studying the settings in which healthcare is administered (structure), the processes through which care is delivered, and the outcomes of medical care. Each of these three approaches to measuring and improving quality of care has distinct strengths and weaknesses.
Structural measures are often the easiest characteristics to ascertain (e.g., hospital size or surgeon procedural volume), but these variables may not be immediately actionable from a provider's perspective.11 Outcomes measures are the "ultimate validators of the effectiveness and quality of medical care," and have the greatest buy-in from surgeons and the public.10 But while outcomes may indicate good or poor quality care, they do not provide insight into the source of variance in outcomes. Finally, process measures, which describe the care patients receive, are often strongly associated with outcomes and can be linked directly to quality improvement initiatives.11

Application of PI principles often takes place at the structural level, implementing changes to process flows that can in turn be measured by process and outcomes measures. Different problem-solving frameworks, many of which were initially developed in the business world, are used in surgical quality improvement, including DMAIC, Six Sigma, and Lean, in order to improve the processes of perioperative care. Examples of surgical process improvement tools that have been successfully applied include clinical mapping instruments (e.g., fast track protocols and enhanced recovery after surgery protocols), enhanced communication tools (e.g., checklists and pauses), and error reduction strategies (e.g., patient care planning and patient safety).12 Assessment and measurement of improvements in surgical care as a result of these interventions can be as important as the QI initiatives themselves. Compared with long-term disease processes, the relatively immediate and clear outcomes of surgical interventions lend themselves well to quality measurement and intervention.13 Some professional surgical organizations, including the Society of Thoracic Surgeons and the Society for Vascular Surgery, have initiated QI efforts that assess the structure, processes, and outcomes of specialty-specific procedures.14 On a national scale, the American College of Surgeons' National Surgical Quality Improvement Program (ACS-NSQIP) has become one of the most widely recognized programs in surgical quality assessment and improvement.
ACS-NSQIP has focused on reliable data collection and feedback of outcomes data to sites, and additionally disseminates information about evidence-based practices and interventions through its best practices initiatives.15 Many participating hospitals have utilized ACS-NSQIP data to launch quality improvement initiatives,16 and around the country, ACS-NSQIP and similar models have galvanized the formation of several state collaboratives that center on surgical QI efforts.

PI/QI in surgery has thus become a major focus for individual surgeons and hospitals as well as their professional organizations, the general public, insurers, and regulators. In this monograph, we aim to review aspects of PI/QI that are relevant to the currently practicing surgeon. We will discuss commonly used QI data platforms, process improvement methodologies, examples of successful local QI initiatives, the role of regional surgical improvement collaboratives, and common barriers to QI efforts and potential solutions to overcome them.

COMMONLY USED DATA PLATFORMS IN SURGICAL QUALITY IMPROVEMENT

To improve the quality of care, one must first be able to measure quality. Large datasets are often used to first identify gaps or differential performance in certain areas of healthcare and are thus effective, if not indispensable, tools in QI. Health care data are broadly defined as either "clinical data" – those derived from ongoing patient care, e.g., from physician notes, procedure reports, or test results – or "administrative data" – those collected primarily for administrative, not research, purposes, e.g., transactions, registration, and record-keeping. Both clinical and administrative data have their respective strengths and weaknesses.

Clinical datasets are prospectively abstracted from clinical records by trained reviewers ("abstractors"); they often house relevant outcomes for researchers and quality improvement leaders, and their granular, patient-level data lend themselves well to risk adjustment. There is, however, a limit to the number of variables that can be abstracted, and specific questions may simply not be answerable with the available data.

Administrative data, in contrast to clinical data, do not have to be prospectively or retrospectively abstracted. The data are readily available and capture a wide range and large number of patients (thus sidestepping the selection bias that can occur in smaller clinical datasets), meaning that the statistical power of a study is rarely in question. However, there are drawbacks to administrative data as well, mainly attributable to the fact that these datasets were created not for research purposes but for "administrative" purposes such as billing. As a result, problems with administrative data include the imprecision of diagnosis and treatment codes, the absence of data points that are of interest (e.g., patient clinical risk factors, functional status, and quality-of-life measures), and possible over-coding (i.e., coding of more complex diagnoses for the sake of greater reimbursement).17

Although neither clinical nor administrative datasets represent perfect sources of data for research and QI purposes, they remain vital to the identification of QI/PI problems. In this section, we profile several of the commonly used clinical and administrative datasets that are relevant to surgeons and outline the strengths and weaknesses of each.

Clinical Datasets

The American College of Surgeons National Surgical Quality Improvement Program

Among the foremost clinical registries in medicine, the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) is a nationally validated, risk-adjusted, outcome-based approach to measuring and improving surgical care. ACS-NSQIP provides a formal structure for hospitals to collect data on preoperative patient characteristics and 30-day postoperative outcomes. The data are risk-adjusted for patient- and procedure-related factors, benchmarked against other hospitals in the program, and reported back to the sites to help direct quality improvement efforts. ACS-NSQIP emerged from the National VA Surgical Risk Study (NVASRS), founded in 1991, which revealed significant improvements in surgical outcomes.18 The concept was expanded to hospitals in the private sector in 1999 as ACS-NSQIP, supported by a patient safety grant from the Agency for Healthcare Research and Quality (AHRQ). ACS-NSQIP has grown steadily over the years, now counting over 600 participating hospitals both in the United States and internationally. In 2002, the Institute of Medicine (IOM) named ACS-NSQIP "the best in the nation" for measuring and reporting surgical quality and outcomes. More recently, ACS-NSQIP was awarded the 2014 John M. Eisenberg Patient Safety and Quality Award in the category of Innovation in Patient Safety and Quality at the National Level.

Data collection in ACS-NSQIP is performed by a Surgical Clinical Reviewer (SCR), who tracks preoperative through 30-day postoperative data on randomly sampled patients. The ACS provides training for the clinical reviewer and ongoing education opportunities, and ACS-NSQIP clinical staff conduct random audits to ensure data are accurately collected and reported in the data registry. Blinded, risk-adjusted information is shared with all hospitals, allowing them to benchmark their complication rates and surgical outcomes nationally. A recent publication in Annals of Surgery found that the majority of ACS-NSQIP hospitals achieve improvements in mortality, morbidity, and surgical site infections over the course of eight years in the program.19 Two other studies examining ACS-NSQIP hospital participation and improvement in postoperative outcomes did not find any association between the two,20,21 but these studies had notable methodological flaws, relying on claims data and focusing on overall rates of complications, which obscures the fact that QI projects usually target specific problems.22 Perhaps what these studies do emphasize is that simply measuring outcomes does not guarantee meaningful quality improvement; hospitals must also have the knowledge, infrastructure, and culture to address their problem areas in a meaningful way. In addition to providing data to hospitals, ACS-NSQIP maintains resources to guide hospital QI, including monthly conference calls, best practice guidelines for the prevention and management of many postoperative complications, and opportunities for hospitals to form collaboratives based on geographic location, particular diseases, or surgical quality problems. Clinical data collection such as ACS-NSQIP's demands more resources and effort than administrative data collection, but it offers increased accuracy in the identification of patient outcomes.
A recent study comparing Medicare and ACS-NSQIP data revealed remarkably poor concordance between the two datasets when examining hospital risk-adjusted odds ratios for postoperative complications.23 Additional barriers to implementation of ACS-NSQIP on a broader scale include the high annual cost of participation, the cost of hiring SCRs to abstract data, and the perception of hospitals that ACS-NSQIP does not offer any advantages beyond the datasets they already possess.
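Risk-adjusted benchmarking of the kind described above is commonly summarized as an observed-to-expected (O/E) ratio: the number of complications a hospital actually observed divided by the number a risk-adjustment model predicted for its case mix. The following is a minimal sketch in Python; the patient records and predicted risks are invented for illustration and are not ACS-NSQIP data or its actual risk model.

```python
# Hypothetical illustration of observed-to-expected (O/E) benchmarking:
# each patient has an observed outcome (1 = complication occurred) and a
# model-predicted probability of that outcome given their risk factors.
patients = [
    {"complication": 1, "predicted_risk": 0.30},
    {"complication": 0, "predicted_risk": 0.10},
    {"complication": 0, "predicted_risk": 0.05},
    {"complication": 1, "predicted_risk": 0.20},
    {"complication": 0, "predicted_risk": 0.15},
]

observed = sum(p["complication"] for p in patients)    # events actually seen
expected = sum(p["predicted_risk"] for p in patients)  # events the model predicted
oe_ratio = observed / expected

# O/E > 1 suggests worse-than-expected performance; O/E < 1 suggests better.
print(f"Observed: {observed}, Expected: {expected:.2f}, O/E: {oe_ratio:.2f}")
```

In practice the expected count comes from a risk model fit across all participating hospitals, which is what makes the comparison "benchmarked" rather than a raw complication-rate ranking.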


The National Cancer Data Base

The National Cancer Data Base (NCDB) is a joint project of the Commission on Cancer (CoC) and the American Cancer Society (ACS).24 The NCDB houses data over a longer time period than most clinical datasets (1985 to the present) from more than 1,500 CoC-accredited hospitals in the United States and Puerto Rico; in fact, the NCDB is the largest cancer registry in the world. As a facility-based clinical surveillance resource, the NCDB may be used for research but is fundamentally designed for quality improvement (QI) purposes, allowing hospitals to benchmark their performance on process and outcome measures against other CoC-accredited hospitals.25 NCDB data are collected by trained data abstractors and represent approximately 70% of all newly diagnosed cancers in the United States,25 and the database is well respected among oncology specialties and institutions. Investigators associated with CoC-accredited programs may obtain NCDB data, containing de-identified, HIPAA-compliant patient demographics, comorbidity scores, hospital characteristics, disease stage, and treatment specifics. Disease-specific information is also available; for example, the breast cancer file contains "Estrogen Receptor (ER) assay" and "HER2: Immunohistochemistry Test Interpretation," which may be important in the analysis and interpretation of various process measures and outcomes. Outcomes recorded in the NCDB include 30-day mortality, 90-day mortality, number of months from diagnosis to last contact or death, and overall "vital status," which conveys whether the patient is dead or alive. The NCDB can help clinicians and researchers identify gaps in quality of care and outcomes across multiple cancer types,26-31 at both the patient and hospital levels, making it a valuable tool in the assessment of quality in cancer care.

The Surveillance, Epidemiology, and End Results Program

Cancer diagnoses are tracked through the Surveillance, Epidemiology, and End Results Program (SEER), a project developed by the National Cancer Institute (NCI) that aims to capture the epidemiology of cancer. Started in 1973, SEER is a prospectively maintained, region-based database that was originally limited to seven regions. It has since grown to seventeen regional databases that include large metropolitan areas and, in some cases, entire states. SEER represents approximately 28% of the U.S. population.32 Ethnic and racial minorities are oversampled in SEER data, offering a powerful tool to monitor cancer incidence among specific groups that may otherwise be overlooked in other large datasets. Patient demographics, disease variables, and treatment codes are available. In addition to date of last follow-up or death and vital status, SEER also provides the ICD-10 code for underlying cause of death, allowing investigators to explore questions regarding disease-specific survival (DSS) in addition to overall survival (OS). As with any large database, users must bear in mind that some amount of error may be present in the coding; in the case of SEER, miscoding of PSA values was found during a quality check, prompting the development of a new protocol to assess the rate and impact of the errors.33
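The OS-versus-DSS distinction that a cause-of-death field enables can be illustrated with a toy computation. In the Python sketch below, the record layout, field names, and codes are invented for illustration and are not actual SEER variables.

```python
# Toy sketch: counting overall-survival (OS) vs disease-specific survival
# (DSS) events from registry-style records. A death from any cause is an
# OS event; only a death attributed to the disease of interest is a DSS
# event (other deaths are treated as censored for DSS analyses).
records = [
    {"vital_status": "dead",  "cause_of_death": "C50"},  # breast cancer
    {"vital_status": "dead",  "cause_of_death": "I21"},  # myocardial infarction
    {"vital_status": "alive", "cause_of_death": None},
    {"vital_status": "dead",  "cause_of_death": "C50"},
]

disease_codes = {"C50"}  # ICD-10 codes counted as death from the disease

os_events = sum(r["vital_status"] == "dead" for r in records)
dss_events = sum(
    r["vital_status"] == "dead" and r["cause_of_death"] in disease_codes
    for r in records
)

print(f"OS events: {os_events}, DSS events: {dss_events}")
```

The second record illustrates why the two measures diverge: a cardiac death counts against overall survival but not against disease-specific survival.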

Administrative Datasets

The Centers for Medicare & Medicaid Services

The Centers for Medicare & Medicaid Services (CMS) make their administrative data available for research purposes through the Research Data Assistance Center (ResDAC), a CMS contractor providing free access to academic, government, and non-profit researchers.34 Researchers interested in inpatient data may use Inpatient claims files or Medicare Provider Analysis and Review (MedPAR) files. Inpatient claims data do not include skilled nursing facilities and contain one record per claim. These data are often more complex and require more programming manipulation for analysis than the fixed-format MedPAR file. For this reason, most researchers opt for the MedPAR file, which includes a single, fixed-length record for each inpatient or skilled nursing facility stay. MedPAR data on charges are more highly aggregated and generally easier to analyze.

As over 98% of adults 65 years or older are enrolled in Medicare, data from CMS yield information from a large, national population of patients with reliable, valid demographic information and allow for detailed subgroup analyses without loss of statistical power.35 In addition, the data can be combined with other datasets, and the data files are available relatively promptly compared to other data sources, with files for a given calendar year becoming available approximately six months later. Drawbacks of using Medicare data include the aforementioned limitations of administrative data in general, as well as the fact that data points are limited to the benefits covered by the Medicare program (e.g., not all beneficiaries will have information related to Part B or Part D services due to the different enrollment options available).35

The Healthcare Cost and Utilization Project

The Healthcare Cost and Utilization Project (HCUP) brings together several healthcare databases and related software tools and products through a Federal-State-Industry partnership sponsored by the Agency for Healthcare Research and Quality (AHRQ).36 HCUP databases include the Nationwide Inpatient Sample (NIS), the Kids' Inpatient Database (KID), the Nationwide Emergency Department Sample (NEDS), and others amenable to health services research. The NIS is the largest all-payer inpatient database, containing data from more than 7 million hospital stays annually.37 Administrative billing data, primary and secondary diagnosis codes, procedure codes, total charges, primary payer, and length of stay are collected retrospectively. These data are limited to a patient's inpatient stay, however, and do not lend themselves easily to studies of long-term outcomes, though the NIS is a good tool for investigators interested in hospital costs or trends in procedure use. Investigators must also keep in mind that patient demographics and hospital characteristics are not housed in this dataset, and thus patient-level risk adjustment cannot be performed.

Linked SEER-Medicare Data

More nuanced data can be obtained by linking datasets, such as SEER data with Medicare enrollment and claims files. The SEER-Medicare database contains de-identified data on individuals with cancer and a random 5% sample of Medicare beneficiaries who reside in a SEER area but do not have cancer.38 Linkages between the two databases are updated biennially, with the last linkage taking place in 2014. At that time, 93% of patients 65 years or older in SEER were able to be matched to the Medicare enrollment file. To gain access to this dataset, investigators must obtain approval from the NCI and pay a fee for each file. All publications using this dataset are tracked by the NCI and posted online, allowing easy access to completed work. One must be aware of the limitations of this dataset (e.g., there are no data regarding services that are not covered by Medicare), but if investigators adjust the question at hand accordingly, powerful, risk-adjusted studies addressing health disparities,39,40 quality of care,41,42 and cancer costs43,44 can be performed.
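Conceptually, linkage of this kind is a join of registry records to claims on a shared patient identifier. The toy Python sketch below makes that concrete; every identifier, field, and value is invented, and the real SEER-Medicare linkage uses far more elaborate matching criteria than a single key.

```python
# Toy illustration of record linkage in the spirit of SEER-Medicare:
# registry records and claims are joined on a shared patient identifier.
registry = {
    "P001": {"site": "colon",  "stage": "II"},
    "P002": {"site": "breast", "stage": "I"},
    "P003": {"site": "lung",   "stage": "III"},
}
claims = [
    {"patient_id": "P001", "service": "colectomy",    "paid": 18_000},
    {"patient_id": "P001", "service": "office visit", "paid": 150},
    {"patient_id": "P003", "service": "lobectomy",    "paid": 25_000},
]

# Attach registry data to each claim (an inner join on patient_id);
# registry patients with no claims, like P002, simply do not appear.
linked = [
    {**claim, **registry[claim["patient_id"]]}
    for claim in claims
    if claim["patient_id"] in registry
]

for row in linked:
    print(row["patient_id"], row["site"], row["service"], row["paid"])
```

The asymmetry in the joined result mirrors a practical caveat of linked data: the claims side only reflects covered, billed services, so absence of a claim is not absence of care.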

Surgical Specialty Datasets

Many specialty-specific clinical datasets have followed the success of ACS-NSQIP. Some have strong ties to the ACS, such as the National Trauma Data Bank (NTDB), the largest aggregation of trauma data ever assembled.45 Datasets are available to researchers, and annual reports similar to ACS-NSQIP's semi-annual reports are issued to hospitals containing information about both adult and pediatric trauma patient demographics and outcomes. Similarly, the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP), which represents a joining of the ACS and the American Society for Metabolic and Bariatric Surgery, collects data relevant to bariatric surgery. This program also accredits bariatric programs based on procedure volume in addition to various other structural and process measures.46 Pediatric surgery quality and research data are also captured in the ACS NSQIP Pediatric program. ACS-NSQIP Pediatric started as a pilot program in 2008 with four sites; Phase II beta testing at 40 additional sites was recently completed, and the program is currently enrolling children's hospitals amenable to participation.47

There are also robust surgery-specific datasets housed outside of the ACS. One of the longest-standing outcomes registries, the Society of Thoracic Surgeons' (STS) data registry, was initiated in 1989 to track national, real-world outcomes in adult cardiac surgery.48 It is now the largest cardiothoracic surgery registry in the world, with well over 1,000 centers, representing over 95% of cardiothoracic surgery centers nationwide, and over 4.1 million patient records. The database is now supervised by the STS Workforce on National Databases, which oversees all aspects of registry operations. Using information gathered in the STS Database, the first national risk models for coronary artery bypass graft (CABG) surgery were developed, and risk-adjusted benchmarking was provided for local practices. Hospitals may track trends, note areas in need of improvement, and compare local results against national risk-adjusted benchmarks. New registries covering general thoracic surgery and congenital heart surgery have been added, as has the STS and American College of Cardiology Transcatheter Valve Therapy Registry (TVTR).

In vascular surgery, the Society for Vascular Surgery (SVS) contributed to the development of ACS-NSQIP's vascular-targeted modules, allowing hospitals to collect data on high-risk, high-volume procedures including carotid endarterectomy and stenting, open and endovascular abdominal aortic aneurysm repair, and open and endovascular peripheral revascularization. The SVS Patient Safety Organization (SVS-PSO) developed a collaborative of quality groups known as the Vascular Quality Initiative (VQI) in order to improve the quality, safety, effectiveness, and costs of vascular healthcare. The VQI collects self-reported perioperative and one-year follow-up data through the cloud-based M2S database platform; the VQI provides benchmarked reports and determines best practices.49 The SVS-PSO has also partnered with the FDA and device manufacturers to help meet regulatory requirements by developing several device-evaluation and surveillance quality improvement projects. These medical device evaluation projects are available to VQI member centers.

The vast majority of surgical specialties have now developed clinical data registries.
Surgeons treating burn injuries may contribute to and utilize data from the American Burn Association's National Burn Repository (NBR), the data registry for most burn centers in the United States and Canada.50 In 2002, the American Society of Plastic Surgeons launched its self-reported data registry, the Tracking Operations and Outcomes for Plastic Surgeons (TOPS) database.51 The American Academy of Ophthalmology recently developed the Intelligent Research in Sight (IRIS) Registry52 as the first comprehensive EHR-based eye care registry. In urology, the American Urological Association Quality Registry (AQUA) is designed to measure and report healthcare quality and patient outcomes specifically for prostate cancer.53 There are additional state and regional urology clinical data programs, such as the Michigan Urological Surgery Improvement Collaborative (MUSIC) and the University of California San Francisco's Cancer of the Prostate Strategic Urologic Research Endeavor (CaPSURE). As more specialty databases are established and continue to emerge, researchers will have ever more detailed, rich data for outcomes, quality improvement, and health services research.

METHODOLOGIES IN PROCESS IMPROVEMENT

Large datasets may help to identify distinct problems in surgical quality, but responding to such problems requires knowledge of process improvement methodology. Any discussion of process improvement should acknowledge Dr. W. Edwards Deming, who held a PhD in physics and developed an expertise in applied statistics. Sent to Japan to aid in efforts to rebuild the nation after World War II, Deming helped Japanese business managers correct persistent quality problems, and these efforts are often credited with being instrumental in the tremendous post-war growth of Japanese industry.54 America was slower to adopt Deming's philosophy but eventually came to recognize the wisdom of his fourteen points, many of which can be seen in the most commonly used PI methodologies today.55

What follows is a brief overview of the most commonly employed PI methodologies in healthcare. These were originally employed in large industrial corporations, such as Motorola and Toyota, but were adapted and then adopted in healthcare to improve processes of care. It is thought that higher quality of care (i.e., increased patient safety and better patient outcomes) should follow from these improved processes. Process improvement methodologies such as Plan-Do-Study-Act (PDSA), Six Sigma, Lean, Lean Six Sigma, and the DMAIC framework employ different vocabularies, but they are all similar in their stepwise, data-focused approach to improving processes of care.

PDSA, Six Sigma, Lean, and Lean-Six Sigma


The roots of the PDSA method can be found in Deming's early work in Japan.56 It is a basic four-stage cycle that is meant to structure an iterative approach to change. Prior to entering the cycle, the FOCUS approach should be applied: the team should Find a process to improve, Organize a team that knows the process, Clarify current knowledge of the process, Understand causes of process variation, and Select the process improvement.56 After these are addressed, the team can engage in PDSA. In the Plan stage, an improvement effort is identified; in Do, the change is implemented and tested; in Study, the effects of the change are evaluated; in Act, the QI/PI team identifies possible adaptations and next steps.56 This approach is easily applicable to healthcare, perhaps in part due to its similarities to the scientific method (Plan as hypothesis formation, Do as data collection to test the hypothesis, Study as data analysis and interpretation, and Act as identifying areas for future study).57 PDSA cycles can either function as a standalone framework for guiding healthcare improvement projects or can be used in conjunction with other approaches like Six Sigma and Lean.

Six Sigma is a set of tools and techniques that are used for QI/PI.
It is still commonly employed by manufacturing and business firms seeking to reduce variation and address complex problems within large organizations.58 The concept was initially introduced by Bill Smith at Motorola in 1986, and Jack Welch made it central to his business strategy at General Electric in 1995.59 The driving objective of Six Sigma is to create near-perfect products and services for customers through the elimination of variation and defects, thus driving production failures to six standard deviations (sigma) from the mean.60 More specifically, the three main assertions of Six Sigma as applied to healthcare are: (1) continuous efforts to achieve stable and predictable process results are of vital importance to success; (2) processes have characteristics that can be measured, analyzed, improved, and controlled; and (3) achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.61 Lean manufacturing or Lean Production, often referred to simply as “Lean,” is a systematic method for the elimination of waste within a manufacturing system and was first developed within the Toyota Production System (TPS) in the 1950s. Lean centers on a concern with waste (“muda”), or
processes that fail to add value to a product. The five guiding principles of Lean thinking are as follows: 1) to articulate the ultimate “value” of a process, reflecting what the customer, not the provider, values; 2) to identify the “value streams,” or processes that add value to the product in question; 3) to create “flow” by breaking down the boundaries between organizational silos to allow for uninterrupted processes; 4) to respond to the “pull,” or demand, of the customers over those of the suppliers; and 5) to continuously strive for “perfection,” emphasizing that Lean is a way of thinking that needs to be embedded within the culture of an organization.62 When applied to healthcare, there are specific challenges to employing this methodology, including 1) clearly identifying patient pathways and flows, and 2) actually finding the necessary resources to provide a truly Lean model (i.e., though a better pathway may be found, the resources to implement it may simply not exist).63 Despite this, Lean methodology has been successfully applied at many institutions to improve quality and efficiency. Entire healthcare organizations have adopted it successfully; Virginia Mason Medical Center, for example, embraced the Lean methodology and built an organizational culture of improvement and innovation over the course of the early 21st century.64 The concept of combining Lean with Six Sigma was first introduced by Michael George and Robert Lawrence in a book entitled Lean Six Sigma: Combining Six Sigma with Lean Speed.65 Lean Six Sigma allows organizations to keep both quality of care and cost-efficiency as priorities, pushing for waste elimination through Lean and improvement of processes through the Six Sigma tools.
Lean Six Sigma relies on a collaborative team effort to improve performance by systematically removing the eight kinds of waste: time, inventory, motion, waiting, over-production, over-processing, defects, and skills.66 The three methodologies share a common ground in that each conceptualizes production as a complex interaction of individual activities. In addition, each seeks to pinpoint the failure points in processes and enact a productive response to drive toward the goal of efficient, effective production.63 The parallels and application of such principles to providing high-quality healthcare are immediately evident and have been shown to be successful in varied healthcare settings. Success with these
approaches, however, requires strong, invested leadership as well as engagement of employees in all components of the system.67 Six Sigma and Lean methodologies have been highly effective at improving processes in the manufacturing world and have been used in the healthcare setting since the 1990s. Challenges remain in implementing these methodologies within healthcare, including securing ‘buy-in’ from the stakeholders (for instance, providers, leadership, patients), the central importance of a team approach, and the willingness of team members to change daily practice and to adopt new and innovative ways to deliver healthcare. That PDSA, Six Sigma, Lean, and Lean Six Sigma can all produce clinically significant improvements in the quality of care for surgical patients has been demonstrated multiple times over.
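Six Sigma’s defect-rate framing can be made concrete with a small calculation. The sketch below is illustrative only (the defect counts are hypothetical and not from the source); it converts a defect count into defects per million opportunities (DPMO) and then into a short-term sigma level using the conventional 1.5-sigma shift.

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int = 1) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Short-term sigma level, applying the conventional 1.5-sigma shift.

    At the Six Sigma target of 3.4 DPMO this returns approximately 6.0.
    """
    process_yield = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(process_yield) + 1.5

# Hypothetical example: 12 prophylaxis-ordering errors in 8,000 orders
rate = dpmo(defects=12, units=8_000)
print(round(rate))                  # 1500 DPMO
print(round(sigma_level(rate), 2))  # ~4.47 sigma
```

The 1.5-sigma shift is the standard Six Sigma convention for long-term process drift; without it, 3.4 DPMO would correspond to about 4.5 sigma rather than 6.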

The DMAIC Framework
The DMAIC (Define, Measure, Analyze, Improve, Control) process is one of two frameworks followed when implementing Six Sigma projects and is used for projects aimed at improving an existing business process.68 The other framework, known as DMADV (Define, Measure, Analyze, Design, and Verify), is utilized when implementing new products or process designs. Given that the former is used much more often than the latter, our discussion will focus on DMAIC. DMAIC is a validated process improvement strategy that can be applied in multiple clinical settings to implement changes that eliminate errors in complex healthcare environments.58 The process always begins by clearly defining the problem, then systematically moves through the next four phases, with each phase acting as a milestone for the improvement project.69 The DMAIC roadmap offers a clear conceptual organizational framework for process improvement that is based in the use of data to drive improvement, and it clearly specifies the roles of the project leaders and project owners. Each phase can, in and of itself, be a complex undertaking, integrating statistical quality tools, techniques such as failure mode and effects analysis (FMEA), and statistical process control.60 The 5 phases of the DMAIC
process will be explored in more detail below; while the methodology may appear linear and explicitly defined, it should be noted that an iterative approach may be necessary for process improvement.
Define
The main purpose of the Define phase is to identify the gap in quality and articulate the project aim. It is also important to outline the goal of the project and the potential resources needed to meet it, and to develop a high-level project timeline. The Project Charter, the document that outlines the project’s problem, goals, scope, required resources, key metrics, and team members, is built upon aim statements that identify specific quantitative goals. While creating the project aims, the stakeholders who will be involved in the process and who will be affected by the changes being made should be identified. In general, a good DMAIC QI/PI project should address an important operational or clinical issue; have clear objectives that are tied to the organization’s overarching goals; be focused enough to be completed in 6-9 months; have a committed sponsor and process owner; have readily available data for measurement; and have an unknown solution. Tools that can be utilized in this phase include process flowcharts, stakeholder analyses, ‘voice of the customer’ gathering, and SIPOC (suppliers, inputs, process, outputs, and customers) diagrams. A SIPOC diagram, commonly referred to as a process map, summarizes the inputs and outputs of processes in table form.
These diagrams are used during the Define phase to (1) give people who are unfamiliar with a process a high-level overview; (2) reacquaint people whose familiarity with a process has become out-of-date due to process changes; and (3) help people in defining a new process.70 The team makeup should also be identified, including an executive sponsor (provides strategic oversight, addresses organizational project barriers), sponsors (responsible for the timely and successful implementation of the project, address departmental project barriers), a clinical sponsor (aids in reaching consensus on clinical decisions), a process owner (implements, controls, and measures project outputs and improvements), an improvement leader (acts as the methodology expert and is accountable for using DMAIC to complete deliverables), and the team members (contribute to the timely and successful implementation of the
project, contribute ideas, and help with data collection and analytics).71 Once the Define phase is complete, the project should have a well-organized, focused outline for all next steps. An example of effectively using the DMAIC process in a clinical setting is well described in an article by Toledo et al, in which length of stay (LOS) in liver transplant patients is addressed.72 As this is a nice example of a formal QI project, we will use its elements to illustrate each DMAIC phase. This particular project was undertaken because organ transplant centers must balance maintaining good postoperative outcomes with controlling cost in these challenging patients. During the Define phase of this study, a project charter was created that clearly defined the problem, scope, measures, resources, and schedule for the project. LOS, as an oft-used benchmark for both quality and resource utilization in transplant centers, was identified as the key outcome metric.
Measure
The Measure phase is the data-collection part of the process. In order to quantify improvement, a project must have a clear baseline for comparison. To avoid retrospective collection of data, it is critical to identify the key metrics at this stage. In general, there are three main types of measures. Outcome measures describe the overall performance of the process; input measures are items put into the process (resources, finances, technology); and process measures, which are taken at critical points within the process, provide data about the performance of the individual steps of the overall process being examined.73 The data collected can be either continuous (i.e. taking on a continuous range of values) or discrete (i.e. representing an individual count). The type of data collected can limit the choices of the graphical and statistical tools used in analysis.
During this phase it is also important to remember to validate the measurement system and to gather root causes. A process map is often created during this phase to graphically represent the steps of the process. These maps can help inform the selection of the measures and identify points where measurements can be taken.
In Toledo et al’s Measure phase, the team collected surveys and conducted interviews of key stakeholders. A process map was created that illustrated patient flow from time of transplant to discharge. This helped to identify delays in the process of discharging patients that affected length of stay.72 The team also collected objective data on patients undergoing liver transplantation, including length of stay as well as variables potentially associated with LOS. The LOS variable fulfilled many criteria of a good outcome metric, as it was specific, well defined, and unambiguous; in addition, viewed from the customer/patient perspective, this variable matters.
Analyze
The Analyze phase of the DMAIC methodology is extremely dependent upon how well the Measure phase was executed. The main purpose of this step is to identify, validate, and select root causes of the problem by examining the collected data. Data can be analyzed quantitatively and qualitatively, depending upon the type of data collected. All causes of errors are important; however, root causes that account for the largest proportion of errors are the best targets for intervention, since they have the largest potential benefit. Tools such as the “5 Whys” technique (iteratively asking “Why” to get to the root of a problem) are used to encourage team members to diligently drill down to the root causes of errors and variations, which are the most impactful, rather than settle for identifying superficial and less important causes of errors. Within the liver transplant project, the various data were analyzed qualitatively and quantitatively. The collected surveys and interviews were analyzed as part of a stakeholder study to identify potential champions for the project. Univariate and multivariate analyses of the quantitative data revealed the various factors that were correlated with length of stay.
These analyses enabled the implementation of new interventions during the Improve phase.
Improve
Once root causes of the problem are identified in the Analyze phase, the Improve phase allows for brainstorming ideas to address the issues. It is crucial to identify, test, and implement solutions to the problem in part or in whole. Multiple solutions should always be produced, and each should be
evaluated for simplicity of implementation and potential impact. Interventions that have the highest yield for the lowest effort or cost should be prioritized, as these are often the most impactful. Interventions that are low cost should also be prioritized, even if low-impact, since the downside is negligible. Multiple solutions may be implemented together or sequentially; however, a plan for implementation should be developed first. As previously mentioned, the DMAIC process is fluid, and therefore as solutions are implemented, their performance should be appropriately remeasured to determine future directions. The investigators in the liver transplant study identified potential root causes of increased LOS by using techniques such as brainstorming, an affinity diagram (a project management tool used to organize ideas), a cause-effect tree, and the 5 Whys. Once the interventions were devised on the basis of the identified root causes and their potential impact was analyzed, they were pilot tested on future transplant patients. The median LOS for patients in the pilot study improved significantly with the interventions. On the basis of this success, the team decided to continue the interventions for all subsequent patients.72
Control
This phase, which is critical to sustaining any gains achieved by the QI project over the long term, focuses on the maintenance of the initiated improvements. The Project Charter becomes important in this phase, as it helps confirm whether the goals of the project were addressed. Communication of the results to the stakeholders is also essential to ensure that gains will be sustained. To confirm the sustainability of the newly implemented solutions, continued monitoring of performance is useful. This will also help determine whether new problems have been created.
Control mechanisms, graded from weakest to strongest, include: communication, training, vigilance, checklists, standard procedures, monitoring, statistical process control, and mistake proofing.71 Relying on continual communication engages staff, creates awareness, and happens at low cost, but it is a weak method of control because it requires multiple layers of both active and passive communication, and people may easily be missed. Training also engages staff, teaching them the correct processes of care, but it requires
continued assessment and comes at a high cost in time and sometimes money. Vigilance usually takes place on a smaller, personal scale, in which staff take it upon themselves to uphold QI efforts; unfortunately, it relies on proper training, does not prevent errors, and can often succumb to fatigue. Checklists, which will be discussed in more detail later in the monograph, can be implemented at low cost and can serve as a standard reference and record; yet even if a checklist accurately reflects the process, individual use may not always be up to standard, and it does not, in and of itself, prevent errors. Standard operating procedures, which document the standard process, help to control a process by reducing human variation and providing consistency. These procedures, however, require a high level of maintenance for compliance. Monitoring, in which an overarching body measures outcomes and provides feedback, reinforces accountability on the part of an individual or unit and is helpful in providing data that can be acted upon. It does, however, require timely review and interpretation of results with subsequent corrective action. Statistical process control consists of measurements taken over time, with the identification of significant variation; requiring disciplined measurement, timely feedback, and corrective action, it is a less common form of control. It is important to note that none of these first seven methods of control actually prevents errors. Only mistake/error proofing, in which defects are in effect designed out, can actually result in a 0% error rate. While this is obviously desirable in process improvement, error proofing is very difficult to create and implement, often requiring a new technology. To monitor maintenance in the study conducted by Toledo et al, the team opted for the monitoring approach, continuing to measure LOS for a significant period after implementation.
In addition, there were still some patients who had longer LOSs and thus a new DMAIC team examined factors that contributed to excessive LOS in this subset of patients, illustrating the iterative nature of QI/PI.
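Statistical process control, mentioned above as one of the stronger control mechanisms, is typically operationalized with a control chart. The sketch below is illustrative only (the monthly LOS figures are invented, not taken from Toledo et al); it computes the limits of an individuals (XmR) chart from the mean moving range and flags points signaling special-cause variation.

```python
from statistics import mean

def control_limits(samples: list[float]) -> tuple[float, float, float]:
    """Individuals (XmR) chart limits: centre line +/- 2.66 x mean moving
    range, the usual stand-in for three-sigma limits on individual values."""
    centre = mean(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    mr_bar = mean(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

def special_cause_points(samples: list[float]) -> list[int]:
    """Indices of points outside the control limits."""
    lcl, _, ucl = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

# Hypothetical monthly mean LOS values (days); the final month spikes
monthly_los = [12.0, 12.4, 11.8, 12.2, 11.9, 12.3, 12.1, 16.5]
print(special_cause_points(monthly_los))  # [7] - only the spike is flagged
```

Points outside the limits are the "significant variation" the text refers to; in a real QI setting they would trigger timely review and corrective action rather than routine remeasurement.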
Conclusion
This discussion of some of the main methodologies and frameworks used in surgical quality improvement should provide a basic understanding for approaching QI/PI in healthcare. All of these methodologies rely on an organized, stepwise progression to improving processes: identification of a problem in a given process is followed by further investigation to discover the underlying root cause of the problem, which leads to identification of ways to improve the process based on the root cause. After implementation of these process modifications, the improvements are sustained through constant monitoring and through the iterative nature of the PI methodologies. Further illustration of the application of these methodologies can be seen in the proliferation of local QI initiatives throughout the country. In the next section, practical examples of hospitals making meaningful improvements in their quality of care are detailed by those who actually carried out the projects.
SUCCESSFUL LOCAL QI INITIATIVES IN SURGICAL PATIENTS
The following case studies are examples of the successful use of PI methodology to improve the quality of care provided to surgical patients. Each of these case studies was written by surgeons, trainees, and PI experts who played significant roles in the success of each project.
Catheter Associated Urinary Tract Infection Process Improvement (University of Wisconsin)
Background
Catheter-associated urinary tract infections (CAUTIs) are the most common healthcare-associated infection (HAI) in the United States. The Centers for Disease Control and Prevention estimates that CAUTIs account for 30% of HAIs and >13,000 deaths in US hospitals annually.74,75 In addition, CAUTI contributes to patient morbidity by increasing hospital length of stay and has a substantial impact on costs.76 Furthermore, CAUTI may be the most preventable HAI, with estimates that up to 70% of catheter-associated infections are preventable.77 The Centers for Medicare and Medicaid Services (CMS) began a program focused on the elimination of preventable HAIs as part of their pay-for-performance program in 2008. As such, a program that eliminates reimbursement for the cost of CAUTI treatment was initiated.78 At the University of Wisconsin Hospital, we began a process improvement project to eliminate CAUTI hospital-wide in 2011. Using the FOCUS-PDCA process improvement construct, we began by collecting and analyzing data. Through this process we found 4.7 CAUTIs per 1000 indwelling catheter days over a 6-month period, with 85 CAUTIs reported. Of the 22 hospital units, 9 were found to be underperforming in comparison to the National Database of Nursing Quality Indicators (NDNQI) benchmarks.
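The rate quoted above follows the standard device-associated infection formula (infections ÷ device days × 1,000). A minimal sketch, with the catheter-day denominator back-calculated for illustration rather than taken from the source:

```python
def rate_per_1000_device_days(infections: int, device_days: int) -> float:
    """NHSN-style device-associated infection rate."""
    return infections / device_days * 1000

# 85 CAUTIs at 4.7 per 1,000 catheter days implies roughly 18,085
# catheter days over the 6-month baseline (illustrative back-calculation)
print(round(rate_per_1000_device_days(85, 18_085), 1))  # 4.7
```

Normalizing by device days rather than patient days is what allows units with very different catheter utilization to be compared against NDNQI-style benchmarks.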
Therefore, we sought to improve CAUTI rates at our institution by implementing evidence-based protocols, improving utilization of the electronic medical record (EMR) with regard to urinary catheters, and educating all providers on the CAUTI problem. Our root cause analysis led us to begin the process by focusing on the implementation of a catheter removal and bladder management protocol.
The Beginning: Developing a team and collecting data

An interdisciplinary CAUTI team was developed to evaluate practice patterns in the management of urinary catheters hospital-wide. The team consisted of clinical nurse specialists, physicians, an infection control practitioner, nursing informatics staff, a quality improvement analyst, and administrative support. Current practices were evaluated by reviewing all patients with urinary catheters on one surgical and one medical unit over a one-month period. Symptomatic and asymptomatic CAUTIs were identified using the 2012 National Healthcare Safety Network criteria.
Urinary Catheters on a Surgical Unit
The Healthcare Infection Control Practices Advisory Committee (HICPAC) recommends removal of urinary catheters as early as possible in the postoperative period, preferably within 24 hours.79 Therefore, on surgical units the primary goal was catheter removal by postoperative day 1. The multidisciplinary team was educated regarding the goal, about the benefits of intermittent catheterization for urinary retention as compared with indwelling catheters, and about the fact that thoracic epidural catheters do not necessitate urinary catheter continuation. Prior to implementation of strategies to improve CAUTI rates, 47% of patients on the surgical unit had their catheters removed by postoperative day 1, and 53% of those patients required intermittent catheterization after catheter removal.
Intervention
First, existing institutional catheter removal and bladder management protocols were modified to reflect evidence-based practices. Input from clinical nurse specialists and nurses ensured the protocols were straightforward and easy to follow. Physician approval was obtained from both medical and surgical physicians, and the protocols were standardized hospital-wide.
The EMR was utilized to add the protocol to pre-existing order sets and to link the protocol to the order in the chart, and an icon was created to identify patients with urinary catheters and those on the bladder management protocol.80 Each unit identified unit-based CAUTI champions to act as leaders and educators around the CAUTI protocol. Over 60 clinical nurses, 13 nurse managers, and 12 clinical nurse specialists attended a 3-hour educational program and acted as CAUTI champions. A CAUTI toolbox was also created on the hospital intranet site. The toolbox was not only a resource for nurses looking to review the protocol and
other CAUTI-related educational materials; it also served as a resource for patient education material on urinary catheters and CAUTI.80 In order to ensure the initiative was implemented successfully, the Quality and Safety clinical nurse specialist participated in daily rounds on all patients with urinary catheters for the first two weeks of the initiative. Over the next 2 weeks, the unit-based clinical nurse specialists and the nurses took a more active role in addressing urinary catheters with patients, with physicians, and at interdisciplinary rounds. Catheter-specific rounding was no longer needed by week 4, as EMR reports indicated that catheters were being removed appropriately. To provide regular feedback and track progress with catheter removal, monthly unit-based scorecards outlining catheter utilization and CAUTI rates continue to be generated.
Outcomes
An easy-to-use, standardized urinary catheter management protocol was successfully implemented hospital-wide. Daily conversations about urinary catheter duration and CAUTI are now prevalent, and nursing staff are empowered to ask about catheter necessity and the use of intermittent catheterization in lieu of placing an indwelling catheter. Average urinary catheter days decreased from approximately 9 in 2011 to 3 in early 2013. While not statistically significant, CAUTI rates and device utilization have been trending down since the implementation of the catheter utilization and bladder management protocol (Figure 1).
Conclusions
In summary, development of an interdisciplinary team is an important first step in quality improvement initiatives. The FOCUS-PDCA cycle was utilized to identify an area for improvement. Development of protocols based on evidence-based guidelines, obtaining nursing and physician approval, and standardization of protocols are essential for successful prevention efforts.
Clinical nurse specialist champions are instrumental in rolling out initiatives and need to be provided with appropriate resources and tools to educate both nursing and physician colleagues. Finally, frequent assessment of both protocol compliance and the outcome of interest is essential for continued initiative
success. To further our work, we have implemented urinary catheter insertion and maintenance protocols and have developed and implemented a protocol that standardizes the use of urinary cultures. The addition of these processes has resulted in a significant reduction of CAUTI at our institution. Using the tools of process improvement, we have been able to become compliant with all measures and decrease CAUTI to acceptable levels.

Venous Thromboembolism Quality Improvement Project (Northwestern Memorial Hospital)
Background
Venous thromboembolism (VTE), which includes deep vein thrombosis (DVT) and pulmonary embolism (PE), can occur after 1% of major surgeries.81-83 With 300,000 to 600,000 new cases and 100,000 to 300,000 deaths per year,84 VTE is a common and potentially preventable cause of postoperative morbidity and mortality.85 VTE rates can be reduced by about 80% with proper chemical and/or mechanical prophylaxis, which can also reduce all-cause mortality in the surgical population.84,86,87 Though evidence-based guidelines provided by the American College of Chest Physicians (ACCP)79 advise clinicians as to the appropriate type, duration, and dose of prophylaxis, widespread underutilization of thromboprophylaxis has been found. The Epidemiologic International Day for the Evaluation of Patients at Risk of Venous Thrombosis in the Acute Hospital Care Setting (ENDORSE) study found that only 62% of 18,461 postoperative patients received the recommended prophylaxis (per the 2004 ACCP guidelines) in the acute care hospital setting.88 As part of the movement to enforce evidence-based prophylaxis guidelines and address this failure of implementation, VTE measures were developed as part of a Joint Commission and National Quality Forum project. Data collection began with hospital discharges from January 1, 2013. The six VTE measures are listed in Table 1. At Northwestern Memorial Hospital (NMH), baseline compliance rates with these measures were determined using data from the Enterprise Data Warehouse (EDW), which houses institutional data from the electronic medical record.
Team and Intervention Development
A formal DMAIC project was initiated to boost compliance with the VTE measures. Team members included an internist with QI expertise, a surgeon with QI expertise, a process improvement leader, a quality measures specialist, a data analyst, a surgical resident, a nurse education coordinator, a clinical pharmacist, and a hospital medicine team leader. The project was sponsored by the hospital’s Chief of Staff and an anesthesiologist. In the Measure and Analyze phases, the issues associated with failure of each measure were identified and addressed. For example, when examining VTE-1, the measure of patients receiving VTE prophylaxis or possessing documentation of the reason behind the lack of VTE prophylaxis, it was found that patient refusals made up a significant proportion of missed VTE prophylaxis doses. A total of fifteen interventions were put into place as part of the NMH VTE project. They were rolled out over the course of a year and a half; the first was implemented in May 2013, and the last two were implemented in September 2014. These interventions fell into one of two major categories: EMR additions/adjustments and clinician education. EMR interventions included order set adjustments or additions to facilitate the ordering of appropriate VTE prophylaxis through decision support and forcing functions. To prevent provider alert fatigue from inappropriate system alerts, the alerts were customized, incorporating lab exclusions and contraindications into the decision to fire any VTE-related alert. Educational interventions targeted the nurses and physicians. To address the patient refusals driving failures of VTE-1, nurses were coached to avoid offering a prophylaxis injection as if it were optional and to educate patients about the importance of thromboprophylaxis. In addition, it was found that many of the measure failures were coming from medical, rather than surgical, floors.
Many of the nursing failures were due to cultural differences between the units; thus the process improvement leaders undertook an educational campaign that took a “myth-busting” approach. For instance, many nurses believed that patients wearing sequential compression devices (SCDs) were at higher risk of falling, and that SCDs did not need to be applied if the patient’s INR was greater than 2 or if the patient was on a therapeutic heparin drip. The DMAIC team also held multiple discussions with medicine attendings, who felt that the new requirements
specified by the EMR changes were not consistent with their body of literature regarding the necessity of both mechanical and chemical prophylaxis in their patient population. In response to their concerns, the algorithm of the order set was adjusted such that the order for mechanical prophylaxis was only prompted if the patient had a risk factor for VTE and chemoprophylaxis was contraindicated.
Outcomes
Compliance was tracked on a monthly basis using EDW data, and performance was fed back to the respective units. A custom EDW-generated report was created so that nursing unit managers could receive feedback about failures on a daily basis and address them down to the level of specific providers and patients on a real-time, ongoing basis. Failures were discussed in project meetings. Prior to implementation, performance on several of the VTE measures was already high. Using data from the five months prior (January 1, 2013 to May 1, 2013), hospital-wide measure compliance ranged from 28.6% to 98.1% for measures 1-5. Compliance rates in the six months following the intervention period (October 1, 2014 to April 1, 2015) increased across the board for these measures, ranging from 88.3% to 99.2%. VTE-6, which measures the proportion of patients diagnosed with hospital-acquired VTE who did not receive VTE prophylaxis, decreased from pre-intervention to post-intervention.
Conclusion
This project was a truly multidisciplinary effort, given that it was hospital-wide and not restricted to a particular unit or floor. Buy-in from both surgery and medicine providers was paramount to its implementation, and one of the stumbling blocks encountered was the difference in prophylaxis culture between these two disciplines. This required adjustment of the EMR alerts fired in the charts of patients on the medical floors as well as different educational sessions for these units’ nursing staff.
The 1.5 year implementation period highlights the difficulty of coordinating a large hospital-wide project and the truly iterative nature of the DMAIC process. Strong leadership, invested clinical sponsors, and dedicated PI staff led the charge, and frequent data review and team meetings continue in order to extend the improvements that have been made thus far.
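Monthly measure-compliance tracking of the kind described above reduces to a simple aggregation of per-case pass/fail flags. The sketch below is illustrative only; the measure names follow the VTE-1 through VTE-6 convention, but the counts are hypothetical, not NMH data.

```python
from collections import defaultdict

def compliance_by_measure(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Percent of eligible cases passing each measure."""
    passed: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for measure, ok in records:
        total[measure] += 1
        passed[measure] += ok
    return {m: 100 * passed[m] / total[m] for m in total}

# Hypothetical monthly extract: 46/50 cases pass VTE-1, 27/30 pass VTE-2
records = ([("VTE-1", True)] * 46 + [("VTE-1", False)] * 4
           + [("VTE-2", True)] * 27 + [("VTE-2", False)] * 3)
print(compliance_by_measure(records))  # {'VTE-1': 92.0, 'VTE-2': 90.0}
```

In practice the same per-case records, filtered by unit and date, would drive the daily unit-manager reports described in the Outcomes section.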
Pneumonia Prevention in the Inpatient Surgical Ward (Stanford University)
Infection is a common complication after surgery, and pneumonia is one of the most common causes of postoperative infection.89 Postoperative pneumonia is associated with high morbidity, sometimes leading to acute respiratory distress syndrome, necrotizing infection, or septic shock. Mortality ranges from 20-50%, and the cost of care has been estimated to be as high as $46,400 per surgical patient with pneumonia.90-92 Pneumonia is among the index complications currently used to assess readmission rates for hospital profiling and reimbursement as endorsed in healthcare reform legislation.93 Much of the clinical and epidemiologic data regarding pneumonia prevention comes from other medical disciplines or the intensive care unit. In the surgical literature, guidelines and interventions aimed at reducing the postoperative risk of pneumonia have traditionally focused on the critical care population, specifically those on a mechanical ventilator. However, pneumonia frequently occurs in non-ventilated patients, who make up the majority of surgical admissions. Until recently, there has been a lack of focus on prevention strategies for the typical surgical patient cared for on the ward. In fact, prior to our initial publication describing a pneumonia prevention program94 implemented on the surgical ward of the Veterans Affairs hospital in Palo Alto (VA-Palo Alto), we were able to find only a single report dedicated to the prevention of postoperative pneumonia among patients on a surgical ward. The VA-Palo Alto is a tertiary hospital, a designated complex hospital, and a regional referral center in the VA system. Its case mix is similar to that of other tertiary care medical centers except for the exclusion of severely injured trauma patients. The pneumonia prevention quality improvement task force at VA-Palo Alto was formed in December 2006 to address pneumonia prevention for patients on its surgical ward.
The task force consisted of a surgeon (assistant chief of the Surgical Service), VA Surgical Quality Improvement Program (VASQIP) nurse, and the surgical ward nurse manager. After education and orientation of relevant hospital personnel, evidence-based ward-acquired pneumonia prevention strategies were implemented in April 2007. The program consisted of the following eight steps:

1. Initial and ongoing education of all surgical ward nursing staff about their role in pneumonia prevention;
2. Coughing and deep-breathing exercises with an incentive spirometer;
3. Twice-daily oral hygiene with chlorhexidine;
4. Ambulation with good pain control;
5. Head-of-bed elevation to at least 30 degrees and sitting up for all meals ("up to eat");
6. Quarterly discussion of the progress of the program and its results with the nursing staff;
7. Pneumonia bundle documentation in the nursing documentation; and
8. An automated computerized pneumonia-prevention order set in the physician order entry system.

Prior to implementation of the prevention program in 2007, the VA-Palo Alto averaged 8.7 cases of pneumonia per year in its VASQIP sample, derived from approximately 4000 total cases per year. After successful implementation of the standardized bundled program, we reported initial results demonstrating an 81% decrease in ward-acquired pneumonia occurrence in the 18 months following the introduction of the program. The program has since been in place without changes to its protocol. Updated results were published in 201495 and were compared to pneumonia rates captured in the multi-institutional American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database. Between 2008 and 2012, there were 18 cases of postoperative pneumonia among 4099 at-risk patients hospitalized on the VA-Palo Alto surgical ward, yielding a case rate of 0.44%, a 43.6% decrease from our pre-intervention rate. The pneumonia rates in all years were lower than the pre-intervention rate. In contrast, the overall pneumonia rate in ACS-NSQIP was 2.56%, which was 582% higher than the post-intervention rate on the VA-Palo Alto surgical ward. Using a national average of $46,400 in attributable health care cost per postoperative pneumonia and a benchmark of the 43.6% decrease in pneumonia rate achieved at our facility over the 5-year period since the introduction of the pneumonia prevention program, we estimated that a similar percent decrease in pneumonia occurrence at ACS-NSQIP hospitals would represent approximately 6118 prevented pneumonia cases and a cost savings of more than $280 million.

The introduction of the program and its successful integration into the standard care of all postoperative ward patients has led to a substantial and sustained decrease in the pneumonia rate at VA-Palo Alto. Indeed, there was only one case of pneumonia in 2012 among more than 750 analyzed postoperative patients hospitalized on the VA-Palo Alto surgical ward, indicating that the goal of zero cases of pneumonia in the surgical ward is achievable. A study from a large tertiary hospital in Boston, published after the initial description of the VA-Palo Alto pneumonia prevention program, indicates the reproducibility of our results; the authors used a pneumonia prevention program similar to ours and achieved a similar reduction in ward pneumonia rates.96 Which specific components of the program account for its success has not been evaluated. The continued education of the nursing staff and leadership about the program, with real-time results, appears to be a critical element of its success. We believe that the monthly report of any ward pneumonia cases, sent by VASQIP to nursing leadership and the surgeon champion, is essential to the success of the program. The other lesson learned was the necessity of automatically embedding the program in the standard ward admission order sets so that ongoing resident and surgeon education would not be necessary. The use of these auto-loading electronic order sets ensures that the nursing interventions and mouth care are ordered for all admissions. Lastly, the inclusion of the prevention bundle in the standard shift documentation completed by the ward nurses continues to reinforce compliance. The ongoing success of this quality improvement program as a joint venture between surgeons and nursing demonstrates the lasting effect such a program can have.
The program has continued to function successfully through changes in nursing leadership and personnel and still reflects a commitment to improved patient outcomes eight years after its inception.
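The cost-savings estimate above is simple arithmetic and can be verified directly. The short sketch below is illustrative only; the inputs (case counts, cost per case, and projected prevented cases) are the figures quoted in the text, and the code merely reproduces the reported rate and savings:

```python
# Back-of-the-envelope check of the pneumonia figures quoted above.
# All inputs are taken directly from the text; the arithmetic merely
# reproduces the reported post-intervention rate and savings estimate.

cases, at_risk = 18, 4099            # VA-Palo Alto surgical ward, 2008-2012
post_rate = 100 * cases / at_risk    # post-intervention pneumonia rate (%)

cost_per_case = 46_400               # estimated attributable cost per case ($)
prevented_cases = 6_118              # projected prevented cases at ACS-NSQIP hospitals
savings = prevented_cases * cost_per_case

print(f"Post-intervention rate: {post_rate:.2f}%")         # 0.44%
print(f"Projected savings: ${savings / 1e6:.0f} million")  # 284 million, i.e. >$280M
```

Multiplying the 6118 projected prevented cases by the $46,400 attributable cost per case gives roughly $284 million, consistent with the "more than $280 million" figure in the text.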


Academy for Quality and Safety Improvement (Northwestern Memorial Hospital)
Overview
The Institute of Medicine, the Accreditation Council for Graduate Medical Education, and other professional societies have called for integrating quality improvement (QI) into physician training.97-99 An important challenge to this goal is the relative lack of faculty with advanced knowledge and skills related to QI. Northwestern Medicine created the Academy for Quality and Safety Improvement (AQSI) to equip healthcare professionals with the knowledge and skills needed to effectively lead QI. AQSI is a professional development program that combines didactic training with team-based, experiential learning over a 7-month period. During the AQSI program, participants execute a QI project and apply the concepts and methods learned. AQSI participants have given high ratings to instructors, topics, and the overall program. A multiple-choice test and an adapted Quality Improvement Knowledge Assessment Tool (QIKAT) have shown statistically significant improvements as a result of the program. Post-program surveys show that participants continue to apply the knowledge and skills learned and have gone on to lead additional QI efforts.
History
AQSI began in 2012 as a collaborative effort among three entities at Northwestern Medicine. The Department of Medicine wished to embed quality improvement in practice by training a cadre of QI leaders among its faculty and healthcare providers, and so partnered with Northwestern University's Master's Program in Healthcare Quality and Patient Safety and Northwestern Memorial Hospital's Performance Improvement Department. AQSI combines the strengths of each, offering a foundation in both QI principles and process improvement methodologies, with the goal of empowering participants to execute QI projects and lead change in their clinical area. AQSI is unique in encouraging interdisciplinary team participation, with teams applying through a competitive application process to execute a project of their choosing. Now in its third year, the AQSI program has expanded to include teams from the departments of Medicine, Surgery, Emergency Medicine, and Neurology. A total of 19 teams comprising 89 individuals from diverse backgrounds (attendings, fellows, residents, nurses, pharmacists, administrators) have completed the program.
Program Structure
The AQSI program consists of class work and project work.
Class work: Program participants attend 11 classroom sessions, meeting every other week. Five sessions address core quality topics, while four are dedicated to teaching Six Sigma (DMAIC). For core quality topics, pre-work includes 1-2 short readings. For each DMAIC session, participants complete a brief Internet module developed by our Performance Improvement group for AQSI. Class sessions are highly interactive and use exercises to help teams apply lessons to their projects. All course material is accessible through our learning management system.
Project work: Each team receives guidance from a senior Clinical Mentor and a Process Improvement Coach. AQSI teams use a dedicated data analyst to run administrative database queries and have direct access to the Manager of Clinical Informatics to facilitate interventions. Teams present project updates twice during the AQSI program to the hospital's Improvement Council, receiving feedback from senior organizational leaders, including the Chief Medical Officer, Chief Nurse Executive, and the Vice President for Quality. Teams also present updates to one another during mid-term and final sessions.

Outcomes
We evaluated AQSI in several ways to assess the impact of our efforts and to enhance its value to learners over time:

• Surveys for each session: Participants complete surveys at the end of each classroom session to assess the effectiveness of the instructor and the strategies used. For the item "The session met the learning objectives," participants in the first 3 years rated all sessions a mean score of 4.1 or higher on a 5-point Likert scale (1 = strongly disagree; 5 = strongly agree). The mean across all sessions was 4.6. For the item "From this session, I have gained skills, abilities, and strategies for executing quality improvement projects," participants rated each session with a mean score of 3.8 or higher. The mean across all sessions was 4.4.

• Surveys upon completion of the program: Participants complete surveys immediately following the program to assess overall perceptions and solicit recommendations for further enhancement. Participants in the first 2 years (N=48) gave high ratings to the AQSI program. At least 98% of participants agreed with each of the items stating that "AQSI has improved my ability to..." (1) "design quality/safety improvement interventions," (2) "implement quality/safety improvement interventions," and (3) "evaluate the effectiveness of quality/safety improvement interventions." All participants agreed that "AQSI has equipped me with skills to improve quality/safety in my clinical area."

• Pre- and post-AQSI knowledge assessment: We created a 14-question multiple-choice test using the National Board of Medical Examiners guidelines and administered it to participants in year 2. We found significant improvement in performance (77% pre- vs. 82% post-AQSI; p=0.04).

• Adapted Quality Improvement Knowledge Assessment Tool (QIKAT): We adapted the QIKAT, a tool used to assess knowledge of key process improvement principles.4,5 The QIKAT consists of three short case scenarios, each followed by 2-3 questions. Short answers are scored using a standardized rubric. We found significant improvement in scores post-AQSI (11.5±2.7 vs. 13.7±3.2; p=0.01).

• Surveys of participants 6 months after AQSI: We administered a survey 6 months after program completion to assess integration of AQSI skills and involvement of participants in additional quality improvement projects and roles. The vast majority of AQSI participants feel that AQSI has had a lasting impact on their skills and professional development.

• Leape Ahead Award: AQSI received the American Association for Physician Leadership's 2015 Leape Ahead Award for dedication to improving the quality of health care by developing future health care leaders.


Conclusions
AQSI is a professional development program that equips physicians and other healthcare professionals with the knowledge and skills needed to effectively lead QI. In addition to directly training a number of residents and fellows, the program addresses the pressing need to provide faculty with the knowledge and skills needed to effectively teach QI to the next generation of physicians. The unique aspects of AQSI (team participation, experiential learning, interdisciplinary collaboration) make it a vehicle for facilitating cultural change and influencing the informal curriculum at our institution.

Resident Quality Improvement Training Program (Northwestern Memorial Hospital)
Overview
While the Academy for Quality and Safety Improvement program is an example of a formal QI/PI training program designed to train physicians and hospital staff from diverse backgrounds, various residency programs are developing QI training programs specifically targeting residents. The Accreditation Council for Graduate Medical Education (ACGME) requires that each resident "systematically analyze practice using quality improvement methods, and implement changes with the goal of practice improvement."100 The Northwestern Resident Quality Improvement training program is an example of a highly structured approach to fulfilling this requirement, combining formal didactic QI learning with the real-world experience of carrying out a clinically relevant QI project under close faculty mentorship.
History
Prior to 2008, trainees in Northwestern Memorial's General Surgery residency program were integrated into hospital QI committee projects with dedicated QI/PI staff. Because the QI committee meetings usually took place during regular business hours, however, residents found that their involvement was severely limited by the time constraints imposed by their clinical duties. A transition to resident-led projects was made in 2008. To equip residents to lead their own projects, they were required to undergo DMAIC training. Training sessions were staggered over the course of the year, running concurrently with the implementation of a clinical QI project of their choosing. In groups of two or three, residents identified a clinical quality issue and learned to Define, Measure, Analyze, Improve, and Control over the following twelve months. They were required to present their project and outcomes at the Department of Surgery's Grand Rounds at the end of the academic year. It became clear, however, that one year was insufficient to carry out a meaningful project, and the timeline was gradually lengthened to encompass the first four years of residency.
Program Structure
Didactics: The program is currently introduced to the Northwestern general surgery residents in the spring of their R1 year. At this point, they have had enough clinical experience to identify a quality issue that interests them. The subsequent DMAIC training is thus more readily applied to a nascent, yet concrete, idea. The DMAIC training takes place over two three-hour sessions led by trained process improvement experts employed by the hospital. These experts not only provide the didactic training but are closely involved in all resident projects, offering advice and support during the entire process. Structured modules and exercises allow for an interactive, practical approach to learning the DMAIC methodology.
QI Project: The didactics are geared toward equipping residents with the knowledge necessary to carry out a meaningful QI project using DMAIC methodology. Residents first clearly map out their project charter, understanding that the course of process improvement is rife with roadblocks and that the charter is an evolving document. Each project must have a faculty sponsor who has meaningful involvement, though the process is owned by the resident team. Quarterly check-in sessions take place through the R2, R3, and first half of the R4 year with faculty trained in QI/PI and the hospital PI team. Though clinical duties may push QI efforts toward the bottom of residents' priorities, these sessions motivate residents by offering a formal forum in which they can present their project's progress and receive help with any problems they may have encountered. All residents must present their QI project at the Department of Surgery's grand rounds during the spring of their R4 year. Traditionally, the process improvement team and hospital leadership are invited to and attend the residents' grand rounds presentations. For most of the Northwestern residents, these grand rounds fall during the first of two dedicated research years; this is timed so that they are able to construct a high-quality presentation of their hard work and to publish a manuscript on their project. To date, a total of 44 residents have completed the program, representing 16 QI projects (Table 2). Projects have proved durable; about 70% of the projects have been perpetuated in some capacity.
Conclusion
The Northwestern Resident QI Training program is an example of a formal program geared specifically toward training clinical residents in the fundamentals of QI/PI. This model may not only be successful in training a new generation of quality-minded surgeons at Northwestern, but may also find success in other institutions and in other fields of medicine. Fundamental to the program's endurance and evolution is a high level of dedication to QI on the part of the institution, faculty, and PI staff.


COLLABORATIVE APPROACHES
Evidence indicates that hospitals participating in regional collaborative improvement programs improve quality far more quickly than hospitals working on their own,101 and since the formation of the NNECDSG, QI collaboratives have proliferated at the state level. Though each collaborative has its own emphasis and design, all subscribe to the idea that improved quality can result from data transparency and strong provider networks. Each of the following sections was authored by several of the key members of the respective collaborative and highlights the unique aspects of its approach. The Michigan Perioperative Transformation Network (MPTN) section highlights the theoretical underpinnings of a successful collaborative. The Tennessee Surgical Quality Collaborative (TSQC) followed closely, with its own emphasis on the importance of interpersonal bonds. Washington State's Surgical Care and Outcomes Assessment Program (SCOAP) features a stand-alone data system, and that section highlights several successful QI collaborative projects. Newest to the collection is the Illinois Surgical Quality Improvement Collaborative (ISQIC), a learning collaborative that focuses on disseminating QI methodologies and providing support to its participating hospitals.
The Michigan Perioperative Transformation Network
Michigan has one of the first statewide surgical quality improvement collaboratives and remains one of the most robust and active statewide efforts in quality improvement. At the core of the collaborative is Blue Cross Blue Shield of Michigan (BCBSM). BCBSM has created innovative partnerships with physicians, physician organizations, and hospitals that are accelerating quality improvement and transforming healthcare. These "Value Partnerships" represent the engagement between BCBSM and providers to create highly functioning systems of healthcare. In Michigan, the transformation of perioperative care began with the MSQC (Figure 2).
This collaborative of 72 Michigan hospitals focuses on improving quality through the systematic sharing of best practices. Thereafter, anesthesiologists joined the effort with ASPIRE (Anesthesiology Performance Improvement and Reporting Exchange), and a partnership with the Michigan Value Collaborative (MVC) incorporated the cost-efficiency of surgical episodes. Collectively, these groups comprise the Michigan Perioperative Transformation Network (MPTN) (Figure 3). The most recent full year of data (2012) indicates that MPTN efforts yielded statewide savings of $193.7 million (across all payers, including Medicare and Medicaid). From 2008 to 2013, the entire portfolio of BCBSM collaborative programs generated $597 million in healthcare cost savings while improving and prolonging the lives of thousands of patients.

The three cornerstones of Michigan's successful clinical transformation network are 1) culture change, 2) data sharing, and 3) best practice implementation (Figure 4). A culture of transparency and collegiality is critical. In Michigan, this has been achieved by strengthening existing relationships formed through training and patient care collaborations (Figure 5). Such relationships facilitate shared learning that transforms clinical practice. In this optimized cultural milieu, hospitals and providers collect, share, and analyze data from well-established clinical registries. Granular clinical data from diverse hospitals statewide allow more robust analyses of care processes and patient outcomes. Guided by actionable data, providers assess and subsequently optimize care by developing and implementing best practices that yield better outcomes at a lower cost for many common and expensive diagnoses and procedures.

1) Culture change: As in Michigan, the emerging regional networks are built around personal and professional relationships (culture). Cultural transformation is achieved through quarterly meetings with best practice sharing. All focus is on high performance, implementation of best practices, and data reassessment to measure the clinical and financial effects of clinical transformation.

2) Data sharing: Quarterly collaborative meetings focus on data sharing, collaborative learning, best practices, and implementation strategies. Within regions, hospitals explain the practices behind their areas of high performance. The pride with which hospitals and providers share details of their practices in areas of exceptional performance is evident. In a transformational culture, leaders also share information regarding aspects of care in which they underperform relative to peers. This transparent culture, built around measuring "to improve" rather than "to judge," raises the performance of all participants in the PTN.

3) Best practice identification and implementation: High-performing hospitals in specific domains of care should be identified from core measures in the registry. Illustrating this approach, one Michigan hospital had unexpectedly low transfusion rates for elective surgical cases. The hospital submitted its transfusion protocol; a collaborative review team found the protocol to be exceptional and likely responsible for the low transfusion rates. The protocol was shared, modified for local contexts, and adopted by network hospitals, with a subsequent 22% reduction in perioperative transfusions. Each year the collaborative group prioritizes two domains of focus. Toolkits and inter-institutional networks for change are established, and progress and clinical outcomes are closely followed.

The Tennessee Surgical Quality Collaborative
In May 2008, a 10-hospital collaborative was established in Tennessee, with a structure resembling that of the Michigan Surgical Quality Collaborative. As in Michigan, Tennessee's collaborative, the Tennessee Surgical Quality Collaborative (TSQC), represented a partnership between insurers and academic/private sector organizations, including Blue Cross Blue Shield of Tennessee, the Tennessee ACS Chapter, and the Tennessee Hospital Association. The Tennessee founders emphasized that the collaborative would be an effort controlled by surgeons and would be built on three factors: funding, data management across multiple institutions, and data sharing. Funding came from the Blue Cross Blue Shield Foundation of Tennessee, and this financial support went toward salaries for nurse reviewers, a stipend for the surgeon champions, and administrative support through the Tennessee Center for Patient Safety (TCPS). Data used for the collaborative were managed by the TCPS program. Though sharing of the data throughout the collaborative was facilitated by the development of a website, the Tennessee collaborative emphasizes that human collaboration underlies its success. The initiation of a successful collaborative rests on the knowledge that effective use of data can improve patient care and is a high priority for individual surgeons as well as institutions. How the data are then utilized rests on a shared vision of improvement. The principle that data could be shared among the hospitals without fear of retribution was one shared with the Michigan consortium: no data would be used for marketing or advertising, and the good of the collaborative was recognized as coming before that of the individual institution.

Also key to these underlying principles is recognition of the camaraderie among the surgeons. Using the state chapter of the ACS as the seed of the collaborative was a key choice because of its preexisting esprit de corps. The familiarity of the surgeons with one another through the state chapter meetings of the Tennessee ACS allowed for very candid discussions regarding surgical quality in the state. These principles seem to have served the collaborative well, as the Tennessee hospitals have been able to demonstrate reductions in postoperative occurrences, including surgical site infections, acute renal failure, and graft/prosthesis/flap failure, as well as in process variables, including prolonged ventilator use.102
Washington State's Surgical Care and Outcomes Assessment Program (SCOAP)
The Surgical Care and Outcomes Assessment Program (SCOAP) is a surgeon-led QI initiative designed to track and reduce variability in surgical and other interventional practice and outcomes. SCOAP was developed in Washington State and is administered by the non-profit Foundation for Healthcare Quality (FHCQ). SCOAP was motivated by investigations performed from 2000-2004 at the University of Washington's Surgical Outcomes Research Center demonstrating significant variability in surgical outcomes across Washington State. These claims-based analyses lacked the meaningful risk adjustment, process-of-care information, and granular clinical details needed to actually improve quality, but the reports were helpful in encouraging a surgeon-led, grassroots collaborative around performance surveillance using medical record-based data. SCOAP was started in 2005 and now includes more than 50 hospitals (~90%) across Washington State (and, more recently, several centers in California and Oregon).
Supported in part by a grant from Washington State's Life Science Discovery Fund in 2007, and driven by the input of various stakeholders in the SCOAP community, SCOAP expanded its initial focus on general surgery to include urology, vascular care, obstetrics, pediatric surgery, and spine interventions. As summarized in Table 3, the total number of cases abstracted since 2005 is 108,600. SCOAP is supported by hospital-paid subscription fees based on the number of clinical modules in which a hospital participates and its case volumes. Data collection is completed by a designated data abstractor at each hospital. The abstraction does not require clinically trained staff, and, to accommodate different levels of clinical experience and ease abstraction, each metric is defined using a narrow data dictionary. SCOAP data are reported back to hospitals on a quarterly basis through a web-based alert system and in real time at the hospital level. Hospitals view their own data compared to the anonymized aggregate, and the reports include information about their benchmarked performance compared to other hospitals, with flagged, color-coded metrics to convey performance "at a glance." The quarterly reports are risk-adjusted to allow for an "apples to apples" comparison and are linked to actionable procedure-specific and generic process-of-care performance. A separate section of the reports addresses the appropriateness of the indications for the procedures using criteria established by surgeons and other stakeholders. A unique aspect of SCOAP is that its more recent modules (spine and prostate) include patient-reported outcomes (pain and function) at baseline and for up to two years after surgery.

SCOAP's stand-alone data collection and feedback system has allowed participating hospitals to coordinate multiple quality improvement initiatives. Examples of SCOAP-led interventions include the 2010 universal adoption of a surgical checklist, modified to include process metrics (e.g., glucose testing and glycemic control in all surgical patients with diabetes) that SCOAP data revealed to be underperformed. In 2012, in partnership with investigators at the University of Washington, SCOAP surgeons deployed Strong for Surgery (S4S, www.strongforsurgery.org), a checklist initiative focused on preoperative risk reduction, in their offices. A program supported by the Agency for Healthcare Research and Quality (AHRQ), the Comparative Effectiveness Research Translation Network (CERTAIN, www.becertain.org), was developed in 2011, building a set of "learning healthcare system" activities at SCOAP hospitals and enhancing the ability to perform comparative effectiveness, cost-effectiveness, and implementation research. A CERTAIN cost-impact evaluation (conducted while hospitals were still joining the program) compared SCOAP and non-SCOAP hospitals and identified greater than $50 million in savings from avoided complications over a three-year period. CERTAIN's interest in prehospital risk minimization through S4S, its emphasis on decision and implementation science to influence clinician and patient decision making, and its focus on diseases rather than procedures aim to extend the impact and approach of SCOAP across the care continuum.

Illinois Surgical Quality Improvement Collaborative (ISQIC)
In 2014, the Illinois Surgical Quality Improvement Collaborative (ISQIC) was formed as a joint partnership between Northwestern's Surgical Outcomes and Quality Improvement Center, the Illinois and Metropolitan Chicago American College of Surgeons chapters, and Blue Cross Blue Shield of Illinois (BCBSIL). Similar to other state collaboratives, ISQIC's mission is to help participating hospitals identify opportunities for improvement using quality data, examine areas of poor performance, and design quality improvement initiatives that improve care for surgical patients. Through interviews with key stakeholders in Illinois hospitals, ISQIC leadership recognized several barriers to implementing quality improvement initiatives, including the start-up costs associated with joining ACS-NSQIP, a lack of guidance and training in designing and implementing quality improvement initiatives, and a lack of mentorship for the Surgeon Champions at each site. ISQIC leadership thus sought to address these gaps through a variety of unique offerings while maintaining structures that had proven successful in other statewide collaboratives. To address these particular barriers, as well as other aspects felt to be integral to facilitating surgical QI, ISQIC leaders articulated 21 components of the collaborative, organized into five domains: guided implementation, education, comparative reports, networking, and financial support. The details of each of the components can be found in Table 4.

To guide the evaluation of these 21 components, a conceptual model was formulated, drawing from the Model for Understanding Success in Quality (MUSIQ).103 The contextual factors that influence QI are articulated in this model and are broken down into hospital-related factors, surgical QI team factors, and perioperative microsystems. Using this conceptual model, a mixed-methods evaluation of the adaptation and implementation of ISQIC will be carried out as the collaborative develops. Site visits (which include observations, interviews, and focus groups), artifact analysis (evaluation of the materials used during the ISQIC adaptation, such as the hospital ISQIC application, quality committee meeting minutes, and mentor and coach call documentation forms), and surveys (including the Safety Attitudes Questionnaire, a Leadership Engagement Survey, and a Quality Improvement Resources and Support Survey) will all be used.

ISQIC, which currently has 55 participating hospitals, uniquely expands upon the offerings of other statewide collaboratives by focusing on guided implementation and educational objectives. Central to these goals is the mentorship and coaching program termed Illinois Mentoring, Training, and Coaching (I-MENTAC). I-MENTAC incorporates three elements:

1) Surgeon Mentor: Each hospital is assigned a surgeon mentor with experience and training in conducting quality improvement initiatives in a similar hospital. Surgeon mentors undergo a rigorous selection and training process and must communicate with their assigned hospitals at least four times per year.

2) Quality and Process Improvement Training: At each hospital site, the quality improvement team receives formal training in quality improvement methodologies, through both online modules and in-person training.

3) Quality and Process Improvement Coaching: Coaching and consultation with the ISQIC coordinating center staff, via phone calls and webinars, are available to individual hospital teams as they formulate and implement quality improvement initiatives.

ISQIC additionally provides financial support to hospitals through a number of mechanisms. Stipends from BCBSIL cover the ACS NSQIP participation fee and the state collaborative fee, and are tied to improvement in surgical outcomes. ISQIC also provides pilot grants to individual sites to help overcome the upfront costs associated with implementation of quality improvement initiatives.


OVERCOMING BARRIERS TO SURGICAL QUALITY IMPROVEMENT EFFORTS

Quality improvement is a philosophy as well as a collection of processes and tools. While it has gained acceptance as a global strategy for improving healthcare services and patient outcomes, challenges to its broad application persist. The aim of this section is to examine the factors and processes that facilitate QI in the care of surgical patients. The section starts with an example – the safe surgery checklist – to illustrate relevant questions about the adaptation and diffusion of quality improvement. It then turns to a discussion of the diffusion of innovation, why it is so challenging to spread QI interventions, and the factors associated with successful QI applications. Three factors associated with successful implementation of QI initiatives are discussed: transformative leadership, organizational culture, and teamwork.

The Case of the Safe Surgery Checklist

The use of checklists has directly contributed to improvements in the safety and quality of clinical care, as it has in other fields, most notably aviation, where checklists have had a dramatic impact on the testing and development of new aircraft and on the safety of the thousands who travel by air daily.104 The World Health Organization (WHO) initiative to promote use of the Surgical Safety Checklist is a compelling example of the evolutionary process of the spread of QI, as well as of the challenges to full diffusion and institutionalization.104 The successful use of checklists in the aviation industry led to the initial efforts by Pronovost to test the use of checklists to reduce central line infections and other adverse events in intensive care units.105 Pronovost's successful efforts, which began in 2001, led to experimentation to extend the use of checklists to other patient safety issues.
A global initiative was launched under the auspices of the WHO to develop and apply checklists to improve surgical safety. The development of the surgical checklist is a good example of the application of QI philosophies and processes – demonstrating leadership and interdisciplinary teamwork, and using a form of the PDSA improvement cycle to test, learn, and improve by enlisting a broad range of expertise to improve safety on a global scale. To measure success, a quasi-experimental study was undertaken that encompassed a large number of healthcare facilities practicing surgery under a wide range of conditions. Haynes et al. demonstrated an overall reduction in complication rates from 11.0% at baseline to 7.0%, and a reduction in death rates from 1.5% to 0.8%, with the introduction of the surgical safety checklist in eight hospitals in eight cities worldwide.106 Similar results were demonstrated in six hospitals in the Netherlands (the SURPASS collaborative group), including almost 4,000 surgical patients before and after implementation of a surgical checklist. The Dutch checklist extended process boundaries to include activities within and outside the operating room. Findings included a statistically significant (p
