Quality Matters

Introduction to Quality Improvement Part Two: Making and Maintaining Change Peter J. Chung, MD,* Rebecca A. Baum, MD,† Neelkamal S. Soares, MD,‡ Eugenia Chan, MD, MPH§

This is Part 2 of a case study in quality improvement (QI) in developmental–behavioral pediatrics. The purpose of this series is to provide the reader with the tools necessary to create effective change. In Part 1, we reviewed the initial stages of the QI process, including foundational steps such as assembling a team, defining the problem, and setting meaningful and realistic goals. We followed a fictional “improvement team” as they tackled the issue of attention-deficit hyperactivity disorder (ADHD) follow-up visit attendance. Our team learned the importance of using data to understand both their current state and their degree of progress, as well as the importance of gathering feedback from multiple perspectives throughout the QI process. Part 2 of this article picks up where we last visited with our team—they had defined their project aim, identified the key “drivers” or leverage points necessary to achieve their aim, and developed interventions to drive improvement, summarized in the team’s key driver diagram (Fig. 1). In this article, we will focus on our team’s progress as they implement their interventions. How will they know if their interventions are successful, and how can they make systematic improvements along the way to meet their goal? To understand the foundational concepts behind the team’s progress, the reader is encouraged to read Part 1 of this series.1 More information on the QI tools referenced in both articles can be found in the resources listed in Table 1.

The QI team leader, clinician Harry, reviews the key driver diagram with the rest of the team, which includes Eric, the senior clinician; Julie, another clinician; Jeffrey, the office manager; Lilia, the clinic coordinator; Marcus, the scheduler; and Jennifer, the nurse. They realize they have done the necessary planning and are now ready to make some changes.

(J Dev Behav Pediatr 35:543–548, 2014)

From the *Department of Pediatrics, Mattel Children’s Hospital, University of California Los Angeles, Los Angeles, CA; †Nationwide Children’s Hospital, Department of Pediatrics, The Ohio State University, Columbus, OH; ‡Geisinger Health System, Danville, PA; §Division of Developmental Medicine, Boston Children’s Hospital, Boston, MA.

Disclosure: The authors declare no conflict of interest.

Address for reprints: Rebecca A. Baum, MD, Nationwide Children’s Hospital, Department of Pediatrics, The Ohio State University, 700 Children’s Drive, Columbus, OH 43205; e-mail: [email protected].

Copyright © 2014 Lippincott Williams & Wilkins


Together the team decides that reminder calls to patients with a history of missed appointments should be a reasonable intervention to try first. Jeffrey expresses concern that too much administrative staff time will be spent making phone calls, and Harry suggests using minutes spent making reminder phone calls each day as a balancing measure. Harry then introduces the Plan-Do-Study-Act (PDSA) worksheet for the team to complete together. “Didn’t we just do this?” Eric asks, exasperated. “I don’t think we need to go through the bureaucracy of filling out another form.”

The PDSA model is a framework designed to help QI projects achieve their intended goal using iterative “tests of change” that start small and gradually progress to larger scope and/or scale, ultimately leading to systemic change. Figure 2 illustrates the iterative and continuous process of QI. Using a series of PDSA cycles can help QI teams to divide a larger overall goal into smaller, more manageable chunks. This is an optimal way to approach improvement for several reasons:

• Making small changes and studying their effects allows for the testing of hypotheses and proposals on a microsystem level.
• An iterative approach allows the team to note and evaluate unintended side effects of changes (e.g., disruption to workflow) and learn from them.
• The repeated making and testing of hypotheses generates data useful for monitoring and evaluating improvement projects.
• A slowly expanding model is more likely to lead to lasting cultural and systemic shifts, especially given the natural resistance that systems and people have toward change. Broad, sudden, and sweeping changes are likely to cause significant disruption and distress; even if these efforts are successful, they will likely require significant ongoing energy to sustain over time.

The initial PDSA cycle should have a limited aim and change hypothesis for testing. These limits might include (1) the clinician(s) who are participating in the intervention, (2) the number and/or type of patients involved in the process, (3) the clinic setting where the intervention takes place, and (4) the time frame. For example, the initial PDSA cycle might be limited to one clinician and one nurse with the first patient each day for a week.

Figure 1. Key driver diagram.

Despite their initial misgivings, the QI team sets up a plan to test their first hypothesis over the next 4 weeks. Eric, a new believer in the importance of evidence, is particularly convinced by the concept of making small changes and collecting data to measure their impacts. They complete the first half of the PDSA worksheet and then set out to carry out their plan (Fig. 3). A week after their first PDSA cycle starts, the team reconvenes to follow up on the results. Lilia, who has been fairly energetic about making steps toward improvement, is particularly negative about their first attempt. She says angrily, “It took way too much time for me to make these calls, and I couldn’t get any of my regular work done! I really wanted to give up after Tuesday.” Everyone is disappointed to hear her reaction, especially when the no-show rate showed no improvement. Julie, however, surprises everyone with a positive attitude. “Hey, I think the idea is still a good one — let’s just try making some adjustments to the plan to make it work better,” she says. “Maybe we just bit off a little more than we could chew for this first time around.” She helps the group complete the PDSA cycle 1 worksheet with a new focus on process measures (Fig. 4).

Each PDSA cycle should end with one of 3 possibilities:

1. The intervention did not work: that is, the change hypothesis was disproven and/or the change resulted in significant and unacceptable disruption to the existing process. The team must then decide what new hypothesis should be tested and what intervention strategies could be used; ideas can be drawn from the key driver diagram. QI teams can often become discouraged if interventions fail to make a difference, especially early in the QI project. However, it is crucial that teams remember that QI is an iterative process—failure is not only expected but is also an important learning tool.

2. The results were indeterminate: that is, the change hypothesis was neither proven nor disproven. This may result from unforeseen difficulties in the process, a poor fit between the intervention strategy and the players in that cycle, or even natural variation in the system. The team may choose to revise the intervention strategy to adjust for these issues or even to run the exact same intervention again to see whether there might be different results.

3. The intervention did work: that is, the change hypothesis was proven. The next PDSA cycle should then apply the same intervention with a change in scale or scope. A change in scale means that the intervention remains the same but is expanded in 1 dimension to encompass a larger population. One rule of thumb is to use a factor of 5 as a starting point for any increase in scale. For example, an intervention that worked with one clinician and one nurse for one patient with ADHD on one day of the week can be expanded to include one clinician and one nurse for 5 patients with ADHD on one day of the week. With a measured increase in scale, the QI team will be able to study the effects (both intended and unintended) of the intervention on the system. A change in scope means that the intervention is expanded to encompass a more diverse population. For example, an intervention that worked with one clinician and one nurse for one patient with ADHD on one day of the week can be broadened to include one clinician and one nurse for one patient with ADHD and one patient with autism spectrum disorder on one day of the week. When changes in scope are made, expanding only 1 factor at a time allows for sufficient study of the system. Large or simultaneous changes in scale and scope run the risk of causing too much disruption, making it difficult for long-standing improvements to take hold. Serial PDSA cycles that gradually increase in scale or scope have a greater likelihood of leading to long-lasting cultural and institutional change.

Table 1. Additional QI Resources

Online
• Institute for Healthcare Improvement (www.ihi.org)
• National Initiative for Children’s Healthcare Quality (www.nichq.org)
• Health Resources and Services Administration Toolkit (http://www.hrsa.gov/quality/toolbox/index/html)
• Duke Center for Instructional Technology (http://patientsafetyed.duhs.duke.edu/module_a/module_overview.html)
• Healthcare Improvement Skills Center (http://www.improvementskills.org/)
• The Team Handbook (www.teamhandbook.com/)

In print
• Langley GJ, Moen RN, Nolan KM, et al. The Improvement Guide. 2nd ed. Jossey-Bass; 2009.
• Tague N. The Quality Toolbox. 2nd ed. ASQ Quality Press; 2005.
• Nelson EC, Batalden PB, Godfrey MM. Quality by Design: A Clinical Microsystems Approach. Jossey-Bass; 2009.
• Balestracci D. Data Sanity. Medical Group Management Association; 2009.
• Brassard M, Ritter D. The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning. 2nd ed. Goal/QPC; 2010.
• Scholtes PR, Joiner BL, Streibel BJ. The Team Handbook. 3rd ed. Oriel; 2010.

Figure 2. Plan-Do-Study-Act cycles should occur in sequence toward creating systemic and sustainable change (Langley GJ, Moen RN, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide. 2nd ed. Jossey-Bass; 2009).

As changes in each cycle are planned, teams must also consider how their observations and measurements may need to be adjusted. By anticipating these effects before the cycle begins, the team can put measures in place to verify or disprove these assumptions. For example, as an intervention is expanded from patients with ADHD to patients with autism spectrum disorder, the team should anticipate what new or different effects might result. Will the children with autism spectrum disorder have a higher frequency of unpredictable behavior patterns, resulting in a higher rate of cancellations? If so, a new clinic policy on rescheduling appointments for stimulant medications may fail to function under the greater unpredictability.

A week later, the QI team is pleased to hear that Marcus was able to complete his QI tasks in a timely fashion. Although he was also not able to speak directly with many parents, several families mentioned to the front staff at check-in that the reminder voicemail was the only reason they remembered their appointment that day. The team is re-energized by these reports, especially when they see that the no-show rate had decreased for that week. Eric wonders whether reminder phone calls should be a regular practice for all patients, not just the ones who have no-showed in the past. However, after a discussion on scale and scope, the team decides to increase the scale of the intervention from 1 to 5 weeks. As this marks the end of PDSA cycle 2, they complete the following worksheet (Fig. 5).
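To make the end-of-cycle review concrete, the sketch below tallies, for each PDSA cycle, the outcome measure (the no-show rate among the patients included in that test) alongside the balancing measure Jeffrey proposed (staff minutes spent on reminder calls). It is a minimal illustration only; the numbers, field names, and cycle descriptions are hypothetical and not drawn from the team's actual worksheets.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class CycleRecord:
    cycle: int           # PDSA cycle number
    scheduled: int       # follow-up visits scheduled for patients in the test
    no_shows: int        # scheduled visits missed without cancellation
    call_minutes: float  # balancing measure: staff time spent on reminder calls

def summarize(records: List[CycleRecord]) -> List[Tuple[int, Optional[float], float]]:
    """Return (cycle, no-show rate, reminder-call minutes) for each PDSA cycle."""
    return [
        (r.cycle, r.no_shows / r.scheduled if r.scheduled else None, r.call_minutes)
        for r in records
    ]

# Hypothetical data: cycle 1 (one clinician, one day per week, no reminders),
# cycle 2 (same scope with voicemail reminders), cycle 3 (scaled up by a factor of 5).
cycles = [
    CycleRecord(cycle=1, scheduled=5, no_shows=2, call_minutes=45.0),
    CycleRecord(cycle=2, scheduled=5, no_shows=1, call_minutes=20.0),
    CycleRecord(cycle=3, scheduled=25, no_shows=4, call_minutes=70.0),
]

for cycle, rate, minutes in summarize(cycles):
    print(f"Cycle {cycle}: no-show rate {rate:.0%}, reminder calls {minutes:.0f} min")
```

Reviewing the outcome and balancing measures side by side, cycle by cycle, lets the team see whether an apparent gain in attendance is being bought with an unacceptable amount of staff time.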

Measuring and Maintaining Results

After a few more PDSA cycles, the QI team has come up with a patient reminder solution that has gradually been incorporated into the clinic’s workflow and has been embraced by the clinic staff. Subsequent data collection has demonstrated that the no-show rate has reached their goal. The team is elated by their progress—Eric even hangs a “mission accomplished” poster in the break room. They decide to continue with the current plan for the next few months but keep monitoring no-show rates using a run chart. However, over time, they are surprised and displeased to see the run chart results (Fig. 6). Harry rallies the team. “It is a living, breathing process,” he reminds them, “and sometimes it means that we might have to revisit what we are doing.”

Figure 3. Plan-Do-Study-Act 1 worksheet.

In addition to evaluating the baseline, run charts are also a simple tool that can track response to change over time. Figure 6 illustrates the importance of surveillance after a QI project has met its aim. Serial checking (e.g., every quarter) may be required to monitor for any regression to old habits, which commonly occurs when coordinated efforts or campaigns have been discontinued. Systems may require “booster sessions” to keep meeting their goals for quality, with specific projects or campaigns designed to keep the system on track. If the aim continues to be met, the QI team might then identify a new aim for improvement. By systematically incorporating QI activities into a practice, teams can effect a paradigm shift toward a culture of improvement. Adding a “sustain” goal into the project aim can be helpful in reminding the team to monitor progress after the project goal has been met.
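As a rough illustration of how a run chart like Figure 6 can be produced and watched over time, the sketch below plots weekly no-show rates against the project goal and the series median (a common reference line for spotting shifts back toward old habits). All values here are invented for illustration and are not the team's data.

```python
import matplotlib.pyplot as plt
from statistics import median

# Hypothetical weekly no-show rates after the goal was first met; the later
# upward drift mirrors the kind of regression the team noticed in Figure 6.
weeks = list(range(1, 17))
no_show_rate = [0.16, 0.15, 0.14, 0.13, 0.14, 0.13, 0.15, 0.16,
                0.18, 0.19, 0.21, 0.22, 0.24, 0.25, 0.27, 0.28]
goal = 0.15  # project goal, drawn as a dotted line as in Figure 6

plt.plot(weeks, no_show_rate, marker="o", label="Weekly no-show rate")
plt.axhline(goal, linestyle=":", label="Project goal")
plt.axhline(median(no_show_rate), linestyle="--", label="Median")
plt.xlabel("Week")
plt.ylabel("No-show rate")
plt.title("Run chart: ADHD follow-up no-shows (hypothetical data)")
plt.legend()
plt.show()
```

Regenerating such a chart at each quarterly check takes little effort and makes a drift back toward baseline hard to miss.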

Figure 4. Plan-Do-Study-Act 1 worksheet, completed.

Figure 5. Plan-Do-Study-Act 2 worksheet.

Knowing Is Half the Battle

Our team’s QI story illustrates some general principles for improvement projects, including the following:

• Involving multiple stakeholders from the ground level
• Taking time to understand the existing process using a variety of tools and perspectives
• Developing a Specific, Measurable, Actionable, Realistic, and Timely (SMART) aim statement to guide the project
• Using a key driver diagram to summarize the project’s planned interventions and how each relates to the SMART aim
• Using serial PDSA cycles to test changes on a small and limited scale initially, and gradually expanding the scale or scope
• Seeing opportunities to learn from failures and successes
• Continuing to monitor after meeting goals to ensure that success is maintained over time.

While our case example used the Institute for Healthcare Improvement’s Model for Improvement as its QI methodology, there are many other approaches to QI, including Six Sigma and LEAN (Table 1).

Figure 6. Run chart, with the dotted line representing the project goal.

We hope this series of articles will inspire developmental–behavioral clinicians to use QI methods to address vexing issues in clinical practice and workflow. We encourage clinicians and trainees to develop QI projects on any scale, large or small, not merely to meet certification or training requirements but with the aim of learning from challenges and successes, and sharing the learning with colleagues. Although the example provided focuses on a particular disorder (ADHD) and issue (improving visit attendance), the fundamental principles of the QI approach can be applied to a broad array of clinical challenges. Potential goals include improvements in process (e.g., shortening report writing time or improving DSM documentation during autism spectrum disorder assessments) or outcomes (e.g., increasing family satisfaction with wait times or reducing unplanned emergency room visits for children with spina bifida). With any QI project, the team approach should be at the forefront, from the formulation of the problem to the development of interventions and the monitoring of progress. Data also drive the QI process; they help teams understand where they should focus their energies and whether their changes are resulting in improvements. Small tests of change, if successful, can be scaled or expanded. If unsuccessful, plans can be adjusted and tested again to allow for further progress.


In summary, QI methods offer a data-driven, results-oriented, and thoughtful approach to many of the challenging problems encountered in developmental–behavioral pediatrics. We believe that this type of approach will enable our field to improve and sustain the high-quality family-centered care valued by our providers, our patients, and their families.

REFERENCE

1. Chung PJ, Baum RA, Soares NS, et al. Introduction to quality improvement part one: Defining the problem, making a plan. J Dev Behav Pediatr. 2014;35:460–466.

