Practical Radiation Oncology (2013) 3, 164–166


Invited Commentary

Patient safety improvement efforts: How do we know we have made an impact?

Stephanie Terezakis, MD,a,⁎ Eric Ford, PhDb

aDepartment of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, Maryland
bDepartment of Radiation Oncology, University of Washington, Seattle, Washington

Received 1 February 2013; accepted 5 February 2013

See Related Article on page 157.
Conflicts of interest: None.
⁎Corresponding author. 401 N. Broadway, Ste 1440, Weinberg Comprehensive Cancer Center, Johns Hopkins School of Medicine, Baltimore, MD 21231. E-mail address: [email protected] (S. Terezakis).

The safe delivery of therapeutic radiation is both complex and inherently high risk. Given the number of steps required for radiation delivery, it is remarkable that so few severe errors actually occur. The paucity of such events has hampered the development of the “science of safety” in radiation oncology. Without meaningful clinical endpoints to measure, how do we know if our interventions are making patients safer? In the manuscript “Patient safety improvements in radiation treatment through five years of incident learning,” Clark et al1 analyze 2506 incident reports collected over 5 years, recording both actual and near-miss events, minor and major. The majority of events were relatively minor, and only approximately 2% were categorized as “non-minor” actual incidents. The authors should be commended for developing a robust incident reporting structure within their clinic that appears to be well accepted by staff. They use root-cause analysis to identify the origin of clinical incidents and to inform mitigation strategies. Clear categorization of incidents into actual, actual/nonminor, and near-miss events also helped to prioritize interventions. The authors summarize the major interventions implemented as a result of the information gained from the incident reporting system, including a laterality policy, changes to staffing levels in areas at risk for error, and more peer review of contours and treatment plans. This sophisticated system aptly demonstrates how incident learning can be a powerful tool to improve patient safety and can fuel new strategies and initiatives to address system weaknesses.

The results of Clark et al1 can be viewed in context with other studies of voluntary reporting systems in radiation oncology.2,3 The interest extends beyond radiation oncology and even beyond health care: incident reporting systems have been essential to quality improvement in technologically sophisticated fields such as the airline industry, the nuclear power industry, and anesthesiology.4,5 When combined with prospective risk assessment to identify potential failure modes, voluntary reporting systems are powerful tools to enhance quality and safety. Despite concerted efforts like those of Clark et al1 and other authors, however, we are still left wondering: are our patients actually “safer” now?

Clinical endpoint: fewer reported incidents?

Answering this question requires a measurable endpoint for harm. Our field has struggled to define an endpoint that convincingly demonstrates the impact of safety improvement strategies. Clark et al1 use the number of incidents reported in their event reporting system as a surrogate for patient safety improvement, acknowledging the absence of a measurable clinical outcome. They demonstrate that the number of actual reported incidents and the severity of those incidents decreased over the course of 5 years, and they attribute this to improving safety performance. This raises a fundamental question: which is safer, a clinic with more incident reports or a clinic with fewer?


At first glance it may appear that fewer reports are desirable. However, fewer reported events do not necessarily mean fewer errors. We know that strong biases lead to variability in reporting. Multiple studies from health care show that voluntary reporting fails to capture many incidents. In one inpatient study, for example, only 12% of adverse drug events were voluntarily reported.6 Equally important, an analysis of the large Agency for Healthcare Research and Quality (AHRQ) database indicates that hospitals that register more incident reports actually have fewer events leading to patient harm.7 In other words, more reported incidents may actually correlate with a safer clinic; we cannot address problems unless we are aware of the incidents occurring. It is clear that Clark et al1 understand the importance of creating a safety culture that enhances incident reporting: they have made considerable efforts to make reporting easy and nonpunitive. Interestingly, one of our departments has had a similar incident reporting system in place since 2007. After a concerted effort to enhance the safety culture by decreasing reprisals for incident reporting and encouraging physician buy-in, the number of incidents recorded in our system significantly increased. We are of the firm opinion that this increase in voluntary reporting reflects the comfort of our departmental staff with reporting the incidents they discover, not an increase in the number of incidents occurring. Although incidents invariably provide insight into the state of safety and quality in a department, the raw number of incidents recorded in a system may not be a good surrogate for patient safety.
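A toy calculation makes the point concrete. The observed report count is roughly the product of the true event count and the reporting capture rate, so a clinic with a stronger reporting culture can file more reports while actually experiencing fewer events. The sketch below uses invented numbers; the 12% capture rate echoes the adverse drug event study cited above, while the 80% figure is purely hypothetical.

```python
# Toy illustration (invented numbers): observed reports = true events x capture rate.
# A clinic with a strong reporting culture can file MORE reports
# while experiencing FEWER true events.
clinics = {
    # name: (true events per year, fraction of events actually reported)
    "weak reporting culture":   (500, 0.12),  # ~12% capture, as in Classen et al6
    "strong reporting culture": (250, 0.80),  # hypothetical high-capture clinic
}

for name, (true_events, capture_rate) in clinics.items():
    observed = true_events * capture_rate
    print(f"{name}: {observed:.0f} reports filed, {true_events} true events")

# weak reporting culture: 60 reports filed, 500 true events
# strong reporting culture: 200 reports filed, 250 true events
```

On report counts alone, the second clinic looks more than 3 times "worse," despite having half as many true events.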

What is a good indicator?

Despite clear impressions of quality improvement with the use of voluntary reporting systems, without objective, measurable clinical outcomes (as opposed to subjective patterns of voluntary reporting) it is unclear whether these systems actually make our patients safer. The literature demonstrates that a reduction in clinical events can have a direct impact on patient-related morbidity.8 For example, Pronovost et al8 demonstrated a measurable reduction in catheter-related bloodstream infections using a collaborative cohort study spanning more than 100 intensive care units. Using Poisson regression modeling, the analysis compared infection rates before, during, and up to 18 months after implementation of an evidence-based safety intervention; a sketch of this style of analysis appears below. Demonstrating a clear reduction in infection rates after the safety intervention convincingly shows that the intervention actually made patients safer. “Safety science” is driven by the analysis of patient-centered clinical outcomes, with robust statistics to demonstrate quantitative improvements in patient safety.
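As a minimal sketch of this kind of analysis, the Python code below fits a Poisson regression of infection counts on intervention period, using catheter-days as an exposure offset so that coefficients become incidence-rate ratios. All data and variable names are invented for illustration; this is not the Pronovost et al8 dataset or code.

```python
# Hypothetical sketch: Poisson regression of infection counts on
# intervention period, with catheter-days as an exposure offset.
# All numbers are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    # one row per ICU-quarter; "period" marks before/during/after intervention
    "period": ["before"] * 4 + ["during"] * 4 + ["after"] * 4,
    "infections":    [11, 9, 12, 10,  6, 5, 4, 5,  2, 1, 2, 1],
    "catheter_days": [4100, 3900, 4300, 4000,
                      4200, 4100, 3900, 4000,
                      4100, 4300, 4000, 4200],
})

# The log(catheter-days) offset makes the model estimate infection *rates*;
# exponentiated coefficients are incidence-rate ratios vs. the "before" period.
model = smf.glm(
    "infections ~ C(period, Treatment(reference='before'))",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["catheter_days"]),
)
result = model.fit()
print(np.exp(result.params))  # incidence-rate ratios
print(result.conf_int())      # 95% confidence intervals (log scale)
```

A rate ratio well below 1 for the “after” period, with a confidence interval excluding 1, is exactly the kind of objective, patient-level evidence of improved safety that radiation oncology currently lacks.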

The Radiation Oncology Institute has recognized the importance of conducting “research to establish a set of quality indicators for major radiation oncology procedures and evaluate their use in radiation oncology delivery.”9 A meaningful outcome that can be measured before and after an intervention is essential for testing the effectiveness of that intervention. At this time, radiation oncology lacks such objective outcomes and quality metrics. Quality metrics that examine variability in treatment planning and contouring will be particularly important for measuring clinical outcomes and studying patient safety improvements, and some such studies have begun to appear.10 This is a step in the right direction, as toxicity and local failure patterns represent clinically meaningful endpoints. At the same time, however, it must be remembered that the process of care in radiation oncology extends far beyond creating a high-quality treatment plan.

Another unmet need is for standard error definitions, categorizations for incidents, and a radiation oncology-specific incident severity scale. Some progress has been made on standardizing severity definitions, but more work remains to be done. Without consensus on these standard definitions, it is difficult to compare “outcomes” across studies. For example, we note that numerous minor and major incidents reported in Clark et al1 had also occurred in our departments, but our use of severity scales and our descriptions of the incidents differed.11 A system for grading severity exists within AHRQ, but it does not address the potential severity of incidents, nor is it specific to radiation oncology, which would require accounting for dosimetric deviations that are not expected to translate into clinical harm. Furthermore, for an incident that does not reach the patient (a “near-miss”), it is especially challenging to score severity because events could unfold in many ways. We need a dialogue about shared, similar experiences and about the common near-misses and errors that put our patients at risk.

Clark et al1 provide an excellent example of the utility of voluntary reporting systems, the lessons we can learn, and the quality improvements that can be made as a result of this information. Ultimately, to further the science of safety in our field, we need standardized definitions and severity scoring that will enable us to develop a body of literature comparing and contrasting experiences across institutions. At present, we are unable to effectively measure clinical endpoints that demonstrate how our efforts translate to the individual patient. Until such outcome measures are developed, we will be limited in our ability to prove that our interventions actually make patients safer.

References

1. Clark BG, Brown RJ, Ploquin J, Dunscombe P. Patient safety improvements in radiation treatment through 5 years of incident learning. Pract Radiat Oncol. 2013;3(3):157-163.
2. Mutic S, Brame RS, Oddiraju S, et al. Event (error and near-miss) reporting and learning system for process improvement in radiation oncology. Med Phys. 2010;37:5027-5036.


3. Yeung TK, Bortolotto K, Cosby S, Hoar M, Lederer E. Quality assurance in radiotherapy: evaluation of errors and incidents recorded over a 10 year period. Radiother Oncol. 2005;74:283-291.
4. Barach P, Small SD. Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. BMJ. 2000;320:759-763.
5. Botney R. Improving patient safety in anesthesia: a success story? Int J Radiat Oncol Biol Phys. 2008;71(Suppl 1):S182-S186.
6. Classen DC, Pestotnik SL, Evans RS, Burke JP. Computerized surveillance of adverse drug events in hospital patients. JAMA. 1991;266:2847-2851.
7. Mardon RE, Khanna K, Sorra J, Dyer N, Famolaro T. Exploring relationships between hospital patient safety culture and adverse events. J Patient Saf. 2010;6:226-232.

8. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725-2732.
9. Jagsi R, Bekelman JE, Brawley OW, et al. A research agenda for radiation oncology: results of the Radiation Oncology Institute's comprehensive research needs assessment. Int J Radiat Oncol Biol Phys. 2012;84:318-322.
10. Peters LJ, O'Sullivan B, Giralt J, et al. Critical impact of radiotherapy protocol compliance and quality in the treatment of advanced head and neck cancer: results from TROG 02.02. J Clin Oncol. 2010;28:2996-3001.
11. Terezakis SA, Harris KM, Ford E, et al. An evaluation of departmental radiation oncology incident reports: anticipating a national reporting system. Int J Radiat Oncol Biol Phys. 2013;85:919-923.
