EQUIPPED: QUALITY IMPROVEMENT

Using data to improve care C Ronny L H Cheung,1 Claire Lemer2

1 Department of Paediatrics, Imperial College Healthcare NHS Trust, London, UK
2 Evelina Children’s Hospital, Guy’s and St Thomas’ NHS Trust, London, UK

Correspondence to Dr C Ronny L H Cheung, Department of Paediatrics, Imperial College Healthcare NHS Trust, London W2 1NY, UK; [email protected]

Received 26 September 2013; Accepted 7 October 2013; Published Online First 25 October 2013

To cite: Cheung CRLH, Lemer C. Arch Dis Child Educ Pract Ed 2013;98:224–229.


ABSTRACT

We look at the role of data in improving the quality of care for children and young people: how they can help to identify a problem; guide design of solutions; and evaluate changes in practice. We introduce some principles for measurement in the field of quality improvement, and discuss how to use and present data to maximise their value and impact in quality improvement initiatives.

INTRODUCTION

In God we trust; all others must bring data. (WE Deming, attrib.)1

It is the ambition of every modern healthcare system to provide the highest quality care based on current best evidence. Over the past four decades, the evidence-based medicine revolution has imbued in clinicians and healthcare professionals a reverence for the ‘conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’.2 Clinicians understand the value of up-to-date analysis of methodologically sound, clinical research data in delivering high quality healthcare: from predictive values of the diagnostic tests we perform to the risks of adverse outcomes from the procedures we carry out and the likely prognosis of diseases we encounter. To improve the overall quality of care for children, clinicians must begin to apply the same ‘conscientious, explicit and judicious’ approach to evaluating and improving systems of care as we would with specific clinical interventions. Many seemingly non-clinical interventions— such as ‘time from general practitioner referral to outpatient appointment’ or ‘same-day cancellations of surgery’—are in fact profoundly important for children and families, directly affecting health outcomes such as timely diagnosis, school performance and family well-being. And without measurement, there can be no improvement.

It may be useful to illustrate concepts in measurement for improvement using a simple case study (box 1).

THE ROLE OF DATA IN QUALITY IMPROVEMENT

Measurement and improvement are two sides of the same coin.3

Data and measurement permeate every aspect of quality improvement. The Institute for Healthcare Improvement’s Model for Improvement4 outlines three simple questions which should inform the planning of any improvement project, followed by a description of an iterative model of incremental change (figure 1). A thoughtful approach to the use of data is central to answering these three questions and to the effective implementation of plan-do-study-act (PDSA) cycles (table 1).

WHAT IS SO SPECIAL ABOUT QUALITY IMPROVEMENT DATA?

Practicality and usefulness are the sine qua non of quality improvement data, which exist explicitly to develop knowledge-in-application: to inform efforts at planning, implementing and evaluating locally defined interventions in a real-life setting.5 Translating clinical research knowledge into improvements in practice and health outcomes remains, by and large, a patchy and protracted affair.6 7 Clinical practice is influenced by a combination of clinician knowledge, attitudes and behaviours, each of which can act as a barrier to change. Potential impediments include the volume and accessibility of research evidence relevant to practice; presumed lack of applicability to clinical practice; and lack of resources to implement innovations in evidence-based care, leading to inertia.8 Being descriptive rather than predictive, quality improvement data focus on identifying and overcoming these impediments through real-time, rapid assessment of change.9
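One way of keeping measurement tied to each cycle of change is simply to record every PDSA cycle together with its indicators and learning. The sketch below, in Python, is purely illustrative; the class and field names are not part of any published framework.

```python
from dataclasses import dataclass, field


@dataclass
class PDSACycle:
    """One iteration of a plan-do-study-act cycle, recorded with its data."""
    aim: str                 # What are we trying to accomplish?
    change_idea: str         # What change are we testing in this cycle?
    measures: dict = field(default_factory=dict)  # indicator name -> observed values
    learning: str = ""       # What did the data show? What will we adapt next?


# Hypothetical first cycle, loosely modelled on the case study in box 1
cycle1 = PDSACycle(
    aim="Improve efficiency and experience of routine neonatal checks",
    change_idea="Scheduled checks with a prioritised list of potential early discharges",
)
cycle1.measures["mean check duration (min)"] = [19.9, 18.2, 16.5, 15.3, 14.4]
cycle1.learning = "Efficiency improved; emergency calls cause bottlenecks, so add flexibility"
```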


Box 1 Measurement for improvement: a clinical case study

▸ A junior doctor on a busy postnatal ward in a UK district general hospital is frustrated at the inefficiency of the routine neonatal checks which every newborn baby has to undergo prior to discharge. She notices that a significant amount of time is taken up locating clinical notes and finding babies around the ward; there is no system to identify and prioritise potential early discharges; and parents wait for many hours by their bed because they have no idea when their baby’s check will be done. Her neonatal colleagues and the midwifery team share her frustrations, and come together to improve the service within the context of limited staffing levels, using the Model for Improvement framework for quality improvement (figure 1).
▸ They agree a project aim: to improve the efficiency and experience of the routine neonatal checks on the postnatal ward. They identify key indicators to measure progress, including: mean duration of each check (per day); time to completion of all same-day discharge neonatal checks; parental experience; and multidisciplinary staff experience.
▸ Using the plan-do-study-act cycle, they test the new pathway for scheduled neonatal checks for a 5-day period. The mean duration of each check (calculated as (time to completion of all checks)/(number of checks performed)) falls from 19.9 min to 14.4 min, with generally high satisfaction reported by parents.
▸ The team makes modifications on the basis of rapid analysis of the data, including increased system flexibility to account for bottlenecks caused by emergency calls. The second iteration makes a trade-off for the sake of practicality and parent experience, at the expense of a slight decrease in efficiency compared with the first cycle. They are finally able to implement an agreed, workable solution in full.
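The headline indicator in box 1 is defined as total elapsed time divided by the number of checks performed that day. A minimal sketch of the calculation, using entirely hypothetical times (the function name is illustrative), might look like this:

```python
from datetime import datetime


def mean_check_duration_minutes(start: datetime, finish: datetime, n_checks: int) -> float:
    """Mean duration of each neonatal check for one day:
    (time to completion of all checks) / (number of checks performed)."""
    elapsed_min = (finish - start).total_seconds() / 60
    return elapsed_min / n_checks


# Hypothetical day: checks run 09:00-11:24 and 12 babies are examined -> 12.0 min per check
print(mean_check_duration_minutes(datetime(2013, 9, 2, 9, 0),
                                  datetime(2013, 9, 2, 11, 24), 12))
```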

Figure 1 Model for Improvement (http://www.ihi.org).

The selection of indicators, as well as how each indicator is measured, should therefore reflect a rigorous but agile approach to evaluating local change. Nelson et al3 describe some key principles to maximise the practicality and usefulness of quality improvement data (box 2). To these, we would propose the following additional principles:

1. Use routinely collected data where possible
▸ There is an opportunity cost to data collection, however effectively it is built into daily practice. Data already being collected for another purpose (such as clinical coding data, organisational-level process measures or national audit data) may provide a freely available resource and data infrastructure to augment your improvement measurements. For this reason, it is often worth having a conversation with your organisation’s finance, IT or coding department at the start of your project.

2. Keep an open mind about ‘soft intelligence’ and be flexible about changing your indicator set—“Not everything that counts can be counted, and not everything that can be counted counts”—Albert Einstein
▸ Quality improvement is a practical science, and information may lie in informal channels. There are times when a change brings about an unintended consequence—which, by definition, is unanticipated and therefore would not form part of the original bundle of measures of change. The observant practitioner should be flexible enough to modify their data collection for future iterations of the PDSA cycle. For example, in the neonatal examination case above, the improved efficiency of the neonatal checks led to unexpected pressures on maternity staff from mothers who had an expectation of earlier discharge. ‘Soft intelligence’ among clinical colleagues alerted the team to introduce changes in process and measurement in future cycles.



Table 1 Examples of how data can be used to inform each stage of the Model for Improvement quality improvement framework (for each Model for Improvement stage, the potential roles of data in design and implementation are listed)

What are we trying to accomplish?
▸ Defining a problem using data, eg, ‘rate of reported catheter-related blood stream infections in paediatric intensive care’
▸ Prioritising improvement efforts and informing resource allocation by allowing meaningful comparison between different initiatives, eg, using reporting rates and outcome data to explore the potential ‘yield’ of interventions aimed at reducing hospital-acquired infections versus medication errors
▸ Using existing data to diagnose a problem and/or benchmark current practice, eg, against comparable services or national standards

How will we know a change is an improvement?
▸ Evaluating the desired effect of change using preselected indicators
▸ Evaluating any anticipated deleterious effect on performance using balanced countermeasures, eg, measuring readmission rates alongside length of stay data
▸ Evaluating unintended consequences of change on other related indicators (eg, on secondary drivers identified in the project driver diagram)
▸ Informing modifications to the change model in each PDSA cycle

What changes can we make that will result in improvement?
▸ Using data to diagnose the root cause of the problem, eg, identifying through process mapping that the bottleneck in a discharge pathway is lack of senior decision making
▸ Using data to inform SMART (specific; measurable; achievable; realistic; time-limited) objective setting
▸ Unearthing new approaches for future improvement (through the use of a bundle of related indicators), eg, using qualitative data on patient experience to evaluate changes in one part of a pathway may uncover deficiencies in others

PDSA, plan-do-study-act.


PRESENTING DATA TO DRIVE IMPROVEMENT

Quality improvement data, interpreted and presented compellingly and judiciously, can influence whether or not knowledge is assimilated and welcomed by an organisation, turning a complex and dry area into a powerful lever for improvement. A supportive organisational milieu is a key predictor of the success of any quality improvement initiative: the degree of engagement of all stakeholders, strategic leadership and support for a culture of improvement, and structures to nurture the energy and enthusiasm of staff are essential ingredients for improving performance. Technical knowledge about the state of local health services is rarely able to drive improvement in performance alone.

1. Present data visually in a clear and simple manner

A run chart is one method of presenting improvement data in a compelling way. It is essentially a line graph showing performance or outcome over time. Figure 2 is a run chart from the case study described earlier: the overall downward trend in the mean duration of each neonatal check after implementation of the change is immediately visible, making the chart a powerful visual means of communicating the effectiveness (or otherwise) of an intervention.
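For teams without access to dedicated tools, a run chart of this kind can be produced in a few lines of code. The sketch below uses Python with matplotlib; the daily values are invented purely for illustration and are not the data behind figure 2.

```python
import statistics

import matplotlib.pyplot as plt

# Hypothetical daily values: mean duration of each check (minutes), before and after the change
days = list(range(1, 11))
mean_duration = [20.1, 19.5, 20.4, 19.9, 19.7,   # baseline period
                 16.2, 15.1, 14.8, 13.9, 14.4]   # after the first change cycle (day 6 onwards)

plt.plot(days, mean_duration, marker="o")
plt.axhline(statistics.median(mean_duration), linestyle="--", label="median")
plt.axvline(5.5, color="grey", label="change introduced")
plt.xlabel("Day")
plt.ylabel("Mean duration of each check (min)")
plt.title("Run chart: routine neonatal checks")
plt.legend()
plt.show()
```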

2. Present data to key stakeholders

To help implement and sustain your project, present data frequently and judiciously to influential stakeholders, including staff, patients and families, the service manager and the clinical lead. Data are a key aspect of any case for change, and if the data that you present can be attributed to specific individuals or teams, they become all the more powerful, both as an impetus for improvement and as a way to reward achievement.

3. Feed back data frequently and in near real-time

Rapid reporting of results allows the team to modify and test incremental changes in a much more responsive way. Moreover, rapid (as near to real-time as possible) feedback to staff and families, through quality dashboards or run charts displayed visibly around clinical areas, can communicate the positive effect of change, especially during what is sometimes a challenging transition.

UNDERSTANDING VARIATION IN QUALITY IMPROVEMENT

Variation in complex systems is ubiquitous, and in healthcare this has major implications for efficiency, equity and safety. Unearthing and understanding variation is a vital diagnostic step in improving quality. One way of conceptualising variation is to consider variation between different healthcare services and variation within a service over time.

Variation between healthcare systems

Variation between areas or services plays an important role in allowing clinicians to compare their outcomes and performance with those of comparable services. This type of variation can be influenced by a number of factors outwith the control of the service itself, such as population demographics and socioeconomic status, or the effectiveness of other aspects of the care pathway. But persistent variation in the context of minimal differences in population factors highlights areas of ‘unwarranted variation’—variation which ‘cannot be explained by variation in patient illness or patient preferences’.10


From a quality improvement perspective, it highlights inequitable healthcare provision and provides a starting point for services to benchmark their performance against peers, and to stimulate learning from higher-performing areas or services.11 Data on regional, national and international outcomes are published in many formats, drawing on registries, audits and official statistics, and are a ready launch pad for many improvement initiatives.12–14

Box 2 Principles to maximise the value of data and measurement in quality improvement (adapted from Nelson et al3)

▸ Seek usefulness, not perfection, in measurement
– Data sets which are more limited but are pragmatic and adequate for measuring a local change are preferable to detailed but unwieldy data which are hard to use
▸ Use a balanced set of process, outcome and cost measures
– The clinical value of an intervention depends on evaluating both outcome and cost indicators. Process measurements help with future iterations of change
▸ Keep measurement simple: think big, but start small
– Strike a balance between the size and scope of indicator sets and the agility and speed of improvement cycles, in the context of system and causation complexity
▸ Use both quantitative and qualitative data as appropriate
– Effects of change may not always be quantifiable: qualitative data to assess patient/staff experience are vital measures of outcome and process
▸ Write down the operational definitions of the measures
– Defining an indicator specifically and simply makes for reproducible measurement, with greater face and construct validity
▸ Measure small, representative samples
– The emphasis is on usefulness, not perfection: small representative samples can allow for rapid feedback loops and agility of implementation with limited resources
▸ Build measurement into daily work
– Building data collection into routine clinical practices (or tailoring your data requirements to fit existing clinical practices) will help reduce the burden of collection and improve the sustainability of the project
▸ Develop a measurement team
– Where possible, this is the most sustainable way to embed measurement. Even where extra resource is not available, sharing the burden of measurement among team members gives support and ownership of an improvement initiative to a wider group

Variation within healthcare systems

As with variation between systems, the key is distinguishing between variation which is amenable to intervention and that which is inevitable and inherent in all complex systems like healthcare. The theory of exploring variation within a service, measured over time, originated with statistical process control (SPC)15 analysis in industrial practices focused on increasing reliability and eliminating waste.1 In healthcare, these concepts translate into the safety and efficiency of care. In SPC, predictable variation is known as ‘common cause’ variation and can never be completely eliminated.15 ‘Special cause’ variation describes unpredictable variation, or variation which is not explained by the design of the system itself but is due to an extrinsic factor. Special cause variation presents clinicians with opportunities for improving the safety, efficiency and quality of care.

Special cause variation can be distinguished from common cause variation using an SPC chart—a run chart on which statistical limits for common cause variation are drawn (see figure 3). Special cause variation manifests as data points which fall outside the control range (ie, beyond the two statistical limits for common cause variation), or as specific patterns of data points which, although they lie within the control range, behave unusually enough to betray an underlying extrinsic issue. Detecting abnormal signals in SPC charts can be rather complex and is beyond the remit of this article—for details on how to interpret SPC charts, see the ‘Further reading’ section below.

The importance of identifying special cause variation is twofold. First, it is important to ensure a process is stable (ie, not subject to special cause variation) prior to implementing incremental change and improvements using a PDSA cycle; an unstable system is a variable system, and any improvement following a quality improvement intervention cannot be accurately interpreted. Second, being able to identify factors which cause special cause variation can either help to target improvements or, where the variation is not amenable to improvement (eg, seasonal variations in demand), allow contingency plans to be put in place in advance to mitigate its effect.
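As a concrete illustration of where those statistical limits come from, the sketch below computes them for an individuals (XmR) chart using the conventional moving-range method; the 2.66 scaling factor is the standard constant for such charts, only one of several SPC signal rules is shown, and the weekly counts are invented purely for illustration.

```python
import statistics


def xmr_limits(values):
    """Control limits for an individuals (XmR) chart:
    centre line = mean, limits = mean +/- 2.66 * mean moving range."""
    centre = statistics.fmean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.fmean(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar


def special_cause_points(values):
    """Flag points falling outside the common-cause range (one simple SPC rule of several)."""
    lcl, _, ucl = xmr_limits(values)
    return [(week, v) for week, v in enumerate(values, start=1) if v < lcl or v > ucl]


# Hypothetical weekly counts of a process measure
weekly = [12, 14, 13, 15, 12, 13, 27, 14, 13, 12]
print(xmr_limits(weekly))
print(special_cause_points(weekly))   # week 7 stands out as a possible special cause
```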

Figure 2 Run chart showing the mean duration of baby check per day before and after implementing the first cycle of change.

CONCLUSIONS

Data and measurement in quality improvement serve a very specific purpose: to inform the design, implementation and evaluation of local interventions in a responsive and agile way. Although improvement science requires the nature of the data and the measurement methodologies to be different from those of clinical research, they are no less robust or effective. Making the measurement process more efficient, and presenting improvement data compellingly, can further maximise their impact and value. Data, intelligently interpreted, are the lifeblood of quality improvement, just as clinical research, intelligently interpreted, is the lifeblood of evidence-based medicine. Measurement drives improvement, and improvement leads to more measurement. If clinicians are as relentless in their ‘conscientious, explicit and judicious use’ of quality and performance data as they are with clinical evidence, continuous service improvement will rapidly become engrained in the culture of healthcare, to the ultimate benefit of our patients, their families and carers.

Further reading and useful links

The Institute for Healthcare Improvement has developed a Microsoft Excel template which makes producing a simple run chart very easy. It can be downloaded online (http://www.ihi.org/knowledge/Pages/Tools/RunChart.aspx). The National Health Service (NHS) Improvement System allows users to produce more complex SPC charts with its template, subject to registration with an NHS email account (http://system.improvement.nhs.uk/ImprovementSystem/Login.aspx). The NHS Institute for Innovation and Improvement website has an introductory guide to producing and interpreting SPC charts, based on the work of Donald Wheeler,16 at: http://www.institute.nhs.uk/quality_and_service_improvement_tools/quality_and_service_improvement_tools/statistical_process_control.html

Figure 3 An example of a statistical process control chart.

Acknowledgements The authors acknowledge the contribution of Dr Rebecca Ling and others in the postnatal and neonatal team at Croydon University Hospital, UK, for the case study described in the article. We would also like to thank Dr Bob Klaber and Dr Ian Wacogne for their support and critical review of the manuscript.

Contributors CRLHC and CL were both involved in the planning and design of the article, the production of revised drafts and final approval of the version published.

Competing interests None.

Provenance and peer review Commissioned; internally peer reviewed.

Data sharing statement Any audit data included are available from the corresponding author on request.

REFERENCES

1 Hastie T, Tibshirani R, Friedman J. The elements of statistical learning. 2nd edn. Springer, 2009. http://www-stat.stanford.edu/~tibs/ElemStatLearn/download.html (accessed 26 Sep 2013).
2 Sackett D, Rosenberg W, Gray JAM, et al. Evidence based medicine: what it is and what it isn’t. BMJ 1996;312:71.
3 Nelson EC, Splaine ME, Batalden PB, et al. Building measurement and data collection into medical practice. Ann Intern Med 1998;128:460–6.
4 Langley G, Nolan K, Nolan T, et al. The improvement guide: a practical approach to enhancing organizational performance. San Francisco, USA: Jossey Bass Publishers, 2009.
5 Batalden PB, Davidoff F. What is “quality improvement” and how can it transform healthcare? Qual Saf Health Care 2007;16:2–3.
6 Institute of Medicine, Committee on Quality of Health Care in America. Crossing the quality chasm: a new health system for the 21st century. Institute of Medicine, 2001. http://www.iom.edu/report.asp?id_5432 (accessed 26 Sep 2013).
7 McGlynn EA, Asch SM, Adams J. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635–45.
8 Lang ES, Wyer PC, Haynes RB. Knowledge translation in emergency medicine. Ann Emerg Med 2007;49:355–63.
9 Byers JF, Beaudin CL. The relationship between continuous quality improvement and research. J Healthc Qual 2002;24:4–8.
10 Wennberg JE. Tracking medicine: a researcher’s quest to understand healthcare. Oxford University Press, 2010.
11 Cheung CR, Gray JAM. Unwarranted variation in healthcare for children and young people. Arch Dis Child 2013;98:60–5.
12 NHS Right Care. NHS Atlas of variation in healthcare for children and young people March 2012. London, UK, 2012. http://www.rightcare.nhs.uk/index.php/atlas/children-and-young-adults/ (accessed 26 Sep 2013).
13 Epilepsy12 National Audit. http://www.rcpch.ac.uk/child-health/standards-care/clinical-audit-and-quality-improvement/epilepsy12-national-audit/results (accessed 26 Sep 2013).
14 Paediatric Intensive Care Audit Network. http://www.picanet.org.uk (accessed 26 Sep 2013).
15 Shewhart WA, Deming WE. Statistical method from the viewpoint of quality control. Dover Publications, 1986.
16 Wheeler D. Understanding variation. Knoxville: SPC Press Inc, 1995.
