Best Practice & Research Clinical Obstetrics and Gynaecology 29 (2015) 1132–1138


Measuring quality of maternity care

Katherine J. Collins, MBBS, MRCOG, Clinical Research Fellow and Senior Registrar in Obstetrics and Gynaecology*, Timothy Draycott, MD, FRCOG, Consultant Obstetrician¹

Department of Women's Health, The Chilterns, Southmead Hospital, Bristol, BS10 5NB, UK

Keywords: quality; quality indicators; clinical dashboard; process measures; outcome measures

Health-care organisations are required to monitor and measure the quality of their maternity services, but measuring quality is complex, and no universal consensus exists on how best to measure it. Clinical outcomes and process measures that are important to stakeholders should be measured, ideally in standardised sets for benchmarking. Furthermore, a holistic interpretation of quality should also reflect patient experience, ideally integrated with outcome and process measures into a balanced suite of quality indicators. Dashboards enable reporting of trends in adverse outcomes to stakeholders, staff and patients, and they facilitate targeted quality improvement initiatives. The value of such dashboards depends upon high-quality, routinely collected data, subject to robust statistical analysis. Moving forward, we could and should collect a standard, relevant set of quality indicators from routinely collected data, and present these in a manner that facilitates ongoing quality improvement, both locally and at regional/national levels. © 2015 Elsevier Ltd. All rights reserved.

* Corresponding author. Tel.: +44 0117 4146760. E-mail addresses: [email protected] (K.J. Collins), [email protected] (T. Draycott).
¹ Tel.: +44 0117 4146760.
http://dx.doi.org/10.1016/j.bpobgyn.2015.03.021
1521-6934/© 2015 Elsevier Ltd. All rights reserved.


Introduction

Purchasers, policymakers and patients are rightly demanding greater accountability for the money spent on health care and for the quality of care provided. This is particularly the case for maternity care, which could and should be safer [1]. The National Health Service Litigation Authority (NHSLA) in England published an analysis of 10 years of maternity claims: more than 5000 claims from 2000 to 2009, expected to cost up to £3.1 billion. This represents a litigation surcharge of £600 for each and every infant born in that decade [2], rising to £700 per baby in the current decade [3].

Previous scandals have led to calls for health-care organisations to develop and implement robust systems to measure and monitor the quality of their maternity services [4], and the UK government has mandated regular reporting of health-care quality indicators (QIs) [5]. However, the measurement of quality is difficult: quality is multifaceted [6], and we must ensure that measurement is broad enough to include what is important to all stakeholders, not merely what can easily be measured. Finally, although health services can be awash with data, there is very little guidance on how best to analyse and present information to staff and, in particular, to other stakeholders, including patients.

This article will discuss definitions of quality, quality measures and QIs, the use of maternity dashboards for monitoring quality and performance, and the importance of patient contributions with regard to maternal perceptions of quality of care.

How do we define quality of care?
Recent years have seen unprecedented efforts to measure health-care quality, and the methodological and pragmatic complexities of these efforts have led to major debates: which 'dimensions' of quality to measure; whether to focus on processes or outcomes; which outcomes to prioritise (traditional clinical outcomes or more patient-centred ones); and, perhaps most importantly, how to link measurement to action through policy, professional and management levers [7].

The Health Foundation has identified that many current systems for the measurement of quality are rather one-dimensional: 'what we currently measure is not how safe healthcare systems are now but how harmful they have been in the past' [6]. This is no less a problem in maternity care, where there have been a number of calls for a comprehensive approach to the measurement of quality [8], one that contains the multiple perspectives involved in maternity care [9], including those of staff [10].

Process measures

Process measures (e.g. caesarean section (CS) rate) and system measures (e.g. size of unit) are commonly employed in quality measurement, at least partly because they are easy to measure. There is also an implicit assumption that the hospitals that perform best on selected process measures will have the best health outcomes. Recently, this assumption has been challenged in maternity care [11]: a US research group demonstrated that although process measures may be associated with an adverse outcome, the hospitals that performed best on those measures did not have the best risk-adjusted rates of obstetric morbidity [12]. We are not suggesting that process measures are invalid or should not be measured and/or reported; process measures may provide valuable insight into a hospital service, and they could usefully be combined with clinical QIs to provide a balanced set of measures.

Clinical QIs

The use of a suite of clinical indicators or outcomes is one way to measure the quality of a clinical service.
Historically, the maternal mortality rate was the earliest measure of the quality of obstetric care [13]. It remains a crude but important indicator, still employed today in international comparisons. However, the steep decline in maternal deaths over the last few decades in the UK, and in many developed countries, limits its value.

A number of quality measurement outcome tools have been proposed to improve accountability and information sharing in maternity care [14]. These include the Adverse Outcome Index (AOI; the percentage of deliveries with one or more specified adverse events), the Weighted Adverse Outcome Score (WAOS) and the Severity Index (SI), which describes the severity of the outcomes [15]. However, they do not appear to have been widely implemented.

Legal claim analyses (LCAs) provide an important but narrow perspective on adverse clinical outcomes, and they could possibly be used as part of a portfolio of indicators; however, by their nature, they suffer from a significant lag time, which hinders timely feedback into clinical services [16].

Clinical outcome measures are appealing, but there can be issues with appropriate case-mix or population risk adjustment, and at least one group of surgeons has asked the NHS to reconsider the publication of mortality rates [17]. Certainly, CS rates in the UK vary with different population demographics [18]. Maternity risk managers have also highlighted the lack of accurate population risk adjustment as a significant obstacle to the measurement of clinical quality [10].

Problems of appropriate risk adjustment notwithstanding, effective quality monitoring relies on the identification of appropriate QIs based on high-quality data. Ideal QIs should be relevant to the area of care being monitored, measurable using routinely collected data and alterable by best practice. National best practice guidance has been published to help teams devise and employ good QIs within the UK health-care setting [19]. Although many QIs have been proposed and are in use in maternity care, there are no standardised, uniformly agreed sets of indicators.
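The composite outcome tools described above can be illustrated with a short sketch. The published AOI/WAOS/SI definitions [15] use a specific list of adverse events with validated weights; the event names and weights below are hypothetical placeholders, chosen only to show how the three scores relate to one another.

```python
# Hypothetical sketch: the real measures (Mann et al., 2006) use a defined
# event list and validated weights; these are illustrative placeholders.
ADVERSE_EVENT_WEIGHTS = {
    "maternal_death": 750,
    "uterine_rupture": 100,
    "third_or_fourth_degree_tear": 5,
}

def adverse_outcome_index(deliveries):
    """AOI: percentage of deliveries with one or more adverse events."""
    affected = sum(1 for events in deliveries if events)
    return 100.0 * affected / len(deliveries)

def weighted_adverse_outcome_score(deliveries):
    """WAOS: total weighted adverse-event score per delivery."""
    total = sum(ADVERSE_EVENT_WEIGHTS[e] for events in deliveries for e in events)
    return total / len(deliveries)

def severity_index(deliveries):
    """SI: mean weighted score among affected deliveries only."""
    affected = [events for events in deliveries if events]
    if not affected:
        return 0.0
    total = sum(ADVERSE_EVENT_WEIGHTS[e] for events in affected for e in events)
    return total / len(affected)

# Example: 4 deliveries, one complicated by a severe perineal tear
deliveries = [[], [], ["third_or_fourth_degree_tear"], []]
print(adverse_outcome_index(deliveries))           # 25.0
print(weighted_adverse_outcome_score(deliveries))  # 1.25
print(severity_index(deliveries))                  # 5.0
```

The three measures answer different questions: AOI asks how often anything goes wrong, WAOS spreads the weighted harm over all deliveries, and SI asks how severe the harm is when it does occur.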
Many calls have been made for a standard set of QIs, both internationally [8,9,20,21] and in the UK [11,22]. However, the current lack of structure and rigour has resulted in enormous variation in the QIs monitored and the definitions used: 290 clinical indicators were identified within 96 clinical categories, with up to 18 different definitions, in four sets of nationally recommended intrapartum QIs from the UK, Australia, the USA and Canada [22]. Moreover, in one UK region comprising 10 maternity units, there were 352 different QI definitions covering 37 different QIs, with up to 39 different definitions for a single indicator [23]. This is clearly unnecessary variation, and it should be streamlined; there is an urgent requirement for a national and international core set of maternity QIs.

Suites of indicators have been developed using robust methodologies: systematic review [21] and Delphi panels [8,22]. The USA has also developed a National Quality Forum Perinatal Care Core Measure Set that includes five very limited quality measures [24], which would appear relatively unambitious in UK practice.

Once a set of QIs has been selected, it is imperative that they are analysed using robust statistical methods. Unfortunately, this may not always be the case: in one review of a single UK health region [23], the overwhelming majority of units used arbitrary thresholds for adverse outcomes, and there was no benchmarking. A number of researchers have recommended the cumulative sum control chart (CUSUM) as the most appropriate method to monitor the relatively low-frequency adverse outcomes in health care [25] and maternity care [26,27]. Further guidance is urgently required to inform alert thresholds for adverse outcomes.
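The CUSUM approach recommended above can be sketched as a one-sided Bernoulli CUSUM for a low-frequency adverse outcome (e.g. a 5-minute Apgar score below 7). The baseline rate p0, the alert rate p1 and the decision limit h below are illustrative assumptions only; in practice they would be derived from local baseline data and a chosen false-alarm rate.

```python
# Hedged sketch of a one-sided Bernoulli CUSUM. Parameters are illustrative,
# not derived from any published maternity dataset.
import math

def bernoulli_cusum(outcomes, p0=0.01, p1=0.02, h=4.5):
    """Return the CUSUM path and the index of the first alert (or None).

    outcomes: iterable of 0/1 per birth (1 = adverse outcome).
    Per-observation weights follow the standard log-likelihood-ratio
    construction for a Bernoulli CUSUM testing p0 against p1.
    """
    w1 = math.log(p1 / p0)              # weight added when the event occurs
    w0 = math.log((1 - p1) / (1 - p0))  # (negative) weight when it does not
    s, path, alert = 0.0, [], None
    for i, x in enumerate(outcomes):
        s = max(0.0, s + (w1 if x else w0))  # reset at zero: one-sided chart
        path.append(s)
        if alert is None and s > h:
            alert = i
    return path, alert

# Example: a stream of births at the baseline rate raises no alert
path, alert = bernoulli_cusum([0] * 200)
print(alert)  # None
```

Because each event adds only log(p1/p0) to the statistic, a single adverse outcome never triggers an alert; a sustained run above the baseline rate is needed, which is precisely what makes CUSUM suitable for rare events where crude monthly rates are too noisy.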
Overall, clinical indicators that are measurable and alterable with best practice are essential to the useful measurement of quality, and there is at least one example from maternity care demonstrating that monitoring of QIs is both feasible and beneficial: an adverse trend in infants born with a low Apgar score was identified, allowing timely corrective action and an improvement in perinatal outcomes [28].

Patient-reported outcome measures

Quality measures must also have a direct relevance to patients' lives, including their experience of and satisfaction with the care they receive [7]. Satisfaction also depends on the values placed on different biomedical outcomes, which can vary widely between cultures and individuals [9]. For example, CS may be the preferred mode of delivery amongst a studied population of Brazilian women, but it is conversely perceived as a highly undesirable outcome amongst certain sub-Saharan African populations [9].


Various surveys and tools exist to evaluate these patient perceptions of service. Since October 2013, all NHS-funded maternity services have asked patients a single question: how likely would they be to recommend the services they have received to friends or family (the Friends and Family Test) if they needed similar care or treatment? The UK's Care Quality Commission (CQC) conducts triennial surveys of maternity service users in the UK. Its most recent survey collated the experiences of over 23,000 women who had a live birth between January and March 2013. The report measured quality issues for patients centred around their physical care both antenatally and postnatally, the care of their babies, attention to pain management and discharge arrangements, as well as the professionalism and competence of staff [29]. Ideally, patient-reported measures, such as results from the CQC's survey of women's experiences of maternity care, would be integrated with, and provide additional context for, a holistic interpretation of numerical indicators [30].

Data quality

'Garbage in = garbage out' was originally coined to describe the unquestioning way in which computers process whatever input they are given; it is equally relevant to the measurement of quality. Data quality is key to meaningful measurement of quality, whether of QIs or process measures, and the dangers of poor data were recently highlighted after the publication of the report Patterns of Maternity Care in English NHS Hospitals, which identified 11 performance indicators to compare performance between NHS maternity units. The data were derived from the NHS Hospital Episode Statistics (HES) system, which has significant problems with data completeness; key data fields such as gestational age and birthweight were missing in over 20% of records. An accompanying editorial concluded that HES data cannot be used to undertake this kind of analysis [30].
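A field-completeness check of the kind that exposed these problems is straightforward to run before any quality analysis. The sketch below is illustrative only: the field names are hypothetical, not the HES schema.

```python
# Hypothetical sketch: flag the proportion of records missing each key field
# before attempting any quality analysis. Field names are illustrative.
KEY_FIELDS = ["gestational_age_weeks", "birthweight_g"]

def completeness_report(records):
    """Return the percentage of records missing each key field."""
    n = len(records)
    return {
        field: 100.0 * sum(1 for r in records if r.get(field) is None) / n
        for field in KEY_FIELDS
    }

records = [
    {"gestational_age_weeks": 39, "birthweight_g": 3400},
    {"gestational_age_weeks": None, "birthweight_g": 3100},
    {"gestational_age_weeks": 41, "birthweight_g": None},
    {"gestational_age_weeks": None, "birthweight_g": None},
]
print(completeness_report(records))
# {'gestational_age_weeks': 50.0, 'birthweight_g': 50.0}
```

A report like this, run routinely, makes the "over 20% missing" problem visible before indicators are published rather than after.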
Other authors have highlighted concerns regarding the accuracy of HES data: in 2009–2010, there were 17,000 recorded male inpatient admissions to UK obstetric services [31], which seems unlikely. Moreover, HES does not collect neonatal data. However, local databases in the UK contain most of the data missing from HES [32], and they are amongst the most accurate data sets in the NHS, with over 94% agreement with the case notes in some analyses [33,34]. Therefore, it would seem appropriate to aggregate local databases into higher-order data sets to measure and, importantly, benchmark quality between units. This has proved feasible across the 10 maternity units of a whole NHS region [35]. Good-quality, routinely collected data are available across the NHS, and they should be harnessed to avoid costly and unnecessary manual data entry and duplication [36].

Finally, a core Maternity Services Data Set has been proposed by the Health and Social Care Information Centre in the NHS (http://www.hscic.gov.uk/maternityandchildren/maternity), which, at least in theory, is mandatory from May 2015. However, it has yet to be implemented, and it will take time to achieve data coverage of sufficient accuracy and quality to use. There will also be insufficient historical data for comparison, at least initially. Therefore, local data will be important for the foreseeable future.

Presentation

Presentation of information to stakeholders is an essential part of quality measurement, but there is a dearth of data to inform best practice. Graphical displays and tools to represent health outcomes date back at least to 1858, when Florence Nightingale employed a graphical display (the polar-area diagram) to present her finding that the majority of deaths in military hospitals were due to poor sanitation, not casualties in battle [37].
This revolutionised the care provided in military hospitals in the Crimea, and the use of visual data tools and displays is equally powerful in modern health-care systems. Clinical dashboards are frequently proposed to facilitate this process within UK maternity settings [38].

A maternity dashboard was first described in UK practice in 2005, at a hospital with several preventable maternal deaths, to help measure and manage what was described as serious clinical underperformance [39]. In response, the Chief Medical Officer's report into intrapartum deaths recommended that dashboards be piloted at several sites nationwide to monitor standards of care in maternity units [40]. In 2008, the Royal College of Obstetricians and Gynaecologists (RCOG) recommended that all maternity units implement a dashboard 'to plan and improve their maternity services' [38]. Within this guidance, the RCOG included an example dashboard, which used a red-amber-green (RAG) colour-coding system to alert users to changes in the rates or frequencies of selected events and QIs, against locally agreed standards, on a monthly basis.

Although recommended by the RCOG for use in all UK maternity settings, there are very few data in the literature on maternity dashboard use and development. There is an optimistic description of the implementation of a maternity dashboard in a London teaching hospital [41] and a much more guarded survey of dashboards across the South West NHS region [23]. A recent description of the feasibility of a simple dashboard using a standardised set of QIs shows great promise as a practical and pragmatic solution to the collection, measurement and presentation of clinical and process indicators [35]. However, it included no patient-reported outcome measures (PROMs), and more research is definitely required in this important area.

Conclusions and summary

High-quality health-care systems are those that produce the best outcomes with the fewest interventions, to the satisfaction of their patients, within a cost-effective framework. In a re-imagined approach, quality measurement in health care would be integrated with care delivery, address the challenges that confront staff every day, and reflect individual patients' preferences and goals for treatment and health outcomes [42]. There is a truism that 'we can only improve the things that we can measure' [43].
One recent commentary [11] recognised that the measurement of quality in maternity care should be made easier, more timely and more understandable in order to make rapid quality improvement feasible. Moreover, we should rely less on the self-assessment of risk processes and begin to prioritise what matters most: clinical outcomes and patient experience. We should collect and produce a standard, relevant set of QIs, ideally from routinely collected data, and present these in a manner that facilitates ongoing quality improvement.

Practice points

• A balanced suite of quality indicators (QIs) should include both clinical outcome and process measures.
• Patient-reported measures should ideally be integrated into a holistic interpretation of clinical and process indicators.
• Dashboards have been suggested as a way to present these data as information, but they should be statistically robust to be valid.
• Local databases should be aggregated into higher-order data sets to measure and benchmark quality between units.

Research agenda

• National and international consensus to define a core set of maternity quality indicators.
• Development of accurate methods to generate alert thresholds for adverse outcomes.
• Identification and prioritisation of consumer views on outcomes.


Conflict of interest

None declared.

References

[1] The King's Fund. Safe births: everybody's business. An independent inquiry into the safety of maternity services in England. London: King's Fund; 2008.
[2] NHS Litigation Authority. Ten years of maternity claims: an analysis of NHS Litigation Authority data. London: NHS Litigation Authority; October 2012.
[3] National Audit Office. Maternity services in England. London: Department of Health; 10 November 2013.
[4] Healthcare Commission. Investigation into 10 maternal deaths at, or following delivery at, Northwick Park Hospital, North West London Hospitals NHS Trust, between April 2002 and April 2005. Healthcare Commission; 14 August 2006. p. 1–120.
[5] Department of Health. High quality care for all. Department of Health; 2008.
*[6] Vincent C, Burnett S, Carthey J. The measurement and monitoring of safety. The Health Foundation; May 2013.
*[7] Mountford J, Shojania KG. Refocusing quality measurement to best support quality improvement: local ownership of quality measurement by clinicians. BMJ Qual Saf 2012;21(6):519–23.
*[8] Boulkedid R, Sibony O, Goffinet F, et al. Quality indicators for continuous monitoring to improve maternal and infant health in maternity departments: a modified Delphi survey of an international multidisciplinary panel. PLoS One 2013;8(4):e60663.
*[9] Pittrof R, Campbell OMR, Filippi VGA. What is quality in maternity care? An international perspective. Acta Obstet Gynecol Scand 2002;81(4):277–83.
[10] Simms RA, Yelland A, Ping H, et al. Using data and quality monitoring to enhance maternity outcomes: a qualitative study of risk managers' perspectives. BMJ Qual Saf 2014;23(6):457–64.
*[11] Draycott T, Sibanda T, Laxton C, et al. Quality improvement demands quality measurement. BJOG 2010;117(13):1571–4.
[12] Grobman WA, Bailit JL, Rice MM, et al. Can differences in obstetric outcomes be explained by differences in the care provided? The MFMU Network APEX study. Am J Obstet Gynecol 2014;211(2):147.e1–16.
[13] Drife J. Quality measures for the emergency obstetrics and gynaecology services. J R Soc Med 2001;94(Suppl. 39):16–9.
[14] Gee RE, Winkler R. Quality measurement: what it means for obstetricians and gynecologists. Obstet Gynecol 2013.
[15] Mann S, Pratt S, Gluck P, et al. Assessing quality in obstetrical care: development of standardized measures. Jt Comm J Qual Patient Saf 2006;32(9):497–505.
[16] Fox R, Yelland A, Draycott T. Analysis of legal claims – informing litigation systems and quality improvement. BJOG 2014;121(1):6–10.
[17] Boseley S. Surgeons ask NHS England to rethink policy of publishing patients' death rates. The Guardian, Saturday 31 January 2015. Sect. NHS.
[18] Bragg F, Cromwell DA, Edozien LC, et al. Variation in rates of caesarean section among English NHS trusts after accounting for maternal and clinical risk: cross sectional study. BMJ 2010;341:c5065.
[19] Association of Public Health Observatories. The good indicators guide: understanding how to use and choose indicators. Association of Public Health Observatories & NHS Institute for Innovation and Improvement; 2008.
[20] Bogossian F. An urgent call to implement systematic monitoring of a comprehensive set of quality indicators for maternity services. Women Birth 2010;23(1):36–40.
[21] Bonfill X, Roque M, Aller MB, et al. Development of quality of care indicators from systematic reviews: the case of hospital delivery. Implement Sci 2013;8:42.
*[22] Sibanda T, Fox R, Draycott TJ, et al. Intrapartum care quality indicators: a systematic approach for achieving consensus. Eur J Obstet Gynecol Reprod Biol 2013;166(1):23–9.
[23] Simms RA, Ping H, Yelland A, et al. Development of maternity dashboards across a UK health region; current practice, continuing problems. Eur J Obstet Gynecol Reprod Biol 2013;170(1):119–24.
[24] The Joint Commission. Specifications manual for Joint Commission National Quality Measures (v2013A1). Available at: http://manual.jointcommission.org/releases/TJC2013A/PerinatalCare.html.
[25] Spiegelhalter D, Sherlaw-Johnson C, Bardsley M, et al. Statistical methods for healthcare regulation: rating, screening and surveillance. J R Stat Soc Ser A Stat Soc 2012.
*[26] Sibanda T, Simms R, Draycott T, et al. Monitoring healthcare quality in an obstetrics and gynaecology department using a CUSUM chart. BJOG 2011;118(3):379–80 [author reply 380–1].
[27] Boulkedid R, Alberti C, Sibony O. Quality indicator development and implementation in maternity units. Best Pract Res Clin Obstet Gynaecol 2013.
[28] Sibanda T, Sibanda N, Siassakos D, et al. Prospective evaluation of a continuous monitoring and quality-improvement system for reducing adverse neonatal outcomes. Am J Obstet Gynecol 2009;201(5):480.e1–6.
*[29] Healthcare Commission. Towards better births – a review of maternity services in England. London; 2008.
*[30] Chappell LC, Calderwood C, Kenyon S. Understanding patterns in maternity care in the NHS and getting it right. BMJ 2013.
[31] Brennan L, Watson M, Klaber R, et al. The importance of knowing context of hospital episode statistics when reconfiguring the NHS. BMJ 2012;344:e2432.
[32] Kenney N, Macfarlane A. Identifying problems with data collection at a local level: survey of NHS maternity units in England. BMJ 1999;319(7210):619–22.
[33] Cleary R, Beard R, Coles J, et al. Comparative hospital databases: value for management and quality. Qual Health Care 1994;3(1):3–10.
[34] Cleary R, Beard RW, Coles J, et al. The quality of routinely collected maternity data. Br J Obstet Gynaecol 1994;101(12):1042–7.


[35] Simms R. An automated maternity dashboard: development & implementation with a qualitative analysis of staff opinions [MD thesis]. Bristol; 2015.
[36] Haelo. Maternity safety thermometer pilot data publication. Haelo; 2014. http://www.safetythermometer.nhs.uk/index.php?option=com_wrapper&view=wrapper&Itemid=370.
[37] Nightingale F. Notes on matters affecting the health, efficiency and hospital administration of the British Army. London; 1858.
[38] Arulkumaran S, Chandraharan E, Mahmood T, et al. Maternity dashboard – clinical performance and governance scorecard. London: RCOG; 2008.
[39] Healthcare Commission. Investigation into 10 maternal deaths at, or following delivery at, Northwick Park Hospital, North West London Hospitals NHS Trust, between April 2002 and April 2005. London: Healthcare Commission; 2006.
[40] Chief Medical Officer. Intrapartum-related deaths: 500 missed opportunities. London: NHS; 2007.
[41] Chandraharan E. Clinical dashboards: do they actually work in practice? Three-year experience with the Maternity Dashboard. Clin Risk 2010 September;176–82.
*[42] McGlynn EA, Schneider EC, Kerr EA. Reimagining quality measurement. N Engl J Med 2014;371(23):2150–3.
[43] Darzi A. Quality and the NHS next stage review. Lancet 2008;371(9624):1563–4.
