
Diabetic retinopathy care – an international quality comparison

Received 5 November 2012
Revised 19 April 2013
Accepted 9 October 2013

Carolina Elisabeth de Korte
Health Economics Consultant, Operations, The Rotterdam Eye Hospital, Rotterdam, The Netherlands

Dirk F. de Korne
Health Innovation, Singapore National Eye Centre, SingHealth, Singapore; Health Services & Systems Research, Duke-NUS Graduate Medical School Singapore, Singapore; and Institute of Health Policy & Management, Erasmus University, Rotterdam, The Netherlands

Jose P. Martinez Ciriano
The Rotterdam Eye Hospital, Rotterdam, The Netherlands

J. Robert Rosenthal
The New York Eye and Ear Infirmary, New York City, New York, USA

Kees Sol
The Rotterdam Eye Hospital, Rotterdam, The Netherlands

Niek S. Klazinga
Department of Social Medicine, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, The Netherlands, and

Roland A. Bal
Institute of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands

Abstract

International Journal of Health Care Quality Assurance, Vol. 27 No. 4, 2014, pp. 308-319. © Emerald Group Publishing Limited, 0952-6862. DOI 10.1108/IJHCQA-11-2012-0106

Purpose – The purpose of this paper is to study the appropriateness and use of quality indicators for international quality comparison of the diabetic retinopathy (DR) patient care process in one American and one Dutch eye hospital.
Design/methodology/approach – A 17-item DR quality indicator set was composed based on a literature review and systematically applied in two hospitals. Qualitative analysis entailed document study and 12 semi-structured face-to-face interviews with ophthalmologists, managers and board members of the two hospitals.
Findings – While the medical-clinical approach to DR treatment in both hospitals was similar, differences were found in the perception and operationalization of quality of care. Neither hospital systematically used outcome indicators for DR care. On the process level, the authors found larger differences. Similarities and differences were found in the structure of both hospitals. The hospitals' particular contexts influenced the interpretation and use of quality indicators.
Practical implications – Although quality indicators and quality comparison between hospitals are increasingly used in international settings, important local differences influence their application. Context should be taken into account. Since that context is locally bound and directly linked to the hospital setting, caution should be used when interpreting the results of quality comparison studies.
Originality/value – International quality comparison is increasingly suggested as a useful way to improve healthcare. Little is known, however, about the appropriateness and use of quality indicators in local hospital care practices.
Keywords Qualitative research, Quality assessment, Diabetic retinopathy, Eye hospitals
Paper type Case study


Introduction
Quality indicators – measurable variables representing an associated factor or quantity (Smith, 1990) – are employed with increasing frequency in hospitals (Basu et al., 2010; Epstein, 1995; Freeman, 2002). In Redefining Health Care, Porter and Teisberg (2006) argue that competition among healthcare providers should be value based and supported by data, and using quality indicators to compare providers has become a common phenomenon (Wait and Nolte, 2005). The literature describes two indicator systems: external (for accountability and verification) and internal (for hospital quality improvement) (Davies and Lampel, 1998; Smith, 1990, 1993; Tarr, 1995). By introducing quality indicators, hospital managers aim to close the gap between best practice and poor patient care (Sower et al., 2008). Reliance on indicators, however, risks the unintended consequence of ignoring performance systems (Freeman, 2002). Chronic illness outcome measures, such as those used in vascular diseases or diabetes mellitus (DM), are particularly problematic (Smith, 1995). Despite this, indicators are encouraged by governmental bodies as a tool for external justification (Joint Commission, 2010; Organisation for Economic Co-operation and Development, 2010). Besides national benchmark projects, international comparison has become a standard for determining best practice (Davis et al., 2007; Donahue and Van Ostenberg, 2000; García-Altés et al., 2007; Schoen et al., 2006).

As the use of quality indicators widens, studying their current use and applicability in practice is imperative. The need to account for context, such as physician and management perceptions and organizational features, is widely recognized (Berg, 1997, 1999; Berg and Goorman, 1999; Dixon et al., 2011; Gross et al., 2000; Jerak-Zuiderent and Bal, 2011). Additional research on quality indicator use in practice is necessary to determine whether using indicators to assess local hospital quality is worthwhile given differences in contextual features (Freeman, 2002). This is especially the case for the increasingly popular international comparisons.

We assessed care quality and quality indicator use for international quality comparison by studying one American and one Dutch eye hospital, focusing on the diabetic retinopathy (DR) patient process. Managers from both hospitals exchanged data and experiences and participated in quality comparison and benchmark initiatives (De Korne et al., 2010). Our purpose was to clarify how quality indicators were used and how appropriate they are, both generally and for international quality comparison in particular. Our research question was: "How appropriate are quality indicators for international comparison, given local hospital contextual differences?"

Conceptual framework
Performance measurement, using quality indicators, can be described as "a method of assessing the performance of individuals, organizations, services or processes" (Basu et al., 2010, p. 437). According to the literature, performance measurement requires quality indicators that fulfil certain requirements. Indicators need to be SMART: specific, measurable, acceptable, achievable, realistic, relevant and timely (Berg et al., 2005). Data used to assess quality through indicators have to be comparable across hospitals (Freeman, 2002; De Korne et al., 2010). While quality plays a major role in performance measurement, the interpretation of care quality varies (Barnetson and Cutright, 2000; Freeman, 2002; McColl et al., 2000). According to the World Health Organization (WHO) (2006), healthcare should be effective, efficient, accessible, acceptable, equitable and safe. This definition integrates most dimensions found in other studies (Aday and Andersen, 1974; Aiken et al., 2002; Chassin and Galvin, 1998; Davies et al., 2000; Lombarts et al., 2009; McGlynn et al., 2003; Schneider and Lieberman, 2001; Wennberg and Gittelsohn, 1973). The WHO definition is mainly designed to assess quality at the healthcare system level. At the individual hospital level, Donabedian (2005) interprets quality by dividing service quality into three assessable levels: outcome (effects on patient and population health status); process (what is done when giving and receiving care); and structure (the attributes and settings in which care occurs).

We focused on DR care delivery because patient groups can be easily identified, treatment paths are obvious and the patient route is known. DR, a DM consequence, affects the eye's blood vessels, has several severity stages and results in blindness without treatment (Watkins, 2003; NOG, 2006). To assess DR care quality in our two hospitals, we composed 17 quality indicators related to the DR patient process from the literature (Aldington et al., 1995; Kinnersley et al., 1999; De Korne et al., 2010; Mainz, 2003; Mechanic, 2001; Massachusetts Eye and Ear Infirmary (MEEI), 2010).

Indicators
We found only one report containing quality indicators particular to the DR patient process (MEEI, 2010). We therefore searched the literature for: indicators used in quality measurement within different patient processes; and important topics for quality measurement on patient processes that could be deduced from the articles. Based on this strategy, we found several measures used for DR quality indicators:

• Outcome – the literature mentions several aspects that are important to surgical outcome: total surgeries (MEEI, 2010; Rotterdam Eye Hospital (REH), 2011), post-operative complications (MEEI, 2010), surgical incident reports (MEEI, 2010) and cancelled DR surgeries (Berg et al., 2005; De Korne et al., 2010).

• Process – measures found important to quality at the process level were: average patients per consulting hour (Mechanic, 2001); waiting time before the first consult (De Korne et al., 2010); consult duration with the ophthalmologist (Mechanic, 2001; Kinnersley et al., 1999); turnaround time (De Korne et al., 2010); absent patients per consulting hour (De Korne et al., 2010); DR pre-examinations per patient (MEEI, 2010); consults performed by the same ophthalmologist (Kinnersley et al., 1999); and waiting time for surgery (De Korne et al., 2010).

• Structure – at the structural level, we found: scheduled total DM and DR patients to visit the hospital (MEEI, 2010); DR prevalence and incidence (Aldington et al., 1995); available ophthalmologists (Mainz, 2003); and total examination rooms available for DR patients (Mainz, 2003) to be important in the literature.
Based on these findings, we composed a three-level indicator set. All indicators fulfilled the SMART requirements and were validated by hospital staff in advance (Table I). We studied quality indicator appropriateness and use for international quality comparison.

Table I. Quality indicator set for DR hospital care
Each entry lists: quality dimension – indicator (reference).

Outcome
1. Efficient/effective – No. of vitrectomy surgeries (MEEI, 2010, p. 82; REH, 2011)
2. Safe – No. of post-operative DR surgery complications (MEEI, 2010, p. 86)
3. Safe – No. of reports of incidents during (DR) surgeries/no. of mistakes made during (DR) surgeries (MEEI, 2010, p. 85)
4. Efficient/acceptable – No. of cancelled DR-related surgeries (De Korne et al., 2010, p. 27; Berg et al., 2005, p. 68)

Process
5. Efficient/effective – Average no. of patients per consulting hour (Mechanic, 2001, p. 202)
6. Accessible/acceptable – Average waiting time before first consult (De Korne et al., 2010, p. 27)
7. Efficient/effective – Duration of the consult with the ophthalmologist (Mechanic, 2001, p. 200)
8. Accessible/acceptable – Turnaround time/average waiting time for an additional consult (Kinnersley et al., 1999, p. 712)
9. Efficient – No. of absent patients per consulting hour (De Korne et al., 2010, p. 27)
10. Efficient/effective – No. of pre-examinations for DR patients per patient (MEEI, 2010, p. 13)
11. Equitable/acceptable – No. of consults for DR patients performed by the same ophthalmologist (Kinnersley et al., 1999, p. 712)
12. Accessible/acceptable – Average waiting time for DR-related surgery (De Korne et al., 2010, p. 27)

Structure
13. Accessible/efficient – No. of retina patients visiting the hospital every year (MEEI, 2010, p. 12)
14. Accessible/efficient – No. of DR patients visiting the hospital every year (MEEI, 2010, p. 12; REH, 2011)
15. Effective/efficient – No. of diabetic patients with DR on the first visit (Aldington et al., 1995, p. 439)
16. Efficient/effective/accessible – No. of DR ophthalmologists available in the hospital (Mainz, 2003, p. 526)
17. Efficient/effective/accessible – No. of examination rooms at the Retina Department/during a consultation hour (Mainz, 2003, p. 526)

Methods
We used mixed methods:

(1) A literature review to identify relevant quality indicators and their conditions. We searched Medline, PubMed, Wiley Online Library, Web of Science and Google Scholar for quality indicators related to the DR patient process in hospitals. We used the following keywords: DR, quality, indicators, performance, healthcare and international comparison. Although we found several hundred articles, most were related to DR treatment rather than the patient process. Because we found only one report containing DR patient process quality indicators, we searched for relevant literature from which we could translate information to quality indicators. Since DR indicators were not available, we widened our literature search to include other eye-care specialties. We found several articles related to the cataract and glaucoma patient processes; three were relevant to the DR process. We also found five articles related to general quality issues that could be translated to the DR process.

(2) Our research was based on qualitative data collection methods in two case hospitals: NL-H (in the Netherlands) and US-H (in the USA). Both were specialist eye hospitals, which exchanged data and practice experiences, participated in benchmark projects and were familiar with quality comparison projects.


(3) We used participant observation – a process that allows researchers to learn about the research subjects' activities in their natural settings through observation and participation (DeWalt and DeWalt, 2002). By participating in the hospital patient processes, we were able to map the DR patient flow and understand the patient process. Participation lasted six months in NL-H and three months in US-H.

(4) Semi-structured interviews were a primary research method. We conducted 12 semi-structured, face-to-face interviews (six in the Netherlands and six in the USA) with key stakeholders in the DR process in the two hospitals. Interviewees were ophthalmologists (n = 3), a resident (a medical school graduate who practises care under fully licensed physician supervision) (n = 1), a clerk (n = 1), managers (n = 4) and board members (n = 3). Interviews were guided by a topic list based on our main subjects: care quality, quality indicators and potential improvements. Questions were about respondents' functions and daily activities, their involvement in the DR patient process, primary improvements to the process, care quality, and quality indicators as an initiative. The relatively small number of interviews was compensated for by the purposive selection of participants and the interviews' in-depth character. Interviews lasted two hours maximum, were recorded, transcribed verbatim and analysed. Interview data were combined in each hospital's case study database and analysed using Donabedian's outcome, process and structure levels.
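To make the indicator set's structure concrete, the following minimal sketch encodes a few rows of Table I as a typed data structure, grouped by Donabedian level and WHO quality dimension. It is an illustration only, not part of the study's instrumentation; the Python names (Level, Indicator, by_level) are assumptions introduced here.

```python
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    # Donabedian's (2005) three assessable levels
    OUTCOME = "outcome"
    PROCESS = "process"
    STRUCTURE = "structure"


@dataclass(frozen=True)
class Indicator:
    # One row of Table I: number, Donabedian level, WHO quality
    # dimension(s), indicator description and literature reference.
    number: int
    level: Level
    dimensions: tuple
    description: str
    reference: str


# Three illustrative rows from the 17-item set in Table I.
INDICATOR_SET = [
    Indicator(1, Level.OUTCOME, ("efficient", "effective"),
              "No. of vitrectomy surgeries", "MEEI (2010); REH (2011)"),
    Indicator(7, Level.PROCESS, ("efficient", "effective"),
              "Duration of the consult with the ophthalmologist",
              "Mechanic (2001)"),
    Indicator(16, Level.STRUCTURE, ("efficient", "effective", "accessible"),
              "No. of DR ophthalmologists available in the hospital",
              "Mainz (2003)"),
]


def by_level(level: Level) -> list:
    """Return the subset of the indicator set at one Donabedian level."""
    return [i for i in INDICATOR_SET if i.level == level]


print([i.number for i in by_level(Level.PROCESS)])  # [7]
```

Encoding the set this way would, for example, let two hospitals verify that they are exchanging values for the same indicators before comparing them.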

Validity
To increase construct validity, written respondent feedback on the preliminary results was subsequently incorporated. Data triangulation was used by comparing data gathered during the interviews with participant observations and with documents. Bias was constrained by using one main researcher (LdK), who was independent and new to both hospitals. Additionally, peer review was performed by researchers from outside the ophthalmic and hospital community, and member checks were used to reduce bias further. For the same reason, those who were studied during the case studies (physicians, nurses and managers) reviewed and commented on the research outcomes.

Case Hospital 1: NL-H
NL-H, founded in 1874, is the only independently operating public eye hospital in the Netherlands (population 16 million), providing secondary eye care for the region and tertiary eye care for the country. As a major referral centre, NL-H staff handle approximately 140,000 outpatient visits and 14,500 surgical cases yearly. In all, 30 specialized ophthalmologists, four anaesthesiologists and four internists, not employed by the hospital, maintain their practices in partnership with hospital managers. The hospital, which has 400 employees and operates resident and fellow (post-residency sub-specialty training) programmes and a research institute, is an American Association of Eye and Ear Centres of Excellence (AAEECE) member and a founding member of the European and World Association of Eye Hospitals.

Case Hospital 2: US-H
US-H, founded in 1820 as the first specialty hospital, is a voluntary, not-for-profit hospital providing secondary and tertiary ophthalmology and otolaryngology care. US-H concentrates on Manhattan's Lower East Side, Brooklyn and Queens.

It also serves regional and (inter)national communities. US-H, which is part of the larger Continuum Health Partners Inc. hospital network while having its own self-perpetuating board, is involved in community outreach, graduate and medical education, and scientific research. With about 15 employed (and 680 voluntary) ophthalmologists, 10 anaesthesiologists and 700 employees, US-H staff handle about 28,000 surgical cases and 136,000 in-house outpatient visits. There has been a long-term relationship with NL-H, since both hospitals are active AAEECE members.


Findings
Outcomes for all 17 indicators are listed in Table II. Although we found a similar medical-clinical approach to DR treatment in the two hospitals, differences were found in care quality perception and operationalization. The differences can be classified as: outcome, process and structure.

Limited quality indicators on outcome
No outcome quality indicators were available for NL-H (Table II). Although managers in both hospitals tracked total vitrectomy surgeries (1,500 in NL-H and 1,377 in US-H), only US-H staff monitored post-operative complications through incident reports. The US-H managers developed 15 core measures for external justification and transparency purposes. The NL-H managers, however, seemed more interested in using quality indicators internally:

The result or outcome means nothing if the process is not addressed […] The important thing is improvement, for which the indicator is a supporting device (NL-H Board Member).

In other words, NL-H managers focus on outcome indicators' relevance to process improvement.

Differences on process quality indicators
On the process level, we found no large dissimilarities in the consulting hour, patients treated, patient absence and average waiting time before a first consult. The only deviation appeared in consult duration, which lasted about 10 and 20 minutes in NL-H and US-H, respectively (Table II). This seems to be related to differences in the patient-physician relationship, care pathway design and residents' responsibilities.

Differences in the patient-physician relationship were apparent in their interaction. In NL-H, patients are placed in a more central position in the care process, assuming a self-managing attitude; in US-H, the physicians fulfil the more central and managing position. Another difference appeared in the consult pattern. In NL-H, ophthalmologists mainly focus on DR, having been informed in advance about the patient's medical history by his or her general practitioner (GP). The US-H ophthalmologists had to adapt to missing GP information about the patient, which resulted in asking about the patient's history during the consult:

It is not efficient, no, it is very time consuming. But the problem is that not everybody asks patients about their background and because I do not know who does or does not, I have to (US-H Ophthalmologist).

Taking time to attend to and effectively treat the patient is considered the most important care provision in US-H, whereas in NL-H, "the most important thing is a better flow of patients" (NL-H Ophthalmologist). To achieve efficiency, NL-H managers deploy residents.


Table II. Quality comparison outcomes
Each row lists: indicator; performance NL-H (the Netherlands); performance US-H (USA). na = not available.

Outcome
1. No. of vitrectomy surgeries. NL-H: 1,500. US-H: 1,377.
2. No. of post-operative DR surgery complications. NL-H: na. US-H: post-operative wound infections 2 (0.01%); wound dehiscence eye 11 (0.01%); unplanned readmissions within 30 days 87 (0.31%); unplanned RTOR within 30 days 81 (0.29%); post-operative bleeding RTOR 14 (0.75%); vitrectomy 161 (1.28%); hemorrhages eye 2 (0.10%).
3. No. of reports of incidents during (DR) surgeries/no. of mistakes made during (DR) surgeries. NL-H: na. US-H: na.
4. No. of cancelled surgeries. NL-H: na. US-H: na.

Process
5. Average no. of patients per consulting hour. NL-H: about 20. US-H: about 22.
6. Average waiting time before first consult. NL-H: about three weeks. US-H: about three weeks.
7. Duration of the consult with the ophthalmologist. NL-H: about 10 minutes. US-H: about 20 minutes.
8. Turnaround time. NL-H: na. US-H: na.
9. No. of absent patients per consulting hour. NL-H: about 12 per cent. US-H: about 10-15 per cent.
10. No. of pre-examinations for DR patients. NL-H: na. US-H: na.
11. No. of consults performed by the same ophthalmologist. NL-H: na. US-H: all.
12. Average waiting time before surgery. NL-H: na. US-H: na.

Structure
13. No. of retina patients. NL-H: na. US-H: 50,000.
14. No. of DR patients. NL-H: 10,023. US-H: na.
15. No. of DM patients with DR on first visit. NL-H: about 17 per cent. US-H: about 80 per cent.
16. No. of DR ophthalmologists. NL-H: 4. US-H: 21.
17. No. of examination rooms at the Retina Department/during a consultation hour. NL-H: 4. US-H: 16.

Comparing NL-H and US-H, we found a difference in residents' responsibilities:

Our position on the residents is that they need to be taught, they are not labourers […] I have to be on top of them, double-check everything; that's how I run my clinic (US-H Ophthalmologist).

Unlike NL-H residents, US-H residents are continually supervised by their attending ophthalmologists and are not allowed to treat their own patients. According to US-H managers, this raises service quality and creates trust between patient and physician. The US-H ophthalmologists believe such trust to be a main care quality feature;


therefore, consults for individual patients in US-H are performed by the same ophthalmologist.

Similarities and differences in quality indicator structure
We found differences in the hospitals' general structure, total ophthalmologists, total visiting patients and the DR prevalence rate at the first visit (Table II). Both hospitals cope with increasing numbers of DR patients, although the US-H increase is far above NL-H's. This has resulted in a separate US-H Retina Department, employing 21 DR ophthalmologists; NL-H has no specialized department. The difference in total ophthalmologists seems to be related to cooperation: the NL-H staff work with four DR ophthalmologists joined in partnership, while the US-H staff includes fully employed ophthalmologists plus private practitioners. Besides the difference in total patients visiting the hospitals each year, we found a difference in the share of DM patients who already had DR on their first visit (17 and 80 per cent in NL-H and US-H, respectively; Table II, indicator 15). This deviation is likely to affect the physician's approach, resulting in differences in the treatment process, and seems to be related to US-H's multicultural environment and the US insurance system's influence. Insurance systems guide primary and preventive care in both countries, but primary care's lesser role in the USA affects tertiary care providers like eye hospitals:

We're here for the complications […] There's no question that preventive care is the way we need to go. But the industry and businesses fight it (US-H Board Member).

Discussion and conclusion
Introducing market competition and accountability in healthcare pushes providers to focus on performance and deliver added value for customers. As such, the collection and use of quality indicators in the healthcare sector are rapidly increasing. Indicators are encouraged by governmental bodies as a tool for external justification, and hospital staff are expected to participate. Besides national benchmark projects, international comparison has become a standard for determining best practice. Healthcare, however, is mostly provided to local markets by local suppliers, making international comparison difficult. As the use of quality indicators widens, it is important to study their use and applicability in practice. The need to take context into account is widely recognized (Berg, 1997; Dixon et al., 2011; Gross et al., 2000), and additional research is necessary to conclude whether indicators for assessing local hospital care quality are useful given their contextual features (Freeman, 2002). Our study focused on quality indicator appropriateness and use for comparing DR care in two local eye hospitals – one in the Netherlands and one in the USA.

Our study has limitations: it includes only two specialty hospital case studies, and the analysis is based on participant observations and only a few interviews. Observing the hospital environment with preconceptions can bias the analyses. Performing a case study in a natural setting, however, has the advantage of being able to study the circumstances in real-world practice in much greater detail than in an experiment.

We found differences regarding interpreting and accounting for outcome, process and structure indicators in both hospitals. We suspect that these findings are not unique to eye care or international quality comparisons and are thus relevant to quality comparison in other healthcare settings. Although we found a similar clinical approach to DR treatment in both hospitals, differences in care quality perception and operationalization were found (Table III). At the outcome level, indicators were not always available.


Table III. Main characteristics in the two case hospitals
Each row lists: eye hospital NL-H (the Netherlands) versus eye hospital US-H (USA).

Outcome
1. No indicators available / Some general indicators available
2. Indicators mainly for internal improvement and transparency / Indicators mainly for external justification

Process
3. More cooperation between ophthalmologist and GP / Less cooperation between ophthalmologist and GP
4. Treatment process less coordinated by professional, strong use of patient input / Treatment process more coordinated by professional
5. Residents run "own" outpatient clinic without continuous supervision / Residents run outpatient clinic under continuous supervision

Structure
6. No set-aside DR Department / Set-aside DR Department
7. Ophthalmologists in partnership / Ophthalmologists private or employed
8. Low number of incoming patients with DR / High number of incoming patients with DR
9. Large role for primary and preventive care / Small role for primary and preventive care

We found differences in the way quality outcome indicators were interpreted, which influenced internal improvement and external justification for NL-H and US-H, respectively (Table III, item 2). At the process level, we found differences in shared care and in medical residents' productivity as a solution to increasing DR patient numbers. Furthermore, the NL-H care process was more focused on the patient, compared to US-H's more professionally focused care. At the structural level, we found differences in primary and preventive care, which may account for the difference in DR prevalence at the first visit and led to the development of a separate US-H Retina Department (Table III, item 6). We also found differences in cooperation between hospital staff and ophthalmologists.

We conclude that using and interpreting local eye hospital quality indicators for international quality comparison purposes is highly dependent on context. We found that physician and manager perceptions, organizational features and current indicator use affect indicator results and their interpretation. Although international quality comparison seems to be a structured and rational data-sharing process, this study shows that it is highly dependent on contextual and social processes. Using quality indicators appropriately for international performance measurement and quality comparison depends on the contextual differences between hospitals. Since this context is locally bound and directly linked to the hospital setting, caution should be used when interpreting quality comparison studies.

Practical implications
Given the existing case-mix differences, different organizational characteristics and different systems, it does not seem appropriate to use quality indicators for international performance measurement and quality comparison. If we insist on comparing hospital practices using indicators, then it might be useful to develop or use existing outcome indicators (Porter and Teisberg, 2006). These should be integrated into the individual hospital's well-structured performance management systems to stimulate sustainable improvements at the process level. A nationwide or even global accreditation system (like the Joint Commission) could support exchanging comparable data between individual hospitals (De Korne et al., 2012). Even if the systems are similar, however, barriers to indicator comparisons will remain. Furthermore, the general accreditation standards' applicability and appropriateness for specialty


hospitals can be discussed (Al-Almin et al., 2010). Specialty hospital staff should develop their own specialty-based accreditation systems, using current benchmark indicators as a first step (De Korne et al., 2012). To compete on value, cooperation between hospital managers and clinicians in the areas of quality, efficiency and strategy is essential.
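To illustrate this point, the sketch below shows one way a benchmarking exchange might guard against non-comparable values: an indicator value pair is accepted for comparison only when both hospitals report a value and the named contextual features match. The function and field names are hypothetical assumptions; no such system is described in this study.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class IndicatorValue:
    # One hospital's reported value for one indicator.
    # value=None models the "na" cells of Table II.
    indicator_no: int
    value: Optional[float]
    context: dict  # contextual features, e.g. case mix, system role


def comparable(a: IndicatorValue, b: IndicatorValue, keys: list) -> bool:
    """Accept a value pair for comparison only when both hospitals
    report a value and the named contextual features match."""
    if a.value is None or b.value is None:
        return False  # an "na" on either side leaves nothing to compare
    return all(a.context.get(k) == b.context.get(k) for k in keys)


# Indicator 15 (share of DM patients with DR on first visit, Table II):
nl_h = IndicatorValue(15, 0.17, {"primary_care_role": "large"})
us_h = IndicatorValue(15, 0.80, {"primary_care_role": "small"})
print(comparable(nl_h, us_h, ["primary_care_role"]))  # False: contexts differ
```

A guard of this kind would make explicit, in the data exchange itself, the paper's conclusion that indicator values travel poorly across contexts.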

References
Aday, L.A. and Andersen, R. (1974), "A framework for the study of access to medical care", Health Services Research, Vol. 9 No. 3, pp. 208-220.
Aiken, L.H., Clarke, S.P. and Sloane, D.M. (2002), "Hospital staffing, organization, and quality of care: cross-national findings", International Journal for Quality in Health Care, Vol. 14 No. 1, pp. 5-13.
Aldington, S.J., Kohner, E.M., Meuer, S., Klein, R. and Sjølie, A.K. (1995), "Methodology for retinal photography and assessment of diabetic retinopathy: the EURODIAB IDDM complications study", Diabetologia, Vol. 38 No. 4, pp. 437-444.
Al-Almin, M., Zinn, J., Rosko, M.D. and Aaronson, W. (2010), "Specialty hospital market proliferation: strategic implications for general hospitals", Healthcare Management Review, Vol. 35 No. 4, pp. 294-300.
Barnetson, B. and Cutright, M. (2000), "Performance indicators as conceptual technologies", Higher Education, Vol. 40 No. 3, pp. 277-292.
Basu, A., Howell, R. and Gopinath, D. (2010), "Clinical performance indicators: intolerance for variety?", International Journal of Health Care Quality Assurance, Vol. 23 No. 4, pp. 436-449.
Berg, M. (1997), "Problems and promises of the protocol", Social Science & Medicine, Vol. 44 No. 8, pp. 1081-1088.
Berg, M. (1999), "Patient care information systems and healthcare work: a socio-technical approach", International Journal of Medical Informatics, Vol. 55 No. 2, pp. 87-101.
Berg, M. and Goorman, E. (1999), "The contextual nature of medical information", International Journal of Medical Informatics, Vol. 56 Nos 1-3, pp. 51-60.
Berg, M., Meijerink, Y., Gras, M., Goossensen, A., Schellekens, W., Haeck, J., Kallewaard, M. and Kingma, H. (2005), "Feasibility first: developing public performance indicators on patient safety and clinical effectiveness for Dutch hospitals", Health Policy, Vol. 75 No. 1, pp. 59-73.
Chassin, M.R. and Galvin, R.W. (1998), "The urgent need to improve health care quality", The Journal of the American Medical Association, Vol. 280 No. 11, pp. 1000-1005.
Davies, H.T.O. and Lampel, J. (1998), "Trust in performance indicators?", Quality in Health Care, Vol. 7 No. 3, pp. 159-162.
Davies, H.T.O., Nutley, S.M. and Mannion, R. (2000), "Organisational culture and quality of health care", Quality in Health Care, Vol. 9 No. 2, pp. 111-119.
Davis, K., Schoen, C., Schoenbaum, S.C., Doty, M.M., Holmgren, A.L., Kriss, J.L. and Shea, K.K. (2007), Mirror, Mirror on the Wall: An International Update on the Comparative Performance of American Health Care, The Commonwealth Fund, Washington, DC.
De Korne, D.F., Van Wijngaarden, J.D., Sol, K.J., Betz, R., Thomas, R.C., Schein, O.D. and Klazinga, N.S. (2012), "Hospital benchmarking: are US eye hospitals ready?", Healthcare Management Review, Vol. 37 No. 2, pp. 187-198.
De Korne, D.F., Sol, K.J., Van Wijngaarden, J.D., Van Vliet, E.J., Custers, T., Cubbon, M., Spileers, W., Ygge, J., Ang, C.L. and Klazinga, N.S. (2010), "Evaluation of an international benchmarking initiative in nine eye hospitals", Healthcare Management Review, Vol. 35 No. 1, pp. 23-35.
DeWalt, K.M. and DeWalt, B.R. (2002), Participant Observation: A Guide for Fieldworkers, AltaMira Press, Walnut Creek, CA.


Dixon, A., Robertson, R. and Bal, R. (2011), "The experience of implementing choice at point of referral: a comparison of the Netherlands and England", Health Economics, Policy and Law, Vol. 5 No. 3, pp. 295-317.
Donabedian, A. (2005), "Evaluating the quality of medical care", The Milbank Quarterly, Vol. 83 No. 4, pp. 691-729.
Donahue, K.T. and Van Ostenberg, P. (2000), "Joint Commission International accreditation: relationship to four models of evaluation", International Journal for Quality in Health Care, Vol. 12 No. 3, pp. 243-246.
Epstein, A. (1995), "Performance reports on quality – prototypes, problems and prospects", New England Journal of Medicine, Vol. 333 No. 1, pp. 57-61.
Freeman, T. (2002), "Using performance indicators to improve healthcare quality in the public sector: a review of the literature", Health Services Management Research, Vol. 15 No. 2, pp. 126-137.
García-Altés, A., Zonco, L., Borrell, C. and Plascència, A. (2007), "Measuring the performance of urban healthcare services: results of an international experience", Journal of Epidemiology & Community Health, Vol. 61 No. 9, pp. 791-796.
Gross, P.A., Braun, P.I., Kritchevsky, S.B. and Simmons, B.P. (2000), "Comparison of clinical indicators for performance measurement of healthcare quality: a cautionary note", British Journal of Clinical Governance, Vol. 5 No. 4, pp. 202-211.
Jerak-Zuiderent, S. and Bal, R. (2011), "Locating the worth of performance indicators. Performing transparencies and accountabilities in health care", in Rudinow Sætnan, A., Mork Lomell, H. and Hammer, S. (Eds), The Mutual Construction of Statistics and Society, Routledge, London, pp. 224-244.
Joint Commission (2010), "Evolution of performance measurement at The Joint Commission 1986-2010", available at: www.jointcommission.org/assets/1/18/SIWG_Prologue_web_version.pdf (accessed 4 March 2012).
Kinnersley, P., Stott, N., Peters, T.J. and Harvey, I. (1999), "The patient-centredness of consultations and outcome in primary care", British Journal of General Practice, Vol. 49 No. 446, pp. 711-716.
Lombarts, M.J.M.H., Rupp, I., Vallejo, P., Suñol, R. and Klazinga, N.S. (2009), "Application of quality improvement strategies in 389 European hospitals: results of the MARQuIS project", Quality and Safety in Health Care, Vol. 18 No. 1, pp. 28-37.
McColl, A., Roderick, P., Wilkinson, E., Gabbay, J., Smith, H., Moore, M. and Exworthy, M. (2000), "Clinical governance in primary care groups: the feasibility of deriving evidence-based performance indicators", Quality in Health Care, Vol. 9 No. 2, pp. 90-97.
McGlynn, E.A., Asch, S.M., Adams, J., Keesey, J., Hicks, J., DeCristofaro, A. and Kerr, E.A. (2003), "The quality of health care delivered to adults in the United States", The New England Journal of Medicine, Vol. 348 No. 26, pp. 2635-2645.
Mainz, J. (2003), "Defining and classifying clinical indicators for quality improvement", International Journal for Quality in Health Care, Vol. 15 No. 6, pp. 523-530.
Massachusetts Eye and Ear Infirmary (MEEI) (2010), Quality and Outcomes, Department of Ophthalmology, MEEI, Boston, MA.
Mechanic, D. (2001), "Are patients' office visits with physicians getting shorter?", The New England Journal of Medicine, Vol. 344 No. 3, pp. 198-204.
Nederlands Oogheelkundig Gezelschap (NOG) (2006), Richtlijn Diabetische Retinopathie: Screening, Diagnostiek en Behandeling, Van Zuiden Communications B.V., Alphen aan den Rijn.
Organisation for Economic Co-operation and Development (2010), Improving Value in Health Care: Measuring Quality, Organisation for Economic Co-operation and Development, Paris.


Porter, M.E. and Teisberg, E.O. (2006), Redefining Health Care: Creating Value-Based Competition on Results, Harvard Business School Press, Boston, MA.
Rotterdam Eye Hospital (REH) (2011), "Organisatie", available at: www.oogziekenhuis.nl/over-het-oogziekenhuis/center-of-excellence.html (accessed 19 May 2011).
Schneider, E.C. and Lieberman, T. (2001), "Publicly disclosed information about the quality of health care: response of the US public", Quality in Health Care, Vol. 10 No. 2, pp. 96-103.
Schoen, C., Davis, K., How, S.H.K. and Schoenbaum, S.C. (2006), "US health system performance: a national scorecard", Health Affairs, Vol. 25 No. 6, pp. w457-w475.
Smith, P. (1990), "The use of performance indicators in the public sector", Journal of the Royal Statistical Society, Vol. 153 No. 1, pp. 53-72.
Smith, P. (1993), "Outcomes related performance indicators and organizational control in the public sector", British Journal of Management, Vol. 4 No. 3, pp. 135-151.
Smith, P. (1995), "Performance indicators and outcome in the public sector", Public Money and Management, Vol. 15 No. 4, pp. 13-16.
Sower, V.E., Duffy, J.A. and Kohers, G. (2008), Benchmarking for Hospitals: Achieving Best-in-Class Performance Without Having to Reinvent the Wheel, ASQ Quality Press, Milwaukee, WI.
Tarr, J.D. (1995), "Performance measurement for a continuous improvement strategy", Hospital Material Management and Quality, Vol. 18 No. 12, pp. 77-85.
Wait, S. and Nolte, E. (2005), "Benchmarking health systems: trends, conceptual issues and future perspectives", Benchmarking: An International Journal, Vol. 12 No. 5, pp. 436-448.
Watkins, P.J. (2003), "ABC of diabetes: retinopathy", British Medical Journal, Vol. 326 No. 7395, pp. 924-926.
Wennberg, J. and Gittelsohn, A. (1973), "Small area variations in health care delivery", Science, Vol. 182 No. 4117, pp. 1102-1108.
World Health Organization (WHO) (2006), Quality of Care: A Process for Making Strategic Choices in Health Systems, WHO Press, Geneva.

Further reading
Bal, R. and Zuiderent-Jerak, T. (2011), "The practice of markets. Are we drinking from the same glass?", Health Economics, Policy and Law, Vol. 6 No. 1, pp. 139-145.

Corresponding author
Carolina Elisabeth de Korte can be contacted at: [email protected]
