When Is the Quality of Care Good Enough?

The paper by Retchin and Brown in this issue of the Journal¹ reports that "recommended elements of routine and preventive care" were more likely to be provided to elderly patients by doctors practicing in four staff/group model HMOs (health maintenance organizations) than by doctors practicing in four IPA (independent practice association) or network model HMOs who, in turn, were more likely to provide the elements than doctors delivering fee-for-service care.

As a physician-manager in an HMO, my initial reaction was to be most heartened by the results. After all, they are consistent with the belief of many of us that managed care organizations, particularly staff and group model HMOs, can manage and improve the quality of care, not just the costs of care. My next reaction was concern: the study reported here grows out of a demonstration project, not out of a rigorous experimental design. The HMOs may not be representative; the fee-for-service physicians may not be representative. In short, the results may not be generalizable.

With time, my initial reactions have given way successively to introspection and speculation. I find this paper to be highly provocative. For me, it clearly raises many more questions than it answers, and I would like to share them. To further discussion, I would ask readers to assume with me that the results of this study are valid and will prove to be generalizable.

You will note in this study of ambulatory care performance that in not one of the care systems is even one measure performed at the 100 percent level. Is it good practice, or at least "good enough" practice, to order a test of renal function in 93 percent of elderly patients, or to obtain or document a history of alcohol use in 69 percent? Why did 7 percent of staff/group patients and 29 percent of fee-for-service patients not have a urea nitrogen or creatinine?
Do some physicians order these tests for 100 percent of their patients and others for 0 percent, or is there a distribution somewhere in between? How wide is the distribution, and what factors contribute to it? How many physicians routinely order both a urea nitrogen and creatinine, which may not be a cost-effective practice? Perhaps most importantly, are there standards to guide practice in any of these settings? Who sets them? Are they agreed upon by the physicians? In short, variation is an indicator of poor quality. Are there attempts to assess and manage variation in any of these care systems?

As physicians, we like to feel that we provide the same high quality of care to all our patients, but we have never been able to prove it. Data like those in this paper suggest we do not do so. Indeed, this paper raises the possibility that the same physicians provide different care to their patients in different care structures. Fifty-eight percent of those physicians whose fee-for-service (FFS) patients were in the FFS group also saw HMO patients in an IPA/network arrangement. Although we have no data on their prepaid patient population, it is certainly possible they behave like physicians in the IPA group when seeing an IPA patient. If so, then they would be practicing differently for different patients. It will take other studies specifically designed to compare the performance of physicians who take care of patients in multiple care systems to demonstrate whether or not this is the way physicians behave.

If so, what are the determinants of that behavior? What is it about an IPA that might lead to performance of more of the "recommended elements" of routine and preventive care? Is it simply the fact of prepayment and the physician's knowing that he/she is seeing a prepaid patient that alters physician behavior? Alternatively, does the patient somehow cue the physician to behave differently?

One would hope that some of the differences in performance in the staff/group model HMOs were achieved by the presence of systems in those organizations which were designed to support the practitioner in performing routine elements of care. Systems supports can include education, reminders, feedback of performance results, and incentives (positive and negative). To what extent, however, have HMOs explicitly developed systems to encourage better performance? Which systems work best in which settings? Can any system or combination of systems lead to 100 percent performance? If not, why not?

Retchin and Brown focus on elements of history taking, physical examination, and performance of preventive practices. What about other aspects of ambulatory care, such as the abilities to form an effective physician-patient relationship, counsel patients, make diagnoses accurately and efficiently, and refer and prescribe appropriately? It is likely that these attributes are as variable as the ones studied in this paper. We do not yet have a clue whether type of payment and organization of care influence performance of these care elements and, if so, in what direction. Are prepaid care systems more or less likely to encourage risk-taking behavior? In the early days the natural assumption was that prepaid plans would skimp; but do they?

Many in the United States consider fee-for-service care to be the gold standard of quality. Yet the opportunity for variation is clearly much greater in fee-for-service care. It seems ironic that HCFA (Health Care Financing Administration), employers, and regulators should be so concerned about demonstrating quality in HMO care when they have paid so little attention to quality in FFS care.
I find it peculiar that presumed freedom of choice of clinician should be either a surrogate for quality or somehow make up for poor technical performance by the chosen. I am not suggesting that efforts to evaluate quality, even if they focus on the organized aspects of medical care, such as hospitals and HMOs, are misguided. Rather, I find it strange that we have labored so long under the myth that we were providing high quality care in any of our settings when we had few measures of care and when the evidence we could get invariably indicated performance variation and rarely, if ever, demonstrated 100 percent performance of "recommended practices." We have labored for too long under the assumption that high technology is high quality. We must step back, look at the available data, admit to ourselves that we do not provide as good care as we could in any setting or care system, and commit ourselves to doing better.

Quality improvement efforts are driven and monitored by data about the procedural details of the delivery of care and data on the outcomes of care.² At this time, most such data come from special research studies. We are at a very primitive level of development of routine measurements of care. This derives in part from our fragmented approach to developing priorities for improving care, an approach that is subject in turn to the competing interests of the clinicians who deliver care, the specialty societies, the health care regulators, the payors, and others.

Perhaps we can stimulate a truly competitive system in which individuals and organizations work on their own or in self-defined consortia to improve their care and market the improvement to the public and payors. An alternative model, the collaborative model, will require us to foster formal inter-institutional communication, cooperation, and collaboration on a national, regional, or local basis. We will need to obtain the commitment of multiple parties (not just those with "low scores" on various measurements) to work together; we will need to share methods, successes, and failures, so that all of us will learn and be able to better our performance. For the collaborative model to succeed, organizations will have to adopt methods for quality improvement which are less dependent on the concept of outliers (or "bad apples") and more dependent on the concept that all of us can do better.³

In either model, competitive or collaborative, we will be able to say that we provide care that is "good enough" only when our performance is virtually 100 percent on all measures of agreed-upon importance.

REFERENCES
1. Retchin SM, Brown B: The quality of ambulatory care in Medicare health maintenance organizations. Am J Public Health 1990; 80:411-415.
2. Schoenbaum SC: The quality improvement cycle in clinical practice: colorectal cancer detection. HMO Practice 1989; 3(5):169-172.
3. Berwick DM: Continuous improvement as an ideal in health care. N Engl J Med 1989; 320(1):53-56.

STEPHEN C. SCHOENBAUM, MD, MPH

Address reprint requests to Stephen C. Schoenbaum, MD, MPH, Deputy Medical Director, Harvard Community Health Plan, 10 Brookline Place West, Brookline, MA 02146.

© 1990 American Journal of Public Health 0090-0036/90$1.50

Accessible Housing Design Advisory Network Being Developed

A nationwide lack of usable, affordable, and marketable housing remains one of the major issues facing Americans with disabilities. Barrier-free or adaptable housing, whether newly constructed or renovated, is essential for disabled and older individuals to live independently.

In July 1989 the Center for Accessible Housing, funded by the National Institute on Disability and Rehabilitation Research, was created at the School of Design at North Carolina State University. It is a Research and Training Center, and its purpose is to improve, and to provide technical assistance and training about, the design and development of accessible housing and products for use in the home.

One of the Center's first tasks is to build a network of people with disabilities, their families, and close friends whose personal experience qualifies them to be a part of the solution to accessible housing issues through participation in a nationwide Accessible Housing Design Advisory Network. Membership in the Accessible Housing Design Advisory Network is free and entirely voluntary. Members will receive copies of the Center's newsletter and may periodically be contacted by its designers and researchers to solicit opinions, review ideas, and/or evaluate training programs, housing, or product designs.

If you would like more information about the Center and to receive a Network membership questionnaire, please call (919) 737-3082 or send your name and mailing address to: The Research and Training Center for Accessible Housing, North Carolina State University, Box 8613, Raleigh, NC 27695-8613.

The Center combines the in-house design expertise of faculty and students at the NCSU School of Design with the research and practical experience of Barrier Free Environments, Inc., also of Raleigh, and three collaborating organizations: the Rehabilitation Research and Development Center of the Atlanta Veterans Affairs Medical Center; the Adaptive Environments Center, Boston; and the Department of City and Regional Planning, UNC at Chapel Hill.


AJPH April 1990, Vol. 80, No. 4
