Downloaded from www.ajronline.org by 50.34.108.147 on 10/15/15 from IP address 50.34.108.147. Copyright ARRS. For personal use only; all rights reserved
Perspective

Quality Improvement in Diagnostic Radiology

Philip N. Cascade¹

Grant me strength, time, and opportunity always to correct what I have acquired, always to extend its domain; for knowledge is immense and the spirit of man can extend infinitely to enrich itself daily with new requirements. Today he can discover his errors of yesterday, and tomorrow he may obtain new light on what he thinks himself sure of today.
Maimonides, Oath and Prayer
That our system of health care delivery has entered a new revolutionary phase is indisputable. Dr. Paul Ellwood, in his Shattuck Lecture of 1988 [1], describes the current health care environment as consisting of "uninformed patients, skeptical payors, concerned politicians, frustrated physicians and besieged healthcare executives." Stimulated by questions raised in studies such as Wennberg's [2], which describes wide variations in practice patterns in different geographic settings without demonstrable differences in health care outcomes, government and private sector payers have begun to question the ability and motivation of physicians to resolve issues of cost-effectiveness in health care quality. They are thus becoming more adamant that the medical profession either monitor itself or have it done by others. Unfortunately, as we head into this new era of assessment and accountability [3], this new function of monitoring has resulted in physicians being introduced to new, and at times confusing, concepts and terminology. It is my intent in this article to discuss important aspects of quality improvement programs and to pose a challenge to the discipline of radiology.
Definitions of Quality, Quality Assessment, Quality Assurance, and Quality Improvement

To evaluate the quality of health care, we must first define exactly what we mean by quality. To do otherwise is to invite miscommunication, disagreement, and conflict. There are as many definitions of health care quality as there are organizations with an interest in quality. An example of basic definitions is that espoused by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). The JCAHO defines health care quality as "the degree to which patient care services increase the probability of desired patient outcomes and reduce the probability of undesired outcomes, given the current state of knowledge." Basically, one can say then that organizations that achieve more desirable outcomes compared with established norms have a higher quality of care and vice versa, and that such comparisons should be made on a continual basis as scientific advances are introduced.

The process of quality assessment, then, involves comparing the performance of individual organizations and individual practitioners with established norms. The ultimate goal, however, is not just to undertake an assessment of performance but to introduce feedback mechanisms that will allow the introduction of changes, which in turn would lead to an improvement in the quality of care. The continuous feedback of comparative performance information and the introduction of changes for improving performance are referred to as quality improvement programs [4].

That the cost of care should enter into the measurement of quality seems reasonable, in that inefficiency and low quality often go hand in hand: either directly, by delaying diagnoses or treatments and by the introduction of complications from medical interventions, or
Received October 23, 1989; accepted after revision December 14, 1989.
¹ Department of Diagnostic Imaging/Radiology, Sinai Hospital of Detroit and Wayne State University, 6767 W. Outer Dr., Detroit, MI 48235-2899. Address reprint requests to P. N. Cascade.
AJR 154:1117-1120, May 1990 0361-803X/90/1545-1117 © American Roentgen Ray Society
indirectly, by unnecessarily absorbing resources that otherwise could be directed toward more appropriate care. This is the rationale for including assessment of appropriateness as part of quality assessment programs. One brief comment about the term quality assurance. The
word assurance may instill an image of confidence and certainty, misleading lay organizations and the public into thinking that all outcomes of medical intervention can or should be positive. This is certainly not the case.

Indicators, Thresholds, and Standards of Care

When the measurement of health care quality is examined, the basic elements extracted are those related to the definition of quality for any given element of care and the subsequent comparison with established norms. Measures of quality, so-called indicators, should be objective, quantifiable, and statistically reliable, and norms should, whenever possible, be based on carefully conducted and scientifically controlled clinical studies. Proper development of each requires considerable time and financial investment.

To attempt to develop indicators and study every aspect of care related to the practice of diagnostic radiology would be an enormous endeavor, virtually impossible. It is therefore necessary to focus attention and direct available quality-assessment resources toward high-volume and high-risk activities that are problem prone and that have the most direct impact on outcome. By way of example, broad aspects of care deserving of indicators that might be developed include radiation safety, performance of personnel, and efficacy of operations. Correspondingly, specific indicators might include patient exposure doses during mammography, technical success rates of percutaneous transluminal angioplasty, and the timeliness of performance of emergent CT in the evaluation of head trauma.

Once indicators are developed and data are being collected, comparisons should be made to establish the relative quality of care. In some instances, relative performance will be high, whereas in others it will be lower. In these circumstances, it seems reasonable to establish thresholds. When these thresholds are crossed, an in-depth study should be undertaken to determine a cause. In a few instances, data will reveal results of such a negative nature that acceptable standards of practice will be considered breached and immediate corrective action will be warranted. It is extremely important that the standards be equitable, reasonable, applicable to all settings in which care is delivered, and always have the best interest of the patient in mind. Whenever possible, standards of care should be developed by explicit methodology based on properly conducted scientific studies or by implicit methods that use consensus panel techniques. In either case, the standards should be acceptable to the majority of those individuals to be assessed.
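The three-tier scheme just described (indicators compared with norms, review thresholds, and absolute standards of care) can be sketched in code. The sketch below is purely illustrative: the indicator names, rates, thresholds, and standards are invented for this example and do not come from the article.

```python
# Hypothetical sketch of the indicator/threshold/standard scheme described
# in the text. All names and numbers below are invented for illustration.

def evaluate_indicator(name, observed_rate, threshold, standard):
    """Classify an observed indicator rate against a review threshold
    (triggers an in-depth study) and an absolute standard of care
    (a breach warrants immediate corrective action)."""
    if observed_rate > standard:
        return (name, "breach: immediate corrective action warranted")
    if observed_rate > threshold:
        return (name, "threshold crossed: in-depth study to determine a cause")
    return (name, "within expected range")

# Invented example indicators (rates are adverse-event rates per study).
indicators = [
    # (indicator, observed rate, review threshold, standard of care)
    ("PTA technical failure rate",        0.04, 0.08, 0.15),
    ("Emergent head CT delayed > 1 hr",   0.10, 0.05, 0.20),
    ("Mammography repeat-exposure rate",  0.22, 0.10, 0.20),
]

for name, observed, threshold, standard in indicators:
    print(evaluate_indicator(name, observed, threshold, standard))
```

The point of the sketch is only that each tier is a simple comparison against a pre-agreed number; the difficulty the text emphasizes lies in choosing those numbers from properly conducted studies or consensus panels, not in applying them.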
Quality Assessment and Quality Improvement as a Science

The American College of Utilization Review Physicians was established in 1973 to encourage and maintain academic
excellence among physicians engaged in the many facets of utilization review and the assessment of the quality of medical care. Under the sponsorship of the College, the American Board of Quality Assurance and Utilization Review Physicians was founded in 1978. Certification in this specialty is now being granted pending successful completion of a certification examination. The examination is comprehensive, covering quality, timeliness, economics, appropriateness of care, techniques of care review, sociology, and demography.

Most research in radiology today is directed toward the advancement of diagnostic and therapeutic knowledge and the comparative assessment of technology. Our research experts, with some exceptions, are not sufficiently trained or motivated in the study of medical care components or consequences. Knowledgeable researchers in the fields of economics, statistics, computer science, public health administration, and psychology and behavioral sciences could ideally
coordinate their efforts with clinical research scientists to provide optimal quality assessment and improvement programs. A coordinated group such as this could improve on proposed and existing computer data bases, which are developed and maintained primarily by payers and the government. These data bases have been focused on the determination of the cost/benefit implications of patterns of practice and have not been used for identifying factors directly affecting the outcome of health care. Improved national data bases developed with the input of physicians, including radiologists, could allow analysis of the impact of physician interventions on clinical outcomes and the appropriateness of care. The information provided would assist patients, payers, and physicians in making correct decisions on health care delivery. One model of such a data base is being developed by the JCAHO as part of its "Agenda for Change." National indicators are under study in obstetrics, anesthesiology, cardiology, cardiac surgery, oncology, trauma, and organization management.
Additional areas will be considered in the near future. Indicator development for imaging services is scheduled to begin in 1990. Although incomplete in design and detail, an example of a radiology-oriented clinical data base that could provide an assessment of diagnostic outcome was described recently by Bird [5]. He used information provided by a computerized tumor registry to assess the diagnostic accuracy of more than 21,000 consecutive screening mammograms. By using similar techniques, it seems feasible that sophisticated data bases could be developed to provide continuous feedback loops of the outcome of procedures to those wishing to participate. Specific areas of concern, such as the early diagnosis of colorectal, breast, and lung neoplasms, could be evaluated more effectively by extending data collection and feedback beyond specific health care organizations to broader communities. This might help us to answer serious questions about the costs and benefits of new technology and procedures raised by skeptical patients, payers, and the medical community. For example, the place of lasers and atherectomy devices in interventional radiology might be determined by objective results, rather than by ego, public promotion, and turf battles. Before such programs can be successful, however, serious questions need to be addressed, including those related to maintaining physician and patient confidentiality, as well as cost-effectiveness.

Research is now being conducted in an attempt to extend our knowledge of the factors responsible for so-called missed diagnoses. Studies of the psychophysiologic mechanisms involved in optical information gathering, image perception and formulation of false images, and failure to recognize disease patterns are now under way [6]. A natural extension of the accumulation of this basic science information will be to develop means to improve these operating characteristics to make them clinically useful. Further studies to identify variables related to diagnostic accuracy and the means to improve on these variables are largely unexplored. Only a few studies related to the necessity for seclusion, privacy, and time for analysis have been performed. Psychological tests could be formulated to identify prospective radiology residents with an inborn physiologic capacity to recognize patterns. These variables can best be tested by outcome analysis of an objective nature, with the assistance of large data bases and automatic feedback processes.
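The registry-linked feedback loop in the spirit of Bird's approach, in which screening interpretations are reconciled against tumor-registry outcomes to yield accuracy measures, can be sketched minimally. The records below are invented for illustration; they are not data from the cited study.

```python
# Hypothetical sketch of a registry-linked accuracy feedback loop:
# each screening read is paired with the eventual tumor-registry outcome,
# and standard accuracy measures are computed. All records are invented.

reads = [
    # (study_id, interpreted_positive, cancer_in_registry)
    ("m001", True,  True),   # true positive
    ("m002", True,  False),  # false positive
    ("m003", False, False),  # true negative
    ("m004", False, True),   # false negative (a "missed diagnosis")
    ("m005", False, False),  # true negative
]

tp = sum(1 for _, pos, ca in reads if pos and ca)
fp = sum(1 for _, pos, ca in reads if pos and not ca)
tn = sum(1 for _, pos, ca in reads if not pos and not ca)
fn = sum(1 for _, pos, ca in reads if not pos and ca)

sensitivity = tp / (tp + fn)   # fraction of registry-confirmed cancers detected
specificity = tn / (tn + fp)   # fraction of non-cancers read as negative
ppv = tp / (tp + fp)           # positive predictive value of a positive read

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} ppv={ppv:.2f}")
```

Fed back continuously to the interpreting radiologist, even a summary this simple would close the loop between interpretation and outcome that the text argues is missing from payer-maintained data bases.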
Physician Performance, Necessary Fallibility, and Peer Review
Evaluation of the cognitive performance of the diagnostic radiologist is a difficult task. Whereas indicators of performance related to technical procedures are relatively easy to develop, accurate and cost-effective methods of data gathering and setting standards of diagnostic accuracy are difficult to design and implement. Attempts to judge the overall capabilities of an individual are thwarted by the lack of knowledge of the complexity of an individual practice. Double reading and determination of disagreements on retrospective review of relatively small samples of cases, a method often used by radiology departments to pass external reviews by the JCAHO, cannot be expected to reflect accurately the capabilities of a typical general diagnostic radiologist who may interpret more than 10,000 studies annually, a total consisting of variable numbers and types of studies that use multiple imaging techniques. What is required, then, is the development of indicators and systematic methods that will more accurately determine when an incorrect diagnosis has been made. Development of computer data bases and feedback loops, particularly focused on disease categories or outcomes, would be advantageous and justified.

Three factors should be considered when a false-positive or false-negative interpretation has been made. The first two, scientific ignorance and negligence, are obvious. The third factor, unrelated to ineptitude, has been well described by Gorovitz and MacIntyre in their article "Toward a Theory of Medical Fallibility" [7]. Not even the most experienced, studious, and intelligent physician deeply involved in research at the cutting edge of medical knowledge can ever reach the goal of infallibility. Each patient has his or her own distinct and unique characteristics, and it is the infinite number of particulars that lead to fallibility. Patients, payers, government, and even physicians need to understand that patient injury or
failure to diagnose is not proof of culpability. Evaluation of missed diagnoses by peer review can be used to ascertain, to the best degree possible after reviewing all of the particulars of a given case, whether a particular incorrect diagnosis was unavoidable or should have been made correctly. The objective is to take action as a group to commend, reeducate, proctor, or change privilege delineations.
Political and Psychological Issues
Why is it, then, that portions of the medical community have resisted the monitoring and evaluation of physician performance, peer review, the development of standards of practice, and periodic reaccreditation of physicians after completing initial training programs? Many physicians have expressed the opinion that such efforts are bureaucratic, time-consuming, costly endeavors that are unnecessary and infringe on the independent practice of medicine. Furthermore, the malpractice climate has instilled defensive attitudes, particularly as they relate to the establishment of standards, for fear of encouraging and aiding plaintiff attorneys, although the opposite is as likely to occur, that is, the setting up of legal defenses based on nationally accepted norms. For these reasons and others, the process of developing quality improvement programs has not been well received by many radiologists.

Behavioral patterns that are rather consistent in nature have arisen and have been well described by Miller and Flanagan [8]. First there is the "infantile stage," in which physicians have little knowledge and much resistance to change, reacting with anger and ridicule; this is followed by an "adolescent phase," in which the physician accepts what is imposed but with passive resistance and an unwillingness to participate; the subsequent "phase of maturity" is reached when the process of quality improvement is understood and accepted and participation is active. The final phase, "adulthood," is reached when creativity, innovation, research, and active contributions take place and there is true motivation to improve the quality of care.
The Challenge

The challenge now is for the discipline of diagnostic radiology to mature into the phase of adulthood: to develop its own quality improvement programs rather than have them imposed by others. What group of people, other than radiologists and allied health personnel, is better prepared, more knowledgeable, or better suited to do so? Should not our scientific societies, in conjunction with the American College of Radiology, work together to determine which aspects of care are most important, which factors are the best indicators of quality, and exactly what standards and thresholds should be put into place? Should not the radiologic community, which has struggled long and hard with the subject of mandatory reaccreditation, finally reach consensus by developing a cost-effective mechanism of certification that reflects the unique, everyday functions of each radiologist and satisfies the ever-increasing expectations of government and the public? Perhaps the
solution will be found in the development of quality assessment programs based on generic indicators of performance monitored and evaluated on an ongoing basis. Locating adequate funding to support a national program of generic indicator development, stimulating research in the field of outcome improvement, and creating extensive data bases with sufficient detail to provide continuous feedback loops to radiologists are among the most significant challenges of all. Although volunteerism will still be sought and applauded, a full-time staff of experts is necessary if these goals are to be reached. And finally, by achieving these goals it will be possible not only to make the statement but to provide objective proof to the public, payers, the government, and accrediting agencies, such as the Joint Commission, that "radiology is best practiced by radiologists."
REFERENCES
1. Ellwood PM. Shattuck lecture. Outcomes management: a technology of patient experience. N Engl J Med 1988;318:1549-1556
2. Wennberg J. Which rate is right? N Engl J Med 1986;314:310-311
3. Relman AS. Assessment and accountability: the third revolution in medical care. N Engl J Med 1988;319:1220-1222
4. Berwick DM. Sounding board: continuous improvement as an ideal in health care. N Engl J Med 1989;320:53-56
5. Bird RE. Low-cost screening mammography: report on finances and review of 21 716 consecutive cases. Radiology 1989;171:87-90
6. Hendee WR, Wells PNT. Visual perception as an opportunity for radiologic research. Invest Radiol 1989;24:575-576
7. Gorovitz S, MacIntyre A. Toward a theory of medical fallibility. Hastings Cent Rep 1975;5:13-23
8. Miller ST, Flanagan E. Growth and development of physicians in quality assurance: an ontogeny for quality assurance managers. QRB 1988;14:358-362