Hidden Costs of Poor Image Quality: A Radiologist’s Perspective

Bruce I. Reiner, MD

Although image quality is a well-recognized component in the successful delivery of medical imaging services, it has arguably declined over the past decade owing to several technical, economic, cultural, and geographic factors. To improve quality, the radiologist community must take a more proactive role in image quality analysis and optimization, which require analysis of not just the single step of image acquisition but the entire imaging chain. Radiologists can benefit through improved report accuracy, diagnostic confidence, and workflow efficiency. The derived data-driven analyses offer an objective means for provider performance analysis, which can help combat commoditization trends and self-referral by nonradiologist providers.

Key Words: Image quality, outcomes analysis, data mining, quality assurance

J Am Coll Radiol 2014;-:---. Copyright © 2014 American College of Radiology

INTRODUCTION

Image quality is an essential component of the overall quality of medical imaging service delivery. The old adage “garbage in, garbage out” is applicable to the relationship between image and report quality, because deficiencies in quality can introduce error and uncertainty in the radiology report [1,2]. As a result, it is clearly in the best interest of the radiologist to play an active role in image quality analysis and optimization. Unfortunately, the practice of medical imaging quality assurance (QA) has arguably worsened, owing to a number of technical, economic, cultural, and geographic factors.

From a technical perspective, the transition from analog to digital medical practice has resulted in a shift from centralized to distributed workflow; this shift has decreased communication and peer-to-peer consultation [3]. From an economic perspective, declining technical and professional reimbursement has placed heightened importance on workflow and productivity optimization and has led to the commoditization of medical imaging service delivery [4]. A large proportion of radiologist compensation is tied directly to individual productivity, creating the potential to prioritize quantity over quality. Technology development has also led to the disintegration of geographic boundaries, which historically have served to protect many radiologists from competition.

Department of Diagnostic Imaging, Baltimore VA Medical Center, Baltimore, Maryland.

Corresponding author and reprints: Bruce I. Reiner, MD, Department of Diagnostic Imaging, Baltimore VA Medical Center, 22 N. Greene Street, Baltimore, MD 21201; e-mail: [email protected].

© 2014 American College of Radiology. http://dx.doi.org/10.1016/j.jacr.2014.04.008

The digitization of medical imaging data and the outsourcing of professional services through teleradiology have dramatically eroded these geographic boundaries, creating further distance between radiologists and technologists and limiting communication at the point of care [5].

The one area of radiology in which QA has consistently improved is breast imaging. Rigorous quality standards and analytics have been incorporated into these imaging services through legislative and societal initiatives (ie, the Mammography Quality Standards Act and BI-RADS); these are discussed in further detail later. In the absence of similar quality standards in other radiology disciplines, QA is left largely to the discretion of individual service providers, which can result in quality deficiencies and inconsistencies. In order to counteract these negative trends, it is incumbent upon the radiologist community to be more proactive and take a leadership role in QA education, research, clinical oversight, and intervention.

IMAGE QUALITY AND THE MEDICAL IMAGING CHAIN

Image quality is customarily viewed as the outcome of a single-step event (ie, image acquisition) that is in large part determined by the technologist performing the imaging exam. In reality, image quality is the product of a chain of events (ie, an imaging chain; Table 1). In addition to acquisition, a number of preceding and subsequent steps in this chain influence and are affected by image quality: exam ordering, scheduling, protocol determination, historical image and clinical data access, image processing, interpretation, and reporting. These individual steps rely on multiple technologies and players. In addition to the modality used for image acquisition, other technologies that are routinely used include computerized physician order entry, radiology information systems, image processing software, picture archival and communication systems, and reporting software.


Table 1. Sequential steps in the imaging chain

Step Number  Description
 1   Exam ordering
 2   Scheduling
 3   Data retrieval
 4   Protocol selection
 5   Image acquisition
 6   Image review and quality assurance
 7   Image processing
 8   Archiving
 9   Distribution
10   Display
11   Interpretation
12   Reporting
13   Communication

Technologies such as advanced visualization software and electronic medical records may also play a role in selected cases.

The imaging chain can be thought of as having two distinct phases that affect image quality. The first phase (ie, pre-exam archival) consists of those steps beginning with order entry and ending with image archiving. These steps have a direct effect on image quality because they involve exam selection, scheduling, determination of the clinical indication (and supporting clinical data), review of historical imaging data, protocol optimization, image acquisition, and image processing. Although the principal stakeholder in this phase is the technologist tasked with exam performance, other ancillary stakeholders play a role, including radiology administrators, supervisory technologists, clerical staff, radiologists, and clinicians. Any of the individual steps performed by these stakeholders can be a limiting factor and have significant “downstream” ramifications for image quality.

As an example, consider a chest CT angiography performed for evaluation of pulmonary emboli. A multitude of factors will ultimately contribute to image quality and the ability to render an accurate and definitive diagnosis. These factors include selection of the optimal exam (relative to the patient’s clinical status); assignment of the exam to the optimal technology (if multiple modalities are options) and technologist; comprehensive review of historical clinical and imaging data; protocol optimization (in terms of optimizing both CT image acquisition and contrast administration); and image processing (using 2- and 3-D reconstructions). A deficiency in any one of these steps can be deleterious to image quality and create additional negative effects. Failing to view image quality as a multistep process oversimplifies its analysis and risks a lack of understanding of the myriad factors that affect it and the potential interventions to improve it.
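As a deliberately simple toy model of this limiting-factor effect, the sketch below (Python) treats the lowest-scoring pre-archival step as the bound on the quality of the final deliverable; the step scores and the minimum-score rule are illustrative assumptions, not a validated quality metric.

```python
# Toy model only: scores each pre-archival step of the imaging chain on a
# 1-5 scale and treats the lowest-scoring step as the limiting factor.
# Step names mirror Table 1; the scores used below are hypothetical.

PRE_ARCHIVAL_STEPS = [
    "exam ordering",
    "scheduling",
    "data retrieval",
    "protocol selection",
    "image acquisition",
    "image review and QA",
    "image processing",
    "archiving",
]


def limiting_step(step_scores: dict[str, int]) -> tuple[str, int]:
    """Return the step with the lowest quality score (the weakest link)."""
    step, score = min(step_scores.items(), key=lambda item: item[1])
    return step, score


if __name__ == "__main__":
    # Hypothetical CT pulmonary angiogram: every step is adequate except
    # protocol selection (poor contrast timing), which caps overall quality.
    scores = {step: 4 for step in PRE_ARCHIVAL_STEPS}
    scores["protocol selection"] = 2
    step, score = limiting_step(scores)
    print(f"Limiting step: {step} (score {score})")
```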

The second phase of the imaging chain that plays a role in image quality analysis consists of those steps that follow exam completion and storage (ie, post-image archiving). By this point, image quality is essentially established, unless additional “after the fact” imaging is performed (eg, patient recall). The individual steps in this phase include exam distribution and assignment, image presentation and display, interpretation, reporting, and communication. The principal stakeholder in this phase is the radiologist (or clinician) tasked with rendering diagnostic interpretation of the imaging dataset. Other potential ancillary stakeholders include referring clinicians, nurses, transcriptionists, technologists, and clerical staff. Although the steps in this phase do not directly affect image quality, the reverse is true: these steps are directly affected by image quality. As subsequent data show, report accuracy, productivity, and diagnostic confidence are all affected by image quality.

In the example of a chest CT angiography performed for evaluation of pulmonary emboli, deficiencies in image quality (eg, noise, motion, poor pulmonary artery opacification) have the potential to adversely affect radiologist performance and clinical outcomes in a variety of ways, including erroneous diagnosis, equivocal report findings, and recommendations for additional imaging exams and/or clinical tests. Increased patient morbidity can result from failure to treat a pulmonary embolus (ie, a false negative), unnecessary treatment resulting in bleeding (ie, a false positive), and delayed diagnosis and treatment.

Thus, analysis of image quality should not end with a traditional QA check, but instead should extend to the entire imaging chain. Approached in this way, image quality analysis and optimization constitute a complex process that must take into account multiple steps, technologies, and stakeholders.

CLINICAL AND ECONOMIC IMPACT OF IMAGE QUALITY

The most traditional metric for evaluating clinical outcomes in medical imaging is diagnostic accuracy, which refers to the ability to detect, quantify, characterize, and classify disease [6]. Diagnostic accuracy errors in radiology are common and have been estimated to occur in as many as 30% of image readings [7]. A number of studies have demonstrated that deficiencies in image quality and display correlate with both false positive and false negative interpretation errors [8-10]. Although false negative interpretation errors tend to be viewed as having higher priority owing to the clinical impact of missed pathology, false positive interpretation errors can also be of great clinical and economic importance. Fletcher and Elmore [11] reported that, on average, a woman undergoing screening mammography has a 10.7% chance of a false positive result for each individual mammogram; another study estimated that approximately 50% of women undergoing routine screening mammography will have a false positive that leads to an interventional procedure [12]. The net result of these false positive interpretations is increased patient anxiety, increased cost, overutilization of medical services, and increased morbidity.
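To illustrate how a per-exam false positive rate compounds over repeated screening rounds, the short calculation below (Python) applies the 10.7% per-mammogram figure cited above under a simplifying assumption of independent rounds; the published cumulative estimates are derived from observed cohort data, which this simplification tends to overstate.

```python
# Back-of-the-envelope illustration: cumulative probability of at least one
# false positive over repeated screening rounds, assuming (unrealistically)
# that rounds are independent. Published cumulative estimates come from
# observed cohorts, which this simplification tends to overstate.

PER_EXAM_FALSE_POSITIVE_RATE = 0.107  # per-mammogram figure cited above


def cumulative_false_positive_risk(rounds: int,
                                   per_exam_rate: float = PER_EXAM_FALSE_POSITIVE_RATE) -> float:
    """P(at least one false positive) = 1 - (1 - p)^n under independence."""
    return 1.0 - (1.0 - per_exam_rate) ** rounds


if __name__ == "__main__":
    for n in (1, 5, 10):
        risk = cumulative_false_positive_risk(n)
        print(f"{n:>2} screening round(s): {risk:.1%} chance of >=1 false positive")
```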


In addition to negatively affecting clinical outcomes, image quality deficiency has the potential to adversely affect service provider economics by reducing productivity, increasing retakes, and decreasing referrals [13-15]. Although the correlation among poor image quality, retakes, and referrals is fairly straightforward, the relationship between image quality and productivity is often overlooked. In a study by Siegel et al [16], an increase of 16% in radiologist interpretation time was observed for poorer quality chest radiography images (a rough throughput illustration appears at the end of this section). These studies illustrate the substantive clinical and economic impact that image quality has on radiologist performance and the opportunity for QA improvement.

Poor image quality can also result in additional imaging exams, which can take the form of a “redo” (ie, repeat of the imaging exam within a very short time after the initial exam) or performance of an alternative imaging exam. In the example of the chest CT angiography performed for evaluation of pulmonary embolus, poor image quality resulting in an ineffective or incomplete diagnosis could result in a short-term follow-up chest CT angiography; performance of an alternative imaging exam (eg, ventilation-perfusion lung scan); additional clinical tests (eg, D-dimer assays); and delayed treatment. The performance of both chest CT angiography and ventilation-perfusion lung scanning results in additional radiation, with or without additional contrast, which has associated patient safety concerns.

A number of radiology reporting deficiencies have been documented in the literature, including the use of language that introduces uncertainty and ambiguity, frequent follow-up recommendations, and transference of clinical responsibilities to the referring clinician (eg, “clinical correlation recommended”) [2,17]. Many of these report deficiencies are attributed to radiologist uncertainty or inconclusiveness in analysis of the imaging dataset; thus, it is reasonable to assume that image quality deficiencies may play a role in their occurrence. Additional research is required to further investigate these potential relationships and to provide economic analysis of the costs and consequences associated with poor image quality.
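The throughput illustration referenced above is sketched here (Python); the per-study interpretation time and daily study volume are hypothetical values chosen only to show how a 16% increase in reading time erodes daily capacity.

```python
# Rough illustration with hypothetical volumes: how a 16% increase in
# interpretation time (the chest radiography figure cited above) translates
# into lost daily reading capacity. The baseline values are assumptions.

BASELINE_MINUTES_PER_STUDY = 3.0   # hypothetical average interpretation time
STUDIES_PER_DAY = 120              # hypothetical daily worklist
QUALITY_PENALTY = 0.16             # 16% longer reads for poorer-quality images


def studies_readable(minutes_available: float, minutes_per_study: float) -> int:
    """Number of studies that fit in the available reading time."""
    return int(minutes_available // minutes_per_study)


if __name__ == "__main__":
    reading_minutes = BASELINE_MINUTES_PER_STUDY * STUDIES_PER_DAY
    degraded_minutes_per_study = BASELINE_MINUTES_PER_STUDY * (1 + QUALITY_PENALTY)
    degraded_capacity = studies_readable(reading_minutes, degraded_minutes_per_study)
    print(f"Baseline capacity: {STUDIES_PER_DAY} studies/day")
    print(f"With 16% longer reads: {degraded_capacity} studies/day "
          f"({STUDIES_PER_DAY - degraded_capacity} fewer)")
```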

CONFOUNDING VARIABLES AND OUTCOME VARIABILITY

The impact of image quality on clinical outcomes must be considered in light of the diversity of imaging studies, patients, providers, and technologies, all of which affect quality outcomes in different ways. An imaging study being performed as a short-term follow-up for a documented abnormality will in all likelihood be less affected by a quality deficiency than a comparable imaging study without a recent preceding exam and report to establish a clinical baseline.

Because patients, providers, and technologies also play fundamental roles in defining image quality, variability in these groups may affect image quality outcomes to varying degrees. One way to think of this relationship between variability and outcome is as “deviation from the ideal.” A patient who is thin, ambulatory, compliant, and relatively healthy (ie, has a low morbidity rating) will in all likelihood be less affected (in terms of clinical outcome) by a given deficiency in image quality than an obese, nonambulatory, noncompliant, and morbid patient for the same exam type and clinical indication. A similar analogy can be made in comparing two different technologies (eg, imaging modalities) used for the same purpose. If the same patient is imaged on 3 Tesla and 0.5 Tesla MRI scanners for the same exam, one might expect a comparable image quality deficiency (eg, motion) to have varying degrees of clinical impact (as measured by diagnostic accuracy and/or confidence). In both cases, the more nearly ideal patient and technology will not be as adversely affected as their less ideal counterparts, owing to inherent differences in quality potential.

A similar phenomenon is observed among medical imaging service providers. Studies have shown that image and/or display quality differences affect radiologist performance to varying degrees (ie, interradiologist variability). Radiologists with higher levels of clinical experience and specialty training tend to compensate for image quality deficiencies more effectively than their colleagues with less experience and training [18,19]. This provider performance variability is relevant to both radiologists and clinicians, in that a great proportion of professional medical imaging services are currently provided by nonradiologists, many of whom engage in self-referral [20,21]. Several studies have documented that both image quality and report accuracy are significantly poorer for most nonradiologist physician providers than for radiologist providers [22,23]. This dichotomy in image and report quality has profound implications for medical economics and clinical outcomes and can serve as an important distinction for patients in provider selection in this burgeoning age of patient empowerment, in which more patients are demanding an active role in medical decision making and greater access to data [24].

By focusing more attention on image quality and outcome analysis data, the radiology community has an opportunity to differentiate itself not only from nonradiologist competition, but also from competition within the radiologist community. This strategy could be effective in combatting commoditization trends and declines in reimbursement [4]. These findings underscore an important and frequently overlooked phenomenon: the impact of quality deficiencies is often not experienced in a uniform or predictable fashion within the radiologist community. By objectively quantifying these outcome variances, customizable analytics and intervention strategies can be directed toward improved education, process improvement, and decision support.


INTERVENTION STRATEGIES

An effective intervention strategy should always begin and end with objective data and derived analytics. In order to create objective analytics related to image quality analysis and the comprehensive imaging chain, a standardized data infrastructure must first be created, in which all imaging exams are analyzed for image quality using metrics from a standardized schema (Table 2). Supporting data related to the exam type, clinical indication, technology used, protocol parameters, provider identification, and patient attributes can also be recorded (much of this information is available in the DICOM header). These standardized image quality scores could be directly incorporated into the radiology report (in a manner similar to BI-RADS), providing the consumer with a standard image quality reference that assists in clinical and imaging management decisions. These image quality ratings can be provided by subjective assessment (eg, human observers) and/or objective assessment (eg, computerized image quality algorithms) [25].

The resulting image quality database can be used to provide a variety of standard and customized quality analytics aimed at providing objective performance analysis, education and training, process improvement, decision support, new technology development, and creation of best-practice guidelines (ie, evidence-based medicine). The quality-centric data and analytics can evolve to include data from each step in the collective imaging chain, with the goal of objectively defining cause-and-effect relationships between individual data points and identifying specific areas for process improvement. In addition to being available for internal review, the standardized data can be made accessible to imaging customers (eg, referring clinicians, patients, and third-party payers), with the goal of data transparency, objectivity, and reliability.

Table 2. Standardized scale used to rate image quality

Rating  Image Quality Criteria
1       Image quality deficiency of high magnitude, precluding diagnostic evaluation and requiring the imaging exam to be repeated for diagnosis.
2       Image quality deficiency of intermediate magnitude and resulting in limited diagnostic evaluation, which may or may not require repeating the exam, based upon the clinical context.
3       Image quality deficiency of low magnitude and not significantly limiting diagnostic evaluation.
4       Barely perceptible image quality deficiency which has no impact on clinical interpretation and diagnosis.
5       No image quality deficiency identified; exam of superior quality and diagnostic value.

Note: The numbers 1-5 indicate a rating scale, with 1 indicating the lowest image quality level and 5 the highest.
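As a concrete illustration of how such a standardized record might be assembled, the sketch below (Python, assuming the open-source pydicom library is available) pairs a Table 2 rating with supporting metadata read from the DICOM header. The record layout, field names, and the record_quality_score function are hypothetical choices for illustration, not part of any published schema.

```python
# Illustrative sketch only: pairs a Table 2-style quality rating (1-5) with
# supporting metadata read from the DICOM header. Field names and the record
# layout are assumptions for illustration, not a published standard.
from dataclasses import dataclass, asdict
import json

import pydicom  # third-party library for reading DICOM files


@dataclass
class ImageQualityRecord:
    study_uid: str
    modality: str
    exam_description: str
    protocol_name: str
    scanner_model: str
    quality_rating: int           # 1 (nondiagnostic) to 5 (superior), per Table 2
    rating_source: str            # eg, "human observer" or "automated algorithm"
    deficiency_comment: str = ""  # eg, "motion", "poor contrast opacification"


def record_quality_score(dicom_path: str, rating: int, source: str,
                         comment: str = "") -> ImageQualityRecord:
    """Read supporting metadata from the DICOM header and attach a quality rating."""
    if not 1 <= rating <= 5:
        raise ValueError("Rating must follow the 1-5 scale in Table 2")
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return ImageQualityRecord(
        study_uid=str(getattr(ds, "StudyInstanceUID", "")),
        modality=str(getattr(ds, "Modality", "")),
        exam_description=str(getattr(ds, "StudyDescription", "")),
        protocol_name=str(getattr(ds, "ProtocolName", "")),
        scanner_model=str(getattr(ds, "ManufacturerModelName", "")),
        quality_rating=rating,
        rating_source=source,
        deficiency_comment=comment,
    )


if __name__ == "__main__":
    # Hypothetical usage: a radiologist rates a CT pulmonary angiogram a 2
    # (limited diagnostic evaluation) because of poor arterial opacification.
    record = record_quality_score("ctpa_series_001.dcm", rating=2,
                                  source="human observer",
                                  comment="suboptimal pulmonary artery opacification")
    print(json.dumps(asdict(record), indent=2))
```

Records of this kind could then be aggregated into the image quality database described above.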

The current practice of “idiosyncratic QA” could eventually be replaced by an environment in which QA is rigorous, continuous, and standardized. This approach is beneficial to all parties and could serve as a valuable method for differentiating service providers on the basis of quality, economics, and clinical outcomes. If and when such a system is implemented, it would provide an incentive for technology vendors to prioritize quality-centric technology development, which is currently lacking in the market [26]. As technology refinements become integrated into practice, the same quality database infrastructure could be used to validate the efficacy of new technology, further driving the innovation process in accordance with tangible and reportable benefits tied directly to quality and outcomes.

Although the idea of directly incorporating image quality metrics into the radiology report might appear to introduce economic and medico-legal risks to radiology providers, fundamental changes related to quality transparency in health care are already underway. The historic latitude accorded to institutional and individual providers to keep quality and outcome data confidential is being systematically replaced; major efforts are currently being made by governmental agencies, payers, and patient advocacy groups to force health care providers to collect and publish quality and safety metrics [27].

In the end, image quality lies at the center of defining the relative success or failure of medical imaging service providers in optimizing clinical outcomes. If the radiology community wants to maintain its importance and relevance in medical care delivery, it is time for standardized image quality data to be directly incorporated into service deliverables and analyses. This approach provides the radiology community with not only a method of objectively differentiating its quality deliverables from those of its nonradiologist counterparts, but also ample justification for expanding the role of the radiologist to that of quality educator, consultant, researcher, and supervisor.

CONCLUSION

As greater scrutiny is placed on quality and clinical outcomes in medicine, it becomes essential for the radiologist community to objectively validate its service deliverables. Image quality, which lies at the heart of medical imaging quality and outcomes analysis, is particularly important because it affects multiple steps in the collective imaging chain. Through direct implementation and analysis of standardized image quality metrics in the radiology report, insights can be gained into existing quality strengths and deficiencies. Additionally, opportunities can be identified for quality improvement through the combined use of education and training, quality-centric technology, decision support tools, performance analytics, data-driven best-practice guidelines, and image processing software.


Rather than waiting passively to have quality mandates created by third parties, the radiologist community needs to take an active role in creating new data-driven analysis and innovation strategies, which in the long run will serve to promote the profession, drive technology development, and counteract existing commoditization trends.

TAKE-HOME POINTS

• Image quality has arguably declined over the past decade, owing to various technical, economic, cultural, and geographic factors.
• Although image quality analysis is customarily viewed as a single-step event (ie, image acquisition), it is part of a multistep chain of events (ie, the imaging chain).
• Image quality deficiencies can serve as a source of diagnostic accuracy errors, which have been reported to occur in as many as 30% of all radiology reports.
• Image quality analysis requires the implementation of standardized image quality metrics, which can be created in a fashion similar to that used in BI-RADS.
• Creation of objective image quality analytics provides a method for combatting existing commoditization trends and self-referral by nonradiologist imaging providers.

REFERENCES

1. Alpert HR, Hillman BJ. Quality and variability in diagnostic radiology. J Am Coll Radiol 2004;1:127-32.
2. Reiner B. Uncovering and improving upon the inherent deficiencies of radiology reporting through data mining. J Digit Imaging 2010;23:109-18.
3. Reiner B, Siegel E, Protopapas Z, et al. Impact of filmless radiology on the frequency of clinician consultations with radiologists. AJR Am J Roentgenol 1999;173:1169-72.
4. Reiner B, Siegel E. Decommoditizing radiology. J Am Coll Radiol 2009;3:167-70.
5. Kenny LM, Lau LS. Clinical teleradiology—the purpose of principles. Med J Aust 2008;188:197-8.
6. Krupinski EA. Current perspectives in medical image perception. Atten Percept Psychophys 2010;72:1205-17.
7. Berlin L. Perceptual errors. AJR Am J Roentgenol 1996;167:587-90.
8. Saunders RS Jr, Baker JA, Delong DM, et al. Does image quality matter? Impact of resolution and noise on mammographic task performance. Med Phys 2007;34:3971-81.
9. Brown ML, Haun F, Sickles EA, et al. Screening mammography in community practice: positive predictive value of abnormal findings and yield of follow-up diagnostic procedures. AJR Am J Roentgenol 1995;165:1373-7.
10. Robinson PJ. Radiology's Achilles' heel: error and variation in the interpretation of the Roentgen image. Br J Radiol 1997;70:1085-98.
11. Fletcher SW, Elmore JG. Mammographic screening for breast cancer. N Engl J Med 2003;348:1672-80.
12. Elmore JG, Barton MB, Moceri VM, et al. Ten-year risk of false positive screening mammograms and clinical breast examinations. N Engl J Med 1998;338:1089-96.
13. Nagy PG, Pierce B, Otto M, et al. Quality control management and communication between radiologists and technologists. J Am Coll Radiol 2008;5:759-65.
14. Prieto C, Vano E, Ten JI, et al. Image retake analysis in digital radiography using DICOM header information. J Digit Imaging 2009;22:393-9.
15. Waaler D, Hofmann B. Image rejects/retakes—radiographic challenges. Radiat Prot Dosimetry 2010;139:375-9.
16. Siegel EL, Reiner BI, Hooper FJ, et al. Effect of monitor image quality on the soft-copy interpretation of chest computed radiography images. Proc SPIE 2001;4323:42-6.
17. Reiner BI, Knight N, Siegel EL. Radiology reporting, past, present, and future: the radiologist's perspective. J Am Coll Radiol 2007;4:313-9.
18. Krupinski EA, Roehrig H, Dallas W, et al. Differential use of image enhancement techniques by experienced and inexperienced observers. J Digit Imaging 2005;18:311-5.
19. Krupinski EA, Weinstein RS, Rozeck LS. Experience-related differences in diagnosing medical images displayed on monitors. Telemed J 1996;2:101-8.
20. Hillman BJ, Olson GT, Griffith PE, et al. Physicians' utilization and charges for outpatient diagnostic imaging in a Medicare population. JAMA 1992;268:2050-4.
21. Kouri BE, Parsons RG, Alpert HR. Physician self-referral for diagnostic imaging: review of the empiric literature. AJR Am J Roentgenol 2002;179:843-50.
22. Levin DC. The practice of radiology by non-radiologists: cost, quality, and overutilization. AJR Am J Roentgenol 1994;162:513-8.
23. Hillman BJ, Joseph CA, Mabry MR, et al. Frequency and costs of diagnostic imaging in office practice: a comparison of self-referring and radiologist-referring physicians. N Engl J Med 1990;323:1604-8.
24. Reiner BI. Quantifying radiation safety and quality in medical imaging. Part 4: the medical imaging agent scorecard. J Am Coll Radiol 2010;7:120-4.
25. Reiner BI. Automating quality assurance for digital radiography. J Am Coll Radiol 2009;6:486-90.
26. Reiner BI. Quantifying radiation safety and quality in medical imaging. Part 3: the quality assurance scorecard. J Am Coll Radiol 2009;6:694-700.
27. Thrall JH. Quality and safety revolution in health care. Radiology 2004;233:3-6.
