Editorial

The quintessence of medical science and practice

Acta Radiologica 2014, Vol. 55(8) 899–900. © The Foundation Acta Radiologica 2014. DOI: 10.1177/0284185113516032

Jan ML Bosmans1,2

This edition of Acta Radiologica features a paper on the quality of chest radiography reports. Mirja Hirvonen-Kari and her team at the University of Helsinki analyzed the contents of chest X-ray reports by general radiologists, compared them to reports of the same examinations by a chest radiologist, assessed inter-observer agreement, and evaluated the clarity of the report contents from the viewpoint of the referring physicians. At first sight, one could consider this paper nothing more than a report of the results of peer review in the radiology department of one group of hospitals in a Northern European country: interesting as an instrument for local feedback and improvement, but not more than that. And indeed, peer review is a necessary and essential quality standard (1). Consistency of assessment between radiologists has become accepted as a surrogate indicator of radiological excellence, and peer review has become the standard technique for evaluating that concordance (2). Adherence to standards of care in radiology may be defined by the degree of interpretive agreement between readers (3).

There is, however, much more to it. Although both length and content varied greatly depending on the author of the report, and the reports contained different quantities of information, Hirvonen-Kari et al. conclude that the quality of the reports in their study was rather high, whichever radiologist made them. That reports made by different radiologists would show wide variation in style, length, and content was easy to predict. As the guidelines of the European Society of Radiology (ESR) put it, there is no universally agreed definition of a good radiological report, and both radiologists and those who receive reports may hold differing views on optimal style and content (4). A few years ago, while working on a comparison of abdominal CT reports from eight medical centers in two countries, I discovered that I could tell from most reports which center they had come from. In my daily work as a supervisor in an academic medical center, I can often link a report to the resident who made it without looking at the bottom line. Do try this at home; it is great fun! The question is how desirable this situation is.

In art, developing a style of one's own marks the transition from novice to mature artist. In radiology, the times when a colleague's reputation depended greatly on his rich vocabulary and style are long over. A large number of surveys have consistently shown a preference for structured reports among both radiologists and referring clinicians (5–9). As long as they are made by competent radiologists, such reports can only be impersonal, stereotypical, and mutually interchangeable.

In Hirvonen-Kari's study, more than one-third of the examination requests did not contain a clinical question. Apparently, the referring clinicians' adherence to the conviction that radiologists need adequate clinical input to generate useful and reliable output is less pronounced in daily life than in surveys (7). When a clinical question was present, both the initial and the expert reports nearly always addressed it. As the reports by the chest radiologist were used as a de facto gold standard to judge the original reports, I could not help noticing that these expert reports were 250–300% longer than the initial ones, and that, despite this, only 7% contained a conclusion, versus 22% of the much shorter initial reports. In a study of abdominal CT reports by our own group, the radiology department that produced the longest reports was also the one where almost half the reports did not contain a conclusion (10). In my experience, the moment a radiologist dictates a conclusion is often the moment he or she revisits the clinical question and upgrades the report from a purely descriptive to a Bayesian, truly medical level. Therefore, by leaving out a conclusion, a radiologist not only complicates life for the referring clinician but also deprives himself of a moment of reflection and medical thinking. It is almost paradoxical that, despite this, the referring clinicians in Hirvonen-Kari's study thought the expert reports were clearer than the initial ones and contained sufficient information in more cases.

1Department of Radiology, Ghent University Hospital, Ghent, Belgium
2Antwerp University Hospital, Edegem, Belgium

Corresponding author: Jan ML Bosmans, Department of Radiology, Ghent University Hospital, De Pintelaan 185, B-9000 Ghent, Belgium. Email: [email protected]

The greater length of the expert reports may have been caused by the considerably greater number of heart, lung, and pleural findings the chest radiologist reported. But then again, undue emphasis on incidental findings may result in over-interpretation by the attending physician (11). In addition, in 25 cases the expert stated that something "could not be excluded", a double-negative cliché that several guidelines advise against. None of this is a mortal sin, but it does show that gold standards can come in variable degrees of purity.

The conclusions of the study by Mirja Hirvonen-Kari's group are anything but anecdotal or merely locally applicable. The content of non-structured reports is difficult to compare. Yet being able to compare results, both of different individuals and of consecutive studies in one individual, is quintessential to medical science and practice. This is just one reason why developing tools for structured reporting is the way we have to go. Both the Radiological Society of North America (RSNA), with its reporting templates and RadLex lexicon, and the European Society of Radiology (ESR), with the joint RSNA/ESR Structured Reporting Initiative, are working hard to provide the radiological community with such tools.
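As a purely illustrative aside, the comparability argument can be made concrete: once findings are captured as coded fields rather than free prose, reports from different radiologists, or consecutive studies of one patient, can be compared automatically. The sketch below is a hypothetical, much-simplified chest radiograph template in Python; the field names are invented for illustration and do not reproduce any actual RSNA/ESR template or RadLex codes.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChestXrayReport:
    """Hypothetical, much-simplified structured chest radiograph report."""
    clinical_question: Optional[str]  # reason for the examination, if provided
    heart: str                        # e.g. "normal size", "enlarged"
    lungs: str                        # e.g. "clear", "consolidation right lower lobe"
    pleura: str                       # e.g. "no effusion", "left-sided effusion"
    conclusion: str                   # the synthesis the editorial argues must not be omitted


def differing_fields(a: ChestXrayReport, b: ChestXrayReport) -> list[str]:
    """Return the names of the fields on which two reports disagree.

    With free-text reports this comparison requires reading and interpretation;
    with coded fields it is a trivial loop.
    """
    return [name for name in a.__dataclass_fields__
            if getattr(a, name) != getattr(b, name)]


# Example: comparing an initial report with an expert re-read of the same examination
initial = ChestXrayReport("dyspnoea", "normal size", "clear", "no effusion", "no acute findings")
expert = ChestXrayReport("dyspnoea", "enlarged", "clear", "no effusion", "cardiomegaly, follow-up advised")
print(differing_fields(initial, expert))  # -> ['heart', 'conclusion']
```

The point is not these particular fields but the principle: once the vocabulary is shared, as a lexicon such as RadLex aims to make possible, such comparisons scale from a single peer review to whole departments and to follow-up over time.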

References

1. Boland G. Radiologists' performance: assessment using peer review (oral presentation). European Congress of Radiology, Vienna 2013: A–270.
2. O'Keeffe M, Davis TM, Siminoski K. A workstation-integrated peer review quality assurance program: pilot study. BMC Med Imaging 2013;13:19.
3. Mahgerefteh S, Kruskal JB, Yam CS, et al. Peer review in diagnostic radiology: current state and a vision for the future. RadioGraphics 2009;29:1221–1231.
4. No authors listed. Good practice for radiological reporting. Guidelines from the European Society of Radiology (ESR). Insights Imaging 2011;2:93–96.
5. Naik SS, Hanbridge A, Wilson SR. Radiology reports: examining radiologist and clinician preferences regarding style and content. Am J Roentgenol 2001;176:591–598.
6. Grieve FM, Plumb AA, Khan SH. Radiology reporting: a general practitioner's perspective. Br J Radiol 2009;83:17–22.
7. Bosmans JM, Weyler JJ, De Schepper AM, et al. The radiology report as seen by radiologists and referring clinicians: results of the COVER and ROVER surveys. Radiology 2011;259:184–195.
8. Plumb AA, Grieve FM, Khan SH. Survey of hospital clinicians' preferences regarding the format of radiology reports. Clin Radiol 2009;64:386–394.
9. Doğan N, Varlıbaş ZN, Erpolat ÖP. Radiological report: expectations of clinicians. Diagn Interv Radiol 2010;16:179–185.
10. Bosmans JM, Weyler JJ, Parizel PM. Structure and content of radiology reports, a quantitative and qualitative study in eight medical centers. Eur J Radiol 2009;72:354–358.
11. Revak CS. Dictation of radiologic reports. Am J Roentgenol 1983;141:210.
