LETTER TO THE EDITOR

Dear Editor,

In a recent paper,1 Cooper et al. attempted to compare the sensitivity of magnetic resonance (MR) imaging and noncontrast CT for intervertebral disc herniation in dogs. In order to do that, it would be necessary to review the MR and CT images of a series of dogs that all had a surgically proven disc herniation. The images for each dog could be classified as true positive (TP; images showed signs of disc herniation) or false negative (FN; images showed no signs of disc herniation). Sensitivity of each modality = TP/(TP + FN). Any difference in sensitivity between MR and CT could be tested statistically using McNemar's test.2

It is easier to describe this study than to actually do it, because in clinical practice patients are selected for surgery, and surgeons are guided to specific operative sites, principally on the basis of the diagnostic images. Cooper et al. make it clear that the 44 dogs in their study were selected for surgery on the basis of their MR images; the CT images were not viewed until a later date. As a result, any dog with MR images that showed no signs of disc herniation would not be included in this study, because such a dog would be unlikely to have spinal surgery and unlikely to have a surgically proven disc herniation. Hence, with this study design, Cooper et al. have systematically excluded dogs with false negative MR images and guaranteed that their estimate of the sensitivity of MR will be very high. Their estimates of the accuracy of determining the site and laterality of a herniated disc will be inflated for the same reason: sites where MR images showed no signs of disc herniation were unlikely to be explored surgically. This is an example of verification bias.3

Sensitivity of MR may not be 100% even with this study design, because a dog could have MR images classified as false negative if the site of disc herniation was incorrect (e.g., if the radiologist or surgeon made a mistake counting the vertebrae). It appears that one such error was made, because Cooper et al. report the sensitivity of MR to be 98% (i.e., 43/44 dogs). In comparison, the sensitivity of CT is reported to be 89% (i.e., 39/44 dogs). Cooper et al. do not provide data that allow their calculations to be checked, and their sensitivity estimates are based on logistic regression, which accounts for variations in interpretation between observers; nevertheless, it is interesting to note that, based on these extrapolated numbers of dogs and McNemar's test, the difference in sensitivity between MR and CT is not significant (P = 0.13). Cooper et al. admit at the end of the discussion that their study design was biased in favor of MR, but still persist in stating repeatedly that "sensitivity of MR was greater than CT." Readers should be aware that this conclusion is not valid, for the reasons stated above.
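As a rough illustration of the comparison described above, the following Python sketch computes the two sensitivities and an exact McNemar's test from the dog counts quoted in this letter. The split of discordant pairs is not reported anywhere; the figure of four MR-positive/CT-negative dogs and none in the reverse direction is an assumption (maximal agreement between modalities) chosen only because it reproduces the quoted P value of approximately 0.13.

```python
from math import comb

# Counts extrapolated in the letter: 44 surgically confirmed dogs,
# 43 true positives on MR, 39 true positives on CT. The split of
# discordant pairs below (b, c) is assumed, not reported.
tp_mr, tp_ct, n = 43, 39, 44

sens_mr = tp_mr / n          # sensitivity = TP / (TP + FN)
sens_ct = tp_ct / n

b, c = 4, 0                  # assumed discordant pairs: MR+/CT-, CT+/MR-

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact (binomial) McNemar test on the discordant pairs."""
    n_disc = b + c
    k = min(b, c)
    p_one_sided = sum(comb(n_disc, i) for i in range(k + 1)) * 0.5 ** n_disc
    return min(1.0, 2 * p_one_sided)

print(f"Sensitivity MR = {sens_mr:.0%}, CT = {sens_ct:.0%}")      # 98%, 89%
print(f"Exact McNemar P = {mcnemar_exact(b, c):.3f}")             # 0.125, i.e. P ~ 0.13
```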

Yours sincerely,

Christopher R. Lamb
The Royal Veterinary College, University of London

REFERENCES

1. Cooper JJ, Young BD, Griffin JF, et al. Comparison between noncontrast computed tomography and magnetic resonance imaging for detection and characterization of thoracolumbar myelopathy caused by intervertebral disk herniation in dogs. Vet Radiol Ultrasound 2014;55:182–189.
2. Dwyer AJ. Matchmaking and McNemar in the comparison of diagnostic modalities. Radiology 1991;178:328–330.
3. Richardson ML, Petscavage JM. Verification bias: an underrecognized source of error in assessing the efficacy of MRI of the menisci. Acad Radiol 2011;18:1376–1381.

AUTHOR RESPONSE

Dear Editor,

The authors appreciate the comments provided by Dr. Lamb and agree that the manuscript has limitations with respect to comparing the relative sensitivity of magnetic resonance imaging (MR) and CT for identifying intervertebral disk herniation (IVDH) in dogs. We wish to clarify a few issues that are relevant to the appropriate interpretation of our findings.

First, as Lamb and others have highlighted, gold standard diagnoses that are arrived at using the same test being studied inherently inflate sensitivity. This limitation is unavoidable in a clinical setting when assessing the sensitivity of imaging modalities for diseases that are confirmed surgically, where imaging was used to determine the need for surgical intervention. We are not aware of a feasible way to mitigate this limitation entirely. In one respect, permitting clinicians to view both CT and MR images during the diagnosis of IVDH may have reduced cases that could have been positive on CT and negative on MRI, but cases with IVDH that were false negative on both modalities would remain unrecognized.

Unfortunately, the final published version of our manuscript did not emphasize critical elements of the design that attempted to address the possibility of IVDH cases visible on CT but not MRI. All dogs with T3-S1 myelopathy were permitted to enroll in this study over the time period it was conducted, which resulted in 53 animals accrued. All 53 animals had MR and CT scans reviewed by image raters using blinding and image randomization. In 44 dogs, surgical IVDH was initially diagnosed via MRI. Nine other dogs either lacked IVDH or did not have it surgically confirmed. In three of these nine animals, IVDH was diagnosed by both CT and MR, but the dogs were not included in the IVDH population because surgery was not performed. In the remaining six dogs, neither CT nor MR identified IVDH; these cases consisted of ischemic myelopathy (n = 3; normal CT but identified via MRI), diskospondylitis (n = 2; identified via CT and MRI), and vertebral body neoplasia (n = 1; identified via CT and MRI). These data highlight that, in this population of 53 dogs prospectively acquired regardless of whether they met gold standard diagnostic criteria, the blinded image raters did not encounter any case in which CT identified IVDH that was not recognized on MRI. Unfortunately, the sample size was considered too small to make reliable conclusions and these data were therefore removed based on peer-reviewer concerns.
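For readers tallying the cohort described above, a small sketch (Python, purely illustrative; the counts are those reported in this response) confirms that the subgroups account for all 53 dogs and that none of them corresponds to IVDH detected on CT but missed on MRI.

```python
# Minimal accounting sketch for the 53-dog cohort described above.
# Subgroup counts are taken from the authors' response; the dictionary
# structure itself is only illustrative.
cohort = {
    "surgically confirmed IVDH (diagnosed via MRI)": 44,
    "IVDH on both CT and MR, no surgery performed": 3,
    "ischemic myelopathy (normal CT, identified via MRI)": 3,
    "diskospondylitis (identified via CT and MRI)": 2,
    "vertebral body neoplasia (identified via CT and MRI)": 1,
}

total = sum(cohort.values())
assert total == 53, f"subgroups should account for all 53 dogs, got {total}"

# No subgroup corresponds to IVDH seen on CT but missed on MRI,
# which is the point made about the blinded image review.
print(f"dogs accrued: {total}; CT-positive/MR-negative IVDH cases: 0")
```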

We wish to thank Dr. Lamb for his timely comments, and we agree that the limitations of this study should have been emphasized more prominently in the manuscript. However, we also believe that design steps were implemented to minimize these limitations, and that this should have been stipulated more clearly as well.

Sincerely,

JOCELYN COOPER, DVM, DACVIM (Neurology)
Mission Veterinary Specialists, San Antonio, TX 78023

GEOFF FOSGATE, DVM, PhD
Professor, Epidemiology, Department of Production Animal Studies, University of Pretoria, Onderstepoort 0110, South Africa

JONATHAN M. LEVINE, DVM, DACVIM (Neurology)
Department of Small Animal Clinical Sciences, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University, College Station, TX 77843–4474

BENJAMIN YOUNG, DVM, DACVR
Department of Large Animal Clinical Sciences, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University, College Station, TX 77843; current address: VCA Alameda East Veterinary Hospital, Denver, CO 80247

JOHN F. GRIFFIN IV, DVM, DACVR
Department of Large Animal Clinical Sciences, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University, College Station, TX 77843
