
Correspondence / Clinical Radiology 70 (2015) 676-679


Re: Validating a threshold of ocular gaze deviation for the prediction of acute ischaemic stroke

Sir, I am no neuroradiologist, but my attention was caught by the recent study from McKean et al.1 The title promised the “validation” of a radiological “prediction” model, something that interests me greatly. A “prediction” or “prognostic” model usually involves a measurement (or combination of measurements) made on a patient that is used to predict future outcomes. The Glasgow Coma Scale is an example of a prognostic model known to work well and to be clinically useful. In the present case, the authors measured the degree of ocular gaze deviation (OGD) and used this to predict whether the patient did or did not have an acute ischaemic stroke.1

Prognostic models are “developed” using data collected on patients who do and do not have the outcome of interest, and are clinically useful only if they are sufficiently predictive. It is also important to know that a model developed and accurate in hospital X also works for patients in hospital Y. For example, the performance of a model may not be “generalizable” because of spectrum biases and other factors that act on the patients recruited for development. It is therefore usual to “validate” the accuracy of model prediction using representative patients recruited prospectively from other hospitals.2 Alas, I was to be disappointed: McKean and colleagues describe a model developed using data collected from 517 patients presenting to a single hospital, with no attempt to validate predictive accuracy either there or elsewhere.1 In any event, I was struck by the apparently poor predictive capabilities of the data.
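The development-versus-validation point can be illustrated with a toy simulation. The following is a hedged sketch: the marker, its distributions, the prevalence, and the cohort sizes are all hypothetical and bear no relation to the study's data. It "develops" a diagnostic threshold by optimising accuracy on one sample, then checks the same threshold on a fresh sample, which is the essence of validation.

```python
import random

random.seed(0)

def simulate(n):
    # Entirely hypothetical data: a weakly discriminating continuous
    # marker for a binary outcome with ~30% prevalence (assumptions).
    data = []
    for _ in range(n):
        outcome = random.random() < 0.3
        marker = random.gauss(1.0 if outcome else 0.0, 2.0)  # heavy overlap
        data.append((marker, outcome))
    return data

def accuracy(data, threshold):
    # Proportion of patients correctly classified by "marker > threshold".
    return sum((m > threshold) == o for m, o in data) / len(data)

def develop_threshold(data):
    # "Develop" the model: choose the cut-off that maximises accuracy
    # on the development sample itself -- a recipe for over-fitting.
    return max((m for m, _ in data), key=lambda t: accuracy(data, t))

development = simulate(200)
validation = simulate(200)   # an independent sample, standing in for hospital Y
t = develop_threshold(development)
print(f"apparent (development) accuracy: {accuracy(development, t):.3f}")
print(f"validation accuracy:             {accuracy(validation, t):.3f}")
```

Because the cut-off is tuned to the quirks of the development sample, the apparent accuracy there typically exceeds the accuracy seen on the independent sample, which is why a model's performance must be confirmed on patients it was not built on.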
The authors conclude that “Significant OGD >11.95 has a high specificity for acute infarct” and that “This threshold may provide a helpful additional sign in the detection of subtle acute infarct”.1 Specificity describes the accuracy with which a test identifies patients without disease and cannot be evaluated sensibly without simultaneously considering sensitivity, the accuracy with which the test identifies patients with disease. At their OGD threshold of 11.95, specificity for infarct is 95.9% but sensitivity is only 17.3%.1 How “helpful” is a diagnostic test used at a threshold that will miss more than four out of every five patients with the target condition? Not very, I would argue, especially when research has shown that patients and their doctors value gains in sensitivity far above corresponding gains in specificity.3,4 The fact is that any test can have “high specificity”: simply label all patients normal. Alternatively, we can achieve 100% sensitivity simply by calling all patients positive. Rather than considering sensitivity and specificity in isolation, the real issue is how the two are balanced. The receiver operating characteristic (ROC) curve illustrates the change in sensitivity/specificity pairs across all possible diagnostic thresholds.5 In their article, the authors’ ROC curve hovers just above the line of no discrimination.1

Ultimately, I would argue that McKean and colleagues have shown beyond doubt that OGD has no clinical utility whatsoever for stroke prediction. Given this, there is no point performing a “validation” study, as it is vanishingly unlikely that OGD would have any useful predictive capability in other centres. It is well established that the predictive capabilities of prognostic models usually diminish between development and validation phases because of “over-fitting” of the data collected during the former.2

DOI of original article: http://dx.doi.org/10.1016/j.crad.2014.07.011.
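The arithmetic behind this objection can be made concrete. The sketch below uses the sensitivity/specificity pair quoted in the letter (17.3% / 95.9%); the cohort sizes are hypothetical round numbers chosen only so that the counts reproduce those rates exactly, and the degenerate "all normal" / "all positive" tests are the ones the letter describes.

```python
# Confusion-matrix arithmetic at the letter's quoted operating point.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positives / all patients with disease
    specificity = tn / (tn + fp)   # true negatives / all patients without disease
    return sensitivity, specificity

# Of 1000 hypothetical stroke patients, the quoted threshold detects 173
# and misses 827; of 1000 without stroke, it correctly clears 959.
sens, spec = sens_spec(tp=173, fn=827, tn=959, fp=41)
print(sens, spec)  # 0.173 0.959

# Any test can be made "highly specific": call every patient normal.
all_normal = sens_spec(tp=0, fn=1000, tn=1000, fp=0)      # (0.0, 1.0)
# Or perfectly sensitive: call every patient positive.
all_positive = sens_spec(tp=1000, fn=0, tn=0, fp=1000)    # (1.0, 0.0)
```

Sweeping the threshold and plotting each resulting (sensitivity, 1 - specificity) pair traces the ROC curve; a curve hugging the diagonal, as the letter notes of the study's, means every threshold buys sensitivity only at an equivalent cost in specificity.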

References

1. McKean D, Kudari M, Landells M, et al. Validating a threshold of ocular gaze deviation for the prediction of acute ischaemic stroke. Clin Radiol 2014;69:1244-8.
2. Altman DG, Vergouwe Y, Royston P, et al. Prognosis and prognostic research: validating a prognostic model. BMJ 2009;338:b605.
3. Schwartz LM, Woloshin S, Sox HC, et al. US women’s attitudes to false positive mammography results and detection of ductal carcinoma in situ: cross sectional survey. BMJ 2000;320:1635-40.
4. Boone D, Mallett S, Zhu S, et al. Patients’ & healthcare professionals’ values regarding true- & false-positive diagnosis when colorectal cancer screening by CT colonography: discrete choice experiment. PLoS One 2013;8:e80767.
5. Halligan S, Altman DG, Mallett S. Disadvantages of using the area under the receiver operating characteristic curve to assess imaging tests: a discussion and proposal for an alternative approach. Eur Radiol 2015 (in press). PMID: 25599932.

S. Halligan
UCL, London, UK
E-mail address: [email protected]
http://dx.doi.org/10.1016/j.crad.2015.01.012
© 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

