
PostScript

LETTER TO THE EDITOR

Clinical trials: what are we afraid of, what should we do?

We would like to respond to an editorial in the September issue of the journal because it rehearses much confusion and many misconceptions about clinical trials.1 More importantly, the authors’ recommendations are wrong-headed and can only harm our patients and, secondarily, our specialty.

WHAT ARE THE AUTHORS AFRAID OF?

The authors discuss some of the difficulties with recent randomized controlled trials (RCTs) that have not demonstrated the good outcomes our interventions were purported to deliver. The title of the editorial, that RCTs can be a ‘double-edged sword’, seems to warn the reader against something, but against what exactly?

A. Could it be that we are designing and participating in too many trials? In fact, we are collectively responsible for our field delivering poor (if any) evidence regarding the merits of our daily interventions. We need more trials, preferably trials designed and conducted by neurointerventionists. We must regain control of how to evaluate the merits of our own practice. Most importantly, if we are to offer patients care that they can trust, we must be constantly working to validate our still unvalidated interventions. What is the best way for us to do this? A trial, of course, but not just any type of trial. We will return to this point below.

B. Should we distrust the disappointing results of recent RCTs because they have ‘limitations’, as the citation from Concato1 suggests? What are we to do about trials that have design shortcomings? Should we stubbornly continue to practice interventions that have now been shown to be harmful, albeit in trials ‘with limitations’, claiming them to be standard of care as if they had been proven beneficial? Of course not; disappointing trial results simply mean that such interventions should only be offered within the context of better designed trials.

C. Should the readers of the editorial be warned against participating in ongoing trials? Are they being warned that they don’t have the ‘appropriate expertise’, and that they should shy away from evaluating their practice while waiting for the ‘leaders of the field’ to show that the treatment they offer can be performed well? What would this tell the average interventionist about the treatment that they themselves offer their patients? We fear that typical readers may come to these conclusions.

COMMON MISCONCEPTIONS ABOUT TRIALS

The authors raise concerns about the improper design, interpretation or extrapolation of trial results, but this is no reason to fear trials; it is an indication not only for doing better trials, but for promoting better practices. Unfortunately, the authors rehearse the most common misconceptions about trials, which we attempt to dismantle here.

A. Misconception 1: Trials are conceived as weapons or fodder for arguments: they are the best way to win turf battles or to obtain reimbursement for our efforts. Such trials are disconnected from clinical practice, which goes on no matter what is being trialed by distant and divorced ‘clinical researchers’.
Response: Properly designed trials are the best way to evaluate our actions and, at the same time, while we wait for results, they are the best and most ethical way to care for patients.

B. Misconception 2: Trials should be designed to show what we think we already know and do.
Response: Trials are the way to act according to a hypothesis, accepting that it can only be a hypothesis until proved otherwise. Trials are the way to see whether what we do is actually good for patients, in the reality of daily patient care. The burden of proof is always, as it should be, on the shoulders of whoever is proposing an invasive, risky treatment. A trial that fails to show that an intervention is beneficial for most patients is not a failure, as the authors suggest; it is an indication that more work needs to be done until the proposed improved intervention is shown to be beneficial for patients in general.

C. Misconception 3: Trials should address detailed treatment protocols enacted by credentialed experts on a homogeneous group of selected patients.
Response: The authors repeat a profound misunderstanding of the role of heterogeneity in clinical research. Heterogeneity in patients, symptoms, lesions, techniques or operators is a real challenge yet, at the same time, a real opportunity to assess the value of common interventions in the care of patients. This misunderstanding likely comes from an attempt to import into the clinical world the methods of preclinical research. In the animal laboratory we attempt to isolate a cause–effect relationship and remove all sources of ‘differences’ between experimental subjects by using animals of the same strain, with the same aneurysm at the same location, treated in the same way by the same surgeon, to prove our mechanistic hypotheses. The aims of clinical research are different. Our preclinical discoveries will be of no use to clinicians if, once tested in the heterogeneous conditions of real practices, they have no power to change outcomes for the better. A good treatment cannot be a treatment that is successful only in the unverifiable context of a single expert surgeon reporting his singular experience with particular cases. Just as you can trust a good tool when it has been shown to be useful in many varied circumstances (and a good surgeon because he has accumulated a large experience with many varied cases), a good intervention is one that has been shown to be beneficial in the various circumstances in which the treatment will be employed to help heterogeneous patients confronted with similar problems. Thus, a good trial is a trial with broad inclusion criteria and few exclusion criteria, not one that narrowly defines what, who or how treatment should be delivered.

EXPLANATORY AND PRAGMATIC TRIALS

The authors fail to mark the distinction between explanatory and management or pragmatic trials (table 1). This failure may explain why, although they purport to defend and promote our specialty, they end up recommending the wrong types of trials to do so. This is a matter of logic. Imagine I am an old interventionist who refuses to get up at night to rush to the hospital to perform urgent mechanical thrombectomy for acute strokes. First, I will claim that thrombectomy is a dangerous experimental intervention, reserved for credentialed experts, performed rarely and only in the context of an explanatory trial, proposing to intervene on narrowly selected patients, filtered to be the best candidates, according to a detailed protocol, requiring an expert research team. I should rest happily undisturbed for a while. Then if, to my dismay, the trial shows many years later that treatment in these optimal conditions can be beneficial, I will claim that, according to any textbook on clinical trial methodology, the results do not apply to the ‘real world’. Explanatory trials can be used to abandon a treatment when the results show that harms outweigh benefits, but a positive result from an explanatory trial will always remain ambiguous, because the trial was conducted in artificial conditions on narrowly selected patients by highly selected operators; we cannot scientifically generalize that widespread clinical application is worthwhile. It would take a positive management or pragmatic trial such as the International Subarachnoid Aneurysm Trial (ISAT)2 to show that I should get out of bed to treat acute stroke. On the contrary, if I am an enthusiastic, positive young interventionist eager to help patients with acute stroke, I will claim that thrombectomy is nearly a standard endovascular intervention that can be performed by any qualified interventionist in the regular conditions of normal, if urgent, care; I will want to participate in a pragmatic trial, welcoming all patients with the disease, open to including standard interventionists like me, in a large simple trial. If the trial shows positive results, it is clearly worthwhile to adopt this treatment as standard of care. On the other hand if, to my dismay, the trial does not show clear benefits, the results are ambiguous and there is no reason to abandon this treatment; further specification or refinement of therapy is worth exploring. To summarize, if the goal is to abandon a treatment, it is better to design an explanatory trial (table 2). This is clearly not what the authors intended, but they are making the wrong recommendations.


Table 1  Explanatory and pragmatic trials*

                               Explanatory trial                             Pragmatic trial
Question                       Can therapy work in optimal conditions?       Does therapy work under normal conditions?
Patient eligibility            Strictly limited to best candidates           All-comers
Physicians                     Best hands                                    Normal expertise
Treatments                     Closely monitored/detailed specifications     Standard care
Follow-up tests and intensity  Frequent visits and special tests to assess   Routine practice
                               biological responses
Outcomes                       Restricted set of biological explanatory      Clinical outcomes
                               outcomes

*Inspired by Sackett.3

Table 2  Conclusions that can be drawn from explanatory or pragmatic trials*

                     Experimental group:              Experimental group:
                     benefit greater than harm        benefit no greater than harm
Explanatory trial    Ambiguous results                Sensible to abandon treatment
Pragmatic trial      Worthwhile to adopt treatment    Ambiguous results

*Inspired by Sackett.3

SELF-CONTRADICTORY RECOMMENDATIONS

In the end, the goal of the editorialists remains unclear. They perhaps wanted to warn readers that RCTs could damage our field, our clinical practice, or even harm patients. Although we share many of the authors’ concerns, such as the need for long follow-up to properly evaluate the merits of preventive treatments, the necessity to change the way trials are designed and conducted, or the dissatisfaction with the aberration that we must compete for scant financial support to be permitted to do our duty, we remain at odds with their conception of the role of trials, conceived as proofs obtained outside clinical care. We know that what can harm patients are uncontrolled interventions that have never been proven beneficial, and we know that what can harm a specialty is the continued practice of unverifiable care. The authors can only come up with self-contradictory advice: they want larger trials that reflect our ongoing practices and that capture long-term outcomes but, at the same time, they want rapid answers from trials that address a narrowly defined group of rare, homogeneous patients treated with precisely ‘standardized’ methods enacted by a few selected, credentialed operators. Of course, these contradictory goals cannot bear fruit.

DOING THINGS DIFFERENTLY

We cannot encourage the status quo: the majority of doctors continue to practice unverifiable medicine, performing unvalidated treatments on any patient, by anybody, with no reliable results, while a small minority of ‘researchers’ must compete for scant funds. This small minority of interventionists assesses an even smaller proportion of patients, with interventions performed by highly selected individuals in a minority of homogeneous patients, using explanatory trials that last forever, hoping to show therapy in a good light, but which at best will give results that remain ambiguous. This system will never work. We must learn to do things differently.

First, we need a demarcation between validated and unvalidated care. We must make this demarcation ourselves, before administrators or governments make it for us. Unvalidated care consists of ‘experimental interventions’ that should be offered within a trial designed to protect the very patients being offered yet-to-be-shown beneficial treatments. Within trials, patients are protected from our unverified hypotheses and from our enthusiasm regarding our purported powers. Quality control can only come after an intervention has been validated as good in general. And, of course, there is no point in doing, teaching and training generations of physicians to perform procedures that are useless or dangerous. Once a treatment is validated as beneficial in general, it can be practiced outside of a trial, but not before.

In a fast evolving field, clinical care consists of two articulated contexts: normal care and care research. Normal practice is care previously validated as beneficial, using reliable methods, usually RCTs. When is it appropriate to resort to care research? When care is suboptimal, there is hope for patients to benefit, but reliable knowledge has yet to be obtained. Better outcomes are possible, but they must be pursued within a process capable of protecting patients from false promises: a pragmatic RCT. When should care switch back to normal practice? When promising tests and treatments have not delivered the promised results. When should they be integrated into normal practice? When they have been shown to improve patient outcomes. We should not fear RCTs as double-edged swords but, rather, conceive and design trials as the way to progress and to practice optimal care while we learn, in real time, what optimal care is.


Tim E Darsaut,1 Jean Raymond2

1Division of Neurosurgery, Department of Surgery, University of Alberta Hospital, Mackenzie Health Sciences Centre, Edmonton, Alberta, Canada
2Department of Radiology, Centre Hospitalier de l’Université de Montréal, Notre-Dame Hospital, Montreal, Quebec, Canada

Correspondence to Dr Jean Raymond, Department of Interventional Neuroradiology (NRI), CHUM—Notre-Dame Hospital, 1560 Sherbrooke East, Pavilion Simard, Room Z12909, Montreal, QC, Canada H2L 4M1; [email protected]

Received 10 September 2013
Accepted 7 January 2014

Competing interests None.

Provenance and peer review Not commissioned; internally peer reviewed.

To cite Darsaut TE, Raymond J. J NeuroIntervent Surg Published Online First: [please include Day Month Year]. doi:10.1136/neurintsurg-2013-010970

REFERENCES
1. Mocco J, O’Kelly C, Arthur A, et al. Randomized clinical trials: the double edged sword. J Neurointerv Surg 2013;5:387–90.
2. Molyneux AJ, Kerr RS, Yu LM, et al; International Subarachnoid Aneurysm Trial (ISAT) Collaborative Group. International Subarachnoid Aneurysm Trial (ISAT) of neurosurgical clipping versus endovascular coiling in 2143 patients with ruptured intracranial aneurysms: a randomised comparison of effects on survival, dependency, seizures, rebleeding, subgroups, and aneurysm occlusion. Lancet 2005;366:809–17.
3. Sackett D. The principles behind the tactics of performing therapeutic trials. In: Haynes B, Sackett D, Guyatt G, Tugwell P, eds. Clinical epidemiology: how to do clinical practice research. Philadelphia, PA: Lippincott Williams and Wilkins, 2006:173–243.
