The European Journal of Contraception and Reproductive Health Care, 2014; 19: 71–73

LETTERS TO THE EDITOR

Not only randomised controlled trials, but also controlled observational studies

SIR: I very much appreciated the recent editorial on evidence-based medicine authored by the editor-in-chief of The European Journal of Contraception and Reproductive Health Care1. Dr Amy stated that ‘the strongest evidence is provided by systematic reviews of […] randomised controlled trials (RCTs)…’. He referred to a 2006 paper2 of the Fertility Regulation group3, Cochrane Collaboration, that gave an overview of about ten years of experience in producing systematic reviews and, based on this, he made six recommendations. Since 2006, the group has grown both in the number of reviews and in the number of collaborating authors, and our thinking on methodology has progressed along the way. I should like to present some additional recent information related to the aforementioned editorial.

We agree that effectiveness is best examined by RCTs, especially when the difference in outcome between the new intervention and the comparison group is small. Effectiveness, however, is not the only factor that physicians take into account when advising patients. On the other side of the coin are the adverse effects. RCTs are not optimally equipped to assess side effects, for two reasons. Firstly, the duration of RCTs is often too short to detect side effects that take longer to develop. Secondly, RCTs have low power to detect infrequent side effects (see the sketch below). For assessing the latter, researchers are therefore forced to use observational data. We hear the chorus of standard criticism that such studies are by definition prone to confounding, but this one-liner needs more attention4. Sometimes the results from observational studies agree with those from RCTs, provided the number of RCT participants is sufficient to generate those data. An example is the risk of venous thrombosis brought to light by the Women’s Health Initiative study5.
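To give a rough sense of the scale involved, the following minimal sketch applies the standard two-proportion sample-size formula to an illustrative scenario. The baseline risk of 1 per 1,000, the doubling of that risk, and the 80% power target are assumptions chosen for illustration, not figures taken from any cited study.

```python
# Illustrative sketch (assumed figures, not data from the letter or its references):
# approximate sample size per arm for an RCT to detect a doubling of a rare
# adverse-effect risk from 1/1,000 to 2/1,000, two-sided alpha = 0.05, power = 80%,
# using the standard two-proportion normal approximation.
from statistics import NormalDist

def n_per_arm(p_control: float, p_treated: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm (equal arms, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p_control + p_treated) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5 +
                 z_beta * (p_control * (1 - p_control) +
                           p_treated * (1 - p_treated)) ** 0.5) ** 2
    return int(numerator / (p_treated - p_control) ** 2) + 1

# Roughly 23,500 participants per arm (about 47,000 in total) would be needed.
print(n_per_arm(0.001, 0.002))
```

A trial of that size is rarely feasible for a contraceptive method, which is exactly why questions about rare adverse effects usually have to be answered with observational data.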

Importantly, empirical evidence is accumulating which shows that side-effect estimates from observational studies are indeed similar to estimates from RCTs. Golder et al.6 demonstrated that ‘there is no difference on average in the risk estimate of adverse effects of an intervention derived from meta-analyses of RCTs and meta-analyses of observational studies. This suggests that systematic reviews of adverse effects should not be restricted to specific study types’. It also means that, in principle, observational studies provide valid estimates of adverse effects. It is noteworthy that observational studies can suffer from other forms of bias (for example: loss to follow-up, unblinded outcome assessment), but that applies equally to RCTs. Finally, in the field of contraception, RCTs are sometimes impossible to conduct; we then must rely on observational studies, the best evidence that is available.

These empirical observations also have an impact on the use of Grading of Recommendations Assessment, Development and Evaluation (GRADE)7, which aims to make ‘more credible, realistic and useful recommendations’ for guidelines. In the GRADE approach, the highest quality evidence comes from a systematic review based on RCTs; controlled observational studies are downgraded. However, for infrequently occurring adverse effects, the highest quality evidence is generally provided by the controlled observational study. For reporting studies of that type, we usually refer to the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) guidelines8. At present, the Methods Group of the Cochrane Collaboration is developing guidance for assessing the risk of bias in non-randomised studies. These efforts indicate the increasing recognition of the value of evidence from non-randomised studies.

Declaration of interest: The author is a coordinating editor of the Cochrane Fertility Regulation Group2. The author alone is responsible for the content and the writing of the paper.


Frans M. Helmerhorst
Leiden University Medical Centre – Gynaecology and Clinical Epidemiology, Leiden, the Netherlands
E-mail: [email protected]


REFERENCES

1. Amy JJ. Evidence-based medicine: The salad is only as good as the ingredients. Eur J Contracept Reprod Health Care 2013;18:323–6.
2. Helmerhorst FM, Belfield T, Kulier R, et al. The Cochrane Fertility Regulation Group: synthesizing the best evidence about family planning. Contraception 2006;74:280–6.
3. http://fertility-regulation.cochrane.org/
4. Vandenbroucke JP. When are observational studies as credible as randomised trials? Lancet 2004;363:1728–31.
5. Cushman M, Kuller LH, Prentice R, et al; Women’s Health Initiative Investigators. Estrogen plus progestin and risk of venous thrombosis. JAMA 2004;292:1573–80.
6. Golder S, Loke YK, Bland M. Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: Methodological overview. PLoS Med 2011;8:e1001026.
7. Atkins D, Best D, Briss PA, et al; GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ 2004;328:1490.
8. www.equator-network.org/reporting-guidelines/the-strengthening-the-reporting-of-observational-studies-in-epidemiology-strobe-statement-guidelines-for-reporting-observational-studies/

Interpretation of randomised clinical trials’ results. A reply to J. J. Amy

KEYWORDS
Evidence-based medicine; Interpretation; Non-inferiority trial; Null hypothesis; Randomised clinical trial; Relative risk; Superiority trial

SIR: I read with great interest Jean-Jacques Amy’s editorial on evidence-based medicine1. I certainly agree with his conclusion that much can go wrong with so-called ‘best evidence’ and that even randomised controlled trials (RCTs), the ‘gold standard’ for assessing therapeutic interventions, may have shortcomings. RCTs are commonly considered to be the foundation of evidence-based treatments, but they must indeed be critically appraised to confirm the validity of conclusions2. I should like to add a few thoughts on how results of RCTs can lead to misinterpretation and bias.

The main advantage of RCTs lies in the confidence with which we can view their results. However, the results of RCTs may lack validity because of problems with the design, the execution of the trial, or the interpretation of the results. Different problems with the validity of RCTs may arise depending on (i) the control group chosen (active or placebo), and (ii) the objectives and the planned statistical analysis (demonstration of superiority versus non-inferiority). Whereas the demonstration of superiority of one treatment modality (a new medicine) over another (a standard medicine or placebo) validates the trial and indicates that the design and execution yielded enough trial sensitivity (‘power’) to demonstrate a statistically significant difference, this is not always true for active-controlled non-inferiority trials3.

Non-inferiority trials are intended to show whether a new treatment has at least as much efficacy as a standard one. It can be in the sponsor’s interest that some degree of inferiority of the new medicine relative to the standard therapy goes undetected; this provides an incentive for less precise measurements (e.g., of blood pressure changes) and, hence, for poor trial execution. Moreover, even if the non-inferiority trial is well designed and well executed, demonstrating non-inferiority does not by itself prove that the new medicine is effective. Indeed, one has to assess critically whether the choice of the comparator and the dosage used are really appropriate, and whether one can be certain (generally on the basis of historical data from placebo-controlled trials) that the active control, given in this particular dose to these types of patients within this particular trial setting, has an efficacy greater than that of placebo. If not, the trial may have shown no more than non-inferiority versus placebo. Hence, for a non-inferiority trial to be interpretable, it is critical to know that the active control had its expected effect in the trial. Similarity between the test drug and the active control can mean that both treatments are equally effective, but also that both treatments are equally ineffective3.

As mentioned above, a non-inferiority trial seeks to show that the difference in response between the test drug and the active control is smaller than some prespecified non-inferiority margin (‘delta’). This margin must be chosen on the basis of the past performance of the active control, the comparison of the current study with prior studies, and an assessment of the quality of the study. The validity of the conclusion of a non-inferiority trial depends on the choice of the non-inferiority margin, which should therefore be clearly defined and explained.
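As a concrete illustration of the decision rule described above, here is a minimal sketch, with assumed numbers rather than data from any actual trial, of how non-inferiority of two failure proportions is commonly assessed: the new treatment is declared non-inferior if the upper limit of the two-sided 95% confidence interval for the difference in failure rates (new minus active control) lies below the prespecified margin delta.

```python
# Illustrative sketch (assumed counts and margin, not from any cited trial):
# non-inferiority test on the difference of two failure proportions using a
# Wald confidence interval. Non-inferiority is concluded if the upper limit
# of the two-sided 95% CI for (p_new - p_control) is below the margin delta.
from statistics import NormalDist

def non_inferior(failures_new: int, n_new: int,
                 failures_ctrl: int, n_ctrl: int,
                 delta: float, alpha: float = 0.05) -> bool:
    p_new = failures_new / n_new
    p_ctrl = failures_ctrl / n_ctrl
    diff = p_new - p_ctrl
    se = (p_new * (1 - p_new) / n_new + p_ctrl * (1 - p_ctrl) / n_ctrl) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for a 95% CI
    upper = diff + z * se                     # upper confidence limit of the difference
    return upper < delta                      # True -> non-inferiority demonstrated

# Assumed example: 30/1,000 failures on the new drug vs. 25/1,000 on the
# active control, with a prespecified margin delta of 0.02 (2 percentage points).
print(non_inferior(30, 1000, 25, 1000, delta=0.02))
```

A result of True here is only as informative as the choice of delta and the evidence that the active control actually exerted its expected effect in the trial, which is precisely the point made above.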


