ARTHRITIS & RHEUMATOLOGY Vol. 66, No. 10, October 2014, pp 2661–2663 DOI 10.1002/art.38783 © 2014, American College of Rheumatology

EDITORIAL

How Publication Bias May Harm Treatment Guidelines

Robert B. M. Landewé

Robert B. M. Landewé, MD: Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands, and Atrium Medical Center, Heerlen, The Netherlands. Address correspondence to Robert B. M. Landewé, MD, Academic Medical Center, University of Amsterdam, Department of Clinical Immunology and Rheumatology, Meibergdreef 9, PO Box 22660, 1100 DD Amsterdam, The Netherlands. E-mail: [email protected]. Submitted for publication June 24, 2014; accepted in revised form July 10, 2014.

The treatment of rheumatoid arthritis (RA) is an attractive field for systematic literature research and meta-analysis. Several factors may explain this popularity. RA is the most common inflammatory rheumatic disease; at least 15 different disease-modifying antirheumatic drugs (DMARDs) with proven efficacy but uncertain priority are available; and the availability of all of these effective DMARDs has motivated many investigators to test strategies with different timing, intensity, and combinations of DMARDs in order to further optimize treatment outcomes. These reasons may also explain why RA treatment has become such a highly competitive arena, with vivid scientific debate, conflicting personal opinions, and immense commercial interests.

An investigator conducting systematic literature research has to find his or her way through this arena of prejudices, strong opinions, and marketing strategies. Often such a researcher is told by "experts" that a rather limited literature search will suffice, since the experts already know the data very well. Multiple experiences have convinced me of the contrary. While the expert's memory may be a fruitful source for checking the search results for inconsistencies, expert opinion is in fact only a compilation of recent and remarkable, but often incidental, observations published in high-impact journals by famous authors, rather than a balanced interpretation of all available literature. In 2009, most experts in the European League Against Rheumatism task force for recommendations for the management of RA were convinced of the virtues of DMARD combination therapy on the basis of one or two "eye-catching" trials. A careful and extensive systematic literature search, however, led the experts to conclude that there was actually insufficient evidence to claim that DMARD combination therapy in RA is better than DMARD monotherapy (1). The superior efficacy observed in the trials of combination DMARD therapy could be ascribed to the effects of glucocorticoids rather than to the DMARD combinations themselves. Such a conclusion would not have been reached if only single trials had been interpreted, or if expert opinion had been followed.

Systematic literature research and meta-analysis are powerful tools for obtaining a scientifically robust impression of the state of the art regarding the treatment of diseases. Easily accessible electronic bibliographic databases, such as PubMed, the Cochrane Library, and Embase, have proven indispensable, and the contribution of this type of meta-research to the body of current medical knowledge is substantial. There is one important proviso: before the results of a trial can be retrieved by systematic literature research, they have to appear in the public domain. Publication of trial results is the sole responsibility of the investigators, and not all investigators take this responsibility very seriously. An unknown percentage of completed clinical trials will never be published. While deliberately keeping completed trials out of the public domain is widely considered unethical, there is no legal way to force investigators to publish their results. That is a strange omission in a world in which particular emphasis has been placed on detecting and preventing scientific fraud. Just as with fraud, deliberate refusal to publish unsatisfactory trial results does not do justice to the altruistic contribution of patients who voluntarily consent to participate in a trial.

The reasons for not publishing the results of trials are often opaque. Investigators may lose interest when confronted with the absence of an anticipated positive effect, or they may be discouraged by consecutive rejections by journals. Alternatively, investigators may be hindered by industry sponsors from publishing commercially unprofitable results. Whatever the reasons, deliberately refraining from publication of completed trials will always lead to a skewed repertoire of trials that can be retrieved by systematic literature research.

The repertoire is skewed because the most newsworthy and unprecedented trial outcomes are rapidly published in journals with high impact factors, while negative results, or results that are "only" replications, are more likely not to be published at all, or only after a long delay. If "scientific evidence" is the distribution of all previously conducted trials with positive, negative, and neutral results, and systematic literature research is the tool with which we measure that evidence, then the answer to any specific systematic literature research question will be positively skewed, because unpublished evidence cannot be included. As a consequence, guideline committees may adopt these too-positive impressions, politicians and reimbursement bodies may base their judgment of the risk/benefit ratio of treatments on biased opinions, and patients may consequently receive treatments for which the expectations are exaggerated, which in turn may adversely affect public health.
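As a purely illustrative sketch of this mechanism (the effect size, standard error, number of trials, and the significance-based publication rule below are arbitrary assumptions, not data from Khan et al or from any real trial), the following minimal simulation shows how a literature containing only "positive" trials overstates the treatment effect that a reader of the accessible evidence would infer.

```python
# Minimal sketch: selective publication of significant results inflates the
# apparent treatment effect. All parameters are arbitrary illustrative assumptions.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.20   # assumed true standardized treatment effect
SE = 0.15            # assumed standard error of each trial's estimate
N_TRIALS = 1000      # hypothetical number of completed trials

# Each trial observes the true effect plus sampling error.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_TRIALS)]

# Suppose only trials reaching nominal significance (z > 1.96) are published.
published = [e for e in estimates if e / SE > 1.96]

print(f"All completed trials:  mean effect = {statistics.mean(estimates):.2f}")
print(f"Published trials only: mean effect = {statistics.mean(published):.2f} "
      f"({len(published)} of {N_TRIALS} trials published)")
```

In this toy example the published subset substantially overstates the true effect, even though every individual trial is unbiased; the distortion arises solely from which trials reach the public domain.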

Journal editors who have joined forces in the International Committee of Medical Journal Editors recognized the potential dangers of publication bias and determined in 2005 that every prospective trial must be registered in a trial registry before publication will be considered (2). The Food and Drug Administration and other drug-registration bodies have issued similar requirements for new treatments. The reasons for these promulgations are twofold: first, registration may give insight into what proportion of conducted trials is never published; and second, the increased transparency created by registration may push investigators to publish the results of registered trials regardless of their outcome.

In this issue of Arthritis & Rheumatology, Khan et al report on their investigation of whether the advent of the trial registry ClinicalTrials.gov in 2005 has indeed led to better performance regarding timely publication of completed clinical trials (3). The article is worth reading for several reasons. First, Khan et al found that no fewer than one third of the 143 trials of treatment of RA that were registered in ClinicalTrials.gov and had a completion date before December 31, 2009 were never published. Together, these 48 unpublished trials enrolled a stunning 10,000 RA patients, whose selfless contribution to medical science has apparently been lost because the investigators did not want to proceed with publishing the data.

A second remarkable observation also pertains to the 48 completed but unpublished trials. Khan et al, who should be commended for their scrutiny and endurance, repeatedly queried the investigators and sponsors of these trials to obtain further information. While some of them confirmed nonpublication of their registered randomized controlled trial and provided some helpful information, 29 investigators or sponsors of registered but unpublished trials failed to respond to 3 contact attempts. Of these 29 trials, 26 (90%) were funded by industry! To put this in context: of all 95 published trials, 64% were funded by industry.

Third, Khan et al found that a considerable proportion of published trials were registered not at the start of the trial, but only during its conduct or, worse, around the time of manuscript submission. This suggests that some investigators comply with the registration requirement only in order to get an article published, rather than because they understand the value of registries such as ClinicalTrials.gov, and it casts doubt on whether trial registration is currently being used appropriately.

Finally, and not unexpectedly, Khan et al found that publication is strongly and independently associated with positive trial outcomes. This finding is in agreement with observations in other areas of medicine and confirms that the body of evidence accessible via electronic databases is skewed toward (too) optimistic expectations for the treatment of RA. It is impossible to estimate to what extent this is the case, since the data needed for such a judgment are obviously lacking. It seems clear, however, that "obligatory" registration of trials has not prevented the available literature on the treatment of RA from being biased toward positive results. Such a bias may be partly responsible for the frequently heard but hard-to-prove critique that trial results are often more positive than experience in real practice.
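As a quick arithmetic check, the "one third" and "90%" figures follow directly from the counts quoted above; the short snippet below merely recomputes those percentages and adds nothing beyond the reported numbers.

```python
# Recompute the headline proportions quoted above (counts as reported by Khan et al).
registered = 143              # RA trials registered in ClinicalTrials.gov, completed before Dec 31, 2009
unpublished = 48              # trials never published
nonresponders = 29            # unpublished trials whose investigators/sponsors never responded
industry_nonresponders = 26   # of those, funded by industry

print(f"Unpublished: {unpublished / registered:.0%} of registered trials")                    # ~34%, i.e., one third
print(f"Industry funding among nonresponders: {industry_nonresponders / nonresponders:.0%}")  # ~90%
```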

In the Discussion section of their article, Khan et al offer suggestions for improving the "visibility" of trials that have been conducted but never reported in the public domain. They make a strong case for open data sharing, and I agree with them, although many legal and ethical obstacles concerning privacy protection and patients' right of self-determination will have to be overcome. In the meantime, we should not forget to continuously convince our current and future clinical investigators of their moral obligation as scientists to register their trials before they start, to provide follow-up information to registries, and to make research data publicly available regardless of "how good the results are." We should tell these investigators that they owe their patients, who have consented to take part in an experiment with an uncertain outcome, the assurance that, at the very least, their data will be handled with extreme care and will become part of public scientific knowledge. Pharmaceutical companies that withhold "uninformative" (read: commercially uninteresting) data from publication do not contribute positively to a healthy scientific climate. Studies such as that by Khan et al will be indispensable for checking accountability with regard to these important principles.

AUTHOR CONTRIBUTIONS

Dr. Landewé drafted the article, revised it critically for important intellectual content, and approved the final version to be published.

REFERENCES

1. Gaujoux-Viala C, Smolen JS, Landewe R, Dougados M, Kvien TK, Mola EM, et al. Current evidence for the management of rheumatoid arthritis with synthetic disease-modifying antirheumatic drugs: a systematic literature review informing the EULAR recommendations for the management of rheumatoid arthritis. Ann Rheum Dis 2010;69:1004–9.
2. DeAngelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors [editorial]. JAMA 2004;292:1363–4.
3. Khan NA, Singh M, Spencer HJ, Torralba KD. Randomized controlled trials of rheumatoid arthritis registered at ClinicalTrials.gov: what gets published and when. Arthritis Rheumatol 2014;66:2664–74.
