CLINICAL PHARMACOLOGY & THERAPEUTICS

AUGUST 1992

COMMENTARY

Publication bias: Its implications for clinical pharmacology

Gerhard Levy, PharmD, Amherst, N.Y.

From the Department of Pharmaceutics, School of Pharmacy, State University of New York at Buffalo. Presented in the Symposium on Ethical Issues in Clinical Pharmacology at the Annual Meeting of the American Society for Clinical Pharmacology and Therapeutics, Orlando, Florida, March 18-20, 1992. Received for publication April 8, 1992; accepted April 12, 1992. Reprint requests: Gerhard Levy, PharmD, Department of Pharmaceutics, School of Pharmacy, SUNY/Buffalo, Amherst, NY 14260.

For clinical pharmacologists directly engaged in patient care and in teaching rational pharmacotherapy, the clinical literature on the comparative efficacy and safety of medicinal agents is the lifeblood of their profession. If that literature is biased, then the compass that guides their work has a twisted face, misdirecting their efforts and distorting their judgment. That, in a nutshell, is the context in which clinical pharmacologists must assess the implications of publication bias in the clinical literature. Two aspects of publication bias deserve particular attention: selective publication intended to serve the interests of authors or their sponsors or both, and bias against "negative" results on the part of certain authors, reviewers, and editors. Both issues have been addressed previously by others, as reflected in the list of references. Here I will consider them from the viewpoint of clinical pharmacology and suggest how their occurrence can be minimized.

It is one of the realities of our times that almost all clinical investigations of the comparative efficacy and safety of the members of a class of medicinal agents require financial sponsorship, mainly by the pharmaceutical industry. Many, perhaps the majority, of such investigations are actually designed and initiated by medical or clinical pharmacology departments of pharmaceutical companies. They frequently control the data and can decide whether or not to publish and what to publish. Even in the absence of such control or of restrictive contractual obligations, investigators may publish selectively in accordance with the expressed or presumed wishes of their sponsors to retain sponsor good will and ensure future financial support. The consequences of these realities require exploration.

The Danish physician Gøtzsche1 searched the literature from 1966 to 1985 for double-blind comparative trials of nonsteroidal anti-inflammatory agents marketed in Denmark and found 196 publications. Significant differences between treatments were claimed in 93 reports. The "new" drug was favored in 73 trials; the control drug in only eight trials. The 39 trials that showed significant differences in adverse effects all


favored the new drug. Examination of the published data revealed that significant differences claimed in 10 trials were erroneous; all of these errors were in favor of the new drug! In 82 reports (42%), bias in the conclusions or abstract consistently favored one of the drugs: the control drug in one report and the new drug in 81 reports. In an earlier study, Juhl et al.2 evaluated 306 randomized clinical trials of gastroenterologic therapy indexed between 1964 and 1974. Of these, 191 reports showed significant differences between treatments. In only five of these was the test treatment significantly inferior to the control (established treatment or placebo).

Some studies focused more directly on a possible association of clinical trial outcome with industrial sponsorship. Thus, Davidson3 examined the reports of 107 controlled clinical trials published in five leading medical journals during 1984. Of these, 71% favored the new therapy, whereas 29% favored traditional therapy. Forty-three percent of the former but only 13% of the latter trials had been supported by industry. Davidson also found that "In no case was a therapeutic agent manufactured by the sponsoring company found to be inferior to an alternative product manufactured by another company."3

In addition to relative efficacy and safety, cost-effectiveness has become an important criterion in drug product evaluation. Hillman et al.,4 a group that has performed numerous cost-effectiveness studies for many pharmaceutical manufacturers since 1978, recently explained that ". . . in economic analysis, the choice of which agents and interventions to compare, which data to include, which costs to measure, which perspective to adopt, which outcomes to assess, which assumptions to make, and how to present the results may produce important differences in a study's conclusions." They caution that ". . . many of the assumptions made in economic analyses are not easily recognized by the readers of a study report" and point out that ". . . attempts to fund only positive studies are the rule, and incentives for investigators to make favorable assumptions are ever present."4

The available data do not justify a definitive conclusion that the demonstrated bias in the published literature in favor of "new" drugs has been intentional and motivated by self-serving interests of investigators or sponsors. Gøtzsche1 listed 22 different factors that may increase the number and the proportion of significant results favoring a new drug, ranging from design bias, choice of dose, and selection of indexes to choice of statistical methods and outright fraud. It is true, however, that many of these variables should ordinarily not favor either the "new" drug or the control unless choices have been made intentionally.

A perhaps more subtle form of publication bias is based on the "negative" or "positive" outcome of a study. Apparently, at least some investigators, reviewers, and editors dislike studies that do not show significant differences between treatments or other study variables, that is, studies with negative results. But, as stated so aptly by Hetherington et al.,5 ". . . the very process of characterizing a study as either negative or positive on the basis of the results of statistical significance testing is misconceived. All studies that have been well designed and well conducted represent positive contributions to knowledge, regardless of whether statistically significant differences between groups have been observed." That may be so, but the evidence shows that investigators and reviewers, and perhaps editors (there are apparently no hard data on the latter), prefer reports with positive results over those with negative results.

The role of reviewers was illuminated by Mahoney6 in an interesting study in which reviewers for one journal in the social sciences were each assigned one of several different versions of a manuscript with identical Introduction and Methods sections but with Results and Discussion sections that described either positive or negative outcomes. On a scale of 0 to 6 (low to high), the Methods section of the manuscript with positive results received a score (mean ± SD) of 4.2 ± 1.9 (n = 10), whereas the Methods section in the manuscript with negative results received a score of 2.4 ± 2.4 (n = 14), p < 0.05, even though these sections were identical!

However, it appears that the major source of bias against publishing studies with negative findings is actually the authors. Dickersin et al.7 surveyed 318 authors of published clinical trials; 156 respondents reported 1041 published and 271 unpublished trials.
Of 178 unpublished completed trials with trends specified, only 15% favored the new therapy. Conversely, 55% of 767 published reports favored the new therapy (p < 0.001). The major reported reason for nonpublication was not rejection of a submitted manuscript but negative results and (therefore?) lack of interest. One apparent correlate of negative results in this particular survey (but not in others) is trial size: the median number of subjects was 24 in the unpublished studies and 68 in the published studies. A retrospective survey8 of 487 research projects approved by the Central Oxford Research Ethics Committee between 1984 and 1987 showed that, as of May 1990, 285 studies had been analyzed and 52% of these had been published.

The published studies had more positive results (odds ratio, 2.3), were rated more important by the investigators, and had a larger sample size. It would be interesting to determine if studies supported by the pharmaceutical industry (or the National Institutes of Health [NIH]) tend to have a larger sample size than studies without such a source of funding; this could indirectly (and unintentionally) lead to a higher frequency of positive results in industry-funded rather than in unfunded studies. However, Easterbrook et al.8 reported in 1991 that studies sponsored by pharmaceutical companies in the United Kingdom were less likely to be published whatever the results, possibly because many of the sponsored studies were comparisons of formulations and ". . . small trials for product licensing."8

The apparent bias against publishing reports of clinical studies with negative results is also evident in a recent survey by Dickersin and Meinert,9 who monitored 574 studies approved in 1980 by two Johns Hopkins University ethics committees. Of the studies completed by 1988, 71% from the School of Medicine and 63% from the School of Public Health had been published. Studies with statistically significant differences were more likely to be published (odds ratio, 2.7). Also associated with more likely publication were multicenter design and NIH funding, that is, factors that one would expect to result in a larger number of subjects per trial (and therefore in a greater likelihood of showing statistically significant differences).

What are the potential consequences of the tendency not to publish the findings of comparative clinical trials without statistically significant differences? Simes10 found that a pooled analysis of 16 published clinical trials showed a significant survival advantage for combination chemotherapy, whereas no significant difference in survival (relative to the standard regimen) was found in the analysis of the results of 13 published and unpublished trials (the latter obtained by way of an advance trial registry). Thus, as Thompson and Pocock11 have recently concluded, reliance on published studies may distort the results of meta-analyses. This is of course also true for the more informal literature analyses that many clinical pharmacologists and clinical pharmacists perform routinely. The problem is so serious that Chalmers12 has concluded that ". . . failure to publish an adequate account of a well-designed clinical trial is a form of scientific misconduct that can lead those caring for patients to make inappropriate treatment decisions." Concern about real or suspected bias against reports of studies with negative results moved the executive editor13 of the New England Journal of Medicine to reassure or explain to readers that ". . . when the Journal considers publication of a negative study, it applies the same criteria as it does when considering a positive one: Does it deal with an important question? Is the information new and interesting? Was the study well done?"

What can be done to reduce publication bias in the clinical literature and what should be the role of clinical pharmacologists in that effort? One may be tempted to advocate a wide range of federal regulations and government oversight to achieve a more balanced and less restrictive flow of information in pharmacotherapy. It is doubtful that such a strategy will be effective. One can imagine different scenarios of clinical research with intentional attempts to withhold or distort information, only to find that all reasonable regulatory efforts to prevent these misdeeds can be readily circumvented by those intent on doing so. A more wholesome and, hopefully, more realistic attitude is to presume that most clinical investigators are honest and decent individuals who will cooperate in minimizing a problem with serious implications for proper patient care. In that spirit, the most direct and immediate efforts should be educational: to discuss all facets of publication bias, in its scientific and ethical aspects, among ourselves and with our students, frequently and comprehensively. We have to rid ourselves of the negative connotation of negative results and perhaps find a more cheerful designation for results that show no significant difference between treatments. (After all, it is nice to find that oral administration of a drug with minimal adverse effects is not significantly different in effectiveness from extensive surgery that is the current standard treatment!) We must also remind ourselves that clinical research, regardless of sponsorship, is subject to public policy and public accountability. Essentially, no clinical research can proceed without the approval of a group of individuals representing the public interest, that is, an Institutional Review Board on Research Involving Human Subjects (IRB). That group must weigh the apparent risk of the proposed research against the potential public benefit of the knowledge to be obtained. Benefit is obviously limited or even absent if the knowledge will not be made available to health professionals at large.
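The distortion that Simes10 demonstrated, a nominally significant pooled result that evaporates once unpublished trials are included, can be sketched with a toy inverse-variance fixed-effect meta-analysis. All effect estimates below are invented for illustration; they are not Simes' data:

```python
import math

# Invented (effect, standard error) pairs on a log-hazard-ratio scale.
# Negative effects favor the new treatment. Not Simes' actual data.
published = [(-0.30, 0.15), (-0.25, 0.20), (-0.35, 0.18)]    # favorable trials reached print
unpublished = [(0.10, 0.18), (0.05, 0.20), (0.08, 0.22), (0.00, 0.20)]  # null trials stayed in drawers

def pool(trials):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1 / se**2 for _, se in trials]
    est = sum(w * e for w, (e, _) in zip(weights, trials)) / sum(weights)
    return est, math.sqrt(1 / sum(weights))

pub_est, pub_se = pool(published)
all_est, all_se = pool(published + unpublished)

# |z| > 1.96 corresponds to p < 0.05 (two-sided)
print(f"published only: estimate {pub_est:.2f}, z = {pub_est / pub_se:.2f}")
print(f"all trials:     estimate {all_est:.2f}, z = {all_est / all_se:.2f}")
```

With these invented numbers, pooling only the published trials gives |z| of roughly 3 (nominally significant), whereas pooling all trials gives |z| of roughly 1.7 (not significant), which is exactly the kind of reversal that reliance on the published literature can produce.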
As stated by Chalmers,12 institutional committees on human research ". . . are only doing half their job if they approve clinical research projects but then fail to assess whether the work was conducted as agreed and then reported appropriately." The time has come to address the other half of the job. It can be done in a manner that is fair to the sponsors (who should derive certain exclusive benefits in return for their financial investment), to the investigators (who should have sufficient time for a thorough analysis and interpretation of their data and for initiation of follow-up studies before revelation of their findings to competing investigators) and, last but not least, to the public. These purposes can be achieved by requiring disclosure of study findings by way of formal publication or by alternate means (see next paragraph) within a reasonable time after completion of the investigation.

Here is my specific proposal: IRBs will require submission of comprehensive reports of the findings of all IRB-approved investigations no later than 3 years after completion of these investigations. Patients and volunteers who participated in these studies should not be identified in these reports. Comprehensive progress reports will be required no later than 5 years after the start of investigations that have been ongoing for more than 3 years, except for studies that were designed initially to continue for more than 3 years; reporting requirements for the latter will be set by IRBs on an individual basis. IRBs may permit postponement of reporting deadlines for compelling scientific reasons only. Such decisions must be recorded in the IRB minutes and in a notification that includes the title and approved protocol of the investigation. All reports of completed investigations, progress reports, and notifications of approval to delay reporting are to be forwarded promptly by the IRBs to a central nongovernmental registry. The central registry will publish abstracts of the final reports, progress reports, and notifications in a quarterly journal and supply copies of the complete documents for a fee. The model for the proposed scheme is Dissertation Abstracts, which has been a means of publicizing and distributing doctoral dissertations for many years.
Charney14 made a similar though less specific proposal in a letter to the editor of The Lancet in 1991, stating that ". . . ethical approval of a study should be conditional on its registration . . ." and suggesting that ". . . research ethics committees should forward the information to a central register. Ethical approval should also be contingent on a report in a potentially publishable format, which could yield information to be added to the register."

There will be objections to my proposal, and some of these objections can be anticipated and considered here.

Industry (or any other source of funding) paid for the study; it "belongs" to the sponsor. That argument would apply to nonclinical studies but not to clinical studies. Society in most countries asserts a public interest in clinical research on human subjects and permits it only when there is reasonable expectation of societal benefit. The 3-year delay in the proposed reporting requirements should be sufficient to give the sponsor an adequate head start to achieve competitive advantage and be sufficient also for investigators to fully exploit their intellectual findings.

The findings of certain clinical investigations may, if publicly available, be embarrassing to the sponsor or helpful to commercial competitors. For example, a sponsor's tablet product may be found to be less bioavailable than that of a competitor, or a sponsor's nonsteroidal anti-inflammatory drug may be found to produce more (serious) adverse effects than that of a competitor. Three years is enough time to reformulate the poorly bioavailable product and to confirm, refute, or put into perspective by way of additional studies the findings of the comparative anti-inflammatory drug study. If the initial findings are confirmed, prescribers should know.

Industry will perform or support future studies in countries that do not have a reporting requirement. The world is getting smaller. Policies concerning clinical investigations, be they regulatory or professional, are quite similar in all medically advanced countries. If a consensus on reporting requirements can be reached among clinical pharmacologists and other concerned parties, then it is reasonable to expect that such policies will eventually be adopted generally.

Some investigators will not formally complete their studies for many years to avoid or postpone the reporting requirement practically indefinitely.
For example, the study objectives may have been satisfied with 100 subjects (i.e., the desired information has been obtained) but enrollment of additional subjects continues at a very slow rate for several years. Not only will this attempt to circumvent the intent of the proposed reporting scheme be readily apparent to the IRB members upon review of the annual report and renewal of approval required by regulatory agencies, but the need for a comprehensive progress report no later than 5 years after the start of an approved investigation, as proposed here, will prevent the prolonged withholding of study findings.

Studies will be reported selectively and incompletely by those investigators who wish to withhold information. This can be prevented or minimized by requiring that the protocol approved by the IRB be attached to the report.

Many of the industry-sponsored investigations are small feasibility or exploratory studies or formulation development studies of little or no public interest; they will only clutter up the literature. These investigations, like all the others, will appear only as short abstracts in the "literature," that is, in the quarterly journal published by the central registry. Presumably, only a few persons will want to purchase the complete reports.

IRBs may not be willing to take on the extra work of reviewing the final reports submitted by investigators, in view of the additional burden on their volunteer members. The work involved will usually be considerably less than that of reviewing a manuscript for publication. If necessary, paid reviewers should be enlisted as IRB consultants and investigators should be charged a fee for this purpose.

It will be evident that what is proposed here will require essentially no extra effort from those investigators who publish comprehensive reports of their studies reasonably promptly. Submission of their manuscript(s) or published report(s) to the IRB, and through it to the central registry, would satisfy all requirements. The abstract published by the central registry would then reference the published report(s), making purchase of such reports from the central registry unnecessary and thereby avoiding copyright problems.

The reporting scheme outlined here is presented as a basis for discussion. One would hope that the focus of such a discussion will be on how to reduce publication bias rather than on the real or apparent shortcomings of the proposed reporting scheme per se.
Let the critics not only express their objections to the specifics of the proposal but also offer a better alternative for optimizing the societal benefit of clinical studies whose findings are not generally available to the health professions at present. Surely this is a problem that should be of utmost concern to clinical pharmacologists!


Note from the Editor

The Editor would welcome letters from readers commenting on Dr. Levy's article or otherwise addressing the issue he has raised and the suggestions he has made.

References

1. Gøtzsche PC. Methodology and overt and hidden bias in reports of 196 double-blind trials of nonsteroidal antiinflammatory drugs in rheumatoid arthritis. Controlled Clin Trials 1989;10:31-56.
2. Juhl E, Christensen E, Tygstrup N. The epidemiology of the gastrointestinal randomized clinical trial. N Engl J Med 1977;296:20-2.
3. Davidson RA. Source of funding and outcome of clinical trials. J Gen Intern Med 1986;1:155-8.
4. Hillman AL, Eisenberg JM, Pauly MV, et al. Avoiding bias in the conduct and reporting of cost-effectiveness research sponsored by pharmaceutical companies. N Engl J Med 1991;324:1362-5.
5. Hetherington J, Dickersin K, Meinert CL. Retrospective and prospective identification of unpublished controlled trials: lessons from a survey of obstetricians and pediatricians. Pediatrics 1989;84:374-80.
6. Mahoney MJ. Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cog Ther Res 1977;1:161-75.
7. Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H. Publication bias and clinical trials. Controlled Clin Trials 1987;8:343-53.
8. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991;337:867-72.
9. Dickersin K, Meinert CL. Risk factors for publication bias: results of a follow-up study. Controlled Clin Trials 1990;11:255.
10. Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol 1986;4:1529-41.
11. Thompson SG, Pocock SJ. Can meta-analyses be trusted? Lancet 1991;338:1127-30.
12. Chalmers I. Underreporting research is scientific misconduct. JAMA 1990;263:1405-8.
13. Angell M. Negative studies. N Engl J Med 1989;321:464-6.
14. Charney M. Publication bias. Lancet 1991;337:1102.
