CLINICAL PHARMACOLOGY AND THERAPEUTICS
Volume 18, Number 5
Clinical trials of drugs from the viewpoint of the academic investigator (a satire)

Dr. Lasagna was unable to attend the workshop in person, and therefore delivered his remarks by videotape, in the assumed role of the newest "Secretary of Human Experimentation" at the Department of Health, Education, and Welfare, addressing a public television audience. There is no difficulty in sensing the serious truths underlying the satire in his address.
Louis Lasagna, M.D.
I am delighted for this opportunity to share with you some thoughts on drug evaluation in man. The two-dozen-odd "Secretaries of Human Experimentation" in the Department of Health, Education, and Welfare (HEW) who preceded me during the past four years were not in office long enough to develop a philosophy. I have been more fortunate, having been on the job for a full six months. It has been said that my appearance today is simply a political response to criticism of my Department. Nothing could be farther from the truth. It would be easy for me to dismiss many of the criticisms of drug trials as ridiculous: to say that the Nader group are a bunch of raving consumerist lunatics; the drug industry, rapacious and profit-mad; the physicians, arrogant solo entrepreneurs; the journalists, uninformed and cynical makers of lurid headlines; the academicians, cloistered "holier-than-thou" boobies. While many would say just that, it would be the cowardly way out. No, fellow citizens. I prefer to face up to these criticisms with dignity and with honor.

Reprint requests to: Louis Lasagna, M.D., Professor and Chairman, Department of Pharmacology and Toxicology, University of Rochester School of Medicine and Dentistry, 260 Crittenden Blvd., Rochester, N. Y. 14642.

Let me begin with the criticism that my associates and I place excessive reliance on animal toxicities prior to initiating clinical trials. Au contraire, I have come to realize that the amount of information we can derive from animal investigation is, sad to say, limited. But you, the public, frightened by the thalidomide tragedy and similar stories, have forced us into the present position. The public is more frightened by the possibility of toxic effects than by the delays in developing the new drugs we so badly need in almost every area of modern therapeutics. In going over the voluminous files on drugs
submitted to us over the years, I have been impressed by the following facts: (1) No species is really like man. (2) Most of the drug toxicity that can be predicted readily in animals is not species-specific and is dose-related. (3) Most of the useful information is available within the first three to six months of animal testing, and longer tests rarely pick up new data. (4) The claim that the Tasmanian devil, the white rhinoceros, the woolly mammoth, and the red-breasted sapsucker are the species of choice is simply not true.

Accordingly, as of April 1, I am recommending that we reverse the current trend toward longer and longer toxicity tests in animals and evaluate the drugs in man sooner. This should speed up the supply of new drugs and cut the cost of drug development. "But," you may ask, "will this not endanger the first human subject?" Needless to say, if that were the case, I would not propose the change. My review of Phase I trials shows that these are unbelievably safe. The combination of starting with teeny-tiny doses and going up very cautiously, of using healthy subjects, of employing expert investigators and monitoring all sorts of bodily functions with laboratory and other tests, and of stopping at the first sign of significant toxicity is hard to beat. Drug mischief under these circumstances is negligible. (Would that I could say the same about marketed drugs in clinical practice.)

This does not mean, however, that I am completely happy with Phase I trials. Dr. Blackwell, in his review published in the JOURNAL,* has shocked me, quite frankly, with his data on the way in which investigators pick the starting doses in Phase I trials and the way in which they proceed to go up in their dosage. Some, for example, report that they use a fraction of the lowest dose that leads to any effects in any species. Others use the LD50 in rodents or the LD50 in dogs and monkeys.
Still others say they use trial-and-error tactics: a euphemism, I suggest, for "I don't know how I do it." In progressing upward from the starting dose, some use an arithmetic progression,

*Blackwell, B.: For the first time in man. CLIN. PHARMACOL. THER. 13:812-823, 1972.
others use a geometric progression, and others a combination of the two. One investigator has suggested using a Fibonacci series. I must confess that when I originally heard this, I thought a Fibonacci series was an Italian soccer league playoff, but I find that it is not; it is a mathematical concept which, I am happy to say, has not been used by anyone, including the person who originally suggested it. I am just a simple-minded non-scientist, but this is confusing to me. Is there not any approach that could be agreed upon? Does the slope of a dose-response curve in animals tell us nothing about how we can progress in man? Or is it possible, perhaps, that any way of starting with homeopathic doses and proceeding cautiously will work satisfactorily?

Now, I do realize that we are caught on the horns of a dilemma. If we go too rapidly in our dosage progression, we will save time but risk more toxicity. If we go too slowly, on the other hand, we will be safer; but it will take too much time and money to define a tolerable dose limit. I wonder, however, whether after the initial few doses show that the human species is not extraordinarily susceptible to the drug, one might not then proceed reasonably rapidly with the dosage progression until we begin to approach the dosage range, on a per kilogram basis, for example, where the most sensitive species begins to show some toxicity. I think it is a challenge to all investigators interested in Phase I trials to address themselves to these matters.

A second criticism is that we are making too much use of healthy volunteers. It has been suggested that, ethically, one should have only sick patients even for Phase I studies, on the grounds that only those people who can possibly benefit from the drug in question, if it turns out to be useful, should be at risk in these early trials.
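The dosage progressions mentioned earlier can be made concrete. The sketch below is purely illustrative (the function names and dose units are mine, not anything from the address); the "modified Fibonacci" variant shown is the scheme historically used in early-phase oncology trials, in which the escalation step shrinks as the dose rises.

```python
# Illustrative dose-escalation schemes: arithmetic, geometric, and
# "modified Fibonacci". Dose units are arbitrary; names are hypothetical.

def arithmetic_escalation(start, step, n):
    """Each dose adds a fixed increment to the previous one."""
    return [start + i * step for i in range(n)]

def geometric_escalation(start, factor, n):
    """Each dose multiplies the previous one by a fixed factor."""
    return [start * factor ** i for i in range(n)]

def modified_fibonacci_escalation(start, n):
    """Traditional modified-Fibonacci multipliers: 2.0x, 1.67x, 1.5x,
    then roughly 1.33x thereafter, so increments shrink as doses rise."""
    multipliers = [2.0, 1.67, 1.5] + [1.33] * max(0, n - 4)
    doses = [start]
    for m in multipliers[: n - 1]:
        doses.append(doses[-1] * m)
    return doses

print(arithmetic_escalation(10, 10, 5))   # [10, 20, 30, 40, 50]
print(geometric_escalation(10, 2, 5))     # [10, 20, 40, 80, 160]
print([round(d) for d in modified_fibonacci_escalation(10, 5)])
```

The dilemma the Secretary describes is visible in the numbers: the geometric scheme reaches high doses quickly (more risk, less time), while the shrinking Fibonacci increments slow down precisely where toxicity becomes more likely.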
I really do not agree, because it seems to me that to give homeopathic doses of a new analgesic, for example, to patients in pain will not really serve the patients very well, and I believe that this sort of study can be done well and more ethically in the healthy. A quite different situation, to be sure, exists in the area of cancer chemotherapy where these
new drugs are extremely toxic and where, in fact, it is common practice to use patients even in the earliest trials. But I must confess to being chagrined and perplexed at the question of which healthy subjects should be used. I am told that we should not use prisoners because they are an especially captive population; that students are out of bounds; that women and children cannot be used; that Blacks and Chicanos and the poor of all races, colors, and creeds should not be used. Who, I may ask, is left? I have also been told, in regard to animal species, that we should avoid using cats and dogs and bunnies. I have even been approached by members of the "God is a White Mouse" Club suggesting that rodents should not be employed in our tests.

Faced with all these objections, I have decreed, again as of April 1, that wherever possible we should use patients in our trials, even the earliest ones; and if we cannot, we should then employ male, white, Anglo-Saxon, Protestant, country club members who have paid their dues and are not as yet in prison. As far as animals are concerned, I have decreed that we shall limit our investigations to snakes and fleas. Fortunately, my colleagues at the National Institutes of Health (NIH) have come up with a new species of flea-bearing snake that should aid us tremendously in these new pursuits.

Now, controlled trials. Controlled trials, despite the fact that they are thought by many to have rescued us from the ancient practices of leeching and puking and purging, have recently come under attack. It is said, and I am afraid quite rightly, that controlled trials can be poorly planned and poorly done, and that the fact that a trial is randomized and controlled does not guarantee its validity or merit. The University Group Diabetes Program (UGDP) study of the National Institute of Arthritis and Metabolic Diseases, for example, did indeed use some pretty funny patients and a rather peculiar dosage regimen. But it did show toxicity, did it not?
And doctors do treat the wrong kinds of patients with drugs, do they not? And they are often inflexible in their dosage, are they not? And some continue the drug even if it does not work well, do they not? Some critics
have said that the UGDP study showed only that if you give oral hypoglycemic agents to the wrong people in the wrong way, these agents will be toxic. To that I say, "picky, picky, picky."

Nonetheless, I am struck by the infrequency of attention to stratification in the randomization of patients in clinical trials. For instance, in analgesic trials, you will often find at the end of a simple randomization procedure that the patients in one group are significantly different in their baseline levels of pain from the patients in another group. At that point, one has a hard task adjusting for this, although some statistical techniques such as analysis of covariance are said to be useful in this regard. But would it not be better, since we know that the initial level of pain is related to the performance of drugs and of placebo, to stratify during our trial for this important variable?

These are not the only criticisms that have been leveled at controlled trials. It has been said that these trials are unnatural, that they are done in ways completely unlike the manner in which the drug will ultimately be applied once it is marketed. That is true. The trials that are the basis for marketing are done by experts, usually in homogeneous inpatient populations, carefully monitored, with no other drugs involved, and with subjects who are volunteers; and we have a large literature suggesting that volunteers often can differ in very important ways from non-volunteers. Then the drug is marketed, and it is used by non-expert physicians in heterogeneous populations, often in outpatients, and with many other drugs in the act. There is every reason in the world why the performance should be different and why it is therefore difficult to predict the ultimate performance on the basis of these somewhat artificial controlled trials. I accept these criticisms.
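Returning to the stratification point above: the remedy the Secretary asks for, balancing the random assignment within each baseline pain level rather than across the whole sample, can be sketched in a few lines. This is an illustrative toy (patient identifiers, arm labels, and the permuted-block scheme are my choices, not anything from the address), but it shows why the two arms cannot drift apart on a known prognostic variable when randomization is done within strata.

```python
# Toy sketch of stratified randomization with permuted blocks: patients
# are grouped by baseline pain level, and drug/placebo assignment is
# balanced within each stratum, block by block.
import random

def stratified_assignment(patients, block_size=4, seed=0):
    """patients: list of (patient_id, pain_level) tuples.
    Returns a dict {patient_id: 'drug' or 'placebo'}."""
    rng = random.Random(seed)
    strata = {}
    for pid, pain in patients:
        strata.setdefault(pain, []).append(pid)
    assignment = {}
    for pain, pids in strata.items():
        for i in range(0, len(pids), block_size):
            block = pids[i : i + block_size]
            arms = (["drug", "placebo"] * block_size)[: len(block)]
            rng.shuffle(arms)  # random order, but balanced within the block
            for pid, arm in zip(block, arms):
                assignment[pid] = arm
    return assignment

patients = [(f"pt{i}", "severe" if i % 2 else "moderate") for i in range(16)]
assignment = stratified_assignment(patients)
severe = [assignment[p] for p, lvl in patients if lvl == "severe"]
print(severe.count("drug"), severe.count("placebo"))  # 4 4
```

With simple (unstratified) randomization, a run of bad luck can load one arm with the high-pain patients; here each stratum contributes equally to both arms by construction, which is exactly the baseline imbalance the analgesic-trial example complains about.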
I have decided that if several controlled trials are done and are in agreement that there is efficacy, that is enough to establish the possibility of useful therapeutic performance from the drug. To get more of the same would not really help us very much, because common side effects will be picked up with ease in relatively small-sized
studies, and a really rare side effect will not be detected unless we study huge numbers of patients. I must confess that I have been confused, on asking my colleagues in the Food and Drug Administration (FDA) or people in industry how many patients are required before a drug is approved, to be told, "Oh, about a thousand" or "About two thousand" or "About three thousand." When I ask why one number or the other, I am told: "That's a good round number."

I suffer from insomnia and, of late, I have been reading a statistics book at night in bed. I am not an expert in statistics, but on dipping into that volume, I find that one can, in fact, predict the number of subjects that will be required if one wants to pick up with some assurance a side effect of a given frequency. I have begun to ask: "Why not use these techniques in deciding what numbers of subjects we need prior to marketing?" It has been suggested that perhaps we should go into Phase IV testing before a drug is marketed, or perhaps engage in these studies in a more formal way after the drug is marketed, as was done with levodopa, in an attempt to make this whole process more rational. I must say I am much attracted to that possibility. It seems to me that we should not pile controlled trial on controlled trial, but rather should move as rapidly as we can to studying the drug as it will ultimately be used in practice.

This brings me to the question of the validation of clinical experience. When I arrived at HEW, my colleagues here and some consumer friends of mine told me that doctors were stupid and were fond of bleeding and purging even in 1975. I was further confused when they told me that these physicians, who are not smart enough to tell placebos from active drugs, could, however, pick up even small-to-moderate differences between generic versions of a brand preparation. I could not see how physicians who are too stupid to do one could be quite adept at the other.
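The calculation from the Secretary's bedside statistics book is simple binomial arithmetic: if a side effect occurs in a fraction p of patients, the chance of seeing at least one case among n independent patients is 1 - (1 - p)^n, and solving for n gives the sample size needed for a chosen level of assurance. The function below is my own sketch of that arithmetic, not anything specified in the address.

```python
# Sample size needed to observe, with a given assurance, at least one
# case of an adverse effect that occurs in a fraction p of patients.
import math

def patients_needed(p, assurance=0.95):
    """Smallest n such that 1 - (1 - p)**n >= assurance."""
    return math.ceil(math.log(1 - assurance) / math.log(1 - p))

# A side effect hitting 1 patient in 1,000 needs roughly 3,000 subjects
# for a 95% chance of being seen even once.
for p in (0.01, 0.001, 0.0001):
    print(f"frequency 1 in {round(1/p):>5}: need {patients_needed(p):>5} patients")
```

This makes the "good round number" quip pointed: about three thousand patients is just enough to have a reasonable chance of seeing a 1-in-1,000 effect once, and hopeless for anything rarer, which is exactly why really rare side effects surface only after marketing.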
Unlike Gresham's law, good remedies seem, in general, to push out the bad. That, I concluded, is why leeching and puking and purging and bleeding have faded from the therapeutic scene. We also know that most of our knowledge about disease, drug treatment, and drug toxicity has, in fact, come not from controlled trials but from naturalistic observations by smart physicians using their past knowledge and experience as control. So, as a result, I have decided that while most doctors are not Witherings or Oslers, neither are they morons or robots.

How, then, can we use their experience? I suggest that we had best gird our loins and work on the methodology for validating naturalistic clinical experience. We have done nobly in regard to the principles of controlled trials, and these are generally agreed upon. We are beginning to make headway in regard to the principles of drug toxicity evaluation. I believe we can do the same for naturalistic trials. At the very least, should we not, when a drug is marketed, be studying its usage? Should we not be seeing whether the indications are followed, whether the dosage is appropriate, whether the duration of treatment is correct, and whether the therapeutic and toxic performance are as anticipated or whether they differ significantly from what would have been predicted by the experts in the controlled trials? I realize that in trying to assess both benefit and harm from these drugs in a naturalistic sense, we will in some way have to subtract from the performance the "background noise," as it were, that results from spontaneous improvement or deterioration in the patient and from the effects of suggestibility. Nevertheless, I think we can do better than we are now doing with these data, which are largely useless at the moment. What we need, in my view, is to use both clinical trials and naturalistic experience and to use each for what it can contribute. It is not either/or, but how to use both.
In fact, I do not see how we can get away from repeatedly assessing the totality of evidence on a drug, looking at all the information we have, and then trying to assess what weight to put on each bit of evidence. For example, in regard to oral contraceptives, what finally convinced me that they were capable of producing thromboembolic disease in some women was not just that we had case reports of cardiovascular catastrophe
in young women. There were some peculiar autopsy findings in some of these cases. There were some challenge and rechallenge experiments in other women. Cardiovascular toxicity was seen in other trials where estrogen was used. Clotting factor abnormalities could be easily detected in almost any woman on oral contraceptives; decreased venous flow was shown in subjects given hormonal agents; and, finally, there were the data from case control studies.

Case control studies are not always useful. I believe they were useful in the case of the oral contraceptives, even though purists have said: "How do you know that women on oral contraceptives who complain of symptoms that might be related to thromboembolic disease would not be more likely to be hospitalized and, therefore, to give (in a hospital survey) a biased view of the relationship between these medications and clotting abnormalities?" I believe that the objections against the case control studies on oral contraceptives can be minimized; but I must say that I find a similar attempt to link aspirin and gastrointestinal ulceration and bleeding in one study, and reserpine and breast cancer in another, to be on much shakier ground. In both of these studies, an attempt was made to implicate the drugs in
question by comparing the past taking of either aspirin or reserpine in patients with ulcer and gastrointestinal bleeding on the one hand, and breast cancer on the other hand, with the incidence of taking such drugs in the control group. All well and good, except for the fact that significant numbers of people who were taking the drugs in question were thrown out, rather arbitrarily, of both of these control groups. I have concluded that for certain kinds of studies such as these, it is simply impossible to do a rational, defensible case control study. If epidemiologists believe that this approach which I have just been criticizing is correct, then I suspect that they are all in need of psychiatric help.

I hope you will continue to keep those letters and postcards coming in. I like to look on our Department not so much as resembling a ship of state, a stately cruiser with a competent captain at the helm, which is very comfortable to ride on but which, if the captain makes a mistake, is likely to run onto some shoals and to sink with all hands on board. I like rather to think of our Department as a raft, a raft without a captain, without a leader. On a raft of this sort, it is very hard to sink and lose all hands, although one is likely to have one's feet wet a good deal of the time.