J Chem Ecol DOI 10.1007/s10886-014-0463-8

COMMENTARY: REFLECTIONS ON 40 YEARS

The Devil is in the Details

Jocelyn Millar

University of California, Riverside, CA, USA
e-mail: [email protected]

© Springer Science+Business Media New York 2014

Much of chemical ecology is about details, and about getting them right. As diligent and careful scientists, we should all know this, but too often scientific rigor takes second place to expedience. For us as chemical ecologists, the situation is further complicated because our projects often span multiple disciplines, and it is impossible for any one person to be an expert in all of them. Even with this limitation, I suggest that it may be useful to pause and ask ourselves: are we doing the best science we can, as opposed to doing science that is only good enough to get published somewhere among the plethora of publishing options available today?

I am by no means the first to voice these sorts of awkward questions; over the past several years there has been a lot of publicity and wringing of hands over the problem of irreproducibility of published results, particularly in high-profile journals (for a compilation of articles and reviews, see http://www.nature.com/nature/focus/reproducibility/). This has spilled over into the popular press as well (for example, http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble), with a number of reasons cited as contributing to the overall problem of science done poorly, superficially, or even incorrectly, and rushed into print. To quote from the article in The Economist: “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.” It is a sorry situation indeed when publication becomes an end unto itself, regardless of whether what is being published is actually true or correct.

From my perspective as a frequent reviewer, I provide some examples from our own discipline where I repeatedly see what I consider to be lapses in the rigor with which we do science. Foremost is a lack of rigor in the identification of compounds. By far the most frequent cause is claiming an identification on the basis of a match, usually of unstated quality, between the mass spectrum of an unknown and a spectrum in a mass spectral database. For a number of the simpler compounds, the match may well be correct, but “may well be correct” does not constitute rigorous science.

Of late, the problem has been compounded because benchtop GC/mass spectrometers have become so easy to use, and because the power of the database-matching algorithm that suggests a possible match with one keystroke is so seductive. However, this ignores several important facts. First, the database represents a small fraction of the compounds known to science, and thus it can provide only the best match to the limited number of spectra it contains, regardless of whether that match is correct. Second, the mass spectra of isomers and other closely related compounds can be very similar, or even virtually identical, but this fact is frequently ignored. Too often, researchers take a database match at face value, even though it is usually not much more work to confirm possible matches with authentic standards, or by some other unambiguous identification method. It also is valuable to demonstrate unequivocally that other isomers do not match, as further proof of one’s identification, but this is done nowhere near as frequently as it should be. In short, expedience triumphs over rigor. Third, even when authentic standards are not commercially available, they usually are not hard to come by. In my own experience over more than three decades, I cannot think of a single instance in which a colleague has turned down my request for a sample of an authentic standard to confirm an identification. Yes, it is more work, and it takes more time, but at least one can be sure that the identification is unequivocally and indisputably correct before publishing.

For just such reasons, the Journal of Chemical Ecology established guidelines for authors that explicitly lay out the requirements for claiming to have identified a compound. Despite the fact that these guidelines are readily accessible on the Journal’s website, it is clear from many of the manuscripts that I receive to review that they are often ignored.
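
To make the point about isomers concrete, the short sketch below (Python, with entirely hypothetical spectra and intensities) shows how a standard dot-product match score can rank two isomeric library spectra almost identically, so that even a "best match" in the high 990s does not discriminate between them; retention data and authentic standards are still required.

    # Hypothetical example: two isomers give nearly identical EI spectra,
    # so a high library-match score alone cannot prove an identification.
    import math

    def match_score(spec_a, spec_b):
        """Cosine (dot-product) similarity of two {m/z: intensity} spectra,
        scaled to the usual 0 to 1000 range."""
        mzs = set(spec_a) | set(spec_b)
        dot = sum(spec_a.get(mz, 0.0) * spec_b.get(mz, 0.0) for mz in mzs)
        norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
        norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
        return 1000.0 * dot / (norm_a * norm_b)

    # An unknown plus two isomeric library entries (hypothetical intensities):
    unknown  = {41: 999, 55: 620, 69: 310, 84: 450, 98: 120}
    isomer_1 = {41: 999, 55: 600, 69: 300, 84: 460, 98: 115}
    isomer_2 = {41: 999, 55: 640, 69: 330, 84: 440, 98: 130}

    for name, library_entry in (("isomer 1", isomer_1), ("isomer 2", isomer_2)):
        print(name, round(match_score(unknown, library_entry)))
    # Both scores exceed 990: the database cannot tell the isomers apart.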

A second area in which rigor is frequently lacking is the determination of the absolute configurations of bioactive molecules, particularly plant volatiles. Most if not all binding proteins and biological receptors for semiochemicals are inherently chiral because they are constructed from chiral amino acids. Thus, in terms of biological activity, it is not good enough to simply state that a plant extract contains, for example, α-pinene, because to a protein or receptor, the (+)- and (−)-enantiomers of α-pinene are as different as chalk and cheese. Scientists such as Professor Kenji Mori have repeatedly stressed the importance of chirality in relation to the bioactivity of semiochemicals, and with modern chiral stationary phase gas or liquid chromatography columns, it is often trivial to determine which enantiomer is present in an extract. Unfortunately, in many cases, these analyses are not done. The problem is compounded when, having identified a chiral compound by mass spectrometry or other methods without determining which enantiomer it is, researchers then use a commercially available sample to reconstruct blends for testing of bioactivity, even though the commercial material may not be the correct enantiomer. Expedience again has triumphed over rigor.
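
As a worked example of how little extra effort the chiral analysis adds: once the enantiomers are resolved on a chiral stationary phase, the enantiomeric composition follows directly from the two peak areas. The numbers below are hypothetical.

    # Enantiomeric composition from chiral-GC peak areas (hypothetical data)
    area_plus  = 9350.0   # integrated area of the (+)-enantiomer peak
    area_minus = 412.0    # integrated area of the (-)-enantiomer peak

    total = area_plus + area_minus
    ee = 100.0 * (area_plus - area_minus) / total   # enantiomeric excess, %
    print(f"(+) {100 * area_plus / total:.1f}%, "
          f"(-) {100 * area_minus / total:.1f}%, ee = {ee:.1f}%")
    # prints: (+) 95.8%, (-) 4.2%, ee = 91.6%
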
A third area in which standards seem to have slipped, despite substantial increases in the power and sensitivity of analytical methods and instrumentation, is in the careful delineation of what constitutes a pheromone blend, particularly for lepidopteran pheromones. In the late 1980s and early 1990s, a number of research groups worked out methods to collect the pheromone actually being released by calling female moths, as opposed to the blend found in extracts of the pheromone glands. Comparisons clearly showed that for most species, the pheromone blends determined by the two methods were qualitatively and quantitatively not the same, with the gland extracts containing distorted ratios, compounds that were not released, and even inhibitory compounds. Despite such clear evidence that gland extracts may not be representative of the actual pheromone, it is becoming common to see accounts of new pheromone blends that are based solely on reconstructions of ratios found in gland extracts, regardless of whether the compounds and ratios found in the gland are an accurate representation of what is released. In many cases, even basic field tests to optimize blends by assaying a range of ratios are not carried out, nor do many researchers attempt to correct for the different vapor pressures of the pheromone components when loading dispensers. The problem is compounded when pheromone gland ratios are used as a basis for assessing whether different host or geographic races have different pheromone blends. It is more difficult and time-consuming to measure the pheromone actually released than to simply extract pheromone glands, but if the whole foundation of a project depends on accurate determination of possible true differences, these data are critical. Alas, expedience triumphs over rigor.

I also suggest a couple of areas on the biological and ecological side of our discipline where more rigor would lead to better science. For example, when a battery of compounds that differ substantially in volatility is tested as odorants in physiological or behavioral bioassays, a significant percentage of studies are done without correcting for the different volatilities of the test compounds to ensure that the antenna is stimulated by the same number of molecules of each compound. It is not meaningful to state that one compound elicited a larger response than another when the actual quantities delivered to the antennae or the animal may have differed by orders of magnitude. Making such adjustments by collecting volatiles from the stimulus dispenser is not difficult; it just takes a bit of time, and possibly a couple of iterations, to balance the release rates of the various compounds in a blend.
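
A minimal sketch of such a first-pass correction, assuming that release from a passive dispenser scales roughly with vapor pressure (a simplification; the loadings must still be verified by collecting volatiles from the dispenser and iterating), might look like the following. All compound names and values are hypothetical.

    # First-pass dispenser loadings chosen so that each component of a
    # blend is released in roughly equal molar amounts. Assumes release
    # rate is proportional to vapor pressure, a simplification to be
    # checked empirically. All values are hypothetical.
    blend = {
        "component A": 1.20,   # vapor pressure, Pa at 25 degrees C
        "component B": 0.15,
        "component C": 0.02,
    }

    reference = max(blend.values())
    for name, vp in blend.items():
        relative_load = reference / vp   # load inversely to volatility
        print(f"{name}: load {relative_load:.0f}x the most volatile component")
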
For different but parallel reasons, the chemical and isomeric purity of compounds used in laboratory and field bioassays should be checked, recorded, and reported in publications, because as chemical ecologists know, even trace amounts of impurities can have major effects on behavioral responses. The problem is exacerbated by the fact that many semiochemicals are not terribly stable, and they can degrade and isomerize upon exposure to light, air, or traces of acids, bases, or salts.

I also suggest that the value of bioassays that test “something versus nothing” is questionable. For example, testing a volatile stimulus against clean air in a laboratory bioassay can inform the initial steps in the identification of potentially active compounds, but unless such studies progress beyond laboratory tests, at least in terms of published results, these assays may produce results that are statistically significant but biologically irrelevant. Is it really valid and biologically relevant to call a compound a semiochemical when its activity can only be demonstrated under highly simplified conditions in laboratory bioassays?

It is also a mystery to me why studies that have otherwise been carefully and thoroughly done may be spoiled by authors playing fast and loose with the interpretation of their statistical analyses. For example, I often see some version of the phrase, “The data for A were higher than for B, even though they were not statistically significantly different.” The best response when reviewing such manuscripts is that there is no point in doing statistical analyses of data if you are either not going to believe the results or, worse, are going to be selective in choosing which results to believe and which to ignore.
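
The point can be illustrated with a few lines of simulated data: two samples may differ numerically in their means, yet if the test statistic falls short of the critical value, the only defensible conclusion is that no difference was detected. Welch’s t is computed by hand here simply to keep the sketch self-contained.

    # Simulated example: "A looks higher than B" is not a finding unless
    # the statistics support it. Data are random draws with a fixed seed.
    import random
    random.seed(1)

    a = [random.gauss(10.0, 3.0) for _ in range(10)]
    b = [random.gauss(9.0, 3.0) for _ in range(10)]

    def welch_t(x, y):
        """Welch's t statistic for two independent samples."""
        nx, ny = len(x), len(y)
        mx, my = sum(x) / nx, sum(y) / ny
        vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
        vy = sum((v - my) ** 2 for v in y) / (ny - 1)
        return (mx - my) / ((vx / nx + vy / ny) ** 0.5)

    print(f"t = {welch_t(a, b):.2f}")
    # With roughly 18 degrees of freedom, the two-sided 5% critical value
    # is about 2.1; if |t| is smaller, the honest report is "no detectable
    # difference", not "A was higher than B".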

I am not pointing an accusatory finger at any individuals or groups with these examples. The point I want to make is that we must all work harder at ensuring that our science is accurate, reliable, and reproducible. Certainly, we are all under pressure to produce more and publish more, but that should not come at the expense of lowering standards and cutting corners. I for one would like to be remembered for the solid foundation that my work provided for future research, rather than as a scientist whose work had to be repeated because it could not be trusted. I hope that you, my friends and colleagues, are of like mind.

The devil is in the details.
