International Journal of Radiation Oncology • Biology • Physics

www.redjournal.org

EDITORIAL

Too Much Impact? Scientific Journals and the “Impact Factor”

Anthony L. Zietman, MD, FASTRO, Editor-in-Chief
Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts

Statistics play a large role in contemporary sports, providing objective metrics by which teams and players are judged. Cricket and baseball, the 2 most carefully measured games on the planet, employ a range of statistics to describe performance, and no one would assert that any 1 measure holds primacy over any other. Each statistic must be assessed in context and with a grain of salt. For example, does a pitcher’s earned run average over the previous 5 games, or over a single season, properly reflect an entire career? Can it even be interpreted without a win-loss record, a strikeout-to-walk ratio, or the opponents’ batting averages to round out the perspective? Of course not.

Scientific publishing also employs a range of statistics, known as “bibliometrics.” These can be used to assess the relevance and value of a paper to the field, or to assess the short- or long-term influence of a journal. Science, however, differs from sports in that 1 particular metric, called the “impact factor” (IF), has acquired a semisanctified status. An excess of reverence for this single metric has had unanticipated and distorting consequences.

The IF was first released in 1975 as a measure of a journal’s citation influence (1). It is calculated by taking all citations in 1 year to a journal’s content published in the previous 2 years and dividing by the number of substantive, scholarly items published in that journal in those same 2 years. Thus, for any journal, the IF for 2013, released in July 2014, is the ratio of the number of 2013 citations to articles published in 2011 and 2012 to the number of substantive articles published in 2011 and 2012. At first glance, the IF would seem to offer a straightforward measure of the importance of papers published in a journal over the recent past, based upon the extent of citations.
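Written out as a ratio, that definition is simply:

$$\mathrm{IF}_{2013} = \frac{\text{citations received in 2013 by items published in 2011 and 2012}}{\text{number of substantive items published in 2011 and 2012}}$$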

The potential problems are, however, obvious on reflection (2). A single heavily cited paper can skew the IF upward even if everything else published is mediocre. The IF can easily be inflated by frequent “self-citation” of the journal’s own articles. The publication of review articles, which are frequently cited and which themselves cite extensively, can increase the IF without any commensurate increase in the quality of the original scientific articles.

In an ideal world, these effects would even out and be of little consequence, causing only occasional small, innocent, and temporary perturbations in the IF. This, however, is not the world in which we live. Over the last 2 decades, to quote McVeigh and Mann, “the impact factor has gone from being a measure of a journal’s citation influence in the broader literature to a surrogate that assesses the scholarly value of work published in that journal. These misappropriated metrics have been used to assess individual researchers, institutions, and departments” (3). The IF is now so important to many authors that editors and publishers choose to splash it across the cover or feature it prominently on the web page, where it is listed to the second decimal place. Promotion and academic advancement in some cases now depend upon publication of work in “high-impact” journals.

In such a competitive environment, editors feel pressure to raise the IF in order to attract better manuscripts, which in turn drives the IF higher still. This rising cycle could begin simply with the journal becoming more selective and improving the quality of the papers it publishes. Sadly, however, it often begins less virtuously, with an aggressive editorial initiative aimed at the low-hanging fruit of self-citation. Many of us can think of journals at which, before our papers are finally accepted, we have been “encouraged” to add several additional references, all of them, curiously enough, from that very journal and all fitting within the 2-year “impact factor window.”

Reprint requests to: Anthony L. Zietman, MD, FASTRO, Department of Radiation Oncology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114. Tel: (617) 726-2000; E-mail: [email protected]

Int J Radiation Oncol Biol Phys, Vol. 90, No. 2, pp. 246-248, 2014. 0360-3016/$ - see front matter. © 2014 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.ijrobp.2014.07.018

Some level of self-citation will always occur naturally, but Thomson Reuters, publisher of the IF, concludes that most high-quality science journals have a self-citation rate of 20% or less (4). Exceptions may occur when, for example, a narrow field contains very few journals, but higher self-citation rates are generally a warning flag. (The Red Journal, by the way, self-cites less than 10% of the time [5]. In fact, in 2013 the Red Journal received 45,583 citations, of which only 3099 were self-citations, according to Scopus [6].) The Committee on Publication Ethics strongly cautions editors away from these practices, and Thomson Reuters tracks the numbers closely. Indeed, it may actually sanction the most egregious editors by suspending their journal’s IF (7).
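From the Scopus figures just quoted, the Red Journal’s 2013 self-citation rate works out to

$$\frac{3099}{45\,583} \approx 0.068 \approx 6.8\%,$$

comfortably below the 20% level that Thomson Reuters regards as typical of high-quality journals.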

Less common than self-citation, but far more nakedly manipulative, has been the development of “citation cartels,” in which journal editors quietly agree to cite one another’s articles more frequently (8). Although this is less blatantly obvious than self-citation, it can be detected in the era of big-data computation, and several illicit cartels have recently been broken up.

Another, and some would say more reliable, publishing metric is the Eigenfactor. This is essentially an impact factor that excludes self-citation and weights incoming citations toward those from higher-ranked journals. The Red Journal performs highly among radiology journals by this criterion, but somehow the Eigenfactor has not captured the scientific world’s imagination in quite the same way; such is the power of name branding (9).
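The published Eigenfactor algorithm has more detail than this editorial needs (a 5-year citation window, teleportation weighted by article counts), but its essence is an eigenvector-centrality calculation on the journal citation network with self-citations removed: a journal scores highly when it is cited by other high-scoring journals. A minimal sketch of that idea in Python, using numpy; the 3-journal citation matrix and every number in it are invented purely for illustration:

import numpy as np

# Hypothetical citation matrix: C[i, j] = citations from journal j to journal i.
C = np.array([
    [ 0.0, 30.0, 10.0],
    [20.0,  0.0, 40.0],
    [ 5.0, 15.0,  0.0],
])

np.fill_diagonal(C, 0.0)      # discard self-citations, as the Eigenfactor does
H = C / C.sum(axis=0)         # column-normalize: each journal "votes" with total weight 1

# Damped power iteration (a PageRank-style recurrence): influence flows along citations,
# so weight accumulates at journals cited by other influential journals.
alpha = 0.85
v = np.full(3, 1.0 / 3.0)
for _ in range(100):
    v = alpha * (H @ v) + (1.0 - alpha) / 3.0
v /= v.sum()

print(np.round(v, 3))         # stationary influence scores; larger = more citation influence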

Interestingly, in a recent opinion piece, the editors at JAMA argued for removing the word “impact” from editorials and reviews because of its somewhat violent and hyperbolic connotations (10). Those editors prefer “influence” or “affect,” and so do I.

So what of the Red Journal’s IF? Over the last 2 decades it has been climbing slowly, a strong testament to the leadership of the previous editor, James Cox (11). Like the stock market, the Red Journal’s IF has experienced year-to-year “noise,” but the upward trend over time has been clear. It is unlikely to go very much higher, simply because radiation oncology is a small specialty and, for the leading journals of specialties our size, the IF usually runs at a natural limit in the range of 4 to 6.

When I took over the journal in 2011, we had a large backlog of papers that had been accepted but not yet published, and these were “pushed” into print in 2012, greatly increasing our IF denominator. Those 2012 papers are now entering the IF window, and during their first year in that window they have had little chance to be cited; they therefore increase the denominator more than the numerator. Next year, however, we anticipate a “rebound” as the denominator stays the same size (indeed, it may shrink because of the more selective acceptance practices we put in place in 2013) and the numerator rises with subsequent citations.
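The mechanics can be seen with invented numbers (purely illustrative, not the Red Journal’s actual figures). Suppose a journal normally publishes 500 citable items per year and draws 4000 citations within the window, and a one-time backlog pushes a single year’s output to 700 items while adding few immediate citations. Then

$$\mathrm{IF} = \frac{4000}{500 + 500} = 4.0 \quad \text{dips to} \quad \frac{4100}{500 + 700} \approx 3.4,$$

before rebounding once the extra papers have had a full year or two to accrue citations against a denominator that no longer grows.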

At the Red Journal, we have learned neither to throw a party when the impact factor goes up nor to act as though there has been a death in the family when it goes down. On our journal’s website, we publish several alternative and more robust bibliometrics that may be used for assessment and, by all of these criteria, the Red Journal’s performance remains strong (http://journalinsights.elsevier.com/journals/0360-3016).

One of the most intriguing emerging metrics for gauging the relevance of an individual paper (and, when aggregated, of an entire journal) is the number of article downloads. This metric has great immediacy and tells the editor exactly what is on the minds of readers at that very moment. We at the Red Journal review these data annually and share them to ensure that the top articles of the previous year are not missed (12, 13). As can be imagined, it will likely not be long before authors and editors alike start to manipulate these numbers, and as that happens the value of the download metric will decrease.

An abiding criticism of traditional metrics is that they track only the articles cited by other academic researchers. An article that “the world” is talking about, thanks to mentions in the New York Times, on CNN, by health bloggers, or by German Chancellor Angela Merkel, may never be cited in academic research. The emerging measure known as “altmetrics” is now available to track the influence of an article among the mainstream public (14).

It has also been speculated that, in a future of open publishing and powerful search tools, the peer review process may progressively disappear, replaced by postpublication “crowd-sourcing” in which, for example, the number of “likes” or the number of times a paper is forwarded is measured. That day has not yet arrived, although a new generation of researchers raised on social media will likely drive it on.

For now, the IF still holds its disproportionate sway. We believe that our journal’s long-view policy of paying close attention to the quality of the manuscripts we publish, and less to the natural year-to-year variation in the IF, will ultimately push our IF higher still, toward the ceiling for a specialty of our size (15). For the Red Journal, the impact factor will then reflect what it is supposed to reflect: an unmanipulated measure of a journal’s proportionate citation, subject to fluctuation over the short term.

References

1. Garfield E. The history and meaning of the journal impact factor. JAMA 2006;295:90-93.
2. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997;314:498-502.
3. McVeigh ME, Mann SJ. Journal impact factor denominator: Defining citable items. JAMA 2009;302:1107-1109.
4. Web of Science. Journal self-citation in the Journal Citation Reports - Science edition 2002. Thomson Reuters. Available at: http://wokinfo.com/essays/journal-self-citation-jcr/. Accessed July 9, 2014.

5. Mary Heffner, personal communication. February 22, 2014.
6. Committee on Publication Ethics. Code of conduct and best practice guidelines for journal editors. COPE. Available at: http://publicationethics.org/resources/guidelines. Accessed June 1, 2014.
7. SCOPUS journal analyzers. Available at: http://www.scopus.com/source/eval.url?isCompareJournal=true&sourceIds=17191&styleIndexes=null. Accessed August 25, 2014.
8. Davis P. The emergence of a citation cartel. The Scholarly Kitchen. Available at: http://scholarlykitchen.sspnet.org/2012/04/10/emergence-of-a-citation-cartel. Accessed June 6, 2014.
9. Roberts WC. Piercing the impact factor and promoting the Eigenfactor. Am J Cardiol 2011;108:896-898.

10. Fontanarosa PB. Guidelines for writing effective editorials. JAMA 2014;311:2179-2180.
11. Cox JD. Passing the baton - in a marathon. Int J Radiat Oncol Biol Phys 2011;81:1206-1207.
12. Zietman AL. The Red Journal’s most downloaded articles of 2012. Int J Radiat Oncol Biol Phys 2013;86:218-221.
13. Zietman AL. The Red Journal’s top downloads of 2013. Int J Radiat Oncol Biol Phys 2014;89:937-939.
14. Thelwall M, Haustein S, Larivière V, et al. Do altmetrics work? Twitter and ten other social web services. PLoS One 2013;8:e64841.
15. Zietman AL, Bennett KE. The four Rs of the Red Journal: A progress report from the new editorial team. Int J Radiat Oncol Biol Phys 2013;87:7-9.
