Letter to the Editor
J Cardiovasc Med 2015, 16:525–526. DOI: 10.2459/JCM.0000000000000140

Evaluating the merit of research in clinical cardiology: from citation to declaration

Alberto Dolara, Agenzia Regionale Sanità, Florence, Italy

Correspondence to Dr Alberto Dolara, Via Stefano Turr 7, 50137 Firenze, Italy. Tel: +39 055 588806; e-mail: [email protected]

Received 5 March 2014; revised 13 April 2014; accepted 17 April 2014

To the Editor,

The assessment of scientific publications is an integral part of the scientific process, but in the modern era the number of papers published has grown continuously. The Journal Citation Reports indicates that 507 000 papers were published in the life sciences in 1990 and 702 000 in 2000, while the number of life science journals grew by 27%, from 4464 to 5684.1 Hirsch, a researcher who in 2005 created an index to evaluate the merit of scientific publications, states2 that 'for the few scientists who earn a Nobel prize, the impact and relevance of their research is unquestionable. Among the rest of us, how does one quantify the cumulative impact and relevance of an individual's scientific research output? In a world of limited resources, such quantification (even if potentially distasteful) is often needed for evaluation and comparison purposes (e.g., for university faculty recruitment and advancement, award of grants, etc.)'. Hirsch's 'potentially distasteful' words suggest that the evaluation of scientific research is still a 'work in progress'. The purpose of this article is to review the issue from the point of view of a clinical cardiologist.

Examining the widespread criticism of the impact factor, a method to assess the merit of a scientific paper, is beyond the scope of this article.3 Even Eugene Garfield and Thomson Reuters conclude that this measurement must be used carefully when evaluating an individual researcher, since it is debatable, controversial and subject to abuse (E. Garfield. The agony and the ecstasy. The history and meaning of the journal impact factor, presented at the International Congress on Peer Review and Biomedical Publications; 16 September 2005; Chicago).

It is more pertinent to examine the H index invented by Hirsch. The H index is based on an author's number of papers and the citations each paper has received, with all published papers ranked in order of decreasing citations. When the number of citations is plotted against the number of papers as a Cartesian plot (y-axis, number of citations; x-axis, papers ranked by decreasing citations), the result is a concave curve; the intersection of the 45° line with this curve gives the value of H.

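To make the definition concrete, the short Python sketch below computes H directly from a list of per-paper citation counts (the counts shown are hypothetical and chosen only for illustration): H is simply the largest number h such that h of the author's papers have each been cited at least h times, which corresponds to the intersection with the 45° line described above.

```python
# Minimal sketch of the H index calculation; the citation counts are hypothetical.
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # papers in decreasing order of citations
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this paper still lies on or above the 45-degree line
        else:
            break
    return h

if __name__ == "__main__":
    example_counts = [25, 18, 12, 7, 5, 4, 1, 0]  # hypothetical per-paper citations
    print(h_index(example_counts))  # prints 5: five papers have at least 5 citations each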
Hirsch, who works in the Department of Physics, University of California, states that the research he performed using the index focused on physicists, but he suggests that 'the H index should be useful for other scientific disciplines as well'.4 This conclusion was criticized in a recent article by Opthof and Wilde,5 who carefully analyzed the bibliometric parameters of 37 Dutch professors of clinical cardiology. They selected the six leading cardiovascular journals and defined a publication period of 2000–2009 in the Thomson Reuters Web of Science, to generate an H index for two topics: Brugada syndrome and Marfan syndrome. The result was an H index of 59 for Brugada syndrome and 29 for Marfan syndrome. They state that 'when such large differences exist within what on the surface seems to be a relatively homogeneous field as "clinical cardiology", little imagination is needed to see how such differences could work out for scientists who are active in fields such as "robotics" and "atherosclerosis" in one and the same university medical center . . .'.

The Internet now allows us to organize, share and easily access information relating to the life sciences, and it is possible to evaluate and comment on published research on the web. Some publishers are using altmetrics, a method introduced in 2010 and based on counts of citations in social web services. These counts appear more rapidly than citations in scientific journals: for example, one could expect an article to be tweeted on its publication day and blogged within a month of publication, whereas citations in science journals take longer. Hence, social media citations have become a marketing tool for promoting articles. There are websites offering altmetric services, such as altmetric.com, impactstory.org and sciencecard.org [J. Priem, D. Taraborelli, P. Groth, C. Neylon. Altmetrics: a manifesto, v.1.0; October 2010; website: http://altmetrics.org/manifesto (accessed 11 November 2013)].6

In December 2012, a group of editors and publishers of scholarly journals met at the Annual Meeting of the American Society for Cell Biology and developed a set of recommendations referred to as the San Francisco Declaration on Research Assessment (DORA). The declaration calls on funders, institutions, publishers and researchers to stop using journal-based metrics, such as the impact factor, as the criteria for hiring, tenure and promotion decisions, and instead to consider a broad range of impact measures that focus on the scientific content of the individual paper [San Francisco Declaration on Research Assessment (DORA); n.d.; website: http://am.ascb.org/dora/ (accessed 26 June 2013)]. Eighty-three original signatory organizations have adhered to DORA, including publishers (e.g. PLOS), societies such as the American Association for the Advancement of Science (which publishes Science) and funders such as the Wellcome Trust. Bornmann,7 in a literature review, concludes that the assessment of 'societal' impact comes in addition to the assessment of 'scientific' impact, further increasing the administrative effort involved in evaluations. Eisen et al.8 state that initiatives such as DORA and the emerging field of altmetrics will eventually shift the culture and identify multivariate metrics that are more appropriate to 21st century science.

Hirsch's initial question about how to quantify the cumulative impact and relevance of an individual's scientific research output probably has no definitive answer. Although it is reasonable to suppose that without the use of bibliometric indices the systems of academic ranking and promotion would be even worse, only adequate follow-up will establish whether papers published at a certain historical time contained something new or useful, or whether they merely fulfilled the obligation to 'publish or perish'. Social and clinical impact can therefore be very different from editorial impact, and perhaps the Choosing Wisely initiative of the American Board of Internal Medicine Foundation [Choosing Wisely: an initiative of the ABIM Foundation; 2013; website: http://www.choosingwisely.org (accessed 21 November 2013)] can illustrate this issue. Choosing Wisely partners include most of the American professional medical associations, including those representing cardiologists. The initiative encourages physicians, patients and other healthcare stakeholders to think and talk about medical tests and procedures that may be unnecessary and may even cause harm. No one can deny that 'wisdom' requires time and calm consideration. Physicians and cardiologists in clinical practice should avoid haste in evaluating the real merit of clinical research before implementing it in their patients.9,10

References
1. Opthof T, Coronel R. Productivity in science: more and more? Cardiovasc Res 2002; 56:175–177.
2. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A 2005; 102:16569–16572.
3. Lozano GA, Larivière V, Gingras Y. The weakening relationship between the impact factor and papers' citations in the digital age. J Am Soc Inf Sci Technol 2012; 63:2140–2145.
4. Hirsch JE. Does the H index have predictive power? Proc Natl Acad Sci U S A 2007; 104:19193–19198.
5. Opthof T, Wilde AAM. Bibliometric data in clinical cardiology revisited. The case of 37 Dutch professors. Neth Heart J 2011; 19:246–255.
6. Thelwall M, Haustein S, Larivière V, Sugimoto CR. Do altmetrics work? Twitter and ten other social web services. PLoS One 2013; 8:e64841; doi: 10.1371/journal.pone.0064841.
7. Bornmann L. What is societal impact of research and how can it be assessed? A literature survey. J Am Soc Inf Sci 2013; 64:217–233.
8. Eisen JA, MacCallum CJ, Neylon C. Expert failure: re-evaluating research assessment. PLoS Biol 2013; 11:e1001677; www.plosbiology.org [accessed 2 October 2013].
9. Dolara A. Invito ad una 'slow medicine' [An invitation to 'slow medicine']. Ital Heart J Suppl 2002; 3:100–101.
10. Dolara A. Avoiding haste in clinical cardiology. Acta Cardiol 2005; 60:569–573.
