
Archives of Environmental & Occupational Health (2015) 70, 1–3
Copyright © Taylor & Francis Group, LLC
ISSN: 1933-8244 print / 2154-4700 online
DOI: 10.1080/19338244.2015.982002

Emerging Topics in EOH Research


Assessing Productivity Among University Academics and Scientific Researchers

The ongoing drive for accountability in higher education has led authorities to increasingly assess research performance among academic and scientific institutions.1 At the same time, as the scientific enterprise becomes larger and more complex, it becomes more difficult, more expensive, and yet more necessary to identify the individuals and groups making the greatest contribution.2 Not surprisingly, therefore, the assessment and quantification of individual research activity, particularly of differences in productivity between researchers, has long been a hotly debated issue. This article discusses the quantitative assessment of research performance as it has evolved over the past few centuries in university and institutional academia, most often via the analysis of individual research publications and other information usage metrics. Although few studies of this nature have specifically focused on the field of Environmental and Occupational Health (EOH),3 virtually all are relevant to the performance of our discipline in contemporary society.

Some of the earliest research on the performance of individuals was undertaken by the British scientist Francis Galton (1822–1911) in the 19th century. Although he is probably best known for pioneering work in genetics and statistics, Galton was also interested in the measurement of science itself. In 1874, he published English Men of Science, a landmark sociological study of 180 “eminent men of science.”4 Across the Atlantic, Galton’s book inspired James McKeen Cattell (1860–1944), an American psychology professor and editor of the journal Science between 1894 and 1945,5 to publish his own book in 1906. In this work, titled American Men of Science, Cattell described the demographics and “scientific merit” of 1000 university and institutional scientists.4

More detailed mathematical investigations of individual scientific productivity were also being conducted during the early 20th century. In 1926, for example, Alfred Lotka (1880–1949) described what would become the famous “inverse square law” of scientific productivity,6 proposing that the number of scientists producing n contributions is approximately proportional to 1/n². This concept of progressively diminishing returns was also shown to exist within the scientific literature itself, as demonstrated by Bradford’s 1934 study of engineering journals, which found a few very productive sources, a larger number of sources of moderate usefulness, and an even larger proportion of rapidly diminishing value.7
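To make the shape of Lotka's distribution concrete, the short Python sketch below estimates how many authors would be expected to produce n papers each, scaled from the number of single-paper authors. The 10,000-author baseline is a hypothetical figure chosen purely for illustration, not a value from Lotka's paper or from this editorial.

```python
# A minimal sketch of Lotka's inverse square law: the number of authors
# producing exactly n papers falls off roughly as 1/n^2, scaled by the
# number of authors who produce exactly one paper.

def lotka_expected_authors(single_paper_authors: int, n: int) -> float:
    """Expected number of authors producing exactly n papers."""
    return single_paper_authors / (n ** 2)

if __name__ == "__main__":
    base = 10_000  # hypothetical count of single-paper authors
    for n in (1, 2, 5, 10):
        expected = lotka_expected_authors(base, n)
        print(f"authors with {n:>2} papers each: ~{expected:,.0f}")
    # Progressively diminishing returns: 10,000 -> 2,500 -> 400 -> 100.
```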

By the middle of the 20th century, productivity measurements for individual scientists were becoming more common. In 1942, for example, Logan Wilson (1907–1990) published a sociological study of higher education titled The Academic Man, in which he reported (among other things) that Chicago professors produced around 11 publications every 5 years.8 Wilson’s book is probably most famous, however, for coining the term “publish or perish.”9 As the 20th century progressed, additional evidence further demonstrated that the contribution of individual researchers was not uniform. One of the more famous examples appeared in 1963, when Derek de Solla Price (1922–1983) published Little Science, Big Science, in which he reported that around one quarter of authors were responsible for producing around three quarters of all scientific articles.10

By the 1990s, the concept of journal-based metrics was becoming increasingly well known, and not surprisingly, many studies of research productivity began to focus on the journals that scientists were publishing in, rather than on the individuals themselves.11 Perhaps the most well-known citation-based measure of journal performance is the journal impact factor (IF), first proposed by Eugene Garfield in 1955.12 As the IF rose in popularity and entered mainstream institutional consciousness last century, there was an understandable temptation to use this measure in the assessment of individual researchers, usually by examining the IF scores of the journals in which they published. At the height of its popularity, resource allocations, academic promotions, and even cash incentives were awarded to researchers who published in high-IF journals. Over time, however, it became increasingly clear that a journal’s IF should not be used as a surrogate for evaluating the actual research undertaken13 or the individuals who undertook it,14 given the measure’s various inherent limitations.15 Opposition to the use of journal-based metrics for evaluating individual researchers has now been formalized by various stakeholders,16 suggesting that in EOH, as elsewhere, it is time to consider new models. Individual productivity measures such as the h-index, for example, offer one possible alternative in this regard.17
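For readers unfamiliar with the h-index, the brief Python sketch below implements its standard definition: a researcher has index h when h of their papers have each received at least h citations. The citation counts in the example are hypothetical.

```python
# A minimal sketch of the h-index: the largest h such that h papers
# have at least h citations each (Hirsch's standard definition).

def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of per-paper citation counts."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

if __name__ == "__main__":
    # Hypothetical record: five papers with at least 5 citations each.
    print(h_index([25, 8, 5, 5, 5, 3, 1, 0]))  # -> 5
```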


When considering the numbers we might use for the assessment process, it is important to remember that scientific research does not take place in a vacuum. Given that scientific research is fundamentally undertaken by human beings, the human context of science represents a crucial facet of any discussion, for a few reasons.18 Firstly, article citations themselves are not necessarily a fair or objective record of scientific influence. At a micro level, citations from one author to another are not usually a random phenomenon: they share various social characteristics19 and tend to reflect the needs and idiosyncrasies of the citer.20 Authors form social networks by virtue of their academic publications, and within these networks authors select coauthors to work with and reference the work written by others.19 The issue of authorship itself has become a point of serious contention in recent years, not least because the average number of authors per article essentially doubled during the second half of the last century.21 The absolute number of authors per article has also expanded dramatically: by 2005, over 130 scientific papers had been published that each listed at least 500 authors.22

Aside from choosing coauthors to work with, scientists also search for new material to include and refer to, ideally the most relevant and most recent literature available. Because human searching (as with most human behavior) is not usually a random process, its assessment might conceivably be undertaken using mathematical techniques. As with citation analysis, various possibilities exist. Perhaps the most famous contemporary example of how human behavior influences searching in large systems is PageRank, a fundamental element of the original Google search engine algorithm. In their 1998 publication,23 Sergey Brin and Lawrence (Larry) Page described PageRank as a model of (human) user behavior: the probability that a random Internet “surfer” will visit a particular Web page. It has also been described as a random walk starting from any node and proceeding along the edges; after an infinite number of steps, the PageRank value is the probability that a given node is visited.19 One key aspect of Brin and Page’s original algorithm was the addition of a damping factor, reflecting the chance that an Internet “surfer” will not keep clicking on successive Web page links at random but will eventually get bored and jump to another, random, Web page.24 The damping factor value of 0.85 originally proposed by Brin and Page in 1998 had a major influence on a Web site’s resulting rank.25 This value, and the concept itself, may be relevant to consider in models of literature searching for the field of EOH.
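The random-surfer model is compact enough to sketch directly. The Python example below runs a standard power iteration with the damping factor of 0.85 discussed above; the four-page link graph is hypothetical and included only to keep the sketch self-contained, so this illustrates the general technique rather than Google's production algorithm.

```python
# A minimal power-iteration sketch of the PageRank random-surfer model.
# With probability d the surfer follows a random outgoing link; with
# probability (1 - d) the surfer gets bored and jumps to a random page.

def pagerank(links: dict[str, list[str]], d: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Estimate the probability that a random surfer visits each page."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - d) / n for node in nodes}
        for node, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for target in nodes:
                    new_rank[target] += d * rank[node] / n
            else:
                for target in outgoing:
                    new_rank[target] += d * rank[node] / len(outgoing)
        rank = new_rank
    return rank

if __name__ == "__main__":
    # Hypothetical four-page Web: most links point at page C.
    graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    for page, score in sorted(pagerank(graph).items()):
        print(f"{page}: {score:.3f}")
```

Raising or lowering the damping factor shifts how much rank flows along links versus the random-jump baseline, which is why the chosen value of 0.85 mattered so much to a page's resulting rank.25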

Regardless of whether one looks at research productivity from a past (assessment) or a future (planning) standpoint, it is important to recognize that, at a macro level, the institution of science itself has its own characteristic values, norms, and reward systems,26 and these naturally influence the behavior of individual researchers. One of the earliest and perhaps most relevant studies of this phenomenon was undertaken by Cole and Cole, who examined reward systems in scientific institutions during the 1960s. In their landmark 1967 study of university physicists,27 the authors classified the productivity of individuals into 4 categories: Type I (prolific), Type II (mass producer), Type III (perfectionist), and Type IV (silent). These productivity categories are somewhat reminiscent of Lotka’s aforementioned inverse square law6 and Bradford’s law of scattering.7

When looking forward from a research planning standpoint, investigations of research performance in the university sector have also raised their own challenges, not the least of which is how to develop incentives that increase productivity among current staff.28 Although scientific research efforts are known to be stubbornly resistant to management by numbers,29 measurement may also offer a solution in itself. A recent study from Denmark, for example, found that research productivity actually increased following the introduction of a performance indicator.30

Regardless of one’s opinion on the topic and what should be done about it, there can be no doubt that the measurement of research productivity among individuals, institutions, and countries will remain a key issue in EOH, as elsewhere, for many years to come.31 The ideal direction to take in meeting these challenges is difficult to establish conclusively, however. As previously described, much of the work already undertaken in this space has examined individuals’ publication records and, more recently, their citation profiles.32 Better suited, more practical, and cost-effective measures are clearly needed for 2015 and beyond. Bibliometrics certainly has much to offer in this regard, not the least of which is that, when properly applied, citation analysis introduces a useful measure of objectivity into the evaluation process at relatively low cost.2 Indeed, it may now be prudent to take a revised look at some old measures.33

For these reasons, the first few articles in our new series of discussion papers in the Archives of Environmental & Occupational Health (AEOH) will focus on the emerging topic of research assessment in universities, industry, and scientific institutions. The author welcomes, as always, opinion and feedback from AEOH readers and other interested parties.

Derek R. Smith
Deputy Editor-in-Chief
Archives of Environmental & Occupational Health

References

1. Panaretos J, Malesios C. Assessing scientific research performance and impact with single indices. Scientometrics. 2009;81:635–670.
2. Garfield E. Is citation analysis a legitimate evaluation tool? Scientometrics. 1979;1:359–375.
3. Smith DR. Identifying a set of ‘core’ journals in occupational health, part 2: lists derived by bibliometric techniques. Arch Environ Occup Health. 2010;65:173–175.
4. Godin B. From eugenics to scientometrics: Galton, Cattell, and men of science. Soc Stud Sci. 2007;37:691–728.
5. Sokal M. Science and James McKeen Cattell, 1894 to 1945. Science. 1980;209:43–52.
6. Lotka AJ. The frequency distribution of scientific productivity. J Washington Acad Sci. 1926;16:317–324.
7. Bradford SC. Sources of information on specific subjects. Engineering. 1934;137:85–86.
8. Altbach PG. Logan Wilson and the American academic profession. Society. 1996;34:86–91.
9. Carpenter CR, Cone DC, Sarli CC. Using publication metrics to highlight academic productivity and research impact. Acad Emerg Med. 2014;21:1160–1172.
10. Price DDS. Little Science, Big Science. New York: Columbia University Press; 1963.
11. Smith DR. Historical development of the journal impact factor and its relevance for occupational health. Ind Health. 2007;45:730–742.


12. Garfield E. Citation indexes for science: a new dimension in documentation through association of ideas. Science. 1955;122:108–111.
13. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314:497.
14. Epstein RJ. Journal impact factors do not equitably reflect academic staff performance in different medical subspecialties. J Invest Med. 2004;52:531–536.
15. Smith DR. Citation analysis and impact factor trends of 5 core journals in occupational medicine, 1985–2006. Arch Environ Occup Health. 2008;63:114–122.
16. San Francisco Declaration on Research Assessment Web page. Available at: http://www.ascb.org/dora-old/files/SFDeclarationFINAL.pdf. Accessed September 9, 2014.
17. Franco G. Research evaluation and competition for academic positions in occupational medicine. Arch Environ Occup Health. 2013;68:123–127.
18. Reif F. The competitive world of the pure scientist. Science. 1961;134:1957–1962.
19. Fu TJ, Song Q, Chiu D. The academic social network. Scientometrics. 2014;101:203–239.
20. Seglen PO. The skewness of science. J Am Soc Inform Sci. 1992;43:628–638.
21. Smith DR. Authorship, scholarship and ergonomics. Trav Humain. 2009;72:397–403.
22. King C. Multiauthor papers redux: a new peek at new peaks. Sci Watch. 2007;18(6):1–4.
23. Brin S, Page L. The anatomy of a large-scale hypertextual Web search engine. Comput Networks ISDN Syst. 1998;30:107–117.

24. Page L, Brin S, Motwani R, Winograd T. The PageRank Citation Ranking: Bringing Order to the Web. Stanford University Web site. Available at: http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf. Accessed September 18, 2014.
25. Hwai-Hui F, Lin DKJ, Hsien-Tang T. Damping factor in Google page ranking. Appl Stoch Models Bus Ind. 2006;22:431–444.
26. Merton RK. Priorities in scientific discovery: a chapter in the sociology of science. Am Sociol Rev. 1957;22:635–659.
27. Cole S, Cole JR. Scientific output and recognition: a study in the operation of the reward system in science. Am Sociol Rev. 1967;32:377–390.
28. Miller JC, Coble K, Lusk J. Evaluating top faculty researchers and the incentives that motivate them. Scientometrics. 2013;97:519–533.
29. Frame JD. Quantitative management of technology. Scientometrics. 1984;6:223–232.
30. Ingwersen P, Larsen B. Influence of a performance indicator on Danish research production and citation impact 2000–12. Scientometrics. 2014;101:1325–1344.
31. Smith DR. Impact factors, scientometrics and the history of citation-based research. Scientometrics. 2012;92:419–427.
32. Franco G. Scientific research of senior Italian academics of occupational medicine: a citation analysis of products published during the decade 2001–2010. Arch Environ Occup Health. In press. doi: 10.1080/19338244.2013.845136.
33. Guidotti TL. Can we get still more useful information from mortality data? Arch Environ Occup Health. 2013;68:130–131.
