Best Practice & Research Clinical Gastroenterology 28 (2014) 339–347


The integrity of science – Lost in translation?

Matthias Kaiser, Dr. Phil., Professor*

Centre for the Study of the Sciences and the Humanities (SVT), University of Bergen, Allegaten 34, PO Box 7805, N-5020 Bergen, Norway

* Tel.: +47 55 58 24 86; +47 917 33 928; fax: +47 55 58 96 64. E-mail addresses: [email protected], [email protected].

http://dx.doi.org/10.1016/j.bpg.2014.03.003
1521-6918/© 2014 Published by Elsevier Ltd.

Keywords: Scientific integrity; Scientific misconduct; Ethics of science; Ethical guidelines

Abstract

This paper presents selected issues currently discussed concerning the integrity of science, and it argues that there exist serious challenges to integrity in the various sciences. Due to the conceptual complexities involved, even core definitions of scientific integrity have been disputed, and core cases of scientific misconduct have influenced the public discussion about them. It is claimed that ethics and law may not always go well together in matters of scientific integrity. Explanations of the causes of scientific misconduct vary, and defining good scientific practices is not a straightforward task. Even though the efficacy of ethics courses in improving scientific integrity can be doubted, and universities probably need to come up with more innovative formats to improve ethics in scientific training, ethics talk may be the only practical remedy. © 2014 Published by Elsevier Ltd.

Nowadays the integrity of science is seriously challenged. This is the claim defended in this article. One needs to realize a number of basic things about such a claim at the start. First, it is not, and cannot be, a factual statement, or an objective statement if you like. It is essentially an evaluative statement, resting on judgment, which, in the end, is always subjective. Whatever one would cite in support of such a statement, or as disproof of it, fundamentally it all depends on how we interpret and judge the evidence. Second, we need to explain what we mean by the terms used in the claim. Even though many people behave as if these terms are self-explanatory, they may not really be so. In fact, some of us may have observed that people sometimes use these terms differently, depending on whether they agree with the statement or not. We shall do this in the next few paragraphs.



Third, we should be clear about our intentions: we engage in this debate, and we raise this claim, for a certain purpose. In other words, we (or rather I, as author) have an agenda of our own. We raise critical points about science today because we believe it is a worthwhile activity to try to change it, and because some old-fashioned and often-quoted virtues of science deserve to be defended. Thus, basically, we believe that good science can be a good thing, and not only for scientists. To be sure, the latter statement is not self-evident.

And, finally, a piece of self-reflexivity: science is a big thing, and we who are part of it often believe we can view the whole thing from where we are placed. Those who, like the author, have science and technology as the object of their study tend in particular to believe that they have a privileged view of the whole activity. But what we ascribe to others we must certainly also ascribe to ourselves, namely that our views will always be perspectival. There is no ‘view from nowhere’ on science, though there will obviously be some people with relatively important insights into certain aspects of science, because this is what they have studied in detail. This implies that we do not expect the claim to be accepted uncritically, but we hope that it might trigger the kind of self-reflection and critique which is acclaimed as the great achievement of the scientific and rational spirit.

Concepts

What do we mean by ‘science’? Obviously, this term is not value-neutral, and it is not easily translatable either. In the English language it typically designates the scholarly activities that emerged out of the empirical methods of testing which were propagated during the Scientific Revolution. It is therefore often reserved for the natural sciences. The German term ‘Wissenschaft’, on the other hand, covers all scholarly activities, including the humanities with their philological and hermeneutical methods. Some people maintain that science proper is only academic science: basic science pursued at institutions of higher education, the universities. Other scholarly activities outside these holy halls are then seen as ‘merely’ instrumental or applied. They are close to engineering, which is then defined as something else that is not science, perhaps as industrial research, which is also seen as not proper science. The point is that many such delineations and demarcations are drawn up, and all serve particular interests of keeping some parts in and some parts out; in other words, there are power interests behind them.

When seen from outside the system, one will typically find a much larger tolerance regarding what counts as science. The occasional homeopath, with perhaps a doctoral title in front of his name and a number of publications in strange journals, will pass as a scientist in the popular press, much to the dismay of the medical profession. (And many people will trust his skills.) So will the climate sceptic who opposes virtually all claims that the scientific community of climate researchers accepts. Sometimes the latter will even attain a high position and large funds to administer other researchers; we have seen examples of this. Our point is that we should be relatively inclusive in our understanding of science, and accept that we will by necessity be in the company of some people with whom we would normally rather not be associated. But this may perhaps already point us in the direction of the theme of this article.
It may not be so crucial whether a scholarly activity is basic or applied, quantitative and experimental or qualitative and hermeneutical, done in universities or in institutes and companies, or whether it produces knowledge or things and products – what counts is perhaps whether the activity is devoted to a certain ethos. This is how social scientists typically identify social institutions: through the set of basic values and norms that delineate the institution from others. It was Robert K. Merton who in 1942 [1] defined the scientific ethos (= the moral character of science: its set of basic ethical commitments, the ethics of science) through four characteristic basic norms: (1) communism (i.e. common ownership of knowledge), (2) universalism (disregard for who puts forth a knowledge claim or where that person comes from), (3) disinterestedness (no value-based bias), and (4) organized scepticism (control of claims through peers by appropriate mechanisms). Later he added (5) originality. His famous article ‘Science and the Social Order’ of 1938 started with the words: ‘Forty-three years ago Max Weber observed that "the belief in the value of scientific truth is not derived from nature but is a product of definite cultures." We may now add: and this belief is readily transmuted into doubt or disbelief.’ [2, p. 321]


One can read this as implying that people’s faith in the integrity of science can easily be shattered if behaviour emerges which is not in tune with the ethos of science. In other words, the occurrence of widespread misconduct in science will tend to undermine people’s belief in the value of science for society, very much as news about sexual misconduct of priests towards minors tends to undermine people’s belief in the sanctity of the church. As indications of what this ethos of science implies, one can look at ethical guidelines and standards, a topic to which we shall return at the end of the article. In other words, no matter what the potential goods of science may be, the observed behaviour of its practitioners counts heavily in public perception.

The notion of integrity is both easier and more complex at the same time, compared with the notion of science. It is easier in the sense that we have a common-sense understanding of what it is about, and it is more complex in the sense that it is quite a challenge to explain in greater detail its psychological function of generating trust. First, there is a sense of integrity that implies that a certain system or collection of things is undisturbed, i.e. not tampered with. We expect soundness of the system. We can, e.g., talk of the integrity of an ecosystem or the integrity of a pile of original documents. It is more probable that a true state of affairs is revealed through certain traces or sources related to this state of affairs, given that their integrity is preserved. The second and more common meaning of integrity applies to people, typically in relation to their actions. We say that a person has integrity if what the person does is based on transparent beliefs and values, such that we may detect a serious effort behind the actions to act morally. Honesty is a prerequisite. Betrayal, lies, corruption and egoism do not go together with integrity. When talking about the integrity of science, one is obviously talking about a social system which displays soundness in its functions and which, much like an individual, is judged in relation to its ethics, i.e. whether its practitioners behave in accordance with the accepted rules of good conduct within that system. The question, then, is what these rules of good conduct are. And the next, really problematic, question is how often these rules are breached in the community of scientists.

Defining misconduct?

Scientific misconduct is obviously a violation of scientific integrity. But what exactly constitutes scientific misconduct? The definition one finds on the webpage of the US Office of Research Integrity (ORI) is this:

‘Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. (a) Fabrication is making up data or results and recording or reporting them. (b) Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record. (c) Plagiarism is the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit. (d) Research misconduct does not include honest error or differences of opinion.’ (http://ori.hhs.gov/definition-misconduct; accessed 8 February 2014)

It is this definition and its core elements that have led to the common abbreviation FFP (= Fabrication, Falsification, Plagiarism) for this form of misconduct. It is worthwhile, though, to be aware that this very definition has undergone dramatic changes over time, and that it also has more inclusive variants in the legislation of some other countries. There have been debates whether the restriction to FFP and intentional actions, coupled with the exclusion of phrases like ‘deviation from commonly accepted research practices’, is really satisfactory (cf. [3,4]). Early definitions of misconduct, both in the USA and in some other countries, were often more inclusive, e.g. in operating with phrases that did not specify that the act was done on purpose, or in including an open phrase which simply referred to accepted practices. One example of a definition that was thought to be too vague was the proposal of the US Commission on Research Integrity from 1995:


‘Research misconduct is significant misbehaviour that improperly appropriates the intellectual property or contribution of others, that intentionally impedes the process of research, or that risks corrupting the scientific record or compromising the integrity of scientific practices …’ [5, p. 15].

Often, discussions about the proper definition of scientific misconduct were stimulated by problematic cases of the day, like the ‘Baltimore case’ in the early 1990s in the USA (a name that, as will become apparent, was misleading to begin with). In 1989 the US Congress opened hearings due to accusations that research done by Thereza Imanishi-Kari, a colleague of the Nobel Laureate David Baltimore, and then published with both as authors, was not reproducible, and this eventually led to a verdict of scientific misconduct against Imanishi-Kari. The research reported success in genetically modifying mice to amplify their immune system. Imanishi-Kari and Baltimore always defended the research and denied any fraud. When the verdict came, Baltimore resigned from his office as President of Rockefeller University. Upon appeal, a new committee determined in 1996 that the paper did not contain any fraudulent data, though it contained some errors, later acknowledged by the authors. The debate about this well-publicised case showed that the line between committing research fraud and making an honest error is drawn by the intention of the scientists, and that intention is very difficult to determine in practice. It also showed that co-authorship can be a dangerous undertaking for scientists, and that co-authors should be prepared to take on responsibility for the research reported. Daniel J. Kevles wrote a book about this case [6], which painted an even more discomforting picture of intrigue, personal animosity, and political conflict.

In Europe, the discussions around the well-publicised Lomborg case also influenced debates about the appropriate way to define scientific misconduct. The political scientist Bjørn Lomborg was found guilty of ‘objective’ scientific misconduct because of clear imprecision, bias and misleading presentations in his book The Skeptical Environmentalist. He did not fabricate or plagiarize any results; he merely presented the material in such a biased manner that it could be characterised as a gross violation of scientific objectivity. When is gross bias a falsification of the research record, and when is it a breach of accepted scientific practice? Yet his ‘subjective’ guilt was not proven, i.e. the intentionality of his action could not be shown, which strictly speaking implied an acquittal of Lomborg, given the Danish definition of misconduct. However, the fine difference between objective guilt and subjective innocence due to lack of proof understandably escaped most people. The case was widely discussed in the press, and some commentators feared that findings of misconduct could be used for political reasons, or for forcing mere dissenters in the scientific community into line. Where does one draw the line between a scientific majority confronting the deviant view of a critic, and the necessary conflict between truthful reports on the one hand and wilfully fraudulent or biased claims serving other interests on the other? Many commentators felt that this task could not be placed in the hands of an ethics committee alone.

In Norway it was the Jon Sudbø case which was an eye-opener for the authorities, who quickly adopted a new law on research ethics, though discussions had been going on for several years [7].
As in many other countries, denials that research misconduct was a serious issue in Norwegian research, coupled with the debate whether it should be State authorities or the scientific community itself (‘self-policing’) that properly addresses scientific integrity, had blocked any real progress in actually dealing with the topic. The medical researcher and dentist Jon Sudbø was a success story in Norwegian science. He was relatively young, good-looking, with a sympathetic appearance, and he had an impressive list of publications in the best scientific journals. He accordingly also received substantial funding from various sources. Sudbø published a paper in The Lancet in 2005 (NB: with co-authors) reporting that an ibuprofen-like substance was capable of reducing the risk of oral cancer in smokers. The research was supposedly based on tests with ca. 900 subjects. The problem was that the data on these subjects were supposed to come from a database which did not yet exist at the time of the research. As it turned out, the data were all fabricated, and several hundred of the subjects were recorded as having identical birthdays. The fraud was soon established, and Sudbø later admitted to research fraud in earlier papers as well. A commission investigating his fraud examined 38 scientific articles and questioned some 60 co-authors from various countries. Retractions followed, including the revocation of his doctoral degree. Sudbø alone was found guilty of fraud, but the commission indirectly also criticised the co-authors for their lack of vigilance and critical questions.


The Norwegian Sudbø case resembles in some important respects a more recent case in social psychology: the Diederik Stapel case from the Netherlands [8], also involving a seemingly very successful and charming scientist with a lot of media appeal. Again, the investigating commissions raised serious criticisms of the involved co-authors and peers, who they felt had possibly been too gullible in accepting data that were too good to be true. As it turned out, the data were fabricated. In the end, at least 54 scientific papers were retracted.

The current Norwegian definition of scientific misconduct is close to the ORI definition, but it includes serious breaches of good scientific practices and puts gross negligence on a par with intent: ‘Scientific misconduct is defined as falsification, fabrication, plagiarism and other serious breaches of good scientific practice that have been committed willfully or through gross negligence when planning, carrying out or reporting on research.’ (Norwegian Act of 30 June 2006 No. 56 on ethics and integrity in research, Section 5)

What one currently observes in Norway, and presumably in some other countries as well, is that findings of misconduct are contested by lawyers or brought to (civil) court. And this is not always to the benefit of improved integrity in science; rather the opposite. Consider, for instance, a Norwegian case of plagiarism in a PhD thesis which was discussed for years. While all parties agreed that substantial plagiarism could be shown in the work (>15–20%, more than would be allowed in a student examination), some raised the question whether one could conclude with a finding of misconduct or not. The national body for the investigation of misconduct in science concluded that this was a clear case of misconduct, but upon appeal from the defendant the court concluded that gross negligence could not be proven, and that therefore the defendant could not be found guilty of scientific misconduct. If plagiarizing 15% of your scientific publication is not sufficient to attribute gross negligence to you as a scientific professional, what is? It is clear that court findings like this undermine all efforts to improve scientific integrity.

I have had other experiences as well which undermine my belief that all is well with scientific integrity and that things are moving in the right direction. For instance, as a member of the university board for scientific misconduct I had to pass judgment on a case where a researcher had received medical records from ca. 18,000 patients, all of whom had agreed that these data could be coupled to other data about them from a social registry. A further 11,000 patients had also been invited to participate, and they had declined. But then all the data, including those from people who did not consent, were sent to the researcher by a technical mistake. After a while the researcher, as Principal Investigator, co-published a paper in which the 11,000 non-participants of the study were also analysed – a clear breach of the rules, which demand informed consent for such studies. It was precisely the fact that they were non-participants which was central in the paper. The Board, in a first ruling, found that this qualified as scientific misconduct on the part of the responsible researcher.
After the researcher’s lawyers wrote a longish complaint and claimed that no intent on the part of the researcher could be proven, the majority of the Board revised its judgment: it declared an institutional fault in oversight but accepted that no intent, and thus no scientific misconduct, could be proven. This was then also the conclusion of the University leadership, and the researcher was officially acquitted of misconduct. I disagreed with the findings, among other reasons on the grounds that the efforts of the Principal Investigator to gloss over the lack of ethics-committee approval showed awareness that all was not well in terms of scientific integrity. The article described the ethical issues, among other passages, in this manner: ‘Written informed consent was gathered from all participants. For the nonparticipants, only registry data were used; in principle, this is public information and is made available for research purposes through application to the (proper authority).’ (Source withheld for reasons of privacy) Of course, this is a way of diluting the simple fact that one needs permission from the appropriate ethics committee to do this research, and that no such permission was given. Thus I wonder what it would take to characterize scientific behaviour as gross negligence or intentional misconduct. There is a clear tension between the tendency of lawyers and courts to tie the characterisation of scientific misconduct to standards of criminal law, and the effort of scientists to uphold ethical standards for the conduct of their profession.

It has been clear from the very beginning of these discussions that a restriction to FFP is not really appropriate when discussing scientific integrity.


There is known behaviour in science which clearly is not in the spirit of a scientific ethos but which is not FFP: for instance, honorary authorship, withholding of scientific information (in applications), duplicate publication, non-compliance with regulations and law in the conduct of the research, not keeping original data material or not making it available to others, excluding participating colleagues from publications, not retracting a publication though mistakes are apparent, misuse of statistics to enhance the significance of findings, etc. All of these behaviours clearly do not conform to good conduct in research. They have therefore been labelled ‘Questionable Research Practices’ (QRP), and efforts to define them are typically open-ended, i.e. they allow for new forms of QRP to be added. QRP will not qualify as research misconduct, but it can still qualify as unethical scientific behaviour, which may invite corrective measures from the involved institutions.

Many have wondered about the prevalence of FFP and/or QRP in science. One of the questions many ask is whether we have more of this now than earlier. The only honest answer is that we do not know. While we do have some good studies that tell us something about prevalence (cf. [9,10]), we do not know whether we experience more, less or the same level of FFP misconduct and QRP as earlier. It is quite possible that awareness of the issue has increased and that we therefore get better reporting of misconduct. In principle, however, one can expect that any given study of this sort will find that around 0.3–0.5% of the respondents admit to FFP, while the figures for QRP are much higher, sometimes more than 10% of the respondents, depending on what one defines as QRP to start with. It is interesting that so far these results seem very similar across countries. Science is international after all.

The causes of ailing scientific integrity?

Whatever the precise prevalence of scientific misconduct and shortcomings in scientific integrity, the numbers are simply too high to live with comfortably. If we could identify the causes of this behaviour, we could quite possibly apply measures to avoid the problem. So how can we explain breaches of scientific integrity? Basically, three types of explanation are around. (i) The rotten-apple approach: since science is undeniably a human activity, and since it is known that there will always be individuals who display deviant behaviour, it is no surprise that there will always be individuals within the scientific community who cheat and take short-cuts. Scientists are not intrinsically more moral than, say, taxi drivers. (ii) The lack-of-education approach: the training of a scientist is more and more under time constraints, and the close personal relationship with one’s mentor has been replaced by larger teams. This leads to a lack of positive role models, in particular when it comes to ethics. Scientific misconduct, whether serious or less serious, is then the result of a lack of knowledge and good example. (iii) The systemic approach: the production of knowledge has changed dramatically in the years since WWII, and the traditions of academic science are no longer the main motor of knowledge production. There is institutional change, and greater pressure on young researchers nowadays to produce certain prescribed types of results.
People now talk about Mode 2 science [11,12], post-normal science [13], or post-academic science [14], and imply that the kind of institution that Merton intended to portray through his ethos is a thing of the past. In other words, the institutional framework, societal expectations and rewards that accompany scientific knowledge production nowadays are such that ethics is more and more remote, lost in translation somewhere.

This is not the place to adjudicate between the three explanations mentioned above. From my personal viewpoint I must admit that all three seem to have a lot going for them. On the other hand, in my teaching I especially stress the third aspect, since this is the one that many scientists seem to have the hardest time accepting.

Pure Science, Pure Truth – undiluted and straight?

One of the issues that clearly emerges when taking a closer look at scientific integrity is that very little about science is really straightforward and simple. As we mentioned earlier, there have been long discussions about the proper way to define scientific misconduct. There are even more difficult discussions about how to define, e.g., plagiarism in science (in contrast to other fields like literature). But if misconduct is difficult to describe, perhaps good conduct, good science, is easier to describe?


Could one not just assume that good science is, in a nutshell, about telling the whole truth, while bad science is about lying about at least some aspects of the work? Well, what truth, whose truth? The trouble is that all agree that even the best science is fallible, and all agree that what we hold true now will after a certain time be considered wrong; the half-life of a scientific ‘fact’ is short, depending on the discipline. And if we go even one level deeper, we soon find that virtually everything published in the scientific peer-reviewed literature is also contested by some scientists. There is unfortunately no simple truth to be found when looking at scientific productions. What we find is complexity and uncertainty [15]. As John Ioannidis [16] has argued, the probabilities are such that most published research findings are more likely false than true.
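Ioannidis’s argument is essentially arithmetical, and a worked example may help. What follows is a minimal sketch of his positive predictive value (PPV) reasoning; the formula is taken from [16], while the plugged-in numbers are illustrative assumptions of mine, not figures from that paper. Let $R$ denote the pre-study odds that a probed relationship is true, $\alpha$ the Type I error rate, and $1-\beta$ the statistical power. Then

\[
\mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{R - \beta R + \alpha},
\]

so a claimed finding is more likely true than false only if $(1-\beta)\,R > \alpha$. With the conventional $\alpha = 0.05$, a well-powered study ($1-\beta = 0.8$) and pre-study odds $R = 1/10$,

\[
\mathrm{PPV} \;=\; \frac{0.8 \times 0.1}{0.1 - 0.02 + 0.05} \;\approx\; 0.62,
\]

but with an underpowered study ($1-\beta = 0.2$) at the same odds,

\[
\mathrm{PPV} \;=\; \frac{0.2 \times 0.1}{0.1 - 0.08 + 0.05} \;\approx\; 0.29,
\]

i.e. fewer than one in three ‘positive’ findings would be true, before any bias is even taken into account.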
Producing simple truths is more difficult than most scientific textbooks suggest, and there are numerous pitfalls and hidden biases which affect even the most well-meaning and honest scientists. Honest errors are not the same as scientific misconduct, but if honest errors are endemic, what should we then say about scientific integrity? Perhaps what we end up with is the honest struggle to get to the truth as constitutive of scientific integrity, rather than the stating of truth in a publication. Fanelli and Ioannidis say as much in the opening lines of a recent article [17]:

‘Science is a struggle for truth against methodological, psychological, and sociological obstacles. Increasing efforts are devoted to studying unconscious and conscious biases in research and publication, because these represent a threat for human health, economic resources, and scientific progress … The publication of false, exaggerated, and falsified findings is believed to be more common in research fields where replication is difficult, theories are less clear, and methods are less standardized, because researchers have more "degrees of freedom" to produce the results they expect.’ [17, p. 15031]

So what is the bias that we are aware of, and what is the bias that enters our work quietly from behind without our noticing it? How much does, e.g., a funding source affect the findings of a study? It has been clearly shown, for instance, that results from research on whales are influenced by the source of funding, and that conflict of interest may have led to a misrepresentation in both the primary and the secondary literature on the effects of noise on marine mammals [18]. Similar results have been obtained for medical research, especially in relation to funding from the pharmaceutical industry (cf. [19]). This is the challenge: the science we know is not pure and undiluted, and it is not the reliable producer of incontestable truths; yet the overwhelming majority of people in the scientific community believe that they contribute to producing more robust knowledge than would otherwise be available, and some even believe wholeheartedly in the truth of their own findings. The integrity of science operates somewhere in this grey area between not-the-final-truth and a-deliberate-lie-and-distortion. Shades of grey may be difficult to detect, especially when the darker areas are perceived as threatening the reputation of those in the brighter areas.

Guidelines and education to the rescue?

What can we do to improve the situation? What is actually done? Obviously, awareness of issues of integrity in science has improved considerably during the last decades. With it, the number of ethical codes and guidelines has risen. In a previous function as chair of the Standing Committee on Responsibility and Ethics in Science (SCRES), a former ethics committee of the International Council for Science (ICSU), the author presented a study of 115 such codes to the General Assembly of ICSU in Rio de Janeiro in 2002. A striking feature of these standards was their growth in number over time: the period before 1970 was represented by three national and three international standards in the material, while 1990–95 had nine national and 11 international standards, and the period 1996–2002 contained 25 national and 16 international standards. So the question is what role ethical guidelines and standards can play in defining responsible professional behaviour among scientists and in educating young scientists accordingly. Anderson and Shaw [20] detect ten different dimensions of codes which can impair their utility in international research environments, and they see this as a considerable challenge. I [21] agree that codes follow different formats and goals, but believe that these differences are unavoidable and, in the end, beneficial. One of the central differences among such codes lies in their aspirations or goals.


Some codes are designed on the notion that they should be adequate representations of the standards that are commonly accepted by most practicing researchers and scientists. Other codes are more aspirational, in the sense that they normatively describe behaviour which is seen as essential for reaching certain goals of science (in general, cf. [22]). A typical example of the latter would be a code which, e.g., subscribes to what has been called a new social contract for science [23], thus setting up sustainability as one of the aspired-to outcomes of scientific research.

Another important difference is who the guidelines are made for in the first place: who is the intended audience? An obvious audience is the community of scientists themselves, especially the younger people among them. Guidelines can – and in many places in fact do – play a role in the curriculum of the PhD education. But ethical guidelines can also be seen as a ‘showcase for science’, directed towards a more general public, funders and policy makers. Here they are meant to ‘restore’ public trust in science, especially in the light of scandals of scientific integrity. Guidelines are thus also an answer to society as to what the essential values are that characterize one’s institution and that one is willing to defend against breaches and outside pressure.

The ethical guidelines or standards of good conduct in science are an attempt to make explicit what otherwise is easily pushed into the background and rendered invisible – what is easily ‘lost in translation’. Again, it is important to note that in reality there is no absolute authority anywhere to define what these standards should be. All issuers of guidelines operate from their own perspective and promote their own institutional interests; in other words, all such guidelines are in effect more value-based than is actually expressed in them. But the point of such guidelines is – as I see it – that they invite professionals to discuss good conduct with the guidelines as a starting point. Good conduct in science is a moving target, and changes emerge slowly as a result of community interactions among those who take an active interest in the issue. Guidelines function as frameworks for these dialogues.

Since I started out by saying that things are not all well with the integrity of science – a subjective assessment – and since I also stated that I believe the virtues of scientific thinking and methods are well worth defending, I should by way of conclusion say how the situation can be improved. I have already indicated in the last paragraph that I think ethical guidelines, and the discussions they may generate, can actually be important for improving the integrity of science. But the field where I and many other scholars spend most time is education, in particular the education of young scientists at the PhD level. Can we promote scientific integrity by teaching the ethics of science more regularly and more extensively? A case could be made for this, since there is evidence that research integrity is not routinely conveyed from supervisors and mentors to younger scientists [24]. On the other hand, many of the cases of misconduct that I know of have involved people who had passed ethics courses. Is ethics teaching effective in promoting scientific integrity? May and Luth [25] have reported a study of the effectiveness of ethics teaching. While some aspects, like participants’ perspective-taking or moral efficacy, could clearly be counted as benefits of such teaching, other aspects, like moral judgment and knowledge of responsible conduct of research practices, scored worse.
Thus we may have to improve our ethics teaching somehow, and ethics courses in the traditional sense may not be the sole answer. One probably has little choice but to strengthen the ethics education of young scientists in order to improve scientific integrity, but as some experiences indicate, much more work is needed to find appropriate and better formats of teaching. In the end, it is probably like all ethics: we need to talk about it.

Statement

This study was not sponsored by anybody.

Conflict of interest

None.

References


[1] Merton RK. The normative structure of science [1942]. In: Merton RK, editor. The sociology of science: theoretical and empirical investigations. Chicago: University of Chicago Press; 1973.
[2] Merton RK. Science and the social order. Philos Sci 1938;5(3):321–37.
[3] Fanelli D. The black, the white and the grey areas: towards an international and interdisciplinary definition of scientific misconduct. In: [22]; 2012. pp. 79–89.
[4] Steneck NH. Fostering integrity in research: definitions, current knowledge, and future directions. Sci Eng Ethics 2006;12(1):53–74.
[5] US Commission on Research Integrity. Integrity and misconduct in research. US Department of Health and Human Services; 1995. Available at: http://ori.hhs.gov/images/ddblock/report_commission.pdf [accessed 16.02.14].
[6] Kevles DJ. The Baltimore case. A trial of politics, science and character. New York/London: W.W. Norton and Company; 1998.
[7] Kaiser M. Scientific dishonesty and research ethics in Norway. In: Annual report 2006: the Danish committees on scientific dishonesty. Copenhagen: Danish Agency for Science, Technology and Innovation; 2007. pp. 18–21.
[8] Levelt Committee, Noort Committee, Drenth Committee. Flawed science – the fraudulent research practices of social psychologist Diederik Stapel; 2012. Available at: https://www.commissielevelt.nl/wp-content/uploads_per_blog/commissielevelt/2013/01/finalreportLevelt1.pdf [accessed 17.02.14].
[9] Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature 2005;435:737–8.
[10] Elgesem D, Jåsund K, Kaiser M. Fusk i forskning: Uredelig og diskutabel forskningspraksis i Norge [An empirical study of scientific misconduct and problematic research practices in Norway]. Oslo: The National Committees for Research Ethics in Norway; 1997.
[11] Gibbons M, Limoges C, Nowotny H, Schwartzman S, Scott P, Trow M. The new production of knowledge: the dynamics of science and research in contemporary societies. London: SAGE; 1994.
[12] Nowotny H, Scott P, Gibbons M. Re-thinking science: knowledge and the public in an age of uncertainty. Cambridge: Polity Press; 2001.
[13] Funtowicz S, Ravetz J. Science for the post-normal age. Futures 1993;25(7):739–55.
[14] Ziman J. Real science: what it is and what it means. New York: CUP; 2004.
[15] Funtowicz S, Ravetz J. Uncertainty and quality in science for policy. Dordrecht, The Netherlands: Kluwer Academic Publishers; 1990.
[16] Ioannidis JPA. Why most published research findings are false. PLoS Med 2005;2(8):e124.
[17] Fanelli D, Ioannidis JPA. US studies may overestimate effect sizes in softer research. Proc Natl Acad Sci 2013;110(37):15031–6.
[18] Wade L, Whitehead H, Weilgart L. Conflict of interest in research on anthropogenic noise and marine mammals: does funding bias conclusions? Mar Policy 2010;34:320–7.
[19] Finucane TE, Boult CE. Association of funding and findings of pharmaceutical research at a meeting of a medical professional society. Am J Med 2004;117(11):842–5.
[20] Anderson MS, Shaw MA. A framework for examining codes of conduct on research integrity. In: [22]; 2012. pp. 133–48.
[21] Kaiser M. Dilemmas for ethical guidelines for the sciences. In: [22]; 2012. pp. 149–56.
[22] Mayer T, Steneck NH, editors. Promoting research integrity in a global environment. Singapore: World Scientific Publishing Co; 2012.
[23] Lubchenco J. Entering the century of the environment: a new social contract for science. Science 1998;279:491–7.
[24] Kalleberg R. Lessons from 17 years with national guidelines for research ethics in Norway. In: [22]; 2012. pp. 173–6.
[25] May DR, Luth M. The effectiveness of ethics education: a quasi-experimental field study. Sci Eng Ethics 2013;19:545–68.
