letters to the editor

Research fraud and its combat: what to do in the case of qualitative research

Griet Peeraer1 & Renée E Stalmeijer2

1 Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
2 Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands

In their editorial, ten Cate et al.1 offer suggestions for policies that a medical education journal can employ to minimise the risk of fraudulent data manipulation. As both researchers and editors of a journal on medical education, we fully support and applaud these suggestions to combat research fraud. However, we feel that an additional perspective is needed to complement the policies described in the editorial. One of the main suggestions is more rigorous screening of reported data by reviewers. We worry that this type of data checking implies a return to the post-positivist stance that long dominated and hampered the acceptance of qualitative research in health sciences education. To rerun the data, as ten Cate et al.1 suggest, represents a mission impossible for the qualitative researcher, because qualitative analysis is a resource-intensive process and reliability in data coding is not something that most qualitative research generally aims for. In addition, something as simple as a language barrier can

Correspondence: Professor Griet Peeraer, Faculty of Medicine and Health Sciences, University of Antwerp, Universiteitsplein 1, S/2, Wilrijk, Antwerp 2610, Belgium. Tel: 00 32 3 820 2548; E-mail: [email protected]

doi: 10.1111/medu.12379

make the rerunning of data analyses extremely difficult at best. As a result, ten Cate et al.’s proposal1 for external review of the dataset is unlikely to represent a viable solution for battling research fraud in qualitative studies. We agree that there is a need for a strong culture of integrity within a research team. We support the suggestions to have multiple authors look deeply into datasets and to instigate member checking procedures, and we advocate that these activities take place within the framework of the qualitative research paradigm.

During the last decade, the tendency to evaluate qualitative research against conventional (post-positivist) criteria has diminished. In order to move beyond a subjective appreciation of the quality of qualitative research, as well as to combat possible fraud in qualitative research, we see the need for more specific guidelines in medical education aimed at the review of qualitative research. The concept of the ‘audit trail’, in use within the domain of qualitative research since the 1980s,2,3 was designed to facilitate ascertaining the trustworthiness of qualitative research. We think that it is the qualitative researcher’s obligation to keep extensive and meticulous documentation about the choices he or she makes with regard to methodology, sampling, data collection and analysis,4 and, at the same time, to be able to produce the raw data. Although several of these aspects are usually

© 2014 John Wiley & Sons Ltd. MEDICAL EDUCATION 2014; 48: 333–335

briefly described within the methods section of a manuscript (e.g. the role of the author and his or her perspective, sampling strategy, method of analysing data), the audit trail gives a more elaborate overview of the research process and the choices made.

When a reviewer doubts the quality of the research paper under review, he or she must communicate this to the editorial office, which, in turn, might decide to ask an ‘auditor’ for assistance in determining the merit and quality of the work. The auditor will be a person who is versed in qualitative research and its quality criteria and who is able to scrutinise the research from a ‘…broader understanding of the rationale and assumptions behind qualitative research’.5 He or she is appointed by the journal for the specific task of auditing articles under review; he or she uses the audit trail to perform a thorough quality check of the research performed by the researcher.6 After an initial orientation to the project, the auditor will investigate ‘the raw data, categorised data material, and the findings’.6 In addition, the auditee’s research journal, which includes memos and reflections, is available7 to help the auditor understand the perspectives from which certain choices were made. This must all occur in the recognition that, according to the qualitative paradigm, we are all situated, meaning that we bring our own


preconceptions, histories and understandings to any endeavour.7,8 It is therefore not expected that the auditor or reviewer will come to exactly the same conclusions as the auditee or author, but that he or she will decide to what extent the research is credible, dependable and confirmable.4,7

REFERENCES

1 ten Cate O, Brewster D, Cruess R, Calman K, Rogers W, Supe A, Gruppen L. Research fraud and its combat: what can a journal do? Med Educ 2013;47:638–40.
2 Guba EG. Criteria for assessing the trustworthiness of naturalistic inquiries. Educ Comm Tech J 1981;29:75–91.
3 Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage Publications 1985.
4 Morse JM, Barrett M, Mayan M, Olson K, Spiers J. Verification strategies for establishing reliability and validity in qualitative research. Int J Qual Meth 2002;1:13–22.
5 Barbour R. Checklists for improving rigour in qualitative research: a case of the tail wagging the dog? BMJ 2001;322:1115–7.
6 Akkerman S, Admiraal W, Brekelmans M, Oost H. Auditing quality of research in social sciences. Qual Quant 2008;42:257–74.
7 Koch T. Establishing rigour in qualitative research: the decision trail. J Adv Nurs 2006;53:91–103.
8 Gadamer H-G. Philosophical Hermeneutics. Berkeley, CA: University of California Press 1976.

Removing the rose-coloured glasses: it’s high time we published the actual data

Martin V Pusic1

1 Department of Emergency Medicine, New York University, New York, New York, USA

Editor – I congratulate Dr Ten Cate and colleagues for highlighting research malfeasance and making suggestions for its prevention.1 Their ideas would form the basis of an incremental improvement in the well-established process of peer review and research publication. I would encourage one further change to the process: namely, that the researcher(s) (and journal) should be required to publish a digital version of the full dataset that is the basis for the main results of the study. As Ten Cate et al.1 mention, allowing peer reviewers to examine the dataset would allow a more finely

Correspondence: Martin V Pusic, MD, PhD, 550 First Avenue, New York, New York 10016, USA. Tel: 00 1 212 263 2053; E-mail: [email protected] doi: 10.1111/medu.12312


grained look at the data. This logic applies not only to studies that report numerical measurements, but also to qualitative studies, in which allowing third parties to see de-identified original transcripts might discourage fraud. Journal space constraints often mandate a keyhole view of the data, and the summary statistics chosen by the author(s) predominate. However, those space constraints are mainly historical, attributable to the costs of publishing journals in paper form. Given the ever-expanding web presence of journals such as Medical Education, the publication of digital datasets should not be difficult. A numerical dataset of 5000 participants with 30 numerical variables has a CSV (comma-separated values) file size of roughly 1 MB. A full transcript of a 1-hour focus group discussion uses even less space.
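The file-size estimate above is easy to verify. The following sketch generates a synthetic CSV of the stated dimensions and measures its size; the variable names and value ranges are illustrative assumptions, not taken from any real dataset:

```python
import csv
import io
import random

# Sketch: estimate the size of a CSV with 5000 participants (rows)
# and 30 numerical variables (columns). Column names and the 0-100
# value range are illustrative assumptions.
random.seed(0)

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([f"var{i:02d}" for i in range(30)])  # header row
for _ in range(5000):  # one row per participant
    writer.writerow([f"{random.uniform(0, 100):.2f}" for _ in range(30)])

size_mb = len(buf.getvalue().encode("utf-8")) / 1_000_000
print(f"approx. {size_mb:.2f} MB")  # on the order of 1 MB
```

With two-decimal values the file comes out just under 1 MB, consistent with the letter’s back-of-the-envelope figure.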

There would be logistical issues to overcome. Some standardisation to stipulate the use of a file type that can be opened in common statistical packages would be important. Journal editors and staff may have some but not all of the analytical skills necessary to oversee this process. Privacy concerns would need to be addressed.

We can imagine a number of other benefits to publishing research datasets. Consider how the ability to inspect a dataset might facilitate the task of the independent researcher who is concerned with repeating or synthesising research studies. With appropriate safeguards to blunt hindsight bias, the datasets could be re-analysed secondarily in ways that the original researchers may not have envisaged. These datasets might deepen educational activities such as journal clubs and data analysis courses.

