How many papers can be published from one study? There is no easy answer to this question – but it is one that is constantly put to us as editors when we meet potential authors around the world. The issue of how many papers are acceptable from a single study vexes researchers, editors and publishers, and the practice of publishing multiple outputs from a single study is, somewhat disparagingly, referred to as ‘salami slicing.’

Comparing research outputs to salami is not necessarily as derogatory as it may seem: both take time, effort and expertise to produce. Research findings may take teams years to achieve, and require patience, perseverance, expertise and the right personnel; similarly, salami is hardly fast food. This cured, fermented, air-dried sausage can take weeks to prepare, and requires attention to detail to control humidity, salt levels, acidity and temperature. When cured it becomes very hard and, due to its texture and strong flavour, is served in very thin slices. Used in the derogatory sense, salami slicing is a good analogy for what happens when researchers, perhaps lacking ‘fresh’ material, produce numerous papers from the same source, just as meals from the same salami may be served over many days and weeks.

While salami slicing might be good for supplementing diets, it is not good practice for science: over-publication of data from the same study can inflate the apparent value of the results. Readers who do not read closely may be misled into assuming that multiple results derived from multiple studies rather than a single one.

There have been attempts to define salami slicing. The Committee on Publication Ethics (www.publicationethics.org) provides case examples reporting salami slicing but, unlike other aspects of publication ethics (e.g. plagiarism, duplication, fabrication and authorship), COPE does not give a specific definition or guidance.
Elsevier recently published a document offering brief guidance on salami slicing (http://www.elsevier.com/__data/assets/pdf_file/0004/183406/ETHICS_SS01a_updatedURL.pdf; accessed 11 September 2014). The International Committee of Medical Journal Editors provides some guidance via its website link
to duplicate publication, a close relation of salami slicing (http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/overlapping-publications.html; accessed 10 October 2014). Further guidance can be found on the US Department of Health and Human Services Office of Research Integrity website (http://ori.hhs.gov), where salami slicing is referred to as data fragmentation. Each offers a valuable perspective but, in the absence of consensus on definitions and ethical practice, we thought it would be useful to state both our collective view as Editors and JAN’s position on this issue.

Importantly, it is neither our position nor JAN’s that producing multiple papers from a single study is necessarily wrong. Researchers opting to address broader ‘lumped’ rather than more focussed ‘split’ questions often produce more data. Following this logic, large epidemiological studies with multiple outcomes, for example, may require several papers. Systematic reviews that contain a large volume of primary studies looking at different types and formats of complex interventions and different types of evidence may also warrant more than one report. Additionally, a single study protocol may logically plan for a review paper, an intervention development paper, a primary outcomes paper, and one or more papers reporting secondary analyses or a process evaluation, particularly if a large amount of data were collected. Authors are not expected to force all data from a single study into a single paper. Nor should authors withhold (not publish) data that may be important to science and practice because they did not ‘fit’ within the confines of a primary results paper. It is now considered best practice to develop a publications strategy alongside the study protocol.
JAN’s stance thus recognizes the worst-case scenario of salami slicing, which results in thin papers and over-reporting, but equally recognizes that not all papers produced from a single study are thinly sliced salami. Rather, the qualitative ‘thickness’ of the slice should be considered the defining criterion. Our intention here is to provide some guidance on what is generally deemed acceptable for JAN with regard to multiple publications from single studies.
Good practice

Reporting guidelines are designed to deter authors from unnecessarily splitting research reports, as to do so would contravene the best practice principles for publishing specific types of studies. As a rule of thumb, large, methodologically rigorous, well-conducted, well-reported and data-rich studies are judged to be of higher quality than data-thin studies. High quality studies are more likely to be included in systematic reviews and guidelines that inform patient care and therefore have greater impact. Authors who slice their studies thinly therefore run the risk of ultimately doing themselves a disservice that could harm their academic reputation.

Authors submitting to JAN should consider several practices relevant to the question of ‘how many papers.’ The first involves when to decide how many papers there should be, or decision timing. It is now accepted good research practice that decisions related to publications emanating from research studies should be made at the start of the project and outlined in a protocol (Research Councils UK 2009, UK Research Integrity Office 2009). Whether the research project is small or large, carried out by a lone researcher or by a team, and including doctoral research, the practice applies equally. Although mainly aimed at addressing authorship and avoiding disputes, deciding on the number of papers at the outset of a study also ensures that researchers establish prospectively a publication plan aligned with the study’s intended outcomes. Retrospective decision-making may increase the temptation to spread the data across more papers than are warranted. There can be exceptions to this decision timing, of course, especially where large datasets are subsequently used to address new questions. It is also considered good practice to publish a review or study protocol; the papers reporting study outcomes will then, by definition, report the same methods and any deviations from the protocol.
Moving from the published protocol to the results paper is one of the few exceptions whereby it is considered good practice to replicate content (albeit not always in the same level of detail) in both papers. Other, differently conceived exceptions include agreements such as the one between Wiley and Cochrane, whereby it is possible to publish a review in full in the Cochrane Library, managed by Wiley, and in a more accessible format for a specific target audience in Wiley journals such as JAN. Likewise, the new National Institute for Health Research (NIHR) Journal Library in the UK is another example whereby authors are encouraged to produce journal outputs for different audiences with data drawn from
the comprehensive open access research funder report published in the NIHR Journal Library.

There are also some circumstances that require separate consideration. For example, research funders are increasingly keen that anonymized data collected in primary research projects are made available for secondary analysis by other groups of researchers to produce their own subsequent publications. In the UK, for instance, the Economic and Social Research Council funds a repository of raw anonymized qualitative data (such as sets of interview transcripts) for secondary analysis. Secondary analysis in this context is considered a new study, with a new set of outputs that are separate from any existing studies derived from the same data.

In addition to identifying papers in advance of completing the work, another good practice is determining a priori the focus of those papers. For example, a survey study generally collects data using a set of instruments or questionnaires, as the researcher is almost always concerned with how the concepts measured by the instruments relate to each other and perhaps to one or more outcomes. Thus, the results of that survey study should be published in a single paper. So, for example, if a researcher is collecting data about relationships among stress, personality, burnout and general health, one single results paper is expected, rather than four papers each focused on the results of a separate instrument. Neither should the results be divided for publication into papers based on the gender, age or other demographic characteristics of the participants, unless there is a compelling reason to do so.

Another good practice is to publish the results of all follow-up assessments in a single paper. For example, if a clinical trial has follow-up assessments at 6, 12 and 18 months after an intervention then, without a compelling scientific rationale, all follow-up results should be reported in the same paper.
It is not acceptable to analyse and publish the data at each follow-up period, especially if the purpose of the follow-up was to investigate the sustainability of the intervention. Separate publication of initial, early and late effects may suggest that an intervention shown to be unsustainable is more effective than it really is.

Salami slicing seems unfortunately common in the reporting of clinical trials where a single protocol has been designed for a single intervention with multiple outcomes. For example, a psychosocial intervention may be designed to address pre-operative anxiety, and the outcome measures may include anxiety, stress and a range of postoperative outcomes. Poor publication planning might salami slice the data to publish separate papers on each outcome measure. In addition to producing a stream of papers that may skew the literature in favour of the
intervention and, in the process, boost the CV of the author, such a practice could inflate the risk of a Type I error, where the null hypothesis is rejected in error. Were the statistical methods adapted to take into account the fact that other hypotheses have been tested on the same or related variables from the same dataset or clinical trial, the boundary of significance would drop with each additional hypothesis tested (Jackson et al. 2014). This applies equally to the same hypothesis being tested on one outcome measure at different time points, such as follow-up periods at 6, 12 and 18 months. However, if results that rely on hypothesis tests are published sequentially in a stream of papers, where the data analyses and statistical methods of subsequent papers were not pre-specified in each earlier paper, it is impossible to modify the significance boundary in the earlier publications retrospectively and to revise cases where a null hypothesis was rejected in error under the corrected boundary. Consequently, when later ad hoc papers appear, certain results of all earlier publications in the stream are devalued and undermined, and the likelihood of their conclusions being correct is decreased.

Better practice is to determine prospectively how papers will be prepared based on the study’s theoretical or empirical model. Thus, there may be multiple publications reporting results that are conceptually distinct although studied simultaneously. For example, it may make sense to publish the results of an intervention’s effects on physiological outcomes separately from its effects on psychological outcomes. However, care has to be taken even with this practice, as the slicing can get thin if the data are not rich. Moreover, if the theoretical or research model posited the outcomes in a way that suggested their interdependence, then the results should be published in a single paper.
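To make the shrinking significance boundary concrete, the sketch below works through a Bonferroni adjustment, the simplest of several family-wise correction methods. This is an illustration of the general principle only, with hypothetical numbers – the editorial does not prescribe any particular correction.

```python
# Illustrative sketch only (hypothetical figures): how the significance
# boundary tightens when several outcomes from one trial are tested
# against the same data, using a simple Bonferroni correction.

def bonferroni_threshold(alpha, n_tests):
    """Per-test significance boundary for a family-wise error rate alpha."""
    return alpha / n_tests

family_alpha = 0.05
p_value = 0.03  # hypothetical p-value for one outcome measure

# Published alone, this outcome clears the conventional 0.05 boundary.
print(p_value < family_alpha)  # prints True

# Tested alongside four other outcomes from the same trial, the corrected
# boundary is 0.05 / 5 = 0.01, and the same result is no longer significant.
print(p_value < bonferroni_threshold(family_alpha, 5))  # prints False
```

This is the arithmetic behind the concern above: a paper reporting one outcome in isolation can present as significant a result that would not survive correction across all the hypotheses actually tested on the trial.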
The same issues are also seen with systematic reviews, whereby authors develop a single question, undertake a single search, and then divide the included studies into small groups for publication in different papers. Some authors have also used a ‘hub and spoke’ approach, publishing the outcome of the quality appraisal or risk of bias assessment as a separate paper (or papers) in addition to a further series of papers reporting smaller groups of studies or outcomes of interest. The splitting of a single overarching review into more than one paper may, however, be appropriate and even desirable in some circumstances: for example, if the review starts with the production of a separate knowledge map from which studies are subsequently selected for systematic review; if the review contains hundreds of studies and there is an obvious division to simplify reporting; or if the review design contains multiple methods and separate
streams of synthesis, each of which addresses a different question with different types of evidence.

Ultimately, along with plans for papers and their foci, making sensible publication decisions about any type of primary or secondary research is now inextricably linked with the choice of appropriate journals. Journals such as JAN and the BMJ have introduced new flexibilities to make it easier for authors to publish data-rich empirical papers. At JAN and the BMJ, authors can now request some flexibility in the word limit for very high quality papers. JAN also provides an additional online file facility, free of charge and not counted in the manuscript word count, so that authors can publish lengthy tables and supplemental, more detailed results in a single manuscript and avoid unnecessary multiple publications. The increasing availability of open access publishing opportunities with more generous or no word restrictions should further mitigate the need to split reporting.

Finally, a practice we particularly recommend is author transparency. Although rarely prosecuted, salami slicing (Norman & Griffiths 2008), duplicate publication or data fragmentation (whatever we choose to call it) could form the basis of a finding of scientific misconduct or of copyright infringement. Thus, authors need to be very clear with editors and publishers about what and where their research findings have been presented or previously published. When authors submit a manuscript reporting research results that have been previously reported in part or in full, or that are related to previously published results, this information should be included in their covering letter. Editors may legitimately ask to be provided with copies of the related material to help decide whether the submitted material can be reviewed and (potentially) published. This is especially important where preliminary results have previously been published or where a submission reports later follow-up results.
JAN generally does not consider abstracts published as part of conference proceedings, or research results presented at scientific meetings, to be a concern. The point is that editors do not wish to be surprised to learn that a manuscript we have spent time and effort helping towards publication is really just a thin slice of a ‘sausage’ for which someone else already holds copyright. With early discussion and assessment, everyone benefits: authors and editors can work together to promote and expedite appropriate and ethical publication; reviewers have their work respected and their time is not wasted on reviews that are not used; journals are able to maintain their reputation and prestige, serving their readers by delivering work fairly presented and accounted for; and the worlds of science and health care are enabled to give due weight to research, minimizing the risk of bias or inflation of results – a ‘win–win’ situation for all.
Roger Watson, Rita Pickler, Jane Noyes, Lin Perry, Brenda Roe, Mark Hayter and Irene Hueter

Roger Watson, Editor-in-Chief, JAN
Rita Pickler, Editor, JAN
Jane Noyes, Editor, JAN
Lin Perry, Editor, JAN
Brenda Roe, Editor, JAN
Mark Hayter, Editor, JAN
Irene Hueter, Statistical Editor, JAN
References

Jackson D., Walter G., Daly J. & Cleary M.J. (2014) Editorial: multiple outputs from single studies: acceptable division of findings vs. ‘salami’ slicing. Journal of Clinical Nursing 23(1–2), 1–2. doi: 10.1111/jocn.12439.

Norman I. & Griffiths P. (2008) Duplicate publication and ‘salami slicing’: ethical issues and practical solutions. International Journal of Nursing Studies 45(9), 1257–1260. doi: 10.1016/j.ijnurstu.2008.07.003.

Research Councils UK (2009) Integrity, Clarity and Good Management: RCUK Policy and Code of Conduct on the Governance of Good Research Conduct. Research Councils UK, Swindon.

UK Research Integrity Office (2009) Promoting Good Practice and Preventing Misconduct: Code of Practice for Research. UK Research Integrity Office, London.
© 2014 John Wiley & Sons Ltd