Journal of Plastic, Reconstructive & Aesthetic Surgery (2015) 68, 1485–1490

REVIEW

Laboratory animal research published in plastic surgery journals in 2014 has extensive waste: A systematic review

M. Felix Freshwater*

Voluntary Professor of Surgery, University of Miami School of Medicine, 9155 S. Dadeland Blvd. Suite 1404, Miami, FL 33156-2739, USA

Received 13 April 2015; accepted 12 June 2015

KEYWORDS
Animals; Disease models, animal; Peer review, research; Research/trends; Research design; Systematic review

Summary
Laboratory animal research must be designed in a manner that minimizes bias if it is to yield valid and reproducible results. In 2009, a survey that examined 271 animal studies found that 87% did not use randomization and 86% did not use blinding. This has been called "research waste" because it wasted time and resources. This systematic review measured the quantity of research waste in plastic surgery journals in 2014.
Method: The PRISMA-P protocol was used. SCOPUS and PubMed searches were done for all animal studies published in 2014 in Aesthetic Plast Surg, Aesthet Surg J, Ann Plast Surg, JPRAS, J Plast Surg Hand Surg and Plast Reconstr Surg. These were supplemented by manual searches of the 2014 issues not indexed. Articles were analyzed for descriptions of randomization, randomization methodology, allocation concealment, and blinding of the primary outcome assessment. Corresponding authors who mentioned randomization without elaborating were emailed for details.
Results: 112 of 154 articles met the inclusion criteria. Only 24/112 (21.4%) blinded the primary outcome assessment, and only 28/110 (25.5%) of the articles that required randomization mentioned it. While 12/28 articles clearly described randomizing the intervention, only 4/28 described the method of randomization, and 2/28 mentioned allocation concealment. Only two authors responded and described their randomization methodology.
Conclusion: The quality of plastic surgery laboratory animal research published in 2014 was poor. Use of the National Centre for the Replacement, Refinement & Reduction of Animals in Research's "Animal Research: Reporting In Vivo Experiments" (ARRIVE) Guidelines by authors, and enforcement of them by editors and reviewers, could improve research quality and reduce waste.
© 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

* University of Miami School of Medicine, 9155 S. Dadeland Blvd. Suite 1404, Miami, FL 33156-2739, USA. Tel.: +1 305 670 9988. E-mail address: [email protected].
http://dx.doi.org/10.1016/j.bjps.2015.06.012
1748-6815/© 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.


Introduction

Plastic surgeons use laboratory animals to test hypotheses in order to improve patient care. Research must be designed in a logical manner that minimizes bias if it is to yield valid and reproducible results. Randomized, blinded studies are the best means of accomplishing this. In 2009, the need to improve the quality of all animal research was demonstrated by a survey that examined 271 studies from the US and UK, 90% of which had been funded by either charitable or government sources. The survey found that 87% of studies did not use randomization and 86% did not use blinding.1 This lost opportunity has been called "research waste" because it wasted time and resources.2 The aim of this systematic review was to measure the current quantity of laboratory animal research waste in plastic surgery journals by reviewing studies published in 2014.

Methods

The PRISMA-P protocol for systematic reviews was used.3 Ovid and PubMed searches of the MEDLINE database were done for all animal studies published in 2014 in Aesthetic Plast Surg, Aesthet Surg J, Ann Plast Surg, JPRAS, J Plast Surg Hand Surg and Plast Reconstr Surg. Searches were done on 11/15/2014, 12/31/2014, 03/01/2015 and 03/21/2015. The searches were supplemented by manual searches of all 2014 issues of the six named journals. Editorials, reviews, commentaries, letters and non-hypothesis-driven articles were excluded. Articles were analyzed for descriptions of randomization of the intervention, randomization methodology, allocation concealment, and blinding of the assessment of the primary outcome. The grey literature, consisting of meeting abstracts, was excluded because even when articles described randomization methodology, allocation concealment, and blinding of the primary outcome assessment, the respective abstracts did not. The author extracted the data and repeated the extraction eight weeks later in order to increase the probability of accurate data extraction while decreasing the probability of recall bias. Corresponding authors who mentioned randomization without describing their methodology were emailed for further information. Figure 1 contains the attrition flow chart.

Figure 1 PRISMA-type attrition flow chart for 2014 plastic surgery animal research.

The PubMed search string was:

(((("Journal of plastic, reconstructive & aesthetic surgery: JPRAS"[Journal] OR "Annals of plastic surgery"[Journal]) OR "Aesthetic surgery journal/the American Society for Aesthetic Plastic surgery"[Journal]) OR "Aesthetic plastic surgery"[Journal]) OR ("Plastic and reconstructive surgery"[Journal] OR "Journal of plastic surgery and hand surgery"[Journal])) AND ("2013/12/15"[CRDAT] : "3000"[CRDAT]) AND "animals"[MeSH Terms:noexp]
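For readers who wish to re-run or adapt the search, the string can be submitted programmatically. Below is a minimal sketch using Biopython's Entrez module; the contact email and retmax value are illustrative placeholders, and NCBI indexing may return different counts today than on the original search dates.

```python
# Minimal sketch: submitting the Figure 1 PubMed search via Biopython's
# Entrez E-utilities wrapper. The email and retmax are placeholders;
# the query string itself is copied from the figure legend.
from Bio import Entrez

Entrez.email = "[email protected]"  # NCBI requires a contact address (placeholder)

query = (
    '(((("Journal of plastic, reconstructive & aesthetic surgery: JPRAS"[Journal] '
    'OR "Annals of plastic surgery"[Journal]) '
    'OR "Aesthetic surgery journal/the American Society for Aesthetic Plastic surgery"[Journal]) '
    'OR "Aesthetic plastic surgery"[Journal]) '
    'OR ("Plastic and reconstructive surgery"[Journal] '
    'OR "Journal of plastic surgery and hand surgery"[Journal])) '
    'AND ("2013/12/15"[CRDAT] : "3000"[CRDAT]) '
    'AND "animals"[MeSH Terms:noexp]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # total number of matching records
print(record["IdList"])  # PubMed IDs of the matches (up to retmax)
```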

Results

112 of 154 articles met the inclusion criteria (Appendix 1). The two articles that studied diagnostic test accuracy were not required to randomize, as indicated by the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool, but neither article blinded the evaluators to the comparative test results.4 Of the remaining 110 articles that required randomization, only 28 (25.5%) mentioned it. Only one article described its randomization methodology and had both allocation concealment and blinding of the primary outcome.5 Only two of the contacted authors who mentioned randomization responded and described their methodology. The remaining findings are in Table 1.

Table 1 Results of a Systematic Review of Laboratory Animal Research Published in Plastic Surgery Journals in 2014. Randomization was not required for 2 studies of diagnostic accuracy.

                                               Yes    No
Randomization mentioned                         28    82
Randomization of intervention                   12    16
Randomization methodology described              4    24
Allocation concealment described                 2    26
Blinding of the primary outcome assessment      24    88
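As a simple arithmetic check (added here for illustration, not part of the original analysis), the percentages quoted in the Summary can be reproduced from the Table 1 counts. The denominators are 112 included articles overall, 110 once the two diagnostic-accuracy studies are excluded, and 28 for items that apply only to the articles that mentioned randomization:

```python
# Illustrative check: reproduce the Summary's percentages from Table 1.
# Denominators: 112 included articles; 110 required randomization
# (112 minus the 2 diagnostic-accuracy studies); items that apply only
# to the 28 articles mentioning randomization use 28.
table1 = {
    "Randomization mentioned": (28, 110),
    "Randomization of intervention": (12, 28),
    "Randomization methodology described": (4, 28),
    "Allocation concealment described": (2, 28),
    "Blinding of the primary outcome assessment": (24, 112),
}

for item, (yes, denominator) in table1.items():
    print(f"{item}: {yes}/{denominator} = {100 * yes / denominator:.1f}%")

# Output includes 28/110 = 25.5% and 24/112 = 21.4%, matching the Summary.
```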


Table 2 Recommended Reporting Guidelines for Laboratory Animal Research. Authors insert relevant section/paragraph number for each checklist item.


Discussion

Laboratory animals are used to test hypotheses in order to understand normal biology and to enhance patient care by clarifying disease pathogenesis, improving diagnostic measures, and determining the safety and effectiveness of interventions. Because animal research is prospective, it provides the ideal opportunity to test hypotheses in a logical manner that minimizes bias so as to yield valid and reproducible results. Randomized, blinded studies are the best means of accomplishing this. As every article reviewed had more than one author, it was feasible for the research to have been designed with proper allocation concealment and outcome blinding to minimize bias, yet only one article did so. The low response, with only two corresponding authors providing details of their randomization methodology, bodes poorly for other researchers who wish to replicate or reproduce the authors' results.

Concerned that there were substantial problems with replication and reproducibility in experimental spinal cord research, the National Institute of Neurological Disorders and Stroke (NINDS) launched a program to independently verify published studies of experimental interventions that claimed to reduce injury, improve recovery or enhance axon regeneration after spinal cord injury. The program found "a surprising preponderance of failures to replicate", and the causes included unintended and unrecognized bias.6 Similarly, failures to replicate studies have been reported for animal models of stroke and multiple sclerosis.7,8 In 2012, NINDS held a workshop and recommended that, at a minimum, studies should report on sample-size estimation, whether and how animals were randomized, whether investigators were blind to the treatment, and the handling of data.9

In 2009, Kilkenny et al. published a systematic review of 271 papers in all fields that used laboratory animals for research; 90% reported funding, and 90% of that funding came from UK or US governmental agencies or charities. Only 32/271 (11.8%) of the studies reported randomization, 12/271 (4.4%) reported random allocation, 3/271 (1.1%) reported randomization methodology, and 16/271 (5.9%) reported blinding of assessments.10 In 2010, Kilkenny et al. proposed guidelines similar to the clinical CONSORT guidelines in order to improve the quality of laboratory animal research. The guidelines, referred to as ARRIVE (Animals in Research: Reporting In Vivo Experiments), consist of a 20-item checklist of the minimum information that should be included, such as the number and specific characteristics of the animals used; details of housing and husbandry; and the experimental, statistical, and analytical methods, including details of methods used to reduce bias such as randomisation and blinding (Table 2).11 The ARRIVE guidelines have been endorsed by over 450 journals, including those published by PLoS, BioMed Central, and the Nature Group.12 Major funders, including the Wellcome Trust, the Biotechnology and Biological Sciences Research Council, and the Medical Research Council, have also endorsed ARRIVE. Recently, Baker et al. measured the quality of papers on experimental autoimmune encephalomyelitis (EAE) in rodents in journals that endorsed ARRIVE, comparing the two-year periods before and after endorsement. They found little significant improvement, suggested that authors, reviewers and editors were ignoring the guidelines, and recommended that journals develop policies to monitor compliance.13

This systematic review has several limitations. One could argue that because only six plastic surgery journals were reviewed, these findings may not apply to research that plastic surgeons publish elsewhere. However, the purpose of this review was to measure the quality of research published in plastic surgery journals rather than the quality of research published by plastic surgeons. In addition, as other systematic reviews have shown that there is a global problem with the quality of laboratory research, it is less likely that plastic surgeons reserve their higher quality research for non-plastic surgery journals. Another seeming limitation is that only one person extracted the data; it has been shown that there is a 21.7% difference in the frequency of errors between a single reviewer and two reviewers.14 To overcome this, the author extracted the data on two occasions eight weeks apart, as an eight-week separation has been shown to decrease the possibility of recall bias.15 Despite there being a single reviewer, the results were so striking that even if the author had missed 100% of the data points, there still would have been a profound lack of description of randomization, randomization methodology, allocation concealment and blinding of the primary outcome assessment. Meeting abstracts were reviewed but were not fruitful, as even articles that described their methodology in the full text did not do so in their abstracts. Finally, the search by publication date may not be reproducible, as articles published electronically may appear before they appear in print, and article indexing may be delayed, as seen in the lack of uniformity in the Pdate column in Appendix 1.

Conclusion

The quality of plastic surgery laboratory animal research published in 2014 was poor. The National Centre for the Replacement, Refinement & Reduction of Animals in Research produced the "Animal Research: Reporting In Vivo Experiments" (ARRIVE) Guidelines. Awareness of these guidelines by authors, and enforcement of them by editors and reviewers, could improve research quality and reduce waste.

Ethical approval

Not required.

Funding

None.

Competing interests

The author serves on the editorial boards of JPRAS and Annals of Plastic Surgery and has been a reviewer for Plastic and Reconstructive Surgery and the Aesthetic Surgery Journal.

Appendix A. Supplementary data

Supplementary data related to this article can be found at http://dx.doi.org/10.1016/j.bjps.2015.06.012.

References

1. Kilkenny C, Parsons N, Kadyszewski E, et al. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS One 2009;4:e7824.
2. Glasziou P, Altman DG, Bossuyt PM, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet 2014;383:267–76.
3. Shamseer L, Moher D, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ 2015;349:g7647.
4. Whiting PF, Rutjes AW, Westwood ME, et al., QUADAS-2 Group. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011;155:529–36.
5. Krammer CW, Ibrahim RM, Hansen TG, Sørensen JA. The effects of epinephrine and dobutamine on skin flap viability in rats: a randomized double-blind placebo-controlled study. J Plast Reconstr Aesthet Surg 2015;68:113–9.
6. Steward O, Popovich PG, Dietrich WD, Kleitman N. Replication and reproducibility in spinal cord injury research. Exp Neurol 2012;233:597–605.
7. Cumberland Consensus Working Group, Cheeran B, Cohen L, et al. The future of restorative neurosciences in stroke: driving the translational research pipeline from basic science to rehabilitation of people after stroke. Neurorehabil Neural Repair 2009;23:97–107.
8. Vesterinen HM, Sena ES, Ffrench-Constant C, Williams A, Chandran S, Macleod MR. Improving the translational hit of experimental treatments in multiple sclerosis. Mult Scler 2010;16:1044–55.
9. Landis SC, Amara SG, Asadullah K, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature 2012;490:187–91.
10. Kilkenny C, Parsons N, Kadyszewski E, et al. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS One 2009;4:e7824.
11. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol 2010;8:e1000412.
12. ARRIVE: Animal Research Reporting In Vivo Experiments, ARRIVE endorsers. https://www.nc3rs.org.uk/arrive-animalresearch-reporting-vivo-experiments#journals [accessed 08.06.15].
13. Baker D, Lidster K, Sottomayor A, Amor S. Two years later: journals are not yet enforcing the ARRIVE guidelines on reporting standards for pre-clinical animal studies. PLoS Biol 2014;12:e1001756.
14. Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol 2006;59:697–703.
15. Gatewood R, Field HS, Barrick M. Human resource selection. 7th ed. South Boston: Cengage Learning; 2010. p. 120.
