Clinica Chimica Acta 456 (2016) 49–55


Publication outcome of abstracts presented at the AACC annual meeting

Dina N. Greene a, Andrew R. Wilson b, Nicole M. Bailey c, Robert L. Schmidt d,⁎

a Department of Laboratory Medicine, University of Washington, Seattle, WA, United States
b School of Nursing, University of Utah Health Sciences Center, Salt Lake City, UT, United States
c Department of Family and Preventive Medicine, University of Utah Health Sciences Center, Salt Lake City, UT, United States
d Department of Pathology and ARUP Laboratories, University of Utah Health Sciences Center, Salt Lake City, UT, United States

Article history: Received 26 January 2016; Received in revised form 16 February 2016; Accepted 25 February 2016; Available online 27 February 2016.

Abstract

Background: Failure to publish study results causes duplication of effort and is a significant source of waste. It can also distort the evidence base, leading to misallocation of resources and medical harm. Failure to publish is commonly studied by measuring the conversion rate of meeting abstracts or the publication rate of registered trials, but it has not been studied in clinical chemistry. The objective of this study was to determine the abstract conversion rate in clinical chemistry.

Methods: For the set of abstracts published from the 2011 annual meeting of the American Association for Clinical Chemistry (AACC), we determined which had converted to full publications and which had not. We used 3 methods to match publications to abstracts: 1) a survey sent to the corresponding authors of abstracts, 2) a web scrape of Google Scholar and PubMed, and 3) a manual search using Scopus. Publication rates were compared by topic, country of corresponding author, institution type, and award recognition.

Results: Matching publications were found for 38% (95% CI: 34–42%) of the abstracts. The acceptance rate for submitted manuscripts was 34% (95% CI: 28–43%) among those who responded to the survey. Publication rates varied by topic (range 13% to 59%); rates from academic institutions were higher than those from commercial institutions (42% vs 16%, p < 0.001). The publication rate of abstracts recognized “with distinction” was significantly greater than the publication rate of non-winners (68% vs 37%, p = 0.001).

Conclusion: A significant proportion of abstracts presented at the AACC national meeting are not followed by full publication.

1. Introduction

Publication bias arises when published studies differ systematically from unpublished studies, and it can occur through a variety of mechanisms [1]. Most commonly, publication bias occurs when the probability of publication depends on the study findings [2]. For example, publication probability may depend on the direction or effect size of the result. Indeed, studies with favorable results are more likely to be published than studies with negative results, directly biasing the published evidence [3,4]. Because guidelines and clinical decisions are often based on published studies, publication bias can have a negative impact on population health and resource utilization. Further, failure to publish negative results can cause duplication of effort and wasted resources. Understanding, measuring, and reducing publication bias are integral to scientific progress.

⁎ Corresponding author at: Department of Pathology, University of Utah Health Sciences Center, Salt Lake City, UT, United States. E-mail address: [email protected] (R.L. Schmidt).

http://dx.doi.org/10.1016/j.cca.2016.02.019

Several different methods have been developed to study publication bias. A conceptually straightforward approach is to compare the results of published and unpublished studies to identify differences; however, such studies are impractical because it is very difficult to adequately collect and represent unpublished studies. A more common method is to follow cohorts of studies from inception to publication and to determine whether publication rates are related to study properties. Studies using this approach generally measure publication rates of registered studies or of abstracts presented at meetings. Two Cochrane reviews (covering studies prior to 2003 and 2007, respectively) showed that approximately half of clinical trial abstracts presented at national meetings are converted to publications [5,6]. Similarly, a later meta-analysis (covering studies published through November 2013) showed that less than half of studies included in trial registries or approved by research ethics committees are published [7].

A substantial body of literature documents publication rates in various biomedical fields. These studies show considerable variation in publication rates between disciplines and even among subdisciplines [8–12]. As a result, the data are not generalizable across specialties, and the threat of bias must be evaluated for each specific discipline.


Here, we conducted a study to estimate the publication rate and the factors associated with publication of abstracts presented at the 2011 National Meeting of the American Association for Clinical Chemistry (AACC). We selected the 2011 meeting because others have shown that most studies are published within a 5-year window [13].

2. Methods

2.1. Abstract selection process

The AACC Annual Meeting and Clinical Laboratory Expo is a 5-day event that draws over 15,000 participants. It is the largest clinical chemistry-focused scientific meeting in North America. Attendees have the opportunity to submit abstracts between September and February of each year. Abstracts are required to include data that have not been previously published. In 2011, a total of 926 abstracts were submitted and 652 were accepted. The abstracts were anonymously reviewed by two independent qualified reviewers and scored on a scale of 1–4 based on merit (with 1 as the low and 4 as the high score). There are no specific criteria that define an abstract score, but instructions to reviewers suggest a holistic approach that includes research quality, writing, applicability, novelty, and impact. Additionally, reviewers are required to comment on deficiencies (e.g., commercial bias, out of scope, inadequate study design) if an abstract is given a score of 2 or less. Members of the meeting organizing committee resolved large discrepancies between reviewers' scores (~1% of all reviewed abstracts). Abstracts with an average score < 2 were not accepted. The abstracts with a score of 3.5 or greater were presented to the National Academy of Clinical Biochemistry (NACB) board of directors, who chose

subsequent reviewers to rank the quality of the submissions; these accepted abstracts were evaluated in a second round of reviews to identify outstanding abstracts. The NACB designated these abstracts (n = 28) as “distinguished abstracts.” The total number of abstracts selected as distinguished was independent of the number of abstracts submitted or accepted to the AACC annual meeting and was purely merit based. More specifically, the quantity of abstracts recognized with distinction was not designed to meet or exceed a numeric quota.

2.2. Publication matching

The full abstracts were obtained from the annual abstract issue of Clinical Chemistry [14]. AACC provided a Microsoft Word document containing the title, list of authors, author affiliations, contact details for the corresponding author, keywords, abstract number, and poster session title (i.e., abstract category). These data were transferred to an Excel spreadsheet using an R script. The abstracts were classified by country (according to corresponding author) and institution type (commercial, academic, or commercial/academic). Abstracts with authors from both academic and commercial institutions were classified as commercial/academic. Mayo Medical Labs and ARUP Laboratories were classified as academic labs. For each abstract (n = 652), we made three attempts to identify a corresponding publication: 1) email survey, 2) automated web-scraping followed by manual review, and 3) manual review (Fig. 1).

2.2.1. Email survey

An email survey was sent to the corresponding author of each abstract on January 26, 2015.

Fig. 1. Abstract matching process. A match was defined as an accepted abstract that resulted in a publication. Three approaches were taken to identify matches: an email survey, a web-scraping tool, and a manual search.


A second email was sent to nonresponders on February 2, 2015. Authors were asked if they had submitted a manuscript for publication and, if so, whether it had been accepted. If accepted, they were asked to supply the citation. The survey did not ask for information on the submission history (number of attempts and the target journals) or the rationale for not developing the abstract into a manuscript.

2.2.2. Web-scraping

A web-scraping tool was developed as an R script, using both the RISmed package and custom packages to query PubMed and Google Scholar. This tool provided a method to batch-submit a list of abstracts (with publication date restrictions) and find potential matches in these publication repositories. These methods were supplemented by a manual review by two authors (AW, NB), who used PubMed and Google Scholar directly to identify matching publications. Although Google Scholar scrapers are being developed by our group and others, Google limits the number of hits returned and blocks queries for a time, so our web-scraping focused primarily on PubMed. Web-scraping was performed once. The code for the PubMed-scraping tool is presented in Appendix A; a simplified sketch is given at the end of this section.

2.2.3. Manual search

Abstracts that were not matched by the survey or web-scraping were evaluated by a third author (RLS) using Scopus and Google Scholar. The Scopus search was based on authors (first, last, or recognized key author); the Google Scholar search was based on title fragments and/or keywords. Abstract-to-publication matching was completed by comparing the meeting abstract against the abstract of a candidate article and was subjective, based on author experience. For example, we considered an abstract published if it described a method that was used in a subsequent clinical article. We were also aware that abstracts sometimes present preliminary results and that the title and authors of a subsequent manuscript could be somewhat different.

2.2.4. Inclusion and exclusion criteria

The web-scraping and manual search only considered matches for articles published after 2010. Only full-length articles or scientific letters indexed in PubMed, Scopus, or Google Scholar were considered as published articles. We selected a search period that included the year prior to the meeting to allow for the possibility that some manuscripts were published before the meeting.
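As a rough illustration of the batch-query approach described in Section 2.2.2, the sketch below shows how a single abstract might be matched against PubMed with the RISmed package. This is not the tool used in the study (that code appears in Appendix A); the function name match_abstract, its arguments, and the example query terms are illustrative assumptions.

# Minimal sketch of a RISmed-based PubMed query for one abstract.
# Assumes: RISmed is installed; 'first_author' and 'title_words' are
# illustrative placeholders, not fields from the actual study data.
library(RISmed)

match_abstract <- function(first_author, title_words) {
  # Combine a key author with distinctive title fragments into a query.
  query <- paste0(first_author, "[AU] AND ", title_words, "[TIAB]")
  # Restrict to the study's search window (year before the meeting onward).
  res <- EUtilsSummary(query, type = "esearch", db = "pubmed",
                       datetype = "pdat", mindate = 2010, maxdate = 2015,
                       retmax = 50)
  if (QueryCount(res) == 0) return(NULL)
  records <- EUtilsGet(res)
  # Return candidate PMIDs and titles for manual review.
  data.frame(pmid = PMID(records), title = ArticleTitle(records),
             stringsAsFactors = FALSE)
}

# Example call (hypothetical query terms); pause between calls when
# batch-querying to respect NCBI rate limits:
# hits <- match_abstract("Greene DN", "publication outcome")
# Sys.sleep(1)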


Table 1
Target journals for abstracts from the 2011 AACC national meeting. NA = not available.

Journal                                      Number of publications   Impact factor (2014)
Clin Chimica Acta                                     22                     2.8
Clinical Chemistry                                    19                     7.9
Clinical Chemistry and Laboratory Medicine            13                     2.7
Clinical Biochemistry                                 10                     2.3
PLOS One                                               7                     3.2
American Journal of Clinical Pathology                 5                     2.5
Journal of Clinical Laboratory Analysis                3                     1.0
Leukemia                                               3                    10.4
Molecular Biology Reports                              3                     2.0
Point of Care                                          3                     NA
Archives of Pathology and Lab Medicine                 2                     2.8
BMC Research Notes                                     2                     NA
Gene                                                   2                     2.1
Hepatobiliary & Pancreatic Diseases                    2                     1.5
Int Journal of Hypertension                            2                     4.7
Journal of Urology                                     2                     4.4
Lab Medicine Online                                    2                     NA
Therapeutic Drug Monitoring                            2                     2.4
Tumor Biology                                          2                     3.6
Other (n = 116)                                      141                     NA

2.2.5. Statistical analysis

Heterogeneity in publication rates across multiple categories was assessed using meta-analysis. We obtained a pooled estimate of the publication rate (across categories) and assessed heterogeneity using the I² statistic. Meta-analysis was performed using R; a sketch is given below.
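The following is a minimal sketch of this analysis, assuming the 'meta' package (the text states only that R was used, so the package choice is an assumption, and the per-topic counts are illustrative placeholders rather than the study data).

# Pooled publication rate and heterogeneity across topics (illustrative data).
library(meta)

topics <- data.frame(
  topic = c("Cancer/tumor markers", "Cardiac markers", "Lab management"),
  event = c(30, 22, 4),    # abstracts with a matching publication
  n     = c(51, 40, 30)    # abstracts presented in the topic
)

m <- metaprop(event = event, n = n, studlab = topic, data = topics)
summary(m)  # reports the pooled proportion and the I2 statistic with its p-value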

3. Results

Fig. 1 summarizes the screening process used to identify abstracts with or without a corresponding publication match. Of the 652 accepted abstracts, 160 authors (25.5%) responded to the initial or follow-up email surveys, with 62 of the respondents (37%) having submitted an article for publication. Thirty-four percent of those submitted were accepted for publication. The web-scraping tool confirmed all matches documented by the survey and identified an additional 105 matches. The final, manual search identified 82 matches. Combined, the data suggest that of the 652 abstracts accepted to the AACC 2011 annual meeting, 245 (38%) were ultimately published.

Abstracts were published in 129 different journals (Table 1). Abstracts were most frequently published in Clin Chimica Acta (n = 21), Clinical Chemistry (n = 14), Clinical Chemistry and Laboratory Medicine (n = 13), and Clinical Biochemistry (n = 9). Ninety-three percent of the abstracts were published within three years of the meeting (Fig. 2). The publication rate varied considerably by topic (13%–59%; Fig. 3) and showed moderate, statistically significant heterogeneity (I² = 49%, p = 0.0067). The lowest publication rates were seen in the more practical aspects of clinical chemistry: automation/computer applications, mass spectrometry validation/applications, and laboratory management. The highest publication rates were observed in subspecialties with more direct clinical applicability: cancer/tumor markers, clinical study outcomes, and cardiac markers.

Abstracts accepted to the 2011 AACC annual meeting represented 65 countries (Fig. 4), with the majority of abstracts (94%) received from 20 countries. There were 11 countries that submitted at least 10 abstracts. Among this group, the publication rate ranged from 20% (Brazil) to 55% (China). The U.S. accounted for 45% of all accepted abstracts and had a publication rate of 33%.

Academic institutions were responsible for the majority of accepted abstracts (n = 483; 74%); industry or commercial firms made up 18% (Fig. 5).
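The overall conversion estimate above (245 of 652; 38%) can be checked directly from the reported counts in base R; an exact binomial confidence interval lands close to the reported 34–42%.

# Exact (Clopper-Pearson) 95% CI for the overall conversion rate.
binom.test(245, 652)$conf.int  # roughly 0.34 to 0.41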

Fig. 2. Number of publications by year for abstracts presented at the 2011 AACC annual meeting that were converted to manuscripts.


Fig. 3. Publication rates vary between clinical chemistry subspecialties.

Collaborative submissions between academic and commercial institutions accounted for 8% of accepted abstracts. Publication rates paralleled the abstract acceptance results, ranging from 16% (commercial institutions) to 55% (non-profit laboratories).

Abstracts awarded “with distinction” by the NACB were significantly more likely to be published than those that were not selected (Table 2; Fisher's exact test, p = 0.001). Overall, the publication rate for NACB award-winning abstracts was 68%, with the majority of these abstracts originating from academic institutions.

Fig. 4. Publication rates vary by country. The top 20 countries and the corresponding number of abstracts presented at the meeting are listed.


Fig. 5. Publication rates are higher when the abstract originates from an academic institution or non-profit laboratory as compared to commercial submissions.

None of the industry-submitted abstracts were selected for recognition by the NACB review process.

4. Discussion

Our study found that 38% of abstracts presented at the 2011 AACC National Meeting were published within four years. We found that the publication rate varied by topic, country, and author institution.

Failure to publish is a well-documented problem in the biomedical literature. Approximately half of clinical trials are never published [7,15–19]. Our study was patterned after other studies on publication rates of scientific abstracts. A meta-analysis of publication rates for biomedical abstracts found that the average publication rate was 44.5% [5]. We are aware of two studies in pathology and laboratory medicine on abstract conversion rates. Song et al. found that the conversion rate for abstracts in anatomic pathology was 36% [20]. Korevaar et al. found that 54% of test accuracy studies registered at clinicaltrials.gov were subsequently published [21]. Thus, our findings are consistent with the publication rates of abstracts from other biomedical fields and, particularly, pathology.

Studies on abstract publication rates generally find that publication is associated with study characteristics. For example, a Cochrane Review found that publication rates are associated with positive results (RR = 1.28, 95% CI 1.15 to 1.42) [5]. In our study, we found that publication rates varied by topic and by institution type. For example, studies focused on biomarkers and clinical outcomes were approximately 40% more likely to be published than those focused on management, mass spectrometry applications, and automation or computer applications. The publication rate from academic institutions was greater than the publication rate of abstracts from commercial institutions. Our findings are consistent with other studies that found that abstracts originating from academic institutions have a greater conversion rate than abstracts originating from commercial enterprises [22,23]. This result is not surprising because publishing is highly rewarded in academic institutions.

Table 2
Cross-tabulation of institution type against National Academy of Clinical Biochemistry (NACB) awards.

Institution type              NACB award winner?          Total
                              No            Yes
Commercial                    117           0             117
Academic                      456           27            483
Joint academic/commercial      51           1              52
Total                         624           28            652
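As an illustration, the comparison of distinguished versus non-distinguished abstracts reported in Section 3 can be reproduced approximately in base R. The 2 × 2 counts below are reconstructed from the reported rates (68% of the 28 distinguished abstracts and 37% of the 624 others) and are therefore approximations, not the authors' raw data.

# Fisher's exact test on approximate counts derived from the reported rates.
pub_by_award <- matrix(c(19, 231,   # published:     distinguished, other
                          9, 393),  # not published: distinguished, other
                       nrow = 2, byrow = TRUE)
fisher.test(pub_by_award)  # two-sided p-value on the order of 0.001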

Studies with positive results tend to be published more frequently than studies with negative results. This form of publication bias has been shown to occur across a wide range of disciplines [3,5,7,24]. The value of negative results may depend on the type of research. For example, negative data can be valuable for interpreting the results of clinical trials; however, publication bias has been demonstrated in clinical trials [7,24].

Studies on publication bias typically report the number of studies (registered clinical trials, studies approved by Institutional Review Boards, or studies presented as abstracts) that are published in any indexed journal, without reference to impact factor. Publication outlets could provide an additional source of bias. For example, studies with positive results might be preferentially published in relatively high-impact journals, whereas studies with negative results get published in low-impact journals. Differential selection by impact factor could add an additional source of bias; however, to our knowledge, this source of bias has not been studied.

We did not survey authors to identify the reasons why abstracts were not published. Our data suggest that authors who submit manuscripts have a high success rate. Among those who responded to our survey, 34% of those who submitted manuscripts were successful; however, this rate could be biased because successful publishers may be more likely to respond to a survey. Thus, the abstract conversion rate is mostly determined by the decisions of authors rather than by peer reviewers and journal editors [24–27]. Several studies have investigated reasons why scientists do not submit manuscripts. The most common reason is a lack of time [25,28]. Other reasons include difficulties in relationships with coauthors and a low priority for publication [25,29]. Authors undoubtedly perform a cost–benefit analysis and use various criteria to evaluate the likelihood of acceptance and/or the impact of a manuscript. For example, Weber et al. showed that abstracts accepted to meetings have a higher conversion rate than rejected abstracts [30]. Similarly, abstracts that are selected for oral presentation or that receive higher quality scores have a higher conversion rate [20,23,31–35]. Abstract rejection, selection for an oral presentation, and quality scores are most likely used as signals by authors to determine whether the effort to produce a manuscript is likely to be rewarded. We did not investigate reasons why authors chose not to pursue publication. This is a limitation of our study; however, there is little reason to believe that the factors influencing clinical chemists to publish differ from those influencing other investigators.

Failure to publish has several undesirable consequences and most commonly leads to duplication of effort and wasted resources. Lack of information dissemination affects the evidence base, leading to suboptimal medical decisions and patient harm [36]. Abstracts accepted to scientific meetings are generally available and can be helpful, but they provide incomplete information. Thus, it is difficult to appraise the quality of research or to reproduce or validate studies based on the information provided. This is because abstracts often present preliminary results and, as a result, final published results often differ from those published in abstracts [37–39].


Finally, abstracts are more difficult to locate because they are often not indexed in electronic databases. Expanding abstracts into published manuscripts reduces these deficiencies.

Several approaches have been proposed to improve the reporting and publication rates of clinical trials [18]. These proposals include research sponsor guidelines, prospective study registration and mandatory trial result availability, changes to the peer review process, a right to publication, electronic publication, and open access; however, it is unclear whether any of these measures are effective [36]. In contrast, others have questioned whether increased publication is beneficial and suggest that investigators and editors should focus on quality rather than quantity [40,41]. The scientific literature is exploding, and a significant proportion of scientific manuscripts are never cited. This raises the question of whether more publication is better. Selective publication leads to the “file drawer problem,” which results in publication bias. Universal publication leads to the “cluttered office problem,” in which it is difficult to identify good papers. Some suggest that selective reporting is preferable because science is self-correcting and publication bias is corrected over time [42]. Others counter this view and believe that it is best to make all the information available [43].

A limitation of publication rate studies such as this one is ensuring that all corresponding publication matches are detected. We used three levels of review (email survey, web-scraping, and manual review) and searched three databases (Google Scholar, PubMed, and Scopus) to maximize match retrieval. Most studies on abstract publication rates use a combination of data-mining and manual database searches. We used those techniques but, in addition, conducted an email survey that enabled us to test the sensitivity of our methods. Our search techniques identified 100% of the manuscripts reported to us in the email survey, suggesting that our methodology was quite sensitive. In addition, we supplemented automated methods (web-scraping) with professional judgment. For example, we tried to identify situations in which methods were later incorporated into clinical studies or in which a subsequent article was based on an abstract but substantively changed. However, given the lack of a gold-standard software tool or approach to identify matches, we cannot rule out that our estimate of the publication rate is falsely low.

Our results use abstracts from a single year and from one meeting. The publication rate may vary from year to year, and different meetings may have different rates. For example, the publication rate from the WorldLab conference, which is sponsored by the International Federation of Clinical Chemists (IFCC), may differ from the publication rate of a conference sponsored by the AACC. Thus, our results may not be representative. As noted above, however, our findings are consistent with those from a wide range of clinical disciplines. Still, it would be useful to expand the study to more years and other meetings.

We did not classify abstracts with respect to the direction of the findings (positive, neutral, negative). Consequently, we were unable to determine whether selective publishing is likely to cause bias in the published literature.

We conducted our study four years after the 2011 AACC meeting. It is conceivable that some studies are still in progress or in review. This would cause us to underestimate the publication rate; however, research shows that approximately 90% of abstracts that convert do so within four years [5]. Thus, the degree of underestimation should be relatively small.

Our study shows that, as in other clinical fields, underreporting is common in clinical chemistry. This represents a potential source of waste and creates the potential for a distorted evidence base. Indeed, some consider underreporting a form of scientific misconduct [38]. Steps should be taken to encourage full publication of study results. Publication bias in all its forms, including the barriers to publishing valuable data, should be highlighted by training programs and supporting organizations.

Acknowledgments

We thank the AACC and NACB for providing data for this study. We also wish to acknowledge the assistance of Arbab Ameen, who wrote the script to transfer the abstract data from Word to Excel.

Appendix A. PubMed citation-scraping code

References

[1] F. Song, L. Hooper, Y.K. Loke, Publication bias: what is it? How do we measure it? How do we avoid it? Open Access J. Clin. Trials 5 (2013) 51–81.
[2] K. Dickersin, The existence of publication bias and risk factors for its occurrence, JAMA 263 (1990) 1385–1389.
[3] F. Song, S. Parekh-Bhurke, L. Hooper, Y.K. Loke, J.J. Ryder, A.J. Sutton, et al., Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies, BMC Med. Res. Methodol. 9 (2009).
[4] L. Sridharan, P. Greenland, Editorial policies and publication bias: the importance of negative studies, Arch. Intern. Med. 169 (2009) 1022–1023.
[5] R.W. Scherer, P. Langenberg, E. von Elm, Full publication of results initially presented in abstracts, Cochrane Database Syst. Rev. (2007) Mr000005.
[6] S. Hopewell, K. Loudon, M.J. Clarke, A.D. Oxman, K. Dickersin, Publication bias in clinical trials due to statistical significance or direction of trial results, Cochrane Database Syst. Rev. (2009) Mr000006.
[7] C. Schmucker, L.K. Schell, S. Portalupi, P. Oeller, L. Cabrera, D. Bassler, et al., Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries, PLoS One 9 (2014).
[8] Z.J. Daruwalla, S.S. Huq, K.L. Wong, P.Y. Nee, D.P. Murphy, “Publish or perish”: presentations at annual national orthopaedic meetings and their correlation with subsequent publication, J. Orthop. Surg. Res. 10 (2015).
[9] G. Amarilyo, J.M.P. Woo, D.E. Furst, O.L. Hoffman, R. Eyal, C. Piao, et al., Publication outcomes of abstracts presented at an American College of Rheumatology/Association of Rheumatology Health Professionals annual scientific meeting, Arthritis Care Res. 65 (2013) 622–629.
[10] R. Autorino, G. Quarto, M. De Sio, E. Lima, E. Quarto, R. Damiano, et al., Fate of abstracts presented at the World Congress of Endourology: are they followed by publication in peer-reviewed journals? J. Endourol. 20 (2006) 996–1001.
[11] R. Autorino, G. Quarto, G. Di Lorenzo, F. Giugliano, C. Quattrone, F. Neri, et al., What happens to the abstracts presented at the Societè Internationale d'Urologie meeting? Urology 71 (2008) 367–371.
[12] A.A.B. Jamjoom, M.A. Hughes, C.K. Chuen, R.L. Hammersley, I.P.Z. Fouyas, Publication fate of abstracts presented at Society of British Neurological Surgeons meetings, Br. J. Neurosurg. 29 (2015) 164–168.
[13] S. Hopewell, M. Clarke, L. Stewart, J. Tierney, Time to publication for results of clinical trials, Cochrane Database Syst. Rev. (2007) Mr000011.
[14] American Association for Clinical Chemistry, Abstracts of the scientific posters, 2011 AACC annual meeting, Clin. Chem. 57 (2011) A1–A235.
[15] F.T. Bourgeois, S. Murthy, K.D. Mandl, Outcome reporting among drug trials registered in clinicaltrials.gov, Ann. Intern. Med. 153 (2010) 158–166.
[16] J.S. Ross, G.K. Mulvey, E.M. Hines, S.E. Nissen, H.M. Krumholz, Trial publication after registration in clinicaltrials.gov: a cross-sectional analysis, PLoS Med. 6 (2009).
[17] F.T. van de Wetering, R.J.P.M. Scholten, T. Haring, M. Clarke, L. Hooft, Trial registration numbers are underreported in biomedical publications, PLoS One 7 (2012).
[18] F. Song, S. Parekh, L. Hooper, Y.K. Loke, J. Ryder, A.J. Sutton, et al., Dissemination and publication of research findings: an updated review of related biases, Health Technol. Assess. 14 (2010) 1–220.
[19] M.J. Galsworthy, D. Hristovski, L. Lusa, K. Ernst, R. Irwin, K. Charlesworth, et al., Academic output of 9 years of EU investment into health research, Lancet 380 (2012) 971–972.
[20] J. Song, M. Li, D.H. Hwang, R.W. Ricciotti, A. Chang, The outcome of abstracts presented at the United States and Canadian Academy of Pathology annual meetings, Mod. Pathol. 23 (2010) 682–685.
[21] D.A. Korevaar, E.A. Ochodo, P.M.M. Bossuyt, L. Hooft, Publication and reporting of test accuracy studies registered in clinicaltrials.gov, Clin. Chem. 60 (2014) 651–659.
[22] S. Castaldi, M. Giacometti, W. Toigo, F. Bert, R. Siliquini, Analysis of full-text publication and publishing predictors of abstracts presented at an Italian public health meeting (2005–2007), BMC Res. Notes 8 (2015) 492.
[23] J.B. Durinka, P.N. Chang, J. Ortiz, Fate of abstracts presented at the 2009 American Transplant Congress, J. Surg. Educ. 71 (2014) 674–679.
[24] M. Van Lent, J. Overbeke, H.J. Out, Role of editorial and peer review processes in publication bias: analysis of drug trials submitted to eight medical journals, PLoS One 9 (2014).
[25] R.W. Scherer, C. Ugarte-Gil, C. Schmucker, J.J. Meerpohl, Authors report lack of time as main reason for unpublished research presented at biomedical conferences: a systematic review, J. Clin. Epidemiol. 68 (2015) 803–810.
[26] K. Dickersin, I. Chalmers, Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO, J. R. Soc. Med. 104 (2011) 532–538.
[27] S. Hopewell, K. Dickersin, M.J. Clarke, A.D. Oxman, K. Loudon, Publication bias in clinical trials, Cochrane Database Syst. Rev. (2007).
[28] E.C. MacKinney, R.H. Chun, L.D. Cassidy, T.R. Link, C.G. Sulman, J.E. Kerschner, Factors influencing successful peer-reviewed publication of original research presentations from the American Society of Pediatric Otolaryngology (ASPO), Int. J. Pediatr. Otorhinolaryngol. 79 (2015) 392–397.
[29] S. Sprague, M. Bhandari, P.J. Devereaux, M.F. Swiontkowski, I.P. Tornetta, D.J. Cook, et al., Barriers to full-text publication following presentation of abstracts at annual orthopaedic meetings, J. Bone Joint Surg. Ser. A 85 (2003) 158–163.
[30] E.J. Weber, M.L. Callaham, R.L. Wears, C. Barton, G. Young, Unpublished research from a medical specialty meeting: why investigators fail to publish, J. Am. Med. Assoc. 280 (1998) 257–259.
[31] W.M. Gilbert, R.M. Pitkin, Society for Maternal-Fetal Medicine meeting presentations: what gets published and why? Am. J. Obstet. Gynecol. 191 (2004) 32–35.
[32] J.C. Dumville, E.S. Petherick, N. Cullum, When will I see you again? The fate of research findings from international wound care conferences, Int. Wound J. 5 (2008) 26–33.
[33] N. Glick, I. MacDonald, G. Knoll, A. Brabant, S. Gourishankar, Factors associated with publication following presentation at a transplantation meeting, Am. J. Transplant. 6 (2006) 552–556.
[34] P.J. Hackett, M. Guirguis, N. Sakai, T. Sakai, Fate of abstracts presented at the 2004–2008 International Liver Transplantation Society meetings, Liver Transpl. 20 (2014) 355–360.
[35] A.P. Sawatsky, T.J. Beckman, J. Edakkanambeth Varayil, J.N. Mandrekar, D.A. Reed, A.T. Wang, Association between study quality and publication rates of medical education abstracts presented at the Society of General Internal Medicine annual meeting, J. Gen. Intern. Med. 30 (2015) 1172–1177.
[36] K. Thaler, C. Kien, B. Nussbaumer, M.G. Van Noord, U. Griebler, I. Klerings, G. Gartlehner, Inadequate use and regulation of interventions against publication bias decreases their effectiveness: a systematic review, J. Clin. Epidemiol. 68 (2015) 792–802.
[37] S. Hopewell, S. McDonald, Full publication of trials initially reported as abstracts in the Australian and New Zealand Journal of Medicine 1980–2000, Intern. Med. J. 33 (2003) 192–194.
[38] A. Chokkalingam, R. Scherer, K. Dickersin, Agreement of data in abstracts compared to full publications, Control. Clin. Trials 19 (Suppl. 1) (1998) S61–S62.
[39] W.H. Weintraub, Are published manuscripts representative of the surgical meeting abstracts? An objective appraisal, J. Pediatr. Surg. 22 (1987) 11–13.
[40] P.M. Ridker, N. Rifai, Expanding options for scientific publication: is more always better? Circulation 127 (2013) 155–156.
[41] L.D. Nelson, J.P. Simmons, U. Simonsohn, Let's publish fewer papers, Psychol. Inq. 23 (2012) 291–293.
[42] J. de Winter, R. Happee, Why selective publication of statistically significant results can be effective, PLoS One 8 (2013).
[43] M.A.L.M. Van Assen, R.C.M. Van Aert, M.B. Nuijten, J.M. Wicherts, Why publishing everything is more effective than selective publishing of statistically significant results, PLoS One 9 (2014).
