
Research Environments

Survey of Organizational Research Climates in Three Research Intensive, Doctoral Granting Universities

Journal of Empirical Research on Human Research Ethics, 2014, Vol. 9(5), 72–88. © The Author(s) 2014. Reprints and permissions: sagepub.com/journalsPermissions.nav. DOI: 10.1177/1556264614552798. jre.sagepub.com

James A. Wells1, Carol R. Thrush2, Brian C. Martinson3, Terry A. May4, Michelle Stickler5, Eileen C. Callahan6, and Karen L. Klomparens4

Abstract

The Survey of Organizational Research Climate (SOuRCe) is a new instrument that assesses dimensions of research integrity climate, including ethical leadership, socialization and communication processes, and policies, procedures, structures, and processes to address risks to research integrity. We present a descriptive analysis to characterize differences on the SOuRCe scales across departments, fields of study, and status categories (faculty, postdoctoral scholars, and graduate students) for 11,455 respondents from three research-intensive universities. Among the seven SOuRCe scales, variance explained by status and fields of study ranged from 7.6% (Advisor–Advisee Relations) to 16.2% (Integrity Norms). Department accounted for greater than 50% of the variance explained for each of the SOuRCe scales, ranging from 52.6% (Regulatory Quality) to 80.3% (Integrity Inhibitors). It is feasible to implement this instrument in large university settings across a broad range of fields, department types, and individual roles within academic units. Published baseline results provide initial data for institutions using the SOuRCe who wish to compare their own research integrity climates.

Keywords

research integrity, organizational climate, institutional assessment, survey research

Since at least 2002, national research leaders have expressed the need for a tool that research institutions can use in their internal efforts to promote research integrity (National Research Council [NRC], 2002). Until recently, few scientifically validated tools were available that would provide self-assessment data for institutional leaders to answer pressing questions such as “What practices contribute to the establishment and maintenance of ethical cultures and how can these practices be transferred, extended to, integrated into other research and learning settings?” (National Science Foundation [NSF], 2014).
“Do certain institutional subunits or disciplines have a ‘culture of academic integrity?’” (NSF, 2014). “Which institutional contexts promote ethical research and practice?” Even if such a tool were available, institutional leaders might still wonder about worrisome questions such as “But what administrative unit would want to participate? To do so might risk financial, legal, or administrative retribution. Would vengeful students make untrue accusations? Would the survey be perceived as a witch hunt?” (Sieber, 2007, pp. 1-2). These are exactly the kinds of questions that the field (e.g., NSF, 2014) and experts (Sieber, 2007) are encouraging research institutions to grapple with. This article represents early work by three vanguard institutions to engage this spirit of inquiry.

Our focus is on organizational climates as they pertain to the integrity of research. “Organizational climate is the shared meaning organizational members attach to the events, policies, practices, and procedures they experience and the behaviors they see being rewarded, supported, and expected” (Ehrhart, Schneider, & Macey, 2013, p. 115). As used here, the term “organizational research climate” refers to the current patterns of organizational life and behavior related to research activities among organizational members and leaders (Schein, 2000).

1 Jim Wells Consulting, Edgerton, WI, USA
2 University of Arkansas for Medical Sciences, Little Rock, AR, USA
3 HealthPartners Institute for Education and Research, Bloomington, MN, USA
4 Michigan State University, East Lansing, MI, USA
5 Virginia Commonwealth University, Richmond, VA, USA
6 University of Wisconsin–Madison, Madison, WI, USA

Corresponding Author: James A. Wells, Jim Wells Consulting, 405 W. Rollin St., Edgerton, WI 53534-1121, USA. Email: [email protected]


Theoretical and Practical Interest

The organizational climate for research integrity is of both theoretical and practical interest, as it is believed to be a particularly relevant driver of individual and group behaviors (Mumford et al., 2007; Treviño, 1990; Walumbwa, Hartnell, & Oke, 2010). In 2002, the NRC recommended that institutions seeking to promote responsible research conduct and foster integrity should (a) establish and continuously measure their structures, processes, policies, and procedures; (b) evaluate the institutional environment supporting integrity in the conduct of research; and (c) use this knowledge for ongoing improvement (NRC, 2002). In response, research by Thrush et al. (2007) laid the groundwork for development of a survey responsive to the NRC’s call for such a measure and provided evidence of content validity for items designed to assess the organizational climate for research integrity in academic health center settings. Members of our current project team adapted the survey items for broad use in both biomedical and general higher education settings as the basis for the research reported here. Subsequent refinements based on later research have established the standard psychometric properties of the tool, the Survey of Organizational Research Climate (SOuRCe; Crain, Martinson, & Thrush, 2012; Martinson, Thrush, & Crain, 2012). Development of the SOuRCe is theoretically grounded in the NRC’s (2002) open systems conceptual framework for research integrity, which recognizes research integrity as an outcome of processes influenced by multiple factors. More specifically, the SOuRCe focuses on organizational climate factors such as an institution’s visible ethical leadership, socialization and communication processes, and the presence of policies, procedures, structures, and processes to deal with risks to integrity.
The primary reason for such a focus is that these are the elements of the “system” that should be most subject to influence from within a given institution and that are readily measurable.

Institutional Responsibilities

A recent requirement by the NSF calls for research-engaged educational institutions supported by NSF (2013) “to provide appropriate training and oversight in the responsible and ethical conduct of research to undergraduates, graduate students, and postdoctoral researchers.” Similarly, there has been a long-standing requirement by the National Institutes of Health (NIH; 2013) to provide individualized instruction in the responsible conduct of research (RCR) to trainees. How will institutions know if they are responding responsibly? Furthermore, how are institutions to assess whether their efforts are effective beyond simple compliance? Additional concern has been raised by science leaders as to whether such RCR training requirements can be expected to have any effect on an individual’s research in practice (NSF, 2014).

Quality Management

Ultimately, we view the development and maintenance of ethical organizational research climates as part of the management and self-regulation of research quality. As with most issues of quality management, how efficiently and effectively institutions do this hinges on their being able to collect reliable data about their climates. This in turn allows them to identify organizational units that may benefit from change initiatives, to target those initiatives to the specific issues or areas of need, and to subsequently assess what impact such initiatives may have had. The SOuRCe is intended to provide a measure of how individuals within an organization perceive the quality of the environments in which they are immersed and the extent to which their organizational units support responsible research practices and research integrity. As a tool for institutional self-assessment, the SOuRCe can be used to generate comparative data about the perceived performance of subunits such as departments, centers, or graduate programs within an institution and to help raise the performance of lower performing organizational units.

Linkage to Behavior Outcomes

Although having an ethical organizational climate is valuable in its own right, as Heitman, Anestidou, Olsen, and Bulger (2005) have argued, the “holy grail” of such organizational initiatives is establishing their effects on the behavior of individual organizational members. Recent work by Crain et al. (2012) has provided evidence of the predictive validity of the SOuRCe, demonstrating that higher SOuRCe scores, both department averages on SOuRCe scales and respondent deviations from their own departmental means, were related to a higher likelihood of self-reported desirable behavior. Conversely, a number of frequently reported undesirable behaviors (e.g., inadequate record keeping, circumventing or ignoring aspects of human subjects research requirements) were less likely to be reported in local climates with better SOuRCe scores (Crain et al., 2012).

Purpose and Context of This Study

In addition to using the SOuRCe as a tool for institutional self-regulation, the SOuRCe may prove useful in broad research initiatives to better understand how research integrity operates and is fostered within an institution. In 2008, deans of the graduate schools at Michigan State University (MSU), Pennsylvania State University (PSU), and the University of Wisconsin–Madison (UW-Madison) initiated

a collaborative project to collect and compare research integrity climate data using the newly developed SOuRCe instrument. This collaboration among three large, land grant research universities in the United States was undertaken in concert with their participation in a larger project by the Council of Graduate Schools (CGS) called the Project on Scholarly Integrity (PSI), about which further details can be found at www.scholarlyintegrity.org. For the current study, we pooled data from these three institutions in an effort to characterize and describe differences in organizational climates across academic departments, fields of study, and the professional status of organizational members (e.g., faculty, research scientist, postdoctoral scholar, or graduate student). There has been little empirical study to date that would allow one to compare organizational research climates across these various factors of interest. The purpose of this article is to describe the SOuRCe scores on seven scales, how they differ across departments, fields of study, and status categories, and their practical utility for organizational leaders in fostering research integrity climates. Because the three universities in our study were not randomly selected but rather represent a small convenience sample, we refrained from making university-level comparisons. One hope is that by publishing such baseline data we will provide reasonable comparative data for institutions using the SOuRCe to assess the quality of their own research integrity climates.

Method

Study Design and Participants

The SOuRCe was administered via web-based data collection in spring 2009 at each of the three universities. Each university received prior approval from its local institutional review board (IRB) to participate in this project. A census approach was taken to invite study participants, with each university generating comprehensive listings of its members, including all graduate students, all faculty, all postdoctoral fellows, and all research personnel to be surveyed.

Instrument

Climate of research integrity. The SOuRCe is a 32-item survey designed to assess an individual’s perception of the organizational climate for research integrity both in one’s general organizational setting and in one’s specific affiliated department or division. Research to establish the standard psychometric properties of the SOuRCe was based on a web- and mail-based survey administered in the second half of 2009 to 2,837 randomly selected biomedical and social science faculty and postdoctoral fellows at 40

academic health centers in top-tier research universities in the United States (Martinson et al., 2012). Measures included the SOuRCe as well as measures of perceptions of organizational justice. Exploratory and confirmatory factor analyses yielded seven scales of organizational research climate, all of which demonstrated acceptable internal consistency (Cronbach’s α ranging from .81 to .87) and adequate test–retest reliability (Pearson’s r ranging from .72 to .83). A broad range of correlations between the seven scales and five measures of organizational justice documents both the construct and discriminant validity of the instrument (unadjusted regression coefficients ranging from 0.13 to 0.95; Martinson et al., 2012). The SOuRCe has also demonstrated predictive validity, being predictive of self-reported research behaviors (Crain et al., 2012). The final validated version of the SOuRCe contains 28 items comprising seven scales and four global items that are not included in scale computations. Sample survey items can be requested from the authors (see also https://sites.google.com/site/surveyoforgresearchclimate and https://nationalethicscenter.org/sorc). The SOuRCe employs 11 items to assess an individual’s perceptions of research climate in the organization as a whole and 21 items to assess one’s primary program or subunit (e.g., center, department, or graduate program) in the organization. To set the frame of reference for the SOuRCe items, each section begins with two generic questions about the respondents’ perceptions of both the institution and their department in terms of (a) commitment to maintaining standards of research integrity and (b) the degree to which the overall climate of integrity reflects high values for RCR. All SOuRCe items are rated by respondents using the following 5-point scale: (1) not at all, (2) somewhat, (3) moderately, (4) very, (5) completely.
A sixth option, “no basis for judging,” is offered to avoid forcing a response about a specific level of perception where none exists. The survey yields scores on seven scales. The two institutional-level scales are as follows:

• The RCR Resources scale (six items) measures perceptions of the effectiveness of RCR educational opportunities, accessibility of research resources (e.g., ethics experts, policies, procedures), commitment and effectiveness of research-related communications by institutional administrators, and familiarity with procedures for reporting misconduct.
• The Regulatory Quality scale (three items) assesses respondents’ perceptions of the degree to which regulatory committees such as IRBs and Institutional Animal Care and Use Committees (IACUCs) treat researchers fairly and with respect, and how well these committees understand the research they review.

The five scales at the level of department or graduate program are as follows:

• The Integrity Norms scale (four items) assesses perceptions of the extent to which scholarly integrity (e.g., honesty, data integrity/confidentiality) is valued in the department.
• The Integrity Socialization scale (four items) assesses perceptions of departmental commitment to effective socialization of junior researchers.
• The Advisor–Advisee Relations scale (three items) measures perceptions about fairness, respect, and the availability of advisors.
• The Integrity Inhibitors scale (six items) assesses the degree to which respondents believe certain conditions produce negative effects in a department or program, including difficulties in conducting responsible research due to a lack of adequate human or material resources, pressure to obtain funding, pressure to publish, and competition and suspicion among researchers. In these analyses, the Integrity Inhibitors scale is reverse coded so that the scale represents an absence of integrity inhibitors.
• The Departmental Expectations scale (two items) measures perceptions about the fairness of departmental expectations to publish and obtain external funding.

Scale scores are computed by averaging the items included within each scale. Scores are computed for an individual only if he or she provided valid responses for a minimum of half of the constituent items for a given scale. “No Basis for Judging” responses are excluded. We also removed from the analysis any observations in which a respondent gave the exact same response for every SOuRCe item (a “response set” in standard survey nomenclature), as this typically indicates a lack of genuine engagement and thoughtfulness in the response process, yielding meaningless responses. In the current study, we observed acceptable internal consistency for the SOuRCe scales (Cronbach’s α ranged from .77 to .87).

Factors of interest—Department, field of study, status.
To operationalize the variables of interest for this study, we included a brief set of classification and demographic items to classify individuals with respect to their primary academic department or program and employee/student status within the institution. The survey was tailored by institution such that individuals were presented with a drop-down list of departments or graduate programs at their own university and asked to indicate their department of primary appointment or involvement. This

yielded responses from individuals working in a large number of unique organizational units (N = 379). To illustrate the breadth and range of departmental units surveyed, the appendix provides a listing of the names of all the departments/graduate programs. Some names on the list describe units in all three of the universities involved in the study, while others are unique to a single university. For purposes of this analysis, we have aggregated these departments at two levels of granularity to define a “field of study” both broadly and narrowly. The categories for both broad and narrow fields of study were defined using the CGS International Graduate Admissions Survey Taxonomy (Bell, 2010). There are eight broad fields of study: arts and humanities, business, education, engineering, life sciences, physical and earth sciences, social sciences and psychology, and other fields; the taxonomy also distinguishes 51 narrow fields of study (Bell, 2010). Conceptually, department and field of study are often close to synonymous in a university setting; however, in our data, these entities exist at three levels: Unique departments/graduate programs are aggregated up to a generic taxonomy of field of study (e.g., departments of biochemistry and biology would be aggregated into the field of study labeled biological sciences). In turn, biological sciences would be aggregated along with agriculture and health and medical sciences into the broad field of study labeled life sciences. With regard to status, the population of individuals surveyed included the following categories across the three institutions:

• Graduate students in course-based master’s programs;
• Graduate students in research master’s programs;
• Graduate students in doctoral programs;
• Postdoctoral trainees/research associates;
• Fixed-term faculty, not tenure-track;
• Tenure-track faculty, not tenured;
• Tenure-track faculty, tenured;
• Research scientists.
For purposes of the current study, we have limited the analysis to individuals in status categories that would have adequately exposed them to the research environments of these institutions and enabled them to give an informed response to the survey items. We also excluded some categories of individuals who were not consistently included in each institution’s survey frame. These include undergraduate students, graduate students in course-based master’s programs, research technicians, and a small number of clinical faculty. Finally, we collapsed the remaining categories into three groups for analysis (graduate students, postdoctoral scholars, and faculty).
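As a rough illustration, the scale-scoring rules described under the Instrument section (item averaging, the requirement of valid responses to at least half of a scale's items, treatment of "No Basis for Judging" as missing, reverse coding of Integrity Inhibitors, and flagging of response sets) could be implemented as follows. This is an illustrative sketch, not the authors' code; the numeric code for "no basis for judging" and the function names are assumptions.

```python
# Illustrative sketch of the SOuRCe scoring rules (not the authors' code).
# Scale scores are item means on a 1-5 range, computed only when at least
# half of a scale's items have valid responses; "No Basis for Judging" is
# treated as missing, and Integrity Inhibitors items are reverse-coded so
# that a high score represents an absence of inhibitors.

NO_BASIS = 6  # hypothetical numeric code for the "no basis for judging" option

def score_scale(responses, n_items, reverse=False):
    """Average valid 1-5 responses; return None if fewer than half are valid."""
    valid = [r for r in responses if r is not None and 1 <= r <= 5]
    if len(valid) < n_items / 2:
        return None
    if reverse:
        valid = [6 - r for r in valid]  # 1 <-> 5, 2 <-> 4, 3 unchanged
    return sum(valid) / len(valid)

def is_response_set(all_items):
    """Flag records where every answered item received the identical response."""
    answered = [r for r in all_items if r is not None]
    return len(answered) > 0 and len(set(answered)) == 1

# Example: a six-item Integrity Inhibitors response with one "no basis" answer
raw = [2, 1, NO_BASIS, 2, 3, 1]
cleaned = [None if r == NO_BASIS else r for r in raw]
print(score_scale(cleaned, n_items=6, reverse=True))  # → 4.2 (few inhibitors)
```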


Survey Methods

Survey administration was conducted by the PSU Survey Research Center via an online, web-based survey, which was launched in mid-April 2009 and remained open for 5 weeks. The three universities each provided the Survey Research Center with lists of email addresses and names of faculty, research scientists, postdoctoral researchers, and graduate students, which the Center used to send a presurvey notification email, under the signature of each campus's Graduate School dean, to prospective participants to introduce the survey and indicate that they would be contacted. This notification was followed by the survey invitation itself and up to four reminders. These mailings occurred over approximately a 4-week period and were staggered by day of week, continuing only until an individual responded or the survey fielding period closed. The invitation and follow-up emails each included a URL link to the survey and provided respondents with a unique personal identification number to protect privacy and to ensure that the intended respondent and no one else completed the questionnaire. All invitation and reminder emails included language informing respondents that their participation was voluntary and that their data would be kept confidential. These mailings also made appeals to respondent loyalty and altruism by indicating that their participation in this project would help “to make the university a better place to conduct research and other scholarly work.” On completion of the questionnaire, all identifiers were destroyed. In total, 33,472 individuals were invited to participate: 12,531 from PSU, 9,910 from MSU, and 11,031 from UW-Madison. Ultimately, usable data were returned from 15,182 respondents (45%), with similar proportions responding across the three institutions (45%, 48%, and 43%, respectively). A total of 1,101 respondents were excluded due to missing data on status, department/graduate program, or both.
In addition, 2,716 respondents fell into status categories not addressed in this article (undergraduates, 254; course-based master’s students, 2,135; support staff and technicians, 303; and clinical faculty, 24). These exclusions leave a working file of 11,455 respondents.

Analysis

The analysis employs both univariate and multivariate approaches. We present frequency distributions of status, broad field of study, and field of study. Due to the large number of departments/programs, we do not provide a frequency distribution for them but do indicate the number of departments and cases within each field of study. We also present figures to illustrate the range and variability of departmental means on the integrity scales. Finally, we report the mean, standard deviation, and reliability coefficient for each of the integrity scales.

The multivariate analysis employs hierarchical, fixed-effects analysis of variance, using the SPSS procedure “General Linear Model” to estimate effects, variance explained, and tests of significance. This approach is used to explain each of the seven SOuRCe scales as a function of the following:

• Status,
• Broad field of study,
• Field of study (within broad field of study),
• Department/program (within field of study),
• Interaction of status and broad field of study,
• Interaction of status and field of study (within broad field of study),
• Interaction of status and department/program (within field of study).

Employing a hierarchical model implies that some factors are nested within others. In the present case, each department or graduate program may have a somewhat unique climate. Departments and graduate programs, however, can also be categorized into a field of study. Each department falls into one, and only one, field of study. In turn, each field of study falls within one and only one higher level broad field of study. Status can be cross-classified with the other factors. That is to say, it is possible for any value of status (graduate student, postdoctoral scholar, or faculty member) to occur within any department/graduate program, field of study, or broad field of study. This analytic approach is used to estimate main and interaction effects of the classification variables and to decompose total explained variation for each climate scale into components attributable to status, broad field of study, field of study, and department/graduate program. The decomposition is sequential (Type I), so the only explained variance attributed to each variable in the model is that which is not shared with explanatory variables already entered. We entered the status variable first to give it the opportunity to explain as much variance as possible, followed by broad field of study, field of study (within broad field), and department/program (within field), in that order. For each scale, we calculated two values related to variation accounted for by the classification variables: (a) the percentage of total variation in that scale accounted for by each explanatory variable and (b) the percentage of “accounted for” variation attributable to each explanatory variable.
Thus, for example, if status were to account for 2% of the variance in the Integrity Norms scale and all the variables together were to explain 10% of that scale’s variance, then status would account for 2% of the total variation and 20% of the accounted-for variance. The latter calculation is done to simplify the comparison among classification variables as to which has the greater impact on each of the SOuRCe scales.
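The two percentages can be sketched numerically. The sums of squares below are invented for illustration (they are not values from the study), but the arithmetic mirrors the worked example in the text: a term contributing 40 of 2,000 total sums of squares accounts for 2% of total variation, and 40 of the 200 explained by the model is 20% of the accounted-for variance.

```python
# Hypothetical Type I (sequential) sums of squares for one SOuRCe scale;
# the numbers are invented for illustration, not taken from the study.
ss = {
    "status": 40.0,
    "broad field": 30.0,
    "field (within broad)": 50.0,
    "department (within field)": 80.0,
}
ss_total = 2000.0  # total sum of squares for the scale

ss_model = sum(ss.values())  # 200.0, i.e., the model explains 10% of the total

for term, value in ss.items():
    pct_total = 100 * value / ss_total      # share of total variation
    pct_explained = 100 * value / ss_model  # share of accounted-for variation
    print(f"{term}: {pct_total:.1f}% of total, {pct_explained:.1f}% of explained")
```

For the "status" term this prints 2.0% of total and 20.0% of explained, matching the example in the text.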


Table 1.  Number of Cases, Mean Score, Standard Deviation, and Reliability by Integrity Scale.

Scale                              n of cases     M      SD    Reliability (α)
Integrity Norms                       8,323     4.20    0.68       .867
Integrity Socialization               8,457     3.60    0.93       .871
[Lack of] Integrity Inhibitors        8,257     3.93    0.90       .873
Advisor–Advisee Relations             8,636     3.85    0.77       .865
Departmental Expectations             8,707     3.79    0.84       .765
RCR Resources                         9,271     3.69    0.79       .871
Regulatory Quality                    5,842     3.72    0.83       .843

Note. RCR = responsible conduct of research.

We also assessed the interactions between status and the hierarchical variables broad field of study, field of study (within broad field), and department/program (within field). This addresses the possibility that differences in responses to the scales may be a joint function of both individual status and the organizational subunit with which individuals are affiliated. Explanatory factors that were not statistically significant were omitted from the model, and the final version of the model is reported.

Results

Distribution of Variables

Table 1 reports the number of cases, mean scale score, standard deviation, and reliability coefficient for each of the seven integrity scales. Effective sample sizes range from about 8,000 to 9,000 for all the scales except Regulatory Quality. This scale is based on items that inquire about interactions with regulatory committees such as IRBs or IACUCs. Because fewer scholars use these resources, the effective sample size for this scale is 5,842. Scales are scored from 1 to 5, reflecting the average of the scores of their constituent items. Average scores range from a high of 4.20 for Integrity Norms to a low of 3.60 for Integrity Socialization. Thus, in this sample, respondents reported that the climate reflects support for norms of integrity, but actions to socialize around those norms are less apparent. The scales of [Lack of] Integrity Inhibitors (3.93), Advisor–Advisee Relations (3.85), and Departmental Expectations (3.79) have average scores between the extremes of the Integrity Norms and Integrity Socialization scales. Slightly lower than these three are the two institutional integrity scales, RCR Resources and Regulatory Quality, with means of 3.69 and 3.72, respectively. Cronbach’s alpha, a measure of reliability, ranged from .765 for Departmental Expectations to .873 for [Lack of] Integrity Inhibitors. Except in the case of Departmental Expectations, these reliability coefficients are slightly greater than in the validation sample described above.
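Cronbach's alpha, the reliability coefficient reported throughout, can be computed from item-level data as α = k/(k − 1) · (1 − Σ var(item) / var(total)), where k is the number of items. A minimal sketch on invented responses (the data below are illustrative, not from the study):

```python
# Minimal Cronbach's alpha computation on invented item-level responses.

def cronbach_alpha(items):
    """items: a list of equal-length lists, one per scale item."""
    k = len(items)

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col) for col in zip(*items)]          # per-respondent totals
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Four hypothetical items answered by five respondents on the 1-5 scale
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
    [4, 5, 3, 4, 2],
]
print(round(cronbach_alpha(items), 3))  # → 0.952
```

High inter-item agreement, as in this fabricated example, yields alpha values in the range the article reports (.77 to .87 and above).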

Table 2 reports the number of cases and percent distribution for respondent status and broad field of study. We discuss the remaining columns of this table in the next section of the article. By status, respondents are 57.0% graduate students, 6.9% postdoctoral scholars, and 36.1% faculty. Among the broad fields of study, the modal category is the life sciences (29.6%), followed in size by engineering (15.8%) and the physical and earth sciences (15.4%). Additional fields include the arts and humanities (11.2%), social sciences and psychology (10.3%), and education (9.5%). Business (2.5%) and other fields (5.6%) account for the remainder of respondents.

Table 3 presents the number of cases and percent distribution of field of study within broad field of study. The table also reports the number of departments encompassed by each field. As with Table 2, we discuss the remaining columns in the next section of the article. The arts and humanities include arts, both criticism and performance, English language, foreign language, history, and philosophy. The seven fields of arts and humanities account for 55 departments across the three institutions. “Foreign languages and literatures” is the largest field in this group, comprising 2.9% of respondents. Business comprises four fields, although the largest in both number of respondents and departments is business administration and management. It should be remembered that master of business administration students are excluded; only research-based master’s or PhD students would be reflected here. The broad field of education comprises eight fields and 32 departments. “Education, other” (2.7%) is the largest field, followed by “curriculum and instruction” (1.8%). Engineering comprises seven fields and 38 departments; the largest are “mechanical engineering” (3.3%), “engineering, other” (3.2%), and “electrical and electronics engineering” (3.0%).
The life sciences, with three fields and 139 departments, account for 30% of respondents and 37% of departments across the three institutions. The three fields subsumed in this broad category are “agriculture,” “biological sciences,” and “health and medical sciences.” Within that category, the “health and medical sciences” alone account for 62


Table 2.  Number of Cases, Percent, and Mean Integrity by Status and Broad Field of Study.

Variable                           n of cases      %      IN     IS    [L]II    AAR     DE    RCR     RQ
Status
  Graduate student                    6,531      57.0    4.24   3.65   3.85   3.85   3.82   3.68   3.80
  Postdoctoral scholar                  793       6.9    4.00   3.43   3.72   3.75   3.81   3.54   3.72
  Faculty                             4,131      36.1    4.17   3.55   4.07   3.88   3.75   3.74   3.63
  Total                              11,455     100.0     —      —      —      —      —      —      —
Broad field of study
  Arts and humanities                 1,288      11.2    4.23   3.64   3.92   3.88   3.60   3.63   3.49
  Business                              286       2.5    4.25   3.64   4.07   3.99   3.77   3.76   3.61
  Education                           1,084       9.5    4.39   3.87   3.92   3.93   3.81   3.90   3.86
  Engineering                         1,811      15.8    4.00   3.43   3.62   3.81   3.80   3.54   3.77
  Life sciences                       3,392      29.6    4.16   3.56   3.98   3.85   3.85   3.73   3.80
  Physical and earth sciences         1,764      15.4    4.14   3.48   3.92   3.81   3.88   3.57   3.72
  Social sciences and psychology      1,183      10.3    4.30   3.65   4.14   3.82   3.72   3.70   3.53
  Other fields                          647       5.6    4.32   3.80   4.00   3.92   3.72   3.87   3.69
  Total                              11,455     100.0     —      —      —      —      —      —      —

IN = Integrity Norms; IS = Integrity Socialization; [L]II = [Lack of] Integrity Inhibitors; AAR = Advisor–Advisee Relations; DE = Departmental Expectations; RCR = RCR Resources; RQ = Regulatory Quality.

Note. All bivariate associations between status or broad field of study and the integrity variables are statistically significant (F tests, α = .01, df = g − 1, n − g − 1). RCR = responsible conduct of research.

departments; a larger number of departments than in any of the other broad fields of study. The physical and earth sciences comprise six fields and 32 departments. “Chemistry” (3.7%) is the largest field followed by “physics and astronomy” (3.5%). Social sciences and psychology comprises six fields and 38 departments with “sociology” (2.7%) and “psychology” (2.1%) being the largest fields. Finally, “other fields” has eight fields and 33 departments. “Communications” (2.2%) and “family and consumer sciences” (1.5%) are the largest fields.

Bivariate Results

Table 2 also presents means of the seven SOuRCe scales by levels of status and broad field of study. All associations depicted in the table are statistically significant, that is, one or more levels of status and broad field of study differ significantly from one another on each of the seven scales. Among status categories, graduate students score more highly than postdoctoral scholars and faculty on Integrity Norms, Integrity Socialization, Department/Program Expectations, and Regulatory Quality. Faculty score more highly than the other status categories on Integrity Inhibitors, Advisor–Advisee Relations, and RCR Resources. Postdoctoral scholars, by contrast, have the lowest scores on all scales except Department/Program Expectations and Regulatory Quality.

Among broad fields of study, education has the highest mean score on four of the seven integrity scales: Integrity Norms, Integrity Socialization, RCR Resources, and Regulatory Quality. The social and psychological sciences field of study has the highest mean scale score on Integrity Inhibitors, while business has the highest mean score on Advisor–Advisee Relations and physical and earth sciences has the highest mean score on Departmental Expectations. In contrast, the engineering field of study has the lowest mean scale score on five of the seven integrity scales: Integrity Norms, Integrity Socialization, Integrity Inhibitors, Advisor–Advisee Relations (tied with the physical and earth sciences), and RCR Resources. Arts and humanities has the lowest mean score on the other two integrity scales: Departmental Expectations and Regulatory Quality. In general, the engineering, life sciences, and physical and earth sciences fields have lower mean scores on all scales except Departmental Expectations (where they have the highest mean score) and Regulatory Quality. No particular field of study appears to score consistently across all scales, although education, business and other fields are consistently represented among the fields with higher mean scores. Table 3 presents means of the seven scales by field of study and broad field of study. In this table, we have redacted mean scores for fields of study represented by fewer than three departments in the sample to maintain departmental and institutional confidentiality.
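The bivariate tests behind Tables 2 and 3 are one-way ANOVA F tests comparing mean scale scores across groups. The sketch below illustrates the computation with hypothetical scores (not the article's data), using the standard g−1 numerator and n−g denominator degrees of freedom:

```python
# Sketch of a one-way ANOVA F test comparing mean scale scores across
# g groups (e.g., status categories). Scores are hypothetical 1-5
# responses, not data from the article.

def one_way_f(groups):
    """Return the one-way ANOVA F statistic for a list of score groups."""
    all_scores = [x for grp in groups for x in grp]
    n, g = len(all_scores), len(groups)
    grand = sum(all_scores) / n
    # Between-group sum of squares: group size times the squared
    # deviation of each group mean from the grand mean.
    ss_between = sum(
        len(grp) * ((sum(grp) / len(grp)) - grand) ** 2 for grp in groups
    )
    # Within-group sum of squares: squared deviations from group means.
    ss_within = sum(
        sum((x - sum(grp) / len(grp)) ** 2 for x in grp) for grp in groups
    )
    return (ss_between / (g - 1)) / (ss_within / (n - g))

grad = [4.4, 4.1, 4.3, 4.5, 4.2]
postdoc = [3.9, 4.0, 3.8, 4.1]
faculty = [4.2, 4.0, 4.3, 4.1]
f_stat = one_way_f([grad, postdoc, faculty])
```

A large F relative to the critical value at α = .01 indicates that at least two group means differ significantly, which is all the omnibus test establishes.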

Table 3.  Number of Departments, Number of Cases, Percent, and Mean Integrity by Field of Study.

| CGS broad field of study / CGS field of study | n of departments | n of cases | % of cases | Integrity Norms | Integrity Socialization | [Lack of] Integrity Inhibitors | Advisor–Advisee Relations | Departmental Expectations | RCR Resources | Regulatory Quality |
| Arts and humanities |  |  |  |  |  |  |  |  |  |  |
|   Arts—History, theory, and criticism | 3 | 60 | 0.5 | 4.20 | 3.82 | 3.79 | 4.03 | 3.83 | 3.83 | 3.61 |
|   Arts—Performance and studio | 9 | 254 | 2.2 | 4.02 | 3.58 | 3.55 | 3.82 | 3.49 | 3.67 | 3.47 |
|   English language and literature | 5 | 232 | 2.0 | 4.26 | 3.58 | 4.10 | 3.85 | 3.44 | 3.51 | 3.44 |
|   Foreign languages and literatures | 21 | 336 | 2.9 | 4.26 | 3.69 | 3.82 | 3.83 | 3.69 | 3.68 | 3.67 |
|   History | 7 | 231 | 2.0 | 4.36 | 3.70 | 4.09 | 3.94 | 3.60 | 3.59 | 3.09 |
|   Philosophy | 4 | 66 | 0.6 | 4.44 | 3.42 | 4.54 | 4.04 | 3.82 | 3.56 | 3.64 |
|   Arts and humanities, other | 6 | 109 | 1.0 | 4.23 | 3.62 | 4.00 | 3.95 | 3.63 | 3.69 | 3.56 |
| Business |  |  |  |  |  |  |  |  |  |  |
|   Accounting | 1 | 25 | 0.2 | * | * | * | * | * | * | * |
|   Banking and finance | 1 | 20 | 0.2 | * | * | * | * | * | * | * |
|   Business administration and management | 7 | 199 | 1.7 | 4.31 | 3.74 | 4.14 | 4.09 | 3.81 | 3.81 | 3.64 |
|   Business, other | 3 | 42 | 0.4 | 3.96 | 3.29 | 3.67 | 3.61 | 3.59 | 3.72 | 3.54 |
| Education |  |  |  |  |  |  |  |  |  |  |
|   Education administration | 3 | 88 | 0.8 | 4.47 | 3.97 | 4.07 | 4.09 | 3.98 | 3.91 | 4.08 |
|   Curriculum and instruction | 3 | 208 | 1.8 | 4.44 | 3.94 | 3.82 | 4.02 | 3.89 | 3.90 | 3.81 |
|   Elementary education | 2 | 122 | 1.1 | * | * | * | * | * | * | * |
|   Evaluation and research | 3 | 96 | 0.8 | 4.36 | 3.72 | 3.96 | 3.84 | 3.89 | 3.75 | 3.60 |
|   Higher education | 1 | 64 | 0.6 | * | * | * | * | * | * | * |
|   Special education | 5 | 105 | 0.9 | 4.40 | 3.90 | 3.92 | 3.74 | 3.70 | 4.02 | 3.96 |
|   Student counseling and personnel services | 4 | 88 | 0.8 | 4.37 | 3.83 | 3.80 | 3.80 | 3.69 | 3.96 | 3.82 |
|   Education, other | 11 | 313 | 2.7 | 4.34 | 3.83 | 3.86 | 3.91 | 3.77 | 3.88 | 3.94 |
| Engineering |  |  |  |  |  |  |  |  |  |  |
|   Chemical engineering | 3 | 180 | 1.6 | 4.10 | 3.45 | 3.66 | 3.76 | 3.97 | 3.61 | 3.89 |
|   Civil engineering | 4 | 179 | 1.6 | 3.95 | 3.26 | 3.46 | 3.83 | 3.82 | 3.42 | 3.74 |
|   Electrical and electronics engineering | 3 | 342 | 3.0 | 3.95 | 3.37 | 3.47 | 3.78 | 3.67 | 3.43 | 3.71 |
|   Industrial engineering | 3 | 156 | 1.4 | 4.08 | 3.47 | 3.61 | 3.79 | 3.71 | 3.61 | 3.78 |
|   Materials engineering | 6 | 208 | 1.8 | 3.92 | 3.46 | 3.52 | 3.70 | 3.84 | 3.50 | 3.76 |
|   Mechanical engineering | 5 | 375 | 3.3 | 4.04 | 3.41 | 3.63 | 3.82 | 3.80 | 3.57 | 3.74 |
|   Engineering, other | 14 | 371 | 3.2 | 4.15 | 3.55 | 3.88 | 3.93 | 3.86 | 3.62 | 3.79 |
| Life sciences |  |  |  |  |  |  |  |  |  |  |
|   Agriculture | 34 | 971 | 8.5 | 4.16 | 3.54 | 3.93 | 3.88 | 3.79 | 3.70 | 3.80 |
|   Biological sciences | 43 | 1,369 | 12.0 | 4.12 | 3.50 | 4.02 | 3.83 | 3.90 | 3.66 | 3.74 |
|   Health and medical sciences | 62 | 1,052 | 9.2 | 4.22 | 3.67 | 3.95 | 3.85 | 3.84 | 3.85 | 3.85 |
| Physical and earth sciences |  |  |  |  |  |  |  |  |  |  |
|   Chemistry | 3 | 419 | 3.7 | 4.14 | 3.53 | 3.89 | 3.73 | 4.00 | 3.52 | 3.75 |
|   Computer and information sciences | 5 | 314 | 2.7 | 4.07 | 3.49 | 3.65 | 3.76 | 3.80 | 3.53 | 3.64 |
|   Earth, atmospheric, and marine science | 9 | 290 | 2.5 | 4.23 | 3.46 | 4.08 | 3.82 | 3.83 | 3.59 | 3.65 |
|   Mathematical sciences | 7 | 333 | 2.9 | 4.16 | 3.53 | 3.77 | 3.94 | 3.78 | 3.66 | 3.69 |
|   Physics and astronomy | 7 | 405 | 3.5 | 4.12 | 3.40 | 4.12 | 3.84 | 3.94 | 3.55 | 3.83 |
|   Physical sciences, other | 1 | 3 | 0.0 | * | * | * | * | * | * | * |
| Social sciences and psychology |  |  |  |  |  |  |  |  |  |  |
|   Anthropology/archaeology | 3 | 123 | 1.1 | 4.26 | 3.75 | 4.06 | 3.77 | 3.61 | 3.66 | 3.39 |
|   Economics | 4 | 154 | 1.3 | 4.35 | 3.66 | 4.27 | 4.05 | 3.94 | 3.31 | 3.52 |
|   Political science | 5 | 161 | 1.4 | 4.26 | 3.46 | 4.20 | 3.85 | 3.75 | 3.69 | 3.04 |
|   Psychology | 3 | 240 | 2.1 | 4.39 | 3.72 | 4.21 | 3.69 | 3.74 | 3.86 | 3.71 |
|   Sociology | 7 | 305 | 2.7 | 4.27 | 3.67 | 4.11 | 3.81 | 3.60 | 3.71 | 3.54 |
|   Social sciences, other | 16 | 200 | 1.7 | 4.24 | 3.61 | 4.00 | 3.87 | 3.79 | 3.71 | 3.63 |
| Other fields |  |  |  |  |  |  |  |  |  |  |
|   Architecture and environmental design | 5 | 75 | 0.7 | 4.10 | 3.51 | 3.68 | 3.76 | 3.41 | 3.66 | 3.51 |
|   Communications | 9 | 257 | 2.2 | 4.42 | 3.89 | 4.06 | 4.03 | 3.75 | 3.84 | 3.48 |
|   Family and consumer sciences | 4 | 170 | 1.5 | 4.33 | 3.84 | 4.04 | 3.90 | 3.73 | 4.01 | 3.98 |
|   Library and archival studies | 1 | 16 | 0.1 | * | * | * | * | * | * | * |
|   Public administration | 2 | 15 | 0.1 | * | * | * | * | * | * | * |
|   Religion and theology | 1 | 3 | 0.0 | * | * | * | * | * | * | * |
|   Social work | 3 | 47 | 0.4 | 4.45 | 3.93 | 4.19 | 3.97 | 4.00 | 4.02 | 3.69 |
|   Other fields | 8 | 64 | 0.6 | 4.11 | 3.63 | 3.80 | 3.61 | 3.80 | 3.89 | 3.81 |
| Total | 379 | 11,455 | 100.0 | — | — | — | — | — | — | — |

Note. All bivariate associations between field of study and the integrity variables are statistically significant, Fα = .01, g−1, n−g−1. CGS = Council of Graduate Schools; RCR = responsible conduct of research.
* Mean integrity is not reported for fields of study with fewer than 3 departments represented.

Table 4.  Percentile Distributions and Variability of Mean Integrity Scale Scores by Department/Program.

| Variable | Integrity Norms | Integrity Socialization | [Lack of] Integrity Inhibitors | Advisor–Advisee Relations | Departmental Expectations | RCR Resources | Regulatory Quality |
| n of cases | 374 | 374 | 376 | 376 | 374 | 376 | 366 |
| Reference points |  |  |  |  |  |  |  |
|   Maximum | 5.00 | 5.00 | 5.00 | 5.00 | 5.00 | 5.00 | 5.00 |
|   75th percentile | 4.39 | 3.83 | 4.21 | 4.07 | 4.00 | 3.89 | 3.96 |
|   M | 4.22 | 3.61 | 3.93 | 3.89 | 3.79 | 3.71 | 3.74 |
|   25th percentile | 4.03 | 3.38 | 3.68 | 3.73 | 3.63 | 3.54 | 3.55 |
|   Minimum | 3.00 | 1.75 | 1.67 | 2.25 | 2.00 | 2.00 | 2.00 |
| Variation |  |  |  |  |  |  |  |
|   Interquartile range | 0.36 | 0.45 | 0.53 | 0.34 | 0.37 | 0.35 | 0.41 |
|   SD | 0.29 | 0.40 | 0.45 | 0.34 | 0.37 | 0.35 | 0.40 |

Note. All bivariate associations between department/program and the integrity variables are statistically significant, Fα = .01, g−1, n−g−1. RCR = responsible conduct of research.

All seven associations between field of study and the SOuRCe scales are statistically significant. The statistical tests in this table show only that field of study is a factor in explaining variance in the model. The statistical test we performed does not elucidate which fields of study are significantly higher or lower on any given scale, which is appropriate because we are not testing hypotheses about these questions. What the test tells us is that if a university is to compare a given department against these data, it should do so within the appropriate field of study. Overall, the patterns we report in Table 3 for field of study are similar to the patterns observed in Table 2 for broad field of study.

Table 4 shows the distributional characteristics of the scale means aggregated by department. The interquartile ranges are narrow, approximately half a scale point or less. In contrast, scores in the upper quartile “best practice” group range across about a full scale unit, and some departments/programs have a perfect average score across respondents. In the lower quartile, scores span 1.5 to 2.0 scale units and fall as low as 1.67 on the Integrity Inhibitors scale. Thus, it is the spread of each scale from top to bottom that is striking. For example, the mean scores by department/graduate program range from 3.00 to 5.00 for Integrity Norms, from 1.75 to 5.00 for Integrity Socialization, from 1.67 to 5.00 for Integrity Inhibitors, and from 2.25 to 5.00 for Advisor–Advisee Relations. They range from 2.00 to 5.00 for Departmental Expectations, RCR Resources, and Regulatory Quality.
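The department-level aggregation summarized in Table 4 can be sketched as follows. The department names and scores below are hypothetical, and the stdlib `statistics.quantiles` routine stands in for whatever percentile method the authors used:

```python
# Sketch of the Table 4 computation: average individual responses
# within each department, then summarize the distribution of those
# departmental means with quartiles and the interquartile range.
# Department names and scores are hypothetical, not the article's data.

from statistics import mean, quantiles

responses = {  # department -> individual Integrity Norms scores (1-5)
    "Chemistry": [4.5, 4.2, 4.8, 4.4],
    "History": [4.0, 3.9, 4.3],
    "Mechanical Engineering": [3.2, 3.6, 3.4],
    "Sociology": [4.6, 4.4, 4.7],
    "Physics": [2.9, 3.1, 3.0],
}

dept_means = sorted(mean(scores) for scores in responses.values())
q1, median, q3 = quantiles(dept_means, n=4)  # quartile cut points
iqr = q3 - q1                                # interquartile range
```

With real data, the spread between `min(dept_means)` and `max(dept_means)` is what the text above calls the striking top-to-bottom range of departmental microclimates.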

Multivariate Results

Table 5 presents multivariate analyses of the SOuRCe scales by status, broad field of study, field of study, and department/program. For each scale, an ANOVA model is used to decompose variance explained by each of the classification variables and their interactions. The decomposition is hierarchical, that is, variance is attributed to status; to broad field of study net of any variance explained by status; to field of study (within broad field of study) net of status and broad field of study; and to department/program net of the others. In addition, variance is attributed to interactions of status with the other variables in similar hierarchical order: Status × Broad field of study, Status × Field of study (within broad field of study), and Status × Department/program (within field of study). These results are shown for each of the SOuRCe scales in the column labeled “R2 Increment.” Below each column, the row labeled “Total” provides the total variance explained for each scale. These totals range from a low of 7.6% for Advisor–Advisee Relations to a high of 16.2% for Integrity Norms. The effects shown are all statistically significant at p < .01. This includes all main effects in each of the models and interaction effects in four of the seven models. Even though the status variable was entered first in these models, its “Attributable Percent” explained is quite small. Variance attributable to status ranges from only 1.3% of total explained variance for the Department/Program Expectations scale to 11.2% for the Integrity Inhibitors scale. In contrast, the “attributable percent” explained by broad field of study and field of study are larger than what is accounted for by status. The percentage of variance explained that is attributable to broad field of study ranges from a low of 3.9% for Advisor–Advisee Relations to a high of 21.7% for RCR Resources. The variance explained by field of study ranges from a low of 7.5% for Integrity Norms to a high of 16.5% for both Integrity Inhibitors and Regulatory Quality.

Table 5.  Increment to R2 and Percent Variance Explained Attributable to Classification Variables.

R2 increment (%):

| Source | Integrity Norms | Integrity Socialization | [Lack of] Integrity Inhibitors | Advisor–Advisee Relations | Departmental Expectations | RCR Resources | Regulatory Quality |
| Status | 0.9 | 0.5 | 1.7 | 0.2 | 0.1 | 0.4 | 1.0 |
| Broad field of study | 2.1 | 1.9 | 2.2 | 0.3 | 1.0 | 2.0 | 2.1 |
| Field of study | 1.2 | 0.8 | 2.6 | 1.0 | 1.1 | 1.3 | 2.2 |
| Department/program | 6.0 | 6.0 | 9.0 | 6.1 | 6.3 | 5.6 | 7.1 |
| Status × Broad field | — | — | — | — | 0.4 | — | 1.1 |
| Status × Field | — | 1.3 | — | — | 1.0 | — | — |
| Status × Department/program | 6.0 | — | — | — | — | — | — |
| Total | 16.2 | 10.5 | 15.5 | 7.6 | 10.0 | 9.3 | 13.5 |

Attributable (%):

| Source | Integrity Norms | Integrity Socialization | [Lack of] Integrity Inhibitors | Advisor–Advisee Relations | Departmental Expectations | RCR Resources | Regulatory Quality |
| Status | 5.3 | 4.9 | 11.2 | 2.3 | 1.3 | 4.1 | 7.4 |
| Broad field of study | 12.7 | 17.8 | 14.0 | 3.9 | 10.3 | 21.7 | 15.4 |
| Field of study | 7.5 | 7.9 | 16.5 | 13.6 | 11.1 | 14.1 | 16.5 |
| Department/program | 37.1 | 57.1 | 58.3 | 80.3 | 63.5 | 60.1 | 52.6 |
| Status × Broad field | — | — | — | — | 3.7 | — | 8.0 |
| Status × Field | — | 12.2 | — | — | 10.0 | — | — |
| Status × Department/program | 37.4 | — | — | — | — | — | — |
| Total | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |

Note. All effects shown are statistically significant, p < .01; “—” indicates that a non-significant interaction effect has been omitted from the model. RCR = responsible conduct of research.
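The hierarchical decomposition just described can be illustrated with a cell-means model: each classification variable is credited with the increase in R2 over the model entered before it. The data below are hypothetical (status, field, score) triples, and this is a sketch of the idea rather than the authors' exact estimation procedure:

```python
# Sketch of a hierarchical variance decomposition over classification
# variables. Each predictor set defines cells; the model predicts each
# score by its cell mean, and R^2 increments are differences between
# successively richer models. Data are hypothetical.

def r_squared(rows, keys):
    """R^2 of a cell-means model; cells are defined by the label
    columns whose indices are listed in `keys`."""
    grand = sum(score for *_, score in rows) / len(rows)
    ss_total = sum((score - grand) ** 2 for *_, score in rows)
    cells = {}
    for *labels, score in rows:
        cell = tuple(labels[k] for k in keys)
        cells.setdefault(cell, []).append(score)
    ss_resid = sum(
        sum((s - sum(scores) / len(scores)) ** 2 for s in scores)
        for scores in cells.values()
    )
    return 1 - ss_resid / ss_total

rows = [  # (status, field, score) - hypothetical
    ("grad", "eng", 3.9), ("grad", "eng", 4.1), ("grad", "hum", 4.4),
    ("grad", "hum", 4.2), ("faculty", "eng", 3.6), ("faculty", "eng", 3.8),
    ("faculty", "hum", 4.0), ("faculty", "hum", 4.4),
]

r2_status = r_squared(rows, [0])       # status entered first
r2_both = r_squared(rows, [0, 1])      # status + field
increment_field = r2_both - r2_status  # field net of status
```

Because a finer partition can never increase the residual sum of squares, each increment is nonnegative, which is what lets the total R2 be apportioned among the classification variables.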

Notably, the variance explained by department/program is larger in every instance. Variance attributable to department/program ranges from 52.6% for Regulatory Quality to 80.3% for Integrity Inhibitors. It is clear that in all but one instance, department/program accounts for more than half of the variance explained in each of the SOuRCe scales. The exception is the model for Integrity Norms, in which department/program is part of an interaction effect with status. Alone, the main effect of department/program accounts for 37.1% of the total variance explained. However, the interaction accounts for an additional 37.4% and, if counted together, department/program accounts for 74.5% of the explained variance in the model.

Discussion

Overall, the study results highlight important considerations in assessing the climate of research integrity within an institution. At the broadest level, we learned that it is feasible to implement the SOuRCe in large university settings across a broad range of fields of study, department types, and individual roles within these academic units. We also learned that there is meaningful variability in research integrity climate scores captured by the SOuRCe instrument. Perhaps the most interesting finding of this study concerns where the variability in research integrity climates lies: most of it is explained at the department/program level, while less is explained by individual status or by field of study.

One innovation of the current study was to survey all disciplines within the three participating universities, even those that may not engage in traditional research. This included programs in the arts and humanities, which generally have not been included in studies of research integrity, and individuals across the spectrum of different statuses, from graduate students to faculty. A methodological finding pertaining to the applicability of the SOuRCe across this breadth of disciplines is that the reliability of the SOuRCe scales is at least as strong, and in some instances stronger, in this sample than in the validation sample referenced earlier in this article. Because the validation sample was collected in academic health centers, the disciplines represented in that sample were narrower (e.g., biomedical, allied health, and to a lesser extent some social sciences disciplines) than the breadth represented here. In the present case, the range of academic disciplines is much broader and the SOuRCe remains equally reliable, with Cronbach’s alpha estimates comparable with those reported in the validation sample (Martinson et al., 2012).
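The internal-consistency reliability mentioned above is conventionally estimated with Cronbach's alpha. A minimal sketch with hypothetical item responses (not the SOuRCe items themselves):

```python
# Sketch of Cronbach's alpha for a k-item scale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
# Item responses are hypothetical 1-5 answers from five respondents.

from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per scale item, respondents aligned."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 5, 2, 4, 3]
alpha = cronbach_alpha([item1, item2, item3])
```

Values near or above .80, as reported for the SOuRCe scales in the validation work, indicate that the items covary strongly enough to be averaged into a single scale score.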
SOuRCe scales are comparable with one another in that they are computed as mean scales ranging from 1 to 5, where 1 implies “not at all” and 5 implies “completely.” Therefore, one can meaningfully and validly compare the average of one scale with another. For example, the average of Integrity Norms is 4.20 and the average of Integrity Socialization is 3.20, which is a difference of 1.0 on a 5-point scale and is greater than the standard deviation of either scale. This tells us that respondents perceive a greater expression of norms of integrity in their department or program than they perceive for support for instructing junior researchers about integrity. Other scales referring to the departmental climate—Integrity Inhibitors, Advisor–Advisee Relations, and Departmental Expectations—fall between these two extremes. The two scales that refer to climate at the institutional level—RCR Resources and Regulatory Quality—have lower scores.

Status is significantly related to all seven SOuRCe scales; that is, two or more levels of status differ significantly from one another on each of the seven scales. Interestingly, the pattern of means by status is not identical across all scales. Graduate students have higher average scores on four scales: Integrity Norms, Integrity Socialization, Department/Program Expectations, and Regulatory Quality; faculty have higher average scores on three scales: Integrity Inhibitors, Advisor–Advisee Relations, and RCR Resources. Postdoctoral scholars, by contrast, have the lowest scores on all scales except Department/Program Expectations and Regulatory Quality. In general, students see higher norms and greater fairness, while faculty see fewer threats to integrity, better mentoring, and more resources. Postdoctoral scholars see less of all these, perhaps reflecting a shorter time of exposure to the research environment and less connection to the organization beyond their own research, that is, to department/program or institution-level functions.

Both broad field of study (eight categories) and the nested field of study (forty-nine categories) are also significantly related to each of the scales.
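The 1-to-5 mean scoring that makes the scales directly comparable can be sketched as follows; the item responses and item groupings are hypothetical, not the actual SOuRCe items:

```python
# Sketch of SOuRCe-style scale scoring: each scale is the mean of its
# items on a 1 ("not at all") to 5 ("completely") response scale, so
# any two scale means share the same metric and can be compared.

def scale_score(item_responses):
    """Mean of a respondent's item answers; items range from 1 to 5."""
    assert all(1 <= r <= 5 for r in item_responses)
    return sum(item_responses) / len(item_responses)

norms = scale_score([5, 4, 4, 5])          # hypothetical Norms items
socialization = scale_score([3, 3, 4, 2])  # hypothetical Socialization items
gap = norms - socialization                # comparable: same 1-5 metric
```

Because every scale is anchored to the same response metric, a gap between two scale means is interpretable in scale units, which is the comparison the paragraph above draws between Integrity Norms and Integrity Socialization.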
Department/program (379 categories) was also significantly related to each of the seven SOuRCe scales. There is clearly a good deal of variation in the climate of integrity across departments. Thus, from an institutional leadership perspective, there is no comfort in the relatively high overall means on the SOuRCe scales, which range from 3.60 to 4.20, when there are organizational subunits with average scores falling below 2.00. There are units on these campuses whose microclimates of research integrity warrant significant attention. Multivariate analyses indicate that a greater part of the variability of integrity climate is accounted for by differences among small organizational units rather than by broader designations such as fields of study on the one hand or respondents’ organizational status on the other. If the climate of research integrity were a function primarily of professional values or of higher level institutional policies and procedures, rather than practices at the unit level, then one would expect broad field of study or field of study to explain a greater share of this variability.

Best Practices

The SOuRCe provides a gauge of research integrity and enables measurement of the climate of integrity at meaningful subunit levels of organizations. Perhaps the level of misconduct or other detrimental research practices would be a better standard, but these are more difficult to measure directly, validly, or on an ongoing basis. Climate of research integrity is not only easier to measure as an ongoing quality indicator but is also directly correlated with misconduct and detrimental research practices (Crain et al., 2012). Furthermore, SOuRCe scales lend themselves to providing comparative norms. For example, comparative data might well indicate that one’s departmental average on Integrity Norms is in the lowest quartile of departments. This should be motivation for directly acting on local Integrity Norms, including accessibility of educational opportunities, expert advice, and policies and procedures related to research integrity, as well as senior institutional leadership with regard to supporting and communicating high expectations for responsible research.

Research Agenda

Having shown the feasibility of using this tool to collect research integrity climate information on a breadth of department types in these universities, it should be acknowledged that there are additional unknowns yet to be explored. For example, although there is evidence that reporting and feedback of organizational data can foster appropriate organizational change (Leape, 2010), we do not yet know how effective such a system will be in bringing about appropriate organizational change to foster research integrity in academic settings. In October 2013, several of the coauthors of this report embarked on a 2-year research project within the Department of Veterans Affairs to conduct a pilot randomized trial using the SOuRCe in a reporting and feedback system, testing the effectiveness of such a system for bringing about organizational change (http://projectreporter.nih.gov/project_info_description.cfm?aid=8486041&icde=20304592&ddparam=&ddvalue=&ddsub=&cr=1&csb=default&cs=ASC). Other efforts underway that will facilitate future research in this area include initiatives to implement the SOuRCe as a semiautomated tool on the EthicsCORE website (http://nationalethicscenter.org/). EthicsCORE is an NSF-initiated national online ethics resource center and digital library hosted at the National Center for Professional and Research Ethics (NCPRE) at the University of Illinois at Urbana–Champaign. Making the SOuRCe available via EthicsCORE with supportive features (that are in development) is one way to make the tool more readily accessible to institutional end users such as graduate deans, research deans, organizational culture and climate researchers, research policy leaders (e.g., Research Integrity Officers, human subject protections personnel), and so forth. Another aspect of this work in development that would facilitate future research is a repository of SOuRCe scores from willing institutions that engage with the SOuRCe tool via EthicsCORE. Over time, this would allow for providing comparative SOuRCe data on an increasingly broad representation of institutions and departments, allowing end users to benchmark their institutional profiles. We believe another value of SOuRCe score feedback resides not exclusively within an organizational or institutional unit but also in end users being able to make comparisons with like departments at peer institutions. The data presented here come from only three of the seven schools that participated in the initial CGS PSI project, but all seven schools ultimately implemented the tool. Such evidence of SOuRCe adoption provides an indication of potential interest and usability. The extent to which comparative data among schools may provide a mechanism to compare SOuRCe scores across institutions and units is not yet known but is a ripe opportunity for future research.

Educational Implications

Historically, there has been a plethora of RCR educational efforts but fewer efforts to systematically evaluate their effectiveness. We believe the SOuRCe offers a tool both to foster integrity and to assess the impact of RCR educational efforts by focusing on the importance of research integrity climates as indicators of the underlying culture in an organization. Given the variability in SOuRCe scores attributable to departments, the study findings also indicate that tailored, locally specific solutions to foster research integrity may be more likely to succeed than more global, “one-size-fits-all” solutions. The SOuRCe provides a tool for differentiating between the successes of these two contrasting approaches. The wide variability of results across organizational subunits subject to the same policies suggests that there is variability in the underlying objective rules and standards of these organizational subunits, whatever the uniformity of regulations at the institutional, professional, national, or even international levels. Solutions, pedagogical and structural, for the challenges of research integrity need to be customized to the variability of climate at the subunit level. Certainly, these findings suggest that educational efforts tailored to individual research needs would seem more appropriate than a general institutional education approach. The SOuRCe is a reliable and valid tool for assessing the impact and success of such customized solutions.

Appendix

Department/Graduate Program Names

Academic Outreach Program Accounting and Information Systems Acoustics Administration of Justice Advertising, Public Relations, and Retailing Aerospace Engineering African American and African Studies African Languages and Literature Afro-American Studies Agricultural and Applied Economics Agricultural and Extension Education Agricultural Economics and Rural Sociology Agricultural Engineering Agricultural, Food, and Resource Economics Agroecology Agronomy American Studies Anatomy Animal Science Anthropology Applied Engineering Sciences Architectural Engineering Architecture Art Art and Art History Art Education Art History Astronomy Astronomy and Astrophysics Atmospheric and Oceanic Sciences Bacteriology Behavioral Science and Education Biobehavioral Health Biochemistry Biochemistry and Molecular Biology Bioengineering Biological Sciences Program Biological Systems Engineering Biology Biomedical Engineering Biomedical Laboratory Diagnostics Program Biomolecular Chemistry Biophysics Biosystems and Agricultural Engineering Biotechnology Botany Business Business Administration Cancer Biology Cartography and Geographic Information Systems Cell and Developmental Biology

Cell and Molecular Biology Cellular and Molecular Pathology Center for Advanced Study of International Development Center for Ethics and Humanities in the Life Sciences Center for Latin American and Caribbean Studies Center for Microbial Ecology Chemical Engineering Chemical Engineering and Materials Science Chemistry Chicano—Latin Studies Program Chinese Civil and Environmental Engineering Classics Clinical Investigation Communication Communication Arts Communication Arts and Sciences Communication Sciences and Disorders Communicative Disorders Community, Agriculture, Recreation, and Resource Studies Comparative Biomedical Sciences Comparative Literature Comparative Medicine Comparative Medicine and Integrative Biology Composite Materials and Structures Center Computer Science Computer Science and Engineering Conservation Bio and Sustainable Dev Counseling Psychology Counseling, Educational Psychology, and Special Education Counselor Education, Counseling Psychology, and Rehabilitation Education Creative Writing Criminal Justice Crop and Soil Sciences Curriculum and Instruction Curriculum and Teaching Dairy and Animal Science Dairy Science Development Ecology Ecology, Evolutionary Biology, and Behavior Program Economics Educational Leadership and Policy Analysis Education and Mathematics Education Policy Studies Educational and School Psychology and Special Education Educational Administration Educational Policy Educational Policy Studies Educational Psychology Electrical and Computer Engineering Electrical Engineering Endocrinology—Reproductive Physiology

Energy and Geo-Environmental Engineering Energy and Mineral Engineering Engineering Engineering Mechanics Engineering Science and Mechanics English Entomology Environmental Chemistry and Technology Environmental Monitoring: Environment and Resources Environmental Pollution Control Environmental Science and Policy Program Environmental Toxicology Epidemiology Family and Child Ecology Family and Community Medicine Family Medicine Finance Fisheries and Wildlife Food Science Food Science and Human Nutrition Forensic Science Forest Resources Forestry French French Studies French, Classics and Italian Genetics Genetics Program Geography Geological Engineering Geological Sciences Geology Geophysics Geosciences German Global and Area Studies Health Evaluation Sciences Health Policy and Administration Hebrew and Semitic Studies History History of Science, Medicine, and Technology Horticulture Hospitality Business, School Hotel, Restaurant, Institutional Management Human Development and Family Studies Human Ecology Human Medicine Humanities Immunology and Infectious Diseases Industrial and Manufacturing Engineering Industrial Engineering Information Sciences and Technology Integrative Biosciences Integrative Management Program

Internal Medicine International Affairs Italian James Madison Japanese Journalism Journalism and Mass Communication Julian Samora Research Institute Kellogg Biological Station Kinesiology Labor and Industrial Relations Labor Studies and Employment Relations Landscape Architecture Languages and Cultures of Asia Large Animal Clinical Sciences Latin American, Caribbean, and Iberian Studies Law Learning and Performance Systems Legal Institutions Library and Information Studies Life Sciences Communication Limnology and Marine Science Linguistics Linguistics and Applied Language Studies Linguistics and Germanic, Slavic, Asian, and African Languages Lyman Briggs Management Marketing and Supply Chain Management Mass Communications Materials Materials Engineering Materials Science Materials Science and Engineering Mathematics MBA Program Mechanical Engineering Media and Information Studies Medical Microbiology and Immunology Medical Physics Medicine Medicine Other Meteorology Microbiology Microbiology and Immunology Microbiology and Molecular Genetics Molecular and Cellular Pharmacology Molecular and Environmental Toxicology Molecular Medicine Molecular Toxicology Music Music: Education Music: Performance National Food Safety-Toxicology Center


National Superconducting Cyclotron Laboratory Neurology and Ophthalmology Neuroscience Neuroscience Program Nondegree Nuclear Engineering and Engineering Physics Nuclear Engineering Nursing Nutrition Nutritional Sciences Obstetrics, Gynecology, and Reproductive Biology Occupational Therapy Osteopathic Surgical Specialties Packaging Pathobiology and Diagnostic Investigation Pediatrics and Human Development Pharmaceutical Sciences Pharmacology Pharmacology and Toxicology Pharmacy Philosophy Physical Medicine and Rehabilitation Physics Physics and Astronomy Physiology Planning, Design, and Construction Plant Biology Plant Breeding and Genetics Plant Breeding and Plant Genetics Plant Pathology Plant Physiology Plant Research Laboratory Political Science Population Health Portuguese Psychiatry Psychology Public Affairs Public Health Sciences Quality and Manufacturing Management Radiology Recreation, Park, and Tourism Management Rehabilitation Psychology Religious Studies Residential College in the Arts and Humanities Rural Sociology Scandinavian Studies School of Music Science Science and Mathematics Education, Science Education Science, Engineering, and Technology Second Language Acquisition Second Language Studies

Slavic Languages and Literatures Small Animal Clinical Sciences Social and Administrative Sciences in Pharmacy Social Science Social Welfare Social Work Sociology Soil Science Southeast Asian Studies Spanish Spanish and Portuguese Spanish, Italian, and Portuguese Special Special Education Statistics Statistics and Probability Surgery Teacher Education Telecommunication, Information Studies, and Media Theater Theater and Drama Theater Arts University Outreach and Engagement Urban and Regional Planning Veterinary and Biomedical Sciences Veterinary Medicine Visual Arts Water Resources Management Wildlife Ecology Writing, Rhetoric, and American Cultures Zoology

Acknowledgments

We acknowledge the contributions and support of Eva J. Pell, former senior vice president for research and dean of the Graduate School at The Pennsylvania State University and present undersecretary for science of the Smithsonian Institution, and Martin T. Cadwallader, vice chancellor for research and dean of the Graduate School, University of Wisconsin–Madison. In addition, we would like to recognize the support of Suzanne Adair, assistant dean for graduate student affairs at The Pennsylvania State University, and Hank Foley, former dean of the graduate school at The Pennsylvania State University and present executive vice president for academic affairs for the University of Missouri System.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded in part by a grant to Michigan State University (K. L. Klomparens) from the Council of Graduate Schools, Project on Scholarly Integrity.

References

Bell, N. (2010). Graduate enrollment and degrees: 1999 to 2009, Appendix B. Washington, DC: Council of Graduate Schools. Retrieved from http://www.cgsnet.org/ckfinder/userfiles/files/R_ED2009.pdf

Crain, A. L., Martinson, B. C., & Thrush, C. R. (2012). Relationships between the Survey of Organizational Research Climate (SORC) and self-reported research practices. Science and Engineering Ethics, 19, 835-850. doi:10.1007/s11948-012-9409-0

Ehrhart, M. G., Schneider, B., & Macey, W. H. (2013). Organizational climate and culture: An introduction to theory, research, and practice. New York, NY: Routledge.

Heitman, E., Anestidou, L., Olsen, C., & Bulger, R. E. (2005). Do researchers learn to overlook misbehavior? Hastings Center Report, 35(5), 49.

Leape, L. L. (2010). Transparency and public reporting are essential for a safe health care system. New York, NY: The Commonwealth Fund.

Martinson, B. C., Thrush, C. R., & Crain, A. L. (2012). Development and validation of the Survey of Organizational Research Climate (SORC). Science and Engineering Ethics, 19, 813-834. doi:10.1007/s11948-012-9410-7

Mumford, M. D., Murphy, S. T., Connelly, S., Hill, J. H., Antes, A. L., Brown, R. P., & Devenport, L. D. (2007). Environmental influences on ethical decision making: Climate and environmental predictors of research integrity. Ethics & Behavior, 17, 337-366. doi:10.1080/10508420701519510

National Institutes of Health. (2013, December). NIH grants policy statement. Retrieved from http://grants.nih.gov/grants/policy/nihgps_2013/nihgps_ch4.htm#human_subjects_protection_education

National Research Council. (2002). Integrity in scientific research: Creating an environment that promotes responsible conduct. Washington, DC: The National Academies Press.

National Science Foundation. (2013, January). Grant proposal guide. Retrieved from http://www.nsf.gov/pubs/policydocs/pappguide/nsf13001/aag_4.jsp

National Science Foundation. (2014). Cultivating Cultures for Ethical STEM (CCE STEM) program solicitation (NSF 14-546). Retrieved from http://www.nsf.gov/pubs/2014/nsf14546/nsf14546.htm

Schein, E. H. (2000). Sense and nonsense about culture and climate. In N. M. Ashkanasy, C. P. M. Wilderom, & M. F. Peterson (Eds.), Handbook of organizational culture and climate (pp. xxiii-xxx). Thousand Oaks, CA: SAGE.

Sieber, J. E. (2007). Institutional introspection. Journal of Empirical Research on Human Research Ethics, 2(4), 1-2. doi:10.1525/JERHRE.2007.2.4.1

Thrush, C. R., Vander Putten, J., Rapp, C. G., Pearson, L. C., Berry, K. S., & O'Sullivan, P. S. (2007). Content validation of the Organizational Climate for Research Integrity (OCRI) survey. Journal of Empirical Research on Human Research Ethics, 2(4), 35-52. doi:10.1525/JERHRE.2007.2.4.35

Treviño, L. K. (1990). A cultural perspective on changing and developing organizational ethics. In R. Woodman & W. Passmore (Eds.), Research in organizational change and development (Vol. 4, pp. 195-230). Greenwich, CT: JAI Press.

Walumbwa, F. O., Hartnell, C. A., & Oke, A. (2010). Servant leadership, procedural justice climate, service climate, employee attitudes, and organizational citizenship behavior: A cross-level investigation. Journal of Applied Psychology, 95, 517-529. doi:10.1037/a0018867

Authors' Biographies

James A. Wells is a principal consultant at Jim Wells Consulting in Edgerton, Wisconsin. His current professional activity focuses on determining the prevalence of misconduct, assessing integrity climates in organizations, and using comparative integrity climate results to stimulate organizational change. He served as principal author, participated in developing the SOuRCe instrument, performed the data analysis, and drafted all sections of the manuscript. At the time of the data collection, he was director, Office of Research Policy at the University of Wisconsin–Madison.

Carol R. Thrush is an associate professor of educational development at the University of Arkansas for Medical Sciences in Little Rock. Her current professional activities focus on faculty development and graduate medical education consultation, as well as educational research and assessment of professional and research integrity climates in academic health care settings. She participated in developing the SOuRCe instrument and in writing the introduction, methodology, and discussion of the manuscript.

Brian C. Martinson is a senior research investigator with HealthPartners Institute for Education and Research in Minneapolis, Minnesota. His current research focuses on fostering research integrity through greater attention to integrity climates in organizations. He participated in developing the SOuRCe instrument, analyzing the data, and writing the introduction, methodology, and discussion of the manuscript.

Terry A. May is the faculty conflict of interest officer at Michigan State University. His professional activity focuses on graduate student training in the responsible conduct of research and the nature and practice of scientific integrity. He assisted in developing the SOuRCe instrument and provided reviews and commentary on drafts of the manuscript at every stage.

Michelle Stickler is the executive director, Research Subjects Protection at Virginia Commonwealth University in Richmond. Her current activities focus on ethical and compliance issues pertaining to human and animal research. She participated in data collection and reviewed the manuscript. At the time of the data collection, she was associate director, Office of Research Protections at The Pennsylvania State University.

Eileen C. Callahan is the director of Graduate Student Professional Development at the University of Wisconsin–Madison. Her activities include education for graduate students and postdoctoral researchers on the responsible conduct of research. She participated in data collection and reviewed the manuscript.

Karen L. Klomparens has served as dean of the Graduate School and associate provost for graduate education at Michigan State University since 1997, where she is professor of plant biology. Her professional activity focuses on graduate student training in the responsible conduct of research. She was principal investigator of the Project on Research Integrity grant, participated in data collection, and reviewed the manuscript.
