Assessing Knowledge Sharing Among Academics: A Validation of the Knowledge Sharing Behavior Scale (KSBS)

Evaluation Review 1-28 © The Author(s) 2014 Reprints and permission: sagepub.com/journalsPermissions.nav DOI: 10.1177/0193841X14539685 erx.sagepub.com

T. Ramayah,¹ Jasmine A. L. Yeap,¹ and Joshua Ignatius²

¹ School of Management, Universiti Sains Malaysia, Penang, Malaysia
² School of Mathematical Sciences, Universiti Sains Malaysia, Penang, Malaysia

Corresponding Author: Jasmine A. L. Yeap, School of Management, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia. Email: [email protected]

Abstract

Background: There is a belief that academics tend to hold on tightly to their knowledge and intellectual resources. However, not much effort has been put into the creation of a valid and reliable instrument to measure knowledge sharing behavior among academics. Objectives: To apply and validate the Knowledge Sharing Behavior Scale (KSBS) as a measure of knowledge sharing behavior within the academic community. Subjects: Respondents (N = 447) were academics from arts and science streams in 10 local, public universities in Malaysia. Measures: Data were collected using the 28-item KSBS, which assessed four dimensions of knowledge sharing behavior, namely written contributions, organizational communications, personal interactions, and communities of practice. Results: The exploratory factor analysis showed that the items loaded on the dimension constructs that they were supposed to represent, thus supporting construct validity. A within-factor analysis revealed that each set of items representing its intended dimension loaded on only one construct, establishing convergent validity. All four dimensions were not perfectly correlated with each other or with organizational citizenship behavior, supporting discriminant validity. All four dimensions correlated with organizational commitment, confirming predictive validity. Furthermore, all four factors correlated with both tacit and explicit sharing, which confirmed their concurrent validity. All measures also possessed sufficient reliability (α > .70). Conclusion: The KSBS is a valid and reliable instrument that can be used to formally assess the types of knowledge artifacts residing among academics and the degree of knowledge sharing in relation to those artifacts.

Keywords: knowledge sharing, academics, higher educational institutions, instrument validation, validity, reliability

Introduction

The knowledge-based view of the organization considers the organization as a knowledge-creating entity (Prahalad & Hamel, 1990) and argues that effective knowledge management is a crucial factor for success in every sector, including higher education. In this respect, higher educational institutions can aptly be likened to knowledge-creating entities. Essentially, the primary mission of higher educational institutions is the creation, preservation, integration, dissemination, and application of knowledge. For this reason, higher educational institutions engage in a significant level of knowledge management activities covering phases such as knowledge identification, creation, organization, storage, sharing, use, and maintenance. Among these phases, knowledge sharing has been claimed to be the most important part of knowledge management (Bock & Kim, 2002). Knowledge sharing is a set of individual behaviors involving sharing one's work-related knowledge and expertise with other members within one's organization, which can contribute to the ultimate effectiveness of the organization (Yi, 2009). Simply put, knowledge sharing is the behavior of disseminating one's acquired knowledge to other members within one's organization (Ryu, Ho, & Han, 2003). In the context of higher educational institutions, knowledge sharing behavior refers to academics sharing their work-related knowledge and expertise with other faculty members within the university, which can help elevate the standard of the university. The knowledge shared by academics can be explicit as well as tacit. Tacit knowledge is not easily expressed or communicated in visual or verbal form because it is subjective, context specific, and difficult to capture. In contrast, explicit knowledge is objective, can be communicated via visual or verbal means, and is more easily codified (Kim & Ju, 2008).

Knowledge sharing in the academic world is of particular concern because academic institutions now face increasing faculty demands for sharing quality resources and expertise (Kim & Ju, 2008). Given that the academic community thrives on intellectual prowess and the accumulation and dissemination of a critical mass of knowledge, knowledge sharing needs to be continuously achieved to justify the existence of higher educational institutions. In this manner, academics, as faculty members in academic institutions, are one of the most important constituencies representing their parent institutions because of their knowledge production and reuse (Kim & Ju, 2008). They have the responsibility of generating knowledge through research and disseminating that knowledge through teaching (Ramachandran, Chong, & Ismail, 2009).

Knowledge Sharing in Academia

Some researchers have noted that there is a relatively weak desire or willingness to share knowledge for achieving common goals in academia compared to profit-oriented organizations (Kong, 1999). For one, Cheng, Ho, and Lau (2009) acknowledged that knowledge hoarding instead of knowledge sharing could be more prevalent in academic institutions, though this is a dilemma that occurs in all organizations. Within academia itself, it is believed that knowledge interactions between academics tend to be limited to similar disciplines or clustered among those from related disciplines (Park & Moultrie, 2010). For instance, university biologists would not only have knowledge interactions with colleagues in their own faculty but would also share their knowledge with academics from other natural science disciplines, such as chemistry, physics, or the medical faculty. Kim and Ju (2008) were concerned that academics seemed to place a higher priority on individual scholarly achievement and teaching than on sharing common visions toward university goals and objectives. There seemed to be a tendency for academics to be independent, individualistic, and autonomous while maintaining an objective distance from the work of their peers (Koppi et al., 1998). They fail to realize that, in actual fact, efficient scholarly collaboration among faculty members would increase their effectiveness instead of hampering it. Instead, knowledge sharing is largely perceived to be costly as it may cause (1) jeopardy to one's self-interest in terms of organizational status and rewards, (2) opportunity costs in terms of time and effort, (3) potential abuse of knowledge by the recipient, and (4) loss of power that cannot be regained or controlled (Casimir, Lee, & Loon, 2012; Cheng, Ho, & Lau, 2009). Although there is reason to believe that academics (some, if not the majority) tend to hold on tightly to their knowledge and intellectual resources, to the best of our knowledge there is no formal instrument to verify this claim.

Numerous past studies have examined factors influencing knowledge sharing, albeit in various environments (Aizpurúa, Saldaña, & Saldaña, 2011; Bartol & Srivastava, 2002; Bresman, Birkenshaw, & Nobel, 1999; Casimir et al., 2012; Davenport & Prusak, 1998; Dong, Liem, & Grossman, 2010; Earl, 1997; Gross & Kluge, 2012; Ipe, 2003; Kim, 2000; Kim & Byun, 2001; Kim & Lee, 2006; Michailova & Husted, 2003; Özbebek & Toplu, 2011; Wang, Noe, & Wang, 2014; Wickramasinghe & Widyaratne, 2012; Yam, Tang, & Chan, 2012). Nevertheless, several researchers have chosen to focus their studies specifically on knowledge sharing in academic institutions. The past few years have witnessed studies by researchers such as Kim and Ju (2008), who identified and analyzed major factors for knowledge sharing among faculty members in a higher educational institution. Similarly, Sohail and Daud (2009) examined the factors and barriers that contributed to successful knowledge sharing among university teaching staff. Cheng et al. (2009) looked into organizational, individual, and technology factors that could possibly influence knowledge sharing among academics in a private university. Using the theory of planned behavior as their foundation, Goh and Sandhu (2013) evaluated the impact of emotional factors, namely affective commitment and affective trust, on academics' knowledge sharing behavior. On the other hand, Dong, Liem, and Grossman (2010) investigated knowledge sharing intentions in Vietnamese organizations using the theory of reasoned action as the foundation of their study, along with other additional variables. Apart from factors or barriers to knowledge sharing, Landry, Saihi, Amara, and Ouimet (2010) dealt with the issue of how academics manage their portfolio of knowledge transfer activities. Fullwood, Rowley, and Delbridge (2013) profiled the attitudes of and intentions toward knowledge sharing of U.K. academics.

Some researchers have incorporated the technology aspect in their studies on knowledge sharing in academia. For instance, Mittal (2008) focused on the various knowledge activities of faculty members to understand the extent of information systems' impact on those activities and how those activities contribute to value creation and knowledge management. Hou, Sung, and Chang (2009) explored the behavioral patterns of an online knowledge sharing discussion activity among teachers using a problem-solving strategy, while Reychav and Te'eni (2009) took an interesting turn in their research by studying the processes of knowledge sharing and the influence of information technology usage at an academic conference. Others chose to expand the scope of their studies beyond knowledge sharing to a broader context not restricted to academics alone. For instance, Ramachandran, Chong, and Ismail (2009) investigated the practices of six knowledge management processes (knowledge creation, capture, organization, storage, dissemination, and application) in public and private universities. Howell and Annasingh (2013) assessed whether path dependency exists in relation to cultural expectations of both knowledge generation and sharing among academic staff in U.K. universities. Instead of focusing on academics, Yuen and Majid (2007) investigated the knowledge sharing behavior of undergraduate students in Singapore, covering issues such as the purpose of sharing knowledge, communication channels preferred for sharing knowledge, and factors that inhibit or motivate knowledge sharing among the students. Along the same lines, Wei, Choy, Chew, and Yen (2012) conducted a descriptive study comparing the knowledge-sharing patterns of Malaysian undergraduate students in public and private universities.

Objective of the Study

Beyond the studies discussed, little effort has been devoted to the creation of a valid and reliable measurement instrument of knowledge sharing behavior specifically within the academic community. At this point, the measurement of knowledge sharing behavior is still a relatively new area of research for which a definitive measure has yet to emerge (Chalkiti, 2012; Yi, 2009). According to Yi (2009), most of the methods used to measure individuals' knowledge sharing behavior (e.g., number counting, just asking, and taxonomies based on knowledge/technology type) are problematic in one way or another. For instance, in number counting and taxonomy-based methods, knowledge is regarded as a
product while important processes such as the informal sharing of knowledge are ignored (Huysman & De Wit, 2004). In response to these problems, Yi (2009) developed the Knowledge Sharing Behavior Scale (KSBS). The results of Yi’s study proved that the KSBS is a well-developed instrument with sufficient evidence of its dimensionality, reliability, and validity. Validity and reliability of measures are key indicators of a measuring instrument’s quality (Kimberlin & Winterstein, 2008). Cowles and Crosby (1986) asserted that measure validation cannot be accomplished through merely one study. According to the authors: There is also a need to establish that the measurement model holds across the full range of the underlying variables. If measures are found to behave differently when tested across time and across population groups, this suggests that either the measure is invalid or the concept, poorly understood. It is possible that a measurement instrument taps different constructs when applied at different times or to different population groups. Thus, repeated tests of the factorial, convergent, and discriminant validity of tests need to be made. (Cowles & Crosby, 1986, p. 392)

In other words, the validity of standardized instruments must be established through repeated application of scales in different contexts and among different population groups. Following its pretesting involving 120 distance master of business administration students and 92 working employees, Yi's KSBS was then validated in a business organization setting using 196 employees of a large American high technology company. Since then, the scale has rarely been tested in contexts other than business ones, such as the academic context. Thus, the aim of this study is to apply the KSBS developed by Yi (2009) within the academic context by validating the KSBS among academics in higher educational institutions. This study is a departure from Yi and other previous studies that have applied the KSBS in business entities (e.g., Aizpurúa et al., 2011; Özbebek & Toplu, 2011; Palacios-Marqués, Peris-Ortiz, & Merigó, 2013; Suppiah & Sandhu, 2011) because it is focused on an entirely knowledge-intensive industry (i.e., the universities), whereby managing knowledge is the main activity and product. The KSBS was originally developed and validated in the United States. Other studies that followed suit were largely conducted in European countries such as the United Kingdom, Germany, and Spain. These developed countries are known for their individualistic cultures that emphasize personal achievement at the expense of group goals, resulting in a strong sense
of competition (The Hofstede Centre, n.d.). In contrast, this study focused particularly on academics in higher educational institutions located in Malaysia, a newly industrialized country in the Southeast Asian region with an economy that is more advanced and developed than those of the developing world but that has yet to attain the full status of a developed country. Unlike the United States, where the KSBS was initially developed and tested, Malaysia is known as a collectivist culture whereby family and work group goals are prioritized above individual needs or desires (The Hofstede Centre, n.d.). As Malaysia steadily transitions into a knowledge-based nation, academic institutions, especially the public universities, are confronted with increasing demands for sharing quality resources and expertise. Consequently, knowledge sharing in academia has become a rising concern. In this respect, a validated KSBS would provide researchers with an appropriate instrument to formally measure the types of knowledge artifacts residing among the academics and to investigate the degree of knowledge sharing among the academics in relation to those artifacts. Findings gathered from the use of the instrument can be used by authorities in formulating policies that can guide the nation toward its aspirations of becoming a knowledge-based society.

KSBS

The KSBS developed by Yi (2009) consists of 28 items measuring four dimensions of knowledge sharing behavior, namely written contributions (5 items), organizational communications (8 items), personal interactions (8 items), and communities of practice (7 items), on a five-point frequency response scale. The four dimensions classified in the scale were developed based on four major mechanisms or modes for sharing individual knowledge within organizations as identified by Bartol and Srivastava (2002). According to Bartol and Srivastava, knowledge sharing behavior includes (1) contributing knowledge to organizational databases, (2) sharing knowledge in formal interactions within or across teams or work units, (3) sharing knowledge in informal interactions, and (4) sharing knowledge within communities of practice. Since its introduction, the KSBS has received a fair amount of attention among scholars researching knowledge management processes. It is only over the last few years that the use of the scale has gained momentum among researchers. The KSBS has been applied in various industries across different countries. Nevertheless, the depth of the scale's usage has varied
according to researchers. Some researchers (e.g., Aizpurúa et al., 2011; Özbebek & Toplu, 2011; Palacios-Marqués et al., 2013; Ramayah, Yeap, & Ignatius, 2013) have used the scale in its complete form (i.e., four dimensions, 28 items), while others have utilized only a portion of the scale (e.g., Eaves, 2014; Gross & Kluge, 2012; Jyoti, Gupta, & Kotwal, 2011; Suppiah & Sandhu, 2011; Wickramasinghe & Widyaratne, 2012).

In Özbebek and Toplu's (2011) study, knowledge sharing, as measured fully by the four dimensions constituting the KSBS, was tested for its relationship with empowerment among employees of fast-moving consumer goods companies in Turkey. Aizpurúa, Saldaña, and Saldaña (2011) used the KSBS in testing the relationship between organizational learning and knowledge sharing in the Spanish hotel industry, whereas Ramayah, Yeap, and Ignatius (2013) applied the KSBS in their study on the factors propelling knowledge sharing among academicians in higher learning institutions of Malaysia. As for Palacios-Marqués, Peris-Ortiz, and Merigó's (2013) study, they adopted all 28 items in the KSBS as measures of knowledge transfer in examining the effect of knowledge transfer on firm performance within the Spanish biotechnology and telecommunications industries. In stark contrast, Jyoti, Gupta, and Kotwal (2011) used only 3 items from the KSBS to measure the knowledge sharing component of knowledge management practices in a study involving Indian telecommunications organizations. Eaves (2014) measured tacit and explicit knowledge sharing behaviors across four leading U.K. communication sector operators using a mixture of items adapted from the KSBS and other sources. Similarly, Gross and Kluge (2012) used some of the items from the KSBS in developing their questionnaire to examine antecedents of knowledge sharing behavior and its impact on shared mental models in German steel production companies. In studying the effects of knowledge sharing mechanisms on voluntary knowledge sharing in software development project teams in Sri Lanka, Wickramasinghe and Widyaratne (2012) generated a list of 13 knowledge sharing mechanisms based partly on the KSBS. On the other hand, Suppiah and Sandhu (2011) used the KSBS to a larger extent by incorporating two of its dimensions, specifically organizational communications and personal interactions, in their study that examined the influence of organizational culture types on tacit knowledge sharing behavior in Malaysian organizations.

Not all of the four studies that utilized the KSBS in its full length (28 items, four dimensions) scrutinized the scale for its dimensionality, validity, and reliability. For one, Özbebek and Toplu (2011) merely checked the items in the KSBS for their reliability. In contrast, the other researchers (e.g.,
Aizpurúa et al., 2011; Palacios-Marqués et al., 2013; Ramayah et al., 2013) used more stringent methods, such as conducting confirmatory factor analysis on the KSBS measurement model to assess the scale's psychometric properties. All in all, the results of the validation procedures carried out by the studies that applied the KSBS in its full length demonstrated that the scale generally had good properties (i.e., factor loadings above .60, Cronbach's αs above .70, fit indices above .90).

Written Contributions

This dimension of knowledge sharing behavior captures behaviors of academics contributing their knowledge in the form of ideas, information, and expertise through written documentation instead of dialogue (Yi, 2009). This includes activities such as publishing articles in journals, magazines, or newsletters, posting ideas and thoughts to department online databases or discussion boards, and submitting reports which can benefit other fellow academics, the university, and society at large. In short, these activities are examples of explicit knowledge transmitted through a person-to-document channel. Landry et al. (2010) defined it as publication of codified scientific knowledge transferred in the pool of open science. This type of knowledge sharing is usually noncommercial in nature as there are no contractual agreements between the academics and knowledge users. Though noncommercial in nature, the extrinsic motivation for sharing knowledge is generally high because knowledge sharing is perceived to be externally controlled (Kaser & Miles, 2001). What this means is that contributions to the knowledge databases (e.g., journal articles) can be easily tracked, accessed, evaluated, and recorded; therefore, academics can be assured that their knowledge sharing will not be ignored or devalued by the university and, most importantly, that it will be rewarded later. For instance, academics who generate excellent research outputs are rewarded not only by their higher educational institutions but also by the government through awards such as the nation's Top Academician Award (Ramachandran et al., 2009).

Organizational Communications

This dimension of knowledge sharing behavior refers to behaviors of sharing knowledge through formal social interactions of a person-to-group channel (Yi, 2009), for instance, academics holding team, department, or faculty meetings or participating in brainstorming sessions to generate
ideas, thoughts, and solutions from each other. Wickramasinghe and Widyaratne (2012) found brainstorming and collective problem solving to be the most frequently used knowledge sharing mechanism among software development project teams, while workshops were listed as the least popular method. The amount of knowledge shared by the academics is commensurate with their willingness to contribute to the success of their university as well as the commitment they have toward the university (Hislop, 2003; MacNeil, 2003). The more academics feel committed to and believe that their contributions will be valuable to the university (Cabrera & Cabrera, 2002) and can help the university as a whole meet its business objectives, the higher their tendency to share their ideas, suggestions, and expertise with one another. Moreover, since this type of knowledge sharing behavior occurs in more formalized settings such as formal meetings or workshops, social interactions such as discussions in meetings or presentations in seminars are easily noticed and remembered by superiors and colleagues (Bartol & Srivastava, 2002), thus making the behaviors more likely to be considered and rewarded. In this sense, the high visibility factor could decide the extent of the academics' knowledge contributions.

Personal Interactions

Contrary to organizational communications, personal interactions involve the sharing of knowledge through informal social interactions among individuals (Yi, 2009). Examples of this type of (tacit) knowledge sharing include colleagues chatting in the hallway, over lunch, on the phone, or even online, and helping fellow academics who approach them. Knowledge sharing of this kind usually occurs naturally or is done voluntarily. The aim of sharing knowledge is to help other academics with specific problems, to help them work better and more efficiently, to minimize risks or avoid trouble, or to let others share their genuine passion and excitement about some specific subject (Yi, 2009). Often enough, this type of knowledge sharing can be very productive because such informal, unplanned, or unscheduled knowledge exchanges permit participants to share knowledge that would not have been appropriate to share in a formal context (Antal & Richebé, 2009). For this dimension, the willingness to share generally depends on the scope and quality of personal relationships that the individual has. As observed by Kubo, Saka, and Pam (2001), the larger the personal networks and the better the personal relationships an individual has, the greater the chance that the individual will share knowledge with people he or she knows in his or her social
networks. However, Bartol and Srivastava (2002) noted that rewarding this type of knowledge sharing behavior is more difficult because these informal social interactions, such as conversations during lunchtime, are hard for the organization to notice and evaluate.

Communities of Practice

Under this dimension, the sharing of knowledge takes place within a community network that comprises voluntary groups of academics communicating around a topic of common interest in a nonroutine and personal way, as previously described for personal interactions. The difference between knowledge sharing through personal interactions and communities of practice is that the knowledge shared in communities of practice is exchanged through informal social interactions of a person-to-group channel instead of on a person-to-person basis. As this type of tacit knowledge sharing is based on the general expectation of reciprocity, it is often referred to as a social exchange relationship-based behavior (Kaser & Miles, 2001). An individual shares his or her knowledge expecting reciprocity, which is based on the trust that others will also share their knowledge because both parties share common areas of interest, shared passions, specific shared problems, and so on. Casimir, Lee, and Loon (2012) found through their study that affect-based trust functions as a building block for social capital and social exchange; in other words, people's feelings toward their organizations and colleagues will likely influence their knowledge sharing behaviors. Although this type of knowledge sharing behavior may be supported by the university, the university is under no compulsion to convey any acknowledgment of it (Kaser & Miles, 2001). Therefore, the motivation to share knowledge through communities of practice revolves around intrinsic rewards (Kaser & Miles, 2001) such as the opportunity to strengthen relationships with fellow academics or to build expertise and feelings of competence (e.g., by exchanging ideas, creating solutions, and sharing experiences).

Method

Population and Sample

This is an empirical study that is quantitative in nature, as it involves the validation of a survey instrument. The population for this study comprised all academics from public universities in Malaysia. As it was not feasible to
include all universities in the country in the sample, only several were selected based on judgment. A nonprobability sampling technique was employed with care being given to cover all levels of academic positions (i.e., professor, associate professor, senior lecturer, lecturer, and tutor) in the sampling process. A total of 447 responses were usable, collected from academics representing both arts and science streams in 10 local, public universities.

Measurement

Data were collected through interviews and a self-administered questionnaire featuring the KSBS and several other items gleaned from existing measures in the literature. Table 1 presents the measures used in the study and their respective sources. Referring to Table 1, the knowledge sharing behavior construct is measured through 28 items represented by four dimensions, namely Written Contributions, Organizational Communications, Personal Interactions, and Communities of Practice. The focus is on these four dimensions of knowledge sharing, as they are the measures to be validated, while the other constructs (i.e., Organizational Commitment, Organizational Citizenship Behavior, Explicit and Tacit Sharing) are merely used to facilitate the validation process. In the original KSBS, items measuring the four dimensions were anchored on a 5-point Likert-type response scale ranging from 1 = never, 2 = rarely, 3 = sometimes, 4 = often to 5 = always. However, in this study, a 7-point scale (1 = never to 7 = always) was used instead, as it provides more variance and is able to detect smaller differences than a 5-point scale.
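As an illustration only (not part of the original study), the sketch below shows how KSBS responses collected on this 7-point scale might be scored: each dimension's composite is the mean of its items. The file name and item column names are assumptions for the example.

```python
# A minimal scoring sketch: it assumes the survey responses sit in a CSV file
# named "ksbs_responses.csv" (hypothetical) with one column per KSBS item
# (WC1-WC5, OC1-OC8, PI1-PI8, CP1-CP7), each coded 1-7 as described above.
import pandas as pd

SUBSCALES = {
    "WC": [f"WC{i}" for i in range(1, 6)],   # Written Contributions, 5 items
    "OC": [f"OC{i}" for i in range(1, 9)],   # Organizational Communications, 8 items
    "PI": [f"PI{i}" for i in range(1, 9)],   # Personal Interactions, 8 items
    "CP": [f"CP{i}" for i in range(1, 8)],   # Communities of Practice, 7 items
}

def score_ksbs(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-respondent composite scores (item means) for each dimension."""
    scores = pd.DataFrame(index=df.index)
    for name, items in SUBSCALES.items():
        scores[name] = df[items].mean(axis=1)
    return scores

if __name__ == "__main__":
    responses = pd.read_csv("ksbs_responses.csv")      # hypothetical file name
    print(score_ksbs(responses).describe().round(2))   # composite means and SDs
```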

Table 1. Operationalization of Constructs.

Written Contributions (5 items; Yi, 2009): Includes behaviors of employees contributing their ideas, information, and expertise through written documentation.
Organizational Communications (8 items; Yi, 2009): Includes behaviors of sharing knowledge in formal interactions within or across teams or work units.
Personal Interactions (8 items; Yi, 2009): Includes behaviors of sharing knowledge in informal interactions among individuals.
Communities of Practice (7 items; Yi, 2009): Includes behaviors of sharing knowledge within a community network.
Organizational Commitment (3 items; Mowday, Steers, & Porter, 1979): The relative strength of an individual's identification with, and involvement in, a particular organization.
Organizational Citizenship Behavior (6 items; Smith, Organ, & Near, 1983): A type of discretionary behavior that is beneficial for an organization but falls outside of an employee's formal role requirements.
Explicit Sharing (2 items; Bock, Zmud, Kim, & Lee, 2005): The degree to which one believes that one will engage in an explicit knowledge sharing act.
Tacit Sharing (3 items; Bock et al., 2005): The degree to which one believes that one will engage in an implicit knowledge sharing act.

Results

Goodness of Measures

Goodness of measures indicates whether an instrument or scale used is reliable and valid. In other words, it indicates to what extent an instrument is accurately and consistently measuring a particular concept (reliability) and whether the instrument is indeed measuring the concept that it is supposed to measure (validity; Sekaran & Bougie, 2011). Failure to measure the intended concept accurately would lead to erroneous results. Reliability is established by assessing the stability of the measure through test–retest reliability and parallel form reliability, and the internal consistency of the measure through Cronbach's α. Validity is established through face validity, content validity, construct validity (convergent and discriminant), and criterion-related validity (concurrent and predictive). In this study, the KSBS is evaluated for its validity using content validity, construct validity (factorial structure), convergent validity, discriminant validity, predictive validity, and concurrent validity. The KSBS is also evaluated for its reliability through the internal consistency of its measures.

Content Validity

Content validity refers to the extent to which an instrument covers the meanings included in the concept (Babbie, 1992). In a similar vein, Rubio, Berg-Weber, Tebb, Lee, and Rauch (2003) referred to content validity as the extent to which the items on a measure assess the same content or how well the content material was sampled in the measure. Essentially, the goals of content validity are to clarify the domain of a concept and judge whether the measure adequately represents the domain (Bollen, 1989). Content validation results in a theoretical definition that explains the meaning of the variable in question (Bollen, 1989) and is guaranteed by the literature overview (Gomez, Lorente, & Cabrera, 2004). From the literature review and interviews conducted with academics on knowledge sharing, we identified items that accurately represent the domain of the property being measured, which in this case is knowledge sharing within the academic context. Thus, content validity was achieved, as the measurement items used in this study reflect the various dimensions of knowledge sharing occurring in the universities.

Construct Validity

Construct validity concerns the degree to which the test items measure the construct they were designed to measure. Researchers often use factor analytic techniques to assess the construct validity of the scores obtained from an instrument (McCoach, 2002). Factor analysis represents a broad category of approaches and mathematical procedures for determining the latent variable structure of observed variables (Nunnally, 1978). In this study, an exploratory factor analysis (EFA) with an orthogonal (varimax) rotation was used to evaluate the construct validity of the instrument. To evaluate construct validity, we performed a principal components analysis on the set of 28 items of the KSBS. The result of this analysis is summarized in Table 2. The analysis extracted a four-factor solution, each factor with an eigenvalue above 1, explaining 75.187% of the total variance. The Kaiser–Meyer–Olkin measure of sampling adequacy was 0.834, indicating a meritorious level according to Kaiser and Rice (1974). The Bartlett's test for sphericity was significant (χ² = 5,555.442, p < .001). Factor 1 was named Communities of Practice, Factor 2 Organizational Communications, Factor 3 Personal Interactions, and Factor 4 Written Contributions. Based on the rotated component matrix, 3 of the 28 items (WC5, PI7, and PI8) were dropped due to item loadings of less than 0.50, as suggested by Hair, Black, Babin, and Anderson (2010). The results drawn from the EFA showed that the items loaded on the constructs that they were supposed to represent, thus supporting the adequacy and construct validity of the items used.

Table 2. Results of the Exploratory Factor Analysis.

Item          Factor 1   Factor 2   Factor 3   Factor 4   Communality
WC1             .134       .298       .089       .651        .539
WC2             .002       .121       .030       .848        .735
WC3             .193       .225       .327       .672        .646
WC4             .235       .307       .093       .626        .549
OC1             .144       .845       .219       .125        .798
OC2             .096       .861       .224       .113        .814
OC3             .038       .826       .282       .201        .803
OC4             .167       .730       .261       .162        .656
OC5             .248       .778       .129       .211        .727
OC6             .205       .781       .296       .200        .781
OC7             .130       .646       .372       .276        .648
OC8             .134       .689       .154       .342        .633
PI1             .126       .322       .537       .400        .567
PI2             .259       .253       .563       .482        .681
PI3             .237       .233       .794       .171        .770
PI4             .142       .377       .780       .129        .788
PI5             .296       .316       .814       .005        .850
PI6             .278       .287       .798       .014        .797
CP1             .883       .240       .237       .084        .900
CP2             .922       .195       .219       .055        .939
CP3             .896       .202       .301       .032        .935
CP4             .890       .237       .223       .082        .905
CP5             .892       .228       .260       .104        .926
CP6             .771       .036       .000       .168        .624
CP7             .845       .041       .091       .251        .788
Eigenvalue     5.960      5.819      4.101      2.917
% Variance    23.840     23.274     16.403     11.670

Note. WC = Written Contributions; OC = Organizational Communications; PI = Personal Interactions; CP = Communities of Practice. Items load on their intended construct (Factor 1 = Communities of Practice, Factor 2 = Organizational Communications, Factor 3 = Personal Interactions, Factor 4 = Written Contributions). WC5, PI7, and PI8 were dropped due to low item loadings.
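The sketch below illustrates the kind of analysis summarized in Table 2: a four-factor principal components extraction with varimax rotation, preceded by the KMO and Bartlett checks, and a screen for items loading below .50. It is not the authors' code; it assumes the third-party factor_analyzer Python package and the hypothetical item columns introduced earlier.

```python
# Sketch of the reported EFA (assumptions: "factor_analyzer" package installed,
# 28 KSBS item columns in the hypothetical file "ksbs_responses.csv").
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("ksbs_responses.csv")

chi_sq, p_value = calculate_bartlett_sphericity(items)   # Bartlett's test of sphericity
_, kmo_model = calculate_kmo(items)                      # overall sampling adequacy
print(f"Bartlett chi-square = {chi_sq:.3f}, p = {p_value:.4f}, KMO = {kmo_model:.3f}")

# Principal components extraction of four factors with orthogonal (varimax) rotation.
efa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns,
                        columns=["F1", "F2", "F3", "F4"])
communalities = pd.Series(efa.get_communalities(), index=items.columns)

# Flag items whose highest loading falls below the .50 retention threshold used
# in the article (WC5, PI7, and PI8 were dropped on this basis).
weak_items = loadings.abs().max(axis=1) < 0.50
print(loadings.round(3).assign(communality=communalities.round(3)))
print("Candidates for removal:", list(loadings.index[weak_items]))
```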

Convergent Validity

Further to the construct validity test using factor analysis (between scales), another factor analysis was run, this time using the within-scale approach, to test for convergent validity. According to Campbell and Fiske (1959), convergent validity refers to all items measuring a construct actually loading on a single construct. Convergent validity is established when all items measuring a construct fall into one factor as theorized.

The test of convergent validity was carried out through a within-factor analysis in order to obtain a more in-depth judgment of the dimensionality of the constructs under study (Hair, Black, Babin, & Anderson, 2010). All four factors displayed unidimensionality in the sense that each set of items representing its intended concept loaded on only one construct. Thus, the analysis provided evidence of convergent validity.
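One hedged way to operationalize such a within-scale check is sketched below (an assumption about implementation, not the authors' procedure): for each subscale, the eigenvalues of the item correlation matrix are inspected, and a single eigenvalue greater than 1 is taken as evidence that the items form one factor. File and column names carry over from the earlier sketches; the retained items are used.

```python
# Within-scale unidimensionality check: count eigenvalues > 1 per subscale.
import numpy as np
import pandas as pd

items = pd.read_csv("ksbs_responses.csv")             # hypothetical file name
subscales = {
    "Written Contributions": ["WC1", "WC2", "WC3", "WC4"],
    "Organizational Communications": [f"OC{i}" for i in range(1, 9)],
    "Personal Interactions": [f"PI{i}" for i in range(1, 7)],
    "Communities of Practice": [f"CP{i}" for i in range(1, 8)],
}

for name, cols in subscales.items():
    corr = items[cols].corr().to_numpy()
    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending order
    n_above_one = int((eigenvalues > 1).sum())               # 1 suggests one factor
    print(f"{name}: eigenvalues > 1 = {n_above_one} "
          f"(first two: {eigenvalues[0]:.2f}, {eigenvalues[1]:.2f})")
```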

Discriminant Validity

Discriminant validity refers to the extent to which measures of two different constructs are relatively distinct, such that their correlation values are neither an absolute value of 0 nor 1 (Campbell & Fiske, 1959). Constructs that are similar would produce high correlations, while unrelated constructs would result in low correlations. According to Hair et al. (2010), high correlation values of .90 and above are an indication of substantial collinearity between constructs, demonstrating that the constructs are not distinct from each other. A correlation analysis was done on the four factors of knowledge sharing, and the result is presented in Table 3. As can be seen, the constructs are not highly correlated with each other, as their coefficients are less than .90, implying that the constructs are distinct from one another. Further to that, we also included a variable called Organizational Citizenship Behavior (OCB) that is theoretically unrelated to knowledge sharing behavior. The results showed that the four factors are not highly correlated with OCB. Thus, we can conclude that discriminant validity has been established.
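A minimal sketch of this check follows, assuming composite dimension scores and an OCB composite are available under hypothetical column names; it flags any pair of constructs whose correlation reaches the .90 collinearity threshold cited from Hair et al. (2010).

```python
# Discriminant validity screen: no pair of constructs should correlate at |r| >= .90.
import pandas as pd

composites = pd.read_csv("ksbs_composites.csv")        # hypothetical WC, OC, PI, CP, OCB columns
corr = composites[["WC", "OC", "PI", "CP", "OCB"]].corr()

problem_pairs = [
    (a, b, round(corr.loc[a, b], 3))
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) >= 0.90
]
print(corr.round(3))
print("Pairs suggesting poor discrimination:", problem_pairs or "none")
```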

Predictive and Concurrent Validity

Predictive validity refers to the extent to which measures differentiate individuals in a manner that helps to predict a future criterion (Sekaran & Bougie, 2011). In other words, it assesses the measure's ability to predict something it should theoretically be able to predict. A correlation analysis was done between the four factors of knowledge sharing and Organizational Commitment. It has been argued that commitment encourages interfirm communications as well as knowledge sharing and vice versa (Nyaga, Whipple, & Lynch, 2010). Thus, an academic's knowledge sharing is believed to foretell how committed or involved the person is to the university. Table 3 shows that all four factors of knowledge sharing were correlated with the criterion variable, Organizational Commitment, thus confirming predictive validity. In contrast to predictive validity, concurrent validity is established when a measure correlates well with a measure that has previously been validated. A correlation analysis was also generated between the four factors of knowledge sharing and Explicit Sharing as well as Tacit Sharing. Explicit and Tacit Sharing are two types of knowledge sharing measures that have been previously tested and applied by researchers such as Bock, Zmud, Kim, and Lee (2005) in their study on managers in Korean organizations. As shown in Table 3, the four factors of knowledge sharing were also correlated with both Tacit and Explicit Sharing, which confirms concurrent validity.

Table 3. Means and Intercorrelations.

            M      SD     WC      OC      PI      CP     OrgCom   OCB    Explicit  Tacit
WC        4.15    1.22   1.000
OC        4.80    1.05   .533**  1.000
PI        4.98    1.00   .474**  .665**  1.000
CP        4.08    1.33   .367**  .401**  .522**  1.000
OrgCom    5.76    1.03   .191**  .360**  .574**  .454**  1.000
OCB       5.35    0.73   .269**  .363**  .540**  .395**  .341**  1.000
Explicit  4.42    1.33   .616**  .433**  .489**  .524**  .288**  .474**  1.000
Tacit     4.65    1.29   .580**  .566**  .766**  .567**  .362**  .564**  .758**   1.000

Note. WC = Written Contributions; OC = Organizational Communications; PI = Personal Interactions; CP = Communities of Practice; OrgCom = Organizational Commitment; OCB = Organizational Citizenship Behavior; Explicit = Explicit Sharing; Tacit = Tacit Sharing; SD = standard deviation.
**p < .01.
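The criterion-related checks reported above amount to bivariate correlations between each KSBS dimension and the external measures. A sketch under the same hypothetical column names follows; it is illustrative only and uses SciPy's pearsonr to obtain the coefficients and p-values.

```python
# Predictive validity (vs. Organizational Commitment) and concurrent validity
# (vs. Explicit/Tacit Sharing) as simple Pearson correlations.
import pandas as pd
from scipy.stats import pearsonr

data = pd.read_csv("ksbs_composites.csv")   # hypothetical WC, OC, PI, CP, OrgCom, Explicit, Tacit columns

dimensions = ["WC", "OC", "PI", "CP"]
criteria = ["OrgCom", "Explicit", "Tacit"]

for dim in dimensions:
    for crit in criteria:
        r, p = pearsonr(data[dim], data[crit])
        print(f"{dim} vs {crit}: r = {r:.3f}, p = {p:.4f}")
```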

Reliability

Reliability measures the degree to which the test score indicates the status of an individual on the factors defined by the test as well as the degree to which the test score demonstrates individual differences in these traits (McCoach, 2002). A reliability coefficient demonstrates whether the test designer was correct in expecting a certain collection of items to yield interpretable statements about individual differences (McCoach, 2002). Reliability coefficients range between .00 and 1.00; the higher the coefficient, the higher the level of reliability. Generally, Nunnally (1978) proposed .70 as the minimum acceptable standard for internal consistency. The reliability coefficient was .784 for Written Contributions, .942 for Organizational Communications, .905 for Personal Interactions, and .966 for Communities of Practice (see Table 4). Hence, it can be concluded that these measures possess sufficient reliability.
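For readers who wish to run this kind of internal-consistency check on their own data, the sketch below computes Cronbach's α directly from its definition, α = k/(k − 1) × (1 − Σ item variances / variance of the summed scale). The item groupings mirror the retained KSBS items, and the file name is again hypothetical.

```python
# Cronbach's alpha from its standard formula, applied per KSBS subscale.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are one scale's items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

responses = pd.read_csv("ksbs_responses.csv")          # hypothetical file name
for name, cols in {
    "Written Contributions": ["WC1", "WC2", "WC3", "WC4"],
    "Organizational Communications": [f"OC{i}" for i in range(1, 9)],
    "Personal Interactions": [f"PI{i}" for i in range(1, 7)],
    "Communities of Practice": [f"CP{i}" for i in range(1, 8)],
}.items():
    print(f"{name}: alpha = {cronbach_alpha(responses[cols]):.3f}")
```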

Table 4. Reliability Analysis of the Measures.

                                          Scale Mean if   Scale Variance if   Corrected Item-     Cronbach's α if
                                          Item Deleted    Item Deleted        Total Correlation   Item Deleted
Written Contributions (α = .784)
  WC1                                        11.90            16.331               .522                .765
  WC2                                        12.63            12.776               .632                .711
  WC3                                        12.38            14.248               .639                .707
  WC4                                        12.89            14.270               .583                .735
Organizational Communications (α = .942)
  OC1                                        33.35            53.909               .838                .931
  OC2                                        33.25            54.637               .840                .931
  OC3                                        33.30            54.850               .829                .931
  OC4                                        33.41            56.222               .724                .938
  OC5                                        33.49            56.230               .789                .934
  OC6                                        33.70            52.233               .857                .929
  OC7                                        33.89            54.099               .748                .937
  OC8                                        34.16            53.687               .729                .939
Personal Interactions (α = .905)
  PI1                                        24.98            27.807               .668                .898
  PI2                                        25.61            24.669               .689                .902
  PI3                                        24.66            26.106               .779                .882
  PI4                                        24.77            25.760               .807                .878
  PI5                                        24.64            27.133               .797                .881
  PI6                                        24.61            27.932               .750                .888
Communities of Practice (α = .966)
  CP1                                        24.50            64.630               .920                .957
  CP2                                        24.45            64.206               .950                .955
  CP3                                        24.50            63.786               .937                .956
  CP4                                        24.40            63.874               .924                .957
  CP5                                        24.39            63.839               .940                .956
  CP6                                        24.63            69.867               .684                .974
  CP7                                        24.81            64.114               .808                .967

Discussion and Conclusion

Measurement instruments will only be of value if they can be shown to be reliable and valid. The findings of this study have empirically shown that the KSBS is a valid and reliable instrument, thus confirming the findings of Yi (2009). In terms of validity, this study has assessed content validity, construct validity, convergent validity, and discriminant validity. The measures were confirmed to be valid, as they surpassed all the necessary standards suggested by various researchers. In addition, this study took into account Yi's (2009) suggestion of examining criterion-related validity and found that the measures performed as expected in relation to the external variables used (i.e., Organizational Commitment, Tacit Sharing, and Explicit Sharing). The measures were also tested for reliability and they exhibited excellent Cronbach's α coefficients, thus confirming reliability as well.

Contrary to the common supposition that knowledge sharing behavior is a unidimensional construct, the EFA results in this study
showed that knowledge sharing behavior among academics is multidimensional. However, 3 items were discarded due to their low loadings. Hence, this instrument can be suitably applied for measuring knowledge sharing behaviors within the academic context, specifically in the Malaysian setting. Several implications can be derived from the current findings. Future researchers who intend to assess knowledge sharing in academia should consider measuring knowledge sharing behavior as a multidimensional construct. Measuring knowledge sharing among academics should capture the aspects of knowledge that are not easily expressed or communicated in visual or verbal form as well as the aspects that are objective and can be codified. In this manner, measuring knowledge sharing among academics through four dimensions, namely their written contributions, organizational communications, personal interactions, and communities of practice, is a holistic representation of both tacit and explicit knowledge residing within the academic. In measuring the written contributions dimension of knowledge sharing among academics in Malaysia, future researchers can choose to disregard the item pertaining to updating university information through online discussion boards because that is not commonly practiced among academics in local universities. When measuring the personal interactions component of knowledge sharing, future researchers can consider excluding the items pertaining to the use of online chats and e-mail communication to help colleagues with work-related problems, as these 2 items did not significantly reflect the type of personal interactions engaged in by the academics. Given that academics work in close proximity to one another in the university, it may be more convenient for them to share their knowledge, ideas, and experience with their colleagues on a face-to-face basis, especially when it involves tacit knowledge, that is, the kind of knowledge that is difficult to transfer to the recipient by writing it down or verbalizing it. Furthermore, researchers such as Al-Alawi, Al-Marzooqi, and Mohammed (2007) and Yang (2009) affirmed face-to-face spontaneous interactions as the best and most preferred way of sharing knowledge among employees. The knowledge sharing instrument validated in this study can also be used by university administrators to determine the degree of knowledge sharing among the academics in the university. By knowing the degree of knowledge sharing, administrators can approximately gauge the level of commitment that the academics have toward their institutions. As shown in our findings, higher levels of knowledge sharing among academics are associated with higher involvement in and dedication to the university. The more committed the academician is to the university, the more willing the
academician will be to communicate and exchange information and knowledge with other colleagues to support the university's growth. This has been acknowledged by Sohail and Daud (2009), who affirmed that staff with high levels of institutional commitment are more likely to be highly motivated, which in turn makes them more willing to share their knowledge within the organization. In fact, the positive and significant relationship between commitment and knowledge sharing has been examined and supported by researchers (e.g., Bahramzadeh & Khosroabadi, 2012; Casimir et al., 2012; Nyaga et al., 2010; Yam et al., 2012). Given that the KSBS comprises four distinct areas of knowledge sharing, university administrators can identify the problematic areas and then formulate ways to stimulate acts of sharing in those areas. While the current methods of validation were thorough, the study was limited to the academic context. In future studies, these results need to be replicated using larger sample sizes and in different contexts in order to generalize the scale. Also, future research should examine the interrelationships between the four knowledge sharing factors. In addition, a comparative study can be conducted between academics in public and private universities to ascertain whether knowledge sharing factors differ significantly between the two types of institutions, given that private universities are typically profit oriented. From their findings, Goh and Sandhu (2013) pointed out that the intention to share knowledge in public universities is significantly higher than in private universities. However, the breakdown of differences in terms of knowledge sharing components between the two types of institutions was not provided. For this reason, future researchers can carry out comparative studies using the KSBS to uncover the dimensions of knowledge sharing (written contributions, organizational communications, personal interactions, and communities of practice) on which public and private universities differ while validating the scale across the two types of institutions.

Appendix

Items measuring the four dimensions of knowledge sharing in the Knowledge Sharing Behavior Scale (KSBS).

Written Contributions (1 = never to 7 = always)
WC1: Submit documents and reports.
WC2: Publish articles in university journals, magazines, or newsletters.
WC3: Share documentation from personal files related to current work.
WC4: Contribute ideas and thoughts to department online databases.
WC5: Keep others updated with important university information through online discussion boards.

Organizational Communications (1 = never to 7 = always)
OC1: Express ideas and thoughts in department meetings.
OC2: Participate fully in brainstorming sessions.
OC3: Propose problem-solving suggestions in team meetings.
OC4: Answer questions of others in team meetings.
OC5: Ask good questions that can elicit others' thinking and discussion in team meetings.
OC6: Share success stories that may benefit the university in department meetings.
OC7: Reveal past personal work-related failures or mistakes in department meetings to help others avoid repeating these mistakes.
OC8: Make presentations in department meetings.

Personal Interactions (1 = never to 7 = always)
PI1: Support less-experienced colleagues with time from personal schedule.
PI2: Engage in long-term coaching relationships with junior academicians.
PI3: Spend time in personal conversation (e.g., discussion in hallway, over lunch, through telephone) with others to help them with their work-related problems.
PI4: Keep others updated with important department information through personal conversation.
PI5: Share passion and excitement on some specific subjects with others through personal conversation.
PI6: Share experiences that may help others avoid risks and trouble through personal conversation.
PI7: Have online chats with others to help them with their work-related problems.
PI8: Spend time in e-mail communication with others to help them with their work-related problems.

Communities of Practice (1 = never to 7 = always)
CP1: Meet with community* members to create innovative solutions for problems that occur in work.
CP2: Meet with community* members to share own experience and practice on specific topics with common interests.
CP3: Meet with community* members to share success and failure stories on specific topics with common interests.
CP4: Meet with community* members to work to encourage excellence in community's practice.
CP5: Support personal development of new community* members.
CP6: Send related information to members through community* e-mail list.
CP7: Share ideas and thoughts on specific topics through university supported online community*-of-practice system.

*Community: an informal network of people within or across organizations who voluntarily share common practice, expertise, and interests on specific topics. It is neither an organizational unit nor a team.

Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by Universiti Sains Malaysia under the Research University Grant: PMGT/1001/811036.

References

Aizpurúa, L. I., Saldaña, P. E. Z., & Saldaña, A. Z. (2011). Learning for sharing: An empirical analysis of organizational learning and knowledge sharing. International Entrepreneurship and Management Journal, 7, 509–518.
Al-Alawi, A. I., Al-Marzooqi, N. Y., & Mohammed, Y. F. (2007). Organizational culture and knowledge sharing: Critical success factors. Journal of Knowledge Management, 11, 22–42.
Antal, A. B., & Richebé, N. (2009). A passion for giving, a passion for sharing: Understanding knowledge sharing as gift exchange in academia. Journal of Management Inquiry, 18, 78–95.
Babbie, E. (1992). The practice of social research (6th ed.). Belmont, CA: Wadsworth.
Bahramzadeh, H., & Khosroabadi, S. (2012). The relationship between organizational commitment and knowledge sharing: A case study of university employee cooperation. Management Science Letters, 2, 2661–2666.
Bartol, K., & Srivastava, A. (2002). Encouraging knowledge sharing: The role of organizational reward systems. Journal of Leadership & Organization Studies, 9, 64–76.
Bock, G., & Kim, Y. (2002). Breaking the myths of rewards: An exploratory study of attitudes about knowledge-sharing. Information Resources Management Journal, 15, 14–21.
Bock, G.-W., Zmud, R. W., Kim, Y.-G., & Lee, J.-N. (2005). Behavioral intention formation in knowledge sharing: Examining the roles of extrinsic motivators, social-psychological forces, and organizational climate. MIS Quarterly, 29, 87–111.
Bollen, K. A. (1989). Structural equations with latent variables. New York: John Wiley.
Bresman, H., Birkenshaw, J., & Nobel, R. (1999). Knowledge transfer in international acquisitions. Journal of International Business Studies, 30, 439–462.
Cabrera, A., & Cabrera, E. (2002). Knowledge-sharing dilemmas. Organization Studies, 23, 687–710.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105.
Casimir, G., Lee, K., & Loon, M. (2012). Knowledge sharing: Influences of trust, commitment and cost. Journal of Knowledge Management, 16, 740–753.
Chalkiti, K. (2012). Knowledge sharing in dynamic labour environments: Insights from Australia. International Journal of Contemporary Hospitality Management, 24, 522–541.
Cheng, M.-Y., Ho, J. S.-Y., & Lau, P. M. (2009). Knowledge sharing in academic institutions: A study of Multimedia University Malaysia. Electronic Journal of Knowledge Management, 7, 313–324.
Cowles, D., & Crosby, L. A. (1986). Measure validation in consumer research: A confirmatory factor analysis of the voluntary simplicity lifestyle scale. Advances in Consumer Research, 13, 392–397.
Davenport, T., & Prusak, L. (1998). Working knowledge: How organizations manage what they know. Boston, MA: Harvard Business School Press.
Dong, G., Liem, C. G., & Grossman, M. (2010). Knowledge-sharing intention in Vietnamese organizations. VINE: The Journal of Information and Knowledge Management Systems, 40, 262–276.
Earl, M. (1997). Knowledge as strategy: Reflection on Skandia International and Shorko films. In L. Prusak (Ed.), Knowledge in organization (pp. 1–15). Boston, MA: Butterworth-Heinemann.
Eaves, S. (2014). Middle management knowledge by possession and position: A panoptic examination of individual knowledge sharing influences. The Electronic Journal of Knowledge Management, 12, 69–86.
Fullwood, R., Rowley, J., & Delbridge, R. (2013). Knowledge sharing amongst academics in UK universities. Journal of Knowledge Management, 17, 123–136.
Goh, S. K., & Sandhu, M. S. (2013). Knowledge sharing among Malaysian academics: Influence of affective commitment and trust. The Electronic Journal of Knowledge Management, 11, 38–48.
Gomez, P. J., Lorente, J. J., & Cabrera, R. V. (2004). Training practices and organizational learning capability: Relationship and implications. Journal of European Industrial Training, 28, 234–256.
Gross, N., & Kluge, A. (2012). "Why should I share what I know?"—Antecedents for enhancing knowledge-sharing behavior and its impact on shared mental models in steel production. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 56, 403–407.
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis. Upper Saddle River, NJ: Prentice Hall.
Hislop, D. (2003). Linking human resource management and knowledge management via commitment: A review and research agenda. Employee Relations, 25, 182–202.
Hou, H., Sung, Y., & Chang, K. (2009). Exploring the behavioral patterns of an online knowledge-sharing discussion activity among teachers with problem-solving strategy. Teaching and Teacher Education, 25, 101–108.
Howell, K. E., & Annasingh, F. (2013). Knowledge generation and sharing in UK universities: A tale of two cultures? International Journal of Information Management, 33, 32–39.
Huysman, M., & De Wit, D. (2004). Practices of managing knowledge sharing: Towards a second wave of knowledge management. Knowledge and Process Management, 11, 81–92.
Ipe, M. (2003). Knowledge sharing in organizations: A conceptual framework. Human Resource Development Review, 2, 337–359.
Jyoti, J., Gupta, P., & Kotwal, S. (2011). Impact of knowledge management practices on innovative capacity: A study of telecommunication sector. Vision: The Journal of Business Perspective, 15, 315–330.
Kaiser, H. F., & Rice, J. (1974). Little Jiffy, Mark IV. Educational and Psychological Measurement, 34, 111–117.
Kaser, P., & Miles, R. (2001). Knowledge activists: The cultivation of motivation and trust properties of knowledge sharing relationships. Academy of Management Proceedings, ODC: D1–D6.
Kim, S. (2000). The roles of knowledge professionals for knowledge management. INSPEL, 34, 1–8.
Kim, S., & Byun, J. (2001). A study on the behavior of knowledge-sharing. Journal of the Korean Society for Library and Information Science, 35, 227–247.
Kim, S., & Ju, B. (2008). An analysis of faculty perceptions: Attitudes toward knowledge sharing and collaboration in an academic institution. Library & Information Science Research, 30, 282–290.
Kim, S., & Lee, H. (2006). The impact of organizational context and information technology on employee knowledge-sharing capabilities. Public Administration Review, 66, 370–385.
Kimberlin, C. L., & Winterstein, A. G. (2008). Validity and reliability of measurement instruments used in research. American Journal of Health-System Pharmacy, 65, 2276–2284.
Kong, H. (1999). A study on the university professor's intention of knowledge-sharing. Kwangju, South Korea: JunNam University Kwangju.
Koppi, A., Chaloupka, M., Llewellyn, R., Cheney, G., Clark, S., & Fenton-Kerr, T. (1998). Academic culture, flexibility and the national teaching and learning database. In R. Corderoy (Ed.), Flexibility: The next wave? Proceedings of the Australian Society for Computers in Learning in Tertiary Education '98 Conference (pp. 425–432). Wollongong, Australia: University of Wollongong.
Kubo, I., Saka, A., & Pam, S. (2001). Behind the scenes of knowledge sharing in a Japanese bank. Human Resource Development International, 4, 465–485.
Landry, R., Saihi, M., Amara, N., & Ouimet, M. (2010). Evidence on how academics manage their portfolio of knowledge transfer activities. Research Policy, 39, 1387–1403.
MacNeil, C. M. (2003). Line managers: Facilitators of knowledge sharing in teams. Employee Relations, 25, 294–307.
McCoach, D. B. (2002). A validation study of the school attitude assessment survey. Measurement and Evaluation in Counseling and Development, 35, 66–77.
Michailova, S., & Husted, K. (2003). Knowledge-sharing hostility in Russian firms. California Management Review, 45, 59–77.
Mittal, M. (2008). Personal knowledge management: A study of knowledge behaviour of academicians. Journal of Information & Knowledge Management, 7, 93–100.
Mowday, R. T., Steers, R. M., & Porter, L. W. (1979). The measurement of organizational commitment. Journal of Vocational Behavior, 14, 224–247.
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
Nyaga, G. N., Whipple, J. M., & Lynch, D. F. (2010). Examining supply chain relationships: Do buyer and supplier perspectives on collaborative relationships differ? Journal of Operations Management, 28, 101–114.
Özbebek, A., & Toplu, E. K. (2011). Empowered employees' knowledge sharing behavior. International Journal of Business and Management Studies, 3, 69–76.
Palacios-Marqués, D., Peris-Ortiz, M., & Merigó, J. M. (2013). The effect of knowledge transfer on firm performance: An empirical study in knowledge-intensive industries. Management Decision, 51, 973–985.

Downloaded from erx.sagepub.com at UNIV TORONTO on August 12, 2014

Ramayah et al.

27

Park, J.-H., & Moultrie, J. (2010). Understanding university academics’ internal and external knowledge interactions in different disciplines: Evidence from university in South Korea. Paper presented at the DRUID Conference 2010, Opening Up Innovation: Strategy, Organization and Technology, Imperial College Business School, London. Retrieved from May 1, 2014, http://www2. druid.dk/conferences/viewpaper.php?id¼502087&cf¼43 Prahalad, C., & Hamel, G. (1990). The core competition of the corporation. Harvard Business Review, 68, 79–91. Ramachandran, S. D., Chong, S. C., & Ismail, H. (2009). The practice of knowledge management processes: A comparative study of public and private higher education institutions in Malaysia. VINE: The Journal of Information and Knowledge Management Systems, 39, 203–222. Ramayah, T., Yeap, J. A. L., & Ignatius, J. (2013). An empirical inquiry on knowledge sharing among academicians in higher learning institutions. Minerva: A Review of Science, Learning and Policy, 51, 131–154. Reychav, I., & Te’eni, D. (2009). Knowledge exchange in the shrines of knowledge: The ‘‘How’s’’ and ‘‘Where’s’’ of knowledge sharing processes. Computers & Education, 53, 1266–1277. Rubio, D. M., Berg-Weber, M., Tebb, S. S., Lee, E. S., & Rauch, S. (2003). Objectifying content validity: Conducting a content validity study in social work research. Social Work Research, 27, 94–104. Ryu, S., Ho, S. H., & Han, I. (2003). Knowledge sharing behavior of physicians in hospitals. Expert Systems with Applications, 25, 113–122. Sekaran, U., & Bougie, R. (2011). Research methods: A skill building approach (5th ed). Chichester, West Sussex, England: John Wiley. Smith, C. A., Organ, D. W., & Near, J. P. (1983). Organizational citizenship behavior: Its nature and antecedents. Journal of Applied Psychology, 68, 653–663. Sohail, M. S., & Daud, S. (2009). Knowledge sharing in higher education institutions: Perspectives from Malaysia. VINE: The Journal of Information and Knowledge Management Systems, 39, 125–142. Suppiah, V., & Sandhu, M. S. (2011). Organisational culture’s influence on tacit knowledge-sharing behavior. Journal of Knowledge Management, 15, 462–477. Wang, S., Noe, R. A., & Wang, Z.-M. (2014). Motivating knowledge sharing in knowledge management systems: A quasi-field experiment. Journal of Management, 40, 978–1009. Wei, C. C., Choy, C. S., Chew, G. G., & Yen, Y. Y. (2012). Knowledge sharing patterns of undergraduate students. Library Review, 61, 327–344. Wickramasinghe, V., & Widyaratne, R. (2012). Effects of interpersonal trust, team leader support, rewards, and knowledge sharing mechanisms on knowledge

Downloaded from erx.sagepub.com at UNIV TORONTO on August 12, 2014

28

Evaluation Review

sharing in project teams. VINE: The Journal of Information and Knowledge Management Systems, 42, 214–236. Yam, R. C. M., Tang, E. P. Y., & Chan, C. C. H. (2012). Commitment enhances knowledge sharing against opportunism in new product development. In V. Dermol, N. T. Sˇirca, G. Ðakovic´, & U. Lindav, (Eds.), Proceedings of the Management, Knowledge and Learning International Conference (pp. 581–589). Celje, Slovenia: International School for Social and Business Studies. Yang, J.-T. (2009). Individual attitudes to learning and sharing individual and organizational knowledge in the hospitality industry. Services Industries Journal, 29, 1723–1743. Yi, J. (2009). A measure of knowledge sharing behavior: Scale development and validation. Knowledge Management Research & Practice, 7, 65–81. Yuen, T. J., & Majid, M. S. (2007). Knowledge-sharing patterns of undergraduate students in Singapore. Library Review, 56, 485–494.

Author Biographies

T. Ramayah is currently a professor at the School of Management, Universiti Sains Malaysia (USM). He teaches mainly courses in research methodology and business statistics. His articles have been published in international journals such as Computers in Human Behavior, Technovation, Information & Management, Electronic Markets, Journal of Business Economics and Management, and Information Systems Management. He also serves on the editorial boards and program committees of several international journals and conferences of repute. His full profile can be accessed at www.ramayah.com.

Jasmine A. L. Yeap is a postdoctoral research fellow at the School of Management, Universiti Sains Malaysia (USM). Her research focuses primarily on the use and impact of technological innovations among individuals and organizations, as well as the management of IT-based resources. She also conducts studies in other areas such as consumer behavior in the retailing industry, knowledge sharing, and instrument validation. Most of her work has been published in journals and presented at both local and international conferences.

Joshua Ignatius is a senior lecturer in operations research at the School of Mathematical Sciences, Universiti Sains Malaysia (USM). His research interests are in fuzzy multicriteria decision making, game-theoretic models, and the application of structural equation modeling in empirical research. His articles have appeared in numerous ISI journals such as Journal of Intelligent & Fuzzy Systems, International Journal of Information Technology & Decision Making, International Journal of Innovative Computing Information & Control, and Group Decision & Negotiation.
