RESEARCH METHODOLOGY: INSTRUMENT DEVELOPMENT

Development and validation of the Simulation Learning Effectiveness Inventory

Shiah-Lian Chen, Tsai-Wei Huang, I-Chen Liao & Chienchi Liu

Accepted for publication 11 May 2015

Correspondence to S.-L. Chen: e-mail: [email protected]
Shiah-Lian Chen PhD RN, Associate Professor, Department of Nursing, National Taichung University of Science and Technology, Taichung, Taiwan
Tsai-Wei Huang PhD RN, Associate Professor, Department of Nursing, Hungkuang University, Taichung, Taiwan
I-Chen Liao MSN RN, Lecturer, Department of Nursing, Hungkuang University, Taichung, Taiwan
Chienchi Liu PhD RN, Associate Professor, Department of Nursing, Hungkuang University, Taichung, Taiwan

CHEN S.-L., HUANG T.-W., LIAO I.-C. & LIU C. (2015) Development and validation of the Simulation Learning Effectiveness Inventory. Journal of Advanced Nursing 71(10), 2444–2453. doi: 10.1111/jan.12707

Abstract

Aim. To develop and psychometrically test the Simulation Learning Effectiveness Inventory.

Background. High-fidelity simulation helps students develop clinical skills and competencies. Yet reliable instruments measuring learning outcomes are scant.

Design. A descriptive cross-sectional survey was used to validate the psychometric properties of the instrument measuring students' perception of simulation learning effectiveness.

Methods. A purposive sample of 505 nursing students who had taken simulation courses was recruited from a department of nursing of a university in central Taiwan from January 2010–June 2010. The study was conducted in two phases. In Phase I, question items were developed based on the literature review and the preliminary psychometric properties of the inventory were evaluated using exploratory factor analysis. Phase II was conducted to evaluate the reliability and validity of the finalized inventory using confirmatory factor analysis.

Results. The results of exploratory and confirmatory factor analyses revealed that the instrument was composed of seven factors, named course arrangement, equipment resource, debriefing, clinical ability, problem-solving, confidence and collaboration. A further second-order analysis showed comparable fits between a model with three second-order factors (preparation, process and outcome) and the seven-first-order-factor model. Internal consistency was supported by adequate Cronbach's alphas and composite reliability. Convergent and discriminant validity were also supported by confirmatory factor analysis.

Conclusion. The study provides evidence that the Simulation Learning Effectiveness Inventory is reliable and valid for measuring student perception of learning effectiveness. The instrument is helpful in building evidence-based knowledge of the effect of simulation teaching on students' learning outcomes.
Keywords: high-fidelity patient simulation, instrument development, learning outcome, nursing, nursing education, questionnaires, simulations


© 2015 John Wiley & Sons Ltd


Why is this research or review needed?

• High-fidelity patient simulation is an innovative teaching strategy in nursing education. More and more nursing educators have found the teaching strategy useful in helping students develop clinical competencies and skills.
• Evaluation instruments for simulation learning with sound psychometric properties are important to build up knowledge of evidence-based teaching.

What are the key findings?

• The study findings indicate that the Simulation Learning Effectiveness Inventory is a reliable and valid instrument.
• The study validates that the seven-factor inventory (course arrangement, equipment resource, debriefing, clinical ability, problem-solving, confidence and collaboration) can be used to evaluate learning outcomes of high-fidelity simulation teaching.
• The findings of construct validity were helpful in defining the scoring system of the instrument.

How should the findings be used to influence policy/practice/research/education?

• Instructors should evaluate the effects of simulation teaching on students' learning using an instrument with sound psychometric properties.
• The instrument presented is helpful in building evidence-based knowledge of the effects of simulation teaching on student learning.
• Further research should be conducted to explore predictive effects of simulation learning using the Simulation Learning Effectiveness Inventory.

Introduction

Simulation is an important teaching strategy in nursing education. With an increase in the complexity of care in the healthcare delivery system, the use of simulation has increased dramatically worldwide in recent decades. Adopting simulation as standard training in medical/nursing education is ethically imperative because of patient safety (Ziv et al. 2003). As technology advances, simulation authenticity has greatly improved from low fidelity to high fidelity. Six types of simulation strategies have been reported in the literature: written simulations (patient management problems), three-dimensional models (part-task simulators), screen-based simulators (computer simulation), standardized patients (trained actors with moulage), intermediate-fidelity patient simulators (instructor driven) and high-fidelity patient simulators (automatically interactive response) (Alinier 2007).

High-fidelity patient simulation (HFPS) has become the most significant pedagogic strategy in preparing nursing students with clinical skills to meet the future challenges of their careers (Cant & Cooper 2010). Through predetermined case scenarios, students learn to assess and manage complex conditions in a safe and supportive environment, without the fear of making mistakes. While many nursing educators around the world have sought to validate the usefulness of HFPS in their teaching (Kardong-Edgren et al. 2010), the evidence for simulation effectiveness in students' learning is inconsistent (Jeffries 2005, Kardong-Edgren et al. 2010, Jansson et al. 2013, Shin et al. 2014). Learning evaluation is an important part of the teaching process because it helps instructors evaluate student learning and performance and thereby improve and refine their teaching. However, few studies of evaluation instruments for simulation learning have sound psychometric properties (Cant & Cooper 2010, Kardong-Edgren et al. 2010, Jansson et al. 2013). Development of a psychometrically sound instrument is necessary to facilitate further evaluation of the influence of HFPS on student learning.

Background

HFPS is an interactive strategy used to train students in performing procedures, to mimic realistic scenarios, or to imitate the reality of the clinical environment (Jeffries 2005). Unlike traditional lecture teaching, which is content-oriented, HFPS offers a carefully scripted 'hands-on' experience that encourages students to think and act like nurses (Burns et al. 2010). It can be applied at all user levels from novice to expert, and training subjects may range from individual cognitive, psychomotor and affective skills to interdisciplinary team collaboration (McGaghie et al. 2009, Kardong-Edgren et al. 2010). HFPS has been used successfully in a variety of nursing settings and specializations (Cant & Cooper 2010).

Many instruments have been developed to evaluate the effectiveness of HFPS teaching on student learning outcomes; however, the findings are inconsistent (Cant & Cooper 2010). Part of the reason is the lack of reliable and valid instruments. In a systematic review of 26 HFPS studies, improvements in knowledge acquisition and skill performance were found, but the evidence was not strong and the outcomes may be biased because outcome measurements such as performance checklists were not validated (Yuan et al. 2012). Similar results were reported in a review of 109 studies on the effectiveness of simulation in medical education (Issenberg et al. 2005). Another review of 22 simulation evaluation instruments in the nursing literature identified many outcome variables – such as cognitive, psychomotor, affective and interdisciplinary learning outcomes – but found that most studies failed to report reliability and validity estimates (Kardong-Edgren et al. 2010). Recently, some instruments have been developed with appropriate psychometric properties (Adamson & Kardong-Edgren 2012, Hayden et al. 2014). Other newly developed instruments were also found, such as the Emergency Response Performance Tool, which measures individual confidence in responding to an emergency situation (Arnold et al. 2009); the Simulation Effectiveness Tool, which assesses the simulation effect on confidence and learning (Elfrink et al. 2012); and the Satisfaction with Simulation Experience Scale, which assesses students' satisfaction with the simulation experience (Levett-Jones et al. 2011). Most of these instruments evaluate specific domains or tasks associated with simulation learning outcomes.

To promote the usefulness of simulation teaching, experts from the National League for Nursing used the simulation model as a framework to guide the process of designing, implementing and evaluating simulations in nursing education (Jeffries 2005). In this model, the teaching and learning practices found to contribute to favourable outcomes were identified as teacher factors, student factors, educational practices, simulation design and outcomes (Jeffries 2005, Jeffries & Rogers 2009). Educational practices comprise certain pedagogical principles of teaching, such as active learning, collaboration, diverse learning and high expectations. Simulation design includes five other important constituents: objectives, fidelity, problem-solving, student support and debriefing. Learning outcomes may be evaluated by five indicators: knowledge, skill performance, learner satisfaction, critical thinking and self-confidence (Jeffries & Rogers 2009, Lafond & Van Hulle Vincent 2013).
Applying this model in a national multi-site, multi-method project, Jeffries and Rizzolo (2006) found that debriefing was the most important feature in simulation design and that variables such as collaboration, problem-solving and high expectations were the essential educational practices in simulation teaching. Favourable student outcomes, such as increased student satisfaction, confidence and improved skill performance, were also reported in a critique of the simulation model framework (Lafond & Van Hulle Vincent 2013). Yet many of the concepts and propositions in the framework require further study. To maximize effective use of HFPS, it is necessary to integrate simulation-based experiences into the educational curriculum (Issenberg et al. 2005) and to develop an instrument with adequate psychometric properties to evaluate expected outcomes. The Simulation Learning Effectiveness Inventory was designed by our team to measure the concept of learning outcomes as viewed in the context of the simulation model.

The study

Aim

The aim of the study was to develop and test the psychometric properties of the Simulation Learning Effectiveness Inventory.

Methodology

This cross-sectional and descriptive study was conducted in two phases. Phase I was to develop question items and to examine the preliminary psychometric properties of the instrument by exploratory factor analysis (EFA). Phase II was designed to evaluate the reliability and validity of the finalized Simulation Learning Effectiveness Inventory (SLEI) using confirmatory factor analysis (CFA) (Figure 1).

Participants

A purposive sample of 550 nursing students was recruited from the 2-year RN programme and the 4-year baccalaureate programme of a university in central Taiwan. Students who had taken courses with some element of clinical scenario practice using a SimMan high-fidelity patient simulator (such as medical-surgical nursing and critical/emergency care nursing) for at least 12 hours and who were willing to participate were invited to take the questionnaire. The scenarios were developed following the guideline presented by Dubose et al. (2009) and the method of presenting the scenarios was student-driven (Dubose et al. 2009). The data were collected between January 2010–June 2010. The response rate was 91.8% (N = 505). According to enrolment sequence, the sample was split into 200 and 305 participants for the EFA and the CFA respectively. The Critical N index of this study was above 200 (CN = 208), indicating that the sample size needed to obtain meaningful parameter estimates in the CFA was adequate (Brown 2006).

Instrument

The instrument was self-developed for this study. In Phase I, an initial item pool written in Chinese was developed based on critical concepts reviewed in the literature, including the framework of the simulation model (Issenberg et al. 2005, Jeffries 2005, American Association of Colleges of Nursing 2008, Lafond & Van Hulle Vincent 2013). Ten nursing students who had taken the HFPS-related courses were invited to participate in a focus group discussion to validate whether the instrument comprised all necessary question items and to test the clarity of the questions. After the 50 items were constructed, content validity was evaluated by a group of five experts specialized, respectively, in instrument design, medical education and nursing education. They were asked to rate each scale item on a four-point ordinal scale in terms of its clarity, relevance and representativeness. The items related to student satisfaction were removed from the list of items according to the suggestions of the expert consensus. Two items that did not reach a content validity index of 90% were also deleted. Another three items underwent wording revisions for clarity. All items were rated on a five-point Likert scale based on the individual's degree of agreement with the items: 1 = strongly disagree; 2 = disagree; 3 = unsure; 4 = agree; and 5 = strongly agree. A higher score indicates a greater effect of the specific domain. It takes about 5-10 minutes to complete the finalized questionnaire.

[Figure 1. Confirmatory factor analysis of the SLEI: three second-order factors (preparation, process and outcome) over the seven first-order factors (course, resource, debriefing, clinical ability, confidence, problem-solving and collaboration) and their 32 items.]

Ethical considerations

Ethics approval was obtained from the Institutional Review Board of the university. All data were collected at the end of the semester. Students who met the recruitment criteria were invited to participate in the study. After the study purpose was explained and informed consent was signed, students were asked to complete the questionnaire. The sample was collected anonymously to ensure confidentiality.

Data analysis

SPSS version 19.0 for Windows (SPSS, Inc., Chicago, IL, USA) was used for the descriptive analyses and the EFA. EFA with oblimin rotation was performed to determine the factorial structure of the instrument. The Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity were calculated to determine sampling adequacy for factor analysis. Eigenvalues greater than 1 were used to determine the number of factors. Items were retained if their factor loadings were greater than 0.4; items that cross-loaded on more than one factor with loadings of 0.5 or greater were excluded from the factor structure. Cronbach's alpha coefficient estimates and composite reliability (CR) estimates were calculated to evaluate internal consistency. A series of CFAs was performed using LISREL 8.54 (Scientific Software International, Inc., Skokie, IL, USA). The factorial structure of the SLEI was examined in the following sequence: single-factor model, seven-factor model and second-order model. Several fit statistics were used to evaluate model fit: chi-square, χ²/d.f., the Goodness of Fit Index (GFI) and the Comparative Fit Index (CFI). The root mean square error of approximation (RMSEA) was calculated to determine the error of approximation of the model fit (Bagozzi & Yi 1988). Convergent validity was supported if the indicator-factor loadings were high and significant. Discriminant validity of the constructs was determined in two ways: by comparing the square of the correlation estimate between two factors with the average variance extracted (Brown 2006) and by the χ² difference test on the correlation coefficient estimate between two latent factors (Bagozzi & Yi 1988).
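The internal-consistency statistics described above can be computed directly from raw item scores and standardized loadings. The following is a minimal sketch in Python with NumPy (the original analyses used SPSS and LISREL); the formulas are the standard ones for Cronbach's alpha and composite reliability, and the response matrix and loadings shown are hypothetical, chosen only to fall in the range reported for the SLEI subscales.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings) -> float:
    """Composite reliability from standardized CFA loadings, assuming
    uncorrelated errors: (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

# Hypothetical five-point Likert responses for a 4-item subscale.
scores = np.array([[4, 5, 4, 4],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 5]], dtype=float)
print(round(cronbach_alpha(scores), 2))                            # → 0.91
# Hypothetical standardized loadings in the range reported for debriefing.
print(round(composite_reliability([0.79, 0.83, 0.86, 0.89]), 2))   # → 0.91
```

CR uses the loadings rather than raw scores, which is why it can exceed alpha when items contribute unevenly to the factor.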

Results

The sample of participants consisted of 505 nursing students. The majority of the sample were female (92.5%) and had no working experience related to nursing (77.6%). The mean age was 21.1 (SD = 0.83) years. Eighty-five percent of the sample was in the 2-year RN programme; the other 15% were enrolled in the 4-year baccalaureate programme.

References

Jeffries P.R. (2005) A framework for designing, implementing and evaluating simulations used as teaching strategies in nursing. Nursing Education Perspectives 26(2), 96–102.
Jeffries P.R. & Rizzolo M.A. (2006) Designing and Implementing Models for the Innovative Use of Simulation to Teach Nursing Care of Ill Adults and Children: A National, Multi-site, Multi-Method Study. National League for Nursing and Laerdal Medical, New York.
Jeffries P.R. & Rogers K.J. (2009) Theoretical framework for simulation design. In: Simulation in Nursing Education (Jeffries P.R., ed.), National League for Nursing, New York, pp. 21–34.
Kardong-Edgren S., Adamson K.A. & Fitzgerald C. (2010) A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing 6(1), e25–e35.
Lafond C.M. & Van Hulle Vincent C. (2013) A critique of the National League for Nursing/Jeffries simulation framework. Journal of Advanced Nursing 69(2), 465–480.
Levett-Jones T. & Lapkin S. (2014) A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Education Today 34(6), e58–e63.
Levett-Jones T., McCoy M., Lapkin S., Noble D., Hoffman K., Dempsey J., Arthur C. & Roche J. (2011) The development and psychometric testing of the Satisfaction with Simulation Experience Scale. Nurse Education Today 31(7), 705–710.
Marsh H.W. & Hocevar D. (1985) Application of confirmatory factor analysis to the study of self-concept: first- and higher order factor models and their invariance across groups. Psychological Bulletin 97(3), 562–582.
Mayville M.L. (2011) Debriefing: the essential step in simulation. Newborn and Infant Nursing Reviews 11(1), 35–39.
McGaghie W.C., Siddall V.J., Mazmanian P.E. & Myers J. (2009) Lessons for continuing medical education from simulation research in undergraduate and graduate medical education. Chest 135(3 suppl), 62S–68S.
Pike T. & O'Donnell V. (2010) The impact of clinical simulation on learner self-efficacy in pre-registration nursing education. Nurse Education Today 30(5), 405–410.
Shin S., Park J.H. & Kim J.H. (2014) Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Education Today 35(1), 176–182.
Yuan H.B., Williams B.A., Fang J.B. & Ye Q.H. (2012) A systematic review of selected evidence on improving knowledge and skills through high-fidelity simulation. Nurse Education Today 32(3), 294–298.
Ziv A., Wolpe P.R., Small S.D. & Glick S. (2003) Simulation-based medical education: an ethical imperative. Academic Medicine 78(8), 783–788.

Appendix 1

The Simulation Learning Effectiveness Inventory

1. The course contents were arranged adequately in terms of sequential order and depth, facilitating my learning.
2. I understand the objective and evaluation requirements of this course.
3. The activities in this course assisted my achieving the learning goals.
4. The equipment and resources for situational exercises were sufficient.
5. The equipment and resources for situational exercises contributed to my learning.
6. Using the environment and equipment for situational exercises was convenient.
7. If I experienced problems or difficulty using the equipment, help was always available.
8. The teacher provided appropriate positive feedback according to the learning situation of students.
9. The feedback provided by the teacher was immediate and promoted my learning outcome.
10. Discussion with the teacher after class assisted my achieving the learning goals.
11. Feedback and discussion of the simulation assisted me in correcting my mistakes and promoting my learning.
12. Situational learning enhanced my understanding of patient problems.
13. Situational learning promoted my ability to care for patients.
14. Situational learning contributed to my mastering the processes of clinical care.
15. Situational learning enabled me to acquire useful knowledge about clinical practices.
16. The contents of situational learning corresponded to my previous learning experience.
17. Situational simulation practice encouraged me to confront future clinical challenges.
18. Situational simulation practice boosted my confidence in my clinical skills.
19. Simulation learning boosted my confidence in handling future clinical problems.
20. Simulation learning alleviated my anxiety/fear of confronting future clinical patient problems.
21. Simulation learning contributed to my confidence in future patient care.
22. Simulation learning enabled me to understand the implication of each solution to patient problems.
23. Simulation learning enabled me to identify problems in clinical care that I have not noticed before.
24. In participating in simulation learning, I approached new concepts or ideas through observation.
25. Simulation learning enabled me to learn previously unfamiliar learning methods.
26. In participating in simulation learning, I approached solutions to problems through data search.
27. In participating in a situational discussion, I identified solutions to problems by understanding argument to topics.
28. Simulation courses promoted my problem-solving skills in confronting patient problems.
29. Situational simulation practice provided opportunities to practice communicating and cooperating with other members in my team.
30. Situational simulation practice enabled me to understand the role that I should play in an interaction with a medical team.
31. During the interaction in the situational simulation, I was willing to share workload with other team members.
32. I could discuss patient needs with the medical team by using effective communication skills.

Table 2 Factor correlations of the SLEI subscales.*

Factors            Items  Alpha  CR     1     2     3     4     5     6     7
Course               3    0.85   0.87  0.83
Resource             4    0.82   0.80  0.81  0.71
Clinical ability     5    0.89   0.86  0.74  0.76  0.74
Debriefing           4    0.91   0.92  0.46  0.48  0.78  0.85
Problem-solving      7    0.83   0.91  0.63  0.75  0.75  0.60  0.77
Confidence           5    0.88   0.91  0.63  0.65  0.84  0.70  0.71  0.83
Collaboration        4    0.82   0.89  0.55  0.70  0.74  0.62  0.67  0.80  0.82

*The square roots of averaged variance extracted estimates are on the diagonal and correlation coefficient estimates are on the off-diagonal.

Factor correlations and discriminant validity

The intercorrelations in Table 2 showed that the latent factors of the SLEI were significantly and positively related to each other. Factor correlations ranged from 0.46-0.84, with none greater than 0.85. Most of the square roots of the average variance extracted estimates (AVE) were greater than the correlation coefficient estimates on the off-diagonal of the matrix, except for equipment resource and clinical ability. Pair-construct tests were further performed to examine the discriminant validity between each pair of factors (Anderson & Gerbing 1988). The correlation parameter for the two factors was constrained to 1.0 and compared with a model in which the parameter was freely estimated. The findings showed that the non-constrained model had significantly lower chi-square values than the constrained one for each pair of factors, indicating that the traits were not perfectly correlated (Anderson & Gerbing 1988). Thus, discriminant validity was supported.
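The two discriminant-validity checks described above can be expressed compactly. The sketch below is a plain-Python illustration (the study itself used LISREL): the first function applies the Fornell-Larcker-style comparison of sqrt(AVE) against the inter-factor correlation, and the second applies the pair-construct χ² difference test. The sqrt(AVE) and correlation values in the usage example come from Table 2; the χ² values are hypothetical.

```python
# Critical chi-square value for 1 degree of freedom at p = 0.05 (standard table value).
CHI2_CRIT_DF1_P05 = 3.841

def fornell_larcker_ok(sqrt_ave_a: float, sqrt_ave_b: float, corr_ab: float) -> bool:
    """Discriminant validity by the AVE criterion: each factor's sqrt(AVE)
    should exceed its correlation with the other factor."""
    return sqrt_ave_a > abs(corr_ab) and sqrt_ave_b > abs(corr_ab)

def pair_construct_distinct(chi2_constrained: float, chi2_free: float) -> bool:
    """Pair-construct test: constraining the factor correlation to 1.0 adds
    one degree of freedom; a significant increase in chi-square means the two
    constructs are not perfectly correlated."""
    return (chi2_constrained - chi2_free) > CHI2_CRIT_DF1_P05

# Course (sqrt AVE 0.83) vs. debriefing (0.85), r = 0.46 (Table 2).
print(fornell_larcker_ok(0.83, 0.85, 0.46))   # → True
# Resource (0.71) vs. course (0.83), r = 0.81: the AVE criterion fails here,
# consistent with the exception noted in the text.
print(fornell_larcker_ok(0.71, 0.83, 0.81))   # → False
# Hypothetical chi-squares for a constrained vs. freely estimated pair model.
print(pair_construct_distinct(152.4, 120.9))  # → True
```

The two checks are complementary: the AVE criterion can fail for highly correlated factors even when the χ² difference test still rejects a perfect correlation, which is the pattern reported here.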

Convergent validity

Factor loadings for all SLEI items were above 0.50, ranging from 0.79-0.89 for course arrangement, 0.64-0.75 for equipment resource, 0.71-0.76 for clinical ability, 0.74-0.93 for confidence, 0.79-0.89 for debriefing, 0.77-0.84 for problem-solving and 0.79-0.85 for collaboration. All items reached statistical significance and loaded on their pre-designated latent factor, indicating that the items represented the underlying constructs they were intended to measure. Convergent validity was thus supported (Brown 2006).

Discussion

The purpose of this study was to develop a reliable instrument to assess students' perceived learning effectiveness of HFPS teaching. The study findings provide evidence that the psychometric properties of the SLEI are satisfactory. Both the EFA and the CFA findings indicated that the SLEI has a seven-factor structure. The target coefficient estimate also revealed that the seven factors can be explained by the second-order structure, accounting for 94.0% of the total variance. Even though the chi-square of the second-order model was larger than that of the first-order model, the result is acceptable because the second-order factors were nested in the first-order model (Marsh & Hocevar 1985, Chen et al. 2008). The chi-square/d.f. values and RMSEAs indicated that the two models are comparable (Brown 2006). Thus, the SLEI can be used to collect seven subscale scores or three collective scores to assess students' learning outcomes (Chen et al. 2008). Reliability of the instrument was supported by satisfactory Cronbach's alpha estimates (range 0.73-0.91) and CR estimates (range 0.87-0.91). Similar to internal consistency, CR is a measure of the overall reliability of a set of heterogeneous but similar items (Brown 2006). The findings of factor correlations, square roots of the AVE and pair-construct tests showed satisfactory discriminant validity. The relationship patterns between observed variables and latent factors were pre-designated according to the EFA findings, and all factor loadings were high and statistically significant; convergent validity of the instrument was thus confirmed.

The results showed that the seven first-order factors can be further grouped into three second-order factors. Corresponding with the process of curriculum design and the components of the simulation model, we named the three second-order factors preparation (course arrangement and equipment resource), process (debriefing) and outcome (clinical ability, confidence, problem-solving and collaboration). The variables identified in the original simulation design characteristics were objectives, fidelity, complexity, cues and debriefing (Jeffries 2005). Yet complexity and cues were replaced with problem-solving and student support in 2009 (Jeffries & Rogers 2009).
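The target coefficient (Marsh & Hocevar 1985) used in this comparison is simply the ratio of the χ² of the less restrictive first-order model to that of the nested second-order model; values near 1.0 indicate that the higher-order factors account for almost all of the covariation among the first-order factors. A minimal sketch, with hypothetical χ² values chosen only to reproduce a ratio of 0.94:

```python
def target_coefficient(chi2_first_order: float, chi2_second_order: float) -> float:
    """Target coefficient T = chi-square(first-order) / chi-square(second-order).
    The first-order model is less restrictive, so its chi-square is the smaller
    one; T lies between 0 and 1, and T near 1 supports the higher-order model."""
    return chi2_first_order / chi2_second_order

# Hypothetical chi-square values giving the 94.0% reported in the text.
print(target_coefficient(940.0, 1000.0))  # → 0.94
```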
The content of the variables seems similar (Lafond & Van Hulle Vincent 2013). In this study, we selected the important learning outcomes produced by features of HFPS teaching based on the simulation model (Jeffries & Rogers 2009). The findings of the CFA confirmed that clinical ability, confidence, problem-solving and collaboration fitted well together as a composite factor, which was consistent with the framework used to guide development of the SLEI questions in this study. Students were asked to rate the extent of their agreement that simulation teaching had brought about improvement in clinical ability, confidence in taking care of patients, collaboration with others and active participation in problem-solving activities.

Course arrangement and equipment resource composed the first second-order factor, preparation. Course arrangement was employed to assess whether objectives and course activities were appropriate for students' learning. The use of HFPS is of little value unless the objectives for the simulation match the students' prior knowledge (Jeffries 2005) and facilitate development of competent practice to meet the future challenges of clinical care. Equipment resource includes environment and equipment orientations prior to tackling a simulated scenario. To maximize the beneficial effects on student learning, the simulation exercise must provide the necessary resources to help familiarize students with the environment and facilitate favourable outcomes. Exposure to an HFPS environment is a great challenge to students, which may lead to an increase in anxiety (Ganley & Linnard-Palmer 2012). Providing orientation prior to simulation is therefore necessary to reduce students' anxiety and achieve the expected learning outcomes.

Debriefing constitutes the second second-order factor, process. Debriefing is the most important feature of simulation teaching in terms of effective learning (Issenberg et al. 2005). Through the debriefing process, students may reflect on their performance and recognize their own learning strengths and weaknesses, to self-monitor their progress towards competency attainment and maintenance (Issenberg et al. 2005).
Debriefing has been associated with skills of situational awareness, team working, decision-making and other psychomotor skills (Levett-Jones & Lapkin 2014). Assessing students' perception of simulation debriefing provides the instructor with immediate feedback on the effects of the debriefing method and delivery on student learning. In this study, 46.1% of the total variance of the SLEI could be explained by debriefing, which may imply its unique role in maximizing student learning in simulation teaching. To bridge the gap between actual performance and target performance, appropriate use of debriefing techniques and methods in HFPS is recommended (Mayville 2011).

The third second-order factor, outcome, assesses the learning outcomes that may result from HFPS teaching. Many positive outcomes of HFPS have been documented in the literature (Cant & Cooper 2010, Yuan et al. 2012, Lafond & Van Hulle Vincent 2013). The outcomes (clinical ability, confidence, problem-solving and collaboration) contained in the SLEI reflect the unique contribution of HFPS teaching in nursing education. The value of simulations is to expose students to mimicked clinical scenarios and increase their ability to deal with patient problems in a controlled environment (Issenberg et al. 2005). Clinical ability assesses the extent to which simulation teaching helps students improve their understanding, ability and skills in taking care of patients with various clinical problems. Confidence assesses whether the simulation teaching improves students' confidence in their ability to care for patients or solve patient problems in the future. To enhance the transferability of learning to the clinical setting, simulations must be both realistic (Pike & O'Donnell 2010) and repeated (Issenberg et al. 2005). Furthermore, an increase in self-confidence after simulation teaching may also suggest a transfer of learning (Pike & O'Donnell 2010). Problem-solving is a fundamental skill of all nursing practice, and studies have found that HFPS teaching may facilitate development of problem-solving skills (Issenberg et al. 2005, Burns et al. 2010). This scale is intended to measure students' engagement in problem-solving activities such as learning new ideas and applying problem-solving skills. Collaboration is also an important competency in nursing education (American Association of Colleges of Nursing 2008). Introducing collaboration during baccalaureate training is important to establish a basis for multidisciplinary collaboration in future practice. HFPS is superior to the traditional lecture format in teaching students the skills of collaboration (Lafond & Van Hulle Vincent 2013).
To learn to be collaborative, students need to know the roles and role expectations of team members, be able to communicate effectively and demonstrate appropriate collaborative strategies when working with other health professionals (American Association of Colleges of Nursing 2008). This scale may help faculty understand the extent to which the simulation teaching improves students’ collaborative skills.
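The variance figures discussed in this section (e.g. debriefing accounting for 46.1% of the SLEI's total variance) rest on a standard factor-analytic computation: with standardized items, a factor's share of total variance is its sum of squared loadings divided by the number of items. A minimal sketch on a small hypothetical loading matrix (not the SLEI's actual loadings) illustrates the arithmetic:

```python
# Sketch: how a factor's share of total variance is computed in an
# exploratory factor analysis, using a hypothetical loading matrix
# (3 standardized items, 2 factors) -- NOT the actual SLEI loadings.

# loadings[i][j] = loading of item i on factor j
loadings = [
    [0.80, 0.10],
    [0.75, 0.20],
    [0.15, 0.70],
]
n_items = len(loadings)

# With standardized items, each item contributes 1 unit of variance,
# so a factor's share is its sum of squared loadings over n_items.
for j in range(len(loadings[0])):
    ssl = sum(row[j] ** 2 for row in loadings)
    print(f"Factor {j + 1}: {ssl / n_items:.1%} of total variance")
```

In a real analysis the loadings would come from fitting an EFA to the item responses; the per-factor shares then sum, over all retained factors, to the total variance explained by the solution.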

Limitations
The study sample was drawn from nursing programmes of a department of nursing in central Taiwan, and generalization of the findings may be correspondingly limited. The instrument was developed in Chinese and in a teaching and learning context using the HFPS. Replication of the study with samples comprising students of diverse educational and cultural backgrounds is suggested. No test–retest reliability was assessed in this study, and the alpha for the entire scale was high (α = 0.96), which may imply item redundancy. Further study is needed to explore the possibility of item reduction for scale parsimony. The SLEI is a self-report scale and the assessment may be subject to subjective biases. Studies on associations between the SLEI and student clinical performance/competency are necessary to determine the usefulness of the SLEI.
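For reference, the internal-consistency statistic discussed above is straightforward to compute. A minimal sketch on hypothetical response data (not the SLEI dataset) follows:

```python
# Sketch: Cronbach's alpha for a set of scale items, computed from
# hypothetical Likert-style responses (rows = respondents, columns =
# items). Illustrates the statistic reported for the SLEI, not its data.
from statistics import pvariance

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])                       # number of items
    columns = list(zip(*rows))             # one tuple per item
    item_vars = sum(pvariance(col) for col in columns)
    total_var = pvariance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

responses = [
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Highly intercorrelated items push alpha toward 1, which is why a very high value such as 0.96 can signal redundant items as well as good reliability.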

Conclusion
The findings of this study support the SLEI as a valid and reliable instrument. The instrument may be used to evaluate learning outcomes when HFPS teaching is used as a teaching strategy in generic nursing courses or continuing nursing education, helping educators better understand those outcomes. HFPS is costly and many factors affect its learning outcomes. To maximize learning outcomes, the educational goals and benchmarks of the simulation that students are to achieve need to be clearly defined. Providing feedback and repetitive practice are also key to developing the required competence (Issenberg et al. 2005). The instrument allows faculty to examine the links between HFPS teaching and learning outcomes from the perspectives of preparation, process and outcome. Furthermore, the study provides evidence supporting the usefulness of the simulation model as a framework for simulation study. Use of the HFPS may be expanded if it is used in conjunction with a reliable and valid evaluation instrument. Further validation of the instrument in future research may contribute to knowledge of how to optimize HFPS use in student learning.

Funding
This study was funded by a project grant from the National Science Council, Taiwan (NSC 98-2511-S-241-005).

Conflict of interest
No conflict of interest has been declared by the authors.

Author contributions
All authors have agreed on the final version and meet at least one of the following criteria [recommended by the ICMJE (http://www.icmje.org/recommendations/)]:

• substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data;
• drafting the article or revising it critically for important intellectual content.

References
Adamson K.A. & Kardong-Edgren S. (2012) A method and resources for assessing the reliability of simulation evaluation instruments. Nursing Education Perspectives 33(5), 334–339.
Alinier G. (2007) A typology of educationally focused medical simulation tools. Medical Teacher 29(8), e243–e250.
American Association of Colleges of Nursing (2008) The Essentials of Baccalaureate Education for Professional Nursing Practice. American Association of Colleges of Nursing, Washington, DC.
Anderson J.C. & Gerbing D.W. (1988) Structural equation modeling in practice: a review and recommended two-step approach. Psychological Bulletin 103(3), 411–423.
Arnold J.J., Johnson L.M., Tucker S.J., Malec J.F., Henrickson S.E. & Dunn W.F. (2009) Evaluation tools in simulation learning: performance and self-efficacy in emergency response. Clinical Simulation in Nursing 5(1), e35–e43.
Bagozzi R. & Yi Y. (1988) On the evaluation of structural equation models. Journal of the Academy of Marketing Science 16(1), 74–94.
Brown T.A. (2006) Confirmatory Factor Analysis for Applied Research. The Guilford Press, New York.
Burns H.K., O'Donnell J. & Artman J. (2010) High-fidelity simulation in teaching problem solving to 1st-year nursing students: a novel use of the Nursing Process. Clinical Simulation in Nursing 6(3), e87–e95.
Cant R.P. & Cooper S.J. (2010) Simulation-based learning in nurse education: systematic review. Journal of Advanced Nursing 66(1), 3–15.
Chen S.L., Tsai J.C. & Lee W.L. (2008) Psychometric validation of the Chinese version of the Illness Perception Questionnaire-Revised for patients with hypertension. Journal of Advanced Nursing 64(5), 524–534.
Dubose D., Sellinger L.D. & Scoloveno R.L. (2009) Baccalaureate Nursing Education (Chapter 10). Jones & Bartlett Learning, Sudbury, MA.
Elfrink C.V.L., Leighton K., Ryan-Wenger N., Doyle T.J. & Ravert P. (2012) History and development of the Simulation Effectiveness Tool (SET). Clinical Simulation in Nursing 8(6), e199–e210.
Ganley B.J. & Linnard-Palmer L. (2012) Academic safety during nursing simulation: perceptions of nursing students and faculty. Clinical Simulation in Nursing 8(2), e49–e57.
Hayden J., Keegan M., Kardong-Edgren S. & Smiley R.A. (2014) Reliability and validity testing of the Creighton Competency Evaluation Instrument for use in the NCSBN National Simulation Study. Nursing Education Perspectives 35(4), 244–252.
Issenberg S.B., McGaghie W.C., Petrusa E.R., Gordon D.L. & Scalese R.J. (2005) Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical Teacher 27(1), 10–28.
Jansson M., Kääriäinen M. & Kyngäs H. (2013) Effectiveness of simulation-based education in critical care nurses' continuing education: a systematic review. Clinical Simulation in Nursing 9(9), e355–e360.
Jeffries P.R. (2005) A framework for designing, implementing and evaluating simulations used as teaching strategies in nursing. Nursing Education Perspectives 26(2), 96–102.
Jeffries P.R. & Rizzolo M.A. (2006) Designing and Implementing Models for the Innovative Use of Simulation to Teach Nursing Care of Ill Adults and Children: A National, Multi-site, Multi-Method Study. National League for Nursing and Laerdal Medical, New York.
Jeffries P.R. & Rogers K.J. (2009) Theoretical framework for simulation design. In Simulation in Nursing Education (Jeffries P.R., ed.), National League for Nursing, New York, pp. 21–34.
Kardong-Edgren S., Adamson K.A. & Fitzgerald C. (2010) A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing 6(1), e25–e35.
Lafond C.M. & Van Hulle Vincent C. (2013) A critique of the National League for Nursing/Jeffries simulation framework. Journal of Advanced Nursing 69(2), 465–480.
Levett-Jones T. & Lapkin S. (2014) A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Education Today 34(6), e58–e63.
Levett-Jones T., McCoy M., Lapkin S., Noble D., Hoffman K., Dempsey J., Arthur C. & Roche J. (2011) The development and psychometric testing of the Satisfaction with Simulation Experience Scale. Nurse Education Today 31(7), 705–710.
Marsh H.W. & Hocevar D. (1985) Application of confirmatory factor analysis to the study of self-concept: first- and higher order factor models and their invariance across groups. Psychological Bulletin 97(3), 562–582.
Mayville M.L. (2011) Debriefing: the essential step in simulation. Newborn and Infant Nursing Reviews 11(1), 35–39.
McGaghie W.C., Siddall V.J., Mazmanian P.E. & Myers J. (2009) Lessons for continuing medical education from simulation research in undergraduate and graduate medical education. Chest 135(3 suppl), 62S–68S.
Pike T. & O'Donnell V. (2010) The impact of clinical simulation on learner self-efficacy in pre-registration nursing education. Nurse Education Today 30(5), 405–410.
Shin S., Park J.H. & Kim J.H. (2014) Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Education Today 35(1), 176–182.
Yuan H.B., Williams B.A., Fang J.B. & Ye Q.H. (2012) A systematic review of selected evidence on improving knowledge and skills through high-fidelity simulation. Nurse Education Today 32(3), 294–298.
Ziv A., Wolpe P.R., Small S.D. & Glick S. (2003) Simulation-based medical education: an ethical imperative. Academic Medicine 78(8), 783–788.

Appendix 1
The Simulation Learning Effectiveness Inventory

1. The course contents were arranged adequately in terms of sequential order and depth, facilitating my learning.
2. I understand the objective and evaluation requirements of this course.
3. The activities in this course assisted my achieving the learning goals.
4. The equipment and resources for situational exercises were sufficient.
5. The equipment and resources for situational exercises contributed to my learning.
6. Using the environment and equipment for situational exercises was convenient.
7. If I experienced problems or difficulty using the equipment, help was always available.
8. The teacher provided appropriate positive feedback according to the learning situation of students.
9. The feedback provided by the teacher was immediate and promoted my learning outcome.
10. Discussion with the teacher after class assisted my achieving the learning goals.
11. Feedback and discussion of the simulation assisted me in correcting my mistakes and promoting my learning.
12. Situational learning enhanced my understanding of patient problems.
13. Situational learning promoted my ability to care for patients.
14. Situational learning contributed to my mastering the processes of clinical care.
15. Situational learning enabled me to acquire useful knowledge about clinical practices.
16. The contents of situational learning corresponded to my previous learning experience.
17. Situational simulation practice encouraged me to confront future clinical challenges.
18. Situational simulation practice boosted my confidence in my clinical skills.
19. Simulation learning boosted my confidence in handling future clinical problems.
20. Simulation learning alleviated my anxiety/fear of confronting future clinical patient problems.
21. Simulation learning contributed to my confidence in future patient care.
22. Simulation learning enabled me to understand the implication of each solution to patient problems.
23. Simulation learning enabled me to identify problems in clinical care that I have not noticed before.
24. In participating in simulation learning, I approached new concepts or ideas through observation.
25. Simulation learning enabled me to learn previously unfamiliar learning methods.
26. In participating in simulation learning, I approached solutions to problems through data search.
27. In participating in a situational discussion, I identified solutions to problems by understanding argument to topics.
28. Simulation courses promoted my problem-solving skills in confronting patient problems.
29. Situational simulation practice provided opportunities to practice communicating and cooperating with other members in my team.
30. Situational simulation practice enabled me to understand the role that I should play in an interaction with a medical team.
31. During the interaction in the situational simulation, I was willing to share workload with other team members.
32. I could discuss patient needs with the medical team by using effective communication skills.



© 2015 John Wiley & Sons Ltd
