CIN: Computers, Informatics, Nursing

Vol. 33, No. 2, 78–84. Copyright © 2015 Wolters Kluwer Health, Inc. All rights reserved.

FEATURE ARTICLE

The Efficacy of High-fidelity Simulation on Psychomotor Clinical Performance Improvement of Undergraduate Nursing Students

MARY ANNE VINCENT, PhD; SUSAN SHERIFF, PhD; SUSAN MELLOTT, PhD

High-fidelity simulation has become a growing educational modality among institutions of higher learning ever since the Institute of Medicine recommended in 2000 that it be used to improve patient safety. However, there is limited research on the effect of high-fidelity simulation on the psychomotor clinical performance improvement of undergraduate nursing students evaluated by experts using reliable and valid appraisal instruments. The purpose of this integrative review and meta-analysis is to explore what researchers have established about the impact of high-fidelity simulation on improving the psychomotor clinical performance of undergraduate nursing students. Only eight of the 1120 references met inclusion criteria. A meta-analysis using Hedges' g to compute the effect size and direction of impact yielded a range of −0.26 to +3.39. A positive effect was shown in seven of eight studies; however, there were five different research designs and six unique appraisal instruments used among these studies. More research is necessary to determine whether high-fidelity simulation improves psychomotor clinical performance in undergraduate nursing students. Nursing programs from multiple sites having a standardized curriculum and using the same appraisal instruments with established reliability and validity are ideal for this work.

KEY WORDS: Computer simulation; Clinical skills; Nursing education research; Nursing students; Psychomotor performance

Author Affiliations: Chamberlain College of Nursing, Houston (Dr Vincent); College of Nursing, Texas Woman's University, Dallas (Dr Sheriff); and College of Nursing, Texas Woman's University (Dr Mellott), Houston, TX. The authors have disclosed that they have no significant relationship with, or financial interest in, any commercial companies pertaining to this article. Corresponding author: Mary Anne Vincent, PhD, PO Box 131941, The Woodlands, TX 77392 ([email protected]). DOI: 10.1097/CIN.0000000000000136

Ever since the Institute of Medicine (IOM)1 recommended that simulation be used to improve patient safety, high-fidelity simulation (HFS) has been a rapidly growing educational modality among institutions of higher learning as well as patient care centers. A National Council of State Boards of Nursing survey revealed that more than half of its respondents require students to use simulation at some point in the program.2 Faculty in BSN programs have increased their use of HFS and other types of simulation in response to the need for additional quality clinical experiences for their students. Nursing schools have made substantial investments in the resources necessary for implementing clinical simulation in undergraduate nursing education. Recently, a number of researchers have explored the impact of HFS on teaching and evaluating the skills of BSN students.3–10 A few researchers5,11 report that there is not yet any support for HFS use in nursing education; however, most investigators agree that further research is necessary and find that simulation may offer some advantage over other methods in clinical teaching. The purpose of this integrative review and meta-analysis was to explore the direct measurable impact of HFS on improving the psychomotor clinical performance of BSN students. The focus is on what researchers have found in terms of whether HFS measurably improves the clinical performance of BSN students in the psychomotor domain. The direction of that measurable impact will also be described, and a scatter plot will be used to illustrate the combined results of each study.

High-fidelity Simulation

High-fidelity simulation is as close to actual clinical experience and context as possible. Bland et al12 state that simulation is a learning strategy defined as follows: "A dynamic process involving the creation of a hypothetical opportunity that incorporates an authentic representation of reality, facilitates active student engagement, and integrates the complexities of practical and theoretical learning with opportunity for repetition, feedback, evaluation, and reflection."12(p668)

The term fidelity refers to the accuracy and precision by which a medium is able to model or reproduce a sensory experience within the context and haptic dimensions of real experience.13 Haptics refers to tactile "force-feedback" stimulation that can make a simulation experience more authentic. In this article, high fidelity refers to simulation that goes beyond static, technical skill learning; the included studies describe learning experiences with some level of cosmetic and response fidelity.

Performance Improvement

"Performance refers to the way in which something or someone functions."14 Many disciplines have studied performance improvement for decades. In production economics, for example, performance improvement scholars Dutton and Thomas15 and Vits and Gelders16 focus on strategies to improve production and increase profit by looking to learning theories as a way to study and promote progress in business, capturing "progress effects" through four learning types:

- autonomous exogenous learning (on-the-job training occurring outside the production unit)
- autonomous endogenous learning (on-the-job training occurring inside the production unit)
- induced exogenous learning (planned learning occurring outside the production unit)
- induced endogenous learning (planned learning occurring inside the production unit)

Decision scientists have noted that learning during technology development has tacit (trial and error) and codified (algorithmic) characteristics and that these can be used to study how early technology adopters think versus late adopters.17 All of these elements can be used to improve or optimize systems performance in nursing. In this article, performance improvement refers to measurable evidence that high-fidelity simulation improves student psychomotor performance in undergraduate nursing clinical education.

SEARCH METHODS

A systematic search was performed for published studies featuring the direct measurement of the effect of HFS on specific clinical performance improvement elements in undergraduate nursing students. Selected articles consisted of peer-reviewed, quantitative studies in English that were published between January 2000 and April 2014. Six databases were searched: CINAHL Plus, ERIC, MEDLINE (Ovid), ProQuest (Nursing and Allied Health Source), PubMed, and SCOPUS (EMBASE). Search terms were: nurs*, simulat*, high fidelity, high-fidelity, educat*, perform*, and improv*. Reference lists of relevant review articles and papers were also searched.

Study Selection Criteria

Inclusion criteria were as follows: (1) research subjects had to be in a BSN (or equivalent) nursing program; (2) measures had to be directly quantitative; (3) simulation had to be computerized medium to high fidelity; (4) student evaluation had to be performed by clinical experts; and (5) specific directly observable psychomotor performance improvement measures of students had to be utilized. Exclusion criteria for this review were articles that did not report adequate data to facilitate an effect size calculation, qualitative studies, studies that blended BSN students with other professions and did not report separate statistics, research featuring grades as the only variable (eg, written examinations, standardized tests, course grades), studies that focused solely on surveys of student psychological domains (eg, self-confidence, student satisfaction), and nonresearch work. Research data consisting only of student or peer evaluation of psychomotor performance were also excluded.

Selection and Coding of Variables

The variables for this review were first author, publication year, number of subjects, research design, measurement instrument, and effect size and direction (Hedges' g). The mean and SD of each study were needed to calculate its effect size and direction. Results were tabulated and placed onto a scatter plot to answer the question: "How much does HFS impact psychomotor learning among undergraduate baccalaureate nursing students who are evaluated by clinical experts?" A sketch of how such coded records might be structured follows.
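As an illustration only (this is not the authors' actual coding instrument, and the field names are hypothetical), each coded measure could be represented as a structured record; the two sample rows are taken from Table 1:

```python
# Hypothetical coding-sheet structure for this review's variables;
# field names are illustrative, not the authors' instrument.
from dataclasses import dataclass

@dataclass
class CodedMeasure:
    first_author: str   # first author of the study
    year: int           # publication year
    n: int              # number of subjects behind this measure
    design: str         # research design as reported
    instrument: str     # appraisal instrument used by the clinical experts
    hedges_g: float     # effect size and direction (Hedges' g)

# Two rows coded from Table 1, as examples:
measures = [
    CodedMeasure("Alinier", 2006, 99,
                 "Test/retest, random allocation, controlled", "OSCE", 0.68),
    CodedMeasure("Blum", 2010, 53,
                 "Quasi-experimental", "Lasater Clinical Judgment Rubric", -0.26),
]
```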

Meta-analysis Methods

The effect size is the estimated magnitude of a relationship between two groups and is also referred to as the impact in this review. Direction of impact is determined by plus or



minus values of effect sizes. Positive impact means that the experimental group showed more performance improvement than the control group, and negative impact means that the control group performed better. Furthermore, Cohen18 defines effect sizes as small from 0.2 to 0.49, medium from 0.50 to 0.79, and large from 0.8 and higher. In this analysis, positive and negative directions are also considered using this guideline.

Hedges' g (a variant of Cohen's d) was used to estimate the impact of HFS in the meta-analysis of the results of the studies included in this review. Hedges' g is a method for estimating effect sizes when the number of subjects and the SD of each group cannot be assumed to be equal when analyzing the results of a series of independent studies.19 Effect size can be approached in three ways, based on different specific assumptions.18 Cohen's d assumes equal sample size and equal SD among groups. Glass's Δ assumes that sample size and SD are not equal and uses the control group SD to estimate effect size. Hedges' g makes the same assumptions as Glass's Δ but pools the group SDs instead, which is important to avoid systematic bias when estimating effect size. The calculation used for estimating effect size in this meta-analysis is as follows:

$$\text{Hedges' } g = \frac{M_1 - M_2}{\sqrt{\dfrac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}}$$

where $M$, $s$, and $n$ are each group's mean, SD, and number of subjects.

Group 1 is the experimental group, and Group 2 is the control group. Descriptive statistics reported within each study were used to make this calculation.
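As a minimal sketch of this calculation (assuming only the formula above; the numbers in the usage example are hypothetical and not drawn from any study in this review):

```python
import math

def hedges_g(m1, m2, s1, s2, n1, n2):
    """Hedges' g: the mean difference between groups divided by the
    pooled SD. Group 1 is the experimental group and Group 2 is the
    control group, so a positive g favors the HFS group."""
    pooled_sd = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical posttest summary statistics, for illustration only:
print(hedges_g(m1=20.0, m2=11.0, s1=3.3, s2=2.3, n1=15, n2=16))
```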

RESULTS

A total of 1120 references were found using the search terms. Initial elimination of studies via electronic screening removed duplications between databases and articles that were not in English, were not peer reviewed, or were irrelevant to this review, such as feature and editorial articles. Subsequently, 196 articles were screened manually for a more detailed evaluation; exclusions at this stage were qualitative studies, studies that did not have undergraduate nursing student subjects, and studies that reported only nonpsychomotor quantitative learning evaluation data (eg, written examinations, grades, or student satisfaction surveys). From those articles, 30 abstracts were reviewed, and six additional articles were added from reference lists and review articles; these were then screened more carefully against the specific study selection criteria. Eight studies met final inclusion criteria (Table 1). Selected studies were published between 2006 and 2013; 88% of the articles were published in 2010 or later. Four studies had more than one measure that qualified for inclusion in the meta-analysis. The numbers of subjects ranged from 18 to 188.

Integrative Review

In the earliest work included in this review, Alinier et al3 used a pretest/posttest research design with a 15-station Objective Structured Clinical Examination (OSCE) to evaluate the clinical performance of 99 undergraduate nursing students (control n = 50, experimental n = 49) between 2001 and 2003. A significant difference (P < .001) was reported between the control and experimental groups.

Table 1. Quantitative High-fidelity Simulation Studies in Undergraduate Nursing Education (n = 8)

Year | 1st Author | Measurement Instrument              | Research Design                            | n   | Hedges' g (ES) | Impact or ES(a)
2006 | Alinier3   | OSCE                                | Test/retest, random allocation, controlled | 99  | 0.68           | Medium
2012 | Baxter20   | OSCE—video vs control               | Random allocation, controlled              | 16  | 1.23           | Large
     |            | OSCE—interactive vs control         | Random allocation, controlled              | 17  | 1.95           | Large
     |            | OSCE—interactive vs video           | Random allocation—2 interventions          | 21  | 0.47           | Small
2010 | Blum5      | Lasater Clinical Judgment Rubric    | Quasi-experimental                         | 53  | −0.26          | Small
2012 | Kim23      | Validated tool: communication       | Test/retest, nonequivalent control         | 70  | 0.49           | Small
     |            | Validated tool: clinical competence | Test/retest, nonequivalent control         |     | 0.73           | Medium
2013 | Kirkman21  | OSCE—pre/postlecture                | Repeated measures                          | 42  | 0.93           | Large
     |            | OSCE—postlecture/post-SIM           | Repeated measures                          |     | 0.92           | Large
     |            | OSCE—prelecture/post-SIM            | Repeated measures—nonsuccessive            |     | 1.83           | Large
2012 | Liaw24     | RAPIDS tool                         | Randomized controlled trial                | 31  | 3.39           | Large
2013 | Smith25    | PDCA model: role                    | Test/retest                                | 188 | 0.35           | Small
     |            | PDCA model: objectives met          | Test/retest                                |     | 0.52           | Medium
     |            | PDCA model: participation           | Test/retest                                |     | 0.50           | Medium
2013 | Walshe22   | CANE simulation scores              | Repeated measures                          | 34  | 0.37(b)        | Small

Abbreviation: ES, effect size.
(a) Effect sizes are small from 0.2 to 0.49, medium from 0.50 to 0.79, and large from 0.8 and higher. From Cohen.18
(b) The Walshe et al22 effect size is the average of the successive means of each trial from SIM 1 to SIM 6. Data tabulated from the reported results of Walshe et al.22



Two other studies used the OSCE to evaluate student performance. Baxter et al20 randomly allocated 27 students into three groups: the control group (n = 6), the video instruction group (n = 10), and the interactive group (n = 11). They reported finding a significant difference among the three groups (F2 = 6.01, P = .007), but not between the two intervention groups, although the interactive group showed the greatest gain in learning.

Kirkman21 took 42 undergraduate nursing students through a series of alternating observations and interventions that measured performance improvement after each intervention. Observations were made by faculty of students in a traditional clinical setting. The first observation served as the control, the second observation was made after a lecture, and the third observation was made after simulation instruction. The instrument was an OSCE hybrid created by the researcher. Kirkman concluded that the results demonstrate that significant (P = .000) learning occurred among participants over time and that the greatest difference occurred after the HFS intervention (difference between control and HFS = 3.32 ± 0.21; difference between lecture and HFS = 1.75 ± 0.21).

Blum et al5 used the Lasater Clinical Judgment Rubric to evaluate the performance of 53 undergraduate students in the simulation laboratory; 16 students served as controls and were given traditional skills laboratory instruction with task trainers and student volunteers, and 37 students were given a "simulation-enhanced approach" using a computerized simulation mannequin. Students were first evaluated by faculty at midterm and then again at the end of the term. The authors reported finding a significant difference in "clinical competence" between the control and experimental groups (t52 = 5.10, P = .00).

Walshe et al22 used the Challenging Acute Nursing Events (CANE) module, which was developed at University College Cork in Ireland. It is a problem-based, integrative, active learning strategy for enhancing the transfer of learning to psychomotor performance. The CANE module used in this study was applied to a senior-level nursing course, NU4032 Nursing Management of Challenging Acute Nursing Episodes (CANE). Thirty-four students were evaluated over six successive simulations as expert appraisers scored performance. The greatest improvement in performance was found between SIM 1 and SIM 6, and all successive simulations showed improvement except between SIM 3 and SIM 4 (Table 2). The effect size reported for Walshe et al22 in this study's meta-analysis is the average of each of the successive simulation experiences.

Kim et al23 used a nonequivalent control pretest-posttest research design to evaluate the performance improvement of 70 undergraduate nursing students (35 students in the control group and 35 in the experimental group). They used previously developed and validated Korean-language tools to evaluate student learning of communication skills (Yoo, as cited in Kim et al23) and clinical competence (Yang and Park, as cited in Kim et al23).

Table 2. Effect Sizes of Successive Repeated Measures of Six Simulation Sessions of the Study of Walshe et al22

      | SIM 2 | SIM 3 | SIM 4 | SIM 5 | SIM 6
SIM 1 | 0.46  | 1.46  | 0.50  | 1.66  | 2.11
SIM 2 |       | 0.83  | 0.07  | 1.07  | 1.44
SIM 3 |       |       | −0.71 | 0.29  | 0.65
SIM 4 |       |       |       | 0.94  | 1.28
SIM 5 |       |       |       |       | 0.34

Average effect size among successive sessions was 0.37; n = 188 students. Data tabulated from the reported results of Walshe et al.22

Kim et al23 reported significant improvement in student performance in both communication skills (t = −2.39, P = .020) and clinical competence (t = −2.71, P = .009) with simulation instruction.

Liaw et al24 used a randomized controlled trial research design to pilot a study of performance improvement in the clinical competence of 31 undergraduate nursing students who were randomly divided into experimental (n = 15) and control (n = 16) groups. Participant group assignment was blinded. They used the Rescuing a Patient in Deteriorating Situation (RAPIDS) tool, which was developed by the principal investigator, and assessed knowledge, self-confidence, and clinical performance using a different tool for each of the three variables. They reported finding no significant correlation between clinical performance and self-confidence or between clinical performance and knowledge in either the experimental or the control group; however, they did report that clinical performance improved significantly from pretest to posttest for the experimental group (mean, 10.37 [SD, 48]; mean, 20.13 [SD, 3.29]) when compared with the control group (mean, 10.22 [SD, 2.39]; mean, 11.22 [SD, 2.25]).

Smith et al25 evaluated 188 undergraduate nursing students from 2007 through 2009 using a test-retest research design and a continuous quality improvement framework frequently applied in the clinical setting, the Plan, Do, Check, Act (PDCA) model. Faculty evaluated student performance using a Likert scale (1 = not at all, 5 = entirely) to answer three questions: (1) Did the student appropriately fulfill his/her assigned role in the scenario? (2) Did the student provide substantive comments/input in the scenario briefing? (3) Did the student help meet the objectives for the scenario? Student evaluations occurred at the middle and end of a semester, and scores from the two sessions were compared for significance. All three questions showed significant score increases at the second evaluation (P < .02).

Meta-analysis

The effect size between the experimental and control groups among all studies ranged from −0.26 to +3.39, with a mean of +0.94, a median of +0.68, and no estimated



mode, as shown in Table 1. One study yielded a small negative impact, and four others were small and positive. Three studies showed medium impact across four measures. Studies using controlled random allocation and repeated-measures research designs showed large impact except when comparing two interventions or when the average of successive measures was the result.22 These results show that most of the eight studies (75%) found a medium to large positive impact when using HFS versus traditional clinical teaching methods, which indicates that there is support for its use in BSN education.

A scatter plot distribution (Figure 1) of all effect sizes compared with sample sizes shows that the values do not quite trend toward the central values (mean, +0.94; median, +0.68) as sample size increases; most values still lie to the left of those central values, which means that a few investigators have reported a larger impact, or magnitude of performance improvement, among experimental groups versus their controls. The effect sizes show that most investigators have reported a positive impact; however, most effect sizes are less than the mean (+0.94), and there is one extreme positive value (+3.39) from the study of Liaw et al24 (Table 1). The results change appreciably when the extreme outlier study24 (N = 31) and the listed result of two interventions in Table 1 from Kirkman21 are eliminated from the scatter plot (Figure 1) and the analysis, producing a mean of +0.75, a median of +0.52, and a range of −0.26 to +1.95. Skewness in the meta-analysis is +1.67 with all measures included and +0.80 with those two measures removed. A positive skew suggests that these studies tend to favor reporting a positive impact.
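As a check on the pooled figures, the summary statistics above can be recomputed directly from the effect sizes tabulated in Table 1; the following standard-library Python sketch (not the authors' tooling) reproduces the reported mean, median, and range:

```python
import statistics

# The 15 Hedges' g values tabulated in Table 1.
effect_sizes = [0.68, 1.23, 1.95, 0.47, -0.26, 0.49, 0.73, 0.93,
                0.92, 1.83, 3.39, 0.35, 0.52, 0.50, 0.37]

print(round(statistics.mean(effect_sizes), 2))   # 0.94 (reported mean)
print(statistics.median(effect_sizes))           # 0.68 (reported median)
print(min(effect_sizes), max(effect_sizes))      # -0.26 to 3.39 (reported range)
```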

FIGURE 1. Scatter plot of effect size, sample size, and research design of 13 measures from eight nursing education HFS studies. Research designs are as follows: Random allocation (diamond), Test-retest (triangle), Repeated measures (X), and Quasi-experimental (circles).


Patterns also emerge when research design is considered together with sample size and effect size, as shown in Table 1. Test/retest studies tend to have large sample sizes (range, 70–188) and small to medium effect sizes (range, +0.35 to +0.73). The randomized controlled trial had a small sample size (N = 31) with a very large effect size (+3.39). The three measures reported in the random allocation study had smaller sample sizes than the randomized controlled trial but showed moderate to large clinical performance improvement among students trained using HFS. Two studies using a repeated-measures design also yielded data that produce small to large effect sizes (g = +0.35 and +1.83), and they had relatively small sample sizes (N = 34 and 42, respectively). All of these studies showed small to large improvement in BSN student clinical performance, whereas the quasi-experimental study showed a small reduction (−0.26) in clinical performance when HFS was used (Table 1).

Two studies3,20 used the OSCE, and another project used a form of the OSCE.21 The OSCE is a standardized psychomotor clinical examination principally used in the United Kingdom. Performance evaluation using the OSCE reaches the more complex "shows how" and "does" levels of the Miller model,26 as compared with the traditional "knows" and "knows how" knowledge domains. The rest of the studies included in this review used a variety of validated instruments with which a clinical expert could evaluate an undergraduate nursing student (Table 1). Measurement instruments for student evaluation were validated prior to use in all but one study: Smith et al25 used a common industry framework for continuous quality improvement, the PDCA model, to evaluate outcomes.

DISCUSSION

The research reviewed in this article focused on nurse expert evaluation of psychomotor clinical skill acquisition among undergraduate nursing students using HFS versus traditional learning methods. High-fidelity simulation in undergraduate nursing education is still in its infancy, and there is still inadequate evidence for using many simulation learning strategies. The strength of the research included in this study is that many researchers have explored ways to measure clinical performance improvement and to design psychomotor testing strategies, such as those found in the OSCE.26 A weakness is that much work remains: there was only one randomized controlled trial, and it had a very small sample size, while most studies depended on convenience samples and repeated measures. Sample sizes in general were very small (range, 16–188). It would also be helpful to have a comparative analysis of the validated psychomotor clinical evaluation tools used for measuring nursing student clinical performance improvement with HFS.

Kirkman21 had a number of issues with research design. First, the study claimed to use a time-series research design but then described data collection occurring during only three intervals before and after two serial interventions. A research design with three points of


data collection is better described as a repeated-measures research design. Kirkman also used "a performance evaluation tool [that] was structured to reflect the" OSCE, not the actual instrument itself. The validity of the original OSCE was described as a correlation of 0.63, whereas three of Kirkman's own faculty established content validity for the new instrument, yielding a content validity of 1.0. Finally, the simulation mannequins used for the study were not described.

Although the study of Liaw et al24 appears to have the best research design of this review, it is considered a weaker study because of its sample size. Furthermore, the authors used an instrument they had developed and validated "in a previous study" and did not report the reliability and validity statistics. They stated that "psychometric properties were ... tested and supported in a previous study" and then cited the article containing those data as unpublished. This study produced the most extreme outlier results and claimed the greatest benefit from simulation instruction versus traditional instruction.

Smith et al25 referenced Williams and Fallone27(p517) for the PDCA model, who state that "this instrument was not tested for validity or reliability." What Smith et al25 do not state is that the PDCA model is also known as the Shewhart Cycle and has an extensive history dating back to the 1930s, when Walter Shewhart originally conceptualized it. It was further developed by Shewhart and Deming28 over several decades and popularized globally in the 1950s as a framework for performance improvement in industry.

Skewness

Skewness, a measure of the direction of deviation from central tendency (which could be the result of a single outlier study), is +1.67 with all measures included and +0.80 with the two measures removed. A positive skew indicates a lack of symmetry about a Gaussian curve, meaning that these measures do not fit a normal distribution and tend to favor reporting a positive impact. This finding suggests that there may be publication bias due to information being difficult to find,29 or (more likely) that there simply is not yet enough research available, and more needs to be done before attempting another meta-analysis of this nature. Also, only one randomized controlled trial was included in this review,24 and it was a pilot study with a very small sample size.
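Assuming the adjusted Fisher-Pearson estimator (the form most spreadsheet and statistics packages report; the article does not name the estimator used), the reported skew can be reproduced from the same Table 1 values:

```python
import math

# The 15 Hedges' g values tabulated in Table 1.
effect_sizes = [0.68, 1.23, 1.95, 0.47, -0.26, 0.49, 0.73, 0.93,
                0.92, 1.83, 3.39, 0.35, 0.52, 0.50, 0.37]

def sample_skewness(xs):
    """Adjusted Fisher-Pearson coefficient of skewness."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    g1 = m3 / m2 ** 1.5                         # biased moment coefficient
    return g1 * math.sqrt(n * (n - 1)) / (n - 2)  # small-sample adjustment

print(round(sample_skewness(effect_sizes), 2))  # 1.67, matching the reported value
```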

Implications for Nursing Education

Nursing education pedagogy could incorporate deliberate practice30 and reflective practice31 principles into HFS, as well as use specific and progressive gradations of fidelity, to optimize student learning. Deliberate practice is based on psychologist Anders Ericsson's32 studies of expert performance and is described as training that is a "highly structured activity explicitly directed at improvement of performance in a particular domain."33 There are four design elements: (1) specific focus on repetitive cognitive or psychomotor skills, (2) rigorous skills assessment, (3) specific information feedback, and (4) improved skills performance.33 Reflective practice refers to the process of reviewing a practice experience "in order to describe, [analyze] ... evaluate ... and ... inform learning from practice."34 These learning methods could be integrated into a simulation protocol to habituate the nursing student to become a more fully engaged lifelong learner.

CONCLUSION

High- and medium-fidelity simulation, when carefully planned and informed by established best-practice protocols, can build certain global skill sets within clinicians and developing clinicians, but more research is needed at this time. These global skill sets can accelerate the novice-to-expert process via deliberate and reflective practice. This would measurably improve clinical performance in the leadership, management, and task management domains and their requisite outcomes, such as the patient safety outlined by the IOM.1 Furthermore, nurses must continue "discovering assumptions, expectations, and [skills] sets [that] can uncover an unexamined area of practical knowledge that can then be systematically studied and extended or refuted."35(p8)

REFERENCES



1. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
2. Kardong-Edgren S, Willhaus J, Bennett D, Hayden J. Results of the National Council of State Boards of Nursing national simulation survey: part II. Clin Simulat Nurs. 2012;8(4):e117–e123.
3. Alinier G, Hunt B, Gordon R, Harwood C. Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. J Adv Nurs. 2006;54(3):359–369.
4. Bambini D, Washburn J, Perkins R. Outcomes of clinical simulation for novice nursing students: communication, confidence, clinical judgment. Nurs Educ Perspect. 2009;30(2):79–82.
5. Blum CA, Borglund S, Parcells D. High-fidelity nursing simulation: impact on student self-confidence and clinical competence. Int J Nurs Educ Scholarsh. 2010;7(1):article 18.
6. Bogossian F, Cooper S, Cant R, et al. Undergraduate nursing students' performance in recognizing and responding to sudden patient deterioration in high psychological fidelity simulated environments: an Australian multi-centre study. Nurse Educ Today. 2014;34(5):691–696.
7. Foster JG, Sheriff S, Cheney S. Using nonfaculty registered nurses to facilitate high-fidelity human patient simulation activities. Nurse Educ. 2008;33(3):137–141.
8. Jeffries PR, Rizzolo MA. Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: a national, multi-site, multi-method study, 2006. http://www.nln.org/beta/research/nln_laerdal/index.htm. Accessed May 15, 2014.
9. Simonelli MC, Paskausky AL. Simulation stimulates learning in a childbearing clinical course. J Nurs Educ. 2011;51(3):172–175.



10. Smith SJ, Roehrs CJ. High-fidelity simulation: factors correlated with nursing student satisfaction and self-confidence. Nurs Educ Perspect. 2009;30(2):74–78.
11. Cant RP, Cooper SJ. Simulation-based learning in nurse education: systematic review. J Adv Nurs. 2010;66(1):3–15.
12. Bland AJ, Topping A, Wood B. A concept analysis of simulation as a learning strategy in the education of undergraduate nursing students. Nurse Educ Today. 2011;31:664–670.
13. Seropian MA, Brown K, Gavilanes JS, Driggers B. Simulation: not just a manikin. J Nurs Educ. 2004;43(4):164–169.
14. Swanson RA. The foundations of performance improvement and implications for practice. Adv Dev Hum Resour. 1999;1(1):1–25.
15. Dutton JM, Thomas A. Treating progress functions as a managerial opportunity. Acad Manag Rev. 1984;9(2):235–247.
16. Vits J, Gelders L. Performance improvement theory. Int J Prod Econ. 2002;77:285–298.
17. Edmondson AC, Winslow AB, Bohmer RMJ, Pisano GP. Learning how and learning what: effects of tacit and codified knowledge on performance improvement following technology adoption. Decis Sci. 2003;34(2):197–223.
18. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. New York, NY: Lawrence Erlbaum; 1988.
19. Hedges LV. Distribution theory for Glass's estimator of effect size and related estimators. J Educ Stat. 1981;6(2):107–128.
20. Baxter P, Akhtar-Danesh N, Landeen J, Norman G. Teaching critical management skills to senior nursing students: videotaped or interactive hands-on? Nurs Educ Perspect. 2012;33(2):106–110.
21. Kirkman TR. High-fidelity simulation effectiveness in nursing students' transfer of learning. Int J Nurs Educ Scholarsh. 2013;10(1):1–6.
22. Walshe N, O'Brien S, Murphy S, Hartigan I. Integrative learning through simulation and problem-based learning. Clin Simulat Nurs. 2013;9:e47–e54.
23. Kim HY, Ko E, Lee ES. Effects of simulation-based education on communication skill and clinical competence in maternity nursing practicum. Korean J Women Health Nurs. 2012;8(4):312–320.


24. Liaw SY, Scherpbier A, Rethans JJ, Klainin-Yobas P. Assessment for simulation learning outcomes: a comparison of knowledge and self-reported confidence with observed clinical performance. Nurse Educ Today. 2012;32:e35–e39.
25. Smith KV, Klaassen J, Zimmerman C, Cheng AL. The evolution of a high-fidelity patient simulation learning experience to teach legal and ethical issues. J Prof Nurs. 2013;29:168–173.
26. Rushforth HE. Objective structured clinical examination (OSCE): review of the literature and implications for nursing education. Nurse Educ Today. 2007;27:481–490.
27. Williams JF, Fallone S. CQI in the acute care setting: an opportunity to influence acute care practice. Nephrol Nurs J. 2008;35(5):515–522.
28. Moen R, Norman C. Evolution of the PDCA cycle, 2006. http://kaizensite.com/learninglean/wp-content/uploads/2012/09/Evolution-of-PDCA.pdf. Accessed March 13, 2014.
29. Sterne JAC, Sutton AJ, Ioannidis JP, Terrin N, Jones DR, Lau J, Higgins JPT. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomized controlled trials. Res Methods Rep. 2011;342:d4002.
30. Chee J. Clinical simulation using deliberate practice in nursing education: a Wilsonian concept analysis. Nurs Educ Pract. 2013;14(3):247–252.
31. Burns HK, O'Donnell J, Artman J. High-fidelity simulation in teaching problem solving to 1st-year nursing students: a novel use of the nursing process. Clin Simulat Nurs. 2010;6:e87–e95.
32. Ericsson KA, Krampe RT, Tesch-Romer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100(3):363–406.
33. Duvivier RJ, van Dalen J, Muijtjens AM, Moulaert VRMP, van der Vleuten CPM, Scherpbier AJJA. The role of deliberate practice in the acquisition of clinical skills. BMC Med Educ. 2011;11:101. http://www.biomedcentral.com/1472-6920/11/101. Accessed April 1, 2014.
34. Reid B. Exploring a response to the concept of reflective practice in order to improve its facilitation. Nurse Educ Today. 1993;13:305–309.
35. Benner P. From Novice to Expert: Excellence and Power in Clinical Practice. Upper Saddle River, NJ: Prentice Hall; 2001.

