Is high fidelity human patient (mannequin) simulation, simulation of learning?

Denise McGarry a,⁎, Andrew Cashin b, Cathrine Fowler c

a School of Nursing, Midwifery and Indigenous Health, Charles Sturt University, Panorama Drive, Bathurst, NSW 2795, Australia
b School of Health and Human Sciences, Southern Cross University, PO Box 157, Lismore, NSW 2480, Australia
c Tresillian Chair in Child & Family Health, Faculty of Health, University of Technology, Sydney, NSW 2007, Australia

⁎ Corresponding author. E-mail addresses: [email protected] (D. McGarry), [email protected] (A. Cashin), [email protected] (C. Fowler).

Article history: Received 1 November 2013; received in revised form 5 March 2014; accepted 17 April 2014.

Keywords: Human patient (mannequin) simulation; nursing students; education; learning outcomes; transfer of knowledge.

Summary

This paper explores the application of evaluation of high fidelity human patient (mannequin) simulation emerging in nursing education. The ramifications for use in mental health nursing are examined. A question is posed: Is high fidelity human patient (mannequin) simulation limited to being a “simulation of learning”? Explicit research that traces learning outcomes from mannequin to clinical practice, and hence to consumer outcomes, is absent in mental health. Piecing together research from psychology addressing cognitive load theory, and considering the capacity for learners to imitate desired behaviour without experiencing deep learning, the possibility is real that simulation of learning is the outcome of high fidelity human patient (mannequin) simulation applications to mental health nursing.

The contention of this paper is that there is a risk that learning with high fidelity human patient (mannequin) simulation (HPS) is learning how “to simulate”. Because simulators are also used to evaluate the simulated activity, what may be learnt is how to operate the simulator rather than how to care for patients. As HPS allows complex clinical practice to be replicated, it might be assumed that this supports higher order learning (Holtschneider, 2007). Complexity and cognitive load theory would suggest otherwise. Mental health nursing involves a range of complex skills that are of a different type than the mastery of procedural protocols. This places mental health nursing preparation that incorporates HPS at particular risk of involving simulation of learning.

Pedagogical Features of HPS

HPS is portrayed as the current hallmark of contemporary education in nursing (for example, see the systematic review by Cant and Cooper (2010)). Fidelity comprises two notions that are not yet fully reconciled: ‘engineering’ fidelity (or authenticity), whether the simulation looks realistic; and ‘psychological’ fidelity, whether the simulator requires accurate behavioural responses to engage in the learning experience (Norman et al., 2012, p. 637). Use of electronically controlled mannequins as patient models with increasingly sophisticated physiologically-responsive parameters is growing internationally (McKenna et al., 2007; Murray et al., 2008; Nehring, 2008; Nursing and Midwifery Council, 2007). Nursing websites feature clusters of students seriously regarding the ‘plastic man’ in his bed, and the narrative of these sites positions HPS as an established pedagogy (McGarry et al., 2011). However, examination of the literature suggests otherwise.

Internationally, HPS represents a significant investment by nursing schools. Purchase price is self-evidently one form of investment, with each mannequin costing upward of US$80,000 (Norman et al., 2012). Such monetary investment, especially when made by governments, frequently arouses interest in determining the outcome of this largesse (National Council of State Boards of Nursing Inc, 2009). This acts as a driver for the production of measures of effectiveness, which are primarily addressed through utilisation reports rather than through the development of accurate tools to evaluate HPS-facilitated learning (Tanner, 2011; Yuan et al., 2012). The investment value of HPS is also mediated by its teaching time requirements. Small groups are often best suited to sessions with the mannequin (Kaplan et al., 2012). This leaves less time for other learning activities, and opinion is emerging in the peer-reviewed literature that the adoption of this pedagogy may create pressure across the curriculum (Blazeck and Zewe, 2013; Bray et al., 2009).

One possible remedy is to incorporate additional subjects and learning objectives into the simulation sessions. Many nursing subjects (e.g. ethics, law, communication) have potential for integration (Lapkin et al., 2010). Such integration may also increase the fidelity created in the use of HPS, improving learning outcomes if viewed through the lens of situated learning (Kneebone et al., 2005). Situated learning theory posits that much learning is unintentional and occurs in part through legitimate peripheral participation situated in authentic activities, contexts and culture (Lave and Wenger, 1991).

HPS Use in Mental Health Nursing Preparation

Mental health nursing, despite its long history of using other simulation modalities such as standardised patients, film and role play, has been slow to adopt this new simulation technology (McGarry et al., 2012). The paucity and variable quality of clinical placements in mental health make the controlled opportunities of this pedagogy an important part of a solution (Mental Health Nurse Education Taskforce, 2008). Positive placement and teaching experiences in mental health have been shown in a series of Australian studies to be significantly linked with the choice to become a mental health nurse (Happell, 2008a, b, 2009; Happell et al., 2008). The potential to provide these positive experiences more predictably by the use of HPS in mental health nursing subjects is attractive.

Types of Evaluation in HPS

Evaluation of HPS has been dominated by a survey approach largely based on participant satisfaction (Cato et al., 2009; Reilly and Spratt, 2007; Smith and Roehrs, 2009). Published papers detail students' self-reports of increasing confidence and self-efficacy (Feingold et al., 2004; Kardong-Edgren et al., 2009). However, a systematic review that compared physicians' self-rated assessment with external assessors' observations led to the veracity of self-assessment being queried (Davis et al., 2006). Of the 725 articles located, covering studies using quantifiable and replicable measures from the United Kingdom, Canada, United States, Australia and New Zealand, only 17 met all inclusion criteria. Studies of students only, comparisons of self-report, articles about the development of tools, and specialty society self-assessment programs were excluded. These studies yielded 20 comparisons of self-assessment with external assessors' measures, and in 65% of these comparisons there was little, no, or even an inverse relationship between self-assessment and the assessment of external observers. Only 35% of the comparisons demonstrated positive associations. Further, the authors reported that in a number of studies the least skilled physicians were the most confident, a result asserted to be common with findings in other professions (Davis et al., 2006). The value of such self-assessment measures has thus been substantially discredited, leading some theorists to suggest that the very notion of self-assessment requires re-formulation to explore the sources of its variance (Eva and Regehr, 2011). This prompted Kardong-Edgren et al. (2010) to advocate that instrument development utilising self-report and satisfaction be suspended.

A number of researchers are working to refine tools to measure learning outcomes from the use of HPS in nursing (Lasater, 2007; Prion, 2008; Radhakrishnan et al., 2007; Todd et al., 2008). Much of this work is still limited by small samples and by narrow scope of clinical application.
The focus has been on developing or improving the utility of existing tools to measure different learning domains (Adamson et al., 2012), such as the development of improved observational assessment methods. However, Tanner (2011, p. 491) criticised such efforts, remarking how “little investment there has been in developing suitable measures for the assessment of learning outcomes, particularly those relevant for a practice discipline”.

Examination of the medical education literature suggests two approaches to understanding evaluation paradigms with particularly efficacious application to HPS (Adamson et al., 2012). These approaches are Kirkpatrick's levels of evaluation (Kirkpatrick, 1994) and the adaptation to HPS, by McGaghie et al. (2011), of the translational model developed by the National Institutes of Health. The translational model evaluates the extent to which new knowledge moves from (or is translated from) scientific discovery in the laboratory to application at the bedside. Applied to simulation, this model addresses the extent to which learning (or new student knowledge) moves from the simulation environment to changes in practice. Kirkpatrick's (1994) four-level model of evaluation has been identified in a review of published simulation evaluation instruments as also having potential for application (Adamson et al., 2012). The four levels of Kirkpatrick's model are labelled reaction, learning, behaviour and outcomes. These levels, it is suggested in Adamson et al.'s review, could gain additional utility if combined with the descriptions of simulation evaluation developed by Boulet et al. (2011). Boulet et al.'s descriptions comprise four categories: how learners reacted to the learning process, the extent of knowledge gain, the capacity to perform learned skills on the job, and the impact of the training program (for example, on patient safety). This update of the prior review (Adamson et al., 2012) highlights the importance, for any educational (or research) endeavour, of transferring new knowledge (or learning) from HPS, classroom or research bench to patient outcome. The work clarifies the different levels of HPS evaluation: learner knowledge and skill gain within the learning environment, application in the clinical setting, and/or improved patient outcomes. There are no impediments within this evaluative model to the psychomotor, affective and cognitive domains of learning; all may be subject to this continuum of evaluation settings. In addition, the durability and application of learning are fundamental to the model.

A shortcoming of both the updated review (Adamson et al., 2012) and the original review (Kardong-Edgren et al., 2010) is that they are not systematic reviews of the literature. Rather, they claim to be a review of “…current representative samples of many different ways of evaluating clinical abilities” (Kardong-Edgren et al., 2010, p. e26). Hence, the relative strengths and limitations of the instruments are difficult to discern. This conceptualisation of evaluation also represents a considerable challenge in application. The logistical problem of tracing learning gains into clinical environments is one such challenge. Tools that reliably and validly assess performance in clinical placements are contested (Gaba, 2004). The capacity to ensure inter-rater reliability and assessment of equivalent situations is almost impossible to guarantee, as clinical presentations are varied and dynamic.

Challenges for Evaluation in HPS

HPS faces challenges in common with any clinical evaluation, or evaluation in general. These difficulties include the inherent bias and subjectivity of evaluation based on direct observation; the situational environment of the evaluation, which is affected by the actions of others including the responses of patients; and the dynamic nature of a clinical environment, which precludes equivalent evaluation circumstances between students (Saewert and Rockstraw, 2012).
There is a well-recognised difficulty in securing clinical placements that can guarantee required learning (Mental Health Workforce Advisory Committee, 2010; National Health Workforce Taskforce, 2008; Nursing and Midwifery Council (UK), 2010). These difficulties can result in a default position in which both formative and summative assessments are undertaken in HPS (Jeffries, 2005, 2006). Evaluation of student learning by application in clinical practice, and then by improved patient outcome, is not yet suggested to be viable (Adamson et al., 2012). An assumption apparent in this context supports the contention of this paper that there is a risk the learning itself is “to simulate”: as simulators are used in the evaluation of the simulated activity, what is in fact learnt may be how to operate the simulator rather than how to care for patients.

As HPS allows complex clinical practice to be replicated, it might be assumed that this supports evaluation of complex or higher order learning (Holtschneider, 2007). Complexity and cognitive load theory would suggest otherwise: a learner's ability to learn is reduced as the number and complexity of learning objectives increase (Van Merrienboer and Sweller, 2010). Students may therefore be learning to play-act expected responses in complex circumstances. This might also include learning that a feature of the nursing role is to disguise lack of knowledge by ‘going through the motions’. Compounding this unintended behavioural outcome are cultural norms, such as the enhanced importance of ‘face saving’ for some cultural groups (Holroyd et al., 1998). Error or lack of knowledge may be hidden in order to gain approval, social and perhaps educational.

A further confounding factor in current HPS usage is the limited ability to engage in sustained practice (Akhtar-Danesh et al., 2009; Howard et al., 2011). Re-running the full HPS learning event at will is difficult, as it requires coordination of multiple resources and personnel. Practice has been identified in the literature as an important condition supporting the effectiveness of HPS learning (Issenberg et al., 2005). However, the resource-poor context of HPS (time and sundry expenses) may reduce the ability of learners to engage in repeated practice. One might also speculate that the expense of equipment and supplies could act as a barrier to unsupervised practice (Howard et al., 2011). Given that the special domain of HPS is authentic learning in teams, a further potential impediment is identified: the capacity to have repeated team exposure for practice may be logistically prohibitive.

The application to HPS-mediated learning of these emerging understandings drawn from psychology, cognitive load theory in particular (Sweller et al., 2011), may benefit from what has been learnt in other areas that use simulation. Cognitive load is a concept developed in cognitive psychology to understand the limitations of executive control of working memory that impede learning in complex situations. Essentially, if a high number of demands are placed on working memory to process information and processes simultaneously, its finite capacity may be exceeded. Meaningful learning cannot resume until all prior processing demands have been met.

Insights from Medical Education

Norman et al. (2012) compared learning from simulation utilising high or low fidelity, based on measures of clinical performance in medicine. The study reviewed papers identified through the systematic review of simulation by Issenberg et al. (2005), updated in 2011 (McGaghie et al., 2011), but was not itself a systematic review. Norman et al. found that such research was uncommon, but across the 24 studies, of varying design and size, no significant advantage was found for high fidelity simulation. These authors discussed a number of factors that they argued underpin the assumption that HPS will support better transfer of learning. Critical to this argument is the notion that “the closer to the ‘real world’…the better the transfer to real life” (Norman et al., 2012, p. 637).
This follows from Coleridge's notion of suspension of disbelief as a requisite condition for immersion in any form of imitation or simulation (Herrington et al., 2003). But Norman et al. (2012) indicated that a clear understanding of which particular factors facilitate transfer of knowledge and learning through their resemblance to real life has not yet been achieved. This suggests that high fidelity is not essential in all simulation to ensure learning that transfers to clinical application.

Aviation Simulation Use

Yet the aviation industry has progressed high fidelity simulation to the point that hours spent in simulation are deemed equivalent, and in some cases superior, to those acquired in practice (Rosen, 2008). This is because, in part, a rigorous approach to analysing pilot error and applying these findings to pilot training underpins the simulations (Musson and Helmreich, 2004), as does the simulation of low frequency, high impact events such as catastrophic failures. The parallel to the benefits simulation technologies offer health care preparation is clear. Looking to successful approaches in other industries helps to determine what training is required and what type of simulation delivers the best outcomes (Helmreich, 2000). An example is the Boeing 737, which is rated for simulator (zero flight time) training alone, meaning that a suitably rated pilot may qualify for this aircraft with no training other than simulation.

Non-technical Skill Assessment

To assess an individual's non-technical skill performance (pertinent to much team-based health care, and also to mental health nursing), the aviation, nuclear and chemical industries developed observational rating systems dubbed “behavioural marker systems” (Flin et al., 2003). Behavioural markers are observable, non-technical behaviours that contribute to superior or substandard performance in the work environment. They are derived from empirical data and are usually developed into structured skill taxonomies combined with a rating scale. This allows the skills that are demonstrated through behaviour to be assessed by trained, calibrated raters (Fletcher et al., 2003, p. 581). As human behaviours had been found to be highly significant in the analysis of error in these industries, training was directed at them, and this has been shown to improve safety in these industries' workplaces (Helmreich et al., 1999). A systematic review of literature exploring surgical behaviours identified four markers of non-technical skills that complemented surgeons' technical skills: communication, teamwork, leadership, and decision making (Yule et al., 2006). Unfortunately, the conclusion from the experience in medicine is that the development of behavioural marker systems in training and assessment in the core curriculum of health care has been minimal (Patey, 2008).

Empathetic communication could be conceptualised as a core skill to be mastered by all health workers (Steele and Hulsman, 2008) and is particularly pertinent for work within the field of mental health nursing. The capacity to non-judgementally understand the other's point of view enables mental health nurses to commence the formation of a therapeutic alliance, which supports people with mental health issues in establishing their recovery goals (Barker and Buchanan-Barker, 2011). HPS has the capacity to embrace empathetic communication skill development via its voice function, or via “hybrid” forms of simulation where a standardised patient may be used in conjunction with part-trainers or HPS to enhance fidelity. The standardised patient may take the role of patient or of relative in such hybrid simulations. Irrespective of the method chosen, caution has been advised when attempts are made to rehearse or assess empathetic communication. Wear and Varley (2008) argue that it is fundamentally unachievable on two grounds: firstly, the ‘patient’ (standardised, or the voice function of HPS) does not authentically experience the concerns or emotions that are expressed; secondly, the learner does not have responsibility for collaboration in the provision of emotional care (Wear and Varley, 2008). This argument, however, may be spurious.
The opportunity for learning by rehearsal of reflective listening skills represents a valuable element of broader communication mastery. Perhaps the essential observation being made by these authors is that there is a fundamental difference between performance in simulation and the outcomes required in clinical environments (Wear and Varley, 2008). Rather than this disparity between simulation and clinical practice preventing a desired learning experience, it could be regarded as a scaffolded learning experience.

Wear and Varley (2008) argue that integration of empathic communication into assessment puts students in the unenviable position of being required to mimic socially desirable behaviours, and that this is not the same as the emotional experience of care or the skills required to authentically communicate or connect with a distressed person. Further, Wear and Varley (2008, p. 154) warn that there is a risk teaching will also revert to this standard because of the strong links between teaching and assessment. However, this approach could claim a long pedigree through other traditionally employed simulation approaches (role plays, for example). Mimicking desired communication may not be judged unsuccessful if it results in the client experiencing care that is respectful and considerate. This argument therefore appears faulty, reminiscent of a jeopardy argument of resistance to change (Hirschman, 1991).

Summary

To summarise, the authors have explored the contention that, in mental health nursing applications, students' performance measures in HPS will not translate to the true measure of learning: however captured and measured, HPS will not achieve behaviour change in nursing practice. The argument is summarised as follows. Firstly, HPS evaluation in nursing is recognised to be limited to measures of second-level gains (knowledge and skills) according to Kirkpatrick's schema (Adamson et al., 2012; Kirkpatrick, 1994). Reviews of HPS learning in nursing have not yet established means of evaluating the translation of learning into clinical practice or improved consumer outcomes. Secondly, this evaluation could benefit from incorporating knowledge of errors in nursing practice, as has been demonstrated in other high-risk occupations, particularly aviation (Patey, 2008). In these high-risk industries, excellent outcomes have been achieved when simulation is designed to address causes of error, as in behavioural marker systems. Thirdly, the strengths of HPS for education about complex clinical situations have been incompletely established (Norman et al., 2012). Published literature favours descriptive accounts over examination of the relative strengths and limitations of HPS for a range of learning objectives. Fourthly, as mental health is ordinarily a complex presentation and area of practice, caution should be taken in applying HPS to these challenges. Review of the relationship between simulation fidelity and transfer of learning suggests that HPS may not be the best choice for all learning types, and that knowledge regarding the selection of simulation fidelity for particular learning outcomes is not yet certain. Development of empathetic communication ability using simulation may be an example where a complex learning outcome is not assured by use of high fidelity HPS over other simulation modalities (Norman et al., 2012; Wear and Varley, 2008). Finally, suggestions to increase the authenticity of HPS through incorporation of co-morbid problems, particularly mental health issues, require caution, as this might risk exceeding learners' ability to process learning tasks (Van Merrienboer and Sweller, 2010). Cognitive load theory observes the limited capacity of individuals to process information through short-term memory; sound instructional design principles therefore incorporate simple-to-complex ordering of learning tasks and working from low- to high-fidelity environments (Van Merrienboer and Sweller, 2010, p. 89). Adding co-morbidity such as mental health issues in the interest of fidelity may run counter to these findings.
Conclusions

The findings of the seminal systematic review of published literature on high fidelity medical simulation by Issenberg et al. (2005, p. 10) have continued relevance:

Outcomes research on the use and effectiveness of simulation technology in (medical) education is scattered, inconsistent and varies widely in methodological rigor and substantive focus [but that] …high-fidelity (medical) simulations are educationally effective and simulation-based education complements (medical) education in patient care settings.

Discussions of the effectiveness of HPS often overlook the truism that learning will occur despite the particular teaching approach taken. Students are orientated and expecting a learning experience; the HPS has been arranged to achieve learning. We should not be surprised that learning does occur. There is also significant pressure on students to claim successful engagement in any learning event. They are enculturated in roles where their academic success and ultimate employment prospects are determined by demonstrated successful learning. It behoves students to utilise any learning opportunity; however, it may be difficult to express dissent about those, such as HPS, that are obviously and enthusiastically valued by their faculty. Acknowledgement also needs to be given to the political imperative to find HPS to be an effective pedagogy. Significant resources of many types have been invested and would be difficult to reverse in a health education environment where a return to traditional clinical placement models for learning may no longer be viable or desirable.

This paper started with a question: Is HPS limited to being a “simulation of learning” in applications to mental health nursing? Although explicit research that traces learning outcomes from mannequin to clinical practice, and hence to consumer outcomes, is absent in mental health, the strength of evidence in the published literature suggests the need for caution. Piecing together research from psychology addressing cognitive load theory, and considering the capacity for learners to imitate desired behaviour without experiencing deep learning, the possibility is real that simulation of learning is the outcome of HPS applications to mental health nursing.

References

Adamson, K.A., Kardong-Edgren, S.E., Willhaus, J., 2012. An updated review of published simulation evaluation instruments. Clin. Simul. Nurs. e1–e13.
Akhtar-Danesh, N., Baxter, P., Valaitis, R.K., Stanyon, W., Sproul, S., 2009. Nurse faculty perceptions of simulation use in nursing education. West. J. Nurs. Res. 31, 312–329.
Barker, P., Buchanan-Barker, P., 2011. Myth of mental health nursing and the challenge of recovery. Int. J. Ment. Health Nurs. 20 (5), 337–344.
Blazeck, A., Zewe, G., 2013. Simulating simulation: promoting perfect practice with learning bundle-supported videos in an applied, learner-driven curriculum design. Clin. Simul. Nurs. 9, e21–e24.
Boulet, J., Jeffries, P.R., Hatala, R., Korndorffer, J., Feinstein, D., Roche, J.P., 2011. Research regarding methods of assessing learning outcomes. Simul. Healthc. 6, S48–S51.
Bray, B., Schwartz, C.R., Weeks, D.L., Kardong-Edgren, S.E., 2009. Human patient simulation technology: perceptions from a multidisciplinary sample of health care educators. Clin. Simul. Nurs. 5, e145–e150.
Cant, R.P., Cooper, S.J., 2010. Simulation-based learning in nurse education: systematic review. J. Adv. Nurs. 66, 3–15.
Cato, M.L., Lasater, K., Peeples, A.I., 2009. Nursing students' self-assessment of their simulation experiences. Nurs. Educ. Perspect. 30, 105–108.
Davis, D., Mazmanian, P., Fordis, M., Harrison, R.V., Thorpe, K., Perrier, L., 2006. Accuracy of physician self-assessment compared with observed measures of competence. J. Am. Med. Assoc. 296, 1094–1102.
Eva, K.W., Regehr, G., 2011. Exploring the divergence between self-assessment and self-monitoring. Adv. Health Sci. Educ. Theory Pract. 16, 311–329.
Feingold, C.E., Calaluce, M., Kallen, M.A., 2004. Computerized patient model and simulated clinical experiences: evaluation with baccalaureate nursing students. J. Nurs. Educ. 43, 156–163.
Fletcher, G., Flin, R., McGeorge, P., Glavin, R., Maran, N., Patey, R., 2003. Anaesthetists' Non-Technical Skills (ANTS): evaluation of a behavioural marker system. Br. J. Anaesth. 90, 580–588.
Flin, R., Goeters, K.-M., Horman, H.-J., Amalberti, R., Valot, C., Nijhuis, H., 2003. Development of the NOTECHS (non-technical skills) system for assessing pilots' CRM skills. Hum. Factors Aerosp. Saf. 3, 95–117.
Gaba, D.M., 2004. The future vision of simulation in health care. Qual. Saf. Health Care 13 (Suppl. 1), i2–i10.
Happell, B., 2008a. The importance of clinical experience for mental health nursing—Part 1: undergraduate nursing students' attitudes, preparedness and satisfaction. Int. J. Ment. Health Nurs. 17, 326–332.
Happell, B., 2008b. In search of a positive clinical experience. Ment. Health Pract. 11, 26–31.
Happell, B., 2009. Clinical experience as the panacea! Acknowledging the importance of theory. Contemp. Nurse 32, 166–168.
Happell, B., Robins, A., Gough, K., 2008. Developing more positive attitudes towards mental health nursing in undergraduate students: Part 2—the impact of theory and clinical experience. J. Psychiatr. Ment. Health Nurs. 15, 527–536.
Helmreich, R.L., 2000. On error management: lessons from aviation. Br. Med. J. 320, 781–785.
Helmreich, R.L., Merritt, A.C., Wilhelm, J.A., 1999. The evolution of crew resource management training in commercial aviation. Int. J. Aviat. Psychol. 9, 19–32.
Herrington, J., Oliver, R., Reeves, T.C., 2003. Patterns of engagement in authentic online learning environments. Aust. J. Educ. Technol. 19, 59–71.
Hirschman, A., 1991. The Rhetoric of Reaction: Perversity, Futility, Jeopardy. The Belknap Press of Harvard University Press, Cambridge, MA.
Holroyd, E., Yue-kuen, C., Sau-wai, C., Fung-shan, L., Wai-wan, W., 1998. A Chinese cultural perspective of nursing care behaviours in an acute setting. J. Adv. Nurs. 28, 1289–1294.
Holtschneider, M.E., 2007. Better communication, better care through high-fidelity simulation. Nurs. Manag. 38, 55–57.
Howard, V.M., Englert, N., Kameg, K., Perozzi, K., 2011. Integration of simulation across the undergraduate curriculum: student and faculty perspectives. Clin. Simul. Nurs. 7, e1–e10.
Issenberg, S.B., McGaghie, W.C., Petrusa, E.R., Gordon, D.L., Scalese, R.J., 2005. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME (Best Evidence Medical Education Collaboration) systematic review. Med. Teach. 27, 10–28.
Jeffries, P.R., 2005. A framework for designing, implementing, and evaluating simulations used as teaching strategies in nursing. Nurs. Educ. Perspect. 26, 96–103.
Jeffries, P.R., 2006. Designing simulations for nursing education. Annu. Rev. Nurs. Educ. 4, 161–177.
Kaplan, B., Abraham, C., Gary, R., 2012. Effects of participation vs. observation of a simulation experience on testing outcomes: implications for logistical planning for a school of nursing. Int. J. Nurs. Educ. Scholarsh. 9.
Kardong-Edgren, S.E., Lungstrom, N., Bendel, R., 2009. VitalSim versus SimMan: a comparison of BSN student test scores, knowledge retention, and satisfaction. Clin. Simul. Nurs. 5, e105–e111.
Kardong-Edgren, S.E., Adamson, K.A., Fitzgerald, C., 2010. A review of currently published evaluation instruments for human patient simulation. Clin. Simul. Nurs. 6, e25–e35.
Kirkpatrick, D.L., 1994. Evaluating Training Programs: The Four Levels. Berrett-Koehler, San Francisco, CA.
Kneebone, R.L., Kidd, J., Nestel, D., Barnet, A., Lo, B., King, R., Yang, G.Z., Brown, R., 2005. Blurring the boundaries: scenario-based simulation in a clinical setting. Med. Educ. 39, 580–587.
Lapkin, S., Levett-Jones, T., Bellchambers, H., Fernandez, R., 2010. Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: a systematic review. Clin. Simul. Nurs. 6, e207–e222.
Lasater, K., 2007. Clinical judgment development: using simulation to create an assessment rubric. J. Nurs. Educ. 46, 496–503.
Lave, J., Wenger, E., 1991. Situated Learning: Legitimate Peripheral Participation. Cambridge University Press, Cambridge.
McGaghie, W.C., Issenberg, S.B., Cohen, E.R., Barsuk, J.H., Wayne, D.B., 2011. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad. Med. 86, 706–711.
McGarry, D., Cashin, A., Fowler, C., 2011. Marketing mental health nursing on Australian schools of nursing websites – is mental health nursing positioned ‘between the flags’? In: Australian College of Mental Health Nurses (Ed.), Mental Health Nursing: Swimming Between the Flags? Int. J. Ment. Health Nurs., Gold Coast, Queensland, Australia, p. 12.
McGarry, D., Cashin, A., Fowler, C., 2012. Child and adolescent psychiatric nursing and the ‘plastic man’: reflections on the implementation of change drawing insights from Lewin's theory of planned change. Contemp. Nurse 41, 263–270.
McKenna, L., French, J., Newton, J., Cross, W., 2007. Prepare nurses for the future: identify use of simulation, and more appropriate and timely clinical placement to increase clinical competence and undergraduate positions; final report of key activities. Department of Human Services Nurse Policy Branch, Victoria, Australia.
Mental Health Nurse Education Taskforce, 2008. Final Report: Mental Health in Pre-Registration Nursing Courses.
Mental Health Workforce Advisory Committee, 2010. Mental Health in Pre-Registration Nursing: Progress Report and Full Survey Results. Mental Health Workforce Advisory Committee, Melbourne.
Murray, C., Grant, M.J., Howarth, M.L., Leigh, J., 2008. The use of simulation as a teaching and learning approach to support practice learning. Nurse Educ. Pract. 8, 5–8.
Musson, D.M., Helmreich, R.L., 2004. Team training and resource management in health care: current issues and future directions. Harv. Health Policy Rev. 5, 25–35.
National Council of State Boards of Nursing Inc., 2009. Report of findings from the effect of high-fidelity simulation on nursing students' knowledge and performance: a pilot study. Research Brief. NCSBN®, Chicago, pp. 1–35.
National Health Workforce Taskforce, 2008. Data, capacity and clinical placements across Australia: a discussion paper. National Health Workforce Taskforce, Melbourne, pp. 1–26.
Nehring, W.M., 2008. U.S. boards of nursing and the use of high-fidelity patient simulators in nursing education. J. Prof. Nurs. 24, 109–117.
Norman, G., Dore, K., Grierson, L., 2012. The minimal relationship between simulation fidelity and transfer of learning. Med. Educ. 46, 636–647.
Nursing and Midwifery Council, 2007. Supporting direct care through simulated practice learning in the pre-registration nursing programme. NMC Circ. 36/2007.
Nursing and Midwifery Council (UK), 2010. Consultation on proposals arising from a review of fitness for practice at the point of registration. Nursing and Midwifery Council (UK), pp. 1–24.
Patey, R.E., 2008. Identifying and assessing non-technical skills. Clin. Teach. 5, 40–44.
Prion, S., 2008. A practical framework for evaluating the impact of clinical simulation experiences in prelicensure nursing education. Clin. Simul. Nurs. 4, e69–e78.
Radhakrishnan, K., Roche, J., Cunningham, H., 2007. Measuring clinical practice parameters with human patient simulation: a pilot study. Int. J. Nurs. Educ. Scholarsh. 4, 1–12.
Reilly, A., Spratt, C., 2007. The perceptions of undergraduate student nurses of high-fidelity simulation-based learning: a case report from the University of Tasmania. Nurse Educ. Today 27, 542–550.
Rosen, K.R., 2008. The history of medical simulation. J. Crit. Care 23, 157–166.
Saewert, K.J., Rockstraw, L., 2012. Development of evaluation measures for human simulation: the checklist. In: Wilson, L., Rockstraw, L. (Eds.), Human Simulation for Nursing and Health Professionals. Springer Publishing Company, New York, pp. 28–36.
Smith, S.J., Roehrs, C.J., 2009. High-fidelity simulation: factors correlated with nursing student satisfaction and self-confidence. Nurs. Educ. Perspect. 30, 74–78.
Steele, D.J., Hulsman, R.L., 2008. Empathy, authenticity, assessment and simulation: a conundrum in search of a solution. Patient Educ. Couns. 71, 143–144.
Sweller, J., Ayres, P., Kalyuga, S., 2011. Cognitive Load Theory. Springer.
Tanner, C.A., 2011. The critical state of measurement in nursing education research. J. Nurs. Educ. 50, 491–493.
Todd, M., Manz, J.A., Hawkins, K.S., Parsons, M.E., Hercinger, M., 2008. The development of a quantitative evaluation tool for simulations in nursing education. Int. J. Nurs. Educ. Scholarsh. 5 (1).
Van Merrienboer, J.J.G., Sweller, J., 2010. Cognitive load theory in health professional education: design principles and strategies. Med. Educ. 44, 85–93.
Wear, D., Varley, J.D., 2008. Rituals of verification: the role of simulation in developing and evaluating empathic communication. Patient Educ. Couns. 71, 153–156.
Yuan, H., Williams, B., Fang, J., Ye, Q., 2012. A systematic review of selected evidence on improving knowledge and skills through high-fidelity simulation. Nurse Educ. Today 32, 294–298.
Yule, S., Flin, R., Paterson-Brown, S., Maran, N., 2006. Non-technical skills for surgeons in the operating room: a review of the literature. Surgery 139, 140–149.
