Surg Endosc DOI 10.1007/s00464-013-3265-0


Randomized controlled trial on the effect of coaching in simulated laparoscopic training

Simon J. Cole · Hugh Mackenzie · Joon Ha · George B. Hanna · Danilo Miskovic



Received: 13 June 2013 / Accepted: 4 September 2013
© Springer Science+Business Media New York 2013

S. J. Cole · H. Mackenzie (corresponding author) · J. Ha · G. B. Hanna
Department of Surgery and Cancer, Imperial College London, London, UK
e-mail: [email protected]

D. Miskovic
John Goligher Colorectal Unit, The Leeds Teaching Hospitals, Leeds LS9 7TF, UK
e-mail: [email protected]

Abstract

Background: The effect of coaching on surgical quality and understanding in simulated training remains unknown. The aim of this study was to compare the effects of structured coaching and autodidactic training in simulated laparoscopic surgery.

Methods: Seventeen surgically naive medical students were randomized into two groups: eight were placed into an intervention group and received structured coaching, and nine were placed into a control group and received no coaching. Each performed 10 laparoscopic cholecystectomies on a virtual reality simulator. The surgical quality of the first, fifth, and 10th operations was evaluated by two independent blinded assessors using the Competency Assessment Tool (CAT) for cholecystectomy. Understanding of operative strategy was tested before the first, fifth, and 10th operation. Performance metrics (path length, total number of movements, operating time, and error frequency) were evaluated. The groups were compared by the Mann–Whitney U test. Proficiency gain curves were plotted using curve fit and CUSUM models; change point analysis was performed by multiple Wilcoxon signed rank tests.

Results: The intervention group scored significantly higher on the CAT assessment of procedures 1, 5, and 10, with increasing disparity. They also performed better in the knowledge test at procedures 5 and 10, again with an

increasing difference. The learning curve for error frequency of the intervention group reached competency after operation 7, whereas the control group did not plateau by procedure 10. The learning curves of both groups for path length and number of movements were almost identical; the mean operation time was shorter for the control group.

Conclusions: Clinically relevant markers of proficiency, including error reduction, understanding of surgical strategy, and surgical quality, are significantly improved with structured coaching. Path length and number of movements, which represent merely manual skills, are developed with task repetition rather than influenced by coaching. Structured coaching may represent a key component in the acquisition of procedural skills.

Keywords: Laparoscopic surgery · Education · Surgical training · Simulation

Laparoscopic surgery requires high levels of technical skill and has an extended learning curve compared to open surgery. There is increasing evidence that simulation is an effective training tool and that the skills learned are transferable to the operating room [1–4]. The quote "Tell me, and I will forget. Show me, and I may remember. Involve me, and I will understand" is attributed to Confucius in 450 BC. This concept reappears in contemporary educational theory, which suggests that retention rates for passive learning techniques (e.g., lectures, textbook reading) are inferior to those of methods that engage the learner as an active participant in the learning process [5]. Surgical skill acquisition is by definition an active learning experience; however, the level of trainee engagement is highly variable. Surgical coaching can be defined as the process by which more experienced surgeons encourage juniors to actively develop skills and subsequently improve performance [6, 7]. Trainees can be engaged by identifying learning aims before, problem-solving during, and reflecting after a procedure, and hence take on an active role in the acquisition of their surgical skills. There is evidence that supervision in the operating room improves the quality of surgery, reducing adverse outcomes and shortening the learning curve [8]. Conversely, a previous randomized controlled trial identified that proctoring did not confer benefit over independent training in simulation [9]. However, the end points of that study relied on operative metrics that evaluate processes rather than the quality of the surgery or the understanding of surgical strategy. It therefore remains debated whether the presence of an expert mentor during simulated training is beneficial [9, 10]. The theoretical benefit of supervised training is derived from Vygotsky's zone of proximal development, which describes the difference between independent and adult-guided task-solving performance in child development [11]. Adapted to surgical education, it could be defined as the difference between self-taught (autodidactic) training and structured coaching by a more experienced surgeon. Hence, exploiting the zone of proximal development in simulated training should improve the quality of surgery and the understanding of operative strategy. The aim of this experiment was to compare the effect of coaching and of an autodidactic approach on the quality of surgery and understanding of operative strategy in a simulated surgical procedure (laparoscopic cholecystectomy).

Methods

Subjects

On the basis of previous experience with the assessment tool, the estimated sample size was eight subjects per group [α = 0.05, β = 0.2, expected assessment scores 3 versus 2 (standard deviation, 0.7)] [12]. Seventeen surgically naive undergraduate medical students with no experience on surgical simulators were recruited and randomized to the intervention or control group with sealed envelopes. Demographic information including age, gender, undergraduate year, and video game experience was collected from all participants. Handedness was assessed using the Edinburgh Handedness Inventory, and a Likert scale was used to assess how confident the participants felt about performing the simulated cholecystectomy [13, 14]. After the study protocol had been explained, written informed consent was obtained from each individual.

Procedure

In order to provide a standardized procedure, all tasks were performed on a virtual surgical simulator. The Lap Mentor (Simbionix Corp., Cleveland, OH, USA) is a validated system using high-fidelity graphics and two robotic arms providing haptic feedback. After 30 min of familiarization, the subjects were tested on three abstract tasks for baseline technical evaluation [15]. After this, all subjects completed 10 simulated uncomplicated laparoscopic cholecystectomies. Each participant was allowed to carry out a maximum of three procedures per day, provided each was separated by a 1-h break. All participants had to complete the procedures within 2 weeks of starting. Before their first procedure, participants were shown a video describing the operative technique for a simulated laparoscopic cholecystectomy. Two similar videos, with decreasing levels of instruction, were shown before the second and third attempts (Fig. 1).
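The a priori sample-size estimate (α = 0.05, β = 0.2, expected scores 3 versus 2, SD 0.7) can be reproduced with the standard two-sample normal-approximation formula. The paper does not state which formula or software was used, so the sketch below is illustrative only; it happens to reproduce the reported eight subjects per group.

```python
from math import ceil
from statistics import NormalDist


def n_per_group(mean_a, mean_b, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means,
    normal approximation: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)          # two-sided critical value
    z_beta = z.inv_cdf(power)                   # power term
    d = abs(mean_a - mean_b) / sd               # standardized effect size
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)


print(n_per_group(3, 2, 0.7))  # → 8
```

With the stated parameters the standardized effect size is roughly 1.43, a very large effect, which is why so few subjects per group suffice.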


Fig. 1 Study protocol. Performance quality was assessed by 2 blinded assessors at operations (op) 1, 5, and 10 using a competency assessment tool (CAT). The 10th operation was performed by all subjects independently (without instruction)


Intervention

The control group completed the operations without any additional training. The intervention group received structured coaching during the first nine cases (Fig. 1). The same instructor (SJC) coached all subjects of the intervention group after being trained in simulated laparoscopic cholecystectomy and in the coaching technique. The coaching technique included three steps and was adopted from the established National Training Programme for Laparoscopic Surgery Train the Trainer (Lapco TT) course. Preoperatively ("set"), instructor and trainee identified and agreed on the learning aims for the upcoming training session. During the procedure ("dialogue"), the instructor followed an agreed-upon protocol to coach the trainees when they faced difficulties. This included halting the activity to ask the trainee about the problem, discussing possible solutions, applying the best option, and checking whether this solved the problem. Postoperatively ("closure"), structured feedback was provided and the learning aims for the next session were defined (Table 1).

End points and outcome parameters

The primary end points were the surgical quality of the 10th procedure, which both groups performed without any instruction, and the understanding of operative strategy. Secondary end points included the surgical quality of procedures 1 and 5 and the performance metrics.

Surgical quality

Videos of operations 1, 5, and 10 were analyzed by two independent blinded assessors using the competency assessment tool (CAT) (Fig. 2). This previously validated, task-specific tool evaluates the quality of surgical performance in four skill areas across three tasks of the operation, on a scale ranging from 1 (novice) to 4 (proficient), with 3 indicating competent performance [12].

Table 1 Three-stage structure used for training the intervention group

Set: Align agendas; Set ground rules; Review learning goals
Dialogue: Questioning; Direct guidance
Closure: Recap operation; Trainee self reflection; Feedback from trainer; Learning goals; Summary

Understanding of operative strategy

Baseline understanding of the operation was assessed using a 25-min, 22-question test of relevant anatomy, procedural steps, instruments, and common errors. This test was repeated before operations 5 and 10 to assess the improvement of the trainees' understanding of the laparoscopic cholecystectomy.

Performance metrics

Operating time, path length, number of movements, and number of errors were automatically recorded by the simulation system for all operations.

Analysis

Statistical analysis was performed using SPSS software, version 19.0 (IBM, Armonk, NY, USA). The two groups were compared by the Mann–Whitney U test, and interrater reliability was measured using the intraclass correlation coefficient. A p value of <0.05 was considered significant. For continuous data, proficiency gain curves were generated by curve fitting raw data using the power law [f(x) = ax^k]. For categorical data (number of errors), cumulative sum (CUSUM) curves were applied. These were constructed by plotting the cumulative sum of errors over consecutive operations using the CUSUM equation Si = Si−1 + Xi; S0 = 0, where Si is the cumulative sum and Xi the number of errors at procedure i. Change point analysis to determine learning curve plateaus was performed by multiple Wilcoxon signed rank tests. To ensure that any temporary plateaus were excluded, a plateau was only declared if statistical significance was lost for three consecutive operations.

Results

Demographics and baseline assessment

Subject characteristics and baseline technical parameters are summarized in Table 2. There was a small but significant age difference between the groups (control, 21.3 years; intervention, 22.5 years; p = 0.048). All other demographics (handedness, video game use, baseline knowledge, and confidence) did not differ between the groups, and there was no intergroup difference in any of the three baseline tasks.


Fig. 2 Competency assessment tool (CAT) for cholecystectomy


Table 2 A comparison of demographic information and baseline technical metrics for the control and intervention groups

Characteristic | Control | Intervention | p value (sig. if p < 0.05)

Demographic data
No. of subjects | 9 | 8 |
Age (years) | 21.3 (1.6) | 22.5 (0.5) | 0.043
No. male | 8 | 7 | 0.931
Handedness^a | 51.3 (61.7) | 30.8 (61.9) | 0.266
Computer games (h/week) | 4.8 (6.5) | 2.8 (4.0) | 0.375
Confidence | 2.0 (1.3) | 1.8 (0.9) | 0.836

Baseline metrics
Task 3—touching highlighted objects
Total path length (cm) | 226.7 (65.1) | 231.6 (91.1) | 0.791
Total time (s) | 62.7 (14.3) | 63.3 (20.3) | 0.710
Total no. of movements | 62.3 (26.0) | 57.7 (25.8) | 0.751
Task 5—grasping and clipping
Total path length (cm) | 609.9 (108.4) | 615.9 (208.7) | 0.564
Total time (s) | 138.0 (20.6) | 137.3 (31.1) | 0.885
Total no. of movements | 200.9 (35.4) | 190.9 (48.2) | 0.501
Task 6—two-handed maneuvers
Total path length (cm) | 552.0 (116.3) | 570.4 (134.7) | 0.700
Total time (s) | 160.3 (42.1) | 160.1 (34.9) | 0.630
Total no. of movements | 205.8 (66.4) | 214.4 (53.4) | 0.596

^a Handedness score: right-handed >40; ambidextrous −40 to 40; left-handed <−40

Surgical quality (CAT)

The interrater reliability for CAT scores between the two raters was 0.787 (intraclass correlation coefficient). There was a significant difference in mean scores for operation 10 (control, 2.04; intervention, 2.65; p < 0.001), and the intervention group scored significantly higher in the majority of areas assessed on the CAT form (Table 3). A significant difference was also present at operations 1 and 5; however, with increasing experience, the disparity between the two groups increased (Fig. 3).
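As the Analysis section notes, group comparisons were made with the Mann–Whitney U test. A minimal sketch of such a comparison is shown below; the scores are hypothetical CAT-style values for illustration only, not the study data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical CAT-style scores (illustrative only, not the study data).
control = [1.8, 1.9, 2.0, 2.05, 2.1, 2.15, 2.2, 2.25, 2.3]   # n = 9
intervention = [2.4, 2.5, 2.55, 2.6, 2.65, 2.7, 2.8, 2.9]    # n = 8

# Two-sided Mann-Whitney U test on the two independent samples.
res = mannwhitneyu(intervention, control, alternative="two-sided")

# Every intervention score exceeds every control score here,
# so U takes its maximum value, n1 * n2 = 8 * 9 = 72.
print(res.statistic, res.pvalue)
```

The U statistic and p value are what SPSS would report for the same rank-based comparison; with completely separated groups, as in this toy example, p is far below 0.05.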

Fig. 3 Mean CAT scores from the 2 blinded assessors for the control and intervention groups at procedures 1, 5, and 10

Understanding of operative strategy

In the prestudy understanding test, there was no significant difference between the mean scores of the groups (p = 0.33). However, the intervention group scored significantly higher than the control group in the tests before the fifth (p = 0.03) and 10th (p = 0.03) procedures (Fig. 4).

Performance metrics

The proficiency gain curves for number of movements and path length were almost identical for both groups (Fig. 5). The mean operating time was significantly shorter in the control group across all operations (control, 808 s; intervention, 1030 s; p < 0.001). However, the operating time difference between the two groups narrowed as the study progressed (Fig. 5). The control group made significantly more errors per case than the intervention group (control, 2.64; intervention, 1.16; p = 0.001). The proficiency gain curve for number of errors in the intervention group plateaus at procedure 7, whereas the curve for the control group does not reach a plateau before procedure 10 (Fig. 6).

Discussion

To our knowledge, this is the first controlled experiment reporting the benefit of coaching on simulated surgical quality and understanding of operative strategy. The results

Table 3 Mean CAT scores for procedure 10 of the two blinded assessors for the different areas of the CAT form, intervention versus control group

Task area | Instrument use | Tissue handling | Errors | End product quality
Exposure | 2.80 vs. 2.06 (p = 0.004) | 2.80 vs. 2.18 (p = 0.003) | 2.70 vs. 1.82 (p < 0.001) | 2.40 vs. 1.76 (p = 0.013)
Calot triangle | 2.60 vs. 2.18 (p = 0.093) | 2.70 vs. 2.24 (p = 0.046) | 3.05 vs. 2.35 (p = 0.003) | 2.85 vs. 2.47 (p = 0.132)
Resection | 2.95 vs. 1.94 (p < 0.001) | 2.70 vs. 2.00 (p = 0.001) | 2.60 vs. 1.81 (p = 0.003) | 2.85 vs. 2.19 (p < 0.001)

The intervention group consistently scored higher than the control group


Fig. 4 Mean test scores of the control and intervention groups in the 3 tests of understanding of operative strategy: prestudy, before the 5th procedure, and before the 10th procedure

suggest that learning rates for generic skills such as economy of movement were similar for self-directed and coached training. However, the quality of performance and the rate of error reduction were significantly better in subjects who received structured coaching. In fact, this difference increased as the study progressed, and the greatest disparity was observed at the 10th operation, which was performed independently by both groups. This indicates that the intervention group was able to retain its superior performance even without the presence of an instructor. A similar trend was observed in the understanding of operative strategy: the intervention group scored consistently higher in the test once the intervention started. The questions were designed to assess a combination of anatomy, instrument knowledge, and intraoperative problem solving using video stills. For proficiency metrics such as path length and number of movements, structured coaching did not provide an advantage over self-taught learning by trial and error. These findings are consistent with a previous study that assessed similar metrics and concluded that proctored training does not offer any advantage beyond independent training [9]. Path length and number of movements have been validated for construct validity on the VR simulator and are used for assessment in curricula for laparoscopic cholecystectomy [15, 16]. It has also been suggested that these metrics are superior to global rating scales in assessing surgical skill [10]. However, the current study suggests that these metrics are only surrogates of increasing economy of movement and provide no indication of the safety or quality of the surgery. It would seem ill-advised to make inferences about surgical ability on the basis of simulator metrics alone. This is also supported by clinical studies


Fig. 5 Curve fit learning curves for the control and intervention groups for (A) total path length, (B) total number of movements and (C) total time taken
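The power-law model f(x) = ax^k used for the proficiency gain curves can be fitted by ordinary least squares in log-log space. The paper used SPSS curve fitting; the sketch below is an independent reconstruction on synthetic data (the study's raw per-operation values are not available here).

```python
from math import log, exp


def fit_power_law(xs, ys):
    """Least-squares fit of f(x) = a * x**k, done as a linear
    regression on (log x, log y): log y = log a + k * log x."""
    lx = [log(x) for x in xs]
    ly = [log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    k = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))          # slope = exponent k
    a = exp(my - k * mx)                            # intercept gives a
    return a, k


# Synthetic path-length data lying exactly on 900 * x**-0.3,
# mimicking a decreasing learning curve over 10 operations.
ops = list(range(1, 11))
path_length = [900 * x ** -0.3 for x in ops]

a, k = fit_power_law(ops, path_length)
print(round(a, 1), round(k, 2))  # → 900.0 -0.3
```

Because the synthetic data lie exactly on the model, the fit recovers the generating parameters; with real, noisy metrics the same regression yields the best-fitting a and k.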

Fig. 6 CUSUM learning curve for the intervention and control group for the frequency of errors
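The CUSUM construction from the Analysis section (Si = Si−1 + Xi; S0 = 0) is simple to sketch. Note that the paper's change-point analysis used multiple Wilcoxon signed rank tests across subjects; the per-sequence plateau rule below is only an illustrative stand-in for that idea, and the error counts are hypothetical.

```python
def cusum(errors):
    """Cumulative sum of per-operation error counts: S_i = S_{i-1} + X_i, S_0 = 0."""
    s, curve = 0, []
    for x in errors:
        s += x
        curve.append(s)
    return curve


def plateau(errors, tol=0, run=3):
    """First operation from which the per-case error count stays <= tol
    for `run` consecutive operations (echoing the paper's rule that a
    plateau required three consecutive operations). Returns None if the
    sequence never settles -- a simplified stand-in, not the Wilcoxon
    change-point analysis used in the study."""
    for i in range(len(errors) - run + 1):
        if all(e <= tol for e in errors[i:i + run]):
            return i + 1  # 1-based operation index
    return None


# Hypothetical per-operation error counts (not the study data).
intervention = [3, 2, 2, 1, 1, 1, 0, 0, 0, 0]
control = [4, 3, 3, 2, 3, 2, 2, 1, 2, 1]

print(cusum(intervention))                       # cumulative error curve
print(plateau(intervention), plateau(control))   # → 7 None
```

A flattening CUSUM curve (consecutive equal values) is exactly what a plateau looks like in Fig. 6: in this toy example the intervention sequence settles at operation 7 while the control sequence keeps accumulating errors.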

using real patient data, which suggest that a self-taught approach results in higher complication and conversion rates [17]. These negative outcomes were not observed when adequate supervision was present in the operating theater [8]. The contrast between the error rate and the manual skill metrics suggests that error rate is the better discriminator of surgical ability. This is in concordance with another study, which advocated error as the most valuable simulator metric [18]. Time is frequently used for assessment purposes, and fast completion is a feature of expert performance [19–21]. Nevertheless, time does not indicate the quality of the operation, and in this study the control group members, who were faster, performed with more errors and with lower-quality surgery. The intervention group members were undoubtedly slower as a result of interruptions from the instructor, which reduced during the study. However, both groups performed the 10th procedure independently, and the intervention group members were still slower yet performed a significantly better operation, indicating that the additional time was required for a careful and accurate technique. This study has a few limitations. First, although it was desirable for the study to have a homogeneous and naive group of subjects, the participants were medical students and not surgeons. It is possible that surgeons are a self-selected group with a learning style preference different to

the group used in this study. Second, retention was tested in only one procedure (operation 10), and no long-term retention rates were evaluated. Third, although standardized, the VR simulation has inherent shortcomings in its ability to recreate the real operation. Although it provides good training for manual skills and the overall steps, it does not teach all the intricacies necessary for a successful real-life laparoscopic cholecystectomy. Therefore, it remains unknown whether the improvements gained by coaching here transfer to the operating room. Utilization of the zone of proximal development, with the presence of a coach, improves quality and understanding in simulated surgical training. These improvements suggest that increasing trainee engagement through structured coaching is an important method of increasing training efficiency. Training efficiency is increasingly important as exposure to the operating room is restricted by working time regulations and other service commitments for contemporary surgical trainees. Further work should assess whether these findings are consistent in more senior trainees and whether the improvements observed transfer to the real operating room environment.

Disclosures S. Cole, H. Mackenzie, J. Ha, G. Hanna, and D. Miskovic have no conflicts of interest or financial ties to disclose.


References

1. Seymour NE, Gallagher AG, Roman SA, O'Brien MK, Bansal VK, Andersen DK, Satava RM (2002) Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg 236:458–463
2. Schijven MP, Jakimowicz JJ, Broeders IA, Tseng LN (2005) The Eindhoven laparoscopic cholecystectomy training course—improving operating room performance using virtual reality training: results from the first EAES accredited virtual reality trainings curriculum. Surg Endosc 19:1220–1226
3. Palter VN, Orzech N, Reznick RK, Grantcharov TP (2013) Validation of a structured training and assessment curriculum for technical skill acquisition in minimally invasive surgery: a randomized controlled trial. Ann Surg 257:224–230
4. Grantcharov TP, Kristiansen VB, Bendix J, Bardram L, Rosenberg J, Funch-Jensen P (2004) Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg 91:146–150
5. Dale E (1946) Audiovisual methods in teaching. Dryden Press, New York
6. Rombeau J, Goldberg A, Loveland-Jones C (2010) Surgical mentoring: building tomorrow's leaders. Springer-Verlag
7. Parsloe E (1999) The manager as coach and mentor. Chartered Institute of Personnel and Development, London
8. Miskovic D, Wyles SM, Ni M, Darzi AW, Hanna GB (2010) Systematic review on mentoring and simulation in laparoscopic colorectal surgery. Ann Surg 252:943–951
9. Snyder CW, Vandromme MJ, Tyra SL, Hawn MT (2009) Proficiency-based laparoscopic and endoscopic training with virtual reality simulators: a comparison of proctored and independent approaches. J Surg Educ 66:201–207
10. Pellen M, Horgan L, Roger Barton J, Attwood S (2009) Laparoscopic surgical skills assessment: can simulators replace experts? World J Surg 33:440–447


11. Vygotsky L (1978) Mind in society: the development of higher psychological processes. Harvard University Press, Cambridge
12. Miskovic D, Ni M, Wyles SM, Kennedy RH, Francis NK, Parvaiz A, Cunningham C, Rockall TA, Gudgeon AM, Coleman MG, Hanna GB (2013) Is competency assessment at the specialist level achievable? A study for the national training programme in laparoscopic colorectal surgery in England. Ann Surg 257:476–482
13. Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9:97–113
14. Brown DC, Miskovic D, Tang B, Hanna GB (2010) Impact of established skills in open surgery on the proficiency gain process for laparoscopic surgery. Surg Endosc 24:1420–1426
15. Aggarwal R, Crochet P, Dias A, Misra A, Ziprin P, Darzi A (2009) Development of a virtual reality training curriculum for laparoscopic cholecystectomy. Br J Surg 96:1086–1093
16. Zhang A, Hünerbein M, Dai Y, Schlag PM, Beller S (2008) Construct validity testing of a laparoscopic surgery simulator (Lap Mentor): evaluation of surgical skill with a virtual laparoscopic training simulator. Surg Endosc 22:1440–1444
17. Miskovic D, Ni M, Wyles SM, Tekkis P, Hanna GB (2012) Learning curve and case selection in laparoscopic colorectal surgery: systematic review and international multicenter analysis of 4852 cases. Dis Colon Rectum 55:1300–1310
18. Gallagher AG, Ritter EM, Champion H, Higgins G, Fried MP, Moses G, Smith CD, Satava RM (2005) Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg 241:364–372
19. Loukas C, Nikiteas N, Kanakis M, Georgiou E (2011) The contribution of simulation training in enhancing key components of laparoscopic competence. Am Surg 77:708–715
20. Aggarwal R, Moorthy K, Darzi A (2004) Laparoscopic skills training and assessment. Br J Surg 91:1549–1558
21. Sturm LP, Windsor JA, Cosman PH, Cregan P, Hewett PJ, Maddern GJ (2008) A systematic review of skills transfer after surgical simulation training. Ann Surg 248:166–179
