The Laryngoscope

TRIOLOGICAL SOCIETY CANDIDATE THESIS

© 2015 The American Laryngological, Rhinological and Otological Society, Inc.

Improving Skills Development in Residency Using a Deliberate-Practice and Learner-Centered Model

Nasir I. Bhatti, MD, FACS; Aadil Ahmed, MD

Objectives/Hypothesis: Work-hour restrictions, increased workload, and subjective assessment of competency are major threats to the efficacy of the traditional apprenticeship model of surgical training in modern surgical practice. In response, medical educators are developing time- and resource-efficient competency-based models of surgical training. The purpose of our project was to develop, implement, and measure the outcomes of such objective and structured programs in otolaryngology. We also investigated factors affecting the learning curve, especially deliberate practice, formative feedback, and learners' autonomy.

Study Design: Prospective, longitudinal study.

Methods: To measure the surgical skills of residents, we first developed and tested objective tools for otolaryngology procedures. Based on these instruments, we identified milestones of the procedures. Training on a virtual-reality simulator was validated to shorten the learning curve. We also studied a learner-centered approach to training, factors affecting the learning curve, and barriers to a competency-based model.

Results: The objective tools were found to be a feasible, reliable, and valid means of measuring competency in both the laboratory and the operating room. With the formative assessment from these tools, residents had a remediation target to be achieved by deliberate practice. The milestones helped identify the threshold of competency, and deliberate practice on the simulator gave an opportunity for improving skills. The learner-centered approach allowed flexibility and personalized learning by shifting the responsibility of the learning process to the learners.
Conclusion: The competency-based model of residency, based on the principles of deliberate practice and a learner-centered approach, is a feasible model of residency training that allows development of competent surgeons and hence improves patient outcomes. Despite these advantages, challenges to this model require a concerted effort to overcome so that these principles of training can be fully implemented beyond just technical skills, ultimately creating well-rounded medical professionals and leaders in the surgical field.

Key Words: Surgical competency, surgical training, competency-based model, deliberate practice, learner-centered, personalized training, OSATS, simulation, milestones.

Level of Evidence: N/A.

Laryngoscope, 125:S1–S14, 2015

INTRODUCTION

Surgeons are fiduciary professionals responsible for competently using their knowledge and skills to benefit their patients. Residency programs therefore have a duty to train and develop surgeons who are competent to practice medicine without jeopardizing patients' safety. Traditionally, an apprenticeship model for surgical training has been practiced and has successfully created generations of competent surgeons.1 New regulations, however, such as work-hour restrictions and limited reimbursement for training pose major challenges to this model. In addition, subjective assessment to determine the level of competency of trainees in this model is under tight scrutiny by the public and credentialing agencies. To address this matter of measuring surgical competency, the Accreditation Council for Graduate Medical Education (ACGME) recommends that all residency programs evaluate their trainees under six core competencies.2 These challenges led to the idea of a competency-based model (CBM) of surgical training, which emphasizes objective and longitudinal assessment of residents' skills in both surgical and nonsurgical domains.3 Continuous evaluation and feedback help trainees identify their weaknesses early in the course of training, with an opportunity to remodel their skills in subsequent performance. This opportunity is further improved by the introduction of deliberate practice and simulation modalities in surgery. Simulation reproduces a real surgical environment and gives the trainee an opportunity to safely practice and improve surgical skills. The provision of immediate and automatic feedback and the flexibility to practice on simulators at any time are central to the paradigm shift of learning responsibilities in the training process. This learner-centered approach, in which learners take greater responsibility, is now essential for training programs with limited faculty resources and time. Still, to ensure the competency of a surgeon, a balanced approach is necessary, one that incorporates these deliberate-practice and learner-centered methods of training with objective assessments and formative feedback. Our purpose in developing and implementing a CBM in an otolaryngology residency program was to increase the understanding of surgical competency, improve the process of developing surgical skills, and evaluate the effectiveness of a competency-based model run on the principles of deliberate practice and learners' autonomy.

From the Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, U.S.A. Editor's Note: This manuscript was accepted for publication May 11, 2015. The authors have no funding, financial relationships, or conflicts of interest to disclose. Send correspondence to Nasir I. Bhatti, MD, Johns Hopkins Outpatient Center, 601 N. Caroline St., Suite 6241, Baltimore, MD 21287. E-mail: [email protected] DOI: 10.1002/lary.25434

Laryngoscope 125: October 2015

MATERIALS AND METHODS

To examine and fully implement the principles of deliberate practice, we tested trainees on a temporal bone virtual-reality simulator for acquisition of skills. Newer learning strategies such as a learner-centered approach were developed and implemented to assist the learning process. In addition, we conducted studies focusing on the factors affecting the learning curves and the challenges associated with a CBM. All of these studies were approved by our institutional review board. All statistical analyses were carried out in Stata 10.0 (StataCorp LP, College Station, TX); a P value of less than 0.05 was considered significant for all studies.

RESULTS

We developed and implemented a longitudinal and structured program, keeping the fundamentals of deliberate practice in mind (Table I). We developed performance-based testing, such as an Objective Structured Clinical Examination (OSCE), and tested trainees on standardized patients (SPs) as a first step to evaluate the effectiveness of objective assessment. As a next step, we developed Objective Structured Assessments of Technical Skills (OSATS) for major otolaryngology procedures, that is, tonsillectomy, thyroidectomy, endoscopic sinus surgery (ESS), mastoidectomy, and laryngoscopy, which were tested in both the laboratory and the operating room by deconstructing the procedures into small representative steps. Trainees were evaluated on checklist and global parts of the tool using a Likert scale. The checklist part evaluates performance on the deconstructed steps of the procedure, whereas the global part assesses preparation for the procedure and skill domains that include the visual-motor and cognitive performance required for a successful procedure. To create the items on these assessment tools, we used a modified Delphi technique, a systematic interactive forecasting method that relies on course curriculum and validation by a panel of experts. A Likert scale is a psychometric scale from 1 to 5 that evaluators use to rate the learner's performance (e.g., 1 = unable to perform, 3 = performs competently, and 5 = performs expertly). The continuous implementation of these tools led to the development of milestones, or benchmarks, for these key procedures. To increase the objectivity of the assessments, we also tested the trainees using video-based evaluations.

TABLE I. Five Principles of Deliberate Practice and Their Implementation in a Competency-Based Model.

Principle of Deliberate Practice: Implementation Methods
Setting learning goals: OSATS, milestones
Assessment: OSATS, OSCE, board examinations
Formative feedback: OSATS, surgical simulators
Repetition of performance: surgical simulators, cadavers, animal labs
Motivation: surgical simulators, self-directed learning (learner-centered approach)

OSATS = Objective Structured Assessments of Technical Skills; OSCE = Objective Structured Clinical Examination.

Both the OSCE and the OSATS objectively evaluated skills in the laboratory and the operating room. We found the OSCE was a valid method of assessing residents' clinical skills for evaluating hoarseness. Senior residents performed better in all of the tasks, such as obtaining a history and performing a physical exam on an SP, performing flexible laryngoscopy on a mannequin, and interpreting radiologic findings. Internal consistency, assessed by Cronbach's alpha as a measure of interitem reliability, was 0.92 for the laryngoscopy station and 0.95 for the radiology station.4 Similarly, the OSATS (tonsillectomy, thyroidectomy, mastoidectomy, endoscopic sinus surgery, laryngoscopy) were developed; tested in the laboratory and operating room settings; and found to be feasible, reliable, and valid, allowing monitoring of the learning curves of the trainees.5–8 These tools showed that performance improved with increasing level of experience, and they were construct-valid, as shown in Figure 1. The feedback provided during these evaluations helped the residents achieve targeted remediation, which was apparent in their subsequently evaluated performances. These tools also identified the tasks that were predictors of competency for an overall procedure.

Fig. 1. Graphs showing improved performance of residents as recorded by Objective Structured Assessments of Technical Skills for different otolaryngology procedures. (A) Mean checklist scores and global rating scores by postgraduate year show a general increase in residents' performance in tonsillectomy with increasing levels of training; (B) mean scores of a single trainee (PGY5) for both the checklist and global rating scale, showing the acquisition of surgical skills in thyroidectomy throughout a head and neck surgery rotation despite the complexity of the cases; (C) average checklist score plotted against cumulative days of otology experience, showing improving performance in mastoidectomy; (D) construct validity for checklist and global assessments for endoscopic sinus surgery, showing improved performance with increasing levels of training. PGY = postgraduate year. [Color figure can be viewed in the online issue, which is available at www.laryngoscope.com.]

For example, "identification of uncinate and boundaries" was found to have the strongest correlation with the overall surgical performance of endoscopic sinus surgery (r = 0.70; P < 0.0001).9 Analyzing data from the mastoidectomy OSATS, we found that opening the antrum and deepening dissection at the sinodural angle were the strongest predictors of overall surgical performance (r = 0.82; P < 0.0001) in mastoidectomy.7 These evaluations were most meaningful and demonstrated the highest validity when completed within 6 days of the procedure.10 Frequent faculty development sessions increased both the understanding and the overall use of these instruments within the residency program.11 Blinded video-based evaluations increased objectivity and significantly reduced the time required to complete the evaluations. Results showed construct validity, with senior residents performing better than junior residents. It took an average of 20 minutes (range, 7–39 minutes) to watch and evaluate a video. Interrater reliability, as measured by the intraclass correlation coefficient across evaluators, was 0.62.12 By defining milestones for common otolaryngology procedures and longitudinally implementing the OSATS, we could calculate the approximate number of cases required to become competent in each milestone and

overall procedure. For example, the first, second, and third milestones of the mastoidectomy procedure were achieved after performing an average of 6, 9, and 13 operative cases, respectively.13 Similarly, a 60% probability of achieving competency in the performance of all milestones of endoscopic sinus surgery is obtained with performance of 42 ESS procedures, and the probability increases to 100% with performance of 55 procedures. On average, it took residents 23 cases to become competent in the performance of maxillary antrostomy and ethmoidectomy (the first and second milestones).14 A subsequent study showed that the odds ratio (OR) for competency increased with experience in these initial milestones (OR = 1.13; P = 0.003). An interesting finding in this study was a tremendous increase in the OR for competency in both initial (3 times higher) and advanced milestones (10 times higher) when personal interest in otology (as reported in a survey) was also recorded (Table II).15 The concept of deliberate practice was successfully implemented through use of a temporal bone simulator.

Practice on this simulator increased the OSATS score (P = 0.02), reduced the time to complete the tasks (P = 0.01), and lowered error scores (P = 0.02).16 Deliberate practice also had a moderating effect on the learning curve and the ability to multitask, and assessment-guided feedback increased the trainees' interest in the task.15,17 Based on reported data about the learning styles of residents and fellows, we developed a learner-centered curriculum designed to meet the needs and learning patterns of residents to prepare them for their in-service examinations. Both mean national and group Stanine scores improved (P = 0.01) from the pre- to postintervention periods for all postgraduate years.18
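For readers less familiar with the statistics used in these analyses, the arithmetic behind an odds ratio and its 95% confidence interval, the quantities reported for the milestone analyses, can be illustrated from a simple 2 x 2 table. This is only a minimal sketch with invented counts; the helper function and all numbers below are hypothetical and are not data from our studies.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2 x 2 table:
    a = exposed & competent,   b = exposed & not competent,
    c = unexposed & competent, d = unexposed & not competent."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: residents reporting an interest in otology vs. not,
# cross-tabulated against reaching an initial milestone.
or_, lo, hi = odds_ratio_ci(18, 6, 10, 12)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

With these invented counts the estimate is OR = 3.60 with a lower confidence bound just above 1.0; it is the confidence interval excluding 1.0 that makes a result such as the interest-in-otology odds ratio statistically significant.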

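The interitem reliabilities quoted for the OSCE stations were reported as Cronbach's alpha. As a minimal sketch of how that coefficient is computed, assuming a small matrix of Likert ratings that is invented purely for illustration (not study data):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of examinee score rows (one column per item)."""
    k = len(scores[0])                         # number of checklist items
    items = list(zip(*scores))                 # transpose: one tuple per item
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical Likert ratings: rows = examinees, columns = checklist items
ratings = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [4, 4, 4, 4],
    [3, 3, 2, 3],
]
print(round(cronbach_alpha(ratings), 2))  # 0.94 for these invented ratings
```

Alpha approaches 1.0 as the items vary together across examinees; values above roughly 0.9, such as the 0.92 and 0.95 reported for the laryngoscopy and radiology stations, indicate strong internal consistency.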

DISCUSSION

The fundamental duty of surgical residency training is to produce a competent surgeon. This concept can be described as based on three main components: 1) technical skill, 2) safe judgment, and 3) high moral performance (meaning conscientiousness and dedication).19 The concern is to determine how well residents are trained in these areas. In response, a new field of surgical education has emerged that aims to use scientific understanding and to apply principles of adult education theory in surgery. The field of educational theory and research is not new: disciplines such as kinesiology, engineering, and aeronautics have seen a boom in educational research to enhance the student's learning experience. Although relatively behind the curve, medical education has undergone a major change over the past decade in both the quantity and quality of scientific work on educational theory and research. The work by Reznick's group in Toronto (Canada) and Darzi's group in London (United Kingdom) has successfully demonstrated improvement in the performances of individuals and of programs as a whole by applying principles of psychology and cognitive learning. The work of Ericsson on deliberate practice is another major example of how principles of psychology can be successfully applied for improved performance in any profession. Based on the evidence provided by these researchers, significant reforms have evolved in the structure of surgical residency programs. At the same time, surgical training should not be overwhelmed by general principles of education, because only surgeons understand the nuances of surgical training and must decide which principles are useful and can be applied. Because of the increased interest in the field by academic surgeons and the allocation of departmental positions for surgeons with a focus in education, many institutions are now offering graduate degrees in medical education.20

TABLE II. Logistic Regression Analysis Showing Significant Increment in the Competency for Both Initial and Advanced Milestones of Mastoidectomy When Personal Interest Is Involved.

Initial milestone: cortical mastoidectomy (items 1a-4c) / Advanced milestone: facial recess (items 5a-5c)

Environmental factors
  Number of cases: OR 1.13 (95% CI, 1.04-1.23), P = 0.003* / OR 1.05 (95% CI, 0.96-1.15), P = 0.273
  Attendance at the temporal bone course (supervised training session): OR 0.96 (95% CI, 0.84-1.10), P = 0.591 / OR 0.98 (95% CI, 0.81-1.17), P = 0.782
  Extra time spent in the laboratory (unsupervised training session): OR 0.52 (95% CI, 0.17-1.57), P = 0.248 / OR 0.05 (95% CI, 0.011-0.229), P < 0.001*
Personal factor
  Interest in otology: OR 3.86 (95% CI, 1.22-12.27), P = 0.022* / OR 10.38 (95% CI, 2.25-47.94), P = 0.003*

*Significant at P ≤ 0.05. OR = odds ratio; CI = confidence interval. A. The survey questionnaire that explored interest from each resident. B. The task-based checklist for mastoidectomy.

Surgical Training and Its Challenges

The apprenticeship model is the most common and well-established method of surgical training still in practice, although not in its original form.1 Traditionally, the trainee learned by observing and emulating the assigned mentor in the operating room and clinical settings. Surgical techniques were learned by demonstration and repetition of the procedure. The major shift in surgical education took place when Dr. William Halsted gave structure to the training, introducing a pyramid structure in residency training that eliminated candidates each year until only one resident reached graduation.1 This did not eliminate the apprenticeship model but instead gave it a structure and standardization previously lacking. Dr. William Osler was also instrumental in the genesis of this model. A strong proponent of mentoring, Osler emphasized a committed and invested role of mentors in nurturing trainees.1 This model allowed residents to rotate with their mentors, and faculty were required to instruct residents while seeing patients in clinics, on the floor, and during operative procedures. Well-documented success stories of graduating surgeons further validated this model, which has been the dominant standard of surgical training. However, new assessment paradigms proposed by the ACGME and the changing landscape of modern surgical practice pose major challenges to this apprenticeship model that are increasingly difficult to overcome. The first of these challenges is resident duty-hour restrictions, limited to 80 hours per week in the United States and 56 hours in Europe.21 Although this restriction may reduce residents' fatigue and poor functioning, its impact on resident training and residency structure can be negative. The opportunity for residents to learn from the attending physician may be reduced, and altogether missed in some unusual and rare cases. Furthermore, a procedure previously attended 10 times or more by a resident may now be observed many fewer times. Such lost opportunities to learn will increase the struggle to become competent even in the major and more common procedures.

Reduced reimbursement for teaching activities is another challenge, one that burdens academic physicians with the need to be clinically productive and thus leaves less time to teach.21 Other issues such as the need for efficient completion of the procedure, productivity, and patient safety are concerns that limit residents' hands-on experience, further reducing the opportunity

for direct learning.22 None of these issues, however, is an excuse for programs to compromise the excellence of their training and hence patient safety. Programs must introduce new methods and adopt models of teaching that do not violate these rules and at the same time maintain the quality standards of the training program. In addition, the increased attention of the public and media to patient safety and performance ratings has increased pressure on credentialing agencies for enhanced assessment of surgeons' skills and training. Subjective evaluation of competency within the apprenticeship model is based on unsystematic observations and is subject to bias and inconsistency.23 Procedural logbooks also fail to indicate the quality of performance and the specific involvement of the trainee, which raises major concerns about the reliability and validity of those assessments, not to mention the endangerment of patient safety. Such assessments provide only summative feedback, which fails to identify residents' performance gaps, making targeted remediation impossible. The reason is that apprenticeship training is based on a time-spent model, with the assumption that a trainee will be transformed into a competent practitioner after a specific period of training.3 This has led to the new CBM of training, which is gaining popularity because of the many challenges associated with the apprenticeship model. This new model centers on demonstration of predetermined competencies that are measured objectively: it focuses not only on the skills learned but also on the outcomes of performance.

This model emphasizes timely and formative feedback given to the trainees based on objective evaluation by the faculty.3 Combining different definitions of the model, Frank et al.24 summarized the CBM as an approach to preparing physicians for practice that is fundamentally oriented to the graduate's outcome abilities and organized around competencies derived from an analysis of societal and patient needs. It deemphasizes time-based training and promises greater accountability, flexibility, and learner-centeredness. To develop such a model, one needs to look into the true concept of competency.

What Is Surgical Competency?

The concept of competency in surgery is not new; it can be found as early as 500 years ago in the declaration of the original charter of the Royal College of Surgeons of Edinburgh, with its emphasis on being worthy of practicing surgery.25 Benner26 described competence as part of a spectrum ranging from novice, to advanced beginner, to competent, to proficient, to expert. The novice has no experience, lacks the confidence to demonstrate safe practice, and requires continued supervision. Advanced beginners demonstrate marginally acceptable performance because of prior experience in actual situations. Competent people are coordinated, able to demonstrate efficiency, and confident in their actions based on considerable experience. Care is completed within a suitable time frame without supporting cues. The competent are able to plan their actions but lack

flexibility and speed. Proficient practitioners understand a situation as a whole because they perceive its meaning in terms of long-term goals. Proficient physicians learn from experience what typical events to expect in a given situation and how plans need to be modified in response to these events. They can recognize when the expected normal picture does not materialize. This holistic understanding improves their decision making, making it less labored. The expert's performance becomes fluid, flexible, and highly proficient. Highly skilled analytic ability is demonstrated in those situations with which they have had no previous experience. Although the successful outcome of a surgical procedure depends primarily on the competence of the surgeon, other factors also contribute.25 Hall et al.27 describe surgical competence as the ability to successfully apply professional knowledge, skills, and attitudes to new situations as well as to familiar tasks. According to Girot,27 competence is defined by behavioral components (equated with the ability to actually perform tasks) and psychological components (equated with cognitive, affective, and psychomotor skills), which are not mutually exclusive. This notion is better explained by Bhatti and Cummings,27 who define competence and competency separately: competence is the aspect of a job that an individual is able to perform, whereas competency is the behavior underpinning such performance. This holistic approach is warranted for two reasons: first, it attempts to address the complexity of society's expectations of the medical profession; and second, it gives due consideration to the uncontrolled nature of situations in the day-to-day practice of medicine.

Accreditation Council for Graduate Medical Education Initiative

Based on this new model of surgical training, the six core competencies—patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice—were developed by the ACGME in the early 1990s and officially endorsed in 1999.2 These competencies are part of the ACGME's long-term initiative, the Outcomes Project, which emphasizes educational outcomes in the accreditation of residency programs. Programs are expected to identify their objectives based on these competencies, objectively record the achievement of these competencies, and use educational outcome data to improve resident and overall program performance.2 The introduction of these competencies has increased pressure on residency programs to develop and implement tools that are objective and can reliably evaluate these competencies.

Deliberate Practice

Work-hour constraints, compliance with core competencies, and outcomes assessment potentially affect the operating time and experience available to trainees. In addition, the use of cadavers and animal models has been


decreasing as a result of the high maintenance costs of such laboratories and related ethical issues.16 This situation requires time- and cost-effective methods for acquiring surgical skills to improve the quality of training. Without question, no one becomes an expert professional without sufficient practice and years of experience, but practice and experience alone do not necessarily ensure a superior level of performance. Professionals with the same amount of experience may be at different levels of performance. Although everyone in a specific domain gets better with experience, some people learn faster and continue to improve, whereas others reach a stable level of performance and maintain it for the rest of their careers. Traditionally, these differences in motivation and performance are attributed to individuals' abilities, mental capacities, and innate talent, which has led organizations to use a system of testing and interviewing to find individuals of higher talent. In contrast, Ericsson28 argues that "individual differences in ultimate performance can largely be accounted for by differential amounts of past and current levels of practice." Deliberate practice is engagement in a highly structured activity with the specific goal of improving performance. Such practice is qualitatively different from work and play and "includes activities that have been specially designed to improve current level of performance."28 Deliberate practice becomes easier to understand with Fitts and Posner's29 three-stage theory of motor skills acquisition. After the first cognitive stage (theoretical training), the learner needs to advance through an integrative stage, during which knowledge is translated into appropriate motor behavior, leading to fluidity and rapidity of execution, by which time the last autonomous stage is attained. Simple practice may lead directly to the autonomous stage, skipping the integrative stage.

An acceptable standard of performance is typically attained and the performance skills become automated; however, as a consequence of automation, performers lose conscious control over execution of those skills, making intentional modifications difficult. Once the automated phase of learning has been attained, performance reaches a stable plateau with no further improvements (Fig. 2). Deliberate practice has a long integrative stage, which helps performers gain increasing control over their actions. This indicates that although everyday

skills can be learned with simple practice, development of expertise is not possible without experience and deliberate practice. This concept of deliberate practice also requires reevaluating the concept of expertise in a field. Experts in a specific domain are generally identified by their extensive experience or reputation. This method is highly subjective and may not reflect superior performance. For example, highly experienced computer programmers' performance on programming tasks is not always superior to that of computer science students, and physics professors from the University of California, Berkeley, were not consistently superior to students on introductory physics problems.28 Therefore, the social criteria of being an expert cannot be validated; only individuals consistently exhibiting superior performance on an objective scale should be considered experts. To avoid the plateaued development associated with automaticity, it is important that individuals have well-defined learning goals. This allows individuals to deliberately construct and seek out training situations in which the desired goal exceeds their current level of performance. Furthermore, by receiving immediate feedback and opportunities to reflect on possible refinements, competitive individuals are able to improve with repeated exposures to similar tasks over time, thus increasing their motivation. The importance of deliberate practice in attaining expert performance was first described in a report of expert musicians studying in Berlin.28 Those in the superior groups spent more time in solitary practice, concentrating on the improvement of specific aspects of the musical performance, as directed by their music teachers. The best experts spent 4 hours per day, including weekends, on this type of practice. By the age of 20, the best expert musicians had accumulated over 10,000 hours of practice.

Since that publication appeared, several studies have shown a correlation between deliberate practice and higher performance in chess, music, and different types of sports.28 Ericsson has also used this concept of deliberate practice to understand skills acquisition in medicine and has described it for pulmonary auscultation, mammogram interpretation, and medical diagnosis.28 Although the principles of deliberate practice can benefit surgical training, the timing of learning opportunities depends on the operating schedule and does not allow a personalized approach to deliberate practice. In addition, time constraints and patient safety issues are major hindrances to using the operating room as an optimal learning environment. As a result, immediate and formative feedback about the trainee's performance is not always available in the operating room. Even if a trainee is competent enough to perform the procedure, there is no standardization (owing to patient variability) or opportunity for repetition. In summary, deliberate practice as described by Ericsson28 entails the following: 1) well-defined learning objectives, 2) precise measurements of performance, 3) informative real-time feedback, 4) focused and repetitive practice, and 5) motivated learners. How these principles

were incorporated in developing a CBM is described below and summarized in Table I.

Fig. 2. Illustration of the qualitative difference between the course of improvement of expert performance and of everyday activities. Source: Ericsson.28

Defining Learning Goals and Assessment of Performance The challenges described above may explain the lack of widespread use of deliberate practice in surgical training. To enable deliberate practice in this context, the first task is to define the learning objectives and identify the thresholds of surgical competency. Based on the holistic definition of surgical competency, the skills of a competent surgeon can be divided into the technical and the nontechnical: communication, decision making, teamwork, situational awareness, and leadership. The real challenge is to evaluate these skills objectively and integrate the data to determine six core competencies. Although the traditional methods such as in-training evaluation reports, written/oral examinations, and procedure case logs may measure ACGME competencies, the psychometric (reliability, validity, feasibility) properties of these assessments are questionable.30 In addition, these assessments take place at a specific time in the course of training, and residents are not obliged to demonstrate competence in the daily activities of clinical practice.

Objective Structured Clinical Examination
These challenges led to the development of performance-based testing such as the OSCE. A typical OSCE consists of a series of stations through which examinees rotate. At each station, an examinee is asked to perform a clinical task on a standardized patient, such as a focused history or physical examination. An examiner or the standardized patient assesses the examinee's performance on a predetermined rating scale. Such a station may be followed by a postencounter station, where the examinee is asked to give a differential diagnosis, order appropriate investigations, or interpret results. Many medical schools and licensing authorities have adopted the OSCE model because of its ability to measure all the core competencies.30 The practicality, reliability, and validity of an OSCE have been demonstrated by many medical and surgical specialties.4 Objective Structured Clinical Examination-related efforts in otolaryngology, although few, have been shown to reliably assess the core competencies and enhance skill development. The pilot study by Stewart et al.4 is one of the pioneering studies that not only developed but also validated an OSCE on hoarseness. Its stations, consisting of an encounter with a simulated patient and assessing the need for performing laryngoscopy and interpreting radiology results, not only measured core competencies but also identified the exact gaps in performance where improvement was needed. The performance was videotaped and evaluated blindly by faculty, removing faculty bias. Both faculty and simulated patients evaluated trainees on a checklist using a Likert scale that corresponds to Benner's five levels of competency.26

A structured and step-wise assessment helped identify the errors in performance, enabling targeted remediation. Because simulated patients are trained to focus on residents' attitudes, interpersonal skills, and efficiency, the process allowed measurement of residents' communication and interpersonal skills. Also, because all the stations were standardized for all the examinees, the format removed the possibility of favoritism (familiarity bias), which may be present in traditional testing environments. Unlike written or oral exams, this approach allowed residents to interact with a patient, which is similar to a real-life situation (face validity). Moreover, senior residents performed better in all tasks, confirming construct validity. The authors also reported high reliability across the different tasks. Given that most medical schools have standardized patient programs with fixed administrative costs, with faculty expected to participate as part of their academic responsibility, the OSCE is highly feasible.30 More widespread development and use of such scenarios may be the desired answer for otolaryngology residency programs in response to the demands of the Milestones project.

Objective Structured Assessment of Technical Skills
Based on the same concept as the OSCE, the OSATS was first developed by Reznick's31 group in Toronto for use with surgeons. The OSATS not only measures the competency of the surgeon but also provides targeted learning goals for each surgical procedure. This method of determining learning objectives and tools for evaluation has been practiced successfully in aviation for decades: a pilot's task is organized into a logical sequence, and the pilot's performance is evaluated (as satisfactory or not) at each step. An individualized training program is then developed for the deficiencies observed.5 Similarly, deconstructing the overall surgical performance into a checklist of the different tasks of the procedure allows a more objective and structured evaluation by turning the evaluators into observers rather than interpreters of behavior. Evaluation on a smaller and well-defined task also reduces the potential for bias from factors such as the relationship between the evaluator and the resident, the resident's performance on other steps of the procedure, and the resident's competency in areas other than technical competence.7 This allows residents to be aware of what is being evaluated, making the goals of training and learning explicit, consistent, and well-defined. A minimally acceptable level of performance (usually 3) on the Likert scale is predefined and determined by the experts. This approach allows residents to keep improving their competency level beyond this minimally acceptable level.7 Utilizing Reznick's31 approach, assessment instruments have been developed and validated for a number of procedures in different surgical specialties. In otolaryngology, the first OSATS was developed for tonsillectomy by Roberson.32 This tool was reliable and valid, but the length of the instrument and the large number of items made it less feasible in everyday settings. It can be

argued that an increased number of items adds to reliability and validity by capturing maximum information on the performance; however, an evaluation tool must be feasible for regular use before adoption. A study by Williams et al.33 demonstrated that increasing the number of items on a tool like the OSATS has little effect on reliability. Notably, the same study found that increasing the number of evaluations per resident has a large effect on improving the reliability of an assessment. Based on this finding, Ahmed et al.5 developed a compact and focused assessment tool for tonsillectomy while retaining the reliability and validity of the evaluation. This tool's feasibility was indicated by the high response rate (100%) and short time to complete the assessment (4 minutes). The results in this study were derived from a large sample over 3 years that allowed multiple observations per resident in the operating room. This is an important consideration because achieving an optimal level of sampling spread across assessors, instruments, and contexts improves both the validity and reliability of competency assessment. In addition, residents from all postgraduate year (PGY) levels were included in the study, which provided data on changes in the performance of the residents over time and showed high construct validity for both the checklist and global parts of the instrument, as scoring improved with increasing PGY level. A considerable improvement was especially noted in the residents from PGY 1 through 3, but a ceiling effect was demonstrated with senior residents. This ceiling effect can be mitigated with deliberate practice, as discussed earlier. The checklist part showed that after competency was achieved, the range of scores also narrowed, possibly representing greater precision in skills.
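The leverage gained from additional evaluations can be illustrated with the classical Spearman-Brown prophecy formula, which projects the reliability of an average of k independent observations. This is only an illustrative sketch of the psychometric principle, not the analysis used in the cited studies, and the single-observation reliability value below is hypothetical:

```python
def spearman_brown(r: float, k: float) -> float:
    """Projected reliability when a measurement is lengthened k-fold,
    given single-measurement reliability r (classical test theory)."""
    return k * r / (1 + (k - 1) * r)

# A single OSATS observation with modest reliability (hypothetical value)...
single = 0.45

# ...improves substantially as more independent evaluations are averaged:
for k in (1, 2, 4, 8):
    print(k, round(spearman_brown(single, k), 2))
```

Doubling the number of observations raises projected reliability far more quickly than lengthening the checklist itself, consistent with the finding that multiple evaluations per resident matter more than the number of items.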
The same research group developed additional OSATS for other otolaryngology procedures.6,7 An OSATS for thyroidectomy was initially developed by Stack et al.,34 but again the issue of feasibility (the checklist had 18 items) made it less popular. A new tool developed by Varela et al.6 had only 10 items on the checklist, significantly increasing its feasibility. In this pilot project, residents were evaluated by a single faculty member who had a chance to work with individual trainees on multiple occasions in a sequential manner, thus observing development of each trainee’s surgical skills. This tool also showed capacity for discerning the complexity of the surgical procedure and showing improvement in the evaluation score subsequent to a difficult case. Having a busy expert thyroid surgeon as an evaluator provided the ability to set benchmarks on the specific assessable tasks of the checklist, hence allowing the surgeon to determine when such benchmarks were tougher to achieve on difficult cases. Therefore, the evaluator could provide valuable feedback to the residents, helping them to improve in subsequent cases. Although the presence of a single evaluator may raise the question of faculty bias, this issue was controlled by constant faculty development (for more on faculty development, see Timely Completion and Faculty Development section). Besides the increased reliability of the tool, this study found progress in the performance throughout the

rotation. A higher score was noted on both the checklist and global parts with increasing level of training. An important observation was the consistency of proficient- and expert-level performance shown by clinical fellows. This indicates that at the highest level of surgical training, trainees surpass competency and achieve proficiency. This tool showed that they reached this level before graduating, making the tool potentially useful for in-training assessment of surgical competency. This high level of performance is what is termed a performance bar; a cornerstone of the competency movement is the demonstration of satisfactory achievement of this performance bar by a graduating resident.27 To reach this high performance level, one first needs to understand what constitutes satisfactory achievement. According to Crebbin,27 satisfactory can be defined as either "adequate" or "satisfying expectations, leaving no room for complaint." Most important in competency is "leaving no room for complaint," which will ensure that the bar continues to be set at a high level, and that it in fact will be raised to the highest level of performance, as indicated in Miller's triangle35 (Fig. 3). This tool successfully demonstrated competence at Miller's "performs" level in daily practice, rather than merely at the "shows how" level evaluated in a traditional testing situation. Mastoidectomy was another major procedure for which an OSATS was first developed by Zirkle et al.36 The major problem with that instrument was that it used a 2-point (0 and 1) scale. Such a scale may have high interrater reliability, but it takes away the opportunity to provide feedback aimed at improving already competent residents' scores to the level of expert. The later mastoidectomy OSATS developed by Francis et al. was first pilot-tested and then revalidated in the operating room.7,11 This step-wise approach allowed modifications that were more practical for the operating room.
Steps not routinely performed by the residents in the operating room were removed from the checklist, increasing the feasibility of the instrument. The instrument showed a strong correlation between its checklist and global parts, suggesting that the checklist can serve as a frequently used feedback tool and the global part as a confirmatory evaluation of competency at transition points of professional training. This instrument also identified tasks on the checklist that were strongly correlated with overall surgical performance, and thus helped predict the competency of the trainees in mastoidectomy as a whole. This provides an opportunity to enhance the efficiency of time and resources spent on the teaching of these skills. Similarly, the OSATS tool for ESS developed by the same group was also found to be feasible, reliable, and valid.8 The only problem was interrater agreement, which was low initially but increased with targeted faculty development (see Timely Completion and Faculty Development section). This underscores the importance of faculty development before and during the course of the evaluation process to ensure compliance and fair and meaningful evaluations. Similar to the mastoidectomy tool, this tool also identified tasks on checklists that were predictors of overall competency in ESS, thus allowing


residents to focus on those tasks during busy rotation schedules.9

Fig. 3. Miller's triangle (modified) showing levels of competence by a surgeon in ascending order. Novice and beginner surgeons will show familiarity with a task at the first two levels; competent surgeons will demonstrate ability at the third level; and experts will perform at the highest level in real-life situations. Source: Miller.35

Video-Based Evaluations
Although these OSATS are more systematic, quantitative, reproducible, and comprehensive than other methods of assessment, the process is not 100% objective, mainly because the criteria used to determine competence are based on the judgment of the faculty. Entirely objective evaluation methods include dexterity analysis systems, which have been reported and consist of electromagnetic field generators and sensors attached to a surgeon's hand.7 Data generated by the sensors are used to assess dexterity measures such as the number of movements, the distance travelled by the hands, and the time taken for the task. Despite its objectivity, this form of assessment remains experimental and confined to the laboratory because of its lack of practicality and feasibility.7 Much more practical is the use of blinded videotapes of the procedure to make the evaluation more objective and time efficient. Evaluators can fast-forward to the key steps of the procedure on the recording, greatly minimizing the time required to assess a resident. Evaluation time using a video can be reduced to 34% of that needed for live assessment, in which the evaluator has to be present for the duration of the surgery.12 In the study by Laeeq et al.12 testing video-based evaluation for ESS on OSATS, it took the evaluators an average of 20 minutes to watch each video. This is particularly important in laboratory settings, where the attending physician does not have to supervise while a resident performs a surgical procedure on a cadaveric or animal model. With this form of assessment, procedures can be viewed by multiple raters at their convenience, creating a larger pool of evaluators and a more efficient system of evaluations. Similarly, with the possibility of multiple independent evaluations per resident, more meaningful data can be obtained along with better tracking of the learning curve. The videos can also be sent to outside programs for assessment by external evaluators. Residents can watch the evaluated videos of their performance, giving them an opportunity for self-appraisal and deliberate reflection on their performance; however, care should be taken when deleting extraneous frames because such edits can make the video less informative and the evaluation less reliable. Because some global aspects of the performance, such as the flow of the operation or knowledge of the anatomy, are not properly captured on video, it is necessary that these assessments be intermixed with live assessments in a structured way.

Timely Completion and Faculty Development

Increasing the number of evaluations and the standardization of testing conditions have been shown to improve interrater reliability33; however, all of these efforts have focused largely on the properties of the instrument. The effects of noninstrument factors on the psychometric properties of an assessment tool have not been widely studied. The time elapsed between the performance and the completion of the evaluation can potentially affect the reliability and validity of the assessment tools secondary to recall bias. Laeeq et al.10 reported that the mean number of days taken to complete an evaluation was 7.7, with a range of 1 to 327 days. Kim et al.10 had similar results: the median time to completion of evaluations was 11 days, 9 hours, with the quickest evaluation completed 18 hours after assignment; however, the construct validity was significant only for those evaluations completed within 6 days of the procedure. These findings underscore the importance of timely completion of evaluations and, by extension, of faculty development. Faculty development, however, is only possible if the faculty has agreed to the idea of competency-based education and assessment. Lack of faculty understanding has been cited as one of the most important barriers to successful implementation of the ACGME mandate.37 To win faculty support, programs should recognize the importance of faculty in the implementation of competency-based assessment and education and undertake steps to improve faculty involvement. Faculty development seminars, employment of feasible assessment tools, provision of positive feedback from the residents, promoting ownership, and providing flexibility in scheduling are some steps that can be undertaken to improve faculty support and participation. Similarly, these efforts have also been shown to improve the use of OSATS, interpretation of the Likert scale, and interrater agreement between the evaluators.11
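A program could monitor timeliness with a simple report that flags evaluations completed outside a validity window. The sketch below is purely hypothetical: the record format and function names are illustrations, and the 6-day cutoff is borrowed from the construct-validity finding described above:

```python
from datetime import date

# Window within which construct validity held in the cited study (assumption
# that a program would use it as an operational cutoff).
VALIDITY_WINDOW_DAYS = 6

# Hypothetical records: (resident, procedure date, evaluation-completion date).
evaluations = [
    ("resident_a", date(2015, 3, 2), date(2015, 3, 4)),
    ("resident_b", date(2015, 3, 2), date(2015, 3, 20)),
]

def flag_stale(records, window=VALIDITY_WINDOW_DAYS):
    """Return (resident, days elapsed) for evaluations completed outside
    the validity window."""
    return [(who, (done - performed).days)
            for who, performed, done in records
            if (done - performed).days > window]

print(flag_stale(evaluations))  # → [('resident_b', 18)]
```

Such a report would give program directors a concrete target for the faculty-development efforts described above.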

Milestones
Case logs of a resident may provide information about whether the resident gets enough opportunities to prepare for effective and safe surgical management, but this case-number approach fails to address details about skills acquisition that can be used for feedback and program improvement.13 On the other hand, analysis of the progression of residents' surgical skills to the level of competency can provide benchmarks or skill milestones that may help in defining a common standard for competency in a given procedure. The Accreditation Council for Graduate Medical Education has also shifted to a milestones-based accreditation system, the Next Accreditation System.2 This system aims to include milestones-driven assessment and improvement for the six competencies. Data generated by the implementation of OSATS allowed our research group to define milestones for ESS and mastoidectomy.13,14 To identify the number of cases performed to achieve competency in different steps of the procedure, the items on the checklist assessment tool were grouped into three different milestones based on faculty opinion and the similarity of the tasks in terms of complexity and the level of skill required to complete each task competently. Francis et al.13 showed in their study of otolaryngology residents that the number of cases needed to achieve competency in performance of each task of mastoidectomy varied from six to 14 cases, depending on the complexity of the task. Similarly, results in ESS indicate that a 60% probability of competency in performing the first milestone (maxillary antrostomy and anterior ethmoidectomy) is achieved after performing 18 ESS procedures; however, for all steps of the ESS, the 60% probability of achieving competence is obtained only after performing 42 procedures.14 Another study, seeking to identify the learning curve for flexible laryngoscopy, showed that an average of six attempts was required to become competent in performing flexible laryngoscopy on a model. The probability of achieving competence exceeded 80% with the 14th attempt.38 These findings can be used by residency programs when developing benchmarks and milestones and also for monitoring a resident's progress throughout training in otolaryngology.

Incorporation of Formative Feedback
Both summative and formative assessments are commonly used in education. Summative feedback is comprehensive in nature and is based on an assessment undertaken near the end of a program of teaching to ensure a student has reached a set of defined goals and objectives. Formative assessments are used as regular, ongoing, in-course assessments providing immediate feedback on progress. Formative assessment helps students identify problems and improve their technique and skills. Because medical training is lengthy, the learning curves of trainees should be continuously monitored to avoid waste of time and resources. This monitoring allows a determination of the trainee's place on the spectrum of competency from novice to expert. The OSATS and the identification of milestones provide such formative feedback, which highlights the exact gap between the actual and the intended performance, offering an impetus for targeted improvement in future performances by means of deliberate practice.

Repetition of Performance and Motivation via Simulation
With the emergence of low- and high-fidelity and virtual-reality simulators, the opportunity to integrate the principles of deliberate practice into a competency-based model has greatly expanded. A surgical simulator may incorporate several elements such as providing feedback, the opportunity for repetitive practice, increasing levels of difficulty, the ability to adapt to different learning strategies, and the provision of clearly stated benchmarks and outcome measurement.39 A simulation not only enables a participant to reproduce, under test conditions, phenomena that are likely to occur in actual clinical settings but also offers the possibility of presenting rare problems and emergencies, better preparing performers to deal with such situations if and when they are encountered in real-life settings. It offers practice of surgical cases in an immersive milieu of high-quality visual, auditory, and haptic feedback.16 Trainees can set their learning goals and practice on selected difficult tasks. Unlike surgery in actual settings, the simulator can be stopped at any time, giving trainees a chance to correct mistakes immediately and even to perform challenging parts of procedures repeatedly. The greatest advantage of surgical simulators is the objective quantification of potential errors and near misses. One of the strengths of the aviation industry's safety record is the reporting of near misses, which is stressed in simulation training. Because a surgical simulator can accurately calculate the path of the virtual instrument and how close to an anatomical hazard an instrument is placed, it can measure near misses that may or may not result in a postoperative complication, thus training residents to avoid them in real situations.40 A big advantage of simulators in surgical training is the flexibility of scheduling, which allows self-directed practice without the presence of faculty; however, this self-directed practice depends on personal attributes such as interest in the subject and subsequent motivation. The study by Malik et al.15 using a mastoidectomy simulator showed that personal attributes exert substantial influence on surgical performance. Their study showed that the odds ratio (OR) for competency in a simple mastoidectomy task increased with experience for each procedure done in the operating room (OR = 1.13; P = 0.003), but it was more than three times greater when personal interest in otology was also taken into consideration (OR = 3.86; P = 0.02). More importantly, this ratio for competency increased 10 times for a difficult task coupled with personal interest (OR = 10.38; P = 0.003) (Table II). Motivated residents required less surgical practice to achieve competency, particularly for the challenging task of navigating the facial recess anatomy. Although a complex interaction of factors is likely to contribute to surgical learning, this study showed that interest in the field, paired with actual hands-on deliberate practice, considerably influenced the magnitude of performance growth. For less-interested individuals, interest or learner motivation may be generated or enhanced by early success in practicing on the simulator, allowing trainees to track their performance, do targeted practice, and observe relatively rapid improvement. Another important aspect of this study was a negative association of performance with extra time spent in the laboratory without supervision and feedback. This
again reinforces the idea that setting learning goals and providing immediate and formative feedback are fundamental to achieving competency. Although evidence of the effectiveness of simulation in surgical training is substantial, simulation in the field of otolaryngology is relatively new. Virtual-reality simulators are being validated for endoscopic sinus surgery and temporal bone surgery and have shown promising results.16,39,41,42 The multicenter trial by Welling39 developed and validated a temporal bone dissection system that was comparable to cadaveric bones for practice. Computed tomography data were used to render three-dimensional structures efficiently, and realistic visual and haptic feedback was generated using a volume-based rendering algorithm, keeping the cost low while maintaining the complexity of a case. Similarly, a study by Francis et al.16 using a mastoidectomy simulator provides evidence of such systems as a potentially valuable educational tool in acquiring psychomotor skills. Larger gains in the procedural performance of the most experienced trainees highlight the potential value of the virtual-reality simulator for enhancing psychomotor and procedural learning when practice is based on an appropriate cognitive foundation or integrated with appropriate teaching and curriculum. This is in line with the Fitts and Posner29 model of skills acquisition, in which practice on a simulator should move a trainee from an integrative stage to an automated stage of motor acquisition. The study by Francis et al.16 showed that participants were able to perform the task faster, with fewer errors, and with better economy of motion after practicing for a short time on a temporal bone simulator. Another study, by Zhao et al.,43 using a randomized controlled design, found that training on the temporal bone simulator demonstrated transferability of skills, with improved outcomes in cadaveric temporal bone dissection.
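The kind of proximity metric described above (how close the virtual instrument comes to an anatomical hazard) can be sketched as follows. The data format, threshold, and function name are hypothetical illustrations of the concept, not any simulator's actual API:

```python
import math

def near_misses(tool_path, hazard, threshold_mm=1.0):
    """Count tracked samples in which the instrument tip comes within
    `threshold_mm` of a hazard (e.g., the facial nerve) without contact.

    tool_path: list of (x, y, z) tip positions in mm; hazard: (x, y, z).
    """
    count = 0
    for p in tool_path:
        d = math.dist(p, hazard)      # Euclidean distance to the hazard
        if 0 < d <= threshold_mm:     # close approach, but no collision
            count += 1
    return count

# Hypothetical drill-tip trajectory approaching and retreating from a hazard:
path = [(0.0, 0.0, 5.0), (0.0, 0.0, 2.0), (0.0, 0.0, 0.8), (0.0, 0.0, 3.0)]
print(near_misses(path, (0.0, 0.0, 0.0)))  # → 1 (one sample within 1 mm)
```

A running count of such events, fed back to the trainee, is the simulator analogue of the aviation industry's near-miss reporting mentioned earlier.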
Some might argue that simulators (at least high-fidelity and virtual-reality versions) are expensive and not feasible. In fact, a low-fidelity training model for ESS that costs less than U.S. $5 has been developed with reusable, recyclable, and readily available materials.41 Novice surgeons effectively learned nasal endoscopy and basic sinus surgery skills on this low-cost task trainer and were evaluated using OSATS with measurable and reliable metrics. Others might dispute the retention of skills learned on a surgical simulator. A study on an ESS simulator, however, found that medical students not only performed simulated ESS within a reasonable approximation of the performance of experienced sinus surgeons but also resumed their training without deviation from their prior learning curves over a period of at least 2 months.42 Deliberate practice also has implications for an enhanced ability to multitask. This is shown in the study by Ahmed et al.17 that subjected residents and faculty to different auditory and cognitive distractions and tested participants for their ability to multitask. The superior performance of faculty in this study can be attributed to their training through real experiences that allow them to make intentional adjustments in unexpected situations. This experience can be safely

transferred to junior residents with the use of deliberate practice methods, given the efficacy of simulation in enhancing the learning experience. Current research suggests that developing simulators that serve the dual purpose of providing opportunities for deliberate practice and assessing the current level of performance will make the experience more individualized and effective for both juniors and skilled performers.


Learner-Centered Approach and Personalized Training
Because of the limited availability of faculty time for teaching, work-hour restrictions, and other challenges described earlier, medical postgraduate training has shifted from the teacher-centered knowledge-transfer approach to a focus on the learner as an active participant in the educational process. The purpose of this approach is to increase the competence of the trainees by actively involving them in the process. This increases overall interest, which plays an important part in achieving competency.15 This approach is actually a modification of Mall's model of surgical training,1 which placed the responsibility for learning within the training programs solely on the residents. The importance of mentorship and feedback, however, cannot be overemphasized. By incorporating these considerations, Weimer44 described five key principles of this learner-centered approach. The first is that the content should meet individual learners' needs in terms of their preferred learning styles. David A. Kolb45 describes four learning styles (accommodating, diverging, converging, and assimilating) that depend on how people perceive and transform experience (Fig. 4). Studies have reported a preferred learning style in surgical trainees.46 In otolaryngology, Laeeq et al.46 showed that converging and accommodating were the two learning styles preferred by three-fourths of the total resident population. For both of these learning styles, individuals process newly acquired knowledge by active experimentation and learn most effectively by being actively involved in simulations, laboratory assignments, presentations, and demonstrations. Another study, by Varela et al.,47 done on otolaryngology fellows, however, showed that their preferred learning style was a balance of the four styles. This finding could be attributed to the fact that as trainees develop and mature within the training process, they reach a level where the training environment forces them to adapt to a variety of learning situations for which a single style might not be enough. These studies support the need for tailoring the educational process to meet learners' demands. The second principle of Weimer's learner-centered approach is that the role of the teachers should become more facilitative than instructive. In this way, "teachers do less telling; learners do more discovering." Course content is still provided for the learner but is individualized such that learners meet their educational needs in a more effective manner. Self-directed learning allows a continuing openness to new experiences,

motivation for self-directed learning, and ultimately cultivation of creativity. The facilitative teacher is involved with the preparation of learning environments conducive to such motivation and creativity. Self-directed learning promotes personal development throughout life and instills a life-long desire to seek new knowledge, which is highly desirable in the field of medicine. The third principle of the learner-centered approach requires involvement of the learners in designing and implementing the curriculum. The student should be able to choose content according to individual needs, and the teacher should provide guidance on that subject. In this collaborative environment of learning, students and faculty share equal responsibility for the learning process. Weimer described assessment and feedback as the fourth principle, in which both learners and teachers should be able to assess performance and provide feedback. This principle has already been described in our model in conjunction with the principles of deliberate practice and objective assessment. Weimer's fifth and final principle is learners' autonomy, which gives learners independence to manage the curriculum and lead the learning process, providing them with ownership of it. This ultimately creates confidence and fosters the self-directed, life-long learning habits developed through a learner-centered approach. Learner-centered training of residents can also help incorporate the principles of deliberate practice and objective assessment in a single training module. For example, a recent study by Reh et al.18 showed continued improvement in otolaryngology in-service examination scores after implementation of this learner-centered approach in their program. Application of these learner-centered principles, along with an individual's characteristics and attributes such as learning style, personality, and psychometric traits such as emotional intelligence, will develop into a personalized model of training with more focus on learners.

Current Challenges to the Competency-Based Model

The Accreditation Council for Graduate Medical Education expects programs to identify learning objectives based on the six core competencies, assess objectively the residents’ attainment of these competencies, and use educational outcome data to improve individual resident and overall program performance. Fulfilling ACGME competency requirements has proved to be more challenging than expected. It requires much commitment and time on the part of program directors (PDs) to comply with these requirements and document such compliance. Several surveys have been conducted in various specialties to identify programs’ compliance with the work-hour limitations, implementation of the ACGME outcome project, and challenges faced by PDs in implementing a competency-based education program. In a survey of otolaryngology PDs, respondents cited workload and inadequate time as their most important challenges.48 In addition, they named limited faculty and resident buy-in, lack of adequate financial support, and minimal understanding of ACGME competencies as other major hurdles. These findings were augmented by the results of a systematic review comprising the opinions of 1,076 PDs working in five surgical specialties, three medical specialties, and one hospital-based ancillary specialty.37 According to that review, PDs reported that medical knowledge and patient care among the core competencies were the most frequently assessed, whereas system-based practice and practice-based learning were deemed most problematic to evaluate. Teaching and assessing these competencies can understandably be difficult because of the limited understanding of these concepts, shortage of experts to teach the new content, and lack of the assessment tools. The development of our competency-based model can serve as a platform for other residency programs to adopt a CBM based on deliberate practice; however, some limitations specific to this model should be kept in mind. 
For one, this model was developed and tested in a single tertiary care hospital. However, similar models have been developed and successfully implemented at other institutions, such as the University of Toronto31 and Imperial College London.49 In that context, our study provides additional evidence for the success of this model. Another limitation resulting from implementation in a single program was the small sample size, which was mitigated by the study's longitudinal design spanning several years. Familiarity bias is another limitation that might have affected the OSATS evaluations of residents in the operating room; the authors tried to overcome this by introducing blinded, video-based assessments. In addition, no tools were developed for nontechnical skills in this program. Nontechnical skills such as professionalism, decision making, and interpersonal skills are difficult to assess objectively in real-time situations. Although rating scales such as NOTSS, NOTECHS, and OTAS are in the early stages of testing and development,50 future endeavors are needed to train and assess competency in these domains. Regardless of these limitations, this is the first longitudinal study with educational outcomes in the field of otolaryngology aiming to enhance surgical skills and shorten the learning curves of trainees.

Fig. 4. The four learning styles: accommodating, diverging, converging, and assimilating, each formed by two of the four learning quadrants. Source: Laeeq et al.46 [Color figure can be viewed in the online issue, which is available at www.laryngoscope.com.]

CONCLUSION

Competency-based medical education is a concept that is here to stay. The fact that the ACGME has not provided concrete guidelines on the development and implementation of such programs remains a challenge. Even so, we believe there is an opportunity for individual specialties to develop, implement, and refine their own milestones. In addition, an evaluation matrix to monitor the progress of individual residents and programs is needed. On the surface this is a daunting task, but sharing resources and lessons learned within and among specialties can overcome many of the aforementioned challenges. Novel strategies such as wider use of simulation modalities, deliberate practice, and a learner-centered approach will further assist educators in achieving their goals. Ongoing rigorous research on factors that enhance residents' learning curves is critically needed to maintain and raise the standards of future practitioners in our specialty. Routinely involving researchers with doctorates in education, alongside surgeons, in this process can have far-reaching benefits for these efforts. The ultimate beneficiaries of this movement will be our current and future patients.

BIBLIOGRAPHY

1. Franzese CB, Stringer SP. The evolution of surgical training: perspectives on educational models from the past to the future. Otolaryngol Clin North Am 2007;40:1227–1235.
2. Implementing Milestones and Clinical Competency Committees 2013. www.acgme.org/acgmeweb/Portals/0/PDFs/ACGMEMilestones-CCCAssesmentWebinar.pdf. Accessed September 4, 2014.
3. Sonnadara RR, Mui C, McQueen S, et al. Reflections on competency-based education and training for surgical residents. J Surg Educ 2014;71:151–158.
4. Stewart CM, Masood H, Pandian V, et al. Development and pilot testing of an objective structured clinical examination (OSCE) on hoarseness. Laryngoscope 2010;120:2177–2182.
5. Ahmed A, Ishman SL, Laeeq K, Bhatti NI. Assessment of improvement of trainee surgical skills in the operating room for tonsillectomy. Laryngoscope 2013;123:1639–1644.
6. Diaz Voss Varela DA, Malik MU, Thompson CB, Cummings CW, Bhatti NI, Tufano RP. Comprehensive assessment of thyroidectomy skills development: a pilot project. Laryngoscope 2012;122:103–109.
7. Francis HW, Masood H, Chaudhry KN, et al. Objective assessment of mastoidectomy skills in the operating room. Otol Neurotol 2010;31:759–765.
8. Lin SY, Laeeq K, Ishii M, et al. Development and pilot-testing of a feasible, reliable, and valid operative competency assessment tool for endoscopic sinus surgery. Am J Rhinol Allergy 2009;23:354–359.
9. Laeeq K, Waseem R, Weatherly RA, et al. In-training assessment and predictors of competency in endoscopic sinus surgery. Laryngoscope 2010;120:2540–2545.
10. Laeeq K, Francis HW, Varela DA, Malik MU, Cummings CW, Bhatti NI. The timely completion of objective assessment tools for evaluation of technical skills. Laryngoscope 2012;122:2418–2421.
11. Laeeq K, Bhatti NI, Carey JP, et al. Pilot testing of an assessment tool for competency in mastoidectomy. Laryngoscope 2009;119:2402–2410.
12. Laeeq K, Infusino S, Lin SY, et al. Video-based assessment of operative competency in endoscopic sinus surgery. Am J Rhinol Allergy 2010;24:234–237.
13. Francis HW, Masood H, Laeeq K, Bhatti NI. Defining milestones toward competency in mastoidectomy using a skills assessment paradigm. Laryngoscope 2010;120:1417–1421.
14. Laeeq K, Lin SY, Varela DA, Lane AP, Reh D, Bhatti NI. Achievement of competency in endoscopic sinus surgery of otolaryngology residents. Laryngoscope 2013;123:2932–2934.
15. Malik MU, Varela DA, Park E, et al. Determinants of resident competence in mastoidectomy: role of interest and deliberate practice. Laryngoscope 2013;123:3162–3167.
16. Francis HW, Malik MU, Diaz Voss Varela DA, et al. Technical skills improve after practice on virtual-reality temporal bone simulator. Laryngoscope 2012;122:1385–1391.
17. Ahmed A, Ahmad M, Stewart CM, Francis HW, Bhatti NI. Effect of distractions on operative performance and ability to multitask: a case for deliberate practice. Laryngoscope 2015;125:837–841. doi:10.1002/lary.24856. Epub 2014.
18. Reh DD, Ahmed A, Li R, Laeeq K, Bhatti NI. A learner-centered educational curriculum improves resident performance on the otolaryngology training examination. Laryngoscope 2014;124:2262–2267. doi:10.1002/lary.24703. Epub 2014.
19. Bosk CL. Forgive and Remember: Managing Medical Failure. Chicago, IL: University of Chicago Press; 1979.
20. Athanasiou T, Debas H, Darzi A. Key Topics in Surgical Research and Methodology. London, UK: Springer-Verlag; 2010.
21. Sakorafas GH, Tsiotos GC. New legislative regulations, problems, and future perspectives, with a particular emphasis on surgical education. J Postgrad Med 2004;50:274–277.
22. Walter AJ. Surgical education for the twenty-first century: beyond the apprentice model. Obstet Gynecol Clin North Am 2006;33:233–236.
23. Jaffer A, Bednarz B, Challacombe B, Sriprasad S. The assessment of surgical competency in the UK. Int J Surg 2009;7:12–15.
24. Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: a systematic review of published definitions. Med Teach 2010;32:631–637.
25. Ahmed A, Reh DD, Bhatti NI. Technical skills and outcomes assessment in endoscopic sinus surgery: surgical competency matters. Otorinolaringologia 2014;64:153–156.
26. Benner P. From novice to expert. Am J Nurs 1982;82:402–407.
27. Bhatti NI, Cummings CW. Competency in surgical residency training: defining and raising the bar. Acad Med 2007;82:569–573.
28. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004;79(10 suppl):S70–S81.
29. Fitts PM, Posner MI. Human Performance. Belmont, CA: Brooks/Cole; 1967.
30. Sidhu RS, Grober ED, Musselman LJ, Reznick RK. Assessing competency in surgery: where to begin? Surgery 2004;135:6–20.
31. Faulkner H, Regehr G, Martin J, Reznick R. Validation of an objective structured assessment of technical skill for surgical residents. Acad Med 1996;71:1363–1365.
32. Roberson DW, Kentala E, Forbes P. Development and validation of an objective instrument to measure surgical performance at tonsillectomy. Laryngoscope 2005;115:2127–2137.
33. Williams RG, Verhulst S, Colliver JA, Dunnington GL. Assuring the reliability of resident performance appraisals: more items or more observations? Surgery 2005;137:141–147.
34. Stack BC Jr, Siegel E, Bodenner D, Carr MM. A study of resident proficiency with thyroid surgery: creation of a thyroid-specific tool. Otolaryngol Head Neck Surg 2010;142:856–862.
35. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990;65:63–67.
36. Zirkle M, Atplin MA, Anthony R, et al. Objective assessment of temporal bone drilling skills. Ann Otol Rhinol Laryngol 2007;116:793–798.
37. Malik MU, Diaz Voss Varela DA, et al. Barriers to implementing the ACGME Outcome Project: a systematic review of program director surveys. J Grad Med Educ 2012;4:425–433.
38. Laeeq K, Pandian V, Skinner M, et al. Learning curve for competency in flexible laryngoscopy. Laryngoscope 2010;120:1950–1953.
39. Wiet GJ, Stredney D, Kerwin T, et al. Virtual temporal bone dissection system: OSU virtual temporal bone system: development and testing. Laryngoscope 2012;122(suppl 1):S1–S12.
40. Fried MP, Satava R, Weghorst S, et al. The use of surgical simulators to reduce errors. In: Henriksen K, Battles JB, Marks ES, Lewin DI, eds. Advances in Patient Safety: From Research to Implementation. Vol 4: Programs, Tools, and Products. Rockville, MD: Agency for Healthcare Research and Quality; 2005.
41. Steehler MK, Chu EE, Na H, Pfisterer MJ, Hesham HN, Malekzadeh S. Teaching and assessing endoscopic sinus surgery skills on a validated low-cost task trainer. Laryngoscope 2013;123:841–844.
42. Uribe JI, Ralph WM Jr, Glaser AY, Fried MP. Learning curves, acquisition, and retention of skills trained with the endoscopic sinus surgery simulator. Am J Rhinol 2004;18:87–92.
43. Zhao YC, Kennedy G, Yukawa K, Pyman B, O'Leary S. Can virtual reality simulator be used as a training aid to improve cadaver temporal bone dissection? Results of a randomized blinded control trial. Laryngoscope 2011;121:831–837.
44. Weimer M. Learner-Centered Teaching: Five Key Changes to Practice. San Francisco, CA: Jossey-Bass; 2002.
45. Kolb DA, Kolb AY. Learning styles and learning spaces: a review of multidisciplinary application of experiential learning theory in higher education. Acad Manage Learn Educ 2005;4:193–212.
46. Laeeq K, Weatherly RA, Carrott A, Pandian V, Cummings CW, Bhatti NI. Learning styles in two otolaryngology residency programs. Laryngoscope 2009;119:2360–2365.
47. Varela DA, Malik MU, Laeeq K, et al. Learning styles in otolaryngology fellowships. Laryngoscope 2011;121:2548–2552.
48. Laeeq K, Weatherly RA, Masood H, et al. Barriers to the implementation of competency-based education and assessment: a survey of otolaryngology program directors. Laryngoscope 2010;120:1152–1158.
49. Crochet P, Aggarwal R, Dubb SS, et al. Deliberate practice on a virtual reality laparoscopic simulator enhances the quality of surgical technical skills. Ann Surg 2011;253:1216–1222.
50. Sharma B, Mishra A, Aggarwal R, Grantcharov TP. Non-technical skills assessment in surgery. Surg Oncol 2011;20:169–177.
