Research Report

Assessing Effective Teaching: What Medical Students Value When Developing Evaluation Instruments

Jeffrey E. Pettit, PhD, Rick D. Axelson, PhD, Kristi J. Ferguson, PhD, and Marcy E. Rosenbaum, PhD

Abstract

Purpose
To investigate what criteria medical students would value and use in assessing teaching skills.

Method
Fourth-year medical students at the University of Iowa Carver College of Medicine enrolled in a teaching elective course are required to design and use an evaluation instrument to assess effective teaching. Each class uses a similar process in developing their instruments. Since the first class in spring 2007, 193 medical students have created 36 different instruments. Three faculty evaluation experts conducted a thematic analysis of the instruments and coded the information according to what was being evaluated and what types of ratings were indicated. The data were submitted to a fourth faculty reviewer, who synthesized the information and adjusted the codes to better capture the data. Common themes and categories were detected.

Results
Four themes were identified: content (instructor knowledgeable, teaches at level of learner, practical information), learning environment, teacher personal attributes, and teaching methods. Thirty-two descriptors were distinguished across the 36 instruments. Thirteen descriptors were present in 50% or more of the instruments. The most common rating systems were Likert scales and open comments.

Conclusions
Fourth-year medical students can offer an eclectic resource for evaluating teaching in the classroom and the clinic. Using the descriptors that were identified in greater than 50% of the evaluation instruments will provide effective measures that can be incorporated into medical teacher evaluation instruments.

Please see the end of this article for information about the authors. Correspondence should be addressed to Dr. Pettit, Office of Consultation and Research in Medical Education, Carver College of Medicine, University of Iowa, 500 Newton Rd., Iowa City, IA 52242; telephone: (319) 335-9910; e-mail: jeffrey-pettit@uiowa.edu.

Acad Med. 2015;90:94–99. First published online August 5, 2014. doi: 10.1097/ACM.0000000000000447

Introduction

Medical schools continuously evaluate programs, teachers, and students in multiple ways. They are mandated to perform evaluations by requirements from external accrediting agencies, internal review boards, promotion and tenure guidelines, performance assessment, program review, and curriculum development. To assess teacher effectiveness and to ensure that students are receiving appropriate instruction, every medical school uses some type or types of evaluation instrument. Instruments vary in terms of characteristics, behaviors or traits to be measured, length, types of rating scales, and inclusion or exclusion of sections for comments.

Medical schools have invested many hours in committee work, engaging experts in evaluation to create teaching evaluation instruments applicable across the continuum of classrooms and clerkships. These committees typically consist of medical school faculty members, evaluation experts, administrative personnel, and occasionally a medical student or two. Once the evaluation instrument has been developed, it is tested on a sample group, studied for validity and reliability, refined, and then integrated into the academic health center.1–7 The developers recognize that their instrument may have inherent flaws, such as whether students interpret questions differently from the intended meaning (or differently from each other), form judgments using dissimilar and unexpected criteria, or select response categories idiosyncratically.1 Student ratings might be biased by the initial interest of students, instructor reputation, and instructor enthusiasm.8 Various levels of learners perceive teaching differently, and different teaching skills are needed in the operating room and in inpatient and outpatient settings.9 In a survey of deans, department chairs, and faculty, respondents expressed concern that student evaluations are often "unfocused and undisciplined," little more than "testimonials," and often based on faculty "popularity" and "entertainment value."10

Medical students are the primary users of evaluation instruments to assess teachers. Because most students are not involved in the development of evaluation instruments, they may not be aware of the reasons why a criterion was selected or phrased in a certain way. Additionally, medical students receive little instruction in how to use the instrument appropriately, what the ratings represent, and how the results are incorporated into the academic health center. It is not uncommon to hear complaints from medical students about not being able to interpret certain items or about rating scales that do not fit the particular characteristics of their teachers. In the end, medical students try their best and complete instruments on the basis of previous experiences and fractional knowledge of evaluation.

Multiple studies have examined how to assess quality of teaching. Speer and Elnicki10 categorized key characteristics of exemplary teachers: (1) enhancing the learning environment, which includes tailoring learning experiences to student abilities, actively involving learners, and providing specific feedback; and (2) enhancing medical knowledge, which includes giving short lectures and bedside demonstrations and providing information that is current, relevant, and practical to patient care.


The authors indicated that student evaluations identifying good teachers have been questioned because of the variable reliability of both evaluation instruments and students as raters. They recommended that these characteristics of excellent teachers be used to design survey instruments and that appropriate behavioral anchors be incorporated as a guide for students completing the instruments.

Only limited research has examined the evaluation of teachers from the medical student perspective. In a qualitative study, Schiekirka et al8 elicited third- and fourth-year medical students' views on the purpose of evaluation, indicators of teaching quality, evaluation tools, and possible consequences drawn from evaluation data. In focus group interviews, students described teaching quality as dependent on content, process, teacher and student characteristics, and learning outcomes. They also indicated that evaluation instruments need to capture actual learning outcomes and judge procedural and organizational aspects of teaching. Students preferred open questions over scaled questions and recommended a maximum of 15 questions on evaluation instruments.

In another study, Billings-Gagliardi et al1 investigated what medical students are thinking as they complete a typical basic science course evaluation. A small sample of students participated in "think-aloud" cognitive interviews, voicing their thoughts while completing a typical evaluation instrument that included items on overall course design, educational materials and methods, and faculty teaching. Participating medical students were often uncertain about the meaning of educational concepts. When rating teaching effectiveness, they considered a number of factors instead of, or in addition to, actual classroom teaching performance, and their ratings tended to skew toward the positive end of the scale. Students consistently described highly rated faculty teachers with certain characteristics: "helps students understand and retain material"; "projects passion and/or enthusiasm about the subject"; "teaches to a level of first year students"; "makes material interesting"; and "related to students, supportive, patient and establishes rapport."


Although students’ perspectives on teaching evaluations have been examined in previous studies, the majority of these have been based on getting student (and faculty) reactions to existing instruments. Additionally, students have had very little input regarding the criteria or design of the evaluation instrument. In this Research Report, we investigate what criteria medical students would value in assessing the teaching skills of their teachers. Would students choose the same characteristics as their institution? Would their criteria be more focused on the instructors’ personal traits or presentation skills? What type(s) of rating systems would students prefer? If given the chance to develop their own instrument to evaluate effective teaching, what would medical students design? We hypothesized that fourth-year medical students in our teaching elective course may have gained a unique perspective through their educational experience and might be able to identify important issues that are typically overlooked in teacher evaluation processes. We posited the following: 1. Fourth-year medical students are able to tailor the evaluation to the medical school curriculum because they have experienced the entire curriculum. They can provide insiders’ knowledge and highlight aspects that faculty or “other experts” might not be aware of because they only see a snapshot of the full curriculum. 2. During the teaching elective, students acquire more knowledge about assessment/evaluation than a typical medical student has and can apply their requisite technical knowledge to evaluating effective teaching. 3. Participating in a group project might elicit common sentiments and leverage collective thinking about evaluation methods. Through this reflective activity, the students could merge insider and technical knowledge skills. Method

Since spring 2007, the University of Iowa Carver College of Medicine (UICCOM) has offered a four-week teaching elective course for interested fourth-year medical students to receive training in advanced teaching skills so that they are better prepared to be teachers as residents. The course is offered six times during the academic year, and class size ranges from 2 to 10 students. Topics covered include defining characteristics of effective teachers (classroom and clinical), evaluating teaching, learning styles, orienting new learners, interactive teaching, teaching psychomotor skills, effective feedback, small-group teaching, clinical teaching, and hidden curriculum. During the course, students develop a teaching statement, write a case-based scenario, conduct a literature search, and have two in-class presentations video recorded. Additionally, one of the course requirements is that each class, as a group, develops its own teaching evaluation instrument. The purpose of going through this process is twofold: (1) to help students understand how difficult it is to create an effective evaluation instrument, and (2) to help students understand that one instrument does not adequately cover both a classroom and a clinical setting. The students then use their instrument to assess teaching in either a classroom or clinical setting.

Instrument development

Early in the course, during the second class, students discuss characteristics they believe exemplify effective teachers, both during and before medical school. The facilitator questions and challenges them as to why these characteristics are so important to them; this stimulates lively discussions and sparks stories of specific examples. Before the next class, students are required to read two articles about developing instruments (Copeland and Hewson2 and Wotruba and Wright11). During a three-hour class in the first week of the course, the students discuss, organize, and design their instrument. The instrument development process is described in greater detail in Box 1. Students are then required to use their instrument to evaluate seven different educational sessions of their own choosing (either classroom or clinical). After using the instrument, the student brings the completed evaluation to the next class so that it can be discussed.



Box 1
Process in the University of Iowa Carver College of Medicine's Teaching Elective Class for Developing an Evaluation Instrument

• Class begins with brainstorming all possible characteristics of the instructor, the environment, and other characteristics to be evaluated. A scribe lists all of the recommended characteristics. The content of a given educational session to be evaluated is of lesser importance because the focus is on teaching skills. The facilitator redirects the students so that they do not get bogged down in defining or figuring out how to measure an item, keeping the focus on recalling desirable characteristics. After exhausting all ideas of possible characteristics, the students are given a break and a final chance to identify any other characteristics.

• In the next phase, the class determines whether to accept each item as a possible measure, to group the characteristics into common behaviors, or to set a specific number of criteria and meld the characteristics under each set. Students clarify and define any characteristic and determine, by consensus, whether it should remain as an individual characteristic or be grouped with other similar characteristics. The usual student response is to group the characteristics. The facilitator's role is to question the meaning and understanding of the items and to ensure that all students have a say in the final selection.

• In the final phase, the students identify how the behaviors will be rated. The facilitator explains that any or all of the following are possibilities: Yes/No; Likert scale (1–5 or some other number); −/0/+; does not meet expectations/meets expectations/exceeds expectations; check box for the presence of the item; a line with graduated markings; a line with no markings; a bell curve; and only comment boxes. The class then decides which of the rating scales to use with various items. Additional information, such as spaces for the instructor's name, student's name, date, and venue (lecture, rounds, small group, etc.), is added to the instrument. Once a draft of the student-developed instrument has been created, students are given an opportunity to review instruments created by previous classes and modify their newly created instrument as they deem appropriate.
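The mix of rating options described in Box 1 (different items on one instrument may use different scales) lends itself to a simple data model. The following is an illustrative sketch only; the course produces paper instruments, and none of these class or field names come from the article:

```python
from dataclasses import dataclass, field
from enum import Enum


class RatingScale(Enum):
    """Rating options the facilitator offers the class (see Box 1)."""
    YES_NO = "yes/no"
    LIKERT = "Likert scale"
    MINUS_ZERO_PLUS = "-/0/+"
    EXPECTATIONS = "does not meet/meets/exceeds expectations"
    CHECKBOX = "check box for presence of the item"
    GRADUATED_LINE = "line with graduated markings"
    UNMARKED_LINE = "line with no markings"
    BELL_CURVE = "bell curve"
    COMMENTS_ONLY = "comment box only"


@dataclass
class Item:
    descriptor: str     # e.g., "engagement", "level of learner"
    prompt: str         # wording the class agreed on by consensus
    scale: RatingScale  # each item may use a different scale


@dataclass
class Instrument:
    class_id: str       # deidentified label, e.g., "instrument-07"
    venue: str          # lecture, rounds, small group, etc.
    items: list[Item] = field(default_factory=list)


# Hypothetical example: two items rated on different scales.
instrument = Instrument(
    class_id="instrument-07",
    venue="lecture",
    items=[
        Item("engagement", "Actively involves students", RatingScale.LIKERT),
        Item("objectives-pre", "Identified objectives at the start", RatingScale.YES_NO),
    ],
)
```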

Analysis of instruments

From February to June 2013, we analyzed all instruments created by teaching elective students from 2007 to 2013. A multistep process of thematic analysis12,13 identified the core themes that represent the main characteristics and areas students believed were important to evaluate in teacher performance. The analysis team was made up of three faculty experts in evaluation (two statisticians [R.D.A., Dr. Clarence D. Kreiter] and one authority on evaluation [K.J.F.]) at UICCOM and an educational consultant (M.E.R.) with expertise in qualitative methods. To develop a preliminary codebook, each team member carefully reviewed all existing evaluation instruments developed in the teaching elective and identified common evaluation themes and the types of rating systems specified. Prespecified codes were purposely not disclosed, so as not to influence the coders. All instruments were deidentified and given a number so that there were no identifiers indicating which class created the instrument or which students made up the class. The educational consultant then reviewed and synthesized the themes identified by each evaluation expert and, in consensus with the rest of the team, developed a refined group of themes to serve as a guide for the in-depth coding of each evaluation instrument.

One of us (J.E.P.) then systematically searched and coded all of the items in all of the evaluation instruments to identify the major theme into which each item fit. This allowed for a numerical tallying of occurrences of particular words and descriptors, as well as of overall themes. This level of coding was validated and clarified by the educational consultant (M.E.R.). A third phase of analysis compared the themes, descriptors, and measurement scales in the teaching elective instruments with those in the evaluation instrument currently used at UICCOM.
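Computationally, the tally described above amounts to counting how many of the 36 instruments contain each coded descriptor and expressing that count as a percentage. A minimal sketch of that bookkeeping, assuming each coded instrument has been reduced to a set of descriptors (the data below are hypothetical, not the study's records):

```python
from collections import Counter

# Hypothetical coded data: each deidentified instrument is reduced to the
# set of descriptors its items were coded into (the real study had 36).
coded_instruments = {
    "instrument-01": {"engagement", "level of learner", "enthusiasm"},
    "instrument-02": {"engagement", "respect", "handouts"},
    "instrument-03": {"level of learner", "engagement", "respect"},
}

# Count the instruments containing each descriptor; using sets means a
# descriptor is counted once per instrument (presence, not repetition).
tally = Counter(
    descriptor
    for descriptors in coded_instruments.values()
    for descriptor in descriptors
)

n = len(coded_instruments)
for descriptor, count in tally.most_common():
    # Percentages reported as in Table 1, e.g., 33/36 -> 91.7%.
    print(f"{descriptor}: {count} ({100 * count / n:.1f}%)")
```

With the study's actual coded data, the same presence counts divided by 36 would reproduce the percentages reported in Table 1 (e.g., 33/36 = 91.7% for level of learner).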

Ethical approval

This research was submitted to UICCOM's institutional review board, which deemed it not human subject research; therefore, approval was not required. No student names or identifiers were associated with any instrument, to protect students' identities.

Results

Since spring 2007, 193 fourth-year medical students created a total of 36 evaluation instruments in the teaching elective course at UICCOM. After creating its instrument, each class can examine previously developed instruments. No group has changed its instrument after looking at the others; each group feels it has created the "best" instrument and tends to criticize what other classes have developed. Although there may be similar criteria across some instruments, no two are the same in appearance and evaluation strategy. A final component considered is whether the class wants to include some type of overall or gestalt-type rating. Some classes have chosen not to, whereas others have used such ratings as the pain scale; filling in stars, similar to movie ratings; pictures of positive, neutral, and negative symbols; or some type of continuum scale.

First-level analysis by each reviewer resulted in similar themes. The educational consultant (M.E.R.) synthesized the themes identified by the previous reviewers, resulting in four final themes: (1) teacher personal attributes (enthusiasm, personality traits, professionalism); (2) learning environment (respect, grading fairness, rapport, tailored to the audience); (3) content (relevance, intellectual substance); and (4) teaching methods (organization, innovative and interactive methods).

In the second level of analysis we sought to quantify how many evaluation instruments contained a particular descriptor and whether any group used a descriptor in different ways. For example, one student-developed instrument may ask, "Did the teacher clearly identify the course objectives?" whereas another might ask, "Did the teacher refer to and/or summarize the objectives?" In both cases, the descriptor refers to "objectives" but in different contexts. Another example: "Did the instructor use PowerPoint?" versus "Was PowerPoint used effectively/appropriately?"

Table 1 shows a breakdown of the descriptors and how frequently they appeared on evaluation instruments. Engagement and level of the learner were the most commonly identified descriptors, occurring in over 90% of the instruments. Students assessed engagement as "engaged the audience with interactive material," "attempted to establish rapport with students," "invited student participation," and "actively involves students." Level of the learner was measured with "taught at a level appropriate for audience," "content appropriate for level of audience," "delivers relevant and appropriate amount of material," and "is the lecturer flexible to the audience's needs?" The four highest-occurring descriptors (engagement, level of the learner, enthusiasm, respect) represented each of the four themes (teaching methods, content, teacher personal attributes, learning environment), respectively. Thirteen descriptors occurred in 50% or more of the student-developed evaluation instruments. Variations of the Likert scale and comment boxes were used in 88.9% (32/36) of the evaluation instruments.



Table 1
Themes and Descriptors From 36 Teaching Evaluation Instruments Developed by Fourth-Year Medical Students From 2007 to 2013 at the University of Iowa Carver College of Medicine

Theme and descriptor                 No. (%) of instruments containing descriptor

Content
  Level of learner                   33 (91.7)
  Knowledgeable                      24 (66.7)
  Practical content                  10 (27.8)

Learning environment
  Respect                            29 (80.6)
  Learning environment(a)            21 (58.3)
  Independent learning               12 (33.3)
  Team                                3 (8.3)

Teaching methods
  Engagement                         34 (94.4)
  Objectives—pre(b)                  27 (75.0)
  Time management                    27 (75.0)
  Organization                       25 (69.4)
  Technology                         23 (63.9)
  Handouts                           18 (50.0)
  Questions—teaching                 17 (47.2)
  Teaching modalities—general        16 (44.4)
  Expectations                       13 (36.1)
  Objectives—post(b)                 13 (36.1)
  Preparation                        12 (33.3)
  Feedback—giving                    11 (30.6)
  Feedback—receiving                 10 (27.8)
  Teaching modalities—summary         6 (16.7)
  Teaching modalities—illustrates     5 (13.9)
  Creativity                          2 (5.6)

Teacher personal attributes
  Enthusiasm                         31 (86.1)
  Communications—verbal              23 (63.9)
  Approachable                       20 (55.6)
  Communications—nonverbal           16 (44.4)
  Personality traits                 11 (30.6)
  Motivational                        9 (25.0)
  Humor                               7 (19.4)
  Appearance                          5 (13.9)
  Role model                          2 (5.6)

(a) "Learning environment" is both a theme and a descriptor because the theme includes such things as atmosphere, learner centered, nonthreatening, and conducive to learning, whereas the descriptor specifically refers to "positive learning environment."
(b) "Objectives—pre" indicates that the instructor identified his or her learning objectives at the beginning of the learning; "Objectives—post" indicates that the instructor referred back to the initial objectives and/or summarized the objectives at the end of the learning.

The Likert scales ranged from 2 to 10 anchors and included one descriptive Likert scale (written anchors instead of numbers). The next most common type of rating scale was Yes/No (22.2% [8/36]). An overall or gestalt-type rating was used in 14 instruments (38.9%). The more frequently a descriptor appeared, the greater the likelihood that multiple rating systems were used to assess it. For example, all seven types of rating systems were used to measure the top four descriptors (engagement, level of learner, enthusiasm, respect).

In the third phase of analysis we compared the student-developed instruments with the evaluation instrument currently used by UICCOM (see Table 2). Of the 10 items on UICCOM's evaluation instrument, 4 occurred in greater than 80% of the student-developed instruments, whereas 4 other items were present in less than 40% of the instruments. Two items did not directly correspond to any descriptors used in the student-developed instruments. The UICCOM instrument uses a Likert rating scale and comments, which were the two most commonly used rating systems selected by the students.

The student-developed instruments, examined collectively, yielded 13 descriptors that appeared on 50% or more of the instruments and that defined quality teaching. The lowest number of items on an instrument was 3, and the highest was 42. Averaging the number of items (587) across all 36 evaluation instruments equates to 16.3 items per instrument. Of the 36 instruments, 24 (66.7%) focused solely on classroom teaching. The remaining 12 (33.3%) were developed such that they could be used in either a classroom or clinical setting.

Discussion and Conclusions

Fourth-year medical students provide distinctive insights regarding teaching evaluation criteria and rating methods based on their experience in completing multiple evaluations. Their insights can be used to help create new evaluation instruments or fine-tune current ones. Since spring 2007, 193 senior medical students in the teaching elective at UICCOM identified 4 themes and 32 categories that could potentially be used to assess effective teaching. Analysis of the 36 instruments indicates that although the four themes are consistent, some categories are more highly valued than others (e.g., engagement, level of learner, enthusiasm, respect). Students also recognized that assessment of clinical teaching and classroom teaching would be difficult to capture with one evaluation instrument.

This article contributes to the evaluation literature in several ways. First, very little literature exists in which medical students create an evaluation instrument to assess effective teaching. This report indicates what categories are important to medical students and the type of rating systems they prefer.



Table 2
Comparison of the Existing University of Iowa Carver College of Medicine Faculty and Resident Teaching Evaluation(a) Against the Evaluation Instruments Developed by Its Fourth-Year Medical Students From 2007 to 2013

Items from the existing evaluation                        No. (%) of times concept mentioned in student instruments (N = 36)
1. Conveys expectations to students                        13 (36.1)
2. Gives frequent constructive feedback                    11 (30.6)
3. Demonstrates interest in teaching
   and allots time for it(b)                               31 (86.1)
4. Delegates appropriate responsibility to students        —
5. Provides opportunity for students to observe and
   participate in clinically relevant procedures           —
6. Actively engages students in discussion                 34 (94.4)
7. Shows support and respect for students                  29 (80.6)
8. Shows respect for patients                              29 (80.6)
9. Works well with members of the health care team          3 (8.3)
10. Overall teaching effectiveness                         14 (38.9)

(a) Scale: 1 = strongly disagree; 5 = strongly agree. Both a Likert scale and comments were the two most commonly used rating systems selected by students.
(b) Described in the student-developed evaluation instruments as "enthusiasm."

Second, it describes a constructive process for involving medical students in the development of an evaluation instrument. The process supports students in creative thinking, uses their experiences, and achieves consensus as well as ownership of their instrument. Finally, this longitudinal view shows that medical students have held very similar perspectives over time in assessing effective teaching.

Comparing this research with that of Schiekirka et al,8 one of the few studies that also examines students' perceptions of evaluation, reveals both similarities and differences. Their research described teaching quality as dependent on content, process, teacher and student characteristics, and learning outcomes. Except for learning outcomes, medical students in UICCOM's teaching elective considered the same themes important to include in evaluations. Unlike the students studied by Schiekirka et al, the teaching elective students preferred Likert scale rating systems and comments instead of open-ended questions.

There are several limitations to this study. First, only 25% of each medical school class enrolls in the teaching elective each year. Using the same process with a larger percentage of students may shift the importance of the categories and themes. Second, the initial approach used to analyze the instruments might result in different themes given a different set of evaluation experts. Any preconceived notions about assessing effective teaching may have biased the experts in their review of the 36 instruments. Finally, the coding of all items by the first author (J.E.P.) was conducted manually. Using qualitative software instead might generate a slightly different listing of themes and descriptors.

Future research should proceed in three possible areas. In one, the same process for creating a student-developed evaluation instrument could be implemented at other medical institutions. Will students at other schools have similar perspectives, or will they emphasize different themes and descriptors? On the basis of the literature examining effective teaching characteristics, one would expect the outcomes to be very similar.14–16 Another possibility would be to see whether student-developed instruments are accepted more readily by students than those created through a more typical, faculty-driven approach. Finally, it would be interesting to assess students' attitudes about the difficulty of designing effective evaluation instruments before and after they develop their own, to determine whether this exercise deepens their appreciation for the challenges associated with designing evaluation instruments.

This report provides insights from the medical student perspective, which is not often captured in the literature on evaluating effective teaching.

Understanding how students view evaluation, define good teaching, and arrive at course ratings will improve the reliability and validity of instruments and provide students with a better understanding of how evaluation instruments are used at their academic health center.8 Fourth-year medical students offer a heterogeneous resource for evaluating teaching in the classroom and the clinic. Their exposure to a wide range of teaching makes them a useful resource in determining how effective teaching can be evaluated. Students can offer a varied viewpoint on how teachers should be evaluated and what should be included in instrument development. Allowing fourth-year medical students to provide input may strengthen the instrument and help them understand their institution's evaluation process. Developing an evaluation instrument as part of the teaching elective gives medical students the opportunity to experience the development and implementation of their own evaluation instrument. Using the descriptors that were identified in more than 50% of the student-developed instruments might provide effective measures that can be incorporated into medical teacher evaluation instruments.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: Reported as not applicable.

Dr. Pettit is education consultant, Office of Consultation and Research in Medical Education, Carver College of Medicine, University of Iowa, Iowa City, Iowa.

Dr. Axelson is assistant professor, Department of Family Medicine, and consultant for program evaluation, Office of Consultation and Research in Medical Education, Carver College of Medicine, University of Iowa, Iowa City, Iowa.

Dr. Ferguson is professor, Department of Internal Medicine, and director, Office of Consultation and Research in Medical Education, Carver College of Medicine, University of Iowa, Iowa City, Iowa.

Dr. Rosenbaum is professor, Department of Family Medicine, and consultant for faculty development, Office of Consultation and Research in Medical Education, Carver College of Medicine, University of Iowa, Iowa City, Iowa.

References

1. Billings-Gagliardi S, Barrett SV, Mazor KM. Interpreting course evaluation results: Insights from think-aloud interviews with medical students. Med Educ. 2004;38:1061–1070.
2. Copeland HL, Hewson MG. Developing and testing an instrument to measure the effectiveness of clinical teaching in an academic medical center. Acad Med. 2000;75:161–166.
3. Gerbase MW, Germond M, Nendaz MR, Vu NV. When the evaluated becomes evaluator: What can we learn from students' experiences during clerkships? Acad Med. 2009;84:877–885.
4. Irby DM, Gillmore GM, Ramsey PG. Factors affecting ratings of clinical teachers by medical students and residents. J Med Educ. 1987;62:1–7.
5. McOwen KS, Bellini LM, Morrison G, Shea JA. The development and implementation of a health-system-wide evaluation system for education activities: Build it and they will come. Acad Med. 2009;84:1352–1359.
6. Mullan P, Sullivan D, Dielman T. What are raters rating? Predicting medical student, pediatric resident, and faculty ratings of clinical teachers. Teach Learn Med. 1993;5:221–226.
7. Woloschuk W, Coderre S, Wright B, McLaughlin K. What factors affect students' overall ratings of a course? Acad Med. 2011;86:640–643.
8. Schiekirka S, Reinhardt D, Heim S, et al. Student perceptions of evaluation in undergraduate medical education: A qualitative study from one medical school. BMC Med Educ. 2012;12:45.
9. Beckman TJ, Mandrekar JN. The interpersonal, cognitive and efficiency domains of clinical teaching: Construct validity of a multidimensional scale. Med Educ. 2005;39:1221–1229.
10. Speer AJ, Elnicki DM. Assessing the quality of teaching. Am J Med. 1999;106:381–384.
11. Wotruba TR, Wright PL. How to develop a teacher-rating instrument. J Higher Educ. 1975;46:653–663.
12. Crabtree BF, Miller WL. Using codes and code manuals: A template organizing style of interpretation. In: Doing Qualitative Research in Primary Care. 2nd ed. Thousand Oaks, Calif.: Sage Publications; 1999:163–177.
13. Rice PL, Ezzy D. Qualitative Research Methods: A Health Focus. Melbourne, Australia: Oxford University Press; 1999.
14. Menachery EP, Wright SM, Howell EE, Knight AM. Physician–teacher characteristics associated with learner-centered teaching skills. Med Teach. 2008;30:e137–e144.
15. Srinivasan M, Su-Ting TL, Meyers FJ, et al. "Teaching as a competency": Competencies for medical education. Acad Med. 2011;86:1211–1220.
16. Sutkin G, Wagner E, Harris I, Schiffer R. What makes a good clinical teacher in medicine? A review of the literature. Acad Med. 2008;83:452–466.

