J Interprof Care, Early Online: 1–6
© 2015 Informa UK Ltd. DOI: 10.3109/13561820.2015.1025373
http://informahealthcare.com/jic ISSN: 1356-1820 (print), 1469-9567 (electronic)

ORIGINAL ARTICLE

A pilot study to test the effectiveness of an innovative interprofessional education assessment strategy


Michelle Christine Emmert¹ and Li Cai²

¹College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, Office of Student Affairs, Pomona, CA, USA and ²Graduate School of Education and Information Studies, University of California, Los Angeles, Los Angeles, CA, USA

Abstract

The goals of this quasi-experimental pilot study were to test an assessment tool designed to evaluate students’ teamwork skills, and to assess the effectiveness of an interprofessional education (IPE) course. Participants were health professional students (physical therapy, pharmacy, dental and osteopathic medicine), 24 of whom were second-year students who had previously taken part in an IPE course (experimental group) and 22 of whom were third-year students who had not (control group). Students interacted with a standardized patient and her son during an asynchronous Team Objective Structured Clinical Exam (TOSCE), after which they were scored on their teamwork skills using newly designed teamwork rating scales. Cronbach alpha calculations suggest that the rating scales are reliable when rater scores are aggregated (0.81). Pearson coefficient calculations determined that the teamwork scores of live raters and video raters were significantly correlated (p < 0.0001), suggesting good consistency across these raters, and the experimental group performed significantly better than the control group (p = 0.0003), suggesting that the IPE curriculum is successfully equipping students with teamwork skills. The results of this study contribute to the much needed IPE assessment literature, and suggest that teamwork skills can be taught and effectively assessed using this new rating scale.

Keywords

Assessment, interprofessional education, quasi-experimental, Team Objective Structured Clinical Exam, teamwork

History

Received 5 December 2013; revised 30 November 2014; accepted 1 March 2015; published online 17 June 2015

Correspondence: Michelle Christine Emmert, College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, Office of Student Affairs, 309 East 2nd Street, Pomona, CA 91766, USA. E-mail: [email protected]

Introduction

An estimated 400 000 people die and four million are injured in US hospitals each year as a result of clinical error (Committee on Quality of Health Care in America, 2001; James, 2013). Patients who survived the impact of clinical error experienced a range of problems such as infections, pressure ulcers and objects left in the body (Cimiotti, Aiken, Sloane, & Wu, 2012; Mallow, Pandya, Horblyuk, & Kaplan, 2013; Van Den Bos et al., 2011). The literature reveals that healthcare providers who work together on an interprofessional basis make fewer errors and provide higher quality care (Deering et al., 2011; Einav et al., 2010; Neily et al., 2010; Reeves, Perrier, Goldman, Freeth, & Zwarenstein, 2013). Therefore, a number of institutions have turned to interprofessional education (IPE) as a possible means of improving teamwork, collaboration and communication. While research indicates that healthcare providers who work effectively together make fewer errors, there is a scarcity of literature articulating how health professional students can be effectively educated and assessed on the skills and behaviors known to improve teamwork and collaboration. A recent systematic review found only 15 quantitative studies that rigorously evaluated the effectiveness of IPE as compared with no intervention (Reeves et al., 2013); seven studies reported positive results, and the remaining eight reported neutral or mixed results. While these studies are a step in the right direction, Reeves et al. (2013) conclude that more methodologically sound studies of IPE are needed in order to draw conclusions regarding its effectiveness. The present study seeks to contribute to this much needed literature base by creating and testing a new IPE assessment tool, and using it to assess the skills and behaviors of study participants.

A number of expertly designed competency frameworks have been used in IPE and assessment. Noteworthy examples include the work of the Interprofessional Education Collaborative (IPEC), the Canadian Interprofessional Health Collaborative (CIHC) and the Agency for Healthcare Research and Quality (AHRQ) (CIHC, 2010; IPEC, 2011; King et al., 2008). These organizations have made significant strides in identifying the frameworks and competencies necessary for the assessment of IPE. In addition, the Department of Defense (DoD) and the AHRQ published a document titled “Team Strategies and Tools to Enhance Performance and Patient Safety” (TeamSTEPPS) (King et al., 2008), which identified eight performance criteria essential to interprofessional teamwork. Four of these performance criteria are thought to be teachable: (1) team leadership (the ability to facilitate team problem solving, provide expectations, clarify team members’ roles and synchronize team member contributions);



(2) situation monitoring (the ability to identify lapses in other team members’ actions and provide feedback to facilitate self-correction); (3) mutual support (the ability to recognize workload distribution problems and shift responsibilities to under-utilized team members) and (4) closed-loop communication (following up with the team to ensure the message was received, acknowledging receipt of the message, and clarifying that the message received is the same as the intended message sent).


IPE developments

Informed by TeamSTEPPS, an IPE course was designed at Western University of Health Sciences (WesternU). Having identified the collaborative competencies and criteria from TeamSTEPPS, we next sought the most effective way of assessing the extent to which students had learned the associated skills and behaviors. IPE programs are typically assessed using questionnaires that examine knowledge, attitudes, beliefs and/or satisfaction (Carpenter, Patsios, Szilassy, & Hackett, 2011; Curran, Sharpe, Flynn, & Button, 2010; MacDowell, Glasser, Weidenbacher-Hoper, & Peters, 2014; Norris et al., 2013). Despite the overwhelming use of questionnaires focused on non-performance outcomes, the assessment literature favors the performance assessment approach for interprofessional teamwork skills (Boet et al., 2013; Burn, Nestel, Gachoud, & Reeves, 2013; Oza et al., 2014; Simmons et al., 2011). One such tool used to assess performance of interprofessional teamwork skills is the Team Objective Structured Clinical Exam (TOSCE). In a typical TOSCE, a team of students interacts with one patient at each station. The team is usually comprised of students from several different professions who interact with each patient as a group and communicate face-to-face. The patient starts in the exam room; the student team enters, and team members communicate among themselves regarding the patient’s care. This standard TOSCE setup works well when all included professions routinely work side-by-side providing patient care (e.g. a doctor, nurse and physician assistant in a hospital); it falls short when team members are not routinely part of such a structured team. Because the professions included in this study would not routinely work side-by-side, we pilot tested an innovative TOSCE model that allowed team members to work together asynchronously. Each station was set up as a simulated work setting (e.g. a pharmacy or a dental office), the standardized patient visited each station once, one room after the other, and the healthcare providers collaborated with each other asynchronously via “faxing” of records, phone calls and referral forms.

Background

Western University of Health Sciences (WesternU) is a private, exclusively graduate, health science institution located in an urban area of Southern California, USA. We chose WesternU as the pilot study site because it has an IPE course series that spans the preclinical curriculum. As part of this curriculum development activity, we developed a team-based performance task, a rating scale, a patient chart for student review and detailed standardized patient and healthcare provider training documents. Copies of these documents can be found in the dissertation form of this study (Emmert, 2011).

The performance task was designed by four faculty clinicians and two experienced standardized patient trainers. It was based on an elderly patient who was recently released from the hospital after having a stroke. The standardized patient portraying this role displayed behaviors characteristic of an elderly stroke patient who may have been abused by her caregiver son. The first TOSCE station was set up as the patient’s living room for a visit with a physical therapist, the second as a physician’s office, the third as a Coumadin clinic and the fourth as a dentist’s office. Two groups of four students (each from a different program) entered their respective healthcare stations simultaneously. Each pair of standardized patient/caregivers (SPCGs) then walked from room to room and interacted with and scored each student in their group for 18 min each. This was done three times to process a total of six groups of students. The case was designed such that students encountered errors and atypical observations during the interaction, such as trip hazards, missing test results and signs of elder abuse. These provided opportunities for students to demonstrate their teamwork and communication skills via asynchronous communication (i.e. communicating with other healthcare professionals who were not in the same room with them during the patient visit). A faculty member played the role of standardized healthcare provider and accepted all calls from students who needed to speak to another provider. Students were instructed to verbalize everything they did during the interaction so that the raters knew what they were doing and thinking throughout the task.

The rating scale was designed by a WesternU-based team of nine, comprising four IPE liaisons, the two standardized patient trainers identified above, two IPE assessment specialists and the primary author. The resulting rating scale was reviewed by three external reviewers with PhDs in assessment, one of whom specializes in IPE. The external reviewers were provided with TeamSTEPPS definitions of the performance criteria and asked to indicate which criteria were addressed by each observable behavior. They were also asked to provide suggestions regarding improvements to the rating scale. Only observable behaviors that the majority believed were tied to the specified performance criteria were used in the analysis. Assessment forms are available upon request.

Participant roles

Actors’ role in the TOSCE

Each SPCG pair spent a total of 18 min at each station: 10 min interacting with a student volunteer and 8 min completing a rating scale for each student. Each standardized patient/caregiver pair discussed and indicated on the rating scale joint scores for each question and each student.

Students’ role in the TOSCE

One student from each degree program was randomly assigned to one of six groups responsible for jointly providing care for the standardized patient. Students were given copies of the patient’s medical chart and a brief explanation of why the patient made the appointment. They had the ability to request additional information or ask questions of other healthcare providers during the encounter. At the study’s conclusion, students were debriefed regarding the intentions of the study but did not receive personalized feedback on their performance. If the tool were to be used as part of the University curriculum, personalized feedback would be provided.

Faculty roles in the TOSCE

Two faculty members were designated as standardized healthcare providers. Students called a single phone number regardless of which healthcare professional they wanted to speak to, and the faculty member who answered acted as the requested provider and gave the student the information they needed. The standardized healthcare providers received detailed training to standardize their responses to student questions. A total of eight faculty members watched the live interactions from the control room on the days of data collection, and an additional four watched recorded interactions from their desks in the months following data collection. Each faculty member completed rating scales for 11 (control group) or 12 (experimental group) students.


Methods

This study used a quasi-experimental design focused on performance-based assessment of the “teachable” teamwork competencies from TeamSTEPPS. The research questions were as follows: (1) How are pilot study students rated on teamwork skills? In particular, (a) are there differences in teamwork scores between the experimental and control groups, (b) are there differences in teamwork scores among programs, and (c) are there differences in teamwork scores between genders? (2) Are the resulting rating scales methodologically sound by current assessment standards?


Data collection

A total of 24 second-year students participated in the experimental group: six Doctor of Dental Medicine (DMD) students, six Doctor of Osteopathic Medicine (DO) students, six Doctor of Pharmacy (PharmD) students and six Doctor of Physical Therapy (DPT) students. A total of 22 third-year students participated in the control group: six DO students, six PharmD students and 10 DPT students. Student participants in the experimental group had taken part in nearly two years of IPE training, while students in the control group had not participated in any IPE courses. Fourteen faculty members were recruited from the University; 12 acted as faculty raters and two as standardized healthcare providers. WesternU’s Office of Medical Simulation recruited and trained actors for the study.

At the close of the patient interaction at each station, the SPCG and live faculty raters confirmed that they had the correct prelabeled rating scale, indicated their scores for that station/student and placed the completed rating scale back in the folder. SPCG and live faculty raters handed their completed forms to the researcher immediately following each round of encounters, and video faculty raters returned their completed forms to the researcher after they completed all assigned reviews at their desks.

Data analysis

All data collected during the study were entered into Excel by the primary investigator, double checked by a colleague, and imported into SAS (Cary, NC) and SPSS (Chicago, IL) for analysis. The first wave of calculations provided us with means, frequencies, correlations and alpha values. These calculations allowed us to determine the basic reliability of the rating scales and provided mean scores for the experimental and control groups, as well as teamwork performance scores for each rater group and other similar data. T-tests and Pearson correlation coefficients were also calculated to address the issue of instrument validity. Once these building-block calculations were completed, we calculated a number of one-way analyses of variance (ANOVAs). These enabled us to compare teamwork scores between the experimental and control groups, and to compare teamwork scores between programs and genders in each group. An analysis of covariance (ANCOVA) helped us determine which factor(s) might explain the study results; including demographic covariates in the model was expected to increase statistical power, as it accounts for some of the variability.
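For readers who wish to mirror this pipeline outside SAS and SPSS, a minimal sketch in Python follows. The file name, column names and model formula are hypothetical stand-ins for the study’s data layout, not the authors’ code.

```python
# A minimal sketch (not the authors' code) of the analysis pipeline described
# above, assuming a hypothetical CSV with one row per student and columns for
# group, program, gender and a mean teamwork score per rater type.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("teamwork_scores.csv")  # hypothetical file and column names

# First wave: frequencies and means by group for each rater type.
print(df.groupby("group")[["spcg_score", "live_score", "video_score"]]
        .agg(["count", "mean"]))

# Validity evidence: Pearson correlation between live and video faculty ratings.
r, p = stats.pearsonr(df["live_score"], df["video_score"])
print(f"live vs. video raters: r = {r:.2f}, p = {p:.4f}")

# ANCOVA-style model: group effect on live-rater scores with demographic
# covariates included to absorb some of the variability and increase power.
model = smf.ols("live_score ~ C(group) + C(program) + C(gender)", data=df).fit()
print(model.summary())
```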

Ethics

The University of California, Los Angeles, Institutional Review Board (UCLA/IRB) approved this study, and WesternU determined that the study was exempt. Each student was assigned a pilot study ID number and asked to complete a written informed consent form according to UCLA/IRB requirements.

Results

The goals of this study were to test an assessment tool designed to evaluate students’ teamwork skills, and to assess the effectiveness of WesternU’s IPE course.

Teamwork score differences between groups

The difference between the combined teamwork scores for all raters in the experimental group and the combined teamwork scores for all raters in the control group was significant (p = 0.0031), indicating that students in the experimental group were rated significantly higher than students in the control group. Raw scores were higher in all experimental groups, and Satterthwaite t values indicate significant differences between the experimental and control groups for live raters (t value = 1.24, p = 0.0019) and video raters (t value = 3.19, p = 0.0039). SPCG rater scores trended in the same direction but were not found to be significant (t value = 1.24, p = 0.2234). See Table I for details. These data provide some evidence that WesternU’s curriculum is effectively equipping students with the teamwork skills evaluated by the rating scales, duly noting that the assignment mechanism is not random.
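For illustration, the sketch below shows how a Satterthwaite-corrected comparison of this kind is computed. The score vectors are simulated placeholders drawn from the live-rater moments in Table I, not the study data.

```python
# Sketch of a Satterthwaite-corrected group comparison like those reported
# above, using simulated stand-in scores (moments from Table I, live raters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experimental = rng.normal(loc=2.24, scale=0.28, size=24)
control = rng.normal(loc=1.93, scale=0.36, size=22)

# equal_var=False requests Welch's t-test, whose degrees of freedom use the
# Satterthwaite approximation referenced in the Results.
t, p = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```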


Table I. Teamwork score differences between groups (teamwork scores).

Group                         N    Min    Max    Mean   Standard deviation   95% Confidence interval
Experimental – SPCG raters    24   1.44   2.78   2.13   0.30                 2.00–2.25
Control – SPCG raters         22   1.56   2.67   2.01   0.31                 1.88–2.15
Experimental – Live raters    24   1.74   2.80   2.24   0.28                 2.13–2.36
Control – Live raters         22   1.23   2.64   1.93   0.36                 1.77–2.09
Experimental – Video raters   23   1.50   2.90   2.15   0.38                 1.98–2.31
Control – Video raters        11   1.35   2.41   1.76   0.31                 1.55–1.97

Teamwork score differences among programs

Three one-way ANOVA calculations were performed to determine whether students were rated differently based upon the program in which they were enrolled. The SPCG and live rater sample sizes were 46, and the video rater sample size was 34. The resulting F values (F(3,42) = 1.39, F(3,42) = 0.23 and F(3,30) = 1.42) ranged in significance from p = 0.2506 to p = 0.8768, indicating that SPCG raters, live raters and video raters scored students similarly regardless of the program in which they were enrolled; see Table II for details. This implies that there are no significant program biases built into the utilization of the rating scale.

Table II. Teamwork score differences between programs (mean and standard deviation, SD).

                 DMD             DO              DPT             PharmD
Group            Mean    SD      Mean    SD      Mean    SD      Mean    SD
SPCG raters      2.21    0.19    2.06    0.39    1.96    0.29    2.16    0.25
Live raters      2.32    0.34    2.05    0.29    2.00    0.33    2.15    0.42
Video raters     2.12    0.44    1.98    0.43    1.97    0.36    2.07    0.45
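As an illustration of the calculation behind the program comparisons above, the sketch below reproduces the shape of one such one-way ANOVA (live raters shown). The per-program score vectors are simulated placeholders built from the Table II moments and the combined-sample program counts, not the study data.

```python
# Sketch of one of the three one-way ANOVAs reported above, with simulated
# stand-in scores per program (counts sum to the 46-student sample).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dmd = rng.normal(2.32, 0.34, 6)
do = rng.normal(2.05, 0.29, 12)
dpt = rng.normal(2.00, 0.33, 16)
pharmd = rng.normal(2.15, 0.42, 12)

# Four programs and 46 students give the F(3, 42) reference distribution.
f, p = stats.f_oneway(dmd, do, dpt, pharmd)
print(f"F(3, 42) = {f:.2f}, p = {p:.4f}")
```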

Teamwork score differences between genders

Three independent sample t-tests were calculated to determine whether men and women were rated differently by each rater type. The resulting t values (0.52, 0.66 and 0.26) ranged in significance from p = 0.5121 to p = 0.7937, indicating that SPCG raters, live raters and video raters scored students similarly regardless of their gender; see Table III for details. This implies that there are no significant gender biases built into the utilization of the rating scale.

Table III. Teamwork score differences between genders (teamwork scores).

Group                 N    Min    Max    Mean   Standard deviation   95% Confidence interval
Male SPCG raters      18   1.75   3.00   2.38   0.35                 2.21–2.56
Female SPCG raters    28   1.62   3.13   2.33   0.36                 2.19–2.47
Male live raters      18   1.23   2.80   2.05   0.41                 1.84–2.25
Female live raters    28   1.50   2.80   2.12   0.32                 2.00–2.25
Male video raters     12   1.50   2.70   2.05   0.43                 1.77–2.32
Female video raters   22   1.35   2.90   2.01   0.39                 1.83–2.18


Methodological soundness of rating scales

A series of Pearson product moment correlation coefficients were calculated to provide initial validity evidence. p values between <0.0001 and 0.0050 suggest that, across the board, live raters scored students similarly to the way in which video raters scored students, which indicates consistency across these raters. Video raters also scored students similarly to the way in which SPCG raters scored students in the combined group, which also indicates consistency between these raters (p = 0.0290); see Table IV for details. To explore the reliability of the rating scales, we calculated a number of Cronbach coefficient alphas. In our first set of calculations we included scores from all three rater types (SPCG, live and video raters); however, we discovered that the alpha was markedly higher when we excluded the SPCG rater scores. The alpha for the experimental group was 0.78, the alpha for the control group was 0.87, and the alpha for the two groups combined was 0.81. In each case, we compared the mean teamwork scores for live raters to the mean teamwork scores for video raters. All three alpha values fall within the range that indicates that the rating scales produce internally consistent scores.

Table IV. Statistically significant findings for rating scale.

Group          Correlation between raters   Pearson correlation coefficient   Shared variance   Significance
Experimental   Live and video               0.65                              0.42              0.0009
Control        Live and video               0.78                              0.60              0.0050
Combined       Live and video               0.68                              0.46              <0.0001
Experimental   Live and SPCG                0.42                              0.17              0.0428
Combined       Video and SPCG               0.37                              0.14              0.0290


Discussion

Current assessment strategies rarely approach the assessment of IPE programs from a performance standpoint (Carpenter et al., 2011; Curran et al., 2010; MacDowell et al., 2014; Norris et al., 2013), nor do they often use methodologically sound tools to assess performance (Reeves et al., 2013). This study contributes to the growing literature in which students are required to demonstrate their teamwork skills in a controlled environment while their performance is measured using a set of standardized rating scales. In addition, this study differs from other research in that it assessed students using an asynchronous TOSCE, work which, to our knowledge, has not been previously undertaken.

Third-year students were selected as the control group because they matriculated prior to the start of the IPE curriculum. We were concerned that their additional year of clinical training might close the gap between the experimental and control groups, because some studies have found that students with more clinical experience perform better on OSCEs (James, 2013; Tchorz et al., 2013). However, even with this potentially confounding variable, the less clinically experienced experimental group still outperformed the control group.

Our results also indicate that the rating scales are, in a number of ways, methodologically sound. A meta-analysis of the literature shows that the average Cronbach alpha score for OSCEs is 0.78 (Brannick, Erol-Korkmaz, & Prewett, 2011), and the Cronbach alpha score in this study (for faculty raters only) was 0.81, indicating that faculty raters produced internally consistent scores, a measure of reliability. Some studies have found that a student’s gender can affect how an observer rates the student’s clinical performance (Berg et al., 2015; Carson, Peets, Grant, & McLaughlin, 2010). Our group comparisons showed no differences in the way students were rated based upon their gender, suggesting that the rating scales and raters are not affected by gender. We also analyzed the data for possible biases favoring or hindering students of a given degree program, and none were found. We were unable to identify any published studies that investigated whether an OSCE used to assess students of multiple programs could be biased towards one program or another; therefore, this finding appears to be a new addition to the literature.

In this study, we utilized three different types of raters: live, video and SPCG raters. All live and video raters were faculty members of the university. Research shows that standardized patients and faculty often rate students differently even when trained in the same manner using the same rating scale (Leung, Wang, & Chen, 2012; Lie, Encinas, Stephens, & Prislin, 2010). Our results indicated that live and video-based faculty ratings, and video-based and SPCG ratings, were significantly correlated, while live faculty and SPCG ratings were not. We also found that SPCG ratings were more varied than live or video faculty ratings, which may account for the non-significant findings. Previous researchers do not speculate on whether one rater type is better than another, but rather note that differences sometimes relate to domain type (e.g. humanism or history taking). Similarly, this study was not able to determine the best type or combination of raters. We therefore suggest that future studies explore differences between raters to determine whether there is a best type, or perhaps a best mix, of raters to rate student performance in TOSCEs.

This study had two main limitations. First, as with many studies (Capella et al., 2010; Chiu, 2014; Rosen et al., 2010; Weaver et al., 2010), this work focused on only one element of IPE (its effects on teamwork) as opposed to studying the general effects of IPE. Second, the size of the participant pool and the non-random selection process also limited the results produced by the study. To mitigate the effect of a limited pool, future studies should expand to include other colleges and universities. To mitigate self-selection bias, one might make participation at the end of the second year a required part of the IPE curriculum for randomly selected (or all) students. A benefit of a larger number of participants is that future work could explore in greater detail any similarities or differences between programs in regard to how they perform on specific aspects of teamwork.

Future research should continue to focus on testing and refining methodologically sound assessment tools at the graduate and post-graduate level. Once these instruments are fine-tuned, we suggest that researchers undertake interrupted time-series studies to assess students’ interprofessional skills over time. In future work, we recommend having graduates’ “real world” supervisors (of at least six months) review an online training module on use of the rating scales and then rate their employee based upon their observations and interactions. This is an essential next step to determine whether the interprofessional skills learned in professional school will be used by graduates when they enter practice.

Declaration of interest

The authors report no conflicts of interest. The authors alone are responsible for the writing and content of the paper.

References

Berg, K., Blatt, B., Lopreiato, J., Jung, J., Schaeffer, A., Heil, D. … Veloski, J. (2015). Standardized patient assessment of medical student empathy: Ethnicity and gender effects in a multi-institutional study. Academic Medicine, 90, 105–111.
Boet, S., Bould, M.D., Sharma, B., Reeves, S., Naik, V.N., Triby, E., & Grantcharov, T. (2013). Within-team debriefing versus instructor-led debriefing for simulation-based education: A randomized controlled trial. Annals of Surgery, 258, 53–58.
Brannick, M.T., Erol-Korkmaz, H.T., & Prewett, M. (2011). A systematic review of the reliability of objective structured clinical examination scores. Medical Education, 45, 1181–1189.
Burn, C.L., Nestel, D., Gachoud, D., & Reeves, S. (2013). Board 191 - Program Innovations Abstract: Simulated patients as co-facilitators: Benefits and challenges of the interprofessional team OSCE (Submission #1677). Simulation in Healthcare, 8, 455–456.
Capella, J., Smith, S., Philp, A., Putnam, T., Gilbert, C., Fry, W. … Baker, D. (2010). Teamwork training improves the clinical care of trauma patients. Journal of Surgical Education, 67, 439–443.
Carpenter, J., Patsios, D., Szilassy, E., & Hackett, S. (2011). Outcomes of short course interprofessional education in parental mental illness and child protection: Self-efficacy, attitudes and knowledge. Social Work Education, 30, 195–206.
Carson, J.A., Peets, A., Grant, V., & McLaughlin, K. (2010). The effect of gender interactions on students’ physical examination ratings in objective structured clinical examination stations. Academic Medicine, 85, 1772–1776.
Chiu, C.-J. (2014). Development and validation of performance assessment tools for interprofessional communication and teamwork (PACT). University of Washington. Retrieved from https://dlib.lib.washington.edu/researchworks/handle/1773/25364.
CIHC. (2010). A national interprofessional competency framework. Vancouver: University of British Columbia.
Cimiotti, J.P., Aiken, L.H., Sloane, D.M., & Wu, E.S. (2012). Nurse staffing, burnout, and health care-associated infection. American Journal of Infection Control, 40, 486–490.
Committee on Quality of Health Care in America, Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the twenty-first century. Washington, DC: National Academy Press.
Curran, V., Sharpe, D., Flynn, K., & Button, P. (2010). A longitudinal study of the effect of an interprofessional education curriculum on student satisfaction and attitudes towards interprofessional teamwork and education. Journal of Interprofessional Care, 24, 41–52.
Deering, S., Rosen, M.A., Ludi, V., Munroe, M., Pocrnich, A., Laky, C., & Napolitano, P.G. (2011). On the front lines of patient safety: Implementation and evaluation of team training in Iraq. Joint Commission Journal on Quality and Patient Safety, 37, 350–350.
Einav, Y., Gopher, D., Kara, I., Ben-Yosef, O., Lawn, M., Laufer, N. … Donchin, Y. (2010). Preoperative briefing in the operating room: Shared cognition, teamwork, and patient safety. CHEST Journal, 137, 443–449.
Emmert, M.C. (2011). Pilot test of an innovative interprofessional education assessment strategy. Los Angeles: University of California.
IPEC. (2011). Core competencies for interprofessional collaborative practice: Report of an expert panel. Washington, DC: Interprofessional Education Collaborative.
James, J. (2013). A new, evidence-based estimate of patient harms associated with hospital care. Journal of Patient Safety, 9, 122–128.
King, H., Battles, J., Baker, D., Alonso, A., Salas, E., Webster, J. … Salisbury, M. (2008). TeamSTEPPS: Team strategies and tools to enhance performance and patient safety. Advances in Patient Safety. Rockville: Agency for Healthcare Research and Quality.
Leung, K.-K., Wang, W.-D., & Chen, Y.-Y. (2012). Multi-source evaluation of interpersonal and communication skills of family medicine residents. Advances in Health Sciences Education, 17, 717–726.
Lie, D., Encinas, J., Stephens, F., & Prislin, M. (2010). Do faculty show the ‘halo effect’ in rating students compared with standardized patients during a clinical examination? The Internet Journal of Family Practice, 8. Retrieved from https://ispub.com/IJFP/8/2/8726.
MacDowell, M., Glasser, M., Weidenbacher-Hoper, V., & Peters, K. (2014). Impact of a rural interprofessional health professions summer preceptorship educational experience on participants’ attitudes and knowledge. Education for Health, 27, 177–182.
Mallow, P., Pandya, B., Horblyuk, R., & Kaplan, H. (2013). Prevalence and cost of hospital medical errors in the general and elderly United States populations. Journal of Medical Economics, 19, 1367–1378.
Neily, J., Mills, P., Young-Xu, Y., Carney, B., West, P., Berger, D. … Bagian, J. (2010). Association between implementation of a medical team training program and surgical mortality. Journal of the American Medical Association, 304, 1693–1700.
Norris, J., Carpenter, J., Eaton, J., Guo, J.-W., Lassche, M., Pett, M., & Blumenthal, D. (2013). Board 391 - Research Abstract: Development and construct validation of the Interprofessional Attitudes Scale (IPAS) for assessing the impact of interprofessional simulations (Submission #1233). Simulation in Healthcare, 8, 571–572.
Oza, S.K., Boscardin, C.K., Wamsley, M., Sznewajs, A., May, W., Nevins, A. … Hauer, K.E. (2014). Assessing 3rd year medical students’ interprofessional collaborative practice behaviors during a standardized patient encounter: A multi-institutional, cross-sectional study. Medical Teacher. Advance online publication. doi:10.3109/0142159X.2014.970628.
Reeves, S., Perrier, L., Goldman, J., Freeth, D., & Zwarenstein, M. (2013). Interprofessional education: Effects on professional practice and healthcare outcomes (update). Cochrane Database of Systematic Reviews, 3. doi:10.1002/14651858.CD002213.pub3.

Rosen, M., Weaver, S., Lazzara, E., Salas, E., Wu, T., Silvestri, S. … King, H.B. (2010). Tools for evaluating team performance in simulation-based training. Journal of Emergencies, Trauma, and Shock, 3, 353–359.
Simmons, B., Egan-Lee, E., Wagner, S.J., Esdaile, M., Baker, L., & Reeves, S. (2011). Assessment of interprofessional learning: The design of an interprofessional objective structured clinical examination (iOSCE) approach. Journal of Interprofessional Care, 25, 73–74.
Tchorz, K.M., Binder, S.B., White, M.T., Lawhorne, L.W., Bentley, D.M., Delaney, E.A. … Dunn, M.M. (2013). Palliative and end-of-life care training during the surgical clerkship. Journal of Surgical Research, 185, 97–101.
Van Den Bos, J., Rustagi, K., Gray, T., Halford, M., Ziemkiewicz, E., & Shreve, J. (2011). The $17.1 billion problem: The annual cost of measurable medical errors. Health Affairs, 30, 596–603.
Weaver, S., Rosen, M., DiazGranados, D., Lazzara, E., Lyons, R., Salas, E. … King, H. (2010). Does teamwork improve performance in the operating room? A multilevel evaluation. Joint Commission Journal on Quality and Patient Safety, 36, 133–142.
