Medical Teacher 2014; 36: 177–179

SHORT COMMUNICATION

Effect of clinical context on simulator-based assessment of blood pressure taking – A pilot randomized study

GILBERTO K. K. LEUNG & JOHN M. NICHOLLS

The University of Hong Kong, Hong Kong

Abstract

Background: Blood pressure measurement is an essential clinical skill that can readily be assessed in objective structured clinical examination (OSCE). While the use of simulators can enhance test validity and reliability, the given clinical context may also affect student performance. Aims: To investigate the impact of variations in clinical context on blood pressure measurement in a simulator-based OSCE. Method: We randomized 162 first-year medical students into four groups that received different lead-in statements before measuring blood pressure on a manikin simulator. These statements described hypothetical patients with different likelihoods of having systemic hypertension. Results: The lead-in that described the highest likelihood of hypertension was associated with significantly higher reported readings and lower accuracy. The lead-in that suggested normality yielded the best performance. Conclusion: Student performance in simulator-based OSCE may be affected by the clinical context provided. However, we argue that construct validity should be viewed in light of the application of a test, in that patients may also present with different cues and likelihoods of having hypertension. Variations in construct design should be further explored to enhance the training and assessment of clinical competence that reflects the unpredictability encountered in daily clinical practice.

Introduction

Blood pressure measurement using sphygmomanometry is an essential clinical skill that can be assessed in objective structured clinical examination (OSCE) (Harden & Gleeson 1979). When compared with standardized patients, the use of simulators can further improve test reliability and validity by generating consistent and verifiable blood pressure readings. Student performance, however, may still be affected by other factors (Humphris & Kaney 2001; Schoonheim-Klein et al. 2007; Iramaneerat et al. 2008). In this study, we investigated whether and how the clinical context of a construct design would affect blood pressure readings during a simulator-based OSCE. Our hypothesis was that variations in the lead-in statement would have no effect on student performance.

Methods

We conducted a prospective randomized study during a formative OSCE for 162 first-year medical students. One of the stations used a full-size manikin arm (S410 Blood Pressure Training System, Gaumard Scientific Company, FL, USA) to assess blood pressure taking. All students had previously been trained with the device and had free access to it for practice. Each student had five minutes to measure the same preset blood pressure with a sphygmomanometer. They were blinded to the study’s design. Prior institutional approval was obtained.

Students were randomized into four groups, which differed only by the lead-in statements given. They were:

Group A (n = 41): ‘This is a healthy young adult. How would you measure his blood pressure?’
Group B (n = 40): ‘This is an elderly gentleman with good past health. How would you measure his blood pressure?’
Group C (n = 40): ‘This is an elderly gentleman with a known history of hypertension. How would you measure his blood pressure?’
Group D (n = 40): ‘You will be assessed on the measurement of blood pressure.’

The systolic (SBP) and diastolic (DBP) blood pressures were set at 130 and 90 mmHg, respectively. Students’ reported SBPs and DBPs that fell within the ranges of 125–135 and 85–95 mmHg, respectively, were considered as correct answers (i.e., a permitted error of ±5 mmHg). The mean arterial pressure (MAP) was calculated after the examination. Between-group differences were tested with the Mann–Whitney U-test on SPSS 14.0 (SPSS, Inc., Chicago, IL). A p-value of less than 0.05 was considered statistically significant.
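The grading rule and MAP calculation described above can be sketched in code. This is a minimal illustration rather than the authors' actual analysis script, and it assumes the standard single-beat estimate MAP = DBP + (SBP − DBP)/3, which the paper does not state explicitly but which is consistent with the median MAP of 103 mmHg reported for the preset 130/90 mmHg reading.

```python
# Hypothetical sketch of the study's correctness criterion: a reported reading
# counts as correct only if both SBP and DBP fall within +/-5 mmHg of the
# preset values (130/90 mmHg). The MAP formula below is the conventional
# estimate and is an assumption, not taken verbatim from the paper.

PRESET_SBP, PRESET_DBP = 130, 90
TOLERANCE = 5  # permitted error in mmHg

def mean_arterial_pressure(sbp: float, dbp: float) -> float:
    """Standard estimate: MAP = DBP + (SBP - DBP) / 3."""
    return dbp + (sbp - dbp) / 3

def is_correct(sbp: float, dbp: float) -> bool:
    """Both components must lie within the permitted error band."""
    return (abs(sbp - PRESET_SBP) <= TOLERANCE
            and abs(dbp - PRESET_DBP) <= TOLERANCE)

if __name__ == "__main__":
    # The preset reading itself: 90 + 40/3 = 103.3 mmHg.
    print(round(mean_arterial_pressure(130, 90), 1))  # 103.3
    print(is_correct(134, 92))   # True: both within +/-5 mmHg
    print(is_correct(140, 90))   # False: SBP off by 10 mmHg
```

Under this rule a reading of 134/92 mmHg passes while 140/90 mmHg fails, mirroring how a single out-of-range component was sufficient to mark an answer incorrect.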


Results

Of the 162 students, 33 failed to report blood pressure readings. The reported findings for the remaining 127 students are listed in Table 1. Statistically significant differences in accuracy were found amongst the four groups (p < 0.02). Group A gave the best performance, followed by Groups D, C and B. The reported median SBP for Group C was significantly higher than that of Groups A (p < 0.001) and D (p = 0.001). The reported DBP for Group C was also higher than that of Groups A (p = 0.028) and B (p = 0.021). The calculated MAP of Group C was also higher than those of all other groups. There were no significant differences between Groups A, B and D in SBP, DBP and MAP readings.

Table 1. Blood pressure readings reported by the four groups of students.

| Student group                                | A                    | B                                   | C                               | D                       |
|----------------------------------------------|----------------------|-------------------------------------|---------------------------------|-------------------------|
| Patient description                          | Young healthy adult  | Elderly with suspected hypertension | Elderly with known hypertension | No specific information |
| No. of students tested                       | 41                   | 40                                  | 40                              | 40                      |
| No. of students with reported findings       | 41                   | 28                                  | 29                              | 29                      |
| No. who reported correct SBP* and DBP** (%)  | 29 (70.5)            | 9 (32.1)                            | 10 (34.5)                       | 18 (62.1)               |
| SBP, median (range), mmHg                    | 130 (120–132)        | 131 (120–142)                       | 134 (130–170)                   | 130 (125–142)           |
| DBP, median (range), mmHg                    | 90 (50–102)          | 90 (68–104)                         | 92 (64–102)                     | 90 (64–100)             |
| MAP, median (range), mmHg                    | 103 (77–111)         | 103 (91–115)                        | 107 (97–115)                    | 103 (90–110)            |
| Mean error of MAP (mmHg)                     | 4.3                  | 5.3                                 | 7.9                             | 5.7                     |

SBP, systolic blood pressure; DBP, diastolic blood pressure; MAP, mean arterial pressure. *Correct SBP: between 125 and 135 mmHg inclusively. **Correct DBP: between 85 and 95 mmHg inclusively.

Correspondence: Dr Gilberto K. K. Leung, Department of Surgery, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Queen Mary Hospital, 102 Pokfulam Road, Hong Kong. Tel: +852 22553368; fax: +852 28184350; email: [email protected]

ISSN 0142–159X print/ISSN 1466–187X online/14/020177–3 © 2014 Informa UK Ltd. DOI: 10.3109/0142159X.2013.849328

Discussion

The use of simulators can significantly enhance face, content and construct validity in testing (Karnath et al. 2002; Lyons et al. 2013). Our findings suggest that, during a simulator-based OSCE, student performance may be affected by the given clinical context, consistent with a previous report on the impact of question construct (Barman 2005). Errors in blood pressure measurement using sphygmomanometry may be associated with equipment, technique and observer factors. The latter include terminal digit preference, prejudice for or against certain pressure values, and differences in the way Korotkoff sounds are interpreted (Bailey & Bauer 1993). We surmised that our Group C students were anticipating, and had a prejudice for, higher readings given the context of a probably hypertensive patient. Interestingly, even though our preset blood pressure was that of mild hypertension, Group A students, who would have anticipated normal findings, achieved higher accuracy than Group D students, who received a non-specific lead-in. Moreover, Group A students were more likely to be able to report blood pressure readings. This contrasts with a previous report that described poorer performance on clinical normality than on abnormality in OSCE (Tiong 2008). Whether different preset blood pressures (e.g. 90/50 mmHg) would have yielded similar findings requires further investigation.

Our findings also suggest that priming statements may have a significant effect on validity in OSCE. However, construct validity should be viewed not so much as a property of a test but in light of the application of that test. Indeed, an OSCE may produce a valid assessment in one context but not another (Hodges 2003). In clinical practice, patients may present with different cues and likelihoods of having hypertension, and it falls on a competent doctor to confirm or refute the diagnosis. We therefore argue that variations in construct design may in fact be contextually more akin to what is encountered in daily clinical practice. When applied in combination with simulators, their use may further enhance the training and assessment of clinical skills competence.

This pilot study has several limitations. Firstly, we recruited relatively junior students. Although all had received teaching and practice in blood pressure taking, many failed to obtain readings correctly, or to obtain any reading at all. Future studies may involve more senior students or even clinicians. Secondly, each of our students was presented with only one lead-in. A cross-over study that exposes each student to different lead-ins may yield more interesting findings. Lastly, we only investigated one clinical skill. Future studies may explore the use of variations in construct design in other areas in which normal and abnormal findings can readily be generated by simulators.

Conclusion

While simulators can enhance test validity and reliability, a simple variation in construct design may significantly affect student performance in simulator-based OSCE. A priming statement can result in erroneous blood pressure readings. Having a neutral, non-specific lead-in statement, however, may not be the only or the best approach to valid assessment. Future studies may focus on the concomitant use of different construct designs in simulator-based assessment in order to improve the fidelity of a test.

Notes on contributors

Dr GILBERTO K. K. LEUNG, MBBS, BSc, FRCS, MS, is a Clinical Associate Professor and the Director of the Centre of Education and Training of the Department of Surgery. He is responsible for the study design, data analysis and manuscript preparation.

Professor JOHN NICHOLLS, MBBS, FRCPA, is the Co-chairman of the Assessment Committee, and is responsible for the study design and manuscript revision.

Acknowledgements

We thank Ms Ada Lam for her assistance in conducting the study.

Declaration of interest: The authors have no conflict of interest or funding to declare.

References

Bailey RH, Bauer JH. 1993. A review of common errors in the indirect measurement of blood pressure. Sphygmomanometry. Arch Intern Med 153(24):2741–2748.
Barman A. 2005. Critiques on the Objective Structured Clinical Examination. Ann Acad Med Singapore 34(8):478–482.
Harden RM, Gleeson FA. 1979. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ 13(1):41–54.
Hodges B. 2003. Validity and the OSCE. Med Teach 25(3):250–254.
Humphris GM, Kaney S. 2001. Examiner fatigue in communication skills objective structured clinical examinations. Med Educ 35(5):444–449.
Iramaneerat C, Yudkowsky R, Myford CM, Downing SM. 2008. Quality control of an OSCE using generalizability theory and many-faceted Rasch measurement. Adv Health Sci Educ Theory Pract 13(4):479–493.
Karnath B, Thornton W, Frye AW. 2002. Teaching and testing physical examination skills without the use of patients. Acad Med 77(7):753.
Lyons C, Goldfarb D, Jones SL, Badhiwala N, Miles B, Link R, Dunkin BJ. 2013. Which skills really matter? Proving face, content, and construct validity for a commercial robotic simulator. Surg Endosc 27(6):2020–2030.
Schoonheim-Klein M, Hoogstraten J, Habets L, Aartman I, Van der Vleuten C, Manogue M, Van der Velden U. 2007. Language background and OSCE performance: A study of potential bias. Eur J Dent Educ 11(4):222–229.
Tiong TS. 2008. Should clinical normality be examined in medical course? Singapore Med J 49(4):328–332.


