Medical Teacher, 2015, 1–4, Early Online

HOW WE. . .

How we tackled the problem of assessing humanities, social and behavioural sciences in medical education

DAWN GOODWIN & LAURA MACHIN

Lancaster Medical School, Lancaster University, Lancaster, UK


Abstract

Background: Assessment serves as an important motivation for learning. However, multiple choice and short answer question formats are often considered unsatisfactory for the assessment of the medical humanities and the social and behavioural sciences, and little consensus exists as to what might constitute ‘best’ assessment practice. What we did: We designed an assessment format closely aligned to the curricular approach of problem-based learning (PBL), one which allows for greater assessment of students’ understanding, depth of knowledge and interpretation, rather than recall of rote learning. Conclusion: The educational impact of scenario-based assessment has been profound. Students reported changing their approach to PBL, independent learning and exam preparation by taking a less reductionist, more interpretative approach to the topics studied.

Introduction

Inclusion of the humanities, social and behavioural sciences in medical curricula is critical in ensuring doctors retain their humanistic attitudes (Benbassat et al. 2003; Campbell et al. 2007), thereby improving the quality of care patients receive (Mitchell et al. 1993; Savulescu et al. 1999). However, assessment of these subjects is variable and sometimes even absent, leading to them being considered optional (Goldie et al. 2002) as students focus on subjects on which they expect to be tested (Epstein 2007). Assessment therefore provides an important motivation for student learning (Fenwick et al. 2013). Where the humanities, social and behavioural sciences are assessed components of medical curricula, tutors from these disciplines have expressed dissatisfaction with the assessment formats available (Brooks & Forrest 2013). Tutors have claimed that the conceptual knowledge of the humanities, social and behavioural sciences is not easily reducible to multiple choice and short-answer questions (Brooks & Forrest 2013) and that such formats are unsuited to these disciplines (Fenwick 2014). Wass (2014) argues that the range of assessment formats used in medical education inadequately addresses the ambiguities of clinical encounters and reinforces a theoretical ‘medical school world’. This disjuncture is exemplified by multiple choice question formats which, although now more subtle than true/false (e.g. ‘single best answer’), still require distinctly ‘right’ or ‘wrong’ answers. These formats also heighten the decontextualisation of knowledge, leading students to question the relevance of these disciplines for medical practice. Yet, when faced with staff shortages and large student cohorts, these assessment formats are appealing, as marking is either computerised or aided by non-subject experts following prescriptive model answers.

The intense focus on the statistical reliability of assessments within medical education has led to multiple choice questions being deemed the ‘gold standard’ and the essay being heavily criticised (Fenwick et al. 2013). Yet the essay is often the assessment form of choice in the humanities and social sciences, as it is considered well suited to analytical and critical thinking skills (Scouller 1998). Compromises between potentially desirable and feasible (with large student cohorts) assessment formats are emerging. ‘Extended short answer’, ‘modified essay answer’ (Jolly 2010) or ‘short essay questions’ (Fenwick et al. 2013), which combine elements of essay and short answer formats, are said to require critical analysis and justification of answers (Fenwick et al. 2013). However, how such ‘compromises’ are constructed and implemented in practice, and their reliability and validity, have yet to receive thorough attention. Our problem, therefore, was how to move away from standard multiple choice and short answer questions to devise a form of assessment that allowed greater assessment of depth and application of knowledge, and reduced reliance on rote learning and the related abstraction of concepts.

Practice points

Scenario-based assessment:
- Provides a compromise between assessing depth and breadth of students’ learning.
- Goes beyond assessing recall by focussing on application of knowledge.
- Encourages students to think beyond ‘right’ and ‘wrong’ answers and acknowledge that all answers require justification.

Correspondence: Dr. Dawn Goodwin, Lancaster Medical School, Furness College, Lancaster University, Lancaster LA1 4YG, UK. Tel: +44-01524592756; E-mail: [email protected]

ISSN 0142-159X print/ISSN 1466-187X online/15/0000001–4 © 2015 Informa UK Ltd. DOI: 10.3109/0142159X.2015.1045844




[Figure 1 is a flowchart. Curriculum boxes: exam writers identify topics to be included in the curriculum → exam writers develop PBL learning objectives (LOs) written for topics → student learns the topic during a 2-week PBL module → students may attend a lecture/workshop on the topic → students read resources on the topic recommended by the subject specialist. Assessment boxes: exam writers choose a topic to assess and relevant PBL LOs → exam writers create a set of questions worth 10 points on one or more PBL LOs → exam writers construct model answers for questions using recommended resources → exam writers include acceptable alternative answers to exam questions → students use PBL scenarios, PBL LOs and recommended resources as revision aids → students sit exams → exam writers and those experienced in undergraduate medical education mark students’ answers → exam writers discuss the ‘acceptability’ of students’ unanticipated answers → students receive feedback on exam performance for each PBL LO assessed → exam writers amend PBL LOs, PBL scenarios and recommended resources in light of students’ exam answers → exam writers tweak exam questions and scenarios in light of students’ answers so they can be used again.]

Figure 1. The process of aligning assessment for the humanities, and social and behavioural sciences with the PBL curriculum. Dark grey (blue online) and light grey (orange online) boxes refer to the curriculum and assessment, respectively.

What we did

We designed an assessment closely aligned with the PBL approach adopted by Lancaster Medical School. In line with the integrated curriculum, we combined the medical humanities and the social and behavioural sciences into one assessment format to encourage students to appreciate the multifaceted nature of healthcare. The assessment format, which is being implemented in Years 1–4 (at the end of which students sit their finals), consists of a scenario and four sets of questions, each set worth 10 marks.1 The scenario and questions are based upon four learning objectives from the PBL curriculum, divided equally between the different disciplines (Sociology, Psychology, Ethics and Law). The scenarios are written to illustrate the topics addressed in the PBL learning objectives, thereby aligning assessment of the humanities, social and behavioural sciences with the PBL curriculum (see Figure 1). As Schuwirth and van der Vleuten (2010) explain, context-rich assessment items, such as ours, contain a case description and questions that require decisions about, or an evaluation of, the problem, whereas context-free items – such as the standard short answer questions we previously used – ask for general knowledge. Context-rich questions test application of knowledge and problem-solving, whereas context-free items do not.
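To make the format concrete, the sketch below represents one such exam as a simple data structure. It is purely illustrative: the field names, the placeholder scenario text and the sample learning objective labels are our hypothetical inventions, not part of the School’s actual exam systems.

```python
# Illustrative sketch only: a minimal representation of the scenario-based
# exam format described above (one scenario, four 10-mark question sets,
# one per discipline). All names and sample LOs are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class QuestionSet:
    discipline: str          # Sociology, Psychology, Ethics or Law
    learning_objective: str  # the PBL learning objective being assessed
    marks: int = 10          # each set is worth 10 marks

@dataclass
class ScenarioExam:
    scenario: str            # narrative written to illustrate all four LOs
    question_sets: List[QuestionSet] = field(default_factory=list)

    def total_marks(self) -> int:
        return sum(qs.marks for qs in self.question_sets)

exam = ScenarioExam(
    scenario="A GP consultation about weight loss ...",  # invented placeholder
    question_sets=[
        QuestionSet("Ethics", "Non-therapeutic research"),
        QuestionSet("Law", "Clinical guidelines and doctors' judgement"),
        QuestionSet("Sociology", "Medicalisation"),
        QuestionSet("Psychology", "Hypothetical psychology LO"),
    ],
)
assert exam.total_marks() == 40  # four sets of 10 marks each
```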

In keeping with this context-rich approach, our questions require students to explore the scenario, identify how and where aspects of a topic are illustrated, and discuss their knowledge of these topics in relation to the scenario (see Examples 1, 2 and 3), much like students do in PBL. Question-writing is collaborative, with each subject specialist selecting learning objectives for assessment and discussing how their topics may be combined into a coherent scenario. Questions and answers are drafted by each subject specialist in relation to the developing scenario, ensuring that the scenario incorporates the elements required to answer the questions. A quality assurance process follows, in which the writing team, year directors and members of the assessment committee review the exam paper to ensure clarity and minimise overlap of questions. Answers take the form of an ‘extended short answer’ and require a paragraph or two of free text, in keeping with assessments that require information synthesis, interpretation and reasoning (Jolly 2010). Each answer is double-marked and moderated blind, and markers are allocated particular sets of questions, thereby addressing inter-rater reliability (Jolly 2010). To allow for adjudication on alternative answers, each question is marked by a subject specialist. An answer is accepted if it addresses the question directly, rather than stating something tangential, and if it explains the scenario, thereby demonstrating interpretation and application.

To evaluate whether scenario-based assessments are a consistent measure of students’ learning of the humanities, social and behavioural sciences, we applied Cronbach’s alpha to formative and summative exam papers combined (as a single exam contains too few questions to generate a meaningful score) for each year group (Cronbach 1951). The test is a measure of the internal consistency of an exam, expressed as a number between 0 and 1. The Cronbach’s alpha scores for the scenario-based assessments for year groups 1 and 2 were reassuring (0.837 and 0.730, respectively), suggesting that the questions were discriminating, i.e. the ‘good’ students achieved high marks on each question, whereas the ‘bottom’ students scored low marks (Tavakol & Dennick 2011).
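For readers who wish to reproduce this check on their own item-level data, the sketch below shows the standard Cronbach’s alpha calculation. Only the formula (Cronbach 1951) comes from the literature; the student scores are invented purely for illustration.

```python
# Minimal sketch of the standard Cronbach's alpha calculation (Cronbach 1951):
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# where k is the number of items. The scores below are invented; real input
# would be one row per student and one column per question set.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = students, columns = items (question sets)."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of students' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented example: 6 students x 4 question sets, each marked out of 10.
scores = np.array([
    [8, 7, 9, 8],
    [5, 6, 5, 6],
    [9, 8, 8, 9],
    [3, 4, 2, 3],
    [6, 5, 6, 7],
    [7, 7, 8, 6],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```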


Example 1. A Year 2 set of questions worth 10 marks for the professional practice, values and ethics curriculum theme.


In the scenario, the clinical trial is an example of non-therapeutic research. Define non-therapeutic research (1 mark) and identify where this is illustrated in the scenario (1 mark). Identify four ethical considerations for healthcare professionals when recruiting participants for non-therapeutic research (4 marks) and give one example of each from the scenario (4 marks).


Example 2. A Year 1 set of questions worth 10 marks for the health, culture and society curriculum theme.


Identify 4 aspects of this scenario that illustrate features of the concept of medicalisation (4 marks) and discuss these points with reference to definitions of medicalisation (6 marks).

Example 3. A Year 2 set of questions worth 10 marks for the professional practice, values and ethics curriculum theme.

In the scenario, the doctor knows that the tablets the patient can take to overcome her obesity do work because of the evidence surrounding the medicine. Define evidence-based medicine (2 marks). The GP refers to clinical guidelines during a patient’s appointment about her weight loss. Define clinical guidelines (2 marks) and state two purposes that they serve (2 marks). The GP opts not to follow the clinical guidelines. Discuss the roles of clinical guidelines and doctors’ judgement in clinical practice (4 marks).

Wass and van der Vleuten (2009) identify three types of validity – face, content and construct. ‘Face validity’, defined as compatibility with the curriculum’s educational philosophy, is high in scenario-based assessment: it replicates a PBL approach. Content validity, meaning the extent to which the assessment covers the content of the curriculum, is assured by the scenarios being written around the PBL learning objectives. In terms of construct validity, defined as the differentiation between groups of different ability, scenario-based assessment provides a discerning picture of the variation in student knowledge, indicated by use of the full range of marks (0–10) for each topic assessed. Moreover, scenario-based assessment illustrates concepts in context and, where possible, questions ask about the relevance of concepts for clinical practice (see the final question in Example 3 above). Therefore, scenario-based assessment contributes to a further aspect of validity, in that it assesses what we feel is important to assess (application and interpretation) in addition to what is easy to assess (recall).

What we learnt

We found that scenario-based assessment effectively assesses students’ understanding and interpretation, rather than recall of rote learning. However, we learnt three main lessons about how time, the structure of questions and the level of difficulty affect students’ performance.

Time

Initially, scenarios were written around five learning objectives and contained five related sets of questions. Students struggled to complete the exam, and many commented that they were unable to properly demonstrate their knowledge in the time available. Consequently, we reduced the scope of a scenario to assess four learning objectives. This also resulted in a shorter scenario.

Structure of questions

Structured questions (see Example 1) require students to answer on a point-by-point basis and demand quick, concise responses. The model answer is prescriptive, aiding speed and consistency of marking. Although structured questions still require interpretation, they limit how students can respond and thus their ability to demonstrate depth and breadth of knowledge. Loosely structured questions (see Example 2) take more time for students to answer, as they have to organise their response and consider its clarity. The model answer contains a list of points that may be covered but allows markers discretion over how well each point is argued. This allows for greater delineation of student ability, but at the potential cost of marking consistency. The convention of awarding one mark per item of knowledge, which arises from attempts to objectify marking criteria and increase marking consistency, does not fit easily with loosely structured questions, which allow for variation in clarity of thought and quality of expression. Accordingly, this question style is closer to an essay-style question and retains some of its perceived shortcomings.2 The contrast between the two mark-scheme styles is sketched schematically below.
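The sketch below is a hypothetical illustration of how the two kinds of model answer might be represented; the point labels are ours, loosely based on Examples 1 and 2, and are not the School’s actual mark schemes.

```python
# Hypothetical sketches of the two mark-scheme styles; the point labels are
# ours, loosely based on Examples 1 and 2, not actual model answers.

# Structured (Example 1 style): one mark per discrete, prescriptive point.
structured_scheme = {
    "defines non-therapeutic research": 1,
    "identifies its illustration in the scenario": 1,
    "four ethical considerations (1 mark each)": 4,
    "one scenario example of each consideration (1 mark each)": 4,
}
assert sum(structured_scheme.values()) == 10

# Loosely structured (Example 2 style): indicative points only, with marker
# discretion over how well each point is argued rather than a fixed tariff.
loose_scheme = {
    "indicative_points": [
        "aspects of the scenario illustrating medicalisation",
        "definitions of medicalisation",
        "expansion of medical jurisdiction over everyday life",
    ],
    "marks_available": 10,
    "marking_note": "Credit clarity of thought and quality of argument.",
}
```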

Level of difficulty

Students found the structured questions easier to answer, being clearer about how they should respond. However, as argued above, loosely structured questions reveal greater variation in student ability and, consequently, are good signifiers of progression. We have thus constructed exams so that first year exams contain mostly structured items, and the proportion of loosely structured items gradually increases across second, third and fourth year assessments.

What’s next

To economise on the question-writing workload, we are exploring the possibilities for ‘recycling’ questions. We have the option of using the scenarios and questions again in their entirety, or we could substitute some of the questions and adjust the scenario accordingly. Finally, we could take the stable components of the questions and answers (for example, definitions and features of a concept) and assess an entirely new combination of learning objectives by writing a new scenario, as sketched below.
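As a hypothetical illustration of that third option, stable components could be held in a small bank and recombined with a freshly written scenario. The bank contents and function names below are invented for the sketch, not an existing system.

```python
# Hypothetical sketch of question 'recycling': stable components (definitions,
# features of a concept) are banked and re-dressed in a newly written scenario.
# Bank contents and names are invented for illustration.
stable_bank = {
    "medicalisation": [
        ("Define medicalisation.", 2),
        ("Identify aspects of the scenario illustrating medicalisation.", 4),
    ],
    "clinical_guidelines": [
        ("Define clinical guidelines.", 2),
        ("State two purposes that clinical guidelines serve.", 2),
    ],
}

def recycle(concepts, new_scenario):
    """Assemble a fresh exam item from banked components and a new scenario."""
    questions = [q for c in concepts for q in stable_bank[c]]
    return {"scenario": new_scenario, "questions": questions}

exam_item = recycle(["medicalisation", "clinical_guidelines"],
                    "A newly written scenario around a different PBL module ...")
print(exam_item["questions"])
```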




Conclusion

Following Fewtrell’s (2011) approach to devising a new form of assessment, we considered the reliability, validity, feasibility and educational impact of the assessment. Whilst all of Fewtrell’s factors are significant, it is the educational impact of scenario-based assessment that has strengthened our belief in it. During student appraisals, when exam performance is discussed, students reported changing their approach to PBL, independent learning and exam preparation by taking a less reductionist, more interpretative approach to the topics studied since the introduction of scenario-based assessment.3 Through scenario-based assessment, we have aligned our curriculum and style of learning with what is evaluated, and in what way (Dowie 2014). This assessment format permits an exploration of the depth of students’ learning and avoids assessing what has been described as ‘trivial’ knowledge (General Medical Council 2011). Finally, the expectation of alternative answers, which are discussed with the students in feedback sessions, prepares students for the ambiguity they encounter in clinical practice (Wass 2014). Students are encouraged to think beyond ‘right’ and ‘wrong’ answers, to consider the variety of appropriate responses that exist, and to justify their answer, just as they would in clinical practice.

Notes on contributors

DAWN GOODWIN, PhD, is a Senior Lecturer in Social Sciences at Lancaster Medical School, Lancaster University, Lancaster, UK.

LAURA MACHIN, PhD, is a Senior Lecturer in Medical Ethics at Lancaster Medical School, Lancaster University, Lancaster, UK.

Acknowledgements

We would like to thank Lancaster Medical School for the opportunity to be creative about the assessment of our disciplines and, in particular, our special thanks go to Dr Gill Vince for working through the glitches in question wording with us. A debt of gratitude is also owed to Dr Jemma Kerns for her assistance with the Cronbach’s alpha statistical analysis.

Declaration of interest: The authors report no declarations of interest.

Notes

1. In Years 1 and 2, the exam consists of two scenarios and related questions; in Years 3 and 4, this is reduced to one scenario. This reduction acknowledges an increase in coursework assessment in Years 3 and 4.

2. We do not believe the variation in student ability seen in loosely structured questions to be a product of subjective marking, as all answers are double marked and moderated to a mid-point. Therefore, if the markers disagreed, this would result in a concentration of mid-range marks rather than the varying picture we actually saw; a small numerical illustration follows these notes.

3. Students’ comments were collated and form part of the introductory lecture to new students on the assessment of humanities, social and behavioural sciences.
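A brief numerical illustration of the reasoning in Note 2 (all marks invented): when two markers disagree widely, mid-point moderation compresses marks towards the centre of the 0–10 range, whereas the spread we observed requires the markers to have broadly agreed.

```python
# Invented marker pairs, each marked out of 10, moderated to the mid-point.
midpoint = lambda a, b: (a + b) / 2

disagreeing = [(2, 9), (1, 8), (3, 10), (2, 8)]   # markers far apart
agreeing    = [(1, 2), (9, 10), (5, 5), (0, 1)]   # markers close together

print([midpoint(a, b) for a, b in disagreeing])  # [5.5, 4.5, 6.5, 5.0] - clustered mid-range
print([midpoint(a, b) for a, b in agreeing])     # [1.5, 9.5, 5.0, 0.5] - full spread retained
```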

References

Benbassat J, Baumal R, Borkan JM, Ber R. 2003. Overcoming barriers to teaching the behavioral and social sciences to medical students. Acad Med 78(4):372–380.
Brooks L, Forrest S. 2013. Sociology teaching in medical education: Current state of the art and future directions. Behavioural and Social Science Teaching in Medical Education (BeSST).
Campbell A, Chin J, Voo T. 2007. How can we know that ethics education produces ethical doctors? Med Teach 29(5):431–436.
Cronbach L. 1951. Coefficient alpha and the internal structure of tests. Psychometrika 16:297–334.
Dowie A. 2014. Making sense of assessment in medical ethics and law. J Med Ethics 40(10):717–718.
Epstein RM. 2007. Assessment in medical education. N Engl J Med 356(4):387–396.
Fenwick A. 2014. Medical ethics and law: Assessing the core curriculum. J Med Ethics 40:719–720.
Fenwick A, Johnston C, Knight R, Testa G, Tillyard A. 2013. Medical ethics and law: A practical guide to the assessment of the core content of learning. London: IME Publications.
Fewtrell R. 2011. Assessment in medical education. In: Watmough S, editor. Succeeding in your medical degree. Exeter, UK: Learning Matters. pp. 41–51.
General Medical Council. 2011. Assessment in undergraduate medical education. [Accessed 28 October 2014]. Available from http://www.gmc-uk.org/education/undergraduate/tomorrows_doctors.asp
Goldie J, Schwartz L, McConnachie A, Morrison J. 2002. The impact of three years’ ethics teaching, in an integrated medical curriculum, on students’ proposed behaviour on meeting ethical dilemmas. Med Educ 36(5):489–497.
Jolly B. 2010. Written examinations. In: Swanwick T, editor. Understanding medical education. Chichester, UK: Wiley-Blackwell. pp. 208–231.
Mitchell K, Myser C, Kerridge I. 1993. Assessing the clinical ethical competence of undergraduate medical students. J Med Ethics 19:230–236.
Savulescu J, Crisp R, Fulford KW, Hope T. 1999. Evaluating ethics competence in medical education. J Med Ethics 25(5):367–374.
Schuwirth LWT, van der Vleuten CPM. 2010. How to design a useful test: The principles of assessment. In: Swanwick T, editor. Understanding medical education. Chichester, UK: Wiley-Blackwell. pp. 195–207.
Scouller K. 1998. The influence of assessment method on students’ learning approaches: Multiple choice question examination versus assignment essay. High Educ 35:453–472.
Tavakol M, Dennick R. 2011. Making sense of Cronbach’s alpha. Int J Med Educ 2:53–55.
Wass V. 2014. Medical ethics and law: A practical guide to the assessment of the core content of learning. J Med Ethics 40:721–722.
Wass V, van der Vleuten CPM. 2009. Assessment in medical education and training 1. In: Carter Y, Jackson N, editors. Medical education and training: From theory to delivery. Oxford, UK: Oxford University Press. pp. 105–128.
