
Journal of Evaluation in Clinical Practice ISSN 1365-2753

Clinical thinking in psychiatry

Lloyd A. Wells PhD MD, Emeritus Consultant, Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA

Keywords: clinical thinking, critical thinking, evidence-based psychiatry, logical fallacies, meta-analysis, residency training

Correspondence: Dr Lloyd A. Wells, Department of Psychiatry and Psychology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55905, USA. E-mail: [email protected]

Abstract: I discuss the lack of precision in the term 'clinical reasoning' and its relationship to evidence-based medicine and critical thinking. I examine critical thinking skills, their underemphasis in medical education, and successful attempts to remediate them. Evidence-based medicine (and evidence-based psychiatry) offers much but is hampered by the ubiquity and flaws of meta-analysis. I explore views of evidence-based medicine among psychiatry residents, as well as residents' capacity for critical thinking before and after a course in philosophy. I discuss decision making by experienced doctors and suggest possible futures for this issue.

Accepted for publication: 16 December 2014. doi: 10.1111/jep.12324

Clinical thinking, or clinical reasoning, is difficult to define, although it is relatively easy to delineate what it is not. It is not evidence-based medicine (EBM); it is not eminence-based medicine; it is not critical thinking. While all of these and much else may be components of clinical thinking, they do not individually or collectively comprise it. In many ways, clinical thinking stands alone, with contributions from many fields and with many components. In my view, there are at least four components of clinical reasoning: evidence, critical thinking, an appreciation of the subjective, and practical decision making, often when evidence is not completely available.

An examination of some of these components can begin with evidence-based medicine. Evidence-based medicine has become a slogan and a sine qua non. But it is a difficult mantra. No doctor is likely to say, 'I reject evidence. I am a quack.' Yet assessment and acceptance of evidence in medicine and psychiatry are difficult and challenging processes. And what do we mean by 'evidence-based medicine' or 'evidence-based psychiatry'? I surveyed 34 residents in psychiatry, with replies from 28 of them. Twenty-seven viewed themselves as adherents of evidence-based psychiatry. (One favoured eminence-based psychiatry, in which one defers to the views of an expert.) But among the 27, there was no consensus about what evidence-based psychiatry is. For many, it is weighing scientific evidence when thinking about psychiatric disorder and its treatment. For increasingly small numbers, it consists of more and more rules and traditions of 'formal' evidence-based medicine, including critical reviews and, especially, meta-analyses.

Assuredly, deliberate weighing of the evidence relevant to a medical decision is a desirable act. What bothers me is the growing number of 'rules and regulations' that formal evidence-based medicine is promulgating. Some of the books and brochures devoted to evidence-based psychiatry list a number of silly and pedantic rules and end up with a very formulaic approach to the determination of evidence rather than aiding its careful and thoughtful examination. Thus, in one guide to evidence-based psychiatry, I find the 'five-step process', complete with 'shortcuts', along with 'the four S's to searching for answers' [1]. These abbreviated and very simplified approaches to evidence-based medicine become merely formulaic. This sort of approach can actually move the field away from a true examination of evidence, and it certainly leaves little room for thought and reflection.

Evidence-based medicine places huge emphasis on randomized controlled trials (RCTs), and they are essential in all branches of medicine. In psychiatry, they have been used chiefly in medication studies, and these have been roundly criticized: they are often funded by Big Pharma and often show significant but tiny differences between drugs or between a drug and a placebo. RCTs are even more problematic in the much-touted 'evidence-based psychotherapy' studies. While these RCTs do show efficacy, promoting treatments to 'evidence-based' status, they are almost always based upon studies of 4 to 8 weeks' duration.


Studies of such short duration have many confounding factors, most importantly the Hawthorne effect, whereby subjects in any study initially do well – an effect that clearly bears on evidence and its interpretation in EBM [2]. In the few longer-term studies of these treatment approaches, the effects evidently diminish. Indeed, in the NIMH Treatment of Depression Collaborative Research Program, which followed subjects for many months, there was a residual efficacy of about 30% at 18 months for cognitive-behavioural therapy, interpersonal therapy, imipramine plus clinical management, and placebo plus clinical management [3]. None of these treatments showed improved efficacy over the others over time. As Wampold has written, 'Decades of psychotherapy research have failed to find a scintilla of evidence that any specific ingredient is necessary for therapeutic change' [4].

Moreover, in the 'evidence-based' studies, one must carefully examine what is being studied. What is the hypothesis? What are the variables being compared? What are the actual findings? Dialectical behavioural therapy is touted as a panacea for borderline disorder, in part because it is 'evidence-based', but the actual evidence is for a slight reduction in rates of re-hospitalization and suicide, not for any substantive change in the patients' disorder [5]. The incredible number of new 'evidence-based' approaches becomes almost laughable when one examines them with any care at all. One 'evidence-based' approach lauded by many managed care and insurance companies is the in-hospital 'clinical pathway', in which the patient's milieu treatment is based upon a formulaic, non-individualized approach to the behaviour that led to the admission. The supporting studies compare a uniform milieu approach, applied to every patient on the unit, with an approach that targets, say, suicidality. The fact is, however, that only very poor hospitals provide the same modal treatment to every patient. Suicide attempts devolve to the individual: there is a huge difference between an 80-year-old man who attempts suicide after the death of his wife and a 13-year-old who impulsively swallows pills after an argument with her mother. Studies do point out a lack of efficacy [6]. In fact, the evidence-based movement is being suborned to provide a rationale for poor but cheap treatment practices.

Meta-analysis is held up as a high standard of evidence-based medicine, and the idea – comparing multiple studies of the same phenomenon – is a good one. It is, however, exceedingly difficult to demarcate and control such comparisons because of the many ineluctable, uncontrolled variables among studies on approximately the same topic. In the 1980s, some studies demonstrated that tricyclic agents were not helpful in the treatment of bulimic patients, while other studies demonstrated that they were helpful. The difference was that the second group included only patients who had therapeutic blood levels of the drug, whereas the first did not measure blood levels at all: a great many bulimic patients were vomiting the medicine before it could be useful. Today, these studies might well be tossed together into a meta-analysis finding very weak evidence for tricyclic agents in bulimia – and this is a common outcome of meta-analyses: weak evidence, when more careful honing might produce more robust evidence.
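The dilution can be made concrete with a toy calculation. The sketch below uses entirely hypothetical effect sizes and standard errors (none come from the actual bulimia trials, which I have not re-analysed) and a standard fixed-effect, inverse-variance pool: mixing trials that never confirmed therapeutic drug levels with trials that did roughly halves the apparent effect.

```python
# Toy illustration of how pooling heterogeneous trials dilutes an effect.
# All numbers are hypothetical; this is not a reanalysis of the bulimia trials.

def pooled_effect(trials):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error."""
    weights = [1 / se ** 2 for _, se in trials]
    estimate = sum(w * e for (e, _), w in zip(trials, weights)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    return estimate, se

# (effect size, standard error) for four imaginary trials
no_blood_levels = [(0.05, 0.20), (0.10, 0.25)]     # drug levels never confirmed
therapeutic_levels = [(0.60, 0.20), (0.55, 0.25)]  # only patients with therapeutic levels

print(pooled_effect(no_blood_levels + therapeutic_levels))  # ~0.33: diluted, 'weak' evidence
print(pooled_effect(therapeutic_levels))                    # ~0.58: the honed subgroup
```

The point is not the arithmetic but the honing: restricting the pool to genuinely comparable studies recovers the effect that indiscriminate pooling washes out.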
At the same time, there are some excellent guides to evidence-based medicine that do encourage and set the stage for serious thought. One of these is Evidence-Based Practice: Logic and Critical Thinking in Medicine, by Jenicek and Hitchcock [7].


This is a challenging and pragmatic book that goes well beyond simplistic five-step processes and rules of four. The authors lament the absence of training for medical and pre-medical students in philosophy, epistemology and logic, and discuss the desirability of these subjects in the curriculum. They state that 'evidence must be used logically, and logic must be backed by evidence . . . A patient can be harmed as much by the illogical use of evidence as by the use of a logical decision unsupported by evidence and even more by the use of faulty logic without evidence.' The authors do an excellent job of introducing ways to think about and evaluate evidence.

There are other very useful components of evidence-based psychiatry. For example, the British journal Evidence-Based Mental Health provides very helpful and clear studies, such as the recent '15 antipsychotic drugs are more effective than placebo for the treatment of schizophrenia, but vary in their tolerability' – not a catchy title, but a helpful paper [8]. This journal also sponsors excellent debates about evidence.

Evidence is crucial to the advance of psychiatry, yet the present state of evidence-based medicine may sometimes retard the evaluation of real evidence. As a result, while evidence-based medicine can contribute to clinical thinking, it is, in its current state, insufficient. In The Book of Evidence [9], Peter Achinstein discusses good evidence and bad evidence, but also evidence that is perfectly well designed yet has no chance of advancing the field. That is the state of a great deal of evidence in psychiatry today: according to the research, one can safely provide a given treatment to a patient who wears braces or has hang-nails. How psychiatrists sort through evidence is important to the treatment of their patients but is rarely taught in detail during their training. There are also many problematic examples, especially in complex fields, of investigators who do not understand what their collaborators are doing or why. In studies of neural imaging, for example, co-investigators may include psychiatrists, psychologists, neuroradiologists, physicists and statisticians. All of these fields are necessary for a good study, but they are so separated from one another that it is impossible for investigators in one field to know whether colleagues in another are doing a good job.

Another outstanding book that would be helpful to many psychiatrists is a very slim volume, The Pocket Guide to Critical Thinking, by R. L. Epstein [10]. This is an excellent, highly condensed guide to critical thinking, with short chapters on such topics as concealed claims, models, statistics, populations, cause and effect, and decision making. The book is important and useful because it covers crucial topics in weighing evidence that are often not considered in the medical – and especially the psychiatric – curriculum. That curriculum is packed with very important topics, but they will not greatly benefit future doctors (or their patients) if the doctors cannot use this knowledge to make good clinical judgements. Faced with an enormous volume of biological, psychological and social studies of varying promise, many psychiatrists are frankly baffled and react with confusion and fuzzy thinking.

Part of the dilemma specific to psychiatry, versus other medical specialties, is that psychiatrists are doctors of brain and mind.
A medical model used in dealing with brain becomes a hopeless metaphor in dealing with mind. How can we conceptualize treatments for disorders of a metaphysical concept?


Reductionism, oversimplification and post hoc ergo propter hoc arguments become rampant. I think that some, perhaps many, residents complete their training in psychiatry and its subspecialties without a real grasp of the current intellectual status of the field and without the tools to evaluate the changing state of our knowledge and the changing scope of our practice.

Residents are faced with dilemmas such as this hypothetical one: the brains of adults who were horrifically abused in childhood show a different level of a given brain chemical, measured very indirectly, than those of control adults who were not abused. Such findings may turn out to be very important for the care of patients, or they may be entirely epiphenomenal. Many residents have difficulty addressing this sort of information, with some paying no attention to it at all and others concluding that post-traumatic stress disorder has been 'proven' to be caused by altered secretion rates of the neurochemical in the patient's brain.

In addition to confusion about findings like this, many residents (and faculty) embrace simplistic and outdated but elegant models of the neural sciences in various disorders. The dopamine theory of schizophrenia, for example, is simplistic and largely discredited but continues to be widely taught. Concomitantly, many unchallenged psychoanalytic beliefs about psychiatric issues rest on poor studies or no studies at all. At the other end of the spectrum, similar confusion occurs with neural imaging studies: these are not 'pictures' of the brain at work, but many take them to be, and their interpretation is very complex. While evidence-based medicine can be helpful in such situations, the resident must also rely upon his or her own clinical thinking – and outside-the-box considerations as well.

Ignorance of the rules of logic adds to this disarray by leading to misunderstanding of studies and evidence. Perhaps the most common fallacy in clinical practice is post hoc ergo propter hoc. It comes up repeatedly when a positive result follows an intervention, often a medication: 'He got better after Prozac; therefore, Prozac made him better.' I have seen many depressed children and adolescents for whom I wanted to prescribe a medicine but whose parents refused, and several of them got better without this treatment. Some patients get better without a medication! On pharmacological grounds, we know that response to an antidepressant medicine takes weeks, but the literature now speaks of 'ultra-rapid responders' who improve within a couple of days – thus reinforcing the fallacy. Without instruction in these matters, residents find decision making difficult, if they approach it with care.
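The fallacy is easy to demonstrate arithmetically. In the toy simulation below (all rates are invented; no patient data are involved), remission is equally likely with or without the drug, yet post hoc reasoning would credit the drug with every remission that follows a prescription.

```python
# Purely illustrative simulation of post hoc ergo propter hoc reasoning.
# The remission rate is invented and identical with or without the drug,
# so any 'drug effect' inferred post hoc is spurious.
import random

random.seed(0)
N = 10_000
SPONTANEOUS_REMISSION = 0.4  # same probability whether or not a drug is given

treated = [random.random() < 0.5 for _ in range(N)]                   # half get the drug
remitted = [random.random() < SPONTANEOUS_REMISSION for _ in range(N)]

# Post hoc view: 'he got better after the drug; therefore the drug worked'
credited_to_drug = sum(t and r for t, r in zip(treated, remitted))
remit_treated = credited_to_drug / sum(treated)
remit_untreated = sum((not t) and r for t, r in zip(treated, remitted)) / (N - sum(treated))

print(f"remission with drug:    {remit_treated:.2f}")   # ~0.40
print(f"remission without drug: {remit_untreated:.2f}") # ~0.40, i.e. no real effect
```

Thousands of treated patients 'get better after Prozac' in this simulation, even though the drug, by construction, does nothing; only the comparison with the untreated group exposes the fallacy.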
Given this situation and concern, I administered the Cornell Critical Thinking Test – Level Z [11], a measure of critical thinking with good test–retest reliability, to all the fourth-year child and adolescent psychiatry residents, with controls drawn from the fifth-year child psychiatry residents and the third-year general psychiatry residents. Among the subjects, test scores ranged from 17 to 32; control scores ranged from 15 to 36. The mean scores are lower than those of a standardized scale normed on second-year graduate students in the sciences, which is concerning for young doctors in a good training programme.

I then provided the subjects with a course that included aspects of philosophy as applied to psychiatry [12]. Its topics included behaviours and choices; how we integrate fragmented knowledge from different disciplines into a cohesive whole; systems of care, managed care, and how mentally ill children are treated in society and by social agencies; formal rules of reasoning, with examples of common logical errors; our basic body of knowledge; and how we know what we think we know.


These topics were presented through readings and case-based discussions. The course then followed a syllabus that combined curricular topics in child and adolescent psychiatry with consideration of these approaches as they applied to each topic. Course content was designed to address the deficits in reasoning demonstrated by the initial test scores. After the course, which met for two hours per week for a year, subjects and controls again took the Cornell test. The subjects' scores improved from a median of 24 to one of 37, a highly statistically significant change, whereas the scores of the control group fell slightly, but not significantly. Although it was not part of the study, the subjects' scores on the national in-training examination also improved. The course was a great deal of work for the residents and was not especially popular, but the residents did acknowledge that they had more questions about the studies they read and the papers they heard.
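For readers curious what such a pre/post comparison looks like in practice, here is a minimal sketch. The raw scores were never published and the statistical test is not named here, so the lists below are hypothetical, constructed only to match the reported pre-course median of 24 (range 17 to 32) and post-course median of 37; a paired nonparametric test such as the Wilcoxon signed-rank test is one plausible choice for small resident cohorts.

```python
# Hypothetical reconstruction: invented score lists matching only the
# reported pre-course median (24) and range (17-32) and post-course median (37).
# The original paper does not specify which statistical test was used.
from scipy.stats import wilcoxon

pre_course = [17, 20, 22, 24, 24, 27, 30, 32]   # subjects before the course
post_course = [28, 33, 35, 36, 38, 38, 40, 41]  # the same subjects afterwards

# Wilcoxon signed-rank test for paired samples
stat, p_value = wilcoxon(pre_course, post_course)
print(f"W = {stat}, p = {p_value:.4f}")  # a small p suggests a reliable improvement
```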
I believe that critical thinking can be taught in many ways, and can be learnt, but it is just one component of clinical reasoning; there is more to clinical thinking than critical thinking.

What can be said of clinical thinking beyond these two components, evidence and critical thinking? I believe that clinical reasoning often has its base in the doctor's knowledge of the patient and in the relationship the patient and the doctor have with each other. This knowledge and relationship are not an alternative to evidence-based medicine but can greatly enhance its value. There is no point at all in my prescribing a treatment that I know a given patient cannot afford, or one to which he or she is opposed on any grounds, for, in both cases, the patient will not comply with the treatment. In the NIMH Treatment of Depression Collaborative Research Program [3], the only predictor of efficacy among the roughly 30% of subjects in all four research groups who were doing well at 18 months was the patient's perception, early in the course of treatment, of his or her relationship with the person providing it. While one can ask patients (or doctors) about the quality of the relationship, the response will be subjective and difficult to quantify. But it is intrinsically important to efficacy and, I think, to clinical reasoning.

There is much more than the quality of the relationship that is hard to measure. Merleau-Ponty's view that a technicalized approach to subjectivity will always be inadequate is difficult to refute. We need to be very careful, however, in any claims for psychiatric exceptionalism. Zachar has argued persuasively that such claims can lead to the belief that psychiatry does better 'outside of science' [13], that the field is somehow hampered by the norms of science. He rejects this argument, and most of us believe and hope that psychiatry, its findings and its approaches are and will be part of science. Our unfortunate system of nosology retards our becoming either more scientific or more humanistic. Blind adherence to psychiatry as completely reducible to evidence and to what is measurable makes it difficult to appreciate, or even speculate about, the patient's subjective world. But this is the patient's reality, and it adds enormously to the richness and potential of the work which the patient and psychiatrist do together.
I was treating a teenager who met DSM criteria for eating disorder not otherwise specified, conduct disorder and substance use disorder, among others – but what does that tell us about her? At the end of an appointment, she handed me a letter: 'Dr. Wells, I am a lying, cheating, sneaky, unrespectful, stealing, self-centered, attention-seeking, spoiled rotten brat who uses an eating disorder, drugs, suicide threats/attempts, anger and isolation as a way of defending my real feelings. Really, I am a confused, lonely, very scared, caring person who doesn't know what's going on and would like to understand why I need to defend myself and why I fight so hard for such negative things.' Few can articulate it so well, but this is the position of many who seek us out – easy to miss, and too often missed, as we consult the manual.

Another sine qua non of clinical reasoning is utility in practice, especially in the absence of evidence, a frequent occurrence in psychiatry. Practical, rational treatment planning in the relative absence of evidence, by whatever name, is an important component of clinical thinking. How does an experienced clinician make a decision? In part, he or she bases it on lore, when evidence is not present; in part, on evidence-based medicine; in part, on knowing the patient and the patient's circumstances; and perhaps, in part, on intuition. But intuition can be a shorthand for one of these other factors. A couple of years ago, I encountered a young patient with severe, atypical, treatment-resistant depression. My immediate response was, 'This patient reminds me of another patient, who had a superb response to a monoamine oxidase inhibitor, so perhaps I should try one.' This is a very poor rationale for a clinical decision until it is parsed; but, in fact, the young man's depression was categorically similar to that of the other patient, neither had responded to more common treatments, and there was a supportive, evidence-based literature for the use of a monoamine oxidase inhibitor in this sort of clinical situation. The patient responded well to the treatment.

This type of clinical decision making, which at first seems intuitive, is common, and it looks like sleight of hand to residents and students; but it is, in fact, based largely upon science and evidence. The science and evidence are simply not apparent until the clinician takes time to think about them. This form of clinical reasoning is largely unconscious but can be made conscious; it is not true 'intuition'. It takes many years to develop and cannot be taught directly to residents. Perhaps a contribution to our thinking about clinical reasoning can come from one of Robert B. Parker's detective novels, in which a psychologist says he is 'guided by intelligence and experience, and . . . I hate the word, but intuition . . . You use a little science and a little art' [14].

I believe I have discussed some important aspects of clinical reasoning in this paper. It is not a laboratory exercise, but one that involves a doctor, a patient and the world around them.


Acknowledgements

A version of this paper was presented at the 26th annual meeting of the Association for the Advancement of Philosophy and Psychiatry, New York City, May 2014. Some of the work described was supported by an educational grant from the Mayo Foundation. I appreciate Dr Bhanuprakash Kolla for introducing me to the journal Evidence-Based Mental Health, and I greatly appreciate Denise M. Wells' reading and commenting on several versions of the paper.

References

1. Gray, G. E. (2004) Concise Guide to Evidence-Based Psychiatry. Washington, DC: American Psychiatric Publishing.
2. Braunholtz, D. A., Edwards, S. J. L. & Lilford, R. J. (2001) Are randomized clinical trials good for us (in the short term)? Evidence for a 'trial effect'. Journal of Clinical Epidemiology, 54, 217–224.
3. Shea, M., Elkin, I., Imber, S., Sotsky, S., Watkins, J., Collins, J., Pilkonis, P., Beckham, E., Glass, D. & Dolan, R. (1992) Course of depressive symptoms over follow-up. Findings from the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Archives of General Psychiatry, 49, 782–787.
4. Wampold, B. E. (2001) The Great Psychotherapy Debate: Models, Methods and Findings. Mahwah, NJ: Lawrence Erlbaum Associates.
5. Linehan, M. M., Comtois, K. A., Murray, A. M., Brown, M. Z., Gallop, R. J., Heard, H. L., Korslund, K. E., Tutek, D. A., Reynolds, S. K. & Lindenboim, N. (2006) Two-year randomized controlled follow-up of dialectical behavior therapy vs therapy by experts for suicidal behavior and borderline personality disorder. Archives of General Psychiatry, 63, 757–766.
6. Emmerson, B., Frost, A., Fawcett, L., Ballantine, E., Ward, W. & Catts, S. (2006) Do clinical pathways really improve clinical performance in mental health settings? Australasian Psychiatry, 14, 395–398.
7. Jenicek, M. & Hitchcock, D. L. (2005) Evidence-Based Practice: Logic and Critical Thinking in Medicine. Chicago, IL: AMA Press.
8. Citrome, L. & Volavka, J. (2013) Review: 15 antipsychotic drugs are more effective than placebo for the treatment of schizophrenia, but vary in their tolerability. Evidence-Based Mental Health, 17, 9.
9. Achinstein, P. (2003) The Book of Evidence. Oxford: Oxford University Press.
10. Epstein, R. L. (2003) The Pocket Guide to Critical Thinking, 2nd edn. Toronto: Wadsworth Group.
11. Ennis, R. H., Millman, J. & Tomko, T. M. (1985) Cornell Critical Thinking Tests Level X and Level Z Manual, 3rd edn. Pacific Grove, CA: Critical Thinking Books and Software.
12. Wells, L. A. (2002) Philosophy and psychiatry: a new curriculum for child and adolescent psychiatry. Academic Psychiatry, 26, 257–261.
13. Zachar, P. (2012) Evidence-based medicine and modernism: still better than the alternatives. Philosophy, Psychiatry & Psychology, 19, 313–316.
14. Parker, R. B. (1994) Walking Shadow. New York: G. P. Putnam.


Journal of Evaluation in Clinical Practice 21 (2015) 514–517 © 2015 John Wiley & Sons, Ltd.

