
Socioeconomics

REVIEW

Evidence-based clinical practice for the neurointerventionalist

Joshua A Hirsch,1 Aquilla S Turk,2 J Mocco,3 David J Fiorella,4 Mahesh V Jayaraman,5 Phillip M Meyers,6 Albert J Yoo,7 Laxmaiah Manchikanti8,9

1 Neuroendovascular Program, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
2 Departments of Neurosurgery and Radiology, Medical University of South Carolina, Charleston, South Carolina, USA
3 Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
4 Department of Neurosurgery, State University of New York at Stony Brook, Stony Brook, New York, USA
5 Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
6 Departments of Radiology and Neurological Surgery, Columbia University, New York, New York, USA
7 Neuroendovascular Program, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
8 Pain Management Center of Paducah, Paducah, Kentucky, USA
9 Anesthesiology and Perioperative Medicine, University of Louisville, Louisville, Kentucky, USA

Correspondence to Dr Joshua A Hirsch, Neuroendovascular Program, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA; [email protected]

Received 4 February 2014
Accepted 5 February 2014

To cite: Hirsch JA, Turk AS, Mocco J, et al. J NeuroIntervent Surg Published Online First: 27 February 2014. doi:10.1136/neurintsurg-2014-011155

ABSTRACT
The field of neurointerventional (NI) surgery has developed in the context of technologic innovation. Many treatments readily provided in 2014 would have been hard to imagine as recently as 10 years ago. The reality of present day NI care is that, while providers, payors, policy makers and patients rely on evidence to guide NI decision-making, the available data are often less robust than participants might desire. In this paper we will explore the fundamentals of evidence-based clinical practice.

INTRODUCTION
In 2014, perhaps the most frequently discussed research initiatives are evidence-based medicine (EBM) and comparative effectiveness research (CER).1 This paper will review EBM and describe a variant relevant to neurointerventional (NI) providers: evidence-based clinical practice. The rationale for implementing EBM hinges on improving patient care through better-informed clinical decision-making in diagnosis and treatment. EBM, as its name indicates, refers to the process of evaluating the currently available scientific, epidemiological and statistical evidence and then applying the resulting conclusions to clinical decision-making and practice.2 An acknowledged challenge for clinical decision-making is that it is difficult to predict whether the available data apply to a specific patient (ie, whether the patient resembles the relevant study population).3 Put differently, while it is difficult to be certain where a patient falls within a bell-shaped curve, evidence allows a practitioner to make the best possible estimation. For medical purposes, evidence can derive from any level of data or information and can be obtained through experience, observational studies or experimental research.4 EBM endeavors to systematize knowledge and stresses the criticality of evidence from clinical research.1 EBM has also drawn widespread attention in many circles, including the Institute of Medicine (IOM).5 In 'Crossing the Quality Chasm,' the IOM called attention to the challenges that healthcare participants face in applying new developments to the day-to-day practice of medicine, thereby demonstrating its support of EBM. In addition, in 2011 the IOM re-engineered its definition of clinical guidelines from an earlier definition published in 1990.6 7 Further, the IOM has published multiple manuscripts, including methodology to assist those conducting systematic reviews.8 As defined by David Sackett, EBM is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.9


Others have defined it more specifically as "the use of mathematical estimates of the risk of benefit and harm, derived from high quality research on population samples, to inform clinical decision-making in the diagnosis, investigation or management of individual patients".10 The practice of EBM integrates the physician's individual clinical expertise with the best available external clinical evidence.8–14 While EBM continues to disseminate widely, criticism has included the idea that its universal application might suppress physicians' clinical freedom and restrict their ability to alter treatment plans to address unique patient-specific problems, which are often nuanced and not addressed by the body of existing clinical evidence.9–11 In part to broaden its application from individual patients to healthcare services in general, EBM has been called by various names, including evidence-based practice (EBP), evidence-informed healthcare and evidence-based healthcare. EBP is thus an interdisciplinary approach to clinical practice. Evidence-based neurointerventional practice accordingly entails promoting health or providing care by integrating the best available evidence with practitioner expertise and other resources, while simultaneously taking into account individual patient characteristics, values and preferences. The broad application of EBM includes rigorous analysis of the published literature to synthesize high-quality evidence, such as systematic reviews, and the preparation of clinical guidelines.6–8 The IOM has described systematic reviews as a tool to identify, select, assess and synthesize the findings of similar studies, helping to clarify what is and is not known about the potential benefits and harms of drugs, devices and other healthcare services. The IOM also recently defined clinical practice guidelines as statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of available treatment options.6 Manuscripts or guidelines that incorporate EBM or CER methods can be prepared by any individual and, as such, have the potential to be misused. In neurointerventional surgery (NIS), the Society of NeuroInterventional Surgery (SNIS) guidelines are developed using the stated IOM guideline criteria with the strictest application of high-quality statistical methodology.15–17


EBM is often seen as a scientific tool for quality improvement, even though its application requires consideration of scientific facts along with value judgments and the cost of different treatments. As such, EBM exerts a fundamental influence on certain key aspects of medical professionalism. The actual value of evidence is related to its application and the circumstances in which, and for whom, it is used. The intricate factors involved include the type of evidence reviewed, the methodology utilized, the knowledge and experience of the reviewers, and many others, including bias, self-interest and financial considerations. In order for clinicians to interpret the results of clinical research effectively, a formal set of rules must complement medical training and common sense.1 Consequently, knowing the tools of EBP is necessary, but not by itself sufficient, for delivering the highest quality of patient care. It remains a challenge to balance EBM, CER and neurointerventional practice with new scientific innovations and the traditional methods of caring for the sick. The definition of EBM is used loosely and can include conducting a statistical meta-analysis of accumulated research, promoting randomized controlled trials (RCTs), supporting uniform reporting styles for research or having a personal orientation toward critical self-evaluation.

HISTORICAL ASPECTS
While the roots of EBM are several hundred years old, it was formally defined in the late 1970s when a group of researchers at Canada's McMaster University authored a series of manuscripts on how to critically appraise scientific information.11 The term 'evidence-based medicine' first appeared in 1990 at McMaster University, as part of an information packet supplied to incoming residents, and subsequently appeared in print in the ACP Journal Club in 1991.11 The McMaster group, later joined by academic physicians largely from the USA, formed the first International EBM Working Group and published the 'Users' Guides to the Medical Literature' in JAMA between 1993 and 2000 as a 25-part series that still resonates today. These papers were later turned into a textbook on EBM.11 12 In 1993 the Cochrane Collaboration was created in response to Archie Cochrane's call for up-to-date systematic reviews of all relevant RCTs of healthcare, and it continues to publish quarterly systematic reviews.13 These reviews are used by the National Health Service in the UK and, because of their high quality, elsewhere. By 2013 the Cochrane Collaboration had published approximately 5804 full reviews, with another 2386 protocols for reviews in production.13

In the USA, federal efforts at EBM might be considered to have started with the short-lived National Center for Healthcare Technology. In 1972 the Office of Technology Assessment (OTA) was created as an advisory agency to Congress, with healthcare among the issues it covered; the OTA was eliminated in 1995. The Agency for Healthcare Policy and Research (AHCPR) was created as an arm of the Department of Health and Human Services (DHHS) in 1989, during the presidency of George H W Bush.14 The agency's role was to enhance the quality and ultimately the effectiveness of healthcare services in the USA. Being an agency of government, the AHCPR prioritized areas that resulted in disproportionate government expenditures. It developed 19 clinical practice guidelines at the astronomical cost of $750 million.18 Ultimately, secondary to significant political pressure, the AHCPR morphed into the Agency for Healthcare Research and Quality (AHRQ) in December 1999.14 Its mission statement included the very straightforward proposition of promoting 'quality research for quality healthcare'.

Thus, the AHRQ attempts to facilitate the generation and appropriate application of evidence that can be used to enhance the quality of healthcare. The Medicare Modernization Act (MMA), enacted in 2003 during the presidency of George W Bush, represented a tremendous change to Medicare.19 The MMA authorized AHRQ to spend up to $50 million in 2004, and additional amounts in future years, to conduct and support research with a focus on 'outcomes, comparative clinical effectiveness, and appropriateness of healthcare items and services' for Medicare and Medicaid enrollees.19 Using that funding, AHRQ established its Effective Health Care Program. Further funding in the American Recovery and Reinvestment Act (ARRA) boosted EBM, the role of government and CER.20 The Patient Protection and Affordable Care Act (ACA), signed into law on March 23, 2010, created the Patient-Centered Outcomes Research Institute (PCORI), moving CER further forward.21–25

DEFINITION OF EBM
As described earlier, EBM is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.9 In contrast, CER is defined as the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition or to improve the delivery of care.1 Neither EBM nor CER is based solely on randomized trials, even though, in the hierarchy of clinical research, randomized trials are considered a higher level of evidence. The authors of this review want to highlight that EBM should be seen as a comprehensive integration of the scientific evidence, not only data from RCTs. Clinician experience is a key consideration in patient-specific issues, and it is this amalgamation that best aids clinical decision-making. EBM thus involves two basic principles. First, scientific evidence is important in clinical decision-making, but patients' values should also be considered. Second, while different forms of evidence carry different weights, with RCTs at the top, this hierarchy is not absolute.1 10 26 All definitions of EBM involve three overlapping processes:6 7
1. Systematic review of the available scientific studies
2. Integration of such scientific data with clinical experience
3. Patient participation in decision-making.

Hierarchy of evidence
EBM is informed by hierarchical evidence, and this hierarchy informs clinical decision-making. The descending order of evidentiary weight is: (1) systematic reviews of multiple high-quality randomized trials; (2) a single high-quality randomized trial; (3) systematic reviews of observational studies addressing patient-important outcomes; (4) single observational studies addressing patient-important outcomes; (5) physiologic studies; and (6) unsystematic clinical observations.12 It is important to reiterate that this hierarchy should not be viewed as absolute. Furthermore, the quality of a given trial's design and its relevance to current practice must be considered. A poorly designed or executed RCT is no better than, and because of misapplication may well be worse than, observational or single-arm trial data. It is important to recognize that, if treatment effects are sufficiently large and consistent, observational studies may provide more compelling evidence than RCTs, particularly in situations where RCTs are not feasible.12 However, unsystematic clinical observations are certainly more susceptible to quality variation and are often limited by small sample size and, more importantly, by deficiencies of inference.12
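A minimal sketch, assuming hypothetical names, of how this descending order might be encoded as a simple ordinal ranking. It captures position in the hierarchy only and deliberately ignores the caveats discussed above (study quality, applicability and effect size).

```python
# Illustrative sketch (hypothetical names): the descending hierarchy of evidence
# encoded as an ordered enumeration. Higher values carry greater evidentiary
# weight; as the text emphasizes, the ranking is not absolute.
from enum import IntEnum


class EvidenceLevel(IntEnum):
    UNSYSTEMATIC_CLINICAL_OBSERVATION = 1
    PHYSIOLOGIC_STUDY = 2
    SINGLE_OBSERVATIONAL_STUDY = 3
    SYSTEMATIC_REVIEW_OF_OBSERVATIONAL_STUDIES = 4
    SINGLE_HIGH_QUALITY_RCT = 5
    SYSTEMATIC_REVIEW_OF_RCTS = 6


def nominally_stronger(a: EvidenceLevel, b: EvidenceLevel) -> EvidenceLevel:
    # Naive comparison by position in the hierarchy; real appraisal must also
    # weigh study quality, applicability and effect size.
    return max(a, b)


if __name__ == "__main__":
    print(nominally_stronger(EvidenceLevel.SINGLE_HIGH_QUALITY_RCT,
                             EvidenceLevel.SINGLE_OBSERVATIONAL_STUDY).name)
```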




DISCUSSION
Scientific studies are a critical component of evidence-based clinical practice. However, pertinent studies must be well constructed and durable; a poorly designed RCT should not overturn clinical experience and observational studies. In addition, evidence derived from RCTs is only directly applicable to those patients who would have qualified for inclusion in those trials and to those treatments offered within the trials. As such, evidence-based clinical practice must remain constantly adaptive, particularly within the context of rapidly evolving, technologically driven subspecialties of medicine. For the majority of patients, clinical decision-making requires an extrapolation of the knowledge gained from RCTs (or other studies) that address similar (but not identical) scenarios. This extrapolation requires the application of clinical experience and represents the art of medicine.

Patient-centered NI care is a vision that can be realized. If the field is to move forward, it must advance through well-thought-out research and continuously motivate good practice. At the same time, we must still promote the continued development of the new techniques that have revolutionized the field of NI throughout its existence.

SNIS guidelines often employ the American Heart Association (AHA) Evidence-Based Scoring System. In this system, recommendations are classified from Class I to Class III using the paradigm summarized below (a schematic sketch of the scheme follows at the end of this section). Class I recommendations are made for conditions for which there is evidence, general agreement, or both that a given procedure or treatment is useful and effective. At the other end of the spectrum, Class III recommendations are rendered for conditions for which there is evidence, general agreement, or both that the procedure or treatment is not useful or effective and in some cases may be harmful. Class II recommendations are given for conditions where there is conflicting evidence, a divergence of opinion, or both about the usefulness or efficacy of a procedure or treatment, and are often further divided into IIa and IIb subcategories. The AHA system also applies a stratification of the evidence ranging from A to C, where A denotes data derived from multiple RCTs, B data derived from a single RCT or non-randomized studies, and C consensus opinion of experts.

There are several examples of recent RCTs within the NI space that would suggest limited benefit of our core treatments, including treatment of arteriovenous malformations, vertebral augmentation and stroke.27–29 These have been discussed in a number of articles.30 The recent IMS 3 trial is a representative example of an AHA Class I prospective international randomized trial; it suggested that endovascular therapy does not help patients with stroke beyond the benefits of intravenous tissue plasminogen activator. The implications of this study are far-reaching on the surface and, taken to their logical extreme, suggest that stroke patients do not benefit from endovascular treatment. However, on more rigorous scrutiny, there are a number of limitations within the trial that weaken the conclusions one might draw. To discuss a few: endovascular therapy targets large vessel occlusions; however, there was no mandatory vessel imaging in the triage process, and 26% of patients who underwent angiography did not have a large vessel occlusion. The prespecified subgroup analysis of patients with documented large vessel occlusions did show a statistically significant improvement in functional outcome in those patients who underwent endovascular therapy. The trial took over 6 years to recruit patients, with significant technological evolution occurring during that time. At the conclusion of the trial, mechanical stent retrievers and larger bore aspiration catheters were considered the standard of care; however, these devices combined represented approximately 18% of the cases treated in the trial. The majority of cases (80%) were treated with intra-arterial thrombolysis or MERCI, which are rarely used in today's NI practice.

Shaneyfelt et al31 described the frequent failure of clinicians to implement clinical interventions that have been shown to be efficacious.32 33 The SNIS has responded with educational efforts at national meetings and numerous collaborative peer-reviewed publications in subspecialty journals on evidence-based standards and guidelines.
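A minimal, illustrative sketch, assuming hypothetical structure and field names, of the AHA class/level pairing described earlier in this section: each recommendation combines a class (I, IIa, IIb or III) with a level of evidence (A, B or C). The descriptions paraphrase the summary above rather than quoting AHA language.

```python
# Illustrative sketch (hypothetical names and structure) of the AHA class/level
# pairing summarized in this section. Descriptions paraphrase the text; they are
# not official AHA wording.
from dataclasses import dataclass

CLASS_MEANING = {
    "I":   "evidence and/or general agreement that the procedure/treatment is useful and effective",
    "IIa": "conflicting evidence or divergence of opinion; weight of evidence/opinion favors usefulness",
    "IIb": "conflicting evidence or divergence of opinion; usefulness/efficacy less well established",
    "III": "evidence and/or general agreement that the procedure/treatment is not useful/effective and may be harmful",
}

LEVEL_MEANING = {
    "A": "data derived from multiple randomized trials",
    "B": "data derived from a single randomized trial or non-randomized studies",
    "C": "consensus opinion of experts",
}


@dataclass(frozen=True)
class Recommendation:
    rec_class: str  # 'I', 'IIa', 'IIb' or 'III'
    level: str      # 'A', 'B' or 'C'

    def describe(self) -> str:
        return (f"Class {self.rec_class} ({CLASS_MEANING[self.rec_class]}); "
                f"Level {self.level} ({LEVEL_MEANING[self.level]})")


if __name__ == "__main__":
    # Example: a Class I recommendation supported by multiple randomized trials.
    print(Recommendation("I", "A").describe())
```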

CONCLUSION
Where does that leave SNIS members and readers of JNIS? First, we must acknowledge that shifting toward evidence-based clinical practice is not as easy as it first sounds. EBM relies equally on the integrative skills of the individual clinician and on the systematically organized analysis and synthesis provided by the review process itself. NI specialists must recognize that evidence is variable in quality and quantity and must be related to the circumstances of the individual patient. Put differently, the meaning of any body of evidence differs for physicians, administrators, payers and patients. Being able to interpret both the validity of evidence and its relative value is essential to determining meaningful policy. It is in that context that the SNIS ushers in a new era of evidence-based clinical practice.

Correction notice This article has been corrected since it was published Online First. The author name David A Fiorella has been amended to read David J Fiorella.

Contributors JAH and LM did the original research and provided a first draft. All authors reviewed the draft, provided commentary and editorial suggestions.

Competing interests None.

Provenance and peer review Not commissioned; internally peer reviewed.

REFERENCES
1. Manchikanti L, Falco FJ, Singh V, et al. An update of comprehensive evidence-based guidelines for interventional techniques in chronic spinal pain. Part I: introduction and general considerations. Pain Physician 2013;16(2 Suppl):S1–48.
2. Jenicek M. How do we see medicine, health and disease? A basic set of rules and fundamental paradigms. In: Foundations of evidence-based medicine. New York: The Parthenon Publishing Group, 2005:3–13.
3. Eisenberg JM. What does evidence mean? Can the law and medicine be reconciled? J Health Polit Policy Law 2001;26:369–81.
4. Jenicek M. The work of physicians with individuals and communities. Epidemiology and other partners in evidence-based medicine. In: Foundations of evidence-based medicine. New York: The Parthenon Publishing Group, 2005:3–13.
5. Institute of Medicine (IOM). Crossing the quality chasm: a new health system for the 21st century. Washington, DC: The National Academies Press, 2001.
6. Graham R, Mancher M, Wolman DM, et al; Committee on Standards for Developing Trustworthy Clinical Practice Guidelines, Institute of Medicine. Clinical practice guidelines we can trust. Washington, DC: The National Academies Press, 2011.
7. Field MJ, Lohr K; Committee to Advise the Public Health Service on Clinical Practice Guidelines, Institute of Medicine. Clinical practice guidelines: directions for a new program. Washington, DC: The National Academies Press, 1990.
8. Institute of Medicine (IOM). Finding what works in health care: standards for systematic reviews. Washington, DC: The National Academies Press, 2011.
9. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn't. BMJ 1996;312:71–2.
10. Greenhalgh T. How to read a paper: the basics of evidence-based medicine. 4th edn. Wiley-Blackwell, 2010.
11. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence-based medicine. London: Churchill Livingstone, 1996.
12. Guyatt G, Rennie D, Meade MO, et al. Users' guides to the medical literature: a manual for evidence-based clinical practice. 2nd edn. McGraw-Hill, 2008.
13. The Cochrane Collaboration. Evidence-based health care. http://www.cochrane.org/about-us/evidence-based-health-care
14. US Department of Health and Human Services. Agency for Healthcare Research and Quality. http://www.ahrq.gov/
15. Hirsch JA, Meyers PM, Barr J, et al. Technical standards and practice guidelines: should we? Why now? Why SNIS? J Neurointerv Surg 2009;1:5–7.
16. Powers CJ, Hirsch JA, Hussain MS, et al. Standards of practice and reporting standards for carotid artery angioplasty and stenting. J Neurointerv Surg 2014;6:87–90.
17. Chandra RV, Meyers PM, Hirsch JA, et al. Vertebral augmentation: report of the Standards and Guidelines Committee of the Society of NeuroInterventional Surgery. J Neurointerv Surg 2014;6:7–15.
18. Gonzalez E, Materson R. The nonsurgical management of acute low back pain: cutting through the AHCPR guidelines. New York: Demos Vermande, 1997.
19. US Government. H.R. 1: The Medicare Prescription Drug, Improvement, and Modernization Act of 2003. P.L. 108–173, enacted 8 December 2003.
20. US Government. H.R. 1: American Recovery and Reinvestment Act of 2009. Signed into law 17 February 2009.
21. Manchikanti L, Hirsch JA. Patient Protection and Affordable Care Act of 2010: a primer for neurointerventionalists. J Neurointerv Surg 2011;4:141–6.
22. Manchikanti L, Hirsch JA. Obamacare 2012: prognosis unclear for interventional pain management. Pain Physician 2012;15:E629–40.
23. US Government, Centers for Medicare and Medicaid Services. Health Insurance Marketplace. https://www.healthcare.gov
24. Patient-Centered Outcomes Research Institute (PCORI). http://www.pcori.org
25. Manchikanti L, Helm S, Hirsch JA. The evolution of the Patient-Centered Outcomes Research Institute. J Neurointerv Surg 2012;4:157–62.
26. Jonas WB. Building an evidence house: challenges and solutions to research in complementary and alternative medicine. Forsch Komplement Med 2005;12:159–67.
27. Mohr JP, Parides MK, Stapf C, et al. Medical management with or without interventional therapy for unruptured brain arteriovenous malformations (ARUBA): a multicentre, non-blinded, randomised trial. Lancet 2014;383:614–21.
28. Kallmes DF, Comstock BA, Heagerty PJ, et al. A randomized trial of vertebroplasty for osteoporotic spinal fractures. N Engl J Med 2009;361:569–79.
29. Broderick JP, Palesch YY, Demchuk AM, et al. Endovascular therapy after intravenous t-PA versus t-PA alone for stroke. N Engl J Med 2013;368:893–903.
30. Zaidat OO, Lazzaro MA, Gupta R, et al. Interventional Management of Stroke III Trial: establishing the foundation. J Neurointerv Surg 2012;4:235–7.
31. Shaneyfelt T, Baum KD, Bell D, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA 2006;296:1116–27.
32. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635–45.
33. Hayward RA, Asch SM, Hogan MM, et al. Sins of omission: getting too little medical care may be the greatest threat to patient safety. J Gen Intern Med 2005;20:686–91.
