Neurocrit Care (2015) 22:337–347 DOI 10.1007/s12028-015-0132-y

REVIEW ARTICLE

Global Monitoring in the Neurocritical Care Unit DaiWai M. Olson1 • W. Andrew Kofke2 • Kristine O’Phelan3 • Puneet K. Gupta1 • Stephen A. Figueroa1 • Stelios M. Smirnakis4 • Peter D. Leroux5 • Jose I. Suarez4 • the Second Neurocritical Care Research Conference Investigators

Published online: 7 April 2015 © Springer Science+Business Media New York 2015

Abstract  Effective methods of monitoring the status of patients with neurological injuries began with non-invasive observations and evolved during the past several decades to include more invasive monitoring tools and physiologic measures. The monitoring paradigm continues to evolve, this time back toward the use of less invasive tools. In parallel, the science of monitoring began with the global assessment of the patient's neurological condition, evolved to focus on regional monitoring techniques, and, with the advent of enhanced computing capabilities, is now moving back to focus on global monitoring. The purpose of this session of the Second Neurocritical Care Research Conference was to collaboratively develop a comprehensive understanding of the state of the science for global brain monitoring and to identify research priorities for intracranial pressure monitoring, neuroimaging, and neuroelectrophysiology monitoring.

Keywords  Neuromonitoring · Neurocritical care · Intracranial pressure · Electrophysiology · Neuroimaging · Neuroprotection

Introduction

A complete list of members of the Second Neurocritical Care Research Conference Investigators appears in the "Appendix".

The opening session of the Second Neurocritical Care Research Conference (NCCRC) meeting, organized by the Neurocritical Care Research Network (NCRN), was moderated by Dr. DaiWai M. Olson and discussed global cerebral monitoring in the neurocritical care unit (NCCU). This article provides edited summaries of the following presentations from the opening session of the meeting: Dr. W. Andrew Kofke presented on foibles in neuroprotection research; Dr. Kristine H. O'Phelan presented on intracranial pressure (ICP) monitoring; Dr. Puneet K. Gupta presented on electrophysiology monitoring; and Dr. Stelios M. Smirnakis presented on magnetic resonance imaging (MRI) techniques.

Corresponding authors: DaiWai M. Olson ([email protected]); Jose I. Suarez ([email protected])

1 Department of Neurology and Neurotherapeutics, UT Southwestern Medical Center, University of Texas Southwestern, 5323 Harry Hines Blvd, Dallas, TX 75390-8548, USA
2 Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, PA, USA
3 Department of Neurology, Miller School of Medicine, University of Miami, Miami, FL, USA
4 Division of Vascular Neurology and Neurocritical Care, Department of Neurology, Baylor College of Medicine, Houston, TX, USA
5 Main Line Health Brain and Spine Center, Wynnewood, PA, USA

The concept of cerebral monitoring has historically been dichotomized as either global or focal. Global cerebral monitoring was the first form of brain monitoring available, but it limited the practitioner by requiring the assumption that all parts of the brain are equally affected by ischemia, pressure, or blood flow. Focal, or targeted, cerebral monitoring was developed in response to practitioners' need to be better informed about a specific lesion or penumbral tissue. Thanks to technological advances, these two approaches now overlap, blurring the
lines of dichotomization. The past few decades have seen new methods of monitoring the brain, advances in established methods of brain monitoring, and advances in information and database technology. While there has been tremendous growth in the volume of data from the neurocritical care unit, the challenge remains to define which information is useful and which will give a true representation of the patient's condition so that secondary brain injury can be minimized.

Should we improve on the design of the wheel or focus our efforts on inventing a new mode of transportation? Inherent in modifications and enhancements made to older technologies is the need to examine their relative contribution within the context of newer monitoring modalities. There is no evidence that improving older technologies will result in better patient outcomes, nor that it will improve our ability to conduct research. For example, global cardiac monitoring gained greatly with the advent of the electrocardiograph (EKG), and the technological advances that allowed for the 12-lead EKG moved cardiac monitoring forward both clinically and in research. However, this does not support the theory that a faster, 36-lead EKG would provide greater certainty in the clinical arena or add to the knowledge base in the research setting.

There have been major technological advances in global brain monitoring over the past century. Four themes are central to this discussion: (1) neuroprotection research, (2) intracranial pressure, (3) electrophysiology, and (4) neuroimaging. These advances are critical in the detection of primary brain injury and in monitoring for secondary brain injury. The goal of caring for acutely neurologically ill patients is to minimize secondary brain injury and, when possible, reverse the underlying pathology.
Various monitoring methods to detect the occurrence of secondary brain injury and gauge its effect have been employed and used to guide trials of agents that impede the cellular injury cascade. Unfortunately, to date all clinical trials of neuroprotective agents have failed despite promising laboratory and preclinical efficacy. These failures are rooted both in the complexity of the cellular injury cascade and in the heterogeneity of patient populations. New methodologies for neuroprotection research must therefore be developed that (1) incorporate rigorous criteria for patient selection, (2) have well-defined secondary endpoints based on validated biomarkers that quantify neuronal death, and (3) use novel designs that take advantage of multimodal monitoring to incorporate the underlying biology. A dynamic paradigm similar to that proposed by Kofke, incorporating concepts from the Plan-Do-Study-Act (PDSA) model, will help future research projects to embrace rather than embattle clinical practice.


In 1783, Dr. Monro helped propel the concept of ICP by describing the skull as a rigid box with three components (blood, tissue, and CSF). His writings were later endorsed by Dr. Kellie in 1824 and now stand as what has become known as the Monro-Kellie doctrine. ICP monitoring is a fundamental measurement in neurocritical care, and many studies detail its use in subarachnoid hemorrhage, ischemic stroke, intracerebral hemorrhage (ICH), and traumatic brain injury. ICP monitoring is generally considered standard of care; however, although its usefulness is manifest in extreme examples, no prospective, randomized studies have shown benefit at the population level.

Although the first electroencephalogram (EEG) and evoked potential (EP) recordings were performed in the late 1800s, their first applications in humans took place nearly 50 years later. Since then, advances in technology and interpretation may allow for novel indications, including detection of secondary brain injury and prognostication [1]. EEG has classically been used in the neurocritical care unit to detect seizures, which are typically nonconvulsive and have a high prevalence in subarachnoid hemorrhage, ischemic stroke, ICH, traumatic brain injury, and hypoxic-ischemic encephalopathy (HIE). Beyond understanding seizure activity, signal-processed multi-lead EEG has recently been explored as a method to identify secondary brain injury. Quantitative EEG uses the digital EEG signal and compressed spectral analysis to reformat the data into a form that can be used in real time at the bedside as a monitor of the global electrical activity of the brain. Somatosensory evoked potentials have also been studied in the neurocritical care unit, but their use has been limited.

The basic concept of radiography was discovered in 1895 and became portable in the 1920s.
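The compressed spectral analysis behind quantitative EEG, described above, can be sketched in a few lines. This is a minimal illustration under assumed parameters (2-second epochs, Hanning window), not the algorithm of any particular bedside monitor:

```python
import numpy as np

def compressed_spectral_array(signal, fs, epoch_s=2.0):
    """Per-epoch power spectra of an EEG trace.

    Stacked over time, these spectra form the compressed spectral
    array (CSA) display used in bedside quantitative EEG.
    """
    n = int(epoch_s * fs)                      # samples per epoch
    window = np.hanning(n)                     # taper to reduce spectral leakage
    epochs = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)     # frequency axis, Hz
    spectra = np.array([np.abs(np.fft.rfft(e * window)) ** 2 for e in epochs])
    return freqs, spectra

# Synthetic "EEG": a pure 10 Hz (alpha-band) rhythm sampled at 256 Hz for 10 s.
fs = 256
t = np.arange(0, 10, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t)

freqs, spectra = compressed_spectral_array(eeg, fs)
dominant = freqs[np.argmax(spectra.mean(axis=0))]
print(dominant)  # prints 10.0 -- the alpha peak dominates
```

Plotting successive rows of `spectra` as a waterfall yields the CSA-style trend display a bedside monitor would draw.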
Computed tomography (CT) was introduced in 1972, quickly followed by positron emission tomography (PET) in 1975 [2] and magnetic resonance imaging (MRI) in 1977 [3]. Specialized MR sequences allow us to image tissue ischemia both acutely (diffusion-weighted imaging) and chronically (T1, T2, FLAIR), visualize brain hemorrhage at its various stages (CT; MR T1, T2, susceptibility imaging), quantify brain edema, and visualize breakdown of the blood-brain barrier (contrast-enhanced CT, MRI). Imaging techniques have continued to evolve, and we now have specialized protocols that allow us to image vascular anatomy (MRA, MRV, CTA, CTV), brain tissue perfusion parameters (MRP, CTP, PET, xenon CT), vascular reactivity and reserve (acetazolamide MRP/CTP, functional MRI), oxygen utilization and extraction fractions (CMRO2, OEF, PET), and the concentration of metabolites and neurotransmitters involved in biochemical processes (PET, MR spectroscopy). Multiple other specialized MR sequences and PET ligands are being developed that increase
the sensitivity and specificity of specific measurements, to which we cannot do justice here. Neuroimaging methods are extremely powerful for precisely quantifying the degree of existing brain injury, as well as for monitoring hemodynamic and tissue parameters that correlate with the risk of impending injury. They therefore represent an extremely promising tool for improving patient selection in clinical trials and for providing quantitative, physiologically meaningful measurements that can serve as biomarker endpoints. Judiciously incorporating such endpoints into outcome-oriented neurocritical care trials will provide important quantitative measures by which to gauge the success of applied interventions and will allow us to better understand the underlying pathophysiology. Each section below briefly describes the evolution and current state of the science for each of these four themes.

Neuroprotection

The current state of the science in human neuroprotection is best described as a series of failed attempts at translation. There are over 2310 completed clinical trials in stroke, yet few have yielded reproducible results showing efficacy. While many interventions have been trialed and discarded, most showed promise in the laboratory or preclinical setting using homogeneous animal models. The approach taken to date is primarily one in which a discrete, pathophysiologically focused intervention is examined for its impact on a specific population with a non-discrete, multifaceted pathophysiological process. For the reasons outlined below, this has led to a call for a new approach to translating neuroprotection research.

Unfortunately, the disease processes involved in neuroprotection studies have many pathophysiological pathways. Sherman Stein from the University of Pennsylvania, Department of Neurosurgery, in personal communication with A. Kofke, provides additional insight into this complexity. Referencing Degracia [4], Stein points out that the complexity is compounded by multiple pathophysiological accommodation mechanisms that accrue after an insult, suggesting that the targets are not only multiple but also moving. There are, furthermore, multiple time-dependent pathophysiological factors, each with its own weight and with exponential interactions, making it seem obvious that a therapy directed to one pathway cannot work unless that pathway has a very high time-related weight, like rapid resumption of blood flow in stroke (early reperfusion good, late reperfusion ineffective or bad).

In addition, the problem is exacerbated by the heterogeneity within and between health systems. For example, some NCCUs
measure CPP based on MAP at the level of the heart, whereas others use MAP at the level of the tragus, leading to differences of up to 15 mmHg in CPP between hospitals. This could be important for multi-institutional research protocols if a CPP of 60 mmHg is a goal [5].

The current state of the science for neuroprotection is thus one that has heretofore called for single therapies for complex diseases and, in the context of multi-institutional studies, must also deal with between-health-system heterogeneity. The solution suggested by Kofke is to use multimodal, pathophysiologically directed therapeutic interventions to address the multiple pathophysiological pathways in the genesis of secondary neurological injury [5]. Unfortunately, simply evaluating a collection of therapies all at once is likely another prescription for failure. What if there are negative between-therapy interactions? Should there not be an assessment of the likely weights of proposed therapies, introducing them from high to low weight? A rational, step-wise system with serial introduction of each therapy is needed.

Kofke suggests adapting the PDSA approach, widely used for quality improvement processes in hospitals, as a potential solution to this problem (Fig. 1). In brief, PDSA is the Plan-Do-Study-Act paradigm used in local quality improvement work: an experimental investigation is first Planned, then it is Done, then it is Studied as the data are evaluated, and then Action is taken in deciding whether to keep the intervention. As applied to neuroprotection, PDSA could be applied to each incrementally added therapy in a planned clinical investigation, eventually leading to a "cocktail" of combined therapies. This is done in a nonrandomized manner, but the resulting combination of therapies, now found to be safe with reasonable evidence of efficacy, can then be introduced into a traditional prospective randomized investigation.
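As an aside on the CPP heterogeneity example above: the quoted difference of up to 15 mmHg follows from simple hydrostatics, since CPP = MAP − ICP and a transducer zeroed at the heart rather than the tragus includes the pressure of the blood column between the two reference points. A minimal numeric sketch, with the vertical offset and vital-sign values assumed purely for illustration:

```python
# Illustrative only: the 20 cm offset, MAP of 90, and ICP of 15 are assumed
# values, not taken from the article.

MMHG_PER_CM_BLOOD = 0.74    # approximate pressure of a 1 cm blood column, mmHg
offset_cm = 20.0            # assumed heart-to-tragus vertical distance

map_heart = 90.0            # MAP referenced at heart level, mmHg
icp = 15.0                  # intracranial pressure, mmHg

# Re-referencing MAP to the tragus subtracts the hydrostatic column.
map_tragus = map_heart - offset_cm * MMHG_PER_CM_BLOOD

cpp_heart = map_heart - icp      # CPP computed from heart-level MAP
cpp_tragus = map_tragus - icp    # CPP computed from tragus-level MAP

print(round(cpp_heart - cpp_tragus, 1))  # prints 14.8 -- roughly the 15 mmHg cited
```

A 20 cm head-of-bed-related offset therefore accounts for nearly the entire between-hospital discrepancy described in the text.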
Presumably, the effect size will be larger when tested in a heterogeneous human disease population, which should increase the tolerance of the therapy to differences between health systems. PDSA is non-randomized and is done with sequential changes or additions in a local hospital or unit. In clinical research, one would incrementally introduce a new intervention on top of the current standard of care. Each new intervention is supported by preclinical or perhaps retrospective clinical data indicating neuroprotective potential, and each targeted intervention then undergoes serial measurements of that target's desired physiological effect. For example: a drug used to treat blood pressure (BP) following subarachnoid hemorrhage is added to the current standard, and BP is measured. Concurrently, there is evaluation for no worsening in outcome based on surrogates, such as biomarkers or perhaps EEG effects of lower BP. If no worsening of neurological or systemic outcomes is suggested, and there is an effect on


[Fig. 1 PDSA flow diagram: a new single therapy is started, the desired physiologic/biochemical effect is assessed, and the therapy is discarded if results are unacceptable.]
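The serial PDSA loop diagrammed in Fig. 1 can be sketched as a short decision procedure. This is a schematic illustration only; the function name, the `evaluate` step, and the toy therapy labels are hypothetical, not from the article:

```python
def pdsa_serial_introduction(candidates, evaluate):
    """Serially trial candidate therapies on top of the current cocktail,
    keeping only those that are safe and show the desired effect.

    `evaluate(cocktail, therapy)` stands in for the Do/Study steps and
    returns (safe, effective) booleans; both names are hypothetical.
    """
    cocktail = []                                       # therapies retained so far
    for therapy in candidates:                          # Plan: choose next candidate
        safe, effective = evaluate(cocktail, therapy)   # Do + Study
        if safe and effective:                          # Act: keep ...
            cocktail.append(therapy)
        # ... otherwise discard and move to the next candidate
    return cocktail

# Toy run: pretend only therapies "A" and "C" pass the safety/efficacy checks.
result = pdsa_serial_introduction(
    ["A", "B", "C"],
    lambda cocktail, t: (t != "B", t in {"A", "C"}),
)
print(result)  # prints ['A', 'C']
```

In a real investigation the evaluate step would be a planned clinical sub-study with prespecified safety surrogates, and candidates would be ordered from highest to lowest expected pathophysiological weight, as the text suggests.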
