Available online at www.sciencedirect.com

Editorial overview: Large-scale recording technology: Scaling up neuroscience

Francesco P Battaglia and Mark J Schnitzer

Current Opinion in Neurobiology 2015, 32:iv–vi
For a complete overview see the Issue
Available online 7th May 2015
http://dx.doi.org/10.1016/j.conb.2015.03.002
0959-4388/© 2015 Elsevier Ltd. All rights reserved.

Francesco P Battaglia
Donders Institute for Brain, Cognition and Behaviour, Radboud Universiteit, Heyendaalseweg 135, Nijmegen 6525AJ, The Netherlands
e-mail: [email protected]

Francesco Battaglia received his PhD in computational neuroscience at SISSA, Trieste, Italy. He then learned neural ensemble recording at the University of Arizona, Tucson, AZ, and has since focused on characterizing the cortical neural ensemble code and hippocampal–cortical interactions. He is presently at the Donders Institute for Brain, Cognition and Behaviour at Radboud University, Nijmegen, The Netherlands, where he leads the ‘Neuronal Networks of Memory’ group.

Mark J Schnitzer
Departments of Biology and Applied Physics, Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA

Mark J Schnitzer is an Investigator of the Howard Hughes Medical Institute and a faculty member in the Departments of Applied Physics and Biology at Stanford University. His lab has pioneered a number of optical technologies for in vivo brain imaging, several of which are widely used and commercially available. His lab now applies optical approaches to the study of large-scale neural coding and dynamics in behaving animals, focusing on neural circuits underlying memory formation and retrieval.

As neuroscience advances, it becomes increasingly clear that many of the most pressing questions about brain function can only be answered by studying large-scale neural circuits. Until recently, research at this level of analysis was relatively under-represented, for two main reasons. First, the technical means for measuring the activity of many neurons simultaneously were lacking, or required heroic efforts that only a few pioneering labs could master. Second, the theoretical foundations and analytical tools for making sense of the enormous amount of complex data generated by monitoring neural circuits were still quite rudimentary.

The last few years have seen impressive advances in the technological domain, and this is spurring rapid catch-up progress in the conceptual, analytical realm. Neurotechnology has become a field in its own right, combining disciplines as diverse as optics, electronics, materials engineering, molecular biology and genetics. One of the main goals in neurotechnology is to expand the amount of data that can be collected from a neural system: monitoring more neurons, for longer durations, with superior spatio-temporal resolution and with better specificity. Thanks to recent technological strides, the typical dataset for a systems neuroscience experiment has grown from a few hundred kilobytes to tens or hundreds of gigabytes, or more. Consequently, data-analytic methods are developing rapidly, paralleling the trend in many fields toward ‘big data’, but with several interesting peculiarities and elements of originality. Overall, important conceptual advances in systems neuroscience are occurring that would have been unimaginable just a few years ago.

Our aims for this issue of Current Opinion in Neurobiology are threefold. We offer an overview of technical advances for monitoring neural circuit activity, including optical, electrical and fMRI modalities.
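The jump from kilobyte- to gigabyte-scale datasets can be made concrete with a back-of-envelope calculation. The parameters below are illustrative assumptions, not the specifications of any system discussed in this issue: a 128-channel extracellular probe sampled at 30 kHz, and a 512 × 512-pixel imaging sensor read out at 30 Hz, both at 16 bits per sample.

```python
# Illustrative data-rate estimates; all parameters are assumptions,
# not specifications of any particular recording system.
BYTES_PER_SAMPLE = 2  # 16-bit samples

ephys_rate = 128 * 30_000 * BYTES_PER_SAMPLE      # 128 channels at 30 kHz
imaging_rate = 512 * 512 * 30 * BYTES_PER_SAMPLE  # 512x512 pixels at 30 Hz

for name, rate in [("ephys", ephys_rate), ("imaging", imaging_rate)]:
    gb_per_hour = rate * 3600 / 1e9
    print(f"{name}: {gb_per_hour:.1f} GB/hour")
```

Both modalities land in the tens of gigabytes per hour of raw data under these assumptions, which is the scale driving the analysis and data-management efforts described later in this issue.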
Next, we look at data analysis and theoretical neuroscience concepts that are helping to organize and make sense of the great wealth of data generated. Finally, we provide some examples of conceptual breakthroughs that emerged from recent experiments enabled by the approaches highlighted in this issue.

Our summary of technologies for recording from large-scale neural ensembles starts with optical imaging, which can presently sample a thousand or more individual neurons, while also offering the possibility to explore the dynamics of neuronal sub-components such as dendrites or synapses. Peron et al. offer a primer on in vivo optical brain imaging and a critical examination of its present strengths and weaknesses and how these are evolving with the emergence of new methodologies. The current and upcoming generations of genetically encoded calcium indicators (GECIs)
are discussed, along with microscopy methods, from Two-Photon Scanning Laser Microscopy (TPSLM) to newer options like light-sheet and light-field microscopy that can boost the speed of data acquisition by orders of magnitude. Fast imaging methods permit imaging of large brain areas or, in small organisms like the zebrafish, the brain in its entirety. As described by Ahrens et al., this species has enormous advantages for optical imaging: the larval zebrafish is optically transparent, so the brain may be imaged without invasive surgical preparation. Moreover, transgenic lines are available that express GECIs in genetically defined cell types. In a completely new, exciting direction, optical imaging is enabling large-scale studies of cellular excitation in glial cells of the mammalian brain, opening a new world of interaction between these cells and neurons (Nimmerjahn et al.).

Most optical imaging techniques, however, suffer from two limitations. First, it is usually necessary to fix the animal’s head under a microscope, which constrains the range of behaviors that can be analyzed. Second, deep brain areas remain hard to access optically. Ziv et al. describe how optical microendoscopy can help overcome these problems: cylindrically shaped ‘optical needles’ can be implanted into tissue to access deep brain regions. When combined with a miniaturized fluorescence microscope that can be mounted on the head of a freely moving mouse, imaging can be applied to the study of naturalistic behaviors.

Very significant progress has also boosted ‘old-style’ electrophysiology. Electrical signals still offer substantial advantages over imaging, as they reflect voltage dynamics directly, including at the sub-millisecond time scale, which is crucial to an understanding of phenomena such as temporal coding and neural synchronization.
Also, electrophysiological probes can readily be implanted in deep structures and are easy to integrate into lightweight implants usable in freely moving animals. In this domain, silicon CMOS-based probes have seen considerable progress in recent years, summarized by Ruther et al. The spatial resolution and number of recording sites have been scaled up, and the ability to integrate pre-processing electronics (e.g. amplification, digitization, multiplexing) in the probe base or probe shaft helps deal with the resulting, very high data rates without overly cumbersome cables or electronics. Microfabricated electrophysiological probes have also been realized on flexible (e.g. polyimide) supports, which may be used for epidural implants and give high-resolution information about electrical activity across multiple brain areas, even in larger, primate brains. Fries et al. and Fujii et al. showcase electrocorticography (ECoG), a technique that makes use of this approach, describing how it can reveal long-range activity coherence between different visual cortical areas (Fries et al.) and serve as a readout for motor brain–machine interfaces (Fujii et al.).

At even higher spatial resolution, nanotechnology is starting to show notable promise. Melosh et al. describe interfacing nanoprobes of similar size to the biological components that they target. These probes can make intimate contact with neurons, for example by being engulfed by them, and can offer a rich view of membrane potential, paralleling traditional intracellular electrophysiology but in a far more scalable fashion.

Another important change in the neuroscience community is the availability of open-source hardware for electrophysiology, as discussed by Siegle et al. This has lowered the barriers to large ensemble recording and offers creative researchers the possibility of realizing innovative experimental setups. An example is closed-loop systems, in which neural or behavioral data are used to control stimulators or actuators in real time (Carmena et al.), which, in turn, may be used both for neuroscientific investigation and for applications such as brain–machine interfaces. Another avenue for scaling up the reach of electrophysiology is to combine fMRI with local field potential recording. Logothetis et al. show that one can trigger the analysis of fMRI signals on EEG events, and they discuss whole-brain responses to fast events such as hippocampal sharp waves.

The large datasets acquired with many of the new techniques necessitate new ways to systematize and analyze them. Tiesinga et al. discuss data organization and integration efforts by the European Human Brain Project, one of the large initiatives concerning ‘big data’ in neuroscience. Freeman et al. describe how open-source software tools, derived from the information technology industry, may be adapted to the analysis of neuroscience data, thereby improving standardization and collaboration.
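The closed-loop idea mentioned above can be sketched in a few lines. The following is a pure simulation under stated assumptions: the signal is synthetic noise standing in for a streamed neural recording, and the threshold, refractory period, and the ‘trigger’ action are all illustrative placeholders for what, in a real system, would be device I/O to acquisition hardware and a stimulator.

```python
import numpy as np

# Stand-in for a streamed neural signal (real systems read from hardware).
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, 10_000)

THRESHOLD = 3.0   # detection threshold (assumed, in signal units)
REFRACTORY = 50   # samples to ignore after each trigger (assumed)

triggers = []
last_trigger = -REFRACTORY
for i, sample in enumerate(signal):
    # Fire only on threshold crossings outside the refractory window.
    if sample > THRESHOLD and i - last_trigger >= REFRACTORY:
        triggers.append(i)  # a real system would drive a stimulator here
        last_trigger = i
```

The point of the sketch is the control structure, not the detector: any event detector (e.g. on population activity rather than a single channel) can occupy the same loop, with its output closing the loop back onto the experiment.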
At a more conceptual level, Roudi et al. give a specific example of this by showing how methods from statistical physics may be applied to infer the functional connectivity in a network of neurons from large-scale ensemble data. Linking theoretical modeling to data analysis offers further opportunities for insight; Marder et al. point out that most theoretical models in neuroscience may be hard to constrain. Multiple parameter sets (corresponding to very different underlying systems) may fit the data, so Marder et al. propose that ‘conceptual models’ may yield more valuable insights under those conditions. For example, such conceptual models have been proposed for the mechanisms underlying the generation of hippocampal place cells and entorhinal cortex grid cells. Combinations of neural ensemble recordings with lesion and inactivation techniques (Leutgeb et al.), as well as transgenic manipulations of ion channels (Giocomo et al.), have
enabled researchers to rule out some of these models while gaining confidence in others.

On the other hand, as we can now read the activity of very large numbers of neurons, we can characterize the brain state even within very short time windows. This enables novel types of experimental design in which, rather than relying on averaging over many trials, one looks at ‘snapshots’ capturing the instantaneous situation. Ganguli et al. propose ways to test whole ‘spaces’ of theoretical hypotheses against experimental data, even for single-trial data. Concrete examples of how the availability of large-scale datasets may inform the design of experiments on more
naturalistic behavioral paradigms are offered by Redish et al., who show how the detection of ‘replay’ of neural activity sequences provides a handle on instantaneous decision-making processes, and by Roelfsema et al., who describe how ensemble activity may help to characterize complex visual perception phenomena.

In conclusion, as we hope this collection of papers conveys, we are faced with a blossoming field, but also one that is still at an early stage. We are already learning rapidly about neural circuits, but the best is still to come, possibly in the form of new theories of the brain inspired by the data revolution.