PHYSICAL REVIEW E 89, 032725 (2014)

Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals

Lianchun Yu*
Institute of Theoretical Physics, Lanzhou University, Lanzhou 730000, China
and Key Laboratory for Magnetism and Magnetic Materials of the Ministry of Education, Lanzhou University, Lanzhou 730000, China

Liwei Liu
College of Electrical Engineering, Northwest University for Nationalities, Lanzhou 730070, China

(Received 11 September 2013; revised manuscript received 20 January 2014; published 31 March 2014)

The generation and conduction of action potentials (APs) represent a fundamental means of communication in the nervous system and are a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented by synchronized spikes and read out by a downstream coincidence detector neuron. We find an optimal number of neurons in the population, as well as an optimal number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse interval. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of random ion channel gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of neural systems when energy use is constrained.

DOI: 10.1103/PhysRevE.89.032725

PACS number(s): 87.19.lc, 87.16.Vy, 87.19.lb, 87.19.lj

I. INTRODUCTION

Neural processing is metabolically expensive [1]. The human brain accounts for about 20% of an adult's resting metabolic rate, and a large fraction of this energy is used for action potential (AP) generation, which relies on the potential energy stored in transmembrane ion gradients [2]. Although an AP itself does not consume energy, maintaining the ionic concentration gradients and restoring them after APs are generated requires energy, released by adenosine triphosphate (ATP) hydrolysis, to drive the Na+/K+-ATPase exchangers [3,4]. These metabolic demands can be large enough to influence the design, function, and evolution of brains [5]. Therefore, under the pressure of natural selection, neurons, neural coding, and neuronal circuits should evolve to be energy efficient when metabolic energy is limited [6,7]. Energy efficiency, which quantifies how much information neurons convey during signaling for a given amount of energy consumed, depends on the "design" of the neural system. For example, in the process of AP generation, the inward Na+ current depolarizes the membrane and generates the upstroke, whereas the outward K+ current repolarizes the membrane and facilitates the downstroke. The overlap of these two ion fluxes results in an electrically neutral exchange of positive ions, wasting Na+ and, consequently, energy [8]. Thus, minimizing the overlap of the two fluxes improves the energy efficiency of AP generation [9,10]. Recent experimental investigations of the nonmyelinated mossy fibers of the rat hippocampus support this suggestion [11].

*[email protected]


The energy efficiency of a neural system also depends on the size of its neurons [12]. For example, one would expect small neurons to be more efficient than large ones, because fewer ion channels are involved and less ion exchange occurs through the ion pumps. However, ion channels are stochastic devices; fewer ion channels in a neuron therefore lead to larger fluctuations in the transmembrane ion current [13]. This so-called ion channel noise has been demonstrated to cause variability in the response of neurons to external stimuli and can induce spontaneous APs, thus degrading the reliability of signal processing [14,15]. In this case, tradeoffs between information transfer and energy use can strongly influence the number of ion channels used by neurons. An elegant theoretical analysis, using a simple model for the generation of graded electrical signals by Na+ and K+ channels, showed that there exists an optimum number of ion channels that maximizes energy efficiency. The optima depend on several factors: the relative magnitudes of the signaling cost (current flow through channels), the fixed cost of maintaining the system, the reliability of the input, additional sources of noise, and the relative costs of upstream and downstream mechanisms [16]. However, in neural systems information is carried not by graded electrical signals but by APs, which contribute the major share of the energy cost. It is therefore interesting to study the energy efficiency of neurons that code signals digitally with APs when ion channel noise is taken into account. In addition, information processing in the brain is accomplished by groups of neurons working cooperatively, not by single neurons [17]. Studies have shown that neurons can synchronize to resist noise perturbations and improve the reliability of information transmission [18,19]. However, the large numbers of synchronized spikes generated in such systems increase the burden of energy consumption.
In this case, tradeoffs between


©2014 American Physical Society


information transfer and energy use may influence the size of the neural network. The evaluation of energy efficiency requires knowledge of both the metabolic energy cost and the input-output relation of the neurons. This input-output relation is usually characterized within Shannon's information-theoretic framework [16,20,21]. In this paper, we characterize the input-output relation of neurons by their coding capacity, whose definition is based on the firing probability of neurons in response to pulse inputs and the spontaneous firing rate due to noise. The coding capacity measures the ability of neurons to detect pulse inputs with APs and to discriminate between signals and noise perturbations. A simplified model, a one-dimensional bistable Langevin equation that mimics action potential generation with a particle crossing the barrier of a double well, is solved analytically to determine the pulse signal detection rate and the spontaneous firing rate. The coding capacity and energy efficiency are then calculated for a single neuron performing a pulse signal detection task in the presence of ion channel noise. The results show that neurons have an optimal number of ion channels corresponding to maximal energy efficiency during pulse signal detection. The optima depend on the signal properties, e.g., the pulse strength and the interpulse interval. To investigate the energy efficiency of a neuron population, the pulse inputs are applied to a group of neurons whose outputs are received and read out by a downstream coincidence detector (CD) neuron. We find that the coding capacity is enhanced when this network is applied to pulse signal detection, and we determine the optimal combination of neuron number and ion channel number that maximizes the energy efficiency. The optimal number of neurons depends on the threshold of the downstream CD neuron. The predictions of the bistable neuron model are consistent with computer simulations of a stochastic Hodgkin-Huxley (HH) model.
This paper is organized as follows. Section II presents the stochastic version of the HH neuron model, as well as the bistable neuron model and its solution for the pulse signal detection rate and spontaneous firing rate. In Sec. III, we discuss the threshold fluctuation and spontaneous firing rate due to ion channel noise. Section IV is devoted to a discussion of the energy efficiency of single neurons, and Sec. V to that of the neuron population. Discussions and conclusions are presented in Sec. VI.

II. MODELS AND METHODS

A. The stochastic Hodgkin-Huxley model

According to the classic HH model, the current flowing across a giant axon membrane is represented by the sum of the capacitive current and the ionic currents through the corresponding conductive components, which are averaged deterministic terms under the assumption that the numbers of ion channels are large [22]. For limited numbers of ion channels, the total ion conductances for Na+ and K+ should be replaced with sums over the single-channel conductances of the Na+ and K+ channels, resulting in the equivalent circuit shown in Fig. 1(a). The membrane dynamics of the HH equations is then given by

Cm dV/dt = −[GK (V − VK^rev) + GNa (V − VNa^rev) + GL (V − VL)] + I,   (1)

FIG. 1. Scheme of the stochastic Hodgkin-Huxley model. (a) Electrical equivalent circuit of the Hodgkin-Huxley model with each ion channel conductance taken into account. (b) The Markov chain model for K+ channels. Channels are open when they are in the [n4] state and closed in the other states. (c) The Markov chain model for Na+ channels. Channels are open when they are in the [m3h1] state and closed in the other states.

where V is the membrane potential and I the input current. VK^rev, VNa^rev, and VL are the reversal potentials of the K+, Na+, and leakage currents, respectively. GK, GNa, and GL are the corresponding specific ion conductances, and Cm is the specific membrane capacitance. The conductances of the potassium and sodium channels are given by

GK(V, t) = γK [n4]/S,  GNa(V, t) = γNa [m3h1]/S,   (2)

where γK and γNa are the single-channel conductances of the K+ and Na+ channels, [n4] is the number of open K+ channels, and [m3h1] the number of open Na+ channels. S is the membrane area of the neuron. The total numbers of Na+ and K+ channels are ρNa·S and ρK·S, respectively, where ρNa and ρK are the densities of Na+ and K+ channels. In this work, the membrane area is used to control the size of the neuron cell, i.e., the number of ion channels in the stochastic HH model. The gating dynamics of each ion channel is modeled with the corresponding Markov process, as demonstrated in Figs. 1(b) and 1(c). A K+ channel can exist in five different states and switches between them according to the voltage-dependent transition rates; it conducts only in the [n4] state. Similarly, the Na+ channel has eight states, with the single open state [m3h1]. In each time step, the numbers of open K+ and Na+ channels are determined by stochastic simulation of the Markov chain models of these channels, and Eq. (1) is integrated by the forward Euler method with a time step of 0.01 ms [23]. The parameters and rate functions used in the simulations are listed in Table I.
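The per-time-step Markov update described above can be illustrated with a short sketch (ours, not the authors' code): a population of K+ channels is propagated through the five-state n-gate scheme of Fig. 1(b) using the rate functions of Table I, with binomial draws for the number of transitions per step. The function names, population size, and test voltage are illustrative assumptions.

```python
import numpy as np

def k_rates(v):
    """HH potassium gate rates alpha_n, beta_n (1/ms); v in mV (Table I)."""
    alpha_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    beta_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return alpha_n, beta_n

def simulate_k_channels(v, n_channels=10_000, t_max=50.0, dt=0.01, seed=0):
    """Five-state Markov chain for K+ channels: state i holds channels with
    i open n-gates; only state 4 ([n4]) conducts. Returns counts per state."""
    rng = np.random.default_rng(seed)
    alpha, beta = k_rates(v)
    counts = np.array([n_channels, 0, 0, 0, 0], dtype=np.int64)  # all closed
    for _ in range(int(t_max / dt)):
        new = counts.copy()
        for i in range(5):
            # (4 - i) closed gates may open; i open gates may close
            n_fwd = rng.binomial(counts[i], min((4 - i) * alpha * dt, 1.0))
            n_bwd = rng.binomial(counts[i] - n_fwd, min(i * beta * dt, 1.0))
            new[i] -= n_fwd + n_bwd
            if i < 4:
                new[i + 1] += n_fwd
            if i > 0:
                new[i - 1] += n_bwd
        counts = new
    return counts

counts = simulate_k_channels(v=-40.0)
open_fraction = counts[4] / counts.sum()  # approaches n_inf(V)**4 for large populations
```

With many channels and a long run, the open fraction settles near [αn/(αn + βn)]^4, the deterministic HH limit; shrinking `n_channels` makes the fluctuations of GK visible, which is exactly the channel noise studied in this paper.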


TABLE I. Parameters and rate functions used in the simulations.

Cm       Specific membrane capacitance   1 μF/cm²
VK^rev   Potassium reversal potential    −77 mV
VNa^rev  Sodium reversal potential       50 mV
VL       Leakage reversal potential      −54.4 mV
γK       Potassium channel conductance   20 pS
γNa      Sodium channel conductance      20 pS
GL       Leakage conductance             0.3 mS/cm²
ρK       Potassium channel density       20/μm²
ρNa      Sodium channel density          60/μm²

αn = 0.01(V + 55)/[1 − e^−(V+55)/10]    βn = 0.125 e^−(V+65)/80
αm = 0.1(V + 40)/[1 − e^−(V+40)/10]     βm = 4 e^−(V+65)/18
αh = 0.07 e^−(V+65)/20                  βh = 1/[1 + e^−(V+35)/10]

The energy cost of a neuron is proportional to the number of ions pumped in and out of the cell. One can integrate the Na+ or K+ current over time and then convert it into a number of Na+ or K+ ions. Since the Na+/K+ pump hydrolyzes one ATP molecule for every three Na+ ions extruded and two K+ ions imported, the energy cost can be estimated by the number of ATP molecules expended [24], or in units of energy if the power of the Na+/K+ pump is measured experimentally (for example, 50 kJ/mol in the heart [25]) [9]. For the single-compartment neuron model, the energy cost can be divided into two groups: cost that is independent of S and cost that depends on the membrane area S [16]. The latter is composed of two parts: the cost of APs and the cost of subthreshold membrane potential fluctuations. Given that the amplitudes of subthreshold fluctuations are very small compared with the amplitudes of APs [13], their energy cost is trivial compared with that of APs and can be ignored. Therefore, the energy cost of the stochastic HH model can be estimated as the product of the membrane area and the energy cost of APs per unit area. Furthermore, the shape of the AP is stereotyped, and the currents flowing in and out of the membrane are described per unit area in the model, so the energy cost per unit area for each AP is the same, independent of the signals. Thus, for the stochastic HH model with membrane area S, we can estimate the energy cost by the product of the number of APs generated during the entire pulse detection process and the membrane area.

B. The bistable neuron model

A comprehensive theoretical and numerical analysis of the HH equations with the inclusion of stochastic channel dynamics was conducted by Chow and White [14]. They showed that the HH system can be approximated by a one-dimensional bistable Langevin equation that describes the dynamics of a particle in a double-well potential. The generation of an AP is then analogous to a particle surmounting the barrier and crossing to the other minimum.

In the case of channel noise, since the noisiness of a membrane current declines in proportion to the square root of the number of channels [14,26], the dynamics of the bistable neuron is described as

ẋ = ax − x³ + n^−1/2 ξ(t),   (3)

where

⟨ξ(t)⟩ = 0,  ⟨ξ(t)ξ(t′)⟩ = 2δ(t − t′),   (4)

and x is the position of the particle in the double well, which is equivalent to the membrane potential of the neuron. Here n is taken as the number of ion channels in the neuron and controls the intensity of the ion channel noise. For the above system, the minimum xs1 = −√a is taken as the resting state of the neuron and the saddle point xu = 0 as the threshold. The probability of finding the particle in the right well after a long enough time, i.e., the probability that an AP is generated after the pulse signal is applied, can be calculated with Lecar and Nossal's approach of linearizing around the saddle point [27] (see Appendix A for details). In the absence of signals, the escape rate of the particle from one well to the other, i.e., the spontaneous firing rate of the neuron, is estimated with the Kramers formula for the escape rate [28] (Appendix B). Given a pulse input that pushes the membrane potential from its resting state to the position x′, the firing probability of the bistable neuron with n ion channels follows from Eq. (A10) as

pc = (1/2)[1 + erf(√(an/2) Δx)],   (5)

where Δx = x′ − xu is the strength of the pulse input. From Eq. (B2), the corresponding spontaneous firing rate in the absence of inputs is

pr = (√2 a/2π) e^−a²n/4.   (6)

III. THRESHOLD FLUCTUATION AND SPONTANEOUS FIRINGS

The classic HH neuron model is a deterministic device: APs are generated only when it receives a suprathreshold stimulus. In reality, ion channel noise can exert substantial effects on the AP initiation process, making the neuron fire in response to subthreshold stimuli or fail to fire in response to suprathreshold stimuli [29]. To investigate the effect of this so-called threshold fluctuation on the signal detection ability of neurons, we applied short (duration 1 ms) current pulse inputs to the stochastic HH model; here the membrane area S controls the number of ion channels and thus the intensity of ion channel noise. If an AP is generated immediately (less than 8 ms after current injection), we say the pulse input is detected by the neuron, as demonstrated in Fig. 2. The pulse detection rate of the neuron is measured as the number of pulse signals detected over the total number of pulse signals applied. In the absence of ion channel noise, i.e., when the membrane area is extremely large, the stochastic HH model is equivalent to the deterministic HH model, which has a rigid threshold. However, when the membrane area is limited, the stochastic HH model shows a "softened" threshold in response to signals, as shown in Fig. 3(a). The ion


FIG. 2. (Color online) Demonstration of membrane potential traces of stochastic HH models receiving pulse inputs. The membrane area of the neuron is 120 μm² in (a) and 240 μm² in (b). Wide gray lines correspond to input pulses with strength I = 5 μA/cm² and thin red lines to I = 8 μA/cm². The black triangles mark the time points at which the pulse inputs are applied. The black (lower) and red (upper) stars mark the APs in response to pulse inputs with strength I = 5 μA/cm² and I = 8 μA/cm², respectively.

channel noise can facilitate threshold crossing by weak signals, thus enabling the neuron to detect subthreshold signals [30,31]. The detection rate for subthreshold pulses increases as the membrane area is decreased. Meanwhile, ion channel noise sabotages the neuron's detection ability for suprathreshold signals, and the detection rate drops as the membrane area decreases. However, for threshold signals, with just adequate strength to cause the neuron to fire in the absence of noise, the detection rate is 50% regardless of the noise intensity; thus the detection rate for threshold pulses is independent of the neuron cell size. Aside from threshold fluctuations, ion channel noise can also trigger APs, even in the absence of external inputs. Figure 3(b) shows the firing rate of spontaneous APs for neurons with different membrane areas. The spontaneous firing rate decreases rapidly as the membrane area increases, and spontaneous firings become very rare when the membrane area is larger than 200 μm². Figures 3(c) and 3(d) show the pulse detection rate as a function of Δx and n described by Eq. (5) and the spontaneous firing rate as a function of n described by Eq. (6), respectively. The channel-noise-induced threshold fluctuation and spontaneous firing behaviors of the stochastic HH model are thus captured well by the bistable neuron model. Note that if a train of pulse inputs is applied to the neuron, APs in response to the pulse inputs and spontaneous APs can mutually suppress each other because of the refractory periods following APs. If a spontaneous AP

FIG. 3. (Color online) Pulse detection rate and spontaneous firing rate of neurons with ion channel noise. (a), (b) Simulation results from the stochastic HH model: (a) pulse detection rate (%) versus pulse strength (μA/cm²) for S = 4×10², 2×10³, and 1×10⁴ μm²; (b) spontaneous AP rate (Hz) versus membrane area (μm²). Lines are for guiding the eyes, and S is the membrane area. (c), (d) Analytical solutions of Eqs. (5) and (6) from the bistable model: (c) pc versus Δx for n = 500, 50, 12, 5, and 1; (d) pr versus n.

occurs just before the pulse input is applied, the pulse input will not invoke an AP [for example, the second AP in the wide gray line shown in Fig. 2(a)]. The higher the spontaneous firing rate, the more likely the pulse-invoked APs will be suppressed. Likewise, spontaneous APs will not be generated in the refractory periods after APs invoked by pulse inputs; the higher the frequency of the applied pulse inputs, the stronger the suppression of spontaneous APs. Therefore, for a train of pulse inputs, the detection rate and spontaneous firing rate should be lower than their values measured separately with the stochastic HH model or predicted by the bistable model. However, if the spontaneous firing rate is not very high (i.e., the membrane area is fairly large) and the interpulse interval is not very short, these mutual suppression effects can be ignored. In this case, Eqs. (5) and (6) provide a good approximation for the detection rate and spontaneous firing rate of the bistable neuron model with a train of pulse inputs.

IV. THE ENERGY EFFICIENCY OF A SINGLE NEURON

We now investigate the energy efficiency of a bistable neuron in the task of detecting a transient pulse train. The pulse train is composed of n_all pulse inputs with strength Δx, and the interpulse interval is set as Δt. We define the energy efficiency as the ratio of the coding capacity C to the energy cost per unit time. The coding capacity C is defined as the difference between the detection rate per unit time and the spontaneous firing rate,

C = pc/Δt − pr.   (7)

Note that this definition can yield negative values of C. For the bistable neuron at hand, suppose the energy cost for a neuron with only one ion channel to generate an AP is 1. The neuron generates n_all·pc APs in correctly detecting the pulses, with energy cost n·n_all·pc, and approximately n_all·Δt·pr spontaneous APs due to noise perturbation, with energy cost n·n_all·Δt·pr. The energy cost per unit time is thus n(pc/Δt + pr), and the energy efficiency of the bistable neuron model is

Q = (pc − Δt·pr) / [n (pc + Δt·pr)].   (8)

Note that Q can be negative as well. In the following, we take Eqs. (5) and (6) as approximations for pc and pr in the detection of the pulse train to calculate C and Q. From the above analysis, with an increasing number of ion channels, the spontaneous firing rate pr decreases quickly to zero, and the detection rate pc converges to 1, 0.5, and 0 for suprathreshold, threshold, and subthreshold signals, respectively. Therefore, as n is increased to infinity, the coding capacity C converges to 1/Δt, 1/(2Δt), and 0 for suprathreshold, threshold, and subthreshold signals, respectively [Fig. 4(a)]. The dependence of C on n is not monotonic for subthreshold signals but exhibits a maximum at a particular value of n [for example, Δx = −0.2 in Fig. 4(a)], which implies an optimal number of ion channels for maximal coding capacity for subthreshold signals [19].
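The optimum can be located numerically from Eqs. (5)–(8). The sketch below is our own illustration, using the parameter values a = 1 and Δt = 100 from Figs. 4(a) and 4(c) and a suprathreshold pulse Δx = 0.1:

```python
import math

A, DT = 1.0, 100.0  # a and interpulse interval used in Figs. 4(a), 4(c)

def p_c(n, dx):
    """Detection probability, Eq. (5)."""
    return 0.5 * (1.0 + math.erf(math.sqrt(A * n / 2.0) * dx))

def p_r(n):
    """Spontaneous (Kramers) firing rate, Eq. (6)."""
    return math.sqrt(2.0) * A / (2.0 * math.pi) * math.exp(-A**2 * n / 4.0)

def coding_capacity(n, dx):
    """Eq. (7): detection rate per unit time minus spontaneous rate."""
    return p_c(n, dx) / DT - p_r(n)

def energy_efficiency(n, dx):
    """Eq. (8): Q = (pc - dt*pr) / [n (pc + dt*pr)]."""
    pc, pr = p_c(n, dx), p_r(n)
    return (pc - DT * pr) / (n * (pc + DT * pr))

# scan the number of ion channels for the most efficient neuron
n_opt = max(range(1, 1001), key=lambda n: energy_efficiency(n, dx=0.1))
```

Q is negative for very small n (spontaneous firings dominate the cost), climbs to a maximum near a few tens of channels, and then decays roughly as pc/n, reproducing the resonance seen in Fig. 4(c).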

For the stochastic HH model, a pulse train composed of n_all = 2000 pulses with strength I and duration 1 ms, separated on average by Δt = 100 ms, is applied for detection. The APs generated in response to the pulse inputs are identified, and the detection rate is calculated as before; the remaining APs are counted as spontaneous firings. The coding capacity, calculated in units of ms⁻¹, is shown in Fig. 4(b). The dependence of the coding capacity on the membrane area is similar to the above prediction of the bistable model. For subthreshold signals of 5 and 6 μA/cm², the optimal membrane areas for maximal coding capacity are around 250 and 300 μm². Figure 4(c) shows the energy efficiency Q as a function of the number of ion channels n for the bistable neuron, calculated from Eq. (8). As n increases, the energy efficiency rises to a maximum and then drops. This resonance of the energy efficiency with the number of ion channels implies an optimal number of ion channels with which the neuron performs the pulse signal detection task most efficiently in terms of energy consumption. This prediction is confirmed by simulations of the stochastic HH model, for which the energy efficiency is calculated as the ratio of the coding capacity to the energy cost per unit time, measured by the method described in Sec. II [Fig. 4(d)]. The optimal membrane area for the stochastic HH model is around 200 μm². Figure 4 also shows that the coding capacity and energy efficiency depend on the input pulse strength. Given that neurons detect a strong pulse more reliably [Figs. 3(a) and 3(c)], increasing the pulse strength increases the coding capacity, thereby increasing the energy efficiency while decreasing the optimal size of the neuron. The coding capacity and energy efficiency also depend on the interpulse interval Δt.
In the bistable model, for example, decreasing Δt does not increase the detection rate pc or change the spontaneous firing rate pr, but it does increase the coding capacity per unit time. For the stochastic HH model, the spontaneous firing rate even decreases because of refractory period effects. Therefore, the coding capacity increases as the interpulse interval decreases [Figs. 5(a) and 5(b)]. Although decreasing Δt increases the energy cost per unit time as well, from Eq. (8) we expect the ratio of coding capacity to energy cost per unit time, i.e., the energy efficiency, to increase, as demonstrated in Figs. 5(c) and 5(d). Again, as the energy efficiency increases, the optimal size of the neuron decreases.
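The effect of the interpulse interval follows directly from Eq. (8), since pc and pr do not depend on Δt in the bistable model. This snippet (our illustration, with a = 1 and Δx = 0.1 as in Figs. 5(a) and 5(c)) compares Δt = 50 and Δt = 200:

```python
import math

A = 1.0  # double-well parameter a

def p_c(n, dx=0.1):
    """Eq. (5), detection probability for pulse strength dx."""
    return 0.5 * (1.0 + math.erf(math.sqrt(A * n / 2.0) * dx))

def p_r(n):
    """Eq. (6), spontaneous firing rate."""
    return math.sqrt(2.0) * A / (2.0 * math.pi) * math.exp(-A**2 * n / 4.0)

def q(n, dt):
    """Eq. (8), energy efficiency for interpulse interval dt."""
    pc, pr = p_c(n), p_r(n)
    return (pc - dt * pr) / (n * (pc + dt * pr))

# shorter intervals raise Q at every n and shift the optimum toward smaller n
n_opt_fast = max(range(1, 501), key=lambda n: q(n, dt=50.0))
n_opt_slow = max(range(1, 501), key=lambda n: q(n, dt=200.0))
```

Because Δt appears with a minus sign in the numerator and a plus sign in the denominator of Eq. (8), Q is monotonically decreasing in Δt for every n, which is the trend shown in Fig. 5(c).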

V. THE ENERGY EFFICIENCY OF NEURONAL POPULATION

Neural systems often transmit signals with synchronous spikes, which have been shown to protect signals from being damaged by noise; the information carried by synchronized spikes can be reliably read out by a coincidence-detector neuron [19]. In this study, we used a simple scenario to describe this process and investigate its energy efficiency. As shown in Fig. 6, a population of neurons receiving the same pulse train inputs send their outputs to a CD neuron. The CD neuron takes the outputs of the neuron population as its inputs and fires only when multiple inputs arrive simultaneously within a short time


FIG. 4. (Color online) The coding capacity and energy efficiency of a single neuron during a pulse detection task for different pulse strengths. (a) Coding capacity C as a function of the number of ion channels n of the bistable neuron model for different input pulse strengths Δx. (b) Coding capacity as a function of membrane area of the stochastic Hodgkin-Huxley neuron model for different input pulse strengths I. (c) Energy efficiency Q as a function of the number of ion channels n of the bistable neuron model for different input pulse strengths Δx. (d) Energy efficiency as a function of membrane area of the stochastic Hodgkin-Huxley neuron model for different input pulse strengths I. a = 1 and Δt = 100 in (a) and (c). The interpulse interval is 100 ms in (b) and (d).

window Tw. A refractory period Tr follows immediately after each firing event. Suppose the CD neuron is excited by θ or more inputs from N neurons whose firing probability in response to pulse stimulation is pc. The detection ability of this neuron population, i.e., the firing probability of the CD neuron, is then

Pc = Σ_{α=θ}^{N} C(N,α) pc^α (1 − pc)^{N−α} = 1 − F(θ − 1; N, pc),   (9)

where pc^α (1 − pc)^{N−α} is the probability that a given set of α neurons fire at the same time while the others stay silent, C(N,α) = N!/[α!(N − α)!] is the number of ways of picking α neurons from the population of N, and F(x; N, p) = Σ_{i=0}^{x} C(N,i) p^i (1 − p)^{N−i} is the binomial cumulative distribution function. We must note that in our analysis of the detection rate of a single neuron, we assumed that the response time of a single neuron to the pulse input could be infinitely large. However, most responses happen immediately after the pulse input is applied. Therefore, we expect a peak in the poststimulus time histogram (PSTH) immediately after the stimulus, riding on the baseline of spontaneous firings [for example, see Fig. 2(a) in Ref. [19]]. Thus, if the CD time window Tw is

identical to the width of the peak in the PSTH, pc in Eq. (9) can be approximated by Eq. (5). In the following, we use this approximation for the analytical solution of the CD neuron. In the simulation of the stochastic HH model, we take the CD time window Tw = 8 ms to include most firings in response to the pulse input, but not too many spontaneous firings [32]. In the absence of input signals, the CD neuron receives inputs at rate N·pr from the N bistable neurons, each of which fires spontaneously at rate pr. Thus the CD neuron will generate spontaneous APs at a rate that depends not only on the threshold, detection window, and refractory period of the CD neuron, but also on the input rate from the upstream neuron population. A recent work by Krips and Furst studied the dependence of the output rate of CD neurons on nonhomogeneous Poisson input rates and CD neuron properties [33]. Since the distribution of interspike intervals of spontaneous APs due to ion channel noise is similar to a Poisson process [14], according to their results for Poisson inputs with constant rate, the spontaneous firing rate of the above CD neuron with N inputs of rate pr is

Pr = N! Σ_{α=θ}^{N} [(1 − pr·Tw)^{N−α} pr^α Tw^{α−1}] / [(N − α)!(α − 1)!].   (10)
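Equations (9) and (10) are straightforward to evaluate numerically. The sketch below is our illustration (not code from the paper); the θ = 1 special case provides a sanity check, since a CD neuron that fires on any single input has spontaneous rate exactly N·pr:

```python
import math

def cd_detection_prob(theta, N, pc):
    """Eq. (9): probability that theta or more of N neurons fire together."""
    return sum(math.comb(N, a) * pc**a * (1.0 - pc) ** (N - a)
               for a in range(theta, N + 1))

def cd_spontaneous_rate(theta, N, pr, Tw):
    """Eq. (10): spontaneous output rate of the CD neuron for N Poisson
    inputs of rate pr, coincidence window Tw, and threshold theta."""
    return math.factorial(N) * sum(
        (1.0 - pr * Tw) ** (N - a) * pr**a * Tw ** (a - 1)
        / (math.factorial(N - a) * math.factorial(a - 1))
        for a in range(theta, N + 1))
```

For θ = 1, Eq. (9) reduces to 1 − (1 − pc)^N and Eq. (10) to N·pr; raising θ suppresses the spontaneous output much more strongly than the signal-driven output, which is how the CD readout reduces spontaneous firings across neural levels.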

In the presence of both signals and noise, APs due to signals and spontaneous APs of the CD neuron will mutually suppress


FIG. 5. (Color online) Coding capacity and energy efficiency of a single neuron during the pulse detection task for different interpulse intervals. (a) Coding capacity C as a function of the number of ion channels n of the bistable neuron model for different interpulse intervals Δt. (b) Coding capacity as a function of membrane area of the stochastic HH neuron model for different interpulse intervals. (c) Energy efficiency Q as a function of the number of ion channels n of the bistable neuron model for different interpulse intervals Δt. (d) Energy efficiency as a function of membrane area of the stochastic Hodgkin-Huxley neuron model for different interpulse intervals. a = 1 and Δx = 0.1 in (a) and (c). The input pulse strength is I = 8 μA/cm² in (b) and (d).

each other due to the refractory period. These effects could be significant for a neuron network with large N and small n; we will discuss this effect in the following paragraphs. The coding capacity of the above network is then

C = Pc/Δt − Pr.   (11)

To detect the pulse train described above, the neuron network responds with N·n_all·pc APs in the bistable neuron population and n_all·Pc APs of the CD neuron. Meanwhile, N·n_all·pr·Δt spontaneous APs are generated in the bistable neuron population. For simplicity, we neglect the energy consumption of the CD neuron in the following analytical and numerical solutions. Therefore, the energy cost per unit time of this neuron network is N·n(pc/Δt + pr), and the energy efficiency of the neuron network is

Q = (Pc − Δt·Pr) / [N n (pc + Δt·pr)].   (12)

FIG. 6. (Color online) Pulse signal detection scenario for neural systems with a neuron population. The front layer is composed of stochastic HH neurons with ion channel noise (SHH), each of which receives the same pulse train input. CD is the coincidence detector neuron with threshold θ; it fires an AP when θ or more upstream SHH neurons fire at the same time.
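Combining Eqs. (5), (6), (9), (10), and (12), the optimal size configuration (n, N) can be found by brute force. The sketch below is our own illustration, using the Fig. 7 parameters a = 1, Tw = 0.01, and Δt = 100; the CD threshold θ = 2 and the search ranges are assumed values for illustration only.

```python
import math

A, DT, TW, THETA = 1.0, 100.0, 0.01, 2  # theta = 2 is an assumed CD threshold

def p_c(n, dx):   # Eq. (5): single-neuron detection probability
    return 0.5 * (1.0 + math.erf(math.sqrt(A * n / 2.0) * dx))

def p_r(n):       # Eq. (6): single-neuron spontaneous rate
    return math.sqrt(2.0) * A / (2.0 * math.pi) * math.exp(-A**2 * n / 4.0)

def P_c(pc, N):   # Eq. (9): CD neuron detection probability
    return sum(math.comb(N, a) * pc**a * (1 - pc) ** (N - a)
               for a in range(THETA, N + 1))

def P_r(pr, N):   # Eq. (10): CD neuron spontaneous rate
    return math.factorial(N) * sum(
        (1 - pr * TW) ** (N - a) * pr**a * TW ** (a - 1)
        / (math.factorial(N - a) * math.factorial(a - 1))
        for a in range(THETA, N + 1))

def Q_net(n, N, dx):  # Eq. (12), with the CD neuron's own cost neglected
    pc, pr = p_c(n, dx), p_r(n)
    return (P_c(pc, N) - DT * P_r(pr, N)) / (N * n * (pc + DT * pr))

# exhaustive search over the (n, N) plane for a threshold pulse (dx = 0)
grid = [(n, N) for n in range(1, 41) for N in range(2, 41)]
n_best, N_best = max(grid, key=lambda s: Q_net(s[0], s[1], dx=0.0))
```

The maximizer lies in the interior of the (n, N) plane: too few channels or neurons lets spontaneous firings dominate, while too many inflates the energy cost without improving Pc, mirroring the optima in Figs. 7(j)–7(l).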

Can the detection ability be improved with a neuron population? Combining Eqs. (5) and (9), we calculated the detection rate Pc of the CD neuron for different size configurations (n, N) of the upstream neuron population, i.e., the number of ion channels in each neuron and the number of neurons in the population. Figures 7(a)–7(c) show that for different input pulse strengths there are different critical lines (the rightmost contour lines of Pc ≈ 0.9999) in the (n, N) plane. Reliable detection (Pc ≈ 1) is realized in the area to the right of these critical lines. As the input pulse strength increases, the critical line moves to the left in the (n, N) plane, which implies that reliable detection can be realized with fewer neurons in the population. For subthreshold or suprathreshold

032725-7

FIG. 7. (Color online) Signal detection rate, spontaneous firing rate, coding capacity, and energy efficiency as a function of the number of ion channels n and the number of neurons N for a bistable neuron network. The left column corresponds to a suprathreshold input pulse strength x = 0.1 [(a), (d), (g), (j)]. The middle column corresponds to a threshold input pulse strength x = 0 [(b), (e), (h), (k)]. The right column corresponds to a subthreshold input pulse strength x = −0.1 [(c), (f), (i), (l)]. Each row, from top to bottom, corresponds to the signal detection rate [(a)–(c)], spontaneous firing rate [(d)–(f)], coding capacity [(g)–(i)], and energy efficiency [(j)–(l)] of the network, respectively. a = 1, Tw = 0.01, and Δt = 100.

pulse inputs, those critical lines depend on both the number of neurons and the number of ion channels in the neurons. For suprathreshold inputs, the more ion channels a neuron has, the higher the detection rate of the single neuron; thus fewer neurons are needed to reach reliable detection [Fig. 7(a)]. For subthreshold pulse inputs, the more ion channels a neuron has, the lower the detection rate of the single neuron; thus more neurons are needed to transmit the input information to the CD neuron to achieve correct detection [Fig. 7(c)]. However, for threshold pulse inputs, since the detection rate of each single neuron is 0.5, independent of the number of ion channels, the critical line is independent of n [Fig. 7(b)]. Since we did not consider the mutual suppression of signal-induced APs and noise-induced APs, the dependence of Pr on N and n is the same for pulses with different strengths [Figs. 7(d)–7(f)]. Pr decreases as N decreases or n increases, implying that the CD neuron can reduce spontaneous firings when progressing across neural levels [33]. From Eq. (10), we see that the spontaneous firing rate of the CD neuron increases as the detection window Tw increases. Note that Tw is the only arbitrary variable left undetermined, because it is not needed in the calculation of Pc. For the current parameter setting we set Tw = 0.01, but our calculations indicated that the following results hold as long as Tw < 0.021, i.e., as long as the spontaneous firing rate of the CD neuron is low enough. The simulation results below show that the CD neuron can indeed keep a low rate of spontaneous firing without damaging signal processing.

From Eq. (11), we calculated the coding capacity C for different size configurations (n, N) of the upstream neuron population, as shown in Figs. 7(g)–7(i). For inputs with different strengths, the coding capacity is enhanced as n and N increase. This enhancement is a consequence of the higher firing rate of the upstream neurons in response to pulse inputs, rather than of their spontaneous firing [19]. Since a pulse signal pushes the system to a position near the threshold, the probability that the system crosses the threshold under the force of "pulse plus noise" is always higher than under noise alone. The CD neuron can read out the information in the pulse inputs because it is more sensitive to the nested high-rate outputs of the upstream neurons in response to pulse stimuli than to their spontaneous firings, which are distributed randomly in time. For


a certain threshold θ, with the increase of N, Pc reaches 1 while Pr is still close to 0, and C attains 1/Δt. For Δt = 100, the maximal C is 0.01, the same value that a single neuron can reach. Further increases in N will increase not Pc but Pr, so C drops again. One may also note that as N increases, Pr increases faster than Pc when θ is too low or n is too small, so that C drops before it reaches 1/Δt. Thus the horizontal parts of the contour lines in Figs. 7(g)–7(i) tilt upward on the right-hand sides of the figures. The energy efficiency Q as a function of n and N for the neuron network with different input strengths, calculated from Eq. (12), is shown in Figs. 7(j)–7(l). There is an optimal combination (n*, N*) of the number of ion channels in each neuron and the number of neurons at which the energy efficiency of the neuron population is maximal. Thus we argue that at the network level, not only the size of the neuron cell but also the size of the network is optimized for maximal energy efficiency. As discussed above, increasing the number of neurons increases the coding capacity and thereby the energy efficiency. However, more neurons will generate more APs, whether induced by pulse inputs or by spontaneous firing. As the number of neurons increases, the energy cost increases too, which decreases the energy efficiency. Therefore, the optimal number of neurons for the neuron network is a compromise between coding capacity and energy cost.

For our simulation of the stochastic HH model population, the same pulse train as before is applied to each neuron in the population. The output APs of these neurons are converted to a point process in time and scanned by a sliding window of width Tw = 8 ms. If θ or more events are found in the sliding window, the CD neuron is marked with a "firing" at the time the last event in the window occurs.
The window then starts to slide again Tr = 10 ms after the last event. A pulse input is counted as detected if a "firing" of the CD neuron is observed less than 8 ms after the pulse is applied; the rest of the firings are considered spontaneous. The coding capacity for this network is calculated from the detection rate and spontaneous firing rate of the CD neuron. The energy costs are calculated in the same way as before for the upstream neuron population, and the ratio of coding capacity to energy cost per unit time is taken as the energy efficiency of the network. Figures 8(e)–8(h) show the dependence of the coding capacity and energy efficiency on the cell size and the network size of the stochastic HH neuron network. (The results for threshold pulse inputs are not shown.) The simulation results are similar to the predictions from the bistable neuron network: for a fixed input strength, there indeed exists an optimal configuration of membrane area and number of neurons at which the energy efficiency is maximized. We note here that we did not consider the refractory effect of the CD neuron's spontaneous firings in the calculation of the detection rate of the CD neuron with Eq. (9), so the detection rate can reach 1 even for a very small cell size [bottom of Figs. 7(a)–7(c)]. Actually, when the membrane area is small and the number of neurons is large, the spontaneous firing rate of the CD neuron is high, and its suppression of the CD neuron's detection rate is thus significant [see bottom of Figs. 8(a) and 8(b)]. Similarly, in Figs. 7(d)–7(f) the spontaneous firing

FIG. 8. (Color online) Signal detection rate, spontaneous firing rate, coding capacity, and energy efficiency as a function of the membrane area of the stochastic HH model and the number of stochastic HH neurons in the neuron population. The left column corresponds to the suprathreshold input pulse strength I = 8 μA/cm² [(a), (c), (e), (g)], and the right column corresponds to the subthreshold input pulse strength I = 5 μA/cm² [(b), (d), (f), (h)]. Each row, from top to bottom, corresponds to the signal detection rate [(a) and (b)], spontaneous firing rate [(c) and (d)], coding capacity of the CD neuron [(e) and (f)], and energy efficiency [(g) and (h)] of the network, respectively. The interpulse interval is Δt = 100 ms, Tw = 8 ms, Tr = 10 ms, and the CD threshold is θ = 4.

rates are the same for different pulse strengths because the refractory effect of Pc on Pr is not considered. In the simulations, however, the spontaneous firing rate is higher for weak pulses [Fig. 8(d)] than for strong pulses [Fig. 8(c)], because strong pulses induce more APs and thus suppress more spontaneous APs. The optimal network size depends on the interpulse interval (results not shown), the input pulse strength, and the coincidence-detector threshold, as demonstrated in Fig. 9. When the CD threshold is fixed, the maximal energy efficiency increases with increasing input pulse strength, whereas the optimal network size decreases. For a fixed input pulse strength, as the CD threshold increases, more synchronous spikes induced by the input pulses are required for correct detection, so the optimal number of neurons increases and the energy efficiency decreases.
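The sliding-window readout described above can be sketched in code. This is an illustrative reimplementation, not the authors' program: the function names are our own, and spikes arriving during the refractory pause are simply discarded, one reasonable reading of the procedure.

```python
# Illustrative sketch of the CD readout: pooled spike times from the upstream
# population are scanned with a window of width Tw; if >= theta spikes fall in
# the window (t - Tw, t], the CD neuron "fires" at the last spike time t, and
# the scan resumes Tr after that event.
def cd_fires(spikes, theta, Tw=8.0, Tr=10.0):
    spikes = sorted(spikes)
    fires = []
    i = 0                       # index of the earliest spike still in the window
    t_free = -float("inf")      # end of the current refractory pause
    for j, t in enumerate(spikes):
        if t < t_free:
            i = j + 1           # spikes in the refractory pause are discarded (assumption)
            continue
        while spikes[i] <= t - Tw:
            i += 1              # drop spikes that slid out of the window (t - Tw, t]
        if j - i + 1 >= theta:  # coincidence found
            fires.append(t)
            t_free = t + Tr
    return fires

def classify(fires, pulses, latency=8.0):
    """Split CD firings into detected pulses and spontaneous events."""
    detected = {p for p in pulses for f in fires if 0.0 <= f - p < latency}
    spont = [f for f in fires if not any(0.0 <= f - p < latency for p in pulses)]
    return detected, spont
```

For example, with θ = 4 a burst of four spikes within 8 ms triggers a single CD firing, while isolated spontaneous spikes are ignored; `classify` then matches each CD firing against the known pulse times.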

FIG. 9. Dependence of the maximal energy efficiency and the corresponding optimal neuron population size on the CD threshold and input pulse strength. (a) Analytical results for the neuronal network with the bistable model. The points in each line, from left to right, correspond to the CD thresholds θ = 1, 2, 3, 4, 5, respectively. a = 1, Tw = 0.01, and Δt = 100. (b) Simulation results for the neuronal network with the stochastic HH model. The points in each line, from left to right, correspond to the CD thresholds θ = 2, 4, 6, 8, 10, respectively. Tw = 8 ms and Tr = 10 ms. The interpulse interval is Δt = 100 ms.

VI. DISCUSSION AND CONCLUSION

Neural systems usually employ APs to carry input information. The generation and propagation of APs is supported by metabolic energy, a large burden for animals that is believed to have shaped neural systems through evolutionary pressure. In this paper, we investigated the energy efficiency of a neural system performing a pulse signal detection task under the perturbation of ion channel noise. We found that the energy efficiency exhibits peak values corresponding to optimal sizes of the neural system, i.e., the number of ion channels in a neuron and the number of neurons in a network. The optimal neuron size for maximal energy efficiency results from the tradeoff between signal processing reliability and energy cost, as previously pointed out by Schreiber et al. for linear input-output neurons composed of Na+ and K+ channels [16]. In our study, we showed that this principle holds true when neurons use APs to transfer information. We further showed that this principle also holds true for neuron populations, where the number of neurons is optimized for maximal energy efficiency. The intrinsic ion channel noise, whose intensity is inversely proportional to the number of ion channels, plays the role of a double-edged sword in the signal processing of neurons, as well as

when studied in the context of stochastic resonance [34,35]. In this paper we showed that ion channel noise is also important for the energy efficiency of neurons in signal processing. Ion channel noise helps neurons detect subthreshold signals, thereby improving their energy efficiency; however, it degrades the reliability of suprathreshold signal detection and causes spontaneous APs, thereby decreasing energy efficiency. From the derivation of the detection rate and spontaneous firing rate for the bistable neuron network, we see that the number of neurons in a network may also be optimized for energy efficiency under other types of noise, e.g., synaptic noise, which is caused by a large number of independent presynaptic events [32]. In this study, we evaluated the energy efficiency in a simple coding scenario in which the input pulse signals are encoded digitally with APs, and we concluded that the size of the neural system can be optimized for maximal energy efficiency. More generally, the information-conveying ability of neural systems can be measured with the information theory formulated by Shannon and Weaver [20], so our conclusion could be rechecked in this general framework by defining the energy efficiency as the ratio between the amount of transmitted information and the metabolic energy required [36]. Though our conclusions are in line with Ref. [16], a recent simulation study by Sengupta et al. showed that the energy efficiency drops monotonically as the cell size increases and that it is the ion channel density that maximizes the energy efficiency [24]. Therefore, further research on this topic is recommended. The analysis presented in our work provides a starting point in this direction and can guide further experimental and theoretical studies.

ACKNOWLEDGMENTS

This work was supported by the National Natural Science Foundation of China (Grant No. 11105062), the Fundamental Research Funds for the Central Universities (Grant No. lzujbky-2011-57), and the Fundamental Research Funds for the Central Universities of Northwest University for Nationalities (Grant No. zyz2011051). We would like to thank Prof. Yuguo Yu and the anonymous referees for their valuable comments on the manuscript.

APPENDIX A: THE RESPONSE FUNCTION OF THE BISTABLE NEURON MODEL TO PULSE INPUTS

For a neuron subject to a pulse signal and noise, the dynamics is analogous to that of a particle in a double-well potential perturbed by a pulse force and noise. Thus it can be described by the equation

dx/dt = −U′(x) + ξ(t),    (A1)

where

⟨ξ(t)⟩ = 0,  ⟨ξ(t)ξ(t′)⟩ = 2Dδ(t − t′).    (A2)

Here U(x) = −(a/2)x² + x⁴/4 is a double-well potential having two minima at xs1 = −√a and xs2 = √a and a saddle point at xu = 0, and D is the noise intensity. Let us assume that a short-duration force moves the particle parallel to the x axis into the region of the


saddle point. After the force is removed, the particle drifts up to the region of the saddle point. Near the saddle point, the phase trajectories are repelled, causing the particle to accelerate away from the saddle-point region towards one of the minima. Following Lecar and Nossal's approach of linearizing around the saddle point, we can obtain the probability of finding the particle in the right well after a long enough time, i.e., the probability that a pulse input signal is detected by the neuron [27].

First we expand Eq. (A1) in the neighborhood of the threshold singular point xu. Letting

ε = x − xu    (A3)

and recalling that xu = 0, we obtain the equation

dε/dt = aε + ξ(t).    (A4)

The solution of Eq. (A4) has the form

ε(t) = ε(0)e^{at} + ∫_0^t e^{a(t−s)} ξ(s) ds.    (A5)

This integral can be rewritten as

X(t) = ε(t) − ε(0)e^{at} = ∫_0^t e^{a(t−s)} ξ(s) ds.    (A6)

Since ξ(t) is a Gaussian random variable, it follows that the time integral X(t) also obeys a Gaussian distribution. We then have

P(X,t) = (2π⟨X²⟩)^{−1/2} exp[−X²/(2⟨X²⟩)],    (A7)

where the angular brackets denote an expectation value. The expectation value of X² can be expressed in terms of the joint expectation of the variable ξ taken with itself at a different time. From Eq. (A6) one finds

⟨X²⟩ = (D/a)(e^{2at} − 1).    (A8)

The probability of the neuron firing after a pulse input is equal to the probability that ε(t) > 0 when t → ∞, given an initial displacement ε(0). Then

P[ε(t) > 0 | ε(0)] = lim_{t→∞} ∫_{−ε(0)e^{at}}^{∞} P(X,t) dX,    (A9)

or

P[ε(t) > 0 | ε(0)] = (1/2){1 + erf[ε(0)/√(2D/a)]},    (A10)

where erf is the so-called error function, defined as

erf(x) = (2/√π) ∫_0^x e^{−t²} dt.    (A11)

APPENDIX B: THE SPONTANEOUS FIRING RATE OF A BISTABLE NEURON MODEL

The spontaneous firing rate of a bistable neuron is considered as the escape rate of a particle from the right well to the left well. According to the Kramers formula, the escape rate is

K = (1/2π) √(U″(xs)|U″(xu)|) exp(−ΔU/D),    (B1)

where ΔU = U(xu) − U(xs1). Thus the escape rate of a particle in the double-well potential is

K = (√2 a/2π) exp[−a²/(4D)].    (B2)

[1] D. Richter, Metabolism of the Nervous System, 1st ed. (Elsevier Science & Technology Books, New York, 1957).
[2] D. Attwell and S. B. Laughlin, J. Cereb. Blood Flow Metab. 21, 1133 (2001).
[3] P. J. Magistretti, Science 325, 1349 (2009).
[4] P. Siekevitz, Science 306, 410 (2004).
[5] S. B. Laughlin, R. R. de Ruyter van Steveninck, and J. C. Anderson, Nat. Neurosci. 1, 36 (1998).
[6] J. E. Niven and S. B. Laughlin, J. Exp. Biol. 211, 1792 (2008).
[7] S. B. Laughlin, Curr. Opin. Neurobiol. 11, 475 (2001).
[8] D. G. Margineanu and E. Schoffeniels, Proc. Natl. Acad. Sci. USA 74, 3810 (1977).
[9] Y. Yu, A. P. Hill, and D. A. McCormick, PLoS Comput. Biol. 8, e1002456 (2012).
[10] B. Sengupta, M. Stemmler, S. B. Laughlin, and J. E. Niven, PLoS Comput. Biol. 6, e1000840 (2010).
[11] H. Alle, A. Roth, and J. R. P. Geiger, Science 325, 1405 (2009).
[12] R. Sarpeshkar, Neural Comput. 10, 1601 (1998).
[13] P. N. Steinmetz, A. Manwani, C. Koch, M. London, and I. Segev, J. Comput. Neurosci. 9, 133 (2000).
[14] C. Chow and J. White, Biophys. J. 71, 3013 (1996).
[15] S. B. Laughlin, in The Computing Neuron, edited by R. Durbin, C. Miall, and G. Mitchison (Addison-Wesley Longman, Boston, MA, 1989), pp. 322–336.

[16] S. Schreiber, C. K. Machens, A. V. M. Herz, and S. B. Laughlin, Neural Comput. 14, 1323 (2002).
[17] M. Bear, B. Connors, and M. Paradiso, Neuroscience: Exploring the Brain (Lippincott Williams & Wilkins, New York, 2007).
[18] N. Tabareau, J.-J. Slotine, and Q.-C. Pham, PLoS Comput. Biol. 6, e1000637 (2010).
[19] Y. Chen, L. C. Yu, and S. M. Qin, Phys. Rev. E 78, 051909 (2008).
[20] C. E. Shannon and W. Weaver, The Mathematical Theory of Communication (University of Illinois Press, Urbana, IL, 1949).
[21] L. Kostal, P. Lansky, and M. D. McDonnell, Biol. Cybern. 107, 355 (2013).
[22] A. L. Hodgkin and A. F. Huxley, J. Physiol. (London) 117, 500 (1952).
[23] S. Zeng and P. Jung, Phys. Rev. E 70, 011903 (2004).
[24] B. Sengupta, A. A. Faisal, S. B. Laughlin, and J. E. Niven, J. Cereb. Blood Flow Metab. 33, 1465 (2013).
[25] P. Crotty, T. Sangrey, and W. B. Levy, J. Neurophysiol. 96, 1237 (2006).
[26] J. A. White, J. T. Rubinstein, and A. R. Kay, Trends Neurosci. 23, 131 (2000).
[27] H. Lecar and R. Nossal, Biophys. J. 11, 1048 (1971).
[28] C. Gardiner, Stochastic Methods: A Handbook for the Natural and Social Sciences, 4th ed. (Springer, New York, 2009).


[29] J. A. White, R. Klink, A. Alonso, and A. R. Kay, J. Neurophysiol. 80, 262 (1998).
[30] R. K. Adair, Proc. Natl. Acad. Sci. USA 100, 12099 (2003).
[31] W. C. Stacey and D. M. Durand, J. Neurophysiol. 86, 1104 (2001).
[32] Y. L. Chen, H. Zhang, H. Wang, L. C. Yu, and Y. Chen, PLoS One 8, e56822 (2013).

[33] R. Krips and M. Furst, Neural Comput. 21, 2524 (2009).
[34] M. D. McDonnell and D. Abbott, PLoS Comput. Biol. 5, e1000348 (2009).
[35] K. Wiesenfeld and F. Moss, Nature (London) 373, 33 (1995).
[36] A. Moujahid, A. d'Anjou, F. J. Torrealdea, and F. Torrealdea, Phys. Rev. E 83, 031912 (2011).
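The appendix results lend themselves to a quick numerical cross-check. The sketch below is illustrative rather than the paper's code (step sizes, trial counts, random seeds, and absorbing thresholds are our own choices): it integrates the linearized dynamics of Eq. (A4) to test the detection probability of Eq. (A10), and the full double-well dynamics of Eq. (A1) to test the Kramers rate of Eq. (B2).

```python
import math, random

# Illustrative Euler-Maruyama cross-checks of Eqs. (A10) and (B2).
def p_right_mc(eps0, a=1.0, D=0.1, h=5e-3, trials=1000, rng=random.Random(7)):
    """Integrate the linearized dynamics of Eq. (A4), d(eps)/dt = a*eps + xi(t),
    and count trajectories that escape to the right (the neuron 'fires')."""
    hits = 0
    for _ in range(trials):
        eps = eps0
        while abs(eps) < 3.0:      # far from the saddle: the outcome is decided
            eps += a * eps * h + math.sqrt(2.0 * D * h) * rng.gauss(0.0, 1.0)
        hits += eps > 0.0
    return hits / trials

def p_right_theory(eps0, a=1.0, D=0.1):
    return 0.5 * (1.0 + math.erf(eps0 / math.sqrt(2.0 * D / a)))   # Eq. (A10)

def kramers_mc(a=1.0, D=0.08, h=1e-2, trials=100, rng=random.Random(11)):
    """Escape rate from the right well of U(x) = -(a/2)x^2 + x^4/4, via Eq. (A1):
    dx/dt = a*x - x^3 + xi(t); trajectories are absorbed at the left minimum."""
    times = []
    for _ in range(trials):
        x, t = math.sqrt(a), 0.0
        while x > -math.sqrt(a):
            x += (a * x - x**3) * h + math.sqrt(2.0 * D * h) * rng.gauss(0.0, 1.0)
            t += h
        times.append(t)
    return len(times) / sum(times)  # inverse mean first-passage time

def kramers_theory(a=1.0, D=0.08):
    return math.sqrt(2.0) * a / (2.0 * math.pi) * math.exp(-a * a / (4.0 * D))  # Eq. (B2)

pr_mc, pr_th = p_right_mc(0.2), p_right_theory(0.2)
k_mc, k_th = kramers_mc(), kramers_theory()
print("P(fire): MC", pr_mc, "theory", pr_th)
print("escape rate: MC", k_mc, "theory", k_th)
```

With the moderate barrier chosen here (ΔU/D ≈ 3), the simulated escape rate agrees with Eq. (B2) only up to the usual Kramers prefactor corrections, while the linearized detection probability matches Eq. (A10) to within sampling error.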

