
A Stochastic Mean Field Model for an Excitatory and Inhibitory Synaptic Drive Cortical Neuronal Network

Qing Hui, Member, IEEE, Wassim M. Haddad, Fellow, IEEE, James M. Bailey, and Tomohisa Hayakawa, Member, IEEE

Abstract— With the advances in biochemistry, molecular biology, and neurochemistry there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we develop a mean field synaptic drive firing rate cortical neuronal model and demonstrate how the induction of general anesthesia can be explained using multistability; the property whereby the solutions of a dynamical system exhibit multiple attracting equilibria under asymptotically slowly changing inputs or system parameters. In particular, we demonstrate multistability in the mean when the system initial conditions or the system coefficients of the neuronal connectivity matrix are random variables. Uncertainty in the system coefficients is captured by representing system uncertain parameters by a multiplicative white noise model wherein stochastic integration is interpreted in the sense of Itô. Modeling a priori system parameter uncertainty using a multiplicative white noise model is motivated by means of the maximum entropy principle of Jaynes and statistical analysis.

Index Terms— Brownian motion, excitatory and inhibitory neurons, general anesthesia, mean field model, multiplicative white noise, spiking neuron models, stochastic multistability, uncertainty modeling, Wiener process.

I. I NTRODUCTION

THE neurosciences have relied on mathematical modeling throughout the relatively short history of this discipline [1]–[4]. Mathematical models facilitate the interpretation of experimental results, and lead to new hypotheses about neurological function, ranging from the simplest reflex arc to the question of what is consciousness. Nonlinear dynamical system theory, in particular, provides a framework for exploring the behavior of large-scale networks of neurons. An important application of nonlinear dynamical systems theory to neuroscience is the study of phenomena that exhibit nearly discontinuous transitions between macroscopic states [5]. Understanding these phenomena is immediately clinically relevant as anesthetic agents exhibit this behavior [6]–[11]. In both animal and human studies, it has been observed that with increasing doses of anesthetic agents the transition from consciousness to unconsciousness or from responsiveness to nonresponsiveness in the individual subject is very sharp, almost an all-or-none transition [12], confirming the clinical observations of generations of clinicians.

The neural network of the brain consists of approximately 10^11 neurons (nerve cells) with each having 10^4–10^5 connections. Our research is postulated on the belief that the explanation for the mechanism of action of anesthetics lies in the network properties of the brain. The challenge is how to account for such a transition in network properties in terms of the known molecular properties of the anesthetic agent. While the earliest theories of the mechanism of action of general anesthesia postulated a perturbation of the lipid bilayer membrane of neurons, more recent focus is on the interaction of the anesthetic agent with specific protein receptors [13]–[17]. The experimental evidence indicates that general anesthetics alter postsynaptic potentials [18], [19]. However, it is not immediately clear how the effect on postsynaptic potentials translates to the observed very sharp transition between consciousness and unconsciousness induced by general anesthetic agents.

The dynamics of even a single neuron are quite complex [20] and when this complexity is coupled with the large scale and high connectivity of the neural network of the brain, theoretical analysis becomes intractable without major simplifying assumptions. Typically, the description of single neuron dynamics is simplified and assumed to be described either by an integrate-and-fire model or by a spike response model [21]. In addition, the scale and connectivity of the network are simplified using mean field theories. Earlier mean field theories assumed that the brain is organized into a limited number of pools of identical spiking neurons [22]. However, more commonly mean field theories assume that the strength of connection between neurons is normally distributed around some mean value.

Manuscript received October 15, 2012; revised August 29, 2013; accepted September 2, 2013. Date of publication September 30, 2013; date of current version March 10, 2014. This work was supported in part by the Defense Threat Reduction Agency under Grant HDTRA1-10-1-0090 and Grant HDTRA1-13-1-0048, in part by the QNRF under NPRP Grant 4-187-2-060, and in part by the Air Force Office of Scientific Research under Grant FA9550-12-1-0192. Q. Hui is with the Department of Mechanical Engineering, Texas Tech University, Lubbock, TX 79409-1021 USA (e-mail: [email protected]). W. M. Haddad is with the School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA (e-mail: [email protected]). J. M. Bailey is with the Department of Anesthesiology, Northeast Georgia Medical Center, Gainesville, GA 30503 USA (e-mail: [email protected]). T. Hayakawa is with the Department of Mechanical and Environmental Informatics, Tokyo Institute of Technology, Tokyo 152-8552, Japan (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2013.2281065



Mean field theories impose self-consistency on field variables; for example, if postsynaptic potentials are assumed to be a function of some mean firing rate, then those postsynaptic potentials should lead to a consistent predicted mean firing rate. The idea of applying mean field theories, drawn from the study of condensed matter, originated with [23]. Subsequently, Sompolinsky et al. [24] developed a mean field theory for neural networks analogous to the equations developed for spin glasses with randomly symmetric bonds [25]. Amit and Brunel [26] investigated the stability of system states for a network of integrate-and-fire neurons, while Brunel and Hakim [27] extended this theoretical model to the analysis of oscillations. Gerstner et al. [21], [28] subsequently developed a mean field theory using a spike response model and also demonstrated that the integrate-and-fire model was a special case of the spike response model.

In [5], we used deterministic multistability theory to explain the underlying mechanism of action for anesthesia and consciousness using a synaptic drive firing model framework [20]. Specifically, we applied our results to the induction of general anesthesia with a mean field assumption leading to a two-state (mean excitatory and mean inhibitory) model, and within this assumption, we demonstrated multistability as a consequence of changes in the inhibitory postsynaptic potential induced by anesthetic agents. In this paper, we extend these results further by demonstrating multistability in the mean when the coefficients of the neuronal connectivity matrix are random variables or when the synaptic drive firing model initial conditions are random. Specifically, we use a stochastic multiplicative uncertainty model to include modeling of a priori uncertainty in the coefficients of the neuronal connectivity matrix by means of state-dependent noise. A stochastic multiplicative uncertainty model uses state-dependent Gaussian white noise to represent parameter uncertainty by defining a measure of ignorance, in terms of an information-theoretic entropy, and then determining the probability distribution that maximizes this measure subject to agreement with a given model.

II. BIOLOGICAL NEURAL NETWORKS

The fundamental building block of the central nervous system, the neuron, can be divided into three functionally distinct parts, namely, the dendrites, soma (or cell body), and axon. The dendrites play the role of input devices that collect signals from other neurons and transmit them to the soma; whereas the soma generates a signal that is transmitted to other neurons by the axon. The axons of other neurons connect to the dendrites and soma surfaces by means of connectors called synapses. The behavior of the neuron is best described in terms of the electrochemical potential gradient across the cell membrane. If the voltage gradient across the membrane increases to a critical threshold value, then there is a subsequent abrupt steplike increase in the potential gradient, the action potential. This action potential is transmitted from the soma along the axon to a dendrite of a receiving neuron. The action potential elicits the release of neurotransmitter molecules that diffuse to the dendrite of a receiving neuron. This alters the voltage gradient across the receiving neuron.

The electrochemical potential for a neuron can be described by a nonlinear four-state system [20]. Coupling these system equations for each neuron in a large neural population is computationally prohibitive. To simplify the mathematical modeling, it has been common to use phenomenological firing rate models for studying neural coding, memory, and network dynamics [20]. Firing rate models involve the averaged behavior of the spiking rates of groups of neurons rather than tracking the spike rate of each individual neuron cell. In such population models, the activity of a neuron, that is, the rate at which the neuron generates an action potential (fires), is modeled as a function of the voltage (across the membrane). The firing of a neuron evokes voltage changes, postsynaptic potentials, on receiving neurons; that is, neurons electrically connected to the firing neurons via axon–dendrite connections. In general, neurons are either excitatory or inhibitory depending on whether the postsynaptic potential increases or decreases the potential of the receiving neuron. In particular, excitatory neurotransmitters depolarize postsynaptic membranes by increasing membrane potentials and can collectively generate an action potential. Inhibitory neurotransmitters hyperpolarize the postsynaptic membrane by decreasing membrane potentials, thereby nullifying the actions of excitatory neurotransmitters and in certain cases preventing the generation of action potentials. Biological neural network models predict a voltage in the receiving or postsynaptic neuron given by

V(t) = \sum_{i=1}^{n_E} \sum_{j} \alpha_i^E(t - t_j) + \sum_{i'=1}^{n_I} \sum_{j'} \alpha_{i'}^I(t - t_{j'})

where i ∈ {1, . . . , n_E} and i′ ∈ {1, . . . , n_I} enumerate the action potential or firings of the excitatory and inhibitory transmitting (presynaptic) neurons at firing times t_j and t_{j′}, respectively, and \alpha_i^E(·) and \alpha_{i′}^I(·) are functions (in volts) describing the evolution of the excitatory and inhibitory postsynaptic potentials, respectively.

Using a (possibly discontinuous) function f_i(·) to represent the firing rate (in hertz) of the ith neuron and assuming that the firing rate is a function of the voltage v_i^E(·) (respectively, v_i^I(·)) across the membrane of the ith neuron given by f_i(v_i^E) (respectively, f_i(v_i^I)), it follows that

v_i^E(t) = \sum_{j=1, j \neq i}^{n_E} A_{ij}^{EE} \int_{-\infty}^{t} \alpha_j^E(t-\tau) f_j(v_j^E(\tau))\, d\tau + \sum_{j'=1}^{n_I} A_{ij'}^{EI} \int_{-\infty}^{t} \alpha_{j'}^I(t-\tau) f_{j'}(v_{j'}^I(\tau))\, d\tau + v_{\mathrm{th}i}^E(t),   i = 1, \ldots, n_E    (1)

v_i^I(t) = \sum_{j=1}^{n_E} A_{ij}^{IE} \int_{-\infty}^{t} \alpha_j^E(t-\tau) f_j(v_j^E(\tau))\, d\tau + \sum_{j'=1, j' \neq i}^{n_I} A_{ij'}^{II} \int_{-\infty}^{t} \alpha_{j'}^I(t-\tau) f_{j'}(v_{j'}^I(\tau))\, d\tau + v_{\mathrm{th}i}^I(t),   i = 1, \ldots, n_I    (2)


where the neuronal connectivity matrix A^{XY}, with units of volts × synapse, contains entries A_{ij}^{XY}, X, Y ∈ {E, I}, representing the coupling strength of the jth neuron on the ith neuron such that either A_{ij}^{XE} > 0 or A_{ij}^{XI} < 0, X ∈ {E, I}, if the jth neuron is connected (i.e., contributes a postsynaptic potential) to the ith neuron, and A_{ij}^{XY} = 0 otherwise. Furthermore, v_{\mathrm{th}i}^E(·) and v_{\mathrm{th}i}^I(·) are continuous threshold input voltages. Note that A_{ii}^{EE} \triangleq A_{ii}^{II} \triangleq 0 by definition. Next, defining the synaptic drive (a dimensionless quantity per synapse) of each (excitatory or inhibitory) neuron by

S_i^{(E,I)}(t) \triangleq \int_{-\infty}^{t} \alpha_i^{(E,I)}(t-\tau) f_i(v_i^{(E,I)}(\tau))\, d\tau    (3)

and assuming

\alpha_i^{(E,I)}(t) = B^{(E,I)} e^{-t/\lambda_i^{(E,I)}}    (4)

where the dimensionless gain B^{(E,I)} is equal to B^E if the ith neuron is excitatory and B^I if the ith neuron is inhibitory, and similarly for S_i^{(E,I)}, v_i^{(E,I)}, \alpha_i^{(E,I)}, and \lambda_i^{(E,I)}, it follows from (3) and (4) that

dS_i^{(E,I)}(t)/dt = -\frac{1}{\lambda_i^{(E,I)}} S_i^{(E,I)}(t) + B^{(E,I)} f_i(v_i^{(E,I)}(t)).
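The step from (3) and (4) to this first-order differential equation follows from the Leibniz integral rule; the short worked calculation below is added here for clarity and uses only the definitions given above:

```latex
% Differentiating the synaptic drive (3) with the exponential kernel (4).
\begin{align*}
S_i^{(E,I)}(t) &= \int_{-\infty}^{t} B^{(E,I)} e^{-(t-\tau)/\lambda_i^{(E,I)}}
                  f_i\bigl(v_i^{(E,I)}(\tau)\bigr)\, d\tau,\\
\frac{dS_i^{(E,I)}(t)}{dt}
  &= B^{(E,I)} f_i\bigl(v_i^{(E,I)}(t)\bigr)
   - \frac{1}{\lambda_i^{(E,I)}}\int_{-\infty}^{t} B^{(E,I)} e^{-(t-\tau)/\lambda_i^{(E,I)}}
     f_i\bigl(v_i^{(E,I)}(\tau)\bigr)\, d\tau\\
  &= -\frac{1}{\lambda_i^{(E,I)}} S_i^{(E,I)}(t) + B^{(E,I)} f_i\bigl(v_i^{(E,I)}(t)\bigr),
\end{align*}
```

where the first term is the boundary contribution from evaluating the kernel at \tau = t and the second comes from differentiating the kernel under the integral sign.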

Now, using the expressions for the excitatory and inhibitory voltages given by (1) and (2), respectively, it follows that

dS_i^E(t)/dt = -\frac{1}{\lambda_i^E} S_i^E(t) + B^E f_i\Bigl( \sum_{j=1, j \neq i}^{n_E} A_{ij}^{EE} S_j^E(t) + \sum_{j'=1}^{n_I} A_{ij'}^{EI} S_{j'}^I(t) + v_{\mathrm{th}i}^E(t) \Bigr),   i = 1, \ldots, n_E    (5)

dS_i^I(t)/dt = -\frac{1}{\lambda_i^I} S_i^I(t) + B^I f_i\Bigl( \sum_{j=1}^{n_E} A_{ij}^{IE} S_j^E(t) + \sum_{j'=1, j' \neq i}^{n_I} A_{ij'}^{II} S_{j'}^I(t) + v_{\mathrm{th}i}^I(t) \Bigr),   i = 1, \ldots, n_I.    (6)

The above analysis reveals that a form for capturing the neuroelectronic behavior of biological excitatory and inhibitory neuronal networks can be written as

dS_i(t)/dt = -\tau_i S_i(t) + B_i f_i\Bigl( \sum_{j=1}^{n} A_{ij} S_j(t) + v_{\mathrm{th}i}(t) \Bigr),   S_i(0) = S_{i0},   t \geq 0,   i = 1, \ldots, n    (7)

where S_i(t) ∈ D ⊆ R, t ≥ 0, is the ith synaptic drive, v_{\mathrm{th}i}(t) ∈ R, t ≥ 0, denotes the threshold input voltage of the ith neuron, A_{ij} is a constant representing the coupling strength of the jth neuron on the ith neuron, \tau_i \triangleq 1/\lambda_i is a time constant, B_i is a constant gain for the firing rate of the ith neuron, and f_i(·) is a nonlinear activation function describing the relationship between the synaptic drive and the firing rate of the ith neuron.

In this paper, we assume that f_i(·) is a continuous function such as a half-wave rectification function. Specifically, for a typical neuron [3],

f_i(x) = [x]^+    (8)

where i ∈ {1, . . . , n} and [x]^+ = x if x ≥ 0, and [x]^+ = 0 otherwise. Alternatively, we can approximate f_i(x) by the smooth (i.e., infinitely differentiable) half-wave rectification function

f_i(x) = \frac{x e^{\gamma x}}{1 + e^{\gamma x}}    (9)

where i ∈ {1, . . . , n} and \gamma \gg 0. Note that for large \gamma, f_i(x) ≈ x for x > 0 and f_i(x) ≈ 0 for x ≤ 0. In addition, note that (8) and (9) reflect the fact that as the voltage increases across the membrane of the ith neuron, the firing rate increases as well. Often, the membrane potential-firing rate curve exhibits a linear characteristic for a given range of voltages. At higher voltages, however, a saturation phenomenon appears, showing that the maximum firing rate has been reached. To capture this effect, f_i(·) can be modeled as

f_i(x) = \frac{f_{\max} e^{\gamma x}}{1 + e^{\gamma x}}    (10)

where i ∈ {1, . . . , n}, \gamma \gg 0, and f_{\max} = \lim_{x \to \infty} f_i(x) is the maximum firing rate.
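To make the firing-rate functions (8)–(10) and the synaptic drive dynamics (7) concrete, the following Python sketch is offered. It is not part of the original paper; all numerical values (gamma, f_max, the coupling matrix, gains, time constants, and thresholds) are illustrative placeholders rather than parameters used by the authors.

```python
import numpy as np

def f_halfwave(x):
    """Half-wave rectification (8): [x]^+ = max(x, 0)."""
    return np.maximum(x, 0.0)

def f_smooth(x, gamma=50.0):
    """Smooth half-wave rectification (9): x e^{gamma x}/(1 + e^{gamma x})."""
    return x / (1.0 + np.exp(-gamma * x))        # algebraically identical, numerically stable

def f_saturating(x, gamma=50.0, f_max=1.0):
    """Saturating firing-rate function (10): f_max e^{gamma x}/(1 + e^{gamma x})."""
    return f_max / (1.0 + np.exp(-gamma * x))

def simulate_synaptic_drive(A, tau, B, v_th, S0, f, dt=1e-3, T=2.0):
    """Forward-Euler integration of (7):
       dS_i/dt = -tau_i S_i + B_i f_i( sum_j A_ij S_j + v_th_i )."""
    S = np.array(S0, dtype=float)
    traj = [S.copy()]
    for _ in range(int(T / dt)):
        S = S + dt * (-tau * S + B * f(A @ S + v_th))
        traj.append(S.copy())
    return np.array(traj)

if __name__ == "__main__":
    # Hypothetical 2-neuron (one excitatory, one inhibitory) example.
    A = np.array([[0.0, -0.5],
                  [0.6,  0.0]])        # coupling strengths (illustrative only)
    tau = np.array([1.0, 1.0])         # reciprocal time constants 1/lambda_i
    B = np.array([1.0, 1.0])           # firing-rate gains
    v_th = np.array([0.3, 0.1])        # threshold input voltages
    traj = simulate_synaptic_drive(A, tau, B, v_th, S0=[0.0, 0.0], f=f_smooth)
    print("final synaptic drives:", traj[-1])
```

Note that (9) and (10) are implemented in the algebraically equivalent forms x/(1 + e^{-\gamma x}) and f_max/(1 + e^{-\gamma x}) to avoid numerical overflow for large \gamma x; the forward-Euler step is used only for brevity, and any standard ODE integrator applies.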

III. TWO-CLASS MEAN EXCITATORY AND INHIBITORY SYNAPTIC DRIVE MODEL

As shown in [5], the excitatory and inhibitory neural network model given by (5) and (6) can possess multiple equilibria. For certain values of the model parameters it can be shown that as the inhibitory time constants \lambda_i^I get larger, the equilibrium states can flip their stabilities. Since molecular studies suggest that one possible mechanism of action of anesthetics is the prolongation of the time constants of inhibitory neurons [18], [19], this suggests that general anesthesia is a phenomenon in which different equilibria can be attained with changing anesthetic agent concentrations. In this section, we develop a simplified model involving mean excitatory and inhibitory synaptic drives to explore this multistability phenomenon.

Consider the excitatory and inhibitory synaptic drive model given by (5) and (6) with f_i(·) = f(·), B^E = B^I = 1, \lambda_i^E = \lambda^E, and \lambda_i^I = \lambda^I. In this case, (5) and (6) become

dS_i^E(t)/dt = f\Bigl( \sum_{j=1}^{n_E} A_{ij}^{EE} S_j^E(t) + \sum_{k=1}^{n_I} A_{ik}^{EI} S_k^I(t) + v_{\mathrm{th}i}^E(t) \Bigr) - \frac{1}{\lambda^E} S_i^E(t),   i = 1, \ldots, n_E    (11)

dS_i^I(t)/dt = f\Bigl( \sum_{j=1}^{n_E} A_{ij}^{IE} S_j^E(t) + \sum_{k=1}^{n_I} A_{ik}^{II} S_k^I(t) + v_{\mathrm{th}i}^I(t) \Bigr) - \frac{1}{\lambda^I} S_i^I(t),   i = 1, \ldots, n_I    (12)

where f(·) is given by either (9) or (10) and A_{ii}^{EE} = A_{ii}^{II} = 0. Next, let A_{ij}^{EE} = \bar{A}^{EE} + \Delta_{ij}^{EE}, A_{ij}^{EI} = \bar{A}^{EI} + \Delta_{ij}^{EI}, A_{ij}^{IE} = \bar{A}^{IE} + \Delta_{ij}^{IE}, and A_{ij}^{II} = \bar{A}^{II} + \Delta_{ij}^{II}, where


\bar{A}^{XY} \triangleq \frac{1}{n_X n_Y} \sum_{i=1}^{n_X} \sum_{j=1}^{n_Y} A_{ij}^{XY}, X, Y ∈ {E, I}, denote mean values and \Delta_{ij}^{XY}, X, Y ∈ {E, I}, are deviations from the mean. In this case, it follows that

\sum_{i=1}^{n_E} \sum_{j=1}^{n_E} \Delta_{ij}^{EE} = \sum_{i=1}^{n_E} \sum_{j=1}^{n_I} \Delta_{ij}^{EI} = \sum_{i=1}^{n_I} \sum_{j=1}^{n_E} \Delta_{ij}^{IE} = \sum_{i=1}^{n_I} \sum_{j=1}^{n_I} \Delta_{ij}^{II} = 0.    (13)

Now, using the average and perturbed expressions for A_{ij}^{XY}, X, Y ∈ {E, I}, (11) and (12) can be rewritten as

dS_i^E(t)/dt = f\Bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + \sum_{j=1}^{n_E} \Delta_{ij}^{EE} S_j^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) + \sum_{k=1}^{n_I} \Delta_{ik}^{EI} S_k^I(t) + v_{\mathrm{th}i}^E(t) \Bigr) - \frac{1}{\lambda^E} S_i^E(t),   i = 1, \ldots, n_E    (14)

dS_i^I(t)/dt = f\Bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + \sum_{j=1}^{n_E} \Delta_{ij}^{IE} S_j^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) + \sum_{k=1}^{n_I} \Delta_{ik}^{II} S_k^I(t) + v_{\mathrm{th}i}^I(t) \Bigr) - \frac{1}{\lambda^I} S_i^I(t),   i = 1, \ldots, n_I    (15)

where \bar{S}^E(t) \triangleq (1/n_E) \sum_{j=1}^{n_E} S_j^E(t) and \bar{S}^I(t) \triangleq (1/n_I) \sum_{j=1}^{n_I} S_j^I(t) denote the mean excitatory synaptic drive and mean inhibitory synaptic drive in dimensionless units of 1/synapse^2, respectively. Now, defining \delta_i^E(t) \triangleq S_i^E(t) - \bar{S}^E(t) and \delta_i^I(t) \triangleq S_i^I(t) - \bar{S}^I(t), where \delta_i^E(t) and \delta_i^I(t) are deviations from the mean, it follows that (14) and (15) become

dS_i^E(t)/dt = f\Bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + \bar{S}^E(t) \sum_{j=1}^{n_E} \Delta_{ij}^{EE} + n_I \bar{A}^{EI} \bar{S}^I(t) + \bar{S}^I(t) \sum_{k=1}^{n_I} \Delta_{ik}^{EI} + \sum_{j=1}^{n_E} \Delta_{ij}^{EE} \delta_j^E(t) + \sum_{k=1}^{n_I} \Delta_{ik}^{EI} \delta_k^I(t) + v_{\mathrm{th}i}^E(t) \Bigr) - \frac{1}{\lambda^E} S_i^E(t),   i = 1, \ldots, n_E    (16)

dS_i^I(t)/dt = f\Bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + \bar{S}^E(t) \sum_{j=1}^{n_E} \Delta_{ij}^{IE} + n_I \bar{A}^{II} \bar{S}^I(t) + \bar{S}^I(t) \sum_{k=1}^{n_I} \Delta_{ik}^{II} + \sum_{j=1}^{n_E} \Delta_{ij}^{IE} \delta_j^E(t) + \sum_{k=1}^{n_I} \Delta_{ik}^{II} \delta_k^I(t) + v_{\mathrm{th}i}^I(t) \Bigr) - \frac{1}{\lambda^I} S_i^I(t),   i = 1, \ldots, n_I.    (17)

Next, assume that all terms with a factor \Delta_{ij}^{XY}, X, Y ∈ {E, I}, i = 1, \ldots, n_X, j = 1, \ldots, n_Y, in (16) and (17) are small relative to the remaining terms in f(·). Then a first-order expansion of (16) and (17) gives

dS_i^E(t)/dt = f\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) + v_{\mathrm{th}i}^E(t) \bigr) + f'\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) + v_{\mathrm{th}i}^E(t) \bigr) \Bigl[ \bar{S}^E(t) \sum_{j=1}^{n_E} \Delta_{ij}^{EE} + \bar{S}^I(t) \sum_{k=1}^{n_I} \Delta_{ik}^{EI} + \sum_{j=1}^{n_E} \Delta_{ij}^{EE} \delta_j^E(t) + \sum_{k=1}^{n_I} \Delta_{ik}^{EI} \delta_k^I(t) \Bigr] - \frac{1}{\lambda^E} S_i^E(t),   i = 1, \ldots, n_E    (18)

dS_i^I(t)/dt = f\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) + v_{\mathrm{th}i}^I(t) \bigr) + f'\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) + v_{\mathrm{th}i}^I(t) \bigr) \Bigl[ \bar{S}^E(t) \sum_{j=1}^{n_E} \Delta_{ij}^{IE} + \bar{S}^I(t) \sum_{k=1}^{n_I} \Delta_{ik}^{II} + \sum_{j=1}^{n_E} \Delta_{ij}^{IE} \delta_j^E(t) + \sum_{k=1}^{n_I} \Delta_{ik}^{II} \delta_k^I(t) \Bigr] - \frac{1}{\lambda^I} S_i^I(t),   i = 1, \ldots, n_I.    (19)

Now, assuming that the higher order terms can be ignored, (18) and (19) become

dS_i^E(t)/dt = f\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) + v_{\mathrm{th}i}^E(t) \bigr) + f'\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) + v_{\mathrm{th}i}^E(t) \bigr) \Bigl[ \bar{S}^E(t) \sum_{j=1}^{n_E} \Delta_{ij}^{EE} + \bar{S}^I(t) \sum_{k=1}^{n_I} \Delta_{ik}^{EI} \Bigr] - \frac{1}{\lambda^E} S_i^E(t),   i = 1, \ldots, n_E    (20)

dS_i^I(t)/dt = f\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) + v_{\mathrm{th}i}^I(t) \bigr) + f'\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) + v_{\mathrm{th}i}^I(t) \bigr) \Bigl[ \bar{S}^E(t) \sum_{j=1}^{n_E} \Delta_{ij}^{IE} + \bar{S}^I(t) \sum_{k=1}^{n_I} \Delta_{ik}^{II} \Bigr] - \frac{1}{\lambda^I} S_i^I(t),   i = 1, \ldots, n_I.    (21)

Finally, summing (20) and (21) over i = 1, \ldots, n_E and i = 1, \ldots, n_I, dividing by n_E and n_I, respectively, assuming that v_{\mathrm{th}1}^E(t) = v_{\mathrm{th}2}^E(t) = \cdots = v_{\mathrm{th}n_E}^E(t) = v_{\mathrm{th}}^E and v_{\mathrm{th}1}^I(t) = v_{\mathrm{th}2}^I(t) = \cdots = v_{\mathrm{th}n_I}^I(t) = v_{\mathrm{th}}^I, t \geq 0, and using (13), it follows that the average excitatory synaptic drive and the average inhibitory synaptic drive are given by

d\bar{S}^E(t)/dt = f\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) + v_{\mathrm{th}}^E \bigr) - \frac{1}{\lambda^E} \bar{S}^E(t),   t \geq 0    (22)

d\bar{S}^I(t)/dt = f\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) + v_{\mathrm{th}}^I \bigr) - \frac{1}{\lambda^I} \bar{S}^I(t).    (23)
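Since the mean field model (22) and (23) is only two-dimensional, its qualitative behavior is easy to probe numerically. The sketch below is an added illustration, not the authors' code; all parameter values are hypothetical placeholders. It integrates (22) and (23) from two different initial conditions and for two values of the inhibitory time constant \lambda^I, the parameter that anesthetic agents are assumed to prolong.

```python
import numpy as np

def f(x, gamma=50.0, f_max=1.0):
    # Saturating firing-rate function (10), in overflow-safe form.
    return f_max / (1.0 + np.exp(-gamma * x))

def mean_field_rhs(S, p):
    """Right-hand side of (22)-(23); S = [mean excitatory drive, mean inhibitory drive]."""
    SE, SI = S
    dSE = f(p["nE_AEE"] * SE + p["nI_AEI"] * SI + p["vthE"]) - SE / p["lamE"]
    dSI = f(p["nE_AIE"] * SE + p["nI_AII"] * SI + p["vthI"]) - SI / p["lamI"]
    return np.array([dSE, dSI])

def integrate(S0, p, dt=1e-3, T=10.0):
    S = np.array(S0, dtype=float)
    for _ in range(int(T / dt)):
        S = S + dt * mean_field_rhs(S, p)   # forward Euler, for illustration only
    return S

if __name__ == "__main__":
    base = dict(nE_AEE=1.0, nI_AEI=-2.0, nE_AIE=1.0, nI_AII=-0.5,
                vthE=0.2, vthI=0.1, lamE=1.0)    # hypothetical values
    for lamI in (0.5, 2.0):                      # short vs. prolonged inhibitory time constant
        p = dict(base, lamI=lamI)
        for S0 in ([0.05, 0.05], [1.0, 1.0]):
            print(f"lamI={lamI}, S0={S0} -> equilibrium ~ {integrate(S0, p)}")
```

Sweeping \lambda^I and the initial conditions in this way gives a quick, informal picture of whether the mean drives settle to one equilibrium or to several, which is the multistability question analyzed rigorously in the next section.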


IV. MULTISTABILITY ANALYSIS OF THE MEAN FIELD SYNAPTIC DRIVE MODEL WITH RANDOM INITIAL CONDITIONS

In this section, we consider the unforced version (i.e., v_{\mathrm{th}i}^E(t) \equiv 0 and v_{\mathrm{th}i}^I(t) \equiv 0) of (11) and (12) given by

dS_i^E(t)/dt = f\Bigl( \sum_{j=1}^{n_E} A_{ij}^{EE} S_j^E(t) + \sum_{k=1}^{n_I} A_{ik}^{EI} S_k^I(t) \Bigr) - \frac{1}{\lambda^E} S_i^E(t),   i = 1, \ldots, n_E,   t \geq 0    (24)

dS_i^I(t)/dt = f\Bigl( \sum_{j=1}^{n_E} A_{ij}^{IE} S_j^E(t) + \sum_{k=1}^{n_I} A_{ik}^{II} S_k^I(t) \Bigr) - \frac{1}{\lambda^I} S_i^I(t),   i = 1, \ldots, n_I.    (25)

In this case, (22) and (23) can be further simplified as

d\bar{S}^E(t)/dt = f\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) \bigr) - \frac{1}{\lambda^E} \bar{S}^E(t),   t \geq 0    (26)

d\bar{S}^I(t)/dt = f\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) \bigr) - \frac{1}{\lambda^I} \bar{S}^I(t).    (27)

Hence, (26) and (27) represent the spatial average (mean) dynamics of the system (24) and (25). It is important to note that the system (26) and (27) is not the first moment equation of (24) and (25). Next, we develop several key results on boundedness of solutions of the average synaptic drives.

Proposition 1: Let \bar{\mathbb{R}}_+^2 denote the nonnegative orthant of \mathbb{R}^2. Consider the dynamical system (26) and (27), and assume \bar{S}^E(0) \geq 0 and \bar{S}^I(0) \geq 0. Then \bar{\mathbb{R}}_+^2 is an invariant set with respect to (26) and (27).

Proof: The result is a direct consequence of Proposition 2.1 of [29] by noting that f(x) = x e^{\gamma x}/(1 + e^{\gamma x}) \geq 0 (respectively, f(x) = f_{\max} e^{\gamma x}/(1 + e^{\gamma x}) \geq 0) for all x \geq 0.

For the statement of the next result, recall that a matrix A \in \mathbb{R}^{n \times n} is Lyapunov stable if and only if e^{At} is bounded for all t \geq 0.

Proposition 2: Consider the dynamical system (26) and (27), and assume that \bar{S}^E(0) \geq 0 and \bar{S}^I(0) \geq 0. If A - L is Lyapunov stable, where

A \triangleq \begin{bmatrix} n_E \bar{A}^{EE} & n_I \bar{A}^{EI} \\ n_E \bar{A}^{IE} & n_I \bar{A}^{II} \end{bmatrix},   L \triangleq \begin{bmatrix} 1/\lambda^E & 0 \\ 0 & 1/\lambda^I \end{bmatrix}    (28)

then the solutions to (26) and (27) are bounded for all t \geq 0.

Proof: It follows from Proposition 1 that \bar{S}^E(t) \geq 0 and \bar{S}^I(t) \geq 0 for all t \geq 0. Now, since e^{\gamma x}/(1 + e^{\gamma x}) < 1 for every x \in \mathbb{R}, it follows that f(x) \leq x for all x \geq 0. Thus, denoting by "\leq\leq" a componentwise inequality, it follows that

\begin{bmatrix} f\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) \bigr) - \frac{1}{\lambda^E} \bar{S}^E(t) \\ f\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) \bigr) - \frac{1}{\lambda^I} \bar{S}^I(t) \end{bmatrix} \leq\leq \begin{bmatrix} n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) - \frac{1}{\lambda^E} \bar{S}^E(t) \\ n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) - \frac{1}{\lambda^I} \bar{S}^I(t) \end{bmatrix} = (A - L) \begin{bmatrix} \bar{S}^E(t) \\ \bar{S}^I(t) \end{bmatrix},   t \geq 0

which implies that the solutions to (26) and (27) are bounded for all t \geq 0.

The following corollary to Proposition 2 holds for the case where f_i(·) = f(·) is given by (9).

Corollary 1: Consider the dynamical system (26) and (27), and assume that \bar{S}^E(0) \geq 0 and \bar{S}^I(0) \geq 0. If (1/2)A - L is unstable, then the solutions to (26) and (27) are unbounded for all t \geq 0.

Proof: The proof is similar to the proof of Proposition 2 by noting that f(x) \geq (1/2)x for all x \geq 0.

Alternatively, in light of Proposition 1 we can use a linear function to give a sufficient condition guaranteeing boundedness of solutions for (26) and (27).

Proposition 3: Consider the dynamical system (26) and (27), and assume that \bar{S}^E(0) \geq 0 and \bar{S}^I(0) \geq 0. Furthermore, assume that there exist positive scalars L_{11} and L_{12} such that Q_{11} \leq 0 and Q_{12} \leq 0, where Q_{11} = -L_{11}/\lambda^E + L_{11} n_E \bar{A}^{EE} + L_{12} n_E \bar{A}^{IE} and Q_{12} = -L_{12}/\lambda^I + L_{12} n_I \bar{A}^{II} + L_{11} n_I \bar{A}^{EI}. Then the solutions to (26) and (27) are bounded for all t \geq 0.

Proof: Consider the function V(S) = L^T S, S \in \bar{\mathbb{R}}_+^2, where L = [L_{11}, L_{12}]^T and S = [\bar{S}^E, \bar{S}^I]^T, and note that

\dot{V}(S(t)) = L_{11} \dot{\bar{S}}^E(t) + L_{12} \dot{\bar{S}}^I(t) = -\frac{L_{11}}{\lambda^E} \bar{S}^E(t) + L_{11} f\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) \bigr) - \frac{L_{12}}{\lambda^I} \bar{S}^I(t) + L_{12} f\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) \bigr),   t \geq 0.

Now, using similar arguments as in the proof of Theorem 1 given below, it follows that

\dot{V}(S(t)) \leq -\frac{L_{11}}{\lambda^E} \bar{S}^E(t) + L_{11}\bigl[ n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) \bigr] - \frac{L_{12}}{\lambda^I} \bar{S}^I(t) + L_{12}\bigl[ n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) \bigr] = \Bigl( -\frac{L_{11}}{\lambda^E} + L_{11} n_E \bar{A}^{EE} + L_{12} n_E \bar{A}^{IE} \Bigr) \bar{S}^E(t) + \Bigl( -\frac{L_{12}}{\lambda^I} + L_{12} n_I \bar{A}^{II} + L_{11} n_I \bar{A}^{EI} \Bigr) \bar{S}^I(t) = Q_{11} \bar{S}^E(t) + Q_{12} \bar{S}^I(t),   t \geq 0.

Choosing L_{11} and L_{12} such that Q_{11} \leq 0 and Q_{12} \leq 0, and noting that \bar{S}^E(t) \geq 0 and \bar{S}^I(t) \geq 0, t \geq 0, it follows that \dot{V}(S(t)) \leq 0, t \geq 0, which proves that all the solutions to (26) and (27) are bounded for all t \geq 0.

Next, we study the stability of the moments of (26) and (27) to obtain the average asymptotic motion of (24) and (25). Let \mathcal{E} \subset \mathbb{R}^2 denote the equilibrium set of (26) and (27). Clearly, if (x, y) \in \mathcal{E}, then f(n_E \bar{A}^{EE} x + n_I \bar{A}^{EI} y) - (1/\lambda^E)x = 0 and f(n_E \bar{A}^{IE} x + n_I \bar{A}^{II} y) - (1/\lambda^I)y = 0. For the next set of definitions, E[\,\cdot\,] denotes expectation and S(t) \triangleq [\bar{S}^E(t), \bar{S}^I(t)]^T, t \geq 0. Inspired by [5] and [30], we have the following definitions.


Definition 1: Let p > 0 and let D \subseteq \mathbb{R}^2 be a positively invariant set with respect to (26) and (27). An equilibrium solution S(t) \equiv \alpha \in \mathcal{E} \cap D of (26) and (27) is semistable in the pth mean with respect to D if the following statements hold.
1) For every \varepsilon > 0, there exists \delta = \delta(\varepsilon) > 0 such that, if E[\|S(0) - \alpha\|^p] < \delta, then S(t) \in D and E[\|S(t) - \alpha\|^p] < \varepsilon for every t \geq 0.
2) There exist \varepsilon > 0 and \alpha^* \in \mathcal{E} \cap D such that, for every S(0) \in D with E[\|S(0) - \alpha\|^p] < \varepsilon, \lim_{t \to \infty} E[\|S(t) - \alpha^*\|^p] = 0.
The system (26) and (27) is semistable in the pth mean with respect to D if every equilibrium solution in D of (26) and (27) is semistable in the pth mean with respect to D. If, alternatively, S(t) \equiv \alpha \in \mathcal{E} only satisfies 1), then the equilibrium solution S(t) \equiv \alpha \in \mathcal{E} of (26) and (27) is Lyapunov stable in the pth mean with respect to D.

Definition 2: An equilibrium solution S(t) \equiv \alpha \in \mathcal{E} \cap D of (26) and (27) has a semistable expectation with respect to D if the following statements hold.
1) For every \varepsilon > 0, there exists \delta = \delta(\varepsilon) > 0 such that, if \|E[S(0)] - \alpha\| < \delta, then S(t) \in D and \|E[S(t)] - \alpha\| < \varepsilon for every t \geq 0.
2) There exist \varepsilon > 0 and \alpha^* \in \mathcal{E} \cap D such that, for every S(0) \in D with \|E[S(0)] - \alpha\| < \varepsilon, \lim_{t \to \infty} \|E[S(t)] - \alpha^*\| = 0.
The system (26) and (27) has a semistable expectation with respect to D if every equilibrium solution in D of (26) and (27) has a semistable expectation with respect to D. If, alternatively, S(t) \equiv \alpha \in \mathcal{E} only satisfies 1), then the equilibrium solution S(t) \equiv \alpha \in \mathcal{E} of (26) and (27) has a Lyapunov stable expectation with respect to D.

Definition 3: Consider the dynamical system (26) and (27). Let \mu(\cdot) denote the Lebesgue measure in \mathbb{R}^2 and let D \subseteq \mathbb{R}^2 be a positively invariant set with respect to (26) and (27). The dynamical system (26) and (27) is multistable in the pth mean with respect to D if the following statements hold.

1) \mathcal{E} \cap D \setminus \{(0, 0)\} \neq \emptyset.
2) For every S(0) \in D, there exists \alpha \in \mathcal{E} \cap D such that \lim_{t \to \infty} E[\|S(t) - \alpha\|^p] = 0.
3) There exists a subset \mathcal{M} \subset D satisfying \mu(\mathcal{M}) = 0 and a Lyapunov stable in the pth mean with respect to D equilibrium point \alpha \in \mathcal{E} \cap D such that, for every S(0) \in D \setminus \mathcal{M}, \lim_{t \to \infty} E[\|S(t) - \alpha\|^p] = 0.
We say that the system (26) and (27) has a multistable expectation with respect to D if \lim_{t \to \infty} E[\|S(t) - \alpha\|^p] = 0 is replaced by \lim_{t \to \infty} \|E[S(t)] - \alpha\| = 0 and \alpha in 3) has a Lyapunov stable expectation with respect to D.

Next, we use [5, Th. 1] to prove multistability in the pth mean for (26) and (27). The key idea behind [5, Th. 5.1] is to characterize multistability of dynamical systems via Lyapunov functions that do not make assumptions on sign definiteness.

Theorem 1: Consider the dynamical system (26) and (27). Assume that \bar{S}^E(0) \geq 0 and \bar{S}^I(0) \geq 0 are random variables and \mathcal{E} \cap \bar{\mathbb{R}}_+^2 \setminus \{(0, 0)\} \neq \emptyset. Furthermore, assume that the hypotheses of Proposition 2 or Proposition 3 hold and there exist scalars K_{ij} \in \mathbb{R}, i, j = 1, 2, such that

M_{11} < 0    (29)

M_{11} M_{22} - \frac{1}{4}(M_{12} + M_{21})^2 \geq 0    (30)

where

M_{ij} = -\frac{K_{ij} + K_{ji}}{2\lambda^X} + \Bigl[\frac{3}{4} + \frac{1}{4}\,\mathrm{sign}(K_{jj})\Bigr] K_{jj}\, n_X \bar{A}^{YX} + \Bigl[\frac{3}{4} + \frac{1}{4}\,\mathrm{sign}(K_{12} + K_{21})\Bigr] \frac{1}{2}(K_{12} + K_{21})\, n_X \bar{A}^{ZX},   i, j = 1, 2

X, Y, Z \in \{E, I\}, X = E if i = 1 and X = I otherwise, Y = E if j = 1 and Y = I otherwise, Z = \{E, I\} \setminus Y, and \mathrm{sign}(\sigma) \triangleq \sigma/|\sigma|, \sigma \neq 0, and \mathrm{sign}(0) \triangleq 0. In addition, assume that every point in the largest invariant set of \mathcal{N}(M + M^T), where M = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix} and \mathcal{N}(X) denotes the null space of X, is a Lyapunov stable equilibrium point in the pth mean for (26) and (27) with respect to \bar{\mathbb{R}}_+^2. Then (26) and (27) is multistable in the pth mean and has a multistable expectation with respect to \bar{\mathbb{R}}_+^2.

Proof: Consider the function V(S) = (1/2) S^T K S, S \in \bar{\mathbb{R}}_+^2, where K \in \mathbb{R}^{2 \times 2} is to be determined, and note that

\dot{V}(S(t)) = K_{11} \bar{S}^E(t) \dot{\bar{S}}^E(t) + K_{22} \bar{S}^I(t) \dot{\bar{S}}^I(t) + \frac{1}{2}(K_{12} + K_{21}) \dot{\bar{S}}^E(t) \bar{S}^I(t) + \frac{1}{2}(K_{12} + K_{21}) \bar{S}^E(t) \dot{\bar{S}}^I(t)
= -\frac{K_{11}}{\lambda^E} (\bar{S}^E(t))^2 + K_{11} f\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) \bigr) \bar{S}^E(t) - \frac{K_{22}}{\lambda^I} (\bar{S}^I(t))^2 + K_{22} f\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) \bigr) \bar{S}^I(t) - \frac{K_{12} + K_{21}}{2\lambda^E} \bar{S}^E(t) \bar{S}^I(t) + \frac{1}{2}(K_{12} + K_{21}) f\bigl( n_E \bar{A}^{EE} \bar{S}^E(t) + n_I \bar{A}^{EI} \bar{S}^I(t) \bigr) \bar{S}^I(t) - \frac{K_{12} + K_{21}}{2\lambda^I} \bar{S}^I(t) \bar{S}^E(t) + \frac{1}{2}(K_{12} + K_{21}) f\bigl( n_E \bar{A}^{IE} \bar{S}^E(t) + n_I \bar{A}^{II} \bar{S}^I(t) \bigr) \bar{S}^E(t).

Now, since, by Proposition 1, n_E \bar{A}^{EE} \bar{S}^E + n_I \bar{A}^{EI} \bar{S}^I \geq 0 and n_E \bar{A}^{IE} \bar{S}^E + n_I \bar{A}^{II} \bar{S}^I \geq 0, it follows that

\frac{e^{\gamma(n_E \bar{A}^{EE} \bar{S}^E + n_I \bar{A}^{EI} \bar{S}^I)}}{1 + e^{\gamma(n_E \bar{A}^{EE} \bar{S}^E + n_I \bar{A}^{EI} \bar{S}^I)}} \leq 1,   \frac{e^{\gamma(n_E \bar{A}^{IE} \bar{S}^E + n_I \bar{A}^{II} \bar{S}^I)}}{1 + e^{\gamma(n_E \bar{A}^{IE} \bar{S}^E + n_I \bar{A}^{II} \bar{S}^I)}} \leq 1.

Since z is stochastically Lyapunov stable there exists an open neighborhood \mathcal{B}_\delta(z), where \delta = \delta(\varepsilon_1, \varepsilon_2) > 0, such that, for every S(0) \in \mathcal{B}_\delta(z), P[\sup_{t \geq 0} \|S(t) - z\| \geq \varepsilon_1] < \varepsilon_2, and hence, P[\sup_{t \geq 0} \|S(t) - z\| < \varepsilon_1] \geq 1 - \varepsilon_2. Now, since z belongs to the positive limit set of S(\cdot), it follows that there exists a divergent sequence \{t_i\}_{i=1}^{\infty} in [0, \infty) such that P[\lim_{i \to \infty} S(t_i) = z] = 1, and hence, for every \varepsilon_3, \varepsilon_4 > 0, there exists k = k(\varepsilon_3) \geq 1 such that P[\sup_{i \geq k} \|S(t_i) - z\| > \varepsilon_3] < \varepsilon_4 or, equivalently, P[\sup_{i \geq k} \|S(t_i) - z\| < \varepsilon_3] \geq 1 - \varepsilon_4. Next, note that P[\sup_{t \geq t_k} \|S(t) - z\| < \varepsilon_1] \geq P[\sup_{t \geq 0} \|S(t) - z\| < \varepsilon_1]. It now follows that

P\Bigl[\sup_{t \geq t_k} \|S(t) - z\| < \varepsilon_1\Bigr] \geq P\Bigl[\sup_{t \geq t_k} \|S(t) - z\| < \varepsilon_1 \,\Big|\, \sup_{i \geq k} \|S(t_i) - z\| < \varepsilon_3\Bigr] P\Bigl[\sup_{i \geq k} \|S(t_i) - z\| < \varepsilon_3\Bigr] \geq (1 - \varepsilon_2)(1 - \varepsilon_4)

where P[\cdot\,|\,\cdot] denotes conditional probability. Since \varepsilon_1, \varepsilon_2, and \varepsilon_4 were chosen arbitrarily, it follows that P[z = \lim_{t \to \infty} S(t)] = 1. Thus, P[\lim_{n \to \infty} S(t_n) = z] = 1 for every divergent sequence \{t_n\}_{n=1}^{\infty}, and hence, the positive limit set of S(\cdot) is \{z\}; that is,

for every S(0) \in \mathbb{R}^2, there exists \alpha = \alpha(\omega) \in \mathcal{E}, \omega \in \Omega, such that P[\lim_{t \to \infty} S(t) = \alpha(\omega)] = 1.

Next, recall from [34] that a matrix \tilde{A} \in \mathbb{R}^{n \times n} is semistable if and only if \lim_{t \to \infty} e^{\tilde{A}t} exists. In other words, \tilde{A} is semistable if and only if for every \lambda \in \mathrm{spec}(\tilde{A}), \lambda = 0 or \mathrm{Re}\,\lambda < 0 and if \lambda = 0, then 0 is semisimple. Furthermore, if \tilde{A} is semistable, then the index of \tilde{A} is zero or one, and hence, \tilde{A} is group invertible. The group inverse \tilde{A}^{\#} of \tilde{A} is a special case of the Drazin inverse \tilde{A}^{D} in the case where \tilde{A} has index zero or one [34]. In this case, \lim_{t \to \infty} e^{\tilde{A}t} = I_n - \tilde{A}\tilde{A}^{\#} [34].

Proposition 5: If \tilde{A} is semistable, then, for sufficiently small |\nu|, the dynamical system (32) is globally stochastically semistable.

Proof: First, note that the solution to (32) is given by

S(t) = e^{\tilde{A}t} S(0) + \int_0^t e^{\tilde{A}(t-s)} \tilde{A}_s S(s)\, dw(s),   t \geq 0.    (33)

Since \tilde{A} is semistable, it follows that \lim_{t \to \infty} e^{\tilde{A}t} S(0) exists. In this case, let S_\infty = \lim_{t \to \infty} e^{\tilde{A}t} S(0). Furthermore, note that S_\infty = (I_2 - \tilde{A}\tilde{A}^{\#}) S(0) \in \mathcal{N}(\tilde{A}) [34], where \tilde{A}^{\#} denotes the group inverse of \tilde{A}. Next, note that \int_0^t e^{\tilde{A}(t-s)} \tilde{A}_s S(s)\, dw(s) is an Itô integral and let \|\cdot\| denote the Euclidean norm on \mathbb{R}^2. Then, it follows from Property e) of Theorem 4.4.14 of [30, p. 73] that

E\Bigl[\Bigl\| \int_0^t e^{\tilde{A}(t-s)} \tilde{A}_s S(s)\, dw(s) \Bigr\|^2\Bigr] = \int_0^t E\bigl[\| e^{\tilde{A}(t-s)} \tilde{A}_s S(s) \|^2\bigr] ds = \nu^2 \int_0^t E\bigl[\| e^{\tilde{A}(t-s)} \tilde{A} S(s) \|^2\bigr] ds = \nu^2 \int_0^t E\bigl[\| (e^{\tilde{A}(t-s)} - (I_2 - \tilde{A}\tilde{A}^{\#})) \tilde{A} S(s) \|^2\bigr] ds = \nu^2 \int_0^t E\bigl[\| (e^{\tilde{A}(t-s)} - (I_2 - \tilde{A}\tilde{A}^{\#})) \tilde{A} (S(s) - S_\infty) \|^2\bigr] ds    (34)

where E[\,\cdot\,] denotes expectation with respect to the probability space (\Omega, \mathcal{F}, P).

Next, define e(t) \triangleq e^{\tilde{A}t} S(0) - (I_2 - \tilde{A}\tilde{A}^{\#}) S(0) = e^{\tilde{A}t} S(0) - S_\infty. Then it follows from the semistability of \tilde{A} that \lim_{t \to \infty} e(t) = 0. Since \dot{e}(t) = \tilde{A} e(t) for every t \geq 0, it follows from the equivalence of (uniform) asymptotic stability and (uniform) exponential stability for linear time-invariant systems [35] that there exist real scalars \sigma, r > 0 such that \|e(t)\| \leq \sigma e^{-rt} \|e(0)\|, t \geq 0, or, equivalently, \|[e^{\tilde{A}t} - (I_2 - \tilde{A}\tilde{A}^{\#})] S(0)\| \leq \sigma e^{-rt} \|\tilde{A}\tilde{A}^{\#} S(0)\|, t \geq 0. Hence

\| e^{\tilde{A}t} - (I_2 - \tilde{A}\tilde{A}^{\#}) \| = \max_{S(0) \in \mathbb{R}^2 \setminus \{0\}} \frac{\|[e^{\tilde{A}t} - (I_2 - \tilde{A}\tilde{A}^{\#})] S(0)\|}{\|S(0)\|} \leq \sigma e^{-rt} \max_{S(0) \in \mathbb{R}^2 \setminus \{0\}} \frac{\|\tilde{A}\tilde{A}^{\#} S(0)\|}{\|S(0)\|} = \sigma e^{-rt} \|\tilde{A}\tilde{A}^{\#}\|,   t \geq 0    (35)

where \|\cdot\| = \sigma_{\max}(\cdot) and \sigma_{\max}(\cdot) denotes the maximum singular value. Thus, (35) implies

\| e^{\tilde{A}t} - (I_2 - \tilde{A}\tilde{A}^{\#}) \| \leq \rho e^{-rt},   t \geq 0    (36)

where \rho \triangleq \sigma \|\tilde{A}\tilde{A}^{\#}\|. Next, it follows from (34) and (36) that

E\Bigl[\Bigl\| \int_0^t e^{\tilde{A}(t-s)} \tilde{A}_s S(s)\, dw(s) \Bigr\|^2\Bigr] \leq \nu^2 \rho^2 \|\tilde{A}\|^2 \int_0^t e^{-2r(t-s)} E\bigl[\|S(s) - S_\infty\|^2\bigr] ds,   t \geq 0.    (37)

Now, it follows from (33) and (37), and the triangle inequality that

E\bigl[\|S(t) - S_\infty\|^2\bigr] \leq \|e^{\tilde{A}t} S(0) - S_\infty\|^2 + \nu^2 \rho^2 \|\tilde{A}\|^2 \int_0^t e^{-2r(t-s)} E\bigl[\|S(s) - S_\infty\|^2\bigr] ds \leq \|e^{\tilde{A}t} S(0) - S_\infty\|^2 + \nu^2 \rho^2 \|\tilde{A}\|^2 e^{-2rt} \int_0^t e^{2rs} E\bigl[\|S(s) - S_\infty\|^2\bigr] ds,   t \geq 0

and hence

e^{2rt} E\bigl[\|S(t) - S_\infty\|^2\bigr] \leq e^{2rt} \|e^{\tilde{A}t} S(0) - S_\infty\|^2 + \nu^2 \rho^2 \|\tilde{A}\|^2 \int_0^t e^{2rs} E\bigl[\|S(s) - S_\infty\|^2\bigr] ds,   t \geq 0.

Hence, it follows from the Gronwall-Bellman lemma [36, p. 125] that

e^{2rt} E\bigl[\|S(t) - S_\infty\|^2\bigr] \leq e^{2rt} \|e^{\tilde{A}t} S(0) - S_\infty\|^2 + \nu^2 \rho^2 \|\tilde{A}\|^2 \int_0^t e^{2rs} \|e^{\tilde{A}s} S(0) - S_\infty\|^2 e^{\nu^2 \rho^2 \|\tilde{A}\|^2 (t-s)} ds

or, equivalently, for \nu \neq 0,

E\bigl[\|S(t) - S_\infty\|^2\bigr] \leq \|e^{\tilde{A}t} S(0) - S_\infty\|^2 + \nu^2 \rho^2 \|\tilde{A}\|^2 \int_0^t e^{-2r(t-s)} \|e^{\tilde{A}s} S(0) - S_\infty\|^2 e^{\nu^2 \rho^2 \|\tilde{A}\|^2 (t-s)} ds \leq \|e^{\tilde{A}t} S(0) - S_\infty\|^2 + \nu^2 \rho^4 \|\tilde{A}\|^2 \|S(0)\|^2 e^{-(2r - \nu^2 \rho^2 \|\tilde{A}\|^2) t} \int_0^t e^{-\nu^2 \rho^2 \|\tilde{A}\|^2 s} ds = \|e^{\tilde{A}t} S(0) - S_\infty\|^2 + \rho^2 \|S(0)\|^2 e^{-(2r - \nu^2 \rho^2 \|\tilde{A}\|^2) t} \bigl(1 - e^{-\nu^2 \rho^2 \|\tilde{A}\|^2 t}\bigr),   t \geq 0.

Taking |\nu| to be such that

\nu^2 \rho^2 \|\tilde{A}\|^2 < 2r    (38)

it follows that \lim_{t \to \infty} e^{-(2r - \nu^2 \rho^2 \|\tilde{A}\|^2) t} = 0. In this case, \lim_{t \to \infty} E[\|S(t) - S_\infty\|^2] = 0, that is, S(t), t \geq 0, converges to S_\infty in the mean square. Finally, by Theorem 7.6.10 of [37] or [30, p. 187] (Khasminskiy's theorem), for every initial condition S(0) \in \mathbb{R}^2 and every \varepsilon > 0, we have

P\Bigl[\sup_{0 \leq t < \infty} \|S(t) - S_\infty\| \geq \varepsilon\Bigr] \leq \frac{1}{\varepsilon^2} E\bigl[\|S(0) - S_\infty\|^2\bigr]

and P[\lim_{t \to \infty} S(t) \text{ exists}] = 1. Thus, the dynamical system (32) is globally stochastically semistable.

Remark 1: If \tilde{A} is semistable, then there exists an invertible transformation matrix T \in \mathbb{R}^{2 \times 2} such that T \tilde{A} T^{-1} = \mathrm{diag}[-\lambda, 0], where \lambda \in \mathrm{spec}(\tilde{A}) and \lambda > 0. In this case, defining the new coordinates [\hat{S}_1(t), \hat{S}_2(t)]^T \triangleq T S(t), (32) yields the two decoupled stochastic differential equations given by

d\hat{S}_1(t) = -\lambda \hat{S}_1(t)\, dt - \nu\lambda \hat{S}_1(t)\, dw(t)    (39)

d\hat{S}_2(t) = 0.    (40)

Since the analytical solution to (39) is given by \hat{S}_1(t) = \hat{S}_1(0) e^{-\lambda(1 + \frac{1}{2}\lambda\nu^2) t - \nu\lambda w(t)}, it follows that

S(t) = T^{-1} \hat{S}(t) = T^{-1} \begin{bmatrix} \hat{S}_1(0) e^{-\lambda(1 + \frac{1}{2}\lambda\nu^2) t - \nu\lambda w(t)} \\ \hat{S}_2(0) \end{bmatrix}.

Finally, we provide a sufficient condition for stochastic multistability for the dynamical system (32). For this result, the following lemma is first needed.

Lemma 1: Let \tilde{A} \in \mathbb{R}^{n \times n}. If there exist n \times n matrices P = P^T \geq 0 and R = R^T \geq 0, and a nonnegative integer k such that

(\tilde{A}^k)^T (\tilde{A}^T P + P \tilde{A} + R) \tilde{A}^k = 0    (41)

k = \min\Bigl\{ l \in \mathbb{Z}_+ : \bigcap_{i=1}^{n} \mathcal{N}(R \tilde{A}^{i+l-1}) = \mathcal{N}(\tilde{A}) \Bigr\}    (42)

then 1) \mathcal{N}(P \tilde{A}^k) \subseteq \mathcal{N}(\tilde{A}) \subseteq \mathcal{N}(R \tilde{A}^k) and 2) \mathcal{N}(\tilde{A}) \cap \mathcal{R}(\tilde{A}) = \{0\}.

Proof: The proof is similar to that of Lemma 4.5 of [38] and, hence, is omitted.

Theorem 2: Consider the dynamical system (32). Suppose there exist 2 \times 2 matrices P = P^T \geq 0 and R = R^T \geq 0, and a nonnegative integer k such that (41) and (42) hold with n = 2. If \mathcal{N}(A - L) \setminus \{(0, 0)\} \neq \emptyset and |\nu| is sufficiently small, then the dynamical system (32) is stochastically multistable.

Proof: By Proposition 5 it suffices to show that \tilde{A} \triangleq A - L is semistable. Consider the deterministic dynamical system given by

\dot{x}(t) = \tilde{A} x(t),   x(0) = x_0,   t \geq 0.

For \lambda^I > 0.93 ms, (A - L) is unstable, whereas for 0.9 ms < \lambda^I < 0.93 ms, (A - L) is asymptotically stable. Clearly, rank(A - L) < 2 for \lambda^I = 0.9 ms. Hence, it follows from Theorem 2 that the stochastic dynamical system (32) exhibits multistability for \lambda^I = 0.9 ms. In this case, A - L is semistable and \mathcal{N}(A - L) is characterized by the direction vector [1, 0.9]^T. For our simulation, we use the initial condition S(0) = [0.1, 0.5]^T. Figs. 2 and 3 show the time response for the average excitatory and inhibitory synaptic drives, and the

phase portrait for \lambda^I = 0.9 ms with \nu = 0.2. Furthermore, for \lambda^I = 0.9 ms with \nu = 0.2, Fig. 4 shows a histogram of the limit points of \log_e \bar{S}^E (or, equivalently, \log_e 0.9 \bar{S}^I) over 10 000 samples. Note that the mean and variance of \log_e \bar{S}^E are -1.6135 and 0.3152, respectively. Similar plots to those shown in Figs. 2 and 3 are shown for \lambda^I = 0.78 ms and \lambda^I = 1.20 ms with \nu = 0.2 in Figs. 5–8. Finally, Figs. 9 and 10 show similar simulations for the case where \lambda^I = 0.9 ms (i.e., (A - L) is semistable) and \nu = 1. However, note that in this case condition (38) of Proposition 5 is not satisfied, and hence, the model exhibits instability. The trajectories of (26) and (27) can exhibit unstable and multistable behaviors for different values of the parameters, which are similar to the simulation results for (31). Moreover, its averaging dynamics will be analogous to the results of the deterministic model given in [5].

Fig. 5. State trajectories of the sample trajectories of (31) for \lambda^I = 0.78 with \nu = 0.2.
Fig. 6. Phase portrait of the sample trajectories of (31) for \lambda^I = 0.78 with \nu = 0.2.
Fig. 7. State trajectories of the sample trajectories of (31) for \lambda^I = 1.20 with \nu = 0.2.
Fig. 8. Phase portrait of the sample trajectories of (31) for \lambda^I = 1.20 with \nu = 0.2.
Fig. 9. State trajectories of the sample trajectories of (31) for \lambda^I = 0.9 with \nu = 1.
Fig. 10. Phase portrait of the sample trajectories of (31) for \lambda^I = 0.9 with \nu = 1.
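For readers who wish to reproduce this kind of sample-path behavior, the following Euler-Maruyama sketch is included as an added illustration; it is not the authors' code. It assumes the linear multiplicative-noise form dS(t) = (A - L)S(t) dt + \nu (A - L)S(t) dw(t) suggested by the solution formula (33), and the entries of A are placeholder values chosen only so that A - L is semistable with null space spanned by [1, 0.9]^T when \lambda^I = 0.9, mirroring the case discussed above; they are not the paper's actual simulation parameters.

```python
import numpy as np

def euler_maruyama(Atilde, nu, S0, dt=1e-3, T=20.0, seed=0):
    """Euler-Maruyama integration of dS = Atilde S dt + nu * Atilde S dw."""
    rng = np.random.default_rng(seed)
    S = np.array(S0, dtype=float)
    traj = [S.copy()]
    for _ in range(int(T / dt)):
        dw = rng.normal(0.0, np.sqrt(dt))            # Wiener increment
        S = S + (Atilde @ S) * dt + nu * (Atilde @ S) * dw
        traj.append(S.copy())
    return np.array(traj)

if __name__ == "__main__":
    lamE, lamI = 1.0, 0.9                            # lamI = 0.9: the semistable case
    A = np.array([[1.9, -1.0],                       # placeholder mean connectivity values
                  [1.9, -1.0]])                      # chosen so (A - L)[1, 0.9]^T = 0
    L = np.diag([1.0 / lamE, 1.0 / lamI])
    Atilde = A - L
    print("eigenvalues of A - L:", np.linalg.eigvals(Atilde))  # expect one zero, one negative
    traj = euler_maruyama(Atilde, nu=0.2, S0=[0.1, 0.5])
    print("sample-path limit:", traj[-1])
```

Repeating the integration over many noise seeds and collecting the terminal states gives a histogram of limit points along the null-space direction, which is the qualitative picture described for Fig. 4.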

VII. CONCLUSION

There has been remarkable progress in understanding the molecular properties of anesthetic agents [6]–[11], [15]–[19]. However, we have lagged behind in understanding how these molecular properties lead to the behavior of the intact organism. The concentration–response relationship for anesthetic agents is still described by empiric equations [40] rather than by relationships that reflect the underlying molecular properties of the drug. Clinicians have observed for generations that the transition from consciousness to the anesthetized state seems to be a sharp, almost all-or-none change. One fascinating possibility is that the anesthetic bifurcation to unconsciousness, or the nearly all-or-none characteristic of the induction of anesthesia, is a type of phase transition of the neural network. This possibility was first considered by Steyn-Ross et al. (see [41] and the references therein). Their focus was on the mean voltage of the cell body of the neuron. Specifically, the authors in [41] show that the biological change of state to anesthetic unconsciousness is analogous to a thermodynamic phase change involving a liquid to solid phase transition. For certain ranges of anesthetic concentrations, their first-order model predicts the existence of multiple steady states for brain activity, leading to a transition from normal levels of cerebral cortical activity to a quiescent state. In earlier research [5], we have demonstrated how the induction of anesthesia can be viewed as an example of multistability, the property whereby the solutions of a dynamical system exhibit multiple attracting equilibria under asymptotically slowly changing inputs or system parameters. In this paper, we demonstrate multistability in the mean when the synaptic drive firing model initial conditions are random variables and


we also demonstrate stochastic multistability when the mean rate system coefficients of the neuronal connectivity matrix are perturbed by a Wiener process. We postulate that the induction of anesthesia may be an example of multistability with two attracting equilibria: consciousness and hypnosis.

The philosophy of representing uncertain parameters by means of multiplicative white noise is motivated by the maximum entropy principle of Jaynes [42], [43] and statistical analysis [44]. Maximum entropy modeling is a form of stochastic modeling wherein stochastic integration is interpreted in the sense of Itô to provide a model for system parameter uncertainty. The use of stochastic theory to model system parameter uncertainty has been used within a modern information-theoretic interpretation of probability theory [42], [43], [45]. In particular, rather than regarding the probability of an event as an objective quantity such as the limiting frequency of outcomes of numerous repetitions, maximum entropy modeling adopts the view that the probability of an event is a subjective quantity, which reflects the observer's certainty about a particular event occurring. This quantity corresponds to a measure of information. The validity of a stochastic model for a biological neural network does not rely on the existence of an ensemble model but rather on the interpretation that it expresses modeling certainty or uncertainty regarding the coefficients of the neuronal connectivity matrix.

The stability of neural network models of the brain has been the subject of multiple investigations and we refer the reader to selected references [24]–[28], [46] and especially Chapter 8 of [3] (which provides multiple other references). More recently, Cessac [47] has reviewed neural network models for the brain as dynamical systems. In particular, Cessac [48] has derived rigorous results, with minimal simplifying assumptions, for a discrete-time neural network model and has offered strong arguments for the use of a discrete-time model. In contrast to both prior investigations and to the results in [47] and [48], in this paper we present Lyapunov-based tests for multistability in stochastic systems having a continuum of equilibria. The results in this paper are predicated on a mean field assumption that reduces the complex (approximately 10^11 × 10^11) neuronal connectivity matrix to a 2 × 2 excitatory/inhibitory system. Although this is a drastic assumption, it has been commonly used in theoretical neuroscience going back to the pioneering work of Wilson and Cowan [22].

ACKNOWLEDGMENT

The authors would like to thank H. Li for carrying out some of the numerical calculations in Section VI. Tomohisa Hayakawa is grateful to the School of Aerospace Engineering at Georgia Tech for their hospitality during the month of March 2012.

REFERENCES

[1] L. Lapicque, "Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation," J. Physiol. Gen., vol. 9, pp. 620–635, Jan. 1907. [2] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," J. Physiol., vol. 117, no. 4, pp. 500–544, 1952.

[3] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, MA, USA: MIT Press, 2005. [4] B. Ermentrout and D. H. Terman, Mathematical Foundations of Neuroscience. New York, NY, USA: Springer-Verlag, 2010. [5] Q. Hui, W. M. Haddad, and J. M. Bailey, “Multistability, bifurcations, and biological neural networks: A synaptic drive firing model for cerebral cortex transition in the induction of general anesthesia,” Nonlinear Anal., Hybrid Syst., vol. 5, no. 3, pp. 554–573, Dec. 2011. [6] G. A. Mashour, “Consciousness unbound: Toward a paradigm of general anesthesia,” Anesthesiology, vol. 100, no. 2, pp. 428–433, 2004. [7] A. Y. Zecharia and N. P. Franks, “General anesthesia and ascending arousal pathways,” Anesthesiology, vol. 111, no. 4, pp. 695–696, 2009. [8] J. M. Sonner, J. F. Antognini, R. C. Dutton, P. Flood, A. T. Gray, R. A. Harris, G. E. Homanics, J. Kendig, B. Orser, D. E. Raines, J. Trudell, B. Vissel, and E. I. Eger, “Inhaled anesthetics and immobility: Mechanisms, mysteries, and minimum alveolar anesthetic concentration,” Anesthesia Analgesia, vol. 97, no. 3, pp. 718–740, 2003. [9] J. A. Campagna, K. W. Miller, and S. A. Forman, “Mechanisms of actions of inhaled anesthetics,” New England J. Med., vol. 348, no. 21, pp. 2110–2124, 2003. [10] E. R. John and L. S. Prichep, “The anesthetic cascade: A theory of how anesthesia suppresses consciousness,” Anesthesiology, vol. 102, no. 2, pp. 447–471, 2005. [11] S. Hameroff, “The entwined mysteries of anesthesia and consciousness: Is there a common underlying mechanism?” Anesthesiology, vol. 105, no. 2, pp. 400–412, 2006. [12] J. Vuyk, T. Lim, F. H. M. Engbers, A. G. L. Burm, A. A. Vietter, and J. G. Bovill, “Pharmacodynamics of Alfentanil as a supplement to propofol of nitrous oxide for lower abdominal surgery in female patients,” Anesthesiology, vol. 78, no. 6, pp. 1936–1945, 1993. [13] E. Overton, Studien über die Narkose Zugleich ein Beitrag zur Allgemeinen Pharmakologie. New York, NY, USA: Fischer, 1901. [14] H. Meyer, “Welche eigenschaft der anesthetica bedingt ihre narkotische wirkung?” Archiv Für Experim. Pathol. Und Pharmakol., vol. 42, no. 1, pp. 109–118, 1889. [15] I. Ueda, “Molecular mechanisms of anesthesia,” Anesth. Analgesia, vol. 63, no. 10, pp. 929–945, 2003. [16] N. P. Franks and W. R. Lieb, “Molecular and cellular mechanisms of general anesthesia,” Nature, vol. 367, no. 6464, pp. 607–614, 1994. [17] C. North and D. S. Cafiso, “Contrasting memebrane localization and behavior of halogenated cyclobutanes that follow or violate the MeyerOverton hypothesis of general anesthetic potency,” Biophys. J., vol. 72, no. 4, pp. 1754–1761, 1997. [18] A. Kitamura, W. Marszalec, J. Z. Yeh, and T. Narahishi, “Effects of halothane and propofol on excitatory and inhibitory synaptic transmission in rat cortical neurons,” J. Pharmacol., vol. 304, no. 1, pp. 162–171, 2002. [19] A. Hutt and A. Longtin, “Effects of the anesthetic agent propofol on neural populations,” Cognit. Neurodyn., vol. 4, no. 1, pp. 37–59, 2009. [20] B. Gutkin, D. Pinto, and B. Ermentrout, “Mathematical neuroscience: From neurons to circuits to systems,” J. Physiol., Paris, vol. 97, nos. 2–3, pp. 209–219, 2003. [21] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge, U.K.: Cambridge Univ. Press, 2002. [22] H. R. Wilson and J. D. Cowan, “Excitatory and inhibitory interactions in localized populations of model neurons,” Biophys. J., vol. 12, no. 1, pp. 
1–24, 1972. [23] S. Amari, K. Yoshida, and K. Kanatani, “A mathematical foundation for statistical neurodynamics,” SIAM J. Appl. Math., vol. 33, no. 1, pp. 95–126, 1977. [24] H. Sompolinsky, A. Crisanti, and H. Sommers, “Chaos in random neural networks,” Phys. Rev. Lett., vol. 61, no. 3, pp. 259–262, 1988. [25] D. J. Amit, H. Gutfreund, and H. Sompolinsky, “Spin-glass models of neural networks,” Phys. Rev. A, vol. 32, no. 2, pp. 1007–1018, 1985. [26] D. J. Amit and N. Brunel, “Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex,” Cerebral Cortex, vol. 7, no. 3, pp. 237–252, 1997. [27] N. Brunel and V. Hakim, “Fast global oscillations in networks of integrate-and-fire neurons with low firing rates,” Neural Comput., vol. 11, no. 7, pp. 1621–1671, 1999.

HUI et al.: STOCHASTIC MEAN FIELD MODEL FOR NEURONAL NETWORKS

[28] W. Gerstner, “Time structure of the activity in neural network models,” Phys. Rev. E, vol. 51, no. 1, pp. 738–758, 1995. [29] W. M. Haddad, V. Chellaboina, and Q. Hui, Nonnegative and Compartmental Dynamical Systems. Princeton, NJ, USA: Princeton Univ. Press, 2010. [30] L. Arnold, Stochastic Differential Equations: Theory and Applications. New York, NY, USA: Wiley, 1974. [31] J. Zhou and Q. Wang, “Stochastic semistability with application to agreement problems over random networks,” in Proc. Amer. Control Conf., 2010, pp. 568–573. [32] Q. Hui, W. M. Haddad, and S. P. Bhat, “Finite-time semistability and consensus for nonlinear dynamical networks,” IEEE Trans. Autom. Control, vol. 53, no. 8, pp. 1887–1900, Sep. 2008. [33] Q. Hui, W. M. Haddad, and S. P. Bhat, “Semistability, finite-time stability, differential inclusions, and discontinuous dynamical systems having a continuum of equilibria,” IEEE Trans. Autom. Control, vol. 54, no. 10, pp. 2465–2470, Oct. 2009. [34] D. S. Bernstein, Matrix Mathematics, 2nd ed. Princeton, NJ, USA: Princeton Univ. Press, 2009. [35] J. Shen, J. Hu, and Q. Hui, “Semistability of switched linear systems with applications to sensor networks: A generating function approach,” in Proc. IEEE Conf. Decision Control, Dec. 2011, pp. 8044–8049. [36] W. M. Haddad and V. Chellaboina, Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach. Princeton, NJ, USA: Princeton Univ. Press, 2008. [37] R. B. Ash, Real Analysis and Probability. New York, NY, USA: Academic, 1972. [38] Q. Hui, “Optimal semistable control for continuous-time linear systems,” Syst. Control Lett., vol. 60, no. 4, pp. 278–284, 2011. [39] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences. New York, NY, USA: Academic, 1979. [40] J. M. Bailey and W. M. Haddad, “Drug dosing control in clinical pharmacology: Paradigms, benefits, and challenges,” IEEE Control Syst. Mag., vol. 25, no. 1, pp. 35–51, Jun. 2005. [41] M. L. Steyn-Ross, D. A. Steyn-Ross, and J. W. Sleigh, “Modelling general anesthesia as a first-order phase transition in the cortex,” Progr. Biophys. Molecular Biol., vol. 85, nos. 2–3, pp. 369–385, 2004. [42] E. T. Jaynes, “Information theory and statistical mechanics,” Phys. Rev., vol. 106, no. 4, pp. 620–630, 1957. [43] E. T. Jaynes, “Prior probabilities,” IEEE Trans. Sys. Sci. Cybern., vol. SSC-4, no. 3, pp. 227–241, Sep. 1968. [44] R. H. Lyon, Statistical Energy Analysis of Dynamical Systems: Theory and Applications. Cambridge, MA, USA: MIT Press, 1975. [45] R. D. Rosenkrantz and E. T. Jaynes, Papers on Probability, Statistics and Statistical Physics. Boston, MA, USA: Reidel, 1983. [46] C. A. van Vreeswijk and H. Sompolinsky, “Chaos in neuronal networks with balance excitatory and inhibitory activity,” Science, vol. 274, no. 5293, pp. 1724–1726, 1996. [47] B. Cessac, “A view of neural networks as dynamical systems,” Int. J. Bifurcations Chaos, vol. 20, no. 6, pp. 1585–1629, 2010. [48] B. Cessac, “A discrete-time neural network model with spiking neurons. Rigorous results on the spontaneous dynamics,” J. Math. Biol., vol. 56, no. 3, pp. 311–345, 2008.

Qing Hui (S’01–M’08) received the B.Eng. degree in aerospace engineering from the National University of Defense Technology, Changsha, China, in 1999, the M. Eng. degree in automotive engineering from Tsinghua University, Beijing, China, in 2002, and the M.S. degree in applied mathematics and the Ph.D. degree in aerospace engineering from the Georgia Institute of Technology, Atlanta, GA, USA, in 2005 and 2008, respectively. He joined the Department of Mechanical Engineering, Texas Tech University, Lubbock, TX, USA, in 2008, where he is currently an Assistant Professor. He is co-author of the book “Nonnegative and Compartmental Dynamical Systems” (Princeton, NJ: Princeton University Press, 2010). His current research interests include network robustness and vulnerability analysis, consensus, synchronization and control of network systems, network optimization, network interdependency and cascading failures, swarm optimization, hybrid systems, biomedical systems, and scientific computing.

763

Wassim M. Haddad (S’87–M’87–SM’01–F’09) received the B.S., M.S., and Ph.D. degrees in mechanical engineering from Florida Tech, Melbourne, FL, USA, in 1983, 1984, and 1987, respectively. In 1988 he joined the faculty of the Mechanical and Aerospace Engineering Department at Florida Tech. Since 1994, he has been with the School of Aerospace Engineering, Georgia Tech, where he holds the rank of Professor, the Andrew and David Lewis Chair in Dynamical Systems and Control, and serves as Chair of the Flight Mechanics and Control Discipline. He also holds joint Professor appointments with the School of Biomedical Engineering and Electrical and Computer Engineering, Georgia Tech, Atlanta, GA, USA. His interdisciplinary research contributions in systems and control are documented in more than 540 archival journal and conference publications, and seven books in the areas of science, mathematics, medicine, and engineering. His current research interests include the nonlinear robust and adaptive control, nonlinear systems, large-scale systems, hierarchical control, impulsive and hybrid systems, system thermodynamics, network systems, system biology, mathematical neuroscience, the history of science and mathematics, as well as natural philosophy. Dr. Haddad is an NSF Presidential Faculty Fellow in recognition for his demonstrated excellence and continued promise in scientific and engineering research; a member of the Academy of Nonlinear Sciences for contributions to nonlinear stability theory, dynamical systems, and control; and an IEEE Fellow for contributions to robust, nonlinear, and hybrid control systems. He has received numerous outstanding research scholar awards including an Outstanding Alumni Achievement Award for his contributions to nonlinear dynamical systems and control, and recognition for outstanding contributions to joint university and industry programs. James M. Bailey received the B.S. degree from Davidson College, St, Davidson, NC, USA, in 1969, the Ph.D. degree in chemistry from the University of North Carolina, Chapel Hill, NC, USA, in 1973, and the M.D. degree from the Southern Illinois University School of Medicine, Springfield, IL, USA, in 1982. He was a Helen Hay Whitney Fellow with the California Institute of Technology, Pasadena, CA, USA, from 1973 to 1975, and an Assistant Professor of chemistry and biochemistry with Southern Illinois University, USA, from 1975 to 1979. He completed a residency in anesthesiology and then a fellowship in cardiac anesthesiology with the Emory University School of Medicine, Atlanta, GA, USA. From 1986 to 2002, he was an Assistant Professor of anesthesiology and then Associate Professor of anesthesiology with Emory, where he also served as the Director of the critical care service. In September 2002, he moved his clinical practice to Northeast Georgia Medical Center, Gainesville, GA, USA, as the Director of cardiac anesthesia and Consultant in critical care medicine. He has served as Chief Medical Officer of Northeast Georgia Health Systems, Gainesville, GA, USA, since 2008. He is board certified in anesthesiology, critical care medicine, and transesophageal echocardiography. His current research interests include pharmacokinetic and pharmacodynamic modeling of anesthetic and vasoactive drugs and, more recently, applications of dynamical system theory in medicine. He is the author or co-author of more than 100 journal articles, conference publications, or book chapters. Tomohisa Hayakawa (S’00–M’04) received the B.Eng. 
degree in aeronautical engineering from the Kyoto University, Kyoto, Japan, in 1997, the M.S. degree in aerospace engineering from the State University of New York, Buffalo, NY, USA, in 1999, and the M.S. degree in applied mathematics and the Ph.D. degree in aerospace engineering both from the Georgia Institute of Technology, Atlanta, GA, USA, in 2001 and 2003, respectively. He has served as a Research Fellow with the Department of Aeronautics and Astronautics, Kyoto University, and the Japan Science and Technology Agency. He joined the Department of Mechanical and Environmental Informatics, the Tokyo Institute of Technology, Tokyo, Japan, in 2006, where he is an Associate Professor. His current research interests include the stability of nonlinear systems, nonnegative and compartmental systems, hybrid systems, nonlinear adaptive control, neural networks and intelligent control, stochastic dynamical systems, and applications to aerospace vehicles, multiagent systems, robotic systems, financial dynamics, and biological/biomedical systems.
