Robust Exponential Stability of Uncertain Stochastic Neural Networks With Distributed Delays and Reaction-Diffusions

Jianping Zhou, Shengyuan Xu, Baoyong Zhang, Yun Zou, and Hao Shen

Abstract—This paper considers the problem of stability analysis for uncertain stochastic neural networks with distributed delays and reaction-diffusions. Two sufficient conditions for the robust exponential stability in the mean square of the given network are developed by using a Lyapunov–Krasovskii functional, an integral inequality, and some analysis techniques. The conditions, which are expressed by linear matrix inequalities, can be easily checked. Two simulation examples are given to demonstrate the reduced conservatism of the proposed conditions.

Index Terms—Distributed delays, reaction-diffusions, robust stability, stochastic neural networks.

Manuscript received September 4, 2011; accepted May 15, 2012. Date of publication June 29, 2012; date of current version August 1, 2012. This work was supported in part by the NSFC under Grant 61074043, Grant 61104117, and Grant 61174038, the Fundamental Research Funds for the Central Universities under Grant NUST 2011ZDJH06, the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20113219110026, the 333 Project under Grant BRA2011143, and the Qing Lan Project.

J. Zhou is with the School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China, and also with the School of Computer Science, Anhui University of Technology, Anhui 243002, China (e-mail: [email protected]).

S. Xu, B. Zhang, and Y. Zou are with the School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China (e-mail: [email protected]; [email protected]; [email protected]).

H. Shen is with the School of Electrical Engineering and Information, Anhui University of Technology, Anhui 243002, China (e-mail: [email protected]).

Digital Object Identifier 10.1109/TNNLS.2012.2203360

I. INTRODUCTION

In the past two decades, there has been a steady increase in interest in neural networks due to their potential applications in associative memory, optimization computation, online identification, and other engineering fields [1]–[5]. In these applications, the stability of neural networks is a prerequisite. For instance, when neural networks are used as associative memories, the equilibrium points of the designed networks stand for a set of stored patterns, and the stability of each equilibrium point means that each stored pattern can be reliably retrieved [6]. In electronic implementations of neural networks, time delays and reaction-diffusions cannot be avoided [7], [8], and they may induce undesirable effects such as performance degradation or even loss of stability. Hence, the stability analysis of neural networks with time delays and reaction-diffusions has received considerable attention in recent years, and many stability conditions have been reported in the literature; see [9]–[14]. These conditions can be classified into two categories, namely, diffusion-dependent ones and diffusion-independent ones. The former make use of information concerning the reaction-diffusions and are generally less conservative than the latter.

In practice, the weights of neurons depend on certain resistance and capacitance values which are subject to uncertainties. It is therefore important to ensure the stability of neural networks in the presence of parameter uncertainties. For this reason, the robust stability problem of uncertain delayed reaction-diffusion neural networks was considered in [15] and [16]. However, we note that the methods in both [15] and [16] involve the absolute values of the weights and thus ignore the difference between the excitatory and inhibitory effects of the neurons [17]. This makes the results in [15] and [16] inevitably conservative. On the other hand, Haykin [18] pointed out that, in real nervous systems, synaptic transmission is a noisy process brought on by random fluctuations from the release of neurotransmitters and from other probabilistic causes. Faisal et al. [19] stated further that noise poses a fundamental problem for information processing and affects all aspects of nervous-system function. Hence, it is also important to study the effects of noise perturbations on the dynamics of neural networks. It was in this spirit that the stability of stochastic delayed reaction-diffusion neural networks began to attract the interest of researchers [20]–[22]. Unfortunately, the methods in [20]–[22] have the same shortcomings as those in [15] and [16].

In this paper, we focus our attention on the problem of stability analysis for uncertain stochastic delayed reaction-diffusion neural networks. In the network considered here, the time delays are assumed to be distributed over an unbounded interval as in [23], and the boundary condition is assumed to be of Dirichlet type as in [21], [15], and [14]. By using a Lyapunov–Krasovskii functional, an integral inequality, and some analysis techniques, two conditions for the robust exponential stability in the mean square of the given network are developed, which are expressed in terms of linear matrix inequalities (LMIs). Then, in the case when there are no noise perturbations in the network, two robust exponential stability conditions are established. The proposed conditions are diffusion-dependent due to the use of the new integral inequality, which is clearly more accurate than the Poincaré-type inequality in [12]. The signs of the weights in these conditions are the same as those in the neural networks. As a result, the obtained conditions may have some theoretical advantages over those previously reported. In order to illustrate this more specifically, two simulation examples are provided.



Notation: Throughout this paper, for a real symmetric matrix X, the notation r_min(X) denotes the minimum eigenvalue of X, and X \ge 0 (respectively, X > 0) means that the matrix X is positive semidefinite (respectively, positive definite). The superscript T stands for matrix transposition. R^n represents the n-dimensional Euclidean space; e_i, i = 1, \ldots, n, represents the n-dimensional column vector with the i-th element equal to 1 and 0 elsewhere; \tilde e_i, i = 1, \ldots, n^2, denotes the n^2-dimensional column vector with the i-th element equal to 1 and 0 elsewhere. C^m(W) represents the family of continuously m-times differentiable real-valued functions defined on a domain W. E\{\cdot\} denotes the expectation operator with respect to some probability measure. Let \Omega = \{x = [x_1 \cdots x_m]^T : |x_i| < l_i, i = 1, \ldots, m\} be an open set in R^m, \partial\Omega be the boundary of \Omega, and \bar\Omega be the closure of \Omega. Then, for any vector function f(t,x) = (f_1(t,x), \ldots, f_n(t,x))^T with f_i(t,x) \in C((-\infty,+\infty) \times \bar\Omega) (i = 1, \ldots, n), we define \|f_i(t,x)\|_2 = [\int_\Omega f_i^2(t,x)\,dx]^{1/2} and \|f(t,x)\| = (\sum_{i=1}^{n} \|f_i(t,x)\|_2^2)^{1/2}.

II. PRELIMINARIES

Representing by u_i(t,x) (i = 1, \ldots, n) the state of the i-th neuron at time t and space point x = [x_1 \cdots x_m]^T, the neural network model with distributed delays and reaction-diffusions can be described by

\frac{\partial u_i(t,x)}{\partial t} = \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( a_{ik} \frac{\partial u_i(t,x)}{\partial x_k} \Big) - b_i u_i(t,x) + \sum_{j=1}^{n} c_{ij} f_j(u_j(t,x)) + \sum_{j=1}^{n} d_{ij} \int_{-\infty}^{t} K_{ij}(t-s) f_j(u_j(s,x))\,ds + I_i, \quad (t,x) \in [0,+\infty) \times \Omega, \ i = 1, \ldots, n   (1)

where a_{ik} > 0 represents the transmission diffusion operator, b_i > 0 is the neuron charging time constant, c_{ij} and d_{ij} are the connection weights of the neurons, and f_j stands for the neuron activation function. I_i represents a constant external input, and the delay kernel K_{ij} is a real-valued nonnegative continuous function that satisfies \int_0^{+\infty} K_{ij}(s)\,ds = 1 and \int_0^{+\infty} e^{\mu s} K_{ij}(s)\,ds < \infty for some positive constant \mu.

For notational convenience, we rewrite (1) as

\frac{\partial u(t,x)}{\partial t} = A \circ u(t,x) - B u(t,x) + C f(u(t,x)) + D \int_{-\infty}^{t} K(t-s) f(u(s,x))\,ds + I   (2)

where

u(t,x) = [u_1(t,x) \cdots u_n(t,x)]^T
f(u(\cdot)) = [f_1(u_1(\cdot)) \cdots f_n(u_n(\cdot))]^T
K(\cdot) = [K_{11}(\cdot)e_1 \cdots K_{1n}(\cdot)e_n \cdots K_{n1}(\cdot)e_1 \cdots K_{nn}(\cdot)e_n]^T_{n \times n^2}
A = \mathrm{diag}(a_{11}, \ldots, a_{1m}, \ldots, a_{n1}, \ldots, a_{nm})
B = \mathrm{diag}(b_1, \ldots, b_n)
C = (c_{ij})_{n \times n}, \quad I = [I_1 \cdots I_n]^T
D = [d_{11}e_1 \cdots d_{1n}e_1 \cdots d_{n1}e_n \cdots d_{nn}e_n]_{n \times n^2}
A \circ u(t,x) = \Big[ \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( a_{1k} \frac{\partial u_1(t,x)}{\partial x_k} \Big) \ \cdots \ \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( a_{nk} \frac{\partial u_n(t,x)}{\partial x_k} \Big) \Big]^T.

The boundary condition and initial condition of (2) are given by

u(t,x) = u^*, \quad (t,x) \in [0,+\infty) \times \partial\Omega   (3)
u(s,x) = \varphi(s,x), \quad (s,x) \in (-\infty,0] \times \Omega   (4)

respectively. Here, u^* is a column vector of constants u_1^*, \ldots, u_n^*; \varphi(s,x) = [\varphi_1(s,x) \cdots \varphi_n(s,x)]^T, where \varphi_1, \ldots, \varphi_n are bounded and continuous functions on (-\infty,0] \times \Omega. The activation functions f_1, \ldots, f_n are assumed to be bounded and to satisfy one of the following conditions.

Assumption 1: There exists a positive diagonal matrix F = \mathrm{diag}(F_1, \ldots, F_n) such that for any \varepsilon_1, \varepsilon_2 \in R

|f_j(\varepsilon_2) - f_j(\varepsilon_1)| \le F_j |\varepsilon_2 - \varepsilon_1|.

Assumption 2: There exists a positive diagonal matrix F = \mathrm{diag}(F_1, \ldots, F_n) such that for any \varepsilon_1, \varepsilon_2 \in R, \varepsilon_1 \ne \varepsilon_2

0 \le \frac{f_j(\varepsilon_2) - f_j(\varepsilon_1)}{\varepsilon_2 - \varepsilon_1} \le F_j.

Remark 1: Assumptions 1 and 2 have been made in many studies; see [24]–[31]. It should be noted that Assumption 1, which does not require the monotonicity of the activation functions, is more general than Assumption 2.

Based on the boundedness of the activation functions, it is known that the equation system -b_i u_i + \sum_{j=1}^{n} (c_{ij} + d_{ij}) f_j(u_j) + I_i = 0 (i = 1, \ldots, n) has at least one constant solution [32]. However, since the boundary condition is chosen to be of the Dirichlet type, the constant solution cannot be considered as an equilibrium point of (2) unless it equals the boundary value u^* (see [33] for more details). Thus the following assumption may be a prerequisite.

Assumption 3: The boundary value u^* is a constant equilibrium point of (2).

Now, we shift the equilibrium point u^* to the origin by the transformation v(\cdot) = u(\cdot) - u^*, which puts (2) into the following form:

\frac{\partial v(t,x)}{\partial t} = A \circ v(t,x) - B v(t,x) + C g(v(t,x)) + D \int_{-\infty}^{t} K(t-s) g(v(s,x))\,ds   (5)

where g(v(\cdot)) = [g_1(v_1(\cdot)) \cdots g_n(v_n(\cdot))]^T with g_j(v_j(\cdot)) = f_j(v_j(\cdot) + u_j^*) - f_j(u_j^*), j = 1, \ldots, n. Under Assumptions 1 and 2, the transformation yields

|g_j(v_j(\cdot))| \le F_j |v_j(\cdot)|, \quad g_j(0) = 0, \quad j = 1, \ldots, n   (6)
g_j^2(v_j(\cdot)) \le v_j(\cdot) F_j g_j(v_j(\cdot)), \quad g_j(0) = 0, \quad j = 1, \ldots, n   (7)

respectively.
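As a quick illustration (our own example, not one taken from the paper), the sigmoidal activation f_j(\varepsilon) = \tanh(\varepsilon) and the exponentially decaying kernel K_{ij}(s) = \kappa e^{-\kappa s} with \kappa > \mu satisfy the requirements above:

0 \le \frac{\tanh(\varepsilon_2) - \tanh(\varepsilon_1)}{\varepsilon_2 - \varepsilon_1} \le 1 \quad \text{(so Assumptions 1 and 2 hold with } F_j = 1\text{)},
\int_0^{+\infty} \kappa e^{-\kappa s}\,ds = 1, \qquad \int_0^{+\infty} e^{\mu s}\,\kappa e^{-\kappa s}\,ds = \frac{\kappa}{\kappa - \mu} < \infty.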


Obviously, the stability problem for the equilibrium point u^* of (2) is equivalent to that for the origin of (5). Suppose that there exists a stochastic perturbation to the neural network (5). Then the stochastically perturbed network with distributed delays and reaction-diffusions is introduced as follows:

dv(t,x) = \Big[ A \circ v(t,x) - B v(t,x) + C g(v(t,x)) + D \int_{-\infty}^{t} K(t-s) g(v(s,x))\,ds \Big] dt + \sigma(v(t,x))\,d\omega(t)   (8)

where \omega(t) = [\omega_1(t) \cdots \omega_n(t)]^T is an n-dimensional Brownian motion defined on a complete probability space (\Omega_0, \mathcal{F}, P) with a natural filtration \{\mathcal{F}_t\}_{t \ge 0}, and \sigma(v(\cdot)) = (\sigma_{ij}(v_j(\cdot)))_{n \times n} is the noise intensity matrix. Assume, as in [34] and [35], that \sigma_{ij}(0) = 0 and that there exist positive diagonal matrices \Gamma = \mathrm{diag}(\gamma_1, \ldots, \gamma_n) and \Theta = \mathrm{diag}(\theta_1, \ldots, \theta_n) such that

|\sigma_{ij}(\varepsilon_2) - \sigma_{ij}(\varepsilon_1)| \le \gamma_j |\varepsilon_2 - \varepsilon_1|, \quad \forall \varepsilon_1, \varepsilon_2 \in R   (9)
|\sigma_{ij}(\varepsilon)| \le \theta_j (1 + |\varepsilon|), \quad \forall \varepsilon \in R.   (10)

Note that the boundary condition and initial condition of (5) and (8) are given by

v(t,x) = 0, \quad (t,x) \in [0,+\infty) \times \partial\Omega   (11)
v(s,x) = \psi(s,x), \quad (s,x) \in (-\infty,0] \times \Omega   (12)

respectively, where \psi(s,x) = [\psi_1(s,x) \cdots \psi_n(s,x)]^T with \psi_i(s,x) = \varphi_i(s,x) - u_i^* (i = 1, \ldots, n).

In practice, the network parameters depend on certain resistance and capacitance values which are subject to uncertainties. Thus the quantities a_{ik}, b_i, c_{ij}, and d_{ij} in (5) and (8) may be considered as intervalized as [36]

A_I := \{ A = \mathrm{diag}(a_{11}, \ldots, a_{1m}, \ldots, a_{n1}, \ldots, a_{nm}) \in R^{nm \times nm} : 0 < \underline{A} \le A \le \bar{A}, \ \text{i.e.,} \ 0 < \underline{a}_{ik} \le a_{ik} \le \bar{a}_{ik}, \ i = 1, \ldots, n, \ k = 1, \ldots, m \},
B_I := \{ B = \mathrm{diag}(b_1, \ldots, b_n) \in R^{n \times n} : 0 < \underline{B} \le B \le \bar{B}, \ \text{i.e.,} \ 0 < \underline{b}_i \le b_i \le \bar{b}_i, \ i = 1, \ldots, n \},
C_I := \{ C = (c_{ij})_{n \times n} \in R^{n \times n} : \underline{C} \le C \le \bar{C}, \ \text{i.e.,} \ \underline{c}_{ij} \le c_{ij} \le \bar{c}_{ij}, \ i, j = 1, \ldots, n \},
D_I := \{ D = [d_{11}e_1 \cdots d_{1n}e_1 \cdots d_{n1}e_n \cdots d_{nn}e_n] \in R^{n \times n^2} : \underline{D} \le D \le \bar{D}, \ \text{i.e.,} \ \underline{d}_{ij} \le d_{ij} \le \bar{d}_{ij}, \ i, j = 1, \ldots, n \}.   (13)

Definition 1: The origin of (8) is said to be exponentially stable in the mean square if there exist constants \lambda > 0 and M \ge 1 such that for any t \ge 0

E\{ \|v(t,x)\|^2 \} \le M e^{-\lambda t} \sup_{-\infty < s \le 0} E\{ \|\psi(s,x)\|^2 \}.   (14)

The following lemmas will be used in deriving the main results.

Lemma 2: For any matrices M_1 and M_2, any real matrix L satisfying L^T L \le I, any scalar \varepsilon > 0, and any vectors x, y \in R^n

2 x^T M_1 L M_2 y \le \varepsilon^{-1} x^T M_1 M_1^T x + \varepsilon y^T M_2^T M_2 y.

Lemma 3: Let h(x) = h(x_1, \ldots, x_m) be a real-valued function defined on \bar\Omega. If h(x) \in C^1(\bar\Omega) and h(x)|_{\partial\Omega} = 0, then

\int_\Omega h^2(x)\,dx \le \Big( \frac{2 l_i}{\pi} \Big)^2 \int_\Omega \Big( \frac{\partial h(x)}{\partial x_i} \Big)^2 dx.   (15)

Proof: The proof can be obtained by following a similar line to that in the proof of Lemma 1 in [38].
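To see why the factor (2l_i/\pi)^2 in (15) cannot be improved, consider a simple one-dimensional check (our own illustration, not taken from the paper). Take m = 1, \Omega = (-l_1, l_1), and h(x) = \cos(\pi x / (2 l_1)), which vanishes on \partial\Omega. Then

\int_{-l_1}^{l_1} h^2(x)\,dx = l_1, \qquad \int_{-l_1}^{l_1} \big( h'(x) \big)^2 dx = \Big( \frac{\pi}{2 l_1} \Big)^2 l_1

so (15) holds with equality, whereas the same bound with constant l_1^2 in place of (2l_1/\pi)^2 (cf. Remark 2 below) is loose by the factor (\pi/2)^2 \approx 2.47.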


Remark 2: The following integral inequality:¹

\int_\Omega h^2(x)\,dx \le l_i^2 \int_\Omega \Big( \frac{\partial h(x)}{\partial x_i} \Big)^2 dx   (16)

has played an important role in the derivation of the main results of [21], [15], [12], and [14]. Obviously, the inequality in (15) is more accurate than that in (16). Thus, the results in the aforementioned papers may be improved to some extent if (16) is replaced by (15).

¹Under the assumptions h(x) \in C^1(\Omega) and h(x)|_{\partial\Omega} = 0, a proof of (16) is given in [12]. However, it should be noted that the step h(x) = \int_{-l_i}^{x_i} \frac{\partial h(x)}{\partial x_i}\,dx_i = -\int_{x_i}^{l_i} \frac{\partial h(x)}{\partial x_i}\,dx_i in that proof may not be feasible. In order to get (16), it may be appropriate to change the assumption h(x) \in C^1(\Omega) into h(x) \in C^1(\bar\Omega).

III. MAIN RESULTS

Theorem 1: Let E = [e_1 \cdots e_n \cdots e_1 \cdots e_n]_{n \times n^2}, L_\pi = \frac{\pi}{2} [l_1^{-1} e_1 \cdots l_m^{-1} e_1 \cdots l_1^{-1} e_n \cdots l_m^{-1} e_n]_{n \times nm}, and A_B = L_\pi \underline{A} L_\pi^T + \underline{B}. Then, (8) is robustly exponentially stable in the mean square if one of the following conditions holds.

(H1) Assumption 1 is satisfied and there exist scalars \rho > 0, \varepsilon_C > 0, \varepsilon_D > 0, and diagonal matrices S = \mathrm{diag}(s_1, \ldots, s_n) > 0, P = \mathrm{diag}(p_1, \ldots, p_n) > 0, Q = \mathrm{diag}(q_{11}, \ldots, q_{1n}, \ldots, q_{n1}, \ldots, q_{nn}) > 0 such that \rho I - nP \ge 0 and

\Xi_1 = \begin{bmatrix} X_{11} & -PC_0 & -PD_0 & -PM_C & -PM_D \\ -C_0^T P & X_{22} & 0 & 0 & 0 \\ -D_0^T P & 0 & Q - \varepsilon_D N_D^T N_D & 0 & 0 \\ -M_C^T P & 0 & 0 & \varepsilon_C I & 0 \\ -M_D^T P & 0 & 0 & 0 & \varepsilon_D I \end{bmatrix} > 0   (17)

where

X_{11} = P A_B + A_B^T P - \rho \Gamma^T \Gamma - F S F
X_{22} = S - E Q E^T - \varepsilon_C N_C^T N_C.

(H2) Assumption 2 is satisfied and there exist scalars \rho > 0, \varepsilon_C > 0, \varepsilon_D > 0, and diagonal matrices S = \mathrm{diag}(s_1, \ldots, s_n) > 0, P = \mathrm{diag}(p_1, \ldots, p_n) > 0, Q = \mathrm{diag}(q_{11}, \ldots, q_{1n}, \ldots, q_{n1}, \ldots, q_{nn}) > 0 such that \rho I - nP \ge 0 and

\Xi_2 = \begin{bmatrix} Y_{11} & Y_{12} & -PD_0 & -PM_C & -PM_D \\ Y_{12}^T & Y_{22} & 0 & 0 & 0 \\ -D_0^T P & 0 & Q - \varepsilon_D N_D^T N_D & 0 & 0 \\ -M_C^T P & 0 & 0 & \varepsilon_C I & 0 \\ -M_D^T P & 0 & 0 & 0 & \varepsilon_D I \end{bmatrix} > 0

where

Y_{11} = P A_B + A_B^T P - \rho \Gamma^T \Gamma
Y_{12} = -PC_0 - F S
Y_{22} = 2S - E Q E^T - \varepsilon_C N_C^T N_C.
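Condition (H1) is a standard LMI feasibility problem, so it can be checked numerically with any semidefinite programming solver. The following Python/CVXPY sketch shows one way to pose it. It is our own illustration, not code from the paper: the data matrices (A_B, C_0, D_0, M_C, N_C, M_D, N_D, F, Gamma, E) are filled with small synthetic placeholders, and the shapes of the uncertainty-structure matrices M_C, N_C, M_D, N_D are assumed, purely to make the script runnable; in practice these would come from the interval bounds of the network under study.

import numpy as np
import cvxpy as cp

# Toy sizes: n neurons (the spatial dimension only enters through A_B here).
n = 2
n2 = n * n

# Synthetic placeholder data -- NOT the matrices of the paper's examples.
rng = np.random.default_rng(0)
AB = np.diag([3.0, 2.5])                      # A_B = L_pi * A_lower * L_pi^T + B_lower (assumed given)
C0 = np.array([[0.2, -0.1], [0.1, 0.3]])      # nominal C
D0 = 0.05 * rng.standard_normal((n, n2))      # nominal D (n x n^2)
MC = 0.05 * rng.standard_normal((n, n2))      # uncertainty structure matrices (shapes assumed)
NC = 0.05 * rng.standard_normal((n2, n))
MD = 0.05 * rng.standard_normal((n, n2))
ND = 0.05 * rng.standard_normal((n2, n2))
F = np.eye(n)                                 # Lipschitz bounds F_j = 1
Gam = 0.1 * np.eye(n)                         # noise bound matrix Gamma
E = np.hstack([np.eye(n)] * n)                # E = [e_1 ... e_n ... e_1 ... e_n], n x n^2

# Decision variables of Theorem 1 (H1): diagonal P, S, Q and positive scalars.
p = cp.Variable(n, nonneg=True)
s = cp.Variable(n, nonneg=True)
q = cp.Variable(n2, nonneg=True)
rho = cp.Variable(nonneg=True)
epsC = cp.Variable(nonneg=True)
epsD = cp.Variable(nonneg=True)
P, S, Q = cp.diag(p), cp.diag(s), cp.diag(q)

X11 = P @ AB + AB.T @ P - rho * (Gam.T @ Gam) - F @ S @ F
X22 = S - E @ Q @ E.T - epsC * (NC.T @ NC)
X33 = Q - epsD * (ND.T @ ND)

Z = np.zeros
Xi1 = cp.bmat([
    [X11,        -P @ C0,     -P @ D0,     -P @ MC,            -P @ MD],
    [-C0.T @ P,   X22,         Z((n, n2)),  Z((n, n2)),         Z((n, n2))],
    [-D0.T @ P,   Z((n2, n)),  X33,         Z((n2, n2)),        Z((n2, n2))],
    [-MC.T @ P,   Z((n2, n)),  Z((n2, n2)), epsC * np.eye(n2),  Z((n2, n2))],
    [-MD.T @ P,   Z((n2, n)),  Z((n2, n2)), Z((n2, n2)),        epsD * np.eye(n2)],
])
# Xi1 is symmetric by construction; symmetrize explicitly for the PSD constraint.
Xi1s = (Xi1 + Xi1.T) / 2

constraints = [
    Xi1s >> 1e-6 * np.eye(2 * n + 3 * n2),    # Xi_1 > 0 (strict, via a small margin)
    n * p <= rho,                             # rho * I - n * P >= 0 (diagonal case)
    p >= 1e-6, s >= 1e-6, q >= 1e-6,          # P, S, Q > 0
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasibility status:", prob.status)

If the solver reports an optimal (feasible) status, the chosen interval data satisfy condition (H1); an infeasible status simply means the test is inconclusive for that data.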

Proof: Let \tau_{ij}(\eta) = \int_0^{+\infty} e^{\eta s} K_{ij}(s)\,ds, i, j = 1, \ldots, n. Then it can be seen that \tau_{ij}(\eta) \in C([0,\mu]). In fact, since e^{\mu s} K_{ij}(s) \in C([0,+\infty)), e^{\eta s} K_{ij}(s) \in C([0,\mu] \times [0,+\infty)), e^{\eta s} K_{ij}(s) \le e^{\mu s} K_{ij}(s) for \eta \in [0,\mu], and \int_0^{+\infty} e^{\mu s} K_{ij}(s)\,ds converges, according to the Weierstrass M-test we know that \tau_{ij}(\eta) is uniformly and absolutely convergent for \eta in [0,\mu], which ensures the continuity of \tau_{ij}(\eta) on [0,\mu] by [39, Th. 56]. In view of this and (17), there must exist a small constant \lambda \in (0, \min\{\mu, r_{\min}(A_B)/2\}) such that

\begin{bmatrix} Z_{11} & -PC_0 & -PD_0 & -PM_C & -PM_D \\ -C_0^T P & Z_{22} & 0 & 0 & 0 \\ -D_0^T P & 0 & Z_{33} & 0 & 0 \\ -M_C^T P & 0 & 0 & \varepsilon_C I & 0 \\ -M_D^T P & 0 & 0 & 0 & \varepsilon_D I \end{bmatrix} > 0   (18)

where

Z_{11} = -\lambda P + P A_B + A_B^T P - \rho \Gamma^T \Gamma - F S F
Z_{22} = S - \Big( \max_{1 \le i, j \le n} \tau_{ij}(\lambda) \Big) E Q E^T - \varepsilon_C N_C^T N_C
Z_{33} = Q - \varepsilon_D N_D^T N_D.

By the Schur complement, it can be deduced from (18) that

\tilde\Xi_1 = \begin{bmatrix} \tilde Z_{11} & -PC_0 & -PD_0 \\ -C_0^T P & Z_{22} & 0 \\ -D_0^T P & 0 & Z_{33} \end{bmatrix} > 0   (19)

where \tilde Z_{11} = Z_{11} - \varepsilon_C^{-1} P M_C M_C^T P - \varepsilon_D^{-1} P M_D M_D^T P.

Consider the Lyapunov–Krasovskii functional in (20), as shown at the bottom of the next page. Using the Ito formula [40], we have (21), as shown at the bottom of the next page, for any t \ge 0. By integration by parts, followed by (11) and Lemma 3, we can find

\int_\Omega v_i(\varsigma,x) \frac{\partial^2 v_i(\varsigma,x)}{\partial x_k^2}\,dx \le -\Big( \frac{\pi}{2 l_k} \Big)^2 \int_\Omega v_i^2(\varsigma,x)\,dx

which implies that

\int_\Omega 2 v^T(\varsigma,x) P \big( A \circ v(\varsigma,x) \big)\,dx \le -\int_\Omega v^T(\varsigma,x) \big( P L_\pi A L_\pi^T + L_\pi A L_\pi^T P \big) v(\varsigma,x)\,dx.   (22)

Using the Cauchy–Schwarz inequality \big( \int p(s) q(s)\,ds \big)^2 \le \big( \int p^2(s)\,ds \big) \big( \int q^2(s)\,ds \big) and noting \int_0^{+\infty} K_{ij}(s)\,ds = 1, we have

-\int_\Omega \Big( \int_0^{+\infty} K_{ij}(s) g_j^2(v_j(\varsigma - s, x))\,ds \Big) dx \le -\int_\Omega \Big( \int_{-\infty}^{\varsigma} K_{ij}(\varsigma - s) g_j(v_j(s,x))\,ds \Big)^2 dx

and, hence

-\sum_{i=1}^{n} \sum_{j=1}^{n} q_{ij} \int_\Omega \Big( \int_0^{+\infty} K_{ij}(s) g_j^2(v_j(\varsigma - s, x))\,ds \Big) dx \le -\int_\Omega \Big( \int_{-\infty}^{\varsigma} K(\varsigma - s) g(v(s,x))\,ds \Big)^T Q \Big( \int_{-\infty}^{\varsigma} K(\varsigma - s) g(v(s,x))\,ds \Big) dx.   (23)
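The step leading to (23) uses Cauchy–Schwarz with the splitting p(s) = K_{ij}(\varsigma - s)^{1/2} and q(s) = K_{ij}(\varsigma - s)^{1/2} g_j(v_j(s,x)); the paper leaves this implicit, so the following spelled-out version is our reading of the step:

\Big( \int_{-\infty}^{\varsigma} K_{ij}(\varsigma - s) g_j(v_j(s,x))\,ds \Big)^2 \le \underbrace{\int_{-\infty}^{\varsigma} K_{ij}(\varsigma - s)\,ds}_{=1} \int_{-\infty}^{\varsigma} K_{ij}(\varsigma - s) g_j^2(v_j(s,x))\,ds = \int_0^{+\infty} K_{ij}(s) g_j^2(v_j(\varsigma - s, x))\,ds

and negating both sides of this pointwise bound and integrating over \Omega gives the first inequality above.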


In view of (6), (9), and \rho I - nP \ge 0, the following inequalities hold true:

g^T(v(\varsigma,x)) S g(v(\varsigma,x)) \le v^T(\varsigma,x) F S F v(\varsigma,x)   (24)
\mathrm{trace}\{ \sigma^T(v(\varsigma,x)) P \sigma(v(\varsigma,x)) \} \le \rho\, v^T(\varsigma,x) \Gamma^T \Gamma v(\varsigma,x).   (25)

Noting A \ge \underline{A} and B \ge \underline{B}, we get

-P L_\pi A L_\pi^T - L_\pi A L_\pi^T P \le -P L_\pi \underline{A} L_\pi^T - L_\pi \underline{A} L_\pi^T P   (26)
-P B - B P \le -P \underline{B} - \underline{B} P.   (27)

Moreover, by Lemmas 1 and 2, we have

2 v^T(\varsigma,x) P C g(v(\varsigma,x)) \le 2 v^T(\varsigma,x) P C_0 g(v(\varsigma,x)) + \varepsilon_C^{-1} v^T(\varsigma,x) P M_C M_C^T P v(\varsigma,x) + \varepsilon_C\, g^T(v(\varsigma,x)) N_C^T N_C g(v(\varsigma,x))   (28)

2 v^T(\varsigma,x) P D \int_{-\infty}^{\varsigma} K(\varsigma - s) g(v(s,x))\,ds \le 2 v^T(\varsigma,x) P D_0 \int_{-\infty}^{\varsigma} K(\varsigma - s) g(v(s,x))\,ds + \varepsilon_D^{-1} v^T(\varsigma,x) P M_D M_D^T P v(\varsigma,x) + \varepsilon_D \Big( \int_{-\infty}^{\varsigma} K(\varsigma - s) g(v(s,x))\,ds \Big)^T N_D^T N_D \Big( \int_{-\infty}^{\varsigma} K(\varsigma - s) g(v(s,x))\,ds \Big).   (29)

Then, it follows from (21)–(29) that

V(t) \le V(0) - \int_0^t e^{\lambda \varsigma} \Big( \int_\Omega \zeta^T(\varsigma,x)\, \tilde\Xi_1\, \zeta(\varsigma,x)\,dx \Big) d\varsigma + 2 \int_0^t e^{\lambda \varsigma} \Big( \int_\Omega v^T(\varsigma,x) P \sigma(v(\varsigma,x))\,dx \Big) d\omega(\varsigma)   (30)

where

\zeta(\varsigma,x) = \Big[ v^T(\varsigma,x) \;\; g^T(v(\varsigma,x)) \;\; \Big( \int_{-\infty}^{\varsigma} K(\varsigma - s) g(v(s,x))\,ds \Big)^T \Big]^T.

From (19) and (30), we find

V(t) \le V(0) + 2 \int_0^t e^{\lambda \varsigma} \Big( \int_\Omega v^T(\varsigma,x) P \sigma(v(\varsigma,x))\,dx \Big) d\omega(\varsigma).   (31)

Since v_i(\varsigma,x) \in C([0,t] \times \bar\Omega), i = 1, \ldots, n, there exists a constant \Delta > 0 such that

\max_{1 \le i \le n} |v_i(\varsigma,x)| \le \Delta, \quad (\varsigma,x) \in [0,t] \times \bar\Omega.   (32)

Thus, (10) yields

\max_{1 \le i, j \le n} |\sigma_{ij}(v_j(\varsigma,x))| \le \bar\theta (1 + \Delta), \quad (\varsigma,x) \in [0,t] \times \bar\Omega   (33)

where \bar\theta = \max_{1 \le j \le n} \theta_j. In view of (32) and (33), we have, for any i, j \in \{1, 2, \ldots, n\}

\int_0^t e^{\lambda \varsigma} \Big( \int_\Omega v_i(\varsigma,x)\, p_i\, \sigma_{ij}(v_j(\varsigma,x))\,dx \Big)^2 d\varsigma < \infty

which, by [41, Th. 2.5], gives

E\Big\{ \int_0^t e^{\lambda \varsigma} \Big( \int_\Omega v^T(\varsigma,x) P \sigma(v(\varsigma,x))\,dx \Big) d\omega(\varsigma) \Big\} = 0.   (34)

Taking expectations on both sides of (31) and noting (34), we get

E\{V(t)\} \le E\{V(0)\}.   (35)

By (20), we find

E\{V(t)\} \ge r_{\min}(P)\, e^{\lambda t}\, E\{ \|v(t,x)\|^2 \}.   (36)

On the other hand, it can be easily verified that there must exist a constant M \ge r_{\max}(P) such that

E\{V(0)\} \le M \sup_{-\infty < s \le 0} E\{ \|\psi(s,x)\|^2 \}.
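Combining (35), (36), and the above bound on E\{V(0)\} yields the claimed estimate; the following is a short spelled-out version of this concluding step, with M / r_{\min}(P) playing the role of the constant in Definition 1:

E\{ \|v(t,x)\|^2 \} \le \frac{1}{r_{\min}(P)} e^{-\lambda t} E\{V(t)\} \le \frac{1}{r_{\min}(P)} e^{-\lambda t} E\{V(0)\} \le \frac{M}{r_{\min}(P)} e^{-\lambda t} \sup_{-\infty < s \le 0} E\{ \|\psi(s,x)\|^2 \}

which is the bound (14), so the origin of (8) is robustly exponentially stable in the mean square.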
