Neural Networks 77 (2016) 7–13


Neural networks letter

Noise further expresses exponential decay for globally exponentially stable time-varying delayed neural networks

Song Zhu a,∗, Qiqi Yang a, Yi Shen b

a College of Sciences, China University of Mining and Technology, Xuzhou 221116, China
b School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China

Article history:
Received 21 September 2015
Received in revised form 14 December 2015
Accepted 27 January 2016

Keywords:
Delayed neural networks
Exponential decay
Noise
Adjustable parameter

Abstract: This paper shows that a globally exponentially stable neural network with time-varying delay may converge faster under bounded noise than without it. The influence of noise on the global exponential stability of delayed neural networks (DNNs) is analyzed quantitatively: by comparing the upper bounds of the noise intensity with the coefficients of global exponential stability, we deduce that noise is able to further express exponential decay for DNNs. The upper bounds of the noise intensity are characterized by solving transcendental equations containing adjustable parameters. In addition, a numerical example is provided to illustrate the theoretical results. © 2016 Elsevier Ltd. All rights reserved.

1. Introduction

Neural networks (NNs) are nonlinear dynamic systems with some resemblance to biological neural networks in the brain. The stability of NNs depends mainly on their parametric configuration. In biological neural systems, signal transmission via synapses is usually a noisy process influenced by random fluctuations from the release of neurotransmitters and other disturbances (Haykin, 1994). Moreover, in the implementation of NNs, external random disturbances and time delays in signal transmission are common and can hardly be avoided. It is known that random disturbances and time delays in the neuron activations may result in oscillation or instability of NNs (Pham, Pakdaman, & Vibert, 1998). The stability properties of delayed NNs (DNNs) and stochastic NNs (SNNs) with external random disturbances have been widely investigated in recent years (see, e.g., Arik, 2002, Cao, Yuan, & Li, 2006, Chen, 2001, Chua & Yang, 1988, Huang, Ho, & Lam, 2005, Huang, Li, Duan, & Starzyk, 2012, Liao & Wang, 2003, Liu & Cao, 2009, 2010, 2011, Shen & Wang, 2007, 2008, 2012, Wang, Liu, Li, & Liu, 2006, Zeng & Wang, 2006, Zeng, Wang, & Liao, 2005, Zhang, Wang, & Liu, 2014, Zhu, Shen, & Chen, 2010, and the references cited therein). It is well known that noise can be used to stabilize a given unstable system, and it can also make a stable system even more stable.



∗ Corresponding author. E-mail addresses: [email protected] (S. Zhu), [email protected] (Q.Q. Yang), [email protected] (Y. Shen). http://dx.doi.org/10.1016/j.neunet.2016.01.012

There is an extensive literature on stabilization by noise, e.g., Appleby, Mao, and Rodkina (2008), Luo, Zhong, Zhu, and Shen (2014), Mao (2005, 2007b), Zhu and Shen (2013), and the references therein. The pioneering work in this area is due to Hasminskii (1981), who stabilized an unstable system by using two white noise sources. Mao, Marion, and Renshaw (2002) later established another important fact: environmental noise can suppress finite-time explosions in population dynamics. Recently, Deng, Luo, Mao, and Peng (2008) revealed that noise can suppress or express exponential growth under the linear growth condition. Hu, Liu, Mao, and Song (2009) developed the theory of Deng et al. (2008) to cope with much more general systems. In the absence of the linear growth condition, or under only a one-sided linear growth condition, Wu and Hu (2009) further considered the problem of stochastic suppression and stabilization of nonlinear differential systems. Liu and Shen (2012) revealed that a single noise source can also make almost every path of the solution of the corresponding stochastically perturbed system grow at most polynomially. On the other hand, noise can destabilize a stable DNN if its intensity exceeds certain limits, and the resulting instability depends on the noise intensity. For a stable DNN, is there a noise intensity that can make the DNN even more stable? It is therefore interesting to determine upper bounds on the random disturbances that express exponential decay for a stable DNN without destroying its global exponential stability. Although various stability properties of DNNs with noise have been extensively investigated by employing Lyapunov stability theory (Arik, 2002; Chen, 2001; Chua & Yang, 1988; Liao & Wang, 2003; Shen & Wang,


2007, 2008), linear matrix inequality methods (Cao et al., 2006; Huang et al., 2005; Wang et al., 2006; Xu, Lam, & Ho, 2006), and matrix norm theory (Faydasicok & Arik, 2012), few works have investigated whether noise can make an already stable DNN even more stable, directly by estimating the upper bounds of the noise level from the coefficients of the global exponential stability condition. Motivated by the above discussion, we quantitatively analyze the influence of noise on the global exponential stability of DNNs. Differently from the traditional Lyapunov stability theory and matrix norm theory, we investigate the exponential stability of DNNs directly from the coefficients of the DNNs. By comparing the upper bounds of the noise intensity with the coefficients of global exponential stability, we show that noise is able to further express exponential decay for DNNs without destroying global stability. The upper bounds of the noise intensity are characterized by solving transcendental equations containing adjustable parameters.

2. Problem formulation

Throughout this paper, unless otherwise specified, R^n and R^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of n × m real matrices. Let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous), and let ω(t) be a scalar Brownian motion defined on this probability space. If A is a matrix, its operator norm is denoted by ∥A∥ = sup{|Ax| : |x| = 1}, where |·| is the Euclidean norm. Denote by L²_{F0}([−τ̄, 0]; R^n) the family of all F0-measurable C([−τ̄, 0]; R^n)-valued random variables ψ = {ψ(θ) : −τ̄ ≤ θ ≤ 0} such that sup_{−τ̄≤θ≤0} E|ψ(θ)|² < ∞, where E{·} stands for the mathematical expectation operator with respect to the given probability measure P.

Consider a DNN model

  dz(t) = [−Az(t) + Bg(z(t)) + Dg(z(t − τ(t))) + I]dt,
  z(t) = ψ(t − t0) ∈ C([t0 − τ̄, t0]; R^n),  t0 − τ̄ ≤ t ≤ t0,   (1)

where z(t) = (z1(t), …, zn(t))^T ∈ R^n is the state vector of the neurons, t0 ∈ R+ and ψ are the initial values, A = diag{a1, …, an} ∈ R^{n×n} with ai > 0 is the self-feedback connection weight matrix, B = (bkl)_{n×n} ∈ R^{n×n} and D = (dkl)_{n×n} ∈ R^{n×n} are connection weight matrices, τ(t) : [t0, +∞) → [0, τ̄] is a delay satisfying τ′(t) ≤ µ < 1, ψ = {ψ(s) : −τ̄ ≤ s ≤ 0} ∈ C([−τ̄, 0]; R^n), τ̄ is the maximum of the delay, I is the neuron external input (bias), and g(·) ∈ R^n is a continuous bounded vector-valued activation function satisfying the Lipschitz condition

  |g(u) − g(v)| ≤ k|u − v|,  ∀u, v ∈ R^n,  g(0) = 0,

where k is a known constant.

As usual, a vector z∗ = [z1∗, z2∗, …, zn∗]^T is said to be an equilibrium point of system (1) if it satisfies Az∗ = (B + D)g(z∗) + I. For notational convenience, we will always shift an intended equilibrium point z∗ of system (1) to the origin by letting x = z − z∗, f(x) = g(x + z∗) − g(z∗). It is easy to transform system (1) into the following form:

  dx(t) = [−Ax(t) + Bf(x(t)) + Df(x(t − τ(t)))]dt,
  x(t) = ψ(t − t0) ∈ C([t0 − τ̄, t0]; R^n),  t0 − τ̄ ≤ t ≤ t0.   (2)

Assumption 1. The activation function f(·) satisfies the following Lipschitz condition with f(0) = 0:

  |f(u) − f(v)| ≤ k|u − v|,  ∀u, v ∈ R^n,   (3)

where k is a known constant.

Under Assumption 1, DNN (2) has a unique state x(t; t0, ψ) on t ≥ t0 for any initial value (t0, ψ). We now define the global exponential stability of the state of DNN (2).

Definition 1. The state of DNN (2) is globally exponentially stable if, for any t0, ψ, there exist α > 0 and β > 0 such that

  |x(t; t0, ψ)| ≤ α∥ψ∥ exp(−β(t − t0)),  ∀t ≥ t0,   (4)

where x(t; t0, ψ) is the state of the model in (2).
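As a quick numerical illustration of Definition 1, the following short script integrates a DNN of the form (2) with a forward-Euler scheme and estimates its exponential decay rate from the simulated trajectory. It is only a sketch: the matrices, the activation f, the delay, and the step size are illustrative placeholders (they happen to match the example reconstructed in Section 4), not data prescribed by this section.

```python
import numpy as np

# Forward-Euler integration of dx(t) = [-A x + B f(x) + D f(x(t - tau(t)))] dt.
# All parameters below are illustrative assumptions for this sketch.
A = np.diag([1.0, 1.0])
B = np.array([[-1.0, 1.0], [1.0, -1.0]])
D = np.array([[1.0, 1.0], [-1.0, -1.0]])
f = lambda x: (np.abs(x + 1) - np.abs(x - 1)) / 2     # Lipschitz with k = 1, f(0) = 0
tau = lambda t: 0.00025 * (np.sin(t) + 1)             # time-varying delay, tau_bar = 5e-4

dt, T = 1e-4, 10.0
hist = [np.array([0.4, -0.2])]                        # constant initial history psi

for i in range(int(T / dt)):
    t = i * dt
    x = hist[-1]
    x_del = hist[max(0, int(round((t - tau(t)) / dt)))]   # x(t - tau(t)) from history
    hist.append(x + dt * (-A @ x + B @ f(x) + D @ f(x_del)))

# Estimate the empirical decay rate beta from the slope of log|x(t)|.
norms = np.linalg.norm(np.array(hist), axis=1)
ts = np.arange(len(hist)) * dt
slope = np.polyfit(ts[1:], np.log(norms[1:] + 1e-300), 1)[0]
print(f"empirical decay rate ~= {-slope:.3f} (positive means exponential decay)")
```

A positive estimated rate is consistent with an envelope of the form (4); the printed value is only a crude fit over the simulated window, not the sharp β of the definition.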

3. Main results

Now, the question is: for a given globally exponentially stable DNN, how much noise intensity can the DNN endure without losing its stability? We consider the noise-induced DNN described by the Itô stochastic differential equation (SDNN)

  dy(t) = [−Ay(t) + Bf(y(t)) + Df(y(t − τ(t)))]dt + σy(t)dω(t),  t > t0,
  y(t) = ψ(t − t0) ∈ L²_{F0}([t0 − τ̄, t0]; R^n),  t0 − τ̄ ≤ t ≤ t0,   (5)

where A, B, D, f are the same as in Section 2, f satisfies Assumption 1, σ is the intensity of the noise, and ω(t) is a scalar Brownian motion defined on the probability space (Ω, F, {F_t}_{t≥0}, P). Under Assumption 1, SDNN (5) has a unique state for any initial value (t0, ψ), and the origin is its equilibrium point. We calculate the largest noise intensity that SDNN (5) can bear without losing global exponential stability. Moreover, we deduce the intensity of noise that makes DNN (2) even more stable when it is already stable. For SDNN (5), we give the following definition of global exponential stability.

Definition 2 (Mao, 2007a). SDNN (5) is said to be almost surely globally exponentially stable if for any t0 ∈ R+, ψ ∈ L²_{F0}([−τ̄, 0]; R^n), there exist α > 0 and β > 0 such that

  |y(t; t0, ψ)| ≤ α∥ψ∥ exp(−β(t − t0)),  ∀t ≥ t0,

holds almost surely; i.e., the Lyapunov exponent lim sup_{t→∞}(ln|y(t; t0, ψ)|/t) < 0 almost surely, where y(t; t0, ψ) is the state of SDNN (5). SDNN (5) is said to be mean square globally exponentially stable if for any t0 ∈ R+, ψ ∈ L²_{F0}([−τ̄, 0]; R^n), there exist α > 0 and β > 0 such that

  E|y(t; t0, ψ)|² ≤ α∥ψ∥² exp(−β(t − t0)),  ∀t ≥ t0,

holds; i.e., lim sup_{t→∞}(ln(E|y(t; t0, ψ)|²)/t) < 0, where y(t; t0, ψ) is the state of SDNN (5).

From the above definitions, almost sure global exponential stability and mean square global exponential stability of SDNN (5) formally correspond to each other. In fact, they do not imply each other in general, and additional conditions are required in order to deduce one from the other. In particular, under Assumption 1 we have the following lemma (Mao, 2007a).

Lemma 1. Let Assumption 1 hold. Then mean square global exponential stability of SDNN (5) implies almost sure exponential stability of SDNN (5).
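As an aside, the mean square exponent in Definition 2 can be checked empirically for any particular SDE: simulate many independent paths, average |y(t)|² across them, and read off the slope of ln E|y(t)|². The sketch below does this for a scalar toy SDE dy = −a·y dt + σ·y dω, an assumption chosen only because its mean square behavior, E|y(t)|² = y0² exp((−2a + σ²)t), is known in closed form and so the estimate can be compared with the exact exponent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear SDE dy = -a*y dt + sigma*y dW: mean-square exponent is -2a + sigma^2.
a, sigma, y0 = 1.0, 0.5, 1.0
dt, T, n_paths = 1e-3, 5.0, 20000

n = int(T / dt)
y = np.full(n_paths, y0)
ms = np.empty(n)                               # E|y(t)|^2 estimated across paths
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    y = y + (-a * y) * dt + sigma * y * dW     # Euler-Maruyama step
    ms[i] = np.mean(y**2)

ts = (np.arange(n) + 1) * dt
slope = np.polyfit(ts, np.log(ms), 1)[0]
print(f"estimated mean-square exponent: {slope:.3f} (exact: {-2*a + sigma**2:.3f})")
```

Here a negative fitted slope corresponds to mean square exponential stability; Lemma 1 then guarantees the almost sure property without a separate pathwise test.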

Theorem 1. Let Assumption 1 hold and let DNN (2) be globally exponentially stable with coefficient of global exponential stability α < √2/2. Then SDNN (5) is mean square globally exponentially stable and also almost surely globally exponentially stable if |σ| < σ̄, where σ̄ is the unique positive solution of the transcendental equation

  2c2 exp(3τ̄c1) + 2α² exp(−2βτ̄) = 1.   (6)

Furthermore, noise expresses exponential decay for SDNN (5) if |σ| < σ̂, where σ̂ is the unique positive solution of the transcendental equation

  3τ̄c1 + 2βτ̄ + ln[2c2/(1 − 2α²)] = 0,   (7)

where

  c1 = (9τ̄/ε)(∥A∥² + ∥B∥²k² + 3∥D∥²k²) + 2σ²/(1 − ε) + (54τ̄/ε)∥D∥²k²[6τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + 2τ̄σ²],

  c2 = (54τ̄/ε)∥D∥²k²(τ̄ + τ̄/(1 − µ)) + σ²α²/((1 − ε)β) + (27τ̄/ε)∥D∥²k²[(6τ̄³/(1 − µ))∥D∥²k² + (α²/β)(1 + exp(2βτ̄) + 6τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + 2τ̄σ²)],

and ε ∈ (0, 1) is an adjustable parameter.

Proof. Fix t0, ψ = {ψ(s) : −τ̄ ≤ s ≤ 0}; for simplicity, we write x(t; t0, ψ), y(t; t0, ψ) as x(t), y(t), respectively. From (2) and (5), we have

  x(t) − y(t) = ∫_{t0}^t [−A(x(s) − y(s)) + B(f(x(s)) − f(y(s))) + D(f(x(s − τ(s))) − f(y(s − τ(s))))]ds − ∫_{t0}^t σy(s)dω(s).

When t ≤ t0 + 3τ̄, from Assumption 1, some fundamental inequalities and the Hölder inequality (Mao, 2007a), we have

  E|x(t) − y(t)|² ≤ (1/ε)E|∫_{t0}^t [−A(x(s) − y(s)) + B(f(x(s)) − f(y(s))) + D(f(x(s − τ(s))) − f(y(s − τ(s))))]ds|² + (1/(1 − ε))E|∫_{t0}^t σy(s)dω(s)|²
  ≤ [(9τ̄/ε)(∥A∥² + ∥B∥²k² + 3∥D∥²k²) + 2σ²/(1 − ε)] ∫_{t0}^t E|x(s) − y(s)|²ds + (27τ̄/ε)∥D∥²k² ∫_{t0}^t E|x(s) − x(s − τ(s))|²ds + (27τ̄/ε)∥D∥²k² ∫_{t0}^t E|y(s) − y(s − τ(s))|²ds + (2σ²/(1 − ε)) ∫_{t0}^t E|x(s)|²ds.   (8)

In addition, when t ≥ t0 + τ̄, from (5) and Assumption 1,

  ∫_{t0+τ̄}^t E|y(s) − y(s − τ(s))|²ds ≤ ∫_{t0+τ̄}^t ds ∫_{s−τ̄}^s {[6τ̄(∥A∥² + ∥B∥²k²) + 2σ²]E|y(r)|² + 6τ̄∥D∥²k² E|y(r − τ(r))|²}dr.   (9)

By reversing the order of integration, we have

  ∫_{t0+τ̄}^t ds ∫_{s−τ̄}^s E|y(r)|²dr = ∫_{t0}^t dr ∫_{max(t0+τ̄,r)}^{min(r+τ̄,t)} E|y(r)|²ds ≤ τ̄ ∫_{t0}^t E|y(r)|²dr.   (10)

For the same reason, we have

  ∫_{t0+τ̄}^t ds ∫_{s−τ̄}^s E|y(r − τ(r))|²dr ≤ [τ̄²(sup_{t0−τ̄≤s≤t0} E|y(s)|²) + τ̄ ∫_{t0}^t E|y(u)|²du]/(1 − µ).   (11)

So, when t ≥ t0 + τ̄, by substituting (10) and (11) into (9), we have

  ∫_{t0+τ̄}^t E|y(s) − y(s − τ(s))|²ds ≤ (6τ̄³/(1 − µ))∥D∥²k² (sup_{t0−τ̄≤s≤t0} E|y(s)|²) + [6τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + 2τ̄σ²] ∫_{t0}^t E|y(s)|²ds.   (12)

Substituting (12) into (8), when t ≥ t0 + τ̄, we have

  E|x(t) − y(t)|² ≤ [(9τ̄/ε)(∥A∥² + ∥B∥²k² + 3∥D∥²k²) + 2σ²/(1 − ε)] ∫_{t0}^t E|x(s) − y(s)|²ds + (54τ̄/ε)∥D∥²k²(τ̄ + τ̄/(1 − µ)) sup_{t0−τ̄≤s≤t0+τ̄} E|y(s)|² + (27τ̄/ε)∥D∥²k²{(6τ̄³/(1 − µ))∥D∥²k²(sup_{t0−τ̄≤s≤t0} E|y(s)|²) + [6τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + 2τ̄σ²] ∫_{t0}^t E|y(s) − x(s) + x(s)|²ds} + [(27τ̄/ε)∥D∥²k²(1 + exp(2βτ̄)) + σ²/(1 − ε)](α²/β) sup_{t0−τ̄≤s≤t0} E|y(s)|².   (13)

Noting that E|y(s) − x(s) + x(s)|² ≤ 2E|x(s) − y(s)|² + 2E|x(s)|², and bounding E|x(s)|² by the exponential decay (4), from (13) we further have

  E|x(t) − y(t)|² ≤ c1 ∫_{t0}^t E|x(s) − y(s)|²ds + c2 sup_{t0−τ̄≤s≤t0+τ̄} E|y(s)|²,   (14)

with c1, c2 as defined in the statement of the theorem. When t0 + τ̄ ≤ t ≤ t0 + 3τ̄, according to the Gronwall inequality (Mao, 2007a), we obtain

  E|x(t) − y(t)|² ≤ c2 exp(3τ̄c1) (sup_{t0−τ̄≤s≤t0+τ̄} E|y(s)|²).   (15)

Therefore

  E|y(t)|² ≤ 2E|x(t) − y(t)|² + 2E|x(t)|² ≤ [2c2 exp(3τ̄c1) + 2α² exp(−2β(t − t0))] (sup_{t0−τ̄≤s≤t0+τ̄} E|y(s)|²).   (16)

Thus, when t0 + τ̄ ≤ t ≤ t0 + 3τ̄,

  sup_{t0+τ̄≤t≤t0+3τ̄} E|y(t; t0, ψ)|² ≤ [2c2 exp(3τ̄c1) + 2α² exp(−2βτ̄)] (sup_{t0−τ̄≤s≤t0+τ̄} E|y(s)|²) =: cˆ (sup_{t0−τ̄≤s≤t0+τ̄} E|y(s)|²),   (17)

where cˆ = 2c2 exp(3τ̄c1) + 2α² exp(−2βτ̄). From (6), when |σ| < σ̄, then cˆ < 1. So, when t0 + τ̄ ≤ t ≤ t0 + 3τ̄, let ∂cˆ/∂ε = 0. This is equivalent to ∂ln c̄/∂ε = 0, where c̄ = cˆ − 2α² exp(−2βτ̄). Namely

  ∂ln c̄/∂ε = ∂ln c2/∂ε + 3τ̄ ∂c1/∂ε = (∂c2/∂ε + 3τ̄c2 ∂c1/∂ε)/c2 = 0.   (18)

So, from (18) we have

  (c4 − θσ²)ε⁴ + (θσ² − 3c4 − 6τ̄c4σ² + 6τ̄θσ⁴ + 3τ̄c3c4 − 3τ̄c3θσ²)ε³ + (3c4 + 6τ̄c4σ² − 9τ̄c3c4 + 6τ̄c3θσ²)ε² + (9τ̄c3c4 − c4 − 3τ̄c3θσ²)ε − 3τ̄c3c4 = 0,   (19)

where

  θ = α²/β,
  c3 = 9τ̄(∥A∥² + ∥B∥²k² + 3∥D∥²k²) + 54τ̄∥D∥²k²[6τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + 2τ̄σ²],
  c4 = 27τ̄∥D∥²k²[2(τ̄ + τ̄/(1 − µ)) + (6τ̄³/(1 − µ))∥D∥²k² + (α²/β)(1 + exp(2βτ̄) + 6τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + 2τ̄σ²)],

so that c1 = c3/ε + 2σ²/(1 − ε) and c2 = c4/ε + θσ²/(1 − ε). Therefore, Eq. (19) has a real solution ε0; replacing ε by ε0 in cˆ, cˆ becomes a strictly monotone function of the noise intensity. Hence Eq. (6) has a unique positive solution σ̄ such that |σ̄| = σmax when ε = ε0 ∈ (0, 1).

Let γ = −ln cˆ/τ̄. From (7), when |σ| < σ̂, then γ/2 > β, namely cˆ < exp(−2βτ̄). When t0 + τ̄ ≤ t ≤ t0 + 3τ̄, letting ∂cˆ/∂ε = 0 similarly yields formula (19), and hence there exists a unique σ̂ < σ̄ in Eq. (7) such that γ/2 > β. We can conclude

  sup_{t0+τ̄≤t≤t0+3τ̄} E|y(t; t0, ψ)|² ≤ exp(−γτ̄) sup_{t0−τ̄≤t≤t0+τ̄} E|y(t; t0, ψ)|².

Then, for any positive integer m = 1, 2, …, we denote ȳ as follows:

  ȳ(t0 + (2m − 1)τ̄; t0, ψ) := {y(t0 + (2m − 1)τ̄ + s; t0, ψ) : −τ̄ ≤ s ≤ 0} ∈ C([−τ̄, 0]; R^n).   (20)

From the existence and uniqueness of the state of SDNN (5), we have y(t; t0, ψ) = y(t; t0 + (2m − 1)τ̄, ȳ(t0 + (2m − 1)τ̄; t0, ψ)). When t ≥ t0 + (2m − 1)τ̄, from (17),

  sup_{t0+(2m−1)τ̄≤t≤t0+(2m+1)τ̄} E|y(t; t0, ψ)|² ≤ exp(−γτ̄) sup_{t0+(2m−3)τ̄≤t≤t0+(2m−1)τ̄} E|y(t; t0, ψ)|² ≤ ⋯ ≤ exp(−γmτ̄) sup_{t0−τ̄≤t≤t0+τ̄} E|y(t; t0, ψ)|² = c̄0 exp(−γmτ̄),

where c̄0 = sup_{t0−τ̄≤t≤t0+τ̄} E|y(t; t0, ψ)|². So for any t > t0 + τ̄ there exists a positive integer m such that t0 + (2m − 1)τ̄ ≤ t ≤ t0 + (2m + 1)τ̄, and we have

  E|y(t; t0, ψ)|² ≤ c̄0 exp(γτ̄/2) exp(−(γ/2)(t − t0)).   (21)

Condition (21) is also true when t0 − τ̄ ≤ t ≤ t0 + τ̄. So SDNN (5) is mean square globally exponentially stable. According to Lemma 1, SDNN (5) is also almost surely globally exponentially stable. Moreover, when |σ| < σ̂ we have γ/2 > β, so the state of SDNN (5) decays even faster than that of DNN (2). □
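The adjustable parameter ε enters the bounds only through cˆ, so the stationary-point equation (19) is what pins down ε0 in practice. The following is a minimal sketch of that step, assuming c3, c4, θ, σ, and τ̄ are already known numbers; the inputs in the final line are hypothetical magnitudes, not values from the paper.

```python
import numpy as np

def eps0_candidates(c3, c4, theta, sigma, tau_bar):
    """Real roots of the quartic (19) lying in the admissible interval (0, 1)."""
    s2 = sigma**2
    coeffs = [                                   # highest-degree coefficient first
        c4 - theta * s2,
        theta * s2 - 3*c4 - 6*tau_bar*c4*s2 + 6*tau_bar*theta*s2**2
            + 3*tau_bar*c3*c4 - 3*tau_bar*c3*theta*s2,
        3*c4 + 6*tau_bar*c4*s2 - 9*tau_bar*c3*c4 + 6*tau_bar*c3*theta*s2,
        9*tau_bar*c3*c4 - c4 - 3*tau_bar*c3*theta*s2,
        -3*tau_bar*c3*c4,
    ]
    roots = np.roots(coeffs)
    return [r.real for r in roots if abs(r.imag) < 1e-10 and 0 < r.real < 1]

# Hypothetical magnitudes, for illustration only:
print(eps0_candidates(c3=0.08, c4=0.5, theta=2.29, sigma=0.05, tau_bar=5e-4))
```

Any admissible root can then be substituted back into cˆ before solving (6) or (7) for the noise bound.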

Remark 1. Theorem 1 shows that if the DNN is globally exponentially stable, then the solutions of the noise-induced SDNN decay exponentially at an even faster rate, provided the noise is lower than the given upper bounds.

Remark 2. From the proof of Theorem 1, the upper bounds of the noise intensity are derived via subtle inequalities and can be estimated by solving transcendental equations containing adjustable parameters. As transcendental equations can be solved with software such as MATLAB, the derived conditions in the theorem can be verified easily.
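To make Remark 2 concrete, here is one way to locate the noise bound σ̄ of Eq. (6) numerically. This is an assumption-laden illustration, not the paper's procedure: the functions c1_of and c2_of are wired up with the numeric constants of Eq. (34) from the example in Section 4, the adjustable parameter ε is scanned over a grid rather than obtained from (19), and scipy's brentq is used as the scalar root finder (the paper itself only suggests MATLAB).

```python
import numpy as np
from scipy.optimize import brentq

alpha, beta, tau = 0.4, 0.0699, 5e-4          # example values from Section 4

# c1 and c2 as functions of (eps, sigma); numeric constants follow Eq. (34).
def c1_of(eps, sig):
    return (0.0765 + 1.458e-6 + 1.08e-4 * sig**2) / eps + 2 * sig**2 / (1 - eps)

def c2_of(eps, sig):
    return 0.5 * ((0.108 / eps) * (4.580159 + 3e-9 + 2.289e-3 * sig**2)
                  + 4.57797 * sig**2 / (1 - eps))

def c_hat(eps, sig):
    # Left-hand side of Eq. (6): 2*c2*exp(3*tau*c1) + 2*alpha^2*exp(-2*beta*tau)
    return (2 * c2_of(eps, sig) * np.exp(3 * tau * c1_of(eps, sig))
            + 2 * alpha**2 * np.exp(-2 * beta * tau))

# For each eps in (0,1), solve c_hat(eps, sigma) = 1 for sigma, then take the
# largest admissible root over the grid; this approximates sigma_max.
best = 0.0
for eps in np.linspace(0.01, 0.99, 197):
    try:
        sig = brentq(lambda s: c_hat(eps, s) - 1.0, 1e-6, 10.0)
        best = max(best, sig)
    except ValueError:          # same sign at both ends: no admissible sigma here
        pass
print(f"estimated sigma_max ~= {best:.4f}")   # the paper reports 0.0566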

Let us consider another situation: if we add a random disturbance with time delay to DNN (2), we obtain a new SDNN,

  dy(t) = [−Ay(t) + Bf(y(t)) + Df(y(t − τ(t)))]dt + λy(t − τ(t))dω(t),  t > t0,
  y(t) = ψ(t − t0) ∈ L²_{F0}([−τ̄, 0]; R^n),  t0 − τ̄ ≤ t ≤ t0,   (22)

where λ stands for the intensity of the noise. In this case, we have the following theorem.

Theorem 2. Let Assumption 1 hold and let DNN (2) be globally exponentially stable with coefficient of global exponential stability α < √2/2. Then SDNN (22) is mean square globally exponentially stable and also almost surely globally exponentially stable if |λ| < λ̄, where λ̄ is the unique positive solution of the transcendental equation

  2c6 exp(3τ̄c5) + 2α² exp(−2βτ̄) = 1.   (23)

Moreover, noise expresses exponential decay for SDNN (22) if |λ| < λ̂, where λ̂ is the unique positive solution of the transcendental equation

  3τ̄c5 + 2βτ̄ + ln[2c6/(1 − 2α²)] = 0,   (24)

where

  c5 = (9τ̄/δ)(∥A∥² + ∥B∥²k² + 3∥D∥²k²) + 3λ²/(1 − δ) + [(54τ̄/δ)∥D∥²k² + 6λ²/(1 − δ)][6τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + 2τ̄λ²/(1 − µ)],

  c6 = [(54τ̄/δ)∥D∥²k² + 6λ²/(1 − δ)][τ̄ + τ̄/(1 − µ) + (3τ̄³∥D∥²k² + τ̄²λ²)/(1 − µ) + (α²/β)(3τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + τ̄λ²/(1 − µ))] + [(27τ̄/δ)∥D∥²k²(1 + exp(2βτ̄)) + 3λ²/(2(1 − δ))](α²/β),

and δ ∈ (0, 1) is an adjustable parameter.

Proof. Fix t0, ψ = {ψ(s) : −τ̄ ≤ s ≤ 0}; for simplicity, we write x(t; t0, ψ), y(t; t0, ψ) as x(t), y(t), respectively. From (2) and (22), we have

  x(t) − y(t) = ∫_{t0}^t [−A(x(s) − y(s)) + B(f(x(s)) − f(y(s))) + D(f(x(s − τ(s))) − f(y(s − τ(s))))]ds − ∫_{t0}^t λy(s − τ(s))dω(s).

When t ≤ t0 + 3τ̄, from Assumption 1, some fundamental inequalities and the Hölder inequality (Mao, 2007a), we have

  E|x(t) − y(t)|² ≤ (1/δ)E|∫_{t0}^t [−A(x(s) − y(s)) + B(f(x(s)) − f(y(s))) + D(f(x(s − τ(s))) − f(y(s − τ(s))))]ds|² + (1/(1 − δ))E|∫_{t0}^t λy(s − τ(s))dω(s)|²
  ≤ [(9τ̄/δ)(∥A∥² + ∥B∥²k² + 3∥D∥²k²) + 3λ²/(1 − δ)] ∫_{t0}^t E|x(s) − y(s)|²ds + (27τ̄/δ)∥D∥²k² ∫_{t0}^t E|x(s) − x(s − τ(s))|²ds + [(27τ̄/δ)∥D∥²k² + 3λ²/(1 − δ)] ∫_{t0}^t E|y(s) − y(s − τ(s))|²ds + (3λ²/(1 − δ)) ∫_{t0}^t E|x(s)|²ds.   (25)

In addition, when t ≥ t0 + τ̄, from (22) and Assumption 1,

  ∫_{t0+τ̄}^t E|y(s) − y(s − τ(s))|²ds ≤ ∫_{t0+τ̄}^t ds ∫_{s−τ̄}^s {6τ̄(∥A∥² + ∥B∥²k²)E|y(r)|² + (6τ̄∥D∥²k² + 2λ²)E|y(r − τ(r))|²}dr.   (26)

By reversing the order of integration, we have

  ∫_{t0+τ̄}^t ds ∫_{s−τ̄}^s E|y(r)|²dr ≤ τ̄ ∫_{t0}^t E|y(r)|²dr.   (27)

For the same reason, we have

  ∫_{t0+τ̄}^t ds ∫_{s−τ̄}^s E|y(r − τ(r))|²dr ≤ [τ̄²(sup_{t0−τ̄≤s≤t0} E|y(s)|²) + τ̄ ∫_{t0}^t E|y(u)|²du]/(1 − µ).   (28)

So, when t ≥ t0 + τ̄, by substituting (27) and (28) into (26), we have

  ∫_{t0+τ̄}^t E|y(s) − y(s − τ(s))|²ds ≤ [(6τ̄³∥D∥²k² + 2τ̄²λ²)/(1 − µ)] sup_{t0−τ̄≤s≤t0} E|y(s)|² + [6τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + 2τ̄λ²/(1 − µ)] ∫_{t0}^t E|y(s)|²ds.   (29)

Substituting (29) into (25), when t ≥ t0 + τ̄, we have

  E|x(t) − y(t)|² ≤ [(9τ̄/δ)(∥A∥² + ∥B∥²k² + 3∥D∥²k²) + 3λ²/(1 − δ)] ∫_{t0}^t E|x(s) − y(s)|²ds + [(54τ̄/δ)∥D∥²k² + 6λ²/(1 − δ)](τ̄ + τ̄/(1 − µ)) sup_{t0−τ̄≤s≤t0+τ̄} E|y(s)|² + [(27τ̄/δ)∥D∥²k² + 3λ²/(1 − δ)]{[(6τ̄³∥D∥²k² + 2τ̄²λ²)/(1 − µ)](sup_{t0−τ̄≤s≤t0} E|y(s)|²) + [6τ̄²(∥A∥² + ∥B∥²k² + ∥D∥²k²/(1 − µ)) + 2τ̄λ²/(1 − µ)] ∫_{t0}^t E|y(s) − x(s) + x(s)|²ds} + [(27τ̄/δ)∥D∥²k²(1 + exp(2βτ̄)) + 3λ²/(2(1 − δ))](α²/β) sup_{t0−τ̄≤s≤t0} E|y(s)|².   (30)

Noting again that E|y(s) − x(s) + x(s)|² ≤ 2E|x(s) − y(s)|² + 2E|x(s)|², from (30) we further have

  E|x(t) − y(t)|² ≤ c5 ∫_{t0}^t E|x(s) − y(s)|²ds + c6 sup_{t0−τ̄≤s≤t0+τ̄} E|y(s)|².   (31)

The rest of the proof can be completed similarly to the proof of Theorem 1. □

4. Illustrative example

Example 1. Consider a two-state DNN

  dx(t)/dt = −Ax(t) + Bf(x(t)) + Df(x(t − τ(t))).   (32)

The parameters are as follows (matrices written row-wise):

  A = (1 0; 0 1),  B = (−1 1; 1 −1),  D = (1 1; −1 −1),

f(xj) = (|xj + 1| − |xj − 1|)/2, τ(t) = 0.00025(sin t + 1), x(0) = [0.4, −0.2]^T, where τ(t) is the time-varying delay. Hence, according to Theorem 2 in Shen and Wang (2012), DNN (32) is globally exponentially stable with α = 0.4, β = 0.0699. In the presence of noise, the DNN becomes an SDNN:

  dy(t) = [−Ay(t) + Bf(y(t)) + Df(y(t − τ(t)))]dt + σy(t)dw(t).   (33)

According to Theorem 1, let µ = 0; Eq. (6) becomes

  [(0.108/ε)(4.580159 + 3 × 10⁻⁹ + 2.289 × 10⁻³σ²) + 4.57797σ²/(1 − ε)] · exp[(1.1475 × 10⁻⁴ + 2.187 × 10⁻⁹ + 1.62 × 10⁻⁷σ²)/ε + 3 × 10⁻³σ²/(1 − ε)] + 0.31998 = 1,   (34)

and we can obtain its solution curve in the (ε, σ) plane. Fig. 1 shows the stability region in the (ε, σ) plane for SDNN (33). We obtain σmax = 0.0566. Likewise, Eq. (7) becomes

  (1.1475 × 10⁻⁴ + 2.187 × 10⁻⁹ + 1.62 × 10⁻⁷σ²)/ε + 3 × 10⁻³σ²/(1 − ε) + ln[(0.3392 + 4.3299 × 10⁻⁴σ²)/ε + 3.1403σ²/(1 − ε)] + 6.99 × 10⁻⁵ = 0,   (35)

and by solving Eq. (35) we can obtain γ/2 = 0.1645 > β = 0.0699. Fig. 2 depicts the transient states of SDNN (33) with τ̄ = 0.0005, σ = 0.0566. It shows that SDNN (33) is mean square globally exponentially stable and also almost surely globally exponentially stable, since the parameter satisfies |σ| < σmax.
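For readers who want to reproduce Fig. 2 qualitatively, the sketch below simulates SDNN (33) with the Euler–Maruyama scheme. It is illustrative only: the matrices mirror Example 1 as reconstructed above, the step size and horizon are arbitrary choices, and no claim is made that the output matches the published figure exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Example 1 data (as reconstructed above); sigma at the bound sigma_max = 0.0566
A = np.diag([1.0, 1.0])
B = np.array([[-1.0, 1.0], [1.0, -1.0]])
D = np.array([[1.0, 1.0], [-1.0, -1.0]])
f = lambda y: (np.abs(y + 1) - np.abs(y - 1)) / 2
tau = lambda t: 0.00025 * (np.sin(t) + 1)
sigma, dt, T = 0.0566, 1e-4, 10.0

hist = [np.array([0.4, -0.2])]                 # constant initial history
for i in range(int(T / dt)):
    t = i * dt
    y = hist[-1]
    y_del = hist[max(0, int(round((t - tau(t)) / dt)))]
    drift = -A @ y + B @ f(y) + D @ f(y_del)
    dW = rng.normal(0.0, np.sqrt(dt))          # scalar Brownian increment
    hist.append(y + drift * dt + sigma * y * dW)   # Euler-Maruyama step

ys = np.array(hist)
print("|y(T)| =", np.linalg.norm(ys[-1]))      # near zero: exponential decay
```

Running several independent seeds and plotting both components of y(t) against t gives transient curves of the kind shown in Fig. 2.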

1−δ

2 2

δ 1−δ    τ¯ 27 τ ¯ + sup E |y(s)|2 + ∥D∥2 k2 1−µ δ t0 −τ¯ ≤s≤t0 +τ¯  3 3λ2 6τ¯ ∥D∥2 k2 + 2τ¯ 2 λ2 + ( sup E |y(s)|2 ) 1−δ 1−µ t0 −τ¯ ≤s≤t0    2 2 ∥D∥ k 2τ¯ λ2 + 6τ¯ 2 ∥A∥2 + ∥B∥2 k2 + + 1−µ 1−µ   t × E |y(s) − x(s) + x(s)|2 ds 

(32)

dy(t ) = [−Ay(t ) + Bf (y(t )) + Df (y(t − τ (t )))]dt



t0

+

= −Ax(t ) + Bf (x(t )) + Df (x(t − τ (t ))).

The parameters are as follows:

2 2

∥D∥ k  + 6τ¯ 2 ∥A∥2 + ∥B∥2 k2 + 1−µ  t 2 2 2τ¯ λ E |y(s)|2 ds. + 1−µ t0



4. Illustrative example

5. Conclusion In this paper, we just use the coefficients of global exponential stability to discuss the issue: whether noise intensity can make the globally exponentially stable DNN even more stable or not. We deduce that noise is able to further express exponential decay for the DNNs. The upper bounds of noise can be estimated by solving transcendental equations containing adjustable parameters. Acknowledgments (31)

The rest of the proof can be completed similarly to the proof of Theorem 1.

The authors would like to thank the editor and the anonymous reviewers for their insightful comments and valuable suggestions, which have helped us in finalizing the paper. This work was supported by the Fundamental Research Funds for the Central Universities of 2015QNA55.

Fig. 1. The stability region in the (ε, σ) plane for SDNN (33).

Fig. 2. The transient states of SDNN (33) with τ̄ = 0.0005, σ = 0.0566.

References

Appleby, J. A. D., Mao, X., & Rodkina, A. (2008). Stabilization and destabilization of nonlinear differential equations by noise. IEEE Transactions on Automatic Control, 53, 683–691.
Arik, S. (2002). An analysis of global asymptotic stability of delayed cellular neural networks. IEEE Transactions on Neural Networks, 13, 1239–1242.
Cao, J., Yuan, K., & Li, H. (2006). Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays. IEEE Transactions on Neural Networks, 17, 1646–1651.
Chen, T. (2001). Global exponential stability of delayed Hopfield neural networks. Neural Networks, 14, 977–980.
Chua, L. O., & Yang, L. (1988). Cellular neural networks: Theory. IEEE Transactions on Circuits and Systems, 35, 1257–1272.
Deng, F., Luo, Q., Mao, X., & Peng, S. (2008). Noise suppresses or expresses exponential growth. Systems & Control Letters, 57, 262–270.
Faydasicok, O., & Arik, S. (2012). Robust stability analysis of a class of neural networks with discrete time delays. Neural Networks, 29–30, 52–59.
Hasminskii, R. Z. (1981). Stochastic stability of differential equations. Sijthoff and Noordhoff.
Haykin, S. (1994). Neural networks. New York: Prentice Hall.
Hu, G., Liu, M., Mao, X., & Song, M. (2009). Noise expresses exponential growth under regime switching. Systems & Control Letters, 58, 691–699.
Huang, H., Ho, D. W. C., & Lam, J. (2005). Stochastic stability analysis of fuzzy Hopfield neural networks with time-varying delays. IEEE Transactions on Circuits and Systems Part II: Express Briefs, 52, 251–255.
Huang, T., Li, C., Duan, S., & Starzyk, J. A. (2012). Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects. IEEE Transactions on Neural Networks and Learning Systems, 23, 866–875.
Liao, X., & Wang, J. (2003). Algebraic criteria for global exponential stability of cellular neural networks with multiple time delays. IEEE Transactions on Circuits and Systems. Part I. Regular Papers, 50, 268–275.
Liu, X., & Cao, J. (2009). Exponential stability of anti-periodic solutions for neural networks with multiple discrete and distributed delays. IMechE, Part I: Journal of Systems and Control Engineering, 223, 299–308.
Liu, X., & Cao, J. (2010). Complete periodic synchronization of delayed neural networks with discontinuous activations. International Journal of Bifurcation and Chaos, 20, 2151–2164.
Liu, X., & Cao, J. (2011). Local synchronization of one-to-one coupled neural networks with discontinuous activations. Cognitive Neurodynamics, 5, 13–20.
Liu, L., & Shen, Y. (2012). Noise suppresses explosive solutions of differential systems with coefficients satisfying the polynomial growth condition. Automatica, 48, 619–624.
Luo, W., Zhong, K., Zhu, S., & Shen, Y. (2014). Further results on robustness analysis of global exponential stability of recurrent neural networks with time delays and random disturbances. Neural Networks, 53, 127–133.
Mao, X. (2005). Delay population dynamics and environmental noise. Stochastics and Dynamics, 5, 149–162.
Mao, X. (2007a). Stochastic differential equations and applications (2nd ed.). Chichester: Horwood.
Mao, X. (2007b). Stability and stabilization of stochastic differential delay equations. IET Control Theory & Applications, 1, 1551–1566.
Mao, X., Marion, G., & Renshaw, E. (2002). Environmental noise suppresses explosion in population dynamics. Stochastic Processes and their Applications, 97, 95–110.
Pham, J., Pakdaman, K., & Vibert, J. (1998). Noise-induced coherent oscillations in randomly connected neural networks. Physical Review E, 58, 3610–3622.
Shen, Y., & Wang, J. (2007). Noise-induced stabilization of the recurrent neural networks with mixed time-varying delays and Markovian-switching parameters. IEEE Transactions on Neural Networks, 18, 1857–1862.
Shen, Y., & Wang, J. (2008). An improved algebraic criterion for global exponential stability of recurrent neural networks with time-varying delays. IEEE Transactions on Neural Networks, 19, 528–531.
Shen, Y., & Wang, J. (2012). Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances. IEEE Transactions on Neural Networks and Learning Systems, 23, 87–96.
Wang, Z., Liu, Y., Li, M., & Liu, X. (2006). Stability analysis for stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Transactions on Neural Networks, 17, 814–820.
Wu, F., & Hu, S. (2009). Suppression and stabilization of noise. International Journal of Control, 82, 2150–2157.
Xu, S., Lam, J., & Ho, D. W. C. (2006). A new LMI condition for delay-dependent asymptotic stability of delayed Hopfield neural networks. IEEE Transactions on Circuits and Systems Part II: Express Briefs, 53, 230–234.
Zeng, Z., & Wang, J. (2006). Complete stability of cellular neural networks with time-varying delays. IEEE Transactions on Circuits and Systems. Part I: Regular Papers, 53, 944–955.
Zeng, Z., Wang, J., & Liao, X. (2005). Global asymptotic stability and global exponential stability of neural networks with unbounded time-varying delays. IEEE Transactions on Circuits and Systems Part II: Express Briefs, 52, 168–173.
Zhang, H., Wang, Z., & Liu, D. (2014). A comprehensive review of stability analysis of continuous-time recurrent neural networks. IEEE Transactions on Neural Networks and Learning Systems, 25, 1229–1262.
Zhu, S., & Shen, Y. (2013). Robustness analysis for connection weight matrices of global exponential stability of stochastic recurrent neural networks. Neural Networks, 38, 17–22.
Zhu, S., Shen, Y., & Chen, G. (2010). Exponential passivity of neural networks with time-varying delay and uncertainty. Physics Letters A, 375, 136–142.
