Neural Networks 54 (2014) 57–69


Global asymptotic stability analysis for delayed neural networks using a matrix-based quadratic convex approach

Xian-Ming Zhang, Qing-Long Han*

Centre for Intelligent and Networked Systems, Central Queensland University, Rockhampton QLD 4702, Australia

Article history: Received 30 August 2013; Received in revised form 8 January 2014; Accepted 21 February 2014

Keywords: Generalized neural networks; Global asymptotic stability; Interval time-varying delay; Integral inequality; Matrix-based quadratic convex approach

Abstract: This paper is concerned with global asymptotic stability for a class of generalized neural networks with interval time-varying delays by constructing a new Lyapunov–Krasovskii functional which includes some integral terms in the form of ∫_{t−h}^{t} (h − t + s)^j ẋᵀ(s)Rj ẋ(s)ds (j = 1, 2, 3). Some useful integral inequalities are established for the derivatives of those integral terms introduced in the Lyapunov–Krasovskii functional. A matrix-based quadratic convex approach is introduced to prove not only the negative definiteness of the derivative of the Lyapunov–Krasovskii functional, but also the positive definiteness of the Lyapunov–Krasovskii functional. Some novel stability criteria are formulated in two cases, respectively, where the time-varying delay is continuous uniformly bounded and where the time-varying delay is differentiable uniformly bounded with its time-derivative bounded by constant lower and upper bounds. These criteria are applicable to both static neural networks and local field neural networks. The effectiveness of the proposed method is demonstrated by two numerical examples. © 2014 Elsevier Ltd. All rights reserved.

1. Introduction

During the past decades, neural networks (NNs) have found a wide range of applications in a variety of areas such as associative memory (Bao, Wen, & Zeng, 2012; Michel, Farrell, & Sun, 1990; Zeng & Wang, 2010), static image processing (Chua & Yang, 1988), pattern recognition (Wang, 1995), and combinatorial optimization (Chen & Fang, 2000). Most applications of NNs depend closely on some dynamic behaviors, especially on global asymptotic stability. However, due to the finite switching speeds of amplifiers, time delays are frequently encountered in practical NNs, and they often degrade the system performance or destabilize an NN under consideration. Therefore, in recent years, increasing attention has been paid to the stability of delayed NNs, and a number of delay-dependent stability criteria have been reported in the literature; see, for example, Faydasicok and Arik (2012, 2013), He, Wu, and She (2006), Shao (2008a), Wang and Chen (2012), Wang, Liu, and Liu (2009) and Zhang, Tang, Fang, and Wu (2012).




Consider the following delayed NN, whose equilibrium point is supposed to be shifted into the origin:

ẋ(t) = −Ax(t) + W0 f(W2 x(t)) + W1 f(W2 x(t − τ(t))),  x(θ) = φ(θ), θ ∈ [−h2, 0]   (1)

where x(t) = col{x1(t), x2(t), ..., xn(t)} ∈ Rⁿ and f(x(t)) = col{f1(x1(t)), f2(x2(t)), ..., fn(xn(t))} ∈ Rⁿ are the neuron state vector and the neuron activation function, respectively; A = diag{a1, a2, ..., an} > 0 is a constant real matrix; W0, W1 and W2 are the interconnection matrices representing the weighting coefficients of the neurons; φ is an initial condition and the time-varying delay τ(t) is a continuous function satisfying

0 ≤ h1 ≤ τ(t) ≤ h2 < ∞.   (2)
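As a concrete illustration of model (1), the following short Python sketch integrates a two-neuron instance by forward Euler with a history buffer for the delayed term. The system matrices are those of Example 1 in Section 4; the tanh activation (which satisfies the sector condition used later, with L⁻ = 0) and the particular delay τ(t) are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Two-neuron instance of model (1); matrices from Example 1, rest assumed.
A  = np.diag([2.0, 2.0])
W0 = np.array([[1.0, 1.0], [-1.0, -1.0]])
W1 = np.array([[0.88, 1.0], [1.0, 1.0]])
W2 = np.eye(2)                              # W2 = I: a local field NN
f  = np.tanh                                # sector-bounded, f(0) = 0

dt, T = 1e-3, 30.0
h2 = 0.9                                    # upper delay bound (assumption)
tau = lambda t: 0.5 + 0.4 * np.sin(t) ** 2  # continuous delay in [0.5, 0.9]

n_hist = int(h2 / dt) + 1
hist = [np.array([0.5, -0.3])] * n_hist     # constant initial condition phi
x = hist[-1].copy()

for k in range(int(T / dt)):
    t = k * dt
    d = min(int(round(tau(t) / dt)), n_hist - 1)
    x_delay = hist[-1 - d]                  # x(t - tau(t)) from the buffer
    dx = -A @ x + W0 @ f(W2 @ x) + W1 @ f(W2 @ x_delay)
    x = x + dt * dx
    hist.append(x.copy()); hist.pop(0)      # slide the delay window

print("state norm at T:", np.linalg.norm(x))  # decays toward 0 if stable
```

For this delay range the trajectory decays to the origin, which is consistent with (though of course no substitute for) the stability certificates derived below.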

The NN model (1) includes some NNs as its special cases. If taking W2 = I, then the model (1) represents a class of delayed local field neural networks (LFNNs) (Faydasicok & Arik, 2012; Liu, Wang, & Liu, 2009; Shao, 2008b; Zeng, He, Wu, & Zhang, 2011); if taking W0 = W1 = I, then the model (1) reduces to a class of delayed static neural networks (SNNs) (Li, Gao, & Yu, 2011; Zuo, Yang, & Wang, 2010). The study on delay-dependent stability of (1) aims to derive a maximum upper bound h2^max of h2 for a given h1 ≥ 0 such that the NN (1) is globally asymptotically stable for any τ(t) satisfying h1 ≤ τ(t) ≤ h2^max. The obtained h2^max is thus regarded as a key


index to measure the conservatism of a delay-dependent stability criterion: the larger h2^max, the less conservative the criterion (Souza, 2013). In order to formulate some less conservative stability conditions, several effective approaches have been proposed in the past decade. To mention a few, one can refer to a free weighting matrix approach, a convex delay analysis approach, a delay-decomposition approach and a reciprocally convex approach. Recently, a new ''quadratic convex approach'' has been proposed in Kim (2011) to study the stability of linear systems with time-varying delays. This approach is then employed to investigate the global asymptotic stability of NNs in Zhang, Yang, Liu, and Zhang (2013). The key idea is to construct a novel Lyapunov–Krasovskii functional (LKF) with the following integral term

T(t) := ∫_{t−h2}^{t} Σ_{j=1}^{3} (h2 − t + s)^j ẋᵀ(s)Rj ẋ(s)ds.   (3)

The conspicuous feature of T(t) is that the integrand is the sum of the quadratic terms ẋᵀ(s)Ri ẋ(s) multiplied by (h2 − t + s) with degree i (i = 1, 2, 3). As a result, the time derivative of the chosen LKF can be bounded by a quadratic convex function with respect to τ(t). By employing the quadratic convex approach, some stability criteria are derived in Kim (2011) and Zhang et al. (2013). However, there are several issues to be addressed, which are given in the following.

• Taking the time derivative of T(t), we have

Ṫ(t) = ẋᵀ(t)(h2 R1 + h2² R2 + h2³ R3)ẋ(t) − ∫_{t−h2}^{t} [Φ1(s) + 2(h2 − t + s)Φ2(s) + 3(h2 − t + s)² Φ3(s)] ds   (4)

where Φi(s) = ẋᵀ(s)Ri ẋ(s) (i = 1, 2, 3). The estimation made by Kim (2011) and Zhang et al. (2013) on the integral terms in (4) needs to be reconsidered. The main drawbacks lie in two aspects: one is that some useful terms, i.e. −2∫_{t−τ(t)}^{t} (h2 − τ(t))Φ2(s)ds and −3∫_{t−τ(t)}^{t} [(h2 − τ(t))² + 2(h2 − τ(t))(τ(t) − t + s)]Φ3(s)ds, are overly bounded by zero; and the other one is the use of the so-called basic inequality, which certainly leads to conservative results;
• The constraint of positive definiteness is imposed on the augmented Lyapunov matrix P (i.e. P > 0) in Kim (2011) and Zhang et al. (2013), while this constraint is not necessary for the positive definiteness of the chosen LKF;
• The use of the quadratic convex approach is questionable. For instance, in Kim (2011), the quadratic convex approach is applied to a function ξtᵀ[Ψ0 + d(t)Ψ1 + Υd]ξt (see the proof of Theorem 1 in Kim (2011)), while this function may not be a quadratic function of the scalar d(t) because ξt is a vector-valued function implicitly dependent on d(t). The same case also happens in Zhang et al. (2013);
• When the lower bound h1 of τ(t) is strictly greater than zero, the conditions obtained in Kim (2011) and Zhang et al. (2013) fail to make any conclusion on the stability of the system under consideration.

Therefore, based on the observation above, it is significant to establish some new integral inequalities for the integral terms in (4) and develop the quadratic convex approach to formulate some less conservative stability criteria, which motivates the current study. In this paper, we will present a matrix-based quadratic convex approach to stability of a class of generalized NNs described by (1). First, some novel integral inequalities for the integral terms in (4) are established, where the over-bounding performed in Kim (2011) and Zhang et al. (2013) is no longer involved. Second, a matrix-based quadratic convex approach is applied to derive a sufficient condition such that the positive definiteness of the chosen LKF can be ensured. As a result, the constraint P > 0 in both Kim (2011) and Zhang et al. (2013) is removed. Third, the matrix-based quadratic convex approach is employed to formulate some less conservative stability criteria for the NN (1) for two cases, respectively, where the time-varying delay τ(t) satisfies (2) and where τ(t) satisfies both (2) and µ1 ≤ τ̇(t) ≤ µ2 with µ1 and µ2 being two constants. Moreover, these stability criteria are applicable not only to LFNNs but also to SNNs. Finally, two numerical examples are given to demonstrate the effectiveness of the proposed results.

Throughout this paper, the notations are standard. The superscript 'T' stands for the transpose of a vector or a matrix. For an invertible matrix M, its inverse is denoted by M⁻¹. For a real symmetric matrix P, P > 0 (P ≥ 0) means that P is positive definite (positive semi-definite), and λmax(P) and λmin(P) represent the maximum and minimum eigenvalues of P, respectively. I and 0 denote an identity matrix and a zero matrix of appropriate dimensions, respectively. diag{···} and col{···} denote a block-diagonal matrix and a block-column vector, respectively.

2. Some novel integral inequalities and a matrix-based quadratic convex approach

In this section, we first establish some novel integral inequalities, and then introduce a matrix-based quadratic convex approach to delay-dependent stability analysis for delayed NNs.

2.1. Some novel integral inequalities

To begin with, we introduce the following lemmas.

Lemma 1. Let α and β be real column vectors with dimensions n1 and n2, respectively. For given real symmetric matrices M1 ∈ R^{n1×n1}, M2 ∈ R^{n2×n2} and a real matrix S ∈ R^{n1×n2}, if

[M1 S; Sᵀ M2] ≥ 0

then the following inequality holds for any scalar κ > 0:

−2αᵀSβ ≤ καᵀM1α + κ⁻¹βᵀM2β.   (5)

Proof. The proof can be completed by noticing that

[M1 S; Sᵀ M2] ≥ 0 ⟺ [κM1 S; Sᵀ κ⁻¹M2] ≥ 0

and that 0 ≤ [αᵀ βᵀ][κM1 S; Sᵀ κ⁻¹M2][αᵀ βᵀ]ᵀ = καᵀM1α + 2αᵀSβ + κ⁻¹βᵀM2β. □
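The inequality (5) is easy to sanity-check numerically. The following sketch (an illustration, not part of the paper) builds a random positive semi-definite block matrix [M1 S; Sᵀ M2] by squaring a random matrix and then verifies (5) for random vectors and scalars κ:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 4

# Build [M1 S; S^T M2] >= 0 by construction: G = B^T B is PSD.
B = rng.standard_normal((n1 + n2, n1 + n2))
G = B.T @ B
M1, S, M2 = G[:n1, :n1], G[:n1, n1:], G[n1:, n1:]

for _ in range(1000):
    a = rng.standard_normal(n1)
    b = rng.standard_normal(n2)
    k = rng.uniform(0.01, 100.0)
    lhs = -2.0 * a @ S @ b
    rhs = k * a @ M1 @ a + (1.0 / k) * b @ M2 @ b
    assert lhs <= rhs + 1e-9, "Lemma 1 violated (should not happen)"
print("inequality (5) held in all random trials")
```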

Lemma 2 (Seuret & Gouaisbaut, 2013). For a given matrix R > 0, the following inequality holds for any continuously differentiable function ω : [a, b] → Rn b



ω˙ T (u)Rω( ˙ u)du ≥ a

1 b−a

(Γ1T RΓ1 + 3Γ2T RΓ2 )

(6)

where

Γ1 := ω(b) − ω(a) Γ2 := ω(b) + ω(a) −

2 b−a



b

ω(u)du.

a

It is clear to see that the inequality (6) provides a tighter b T lower bound for a ω ˙ (u)Rω( ˙ u)du than Jensen’s inequality because 3Γ2T RΓ2 > 0 for Γ2 ̸= 0. Thus, the inequality (6) is an improvement over Jensen’s inequality.
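For intuition, the following sketch (illustrative; the trajectory and matrix R are assumptions, not from the paper) compares the Jensen bound Γ1ᵀRΓ1/(b − a) with the Wirtinger-based bound (6) on a sample trajectory, using simple trapezoidal quadrature:

```python
import numpy as np

a, b, n = 0.0, 2.0, 2000
u = np.linspace(a, b, n)
du = u[1] - u[0]
R = np.array([[2.0, 0.5], [0.5, 1.0]])             # any R > 0

w  = np.stack([np.sin(3 * u), u ** 2], axis=1)     # sample omega(u)
dw = np.stack([3 * np.cos(3 * u), 2 * u], axis=1)  # its derivative

integrand = np.einsum('ui,ij,uj->u', dw, R, dw)    # dw(u)^T R dw(u)
integral = np.sum((integrand[1:] + integrand[:-1]) * du / 2)

G1 = w[-1] - w[0]
w_int = np.sum((w[1:] + w[:-1]) * du / 2, axis=0)  # integral of omega
G2 = w[-1] + w[0] - (2.0 / (b - a)) * w_int

jensen    = G1 @ R @ G1 / (b - a)
wirtinger = (G1 @ R @ G1 + 3.0 * G2 @ R @ G2) / (b - a)
print(f"integral {integral:.4f} >= wirtinger {wirtinger:.4f} >= jensen {jensen:.4f}")
```

The printed chain of inequalities shows the extra 3Γ2ᵀRΓ2/(b − a) term closing part of the gap left by Jensen's inequality.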


Before the novel integral inequalities are established, we denote

ν1(t) := (1/h1) ∫_{t−h1}^{t} x(s)ds
ν2(t) := (1/(τ(t) − h1)) ∫_{t−τ(t)}^{t−h1} x(s)ds   (7)
ν3(t) := (1/(h2 − τ(t))) ∫_{t−h2}^{t−τ(t)} x(s)ds.

Apply L'Hospital's rule to obtain

lim_{h1→0⁺} ν1(t) = lim_{h1→0⁺} x(t − h1)/1 = x(t)
lim_{τ(t)→h1⁺} ν2(t) = lim_{τ(t)→h1⁺} x(t − τ(t)) = x(t − h1)
lim_{τ(t)→h2⁻} ν3(t) = lim_{τ(t)→h2⁻} [−x(t − τ(t))]/(−1) = x(t − h2).

Thus, νi(t) (i = 1, 2, 3) are well defined. Now, we have the following integral inequalities.

Lemma 3. Let τ(t) be a continuous function satisfying 0 ≤ h1 ≤ τ(t) ≤ h2. For any n × n real matrix R1 > 0 and a vector ẋ : [−h2, 0] → Rⁿ such that the integration concerned below is well defined, the following inequality holds for any 2n × 2n real matrix S1 satisfying [R̃1 S1; S1ᵀ R̃1] ≥ 0:

−(h2 − h1) ∫_{t−h2}^{t−h1} ẋᵀ(s)R1ẋ(s)ds ≤ 2ψ11ᵀS1ψ21 − ψ11ᵀR̃1ψ11 − ψ21ᵀR̃1ψ21   (8)

where R̃1 := diag{R1, 3R1}; and

ψ11 := col{x(t − τ(t)) − x(t − h2), x(t − τ(t)) + x(t − h2) − 2ν3(t)}
ψ21 := col{x(t − h1) − x(t − τ(t)), x(t − h1) + x(t − τ(t)) − 2ν2(t)}   (9)

with νi(t) (i = 2, 3) being defined in (7).

Proof. It is easy to show that the inequality (8) is true for the two special cases where h1 = τ(t) < h2 and where h1 < τ(t) = h2, respectively. Now, suppose that h1 < τ(t) < h2. In this situation, one has

−(h2 − h1) ∫_{t−h2}^{t−h1} ẋᵀ(s)R1ẋ(s)ds = ψ1(t) + ψ2(t)   (10)

where

ψ1(t) := −(h2 − h1) ∫_{t−h2}^{t−τ(t)} ẋᵀ(s)R1ẋ(s)ds
ψ2(t) := −(h2 − h1) ∫_{t−τ(t)}^{t−h1} ẋᵀ(s)R1ẋ(s)ds.

Apply Lemma 2 to obtain

ψ1(t) ≤ −((h2 − h1)/(h2 − τ(t))) ψ11ᵀ diag{R1, 3R1} ψ11
ψ2(t) ≤ −((h2 − h1)/(τ(t) − h1)) ψ21ᵀ diag{R1, 3R1} ψ21   (11)

where ψj1 (j = 1, 2) are defined in (9), from which it follows that

ψ1(t) + ψ2(t) ≤ −(1 + κ)ψ11ᵀ diag{R1, 3R1}ψ11 − (1 + κ⁻¹)ψ21ᵀ diag{R1, 3R1}ψ21

where κ := (τ(t) − h1)/(h2 − τ(t)). By Lemma 1, one has

ψ1(t) + ψ2(t) ≤ −ψ11ᵀ diag{R1, 3R1}ψ11 + 2ψ11ᵀS1ψ21 − ψ21ᵀ diag{R1, 3R1}ψ21

which gives (8). □

Lemma 4. Let τ(t) be a continuous function satisfying 0 ≤ h1 ≤ τ(t) ≤ h2. For any n × n real matrix R2 > 0 and a vector ẋ : [−h2, 0] → Rⁿ such that the integration concerned below is well defined, the following inequality holds for any vectors φi1 ∈ R^q and real matrices Zi ∈ R^{q×q}, Fi ∈ R^{q×n} satisfying [Zi Fi; Fiᵀ R2] ≥ 0 (i = 1, 2):

−∫_{t−h2}^{t−h1} (h2 − t + s)ẋᵀ(s)R2ẋ(s)ds ≤ (1/2)(h2 − τ(t))²φ11ᵀZ1φ11 + 2(h2 − τ(t))φ11ᵀF1φ12 + (1/2)[(h2 − h1)² − (h2 − τ(t))²]φ21ᵀZ2φ21 + 2φ21ᵀF2[(h2 − τ(t))φ22 + (τ(t) − h1)φ23]   (12)

where

φ12 := x(t − τ(t)) − ν3(t),  φ22 := x(t − h1) − x(t − τ(t)),  φ23 := x(t − h1) − ν2(t).   (13)

Proof. It is clear that

−∫_{t−h2}^{t−h1} (h2 − t + s)ẋᵀ(s)R2ẋ(s)ds = φ1(t) + φ2(t)   (14)

where

φ1(t) := −∫_{t−h2}^{t−τ(t)} (h2 − t + s)ẋᵀ(s)R2ẋ(s)ds
φ2(t) := −∫_{t−τ(t)}^{t−h1} (h2 − t + s)ẋᵀ(s)R2ẋ(s)ds.

Now, we first make an estimation on φ1(t). Set w(s) = (h2 − t + s)ẋ(s). Then apply Lemma 1 to obtain

(h2 − t + s)ẋᵀ(s)R2ẋ(s) = (1/(h2 − t + s)) wᵀ(s)R2w(s) ≥ −(h2 − t + s)φ11ᵀZ1φ11 − 2φ11ᵀF1w(s)   (15)

from which it follows that

φ1(t) ≤ ∫_{t−h2}^{t−τ(t)} [(h2 − t + s)φ11ᵀZ1φ11 + 2φ11ᵀF1w(s)]ds = (1/2)(h2 − τ(t))²φ11ᵀZ1φ11 + 2(h2 − τ(t))φ11ᵀF1φ12.   (16)

Similarly,

φ2(t) ≤ (1/2)[(h2 − h1)² − (h2 − τ(t))²]φ21ᵀZ2φ21 + 2φ21ᵀF2[(h2 − τ(t))φ22 + (τ(t) − h1)φ23].   (17)

Substituting (16) and (17) into (14) yields (12). □

Remark 1. When h1 = 0, τ(t) satisfies 0 ≤ τ(t) ≤ h2. The integral term ℑ(t) := −∫_{t−h2}^{t} (h2 − t + s)ẋᵀ(s)R2ẋ(s)ds is estimated in Kim (2011) (Theorem 1 therein). First, rewrite ℑ(t) as

ℑ(t) = ℑ1(t) + ℑ2(t) + ℑ3(t)


where

ℑ1(t) := −∫_{t−h2}^{t−τ(t)} (h2 − t + s)ẋᵀ(s)R2ẋ(s)ds
ℑ2(t) := −∫_{t−τ(t)}^{t} (τ(t) − t + s)ẋᵀ(s)R2ẋ(s)ds
ℑ3(t) := −∫_{t−τ(t)}^{t} (h2 − τ(t))ẋᵀ(s)R2ẋ(s)ds.

Then ℑ1(t) and ℑ2(t) are bounded using the so-called basic inequality while ℑ3(t) is overly bounded by zero, which inevitably leads to a conservative upper bound of the integral term ℑ(t). The same case occurs in Zhang et al. (2013) (Theorem 1 therein). However, such a case does not happen in Lemma 4.

Lemma 5. Let τ(t) be a continuous function satisfying 0 ≤ h1 ≤ τ(t) ≤ h2. For any n × n real matrix R3 > 0 and a vector ẋ : [−h2, 0] → Rⁿ such that the integration concerned below is well defined, the following inequality holds for any n × n real matrix S2 satisfying [R3 S2; S2ᵀ R3] ≥ 0:

−∫_{t−h2}^{t−h1} (h2 − t + s)²ẋᵀ(s)R3ẋ(s)ds ≤ −(h2 − τ(t))[φ12ᵀR3φ12 + 2φ22ᵀ(R3 − S2)φ23]   (18)

where φ12, φ22 and φ23 are defined in (13).

Proof. It is true that

−∫_{t−h2}^{t−h1} (h2 − t + s)²ẋᵀ(s)R3ẋ(s)ds = η1(t) + η2(t)   (19)

where

η1(t) := −∫_{t−h2}^{t−τ(t)} (h2 − t + s)²ẋᵀ(s)R3ẋ(s)ds
η2(t) := −∫_{t−τ(t)}^{t−h1} (h2 − t + s)²ẋᵀ(s)R3ẋ(s)ds.

Set w(s) = (h2 − t + s)ẋ(s). We consider three cases.

Case 1: h1 < τ(t) < h2. Apply Jensen's inequality to obtain

η1(t) ≤ −(1/(h2 − τ(t))) (∫_{t−h2}^{t−τ(t)} w(s)ds)ᵀ R3 (∫_{t−h2}^{t−τ(t)} w(s)ds)
η2(t) ≤ −(1/(τ(t) − h1)) (∫_{t−τ(t)}^{t−h1} w(s)ds)ᵀ R3 (∫_{t−τ(t)}^{t−h1} w(s)ds).

Notice that

∫_{t−h2}^{t−τ(t)} w(s)ds = (h2 − τ(t))φ12
∫_{t−τ(t)}^{t−h1} w(s)ds = (h2 − τ(t))φ22 + (τ(t) − h1)φ23.

Thus

η1(t) ≤ −(h2 − τ(t))φ12ᵀR3φ12.   (20)

As for η2(t), one has

η2(t) ≤ −((h2 − τ(t))²/(τ(t) − h1)) φ22ᵀR3φ22 − (τ(t) − h1)φ23ᵀR3φ23 − 2(h2 − τ(t))φ22ᵀR3φ23.   (21)

An observation of (21) is that

−((h2 − τ(t))²/(τ(t) − h1)) φ22ᵀR3φ22 − (τ(t) − h1)φ23ᵀR3φ23 = −(h2 − τ(t))[κ0φ22ᵀR3φ22 + κ0⁻¹φ23ᵀR3φ23]   (22)

where κ0 := (h2 − τ(t))/(τ(t) − h1). Apply Lemma 1 to obtain

κ0φ22ᵀR3φ22 + κ0⁻¹φ23ᵀR3φ23 ≥ −2φ22ᵀS2φ23   (23)

which leads to

η2(t) ≤ 2(h2 − τ(t))φ22ᵀ(S2 − R3)φ23.   (24)

Substituting (20) and (24) into (19) yields (18).

Case 2: h1 = τ(t) < h2. In this case, φ22 = 0. Thus, the inequality (18) reduces to

−∫_{t−h2}^{t−h1} (h2 − t + s)²ẋᵀ(s)R3ẋ(s)ds ≤ −(h2 − h1)φ12ᵀR3φ12.   (25)

On the other hand, similar to the analysis in Case 1, we have

−∫_{t−h2}^{t−h1} (h2 − t + s)²ẋᵀ(s)R3ẋ(s)ds ≤ −(h2 − h1)φ12ᵀR3φ12

which gives (25).

Case 3: h1 < τ(t) = h2. In this case, the inequality (18) becomes

−∫_{t−h2}^{t−h1} (h2 − t + s)²ẋᵀ(s)R3ẋ(s)ds ≤ 0

which is true as R3 > 0. The proof is thus completed. □

Remark 2. Lemma 5 provides a new upper bound of the integral term ϖ(t) := −∫_{t−h2}^{t−h1} (h2 − t + s)²ẋᵀ(s)R3ẋ(s)ds by employing Jensen's inequality and Lemma 1. The key point is that an observation of (21) gives (22), which motivates us to apply Lemma 1 to get (23). However, in Kim (2011), this integral term ϖ(t) with h1 = 0 is rewritten as ϖ(t) = −∫_{t−h2}^{t−τ(t)} ϖ1(s)ds − ∫_{t−τ(t)}^{t} [ϖ2(s) + ϖ3(s)]ds, where ϖ1(s) := (h2 − t + s)²X(s), ϖ2(s) := (τ(t) − t + s)²X(s) and ϖ3(s) := [(h2 − τ(t))² + 2(h2 − τ(t))(τ(t) − t + s)]X(s) with X(s) = ẋᵀ(s)R3ẋ(s). Then ϖ1(s) and ϖ2(s) are bounded using the so-called basic inequality while ϖ3(s) is overly bounded by zero. Hence the obtained upper bound of ϖ(t) is certainly conservative (see Theorem 1 in Kim (2011)). The same case can also be seen in Theorem 1 in Zhang et al. (2013).

The following lemmas are also necessary for the proof of the stability criteria presented in this paper.

Lemma 6. For a given scalar h1 ≥ 0, any n × n real matrices Y1 > 0 and Y2 > 0 and a vector ẋ : [−h1, 0] → Rⁿ such that the integration concerned below is well defined, the following inequalities hold for any vector-valued function π1(t) : [0, ∞) → R^k and real matrices M1 ∈ R^{k×k} and N1 ∈ R^{k×n} satisfying [M1 N1; N1ᵀ Y1] ≥ 0:

I1 := ∫_{t−h1}^{t} (h1 − t + s)ẋᵀ(s)Y1ẋ(s)ds ≥ −(h1²/2)π1ᵀ(t)M1π1(t) − 2h1π1ᵀ(t)N1[x(t) − ν1(t)]   (26)

I2 := ∫_{t−h1}^{t} (h1 − t + s)²ẋᵀ(s)Y2ẋ(s)ds ≥ h1[x(t) − ν1(t)]ᵀY2[x(t) − ν1(t)]   (27)

where ν1(t) is defined in (7).

Proof. Denote ω(s) = (h1 − t + s)ẋ(s). Then

I1 = ∫_{t−h1}^{t} (1/(h1 − t + s)) ωᵀ(s)Y1ω(s)ds   (28)

I2 = ∫_{t−h1}^{t} ωᵀ(s)Y2ω(s)ds.   (29)


Apply Lemma 1 to (28) to get

I1 ≥ −∫_{t−h1}^{t} [(h1 − t + s)π1ᵀ(t)M1π1(t) + 2π1ᵀ(t)N1ω(s)]ds = −(h1²/2)π1ᵀ(t)M1π1(t) − 2h1π1ᵀ(t)N1[x(t) − ν1(t)]

which gives (26). Apply Jensen's inequality to (29) to obtain

I2 ≥ (1/h1) (∫_{t−h1}^{t} ω(s)ds)ᵀ Y2 (∫_{t−h1}^{t} ω(s)ds) = h1[x(t) − ν1(t)]ᵀY2[x(t) − ν1(t)]

which gives (27). □

2.2. A matrix-based quadratic convex approach

For real symmetric matrices X0, X1 and X2 and a scalar τ ∈ [τ1, τ2] with 0 ≤ τ1 ≤ τ2, the matrix convex combination property that

X0 + (τ − τ1)X1 + (τ2 − τ)X2 < 0, ∀τ ∈ [τ1, τ2] ⟺ X0 + (τ2 − τ1)Xi < 0 (i = 1, 2)   (30)

has been widely used in the stability analysis of linear systems with interval time-varying delays. The following lemma is called a matrix-based quadratic convex property, which covers the one in (30).

Lemma 7. Let X0, X1 and X2 be m × m real symmetric matrices and let a scalar continuous function τ satisfy τ1 ≤ τ ≤ τ2, where τ1 and τ2 are constants satisfying 0 ≤ τ1 ≤ τ2. If X0 ≥ 0, then

τ²X0 + τX1 + X2 < 0 (≤ 0), ∀τ ∈ [τ1, τ2] ⟺ τi²X0 + τiX1 + X2 < 0 (≤ 0) (i = 1, 2).   (31)

Proof. Suppose that χ ≠ 0 is an arbitrary vector in R^m. Let f(τ) := χᵀ(τ²X0 + τX1 + X2)χ. Then f(τ) is a convex function on τ ∈ [τ1, τ2] because d²f(τ)/dτ² = 2χᵀX0χ ≥ 0 for X0 ≥ 0. By the convex function property, f(τ) < 0 (≤ 0), ∀τ ∈ [τ1, τ2] is equivalent to f(τi) < 0 (≤ 0) (i = 1, 2). By the arbitrariness of the vector χ, we can conclude that (31) is true. This completes the proof. □

It is worth pointing out that (31) covers (30). In fact, if taking X0 = 0, X1 = X2 − X1 and X2 = X0 + τ2X2 − τ1X1, then (31) reduces to (30). In the following, it will be shown that combining the established integral inequalities with the matrix-based quadratic convex approach can yield some less conservative stability criteria for the NN (1).
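Lemma 7 reduces an infinite family of matrix inequalities over τ ∈ [τ1, τ2] to two endpoint checks. The sketch below (illustrative; the random matrices are assumptions) verifies this numerically: when X0 ⪰ 0 and both endpoint matrices are negative definite, the maximum eigenvalue of τ²X0 + τX1 + X2 stays negative on a fine grid of τ:

```python
import numpy as np

rng = np.random.default_rng(1)
m, t1, t2 = 4, 0.5, 2.0

def sym(B):
    return (B + B.T) / 2

# Random instance with X0 >= 0 by construction.
C = rng.standard_normal((m, m))
X0 = C @ C.T
X1 = sym(rng.standard_normal((m, m)))
# Choose X2 so that both endpoint matrices are (very likely) negative definite.
X2 = -t2 ** 2 * X0 - sym(rng.standard_normal((m, m))) - 50 * np.eye(m)

def max_eig(t):
    return np.linalg.eigvalsh(t ** 2 * X0 + t * X1 + X2).max()

if max_eig(t1) < 0 and max_eig(t2) < 0:      # the two endpoint LMIs of (31)
    worst = max(max_eig(t) for t in np.linspace(t1, t2, 400))
    print(f"endpoints negative definite; worst grid eigenvalue = {worst:.4f} < 0")
```

The convexity argument in the proof is exactly why checking the two endpoints suffices, which is what makes the stability conditions below finite-dimensional LMIs.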

3. Some novel stability criteria

In this section, we will formulate some novel stability criteria for the NN described by (1) based on the novel integral inequalities and the matrix-based quadratic convex approach presented in the previous section. We will consider two cases of the time-varying delay τ(t) as follows.

Case 1. τ(t) is a continuous and differentiable function satisfying (2) and

−∞ < µ1 ≤ τ̇(t) ≤ µ2 < ∞   (32)

where µ1 and µ2 are two constants.

Case 2. τ(t) is a continuous function satisfying (2).

Remark 3. Under Case 1, more information on τ(t) and τ̇(t) is known. Thus the obtained stability criteria for Case 1 are certainly less conservative. Under Case 2, the time-varying delay τ(t) may be non-differentiable; if it is differentiable, its derivative is allowed to be unbounded. For example, τ(t) = sin²(t²) satisfies (2) but its derivative τ̇(t) = 2t sin(2t²) is unbounded.

For the global asymptotic stability of (1), we need the following assumption.

Assumption 1. The neuron activation functions fi(·) (i = 1, 2, ..., n) in (1) satisfy fi(0) = 0 and, ∀s1, s2 ∈ R with s1 ≠ s2,

l⁻i ≤ (fi(s1) − fi(s2))/(s1 − s2) ≤ l⁺i,  i = 1, 2, ..., n   (33)

where l⁺i, l⁻i (i = 1, 2, ..., n) are known real scalars. Throughout this paper, we denote L⁺ := diag{l⁺1, l⁺2, ..., l⁺n} and L⁻ := diag{l⁻1, l⁻2, ..., l⁻n}.

3.1. Stability criteria for Case 1

For simplicity of presentation, we set

η(t) := col{x(t), f(W2x(t))}   (34)

and E1 := [I 0], E2 := [0 I]. Then x(t) = E1η(t) and f(W2x(t)) = E2η(t). In this situation, the system (1) can be rewritten as

ẋ(t) = (−AE1 + W0E2)η(t) + W1E2η(t − τ(t)),  x(θ) = φ(θ), −h2 ≤ θ ≤ 0.   (35)

In the following, we present two stability criteria for the NN (1) under Case 1. One is for h1 > 0 and the other one for h1 = 0. For h1 > 0, we choose a Lyapunov–Krasovskii functional (LKF) candidate as

V(t, xt, ẋt) = V1(t) + V2(t) + V3(t) + V4(t) + V5(t)

where

V1(t) := ϑᵀ(t)Pϑ(t) + ∫_{t−h1}^{t} ẋᵀ(s)Uẋ(s)ds

V2(t) := 2Σ_{i=1}^{n} ρi ∫_0^{W2i x(t)} [fi(s) − l⁻i s]ds + 2Σ_{i=1}^{n} σi ∫_0^{W2i x(t)} [l⁺i s − fi(s)]ds

V3(t) := ∫_{t−h1}^{t} {[xᵀ(t) xᵀ(s)]Q1[xᵀ(t) xᵀ(s)]ᵀ + fᵀ(W2x(s))X1f(W2x(s))}ds + ∫_{t−τ(t)}^{t−h1} {[xᵀ(t) xᵀ(s)]Q2[xᵀ(t) xᵀ(s)]ᵀ + fᵀ(W2x(s))X2f(W2x(s))}ds + ∫_{t−h2}^{t−τ(t)} {[xᵀ(t) xᵀ(s)]Q3[xᵀ(t) xᵀ(s)]ᵀ + fᵀ(W2x(s))X3f(W2x(s))}ds

V4(t) := ∫_{t−h1}^{t} [h1(h1 − t + s)ẋᵀ(s)Y1ẋ(s) + Σ_{j=2}^{3} (h1 − t + s)^j ẋᵀ(s)Yjẋ(s)]ds

V5(t) := ∫_{t−h2}^{t−h1} [(h2 − h1)(h2 − t + s)ẋᵀ(s)R1ẋ(s) + Σ_{j=2}^{3} (h2 − t + s)^j ẋᵀ(s)Rjẋ(s)]ds   (36)


where U > 0, Qj > 0, Xj > 0, Yj > 0, Rj > 0 (j = 1, 2, 3) and scalars ρi ≥ 0, σi ≥ 0 (i = 1, 2, ..., n) are to be determined; and

ϑ(t) := col{x(t), x(t − h1), ∫_{t−h1}^{t} x(s)ds, ∫_{t−τ(t)}^{t−h1} x(s)ds, ∫_{t−h2}^{t−τ(t)} x(s)ds}.   (37)

Remark 4. Compared with the LKF chosen in Zhang et al. (2013), the LKF (36) has some advantages as follows: (1) information on the lower bound h1 of the time-varying delay τ(t) is taken into account; (2) the augmented vector ϑ(t) includes not only the delayed state x(t − h1) but also the distributed delay terms; (3) the Lyapunov matrix P does not need to be positive definite; (4) the integral intervals of the terms in V3(t) do not overlap each other.

It is clear that V(t, xt, ẋt) in (36) is a quadratic LKF closely depending on derivatives. Employing this LKF, we need the following stability theorem (Zhang & Han, 2013).

Theorem 1 (Zhang & Han, 2013). The system (1) is asymptotically stable if there exists a quadratic Lyapunov–Krasovskii functional V(t, φ, φ̇) such that for some εi > 0 (i = 1, 2, 3)

ε1∥φ(0)∥² ≤ V(t, φ, φ̇) ≤ ε2∥φ∥²_W   (38)

V̇(t, φ, φ̇) ≤ −ε3∥φ(0)∥²   (39)

where ∥φ∥²_W := ∥φ(0)∥² + ∫_{−h2}^{0} ∥φ(s)∥²ds + ∫_{−h2}^{0} ∥φ̇(θ)∥²dθ with the vector norm ∥·∥ denoting the Euclidean norm.

In the following, for simplicity of presentation, we set

ζ(t) := col{x(t), x(t − h1), ν1(t), ν2(t), ν3(t)}   (40)

where νj(t) (j = 1, 2, 3) are defined in (7). Denote by ēi (i = 1, ..., 5) the block-row vectors of the 5n × 5n identity matrix such that x(t) = ē1ζ(t), x(t − h1) = ē2ζ(t) and so on. Then, it is clear that

ϑ(t) = [D1 + τ(t)D2]ζ(t)   (41)

where D1 := col{ē1, ē2, h1ē3, −h1ē4, h2ē5} and D2 := col{0, 0, 0, ē4, −ē5}. Thus

ϑᵀ(t)Pϑ(t) = ζᵀ(t)[τ²(t)D2ᵀPD2 + 2τ(t)D1ᵀPD2]ζ(t) + ζᵀ(t)D1ᵀPD1ζ(t).

Then we have the following result.

Lemma 8. For the chosen LKF in (36) and prescribed scalars h2 ≥ h1 > 0, there exist scalars ε1 > 0 and ε2 > 0 such that

ε1∥x∥² ≤ V(t, xt, ẋt) ≤ ε2∥xt∥²_W   (42)

if there exist real matrices M1 and N1 with appropriate dimensions such that

[M1 N1; N1ᵀ Y1] ≥ 0,  ē1Pē1ᵀ > 0   (43)

Ω0 ≥ 0,  h1²Ω0 + h1Ω1 + Ω2 ≥ 0,  h2²Ω0 + h2Ω1 + Ω2 ≥ 0   (44)

where

Ω0 := D2ᵀPD2   (45)
Ω1 := D1ᵀPD2 + D2ᵀPD1 + Π4 − Π5   (46)
Ω2 := Ω3 + Ω4 + Π3 − h1Π4 + h2Π5 + D1ᵀPD1 − ē1ᵀē1Pē1ᵀē1   (47)

with

Ω3 := (Π1ᵀUΠ1 + 3Π2ᵀUΠ2)/h1   (48)
Ω4 := h1(ē1 − ē3)ᵀY2(ē1 − ē3) − (h1³/2)Π6ᵀM1Π6 − h1²(ē1 − ē3)ᵀN1ᵀΠ6 − h1²Π6ᵀN1(ē1 − ē3)   (49)
Π1 := ē1 − ē2,  Π2 := ē1 + ē2 − 2ē3   (50)
Π3 := h1[ē1ᵀ ē3ᵀ]Q1[ē1ᵀ ē3ᵀ]ᵀ,  Π4 := [ē1ᵀ ē4ᵀ]Q2[ē1ᵀ ē4ᵀ]ᵀ   (51)
Π5 := [ē1ᵀ ē5ᵀ]Q3[ē1ᵀ ē5ᵀ]ᵀ,  Π6 := col{ē1, ē2, ē3}.   (52)

Proof. The proof of the first '≤' in (42). Apply Lemma 2 to get

∫_{t−h1}^{t} ẋᵀ(s)Uẋ(s)ds ≥ ζᵀ(t)Ω3ζ(t)   (53)

where Ω3 is defined in (48). Applying Jensen's inequality to V3(t) yields

∫_{t−h1}^{t} [xᵀ(t) xᵀ(s)]Q1[xᵀ(t) xᵀ(s)]ᵀds ≥ ζᵀ(t)Π3ζ(t)   (54)

∫_{t−τ(t)}^{t−h1} [xᵀ(t) xᵀ(s)]Q2[xᵀ(t) xᵀ(s)]ᵀds ≥ [τ(t) − h1]ζᵀ(t)Π4ζ(t)   (55)

∫_{t−h2}^{t−τ(t)} [xᵀ(t) xᵀ(s)]Q3[xᵀ(t) xᵀ(s)]ᵀds ≥ [h2 − τ(t)]ζᵀ(t)Π5ζ(t)   (56)

where Πi (i = 3, 4, 5) are defined in (50)–(52), from which it follows that

V3(t) ≥ ζᵀ(t)[Π3 + (τ(t) − h1)Π4 + (h2 − τ(t))Π5]ζ(t).   (57)

Apply Lemma 6 with π1(t) = Π6ζ(t) to obtain

h1∫_{t−h1}^{t} (h1 − t + s)ẋᵀ(s)Y1ẋ(s)ds ≥ −(h1²/2)ζᵀ(t)Π6ᵀ[h1M1Π6 + 4N1(ē1 − ē3)]ζ(t)   (58)

∫_{t−h1}^{t} (h1 − t + s)²ẋᵀ(s)Y2ẋ(s)ds ≥ h1ζᵀ(t)(ē1 − ē3)ᵀY2(ē1 − ē3)ζ(t)   (59)

which lead to

V4(t) ≥ ζᵀ(t)Ω4ζ(t)   (60)

where Ω4 is defined in (49). Based on (41), (53), (57) and (60), one has

V(t, xt, ẋt) ≥ xᵀ(t)ē1Pē1ᵀx(t) + ζᵀ(t)Ω(τ(t))ζ(t)   (61)

where

Ω(τ(t)) := τ²(t)Ω0 + τ(t)Ω1 + Ω2

and Ω0, Ω1 and Ω2 are defined in (45)–(47), respectively. By Lemma 7, if the LMIs in (44) are satisfied, then Ω(τ(t)) ≥ 0. Thus, one can conclude from (61) that there exists ε1 := λmin(ē1Pē1ᵀ) > 0 such that ε1∥x∥² ≤ V(t, xt, ẋt) if the LMIs in (43) and (44) hold.

The proof of the second '≤' in (42). Notice from (43) that ē1Pē1ᵀ > 0. Then λmax(P) > 0. Thus

V1(t) ≤ max{λmax(P), λmax(U)}∥xt∥²_W.   (62)


For V2(t), observe from (33) that 0 ≤ fi(s) − l⁻i s ≤ (l⁺i − l⁻i)s and 0 ≤ l⁺i s − fi(s) ≤ (l⁺i − l⁻i)s, which lead to

V2(t) ≤ Σ_{i=1}^{n} (ρi + σi)(l⁺i − l⁻i)(W2i x(t))² ≤ max_{i=1,...,n} (ρi + σi)(l⁺i − l⁻i) λmax(W2ᵀW2)∥x∥².   (63)

It is clear that V3(t) ≤ c1∥xt∥²_W for some scalar c1 > 0. Bounding V4(t) and V5(t) in a similar manner, one concludes that there exists a scalar ε2 > 0 such that V(t, xt, ẋt) ≤ ε2∥xt∥²_W, which completes the proof. □

In the following, we state and establish a novel stability criterion for the NN (1) with (2), (32) and (33). In doing so, we need some notations for simplicity of presentation. Let

ξ(t) := col{η(t), η(t − τ(t)), η(t − h1), η(t − h2), ν1(t), ν2(t), ν3(t), ẋ(t − h1)}   (66)

where νj(t) (j = 1, 2, 3) are defined in (7). Set e1 := [I 0 0 0 0 0 0 0], e2 := [0 I 0 0 0 0 0 0], ..., e8 := [0 0 0 0 0 0 0 I] such that η(t) = e1ξ(t), η(t − τ(t)) = e2ξ(t), ..., ẋ(t − h1) = e8ξ(t). In this situation, one has

ẋ(t) = C0ξ(t),  C0 := (−AE1 + W0E2)e1 + W1E2e2.

Proposition 1. Under Case 1, for some prescribed scalars h1, h2 (h2 ≥ h1 > 0), µ1 and µ2 (µ2 ≥ µ1), the origin of the NN (1) with (2), (32) and (33) is globally asymptotically stable if there exist real matrices U > 0, Qj > 0, Xj > 0, Yj > 0, Rj > 0 (j = 1, 2, 3), real diagonal matrices Λ1 ≥ 0, Λ2 ≥ 0, Ti ≥ 0 (i = 1, 2, 3, 4), Tsk ≥ 0 (s = 1, 2, 3, k = 2, 3, 4, k > s) and real matrices M1, M2, N1, N2, S1, S2, Z1, Z2, F1 and F2 of appropriate dimensions such that (43), (44) and

Z1 > Z2,  [Z1 F1; F1ᵀ R2] > 0,  [Z2 F2; F2ᵀ R2] > 0,  [M2 N2; N2ᵀ Y2] > 0   (67)

[R̃1 S1; S1ᵀ R̃1] > 0,  [R3 S2; S2ᵀ R3] > 0   (68)

Υ(τ(t), τ̇(t))|_{τ(t)=h1, τ̇(t)=µ1} < 0,  Υ(τ(t), τ̇(t))|_{τ(t)=h1, τ̇(t)=µ2} < 0,
Υ(τ(t), τ̇(t))|_{τ(t)=h2, τ̇(t)=µ1} < 0,  Υ(τ(t), τ̇(t))|_{τ(t)=h2, τ̇(t)=µ2} < 0   (69)

where R̃1 := diag{R1, 3R1}; and

Υ(τ(t), τ̇(t)) := Υ1(τ(t), τ̇(t)) + Υ2(τ(t), τ̇(t)) + Υ3 + Υ4(τ(t)) + Υ5   (70)

Υ1(τ(t), τ̇(t)) := [C1 + τ(t)C2]ᵀP[C3 + τ̇(t)C4] + [C3 + τ̇(t)C4]ᵀP[C1 + τ(t)C2] + C0ᵀUC0 − e8ᵀUe8   (71)

Υ2(τ(t), τ̇(t)) := (h2 − τ(t))Υ21 + (τ(t) − h1)Υ22 + (1 − τ̇(t))Υ23 + Υ24   (72)

Υ3 := C0ᵀ(h1²Y1 + h1²Y2 + h1³Y3)C0 − ψ3ᵀỸ1ψ3 + h1²M2 + 2h1N2(E1e1 − e5) + 2h1(E1e1 − e5)ᵀN2ᵀ − 3h1(E1e1 − e5)ᵀY3(E1e1 − e5)   (73)

Υ4(τ(t)) := (h2 − τ(t))²(Z1 − Z2) + (h2 − τ(t))Υ41 + (τ(t) − h1)Υ42 + Υ43   (74)

Υ5 := C5 + C6 + (C5 + C6)ᵀ + Σ_{j=1}^{4} (ejᵀC7ᵀTjC8ej + ejᵀC8ᵀTjC7ej) + Σ_{i=1}^{3} Σ_{j=2, j>i}^{4} [(ei − ej)ᵀC7ᵀTijC8(ei − ej) + (ei − ej)ᵀC8ᵀTijC7(ei − ej)]   (75)

with Ỹ1 := diag{Y1, 3Y1} and

Υ21 := [e1ᵀE1ᵀ e7ᵀ]Q3E1ᵀC0 + C0ᵀE1Q3[e1ᵀE1ᵀ e7ᵀ]ᵀ
Υ22 := [e1ᵀE1ᵀ e6ᵀ]Q2E1ᵀC0 + C0ᵀE1Q2[e1ᵀE1ᵀ e6ᵀ]ᵀ
Υ23 := [e1ᵀE1ᵀ e2ᵀE1ᵀ](Q3 − Q2)[e1ᵀE1ᵀ e2ᵀE1ᵀ]ᵀ + (E2e2)ᵀ(X3 − X2)E2e2
Υ24 := [e1ᵀE1ᵀ e1ᵀE1ᵀ]Q1[e1ᵀE1ᵀ e1ᵀE1ᵀ]ᵀ + [e1ᵀE1ᵀ e3ᵀE1ᵀ](Q2 − Q1)[e1ᵀE1ᵀ e3ᵀE1ᵀ]ᵀ − [e1ᵀE1ᵀ e4ᵀE1ᵀ]Q3[e1ᵀE1ᵀ e4ᵀE1ᵀ]ᵀ + (E2e1)ᵀX1E2e1 + (E2e3)ᵀ(X2 − X1)E2e3 − (E2e4)ᵀX3E2e4 + h1[e1ᵀE1ᵀ e5ᵀ]Q1E1ᵀC0 + h1C0ᵀE1Q1[e1ᵀE1ᵀ e5ᵀ]ᵀ
Υ41 := 2F1(E1e2 − e7) + 2(E1e2 − e7)ᵀF1ᵀ + 2F2E1(e3 − e2) + 2(e3 − e2)ᵀE1ᵀF2ᵀ − 3(E1e2 − e7)ᵀR3(E1e2 − e7) − 3(E1e3 − E1e2)ᵀ(R3 − S2)(E1e3 − e6) − 3(E1e3 − e6)ᵀ(R3 − S2)(E1e3 − E1e2)
Υ42 := 2F2(E1e3 − e6) + 2(E1e3 − e6)ᵀF2ᵀ
Υ43 := e8ᵀ(h21²R1 + h21²R2 + h21³R3)e8 + h21²Z2 + ψ1ᵀS1ψ2 + ψ2ᵀS1ᵀψ1 − ψ1ᵀR̃1ψ1 − ψ2ᵀR̃1ψ2

with h21 = h2 − h1 and

C1 := col{E1e1, E1e3, h1e5, −h1e6, h2e7}
C2 := col{0, 0, 0, e6, −e7}
C3 := col{C0, e8, E1(e1 − e3), E1(e3 − e2), E1(e2 − e4)}
C4 := col{0, 0, 0, E1e2, −E1e2}
C5 := e1ᵀE2ᵀ(Λ1 − Λ2)W2C0
C6 := e1ᵀE1ᵀW2ᵀ(L⁺Λ2 − L⁻Λ1)W2C0
C7 := E2 − L⁻W2E1
C8 := L⁺W2E1 − E2
ψ1 := col{E1(e2 − e4), E1(e2 + e4) − 2e7}
ψ2 := col{E1(e3 − e2), E1(e3 + e2) − 2e6}
ψ3 := col{E1(e1 − e3), E1(e1 + e3) − 2e5}.   (76)
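Once h1, h2, µ1 and µ2 are fixed, conditions such as (67)–(69) are ordinary linear matrix inequalities in the decision variables, so any semidefinite-programming tool can check them. The following sketch (illustrative: it solves a simple Lyapunov LMI, not the full conditions of Proposition 1, whose matrices are lengthy to assemble) shows the general feasibility-check pattern with cvxpy:

```python
import cvxpy as cp
import numpy as np

# Toy pattern: find P = P^T > 0 with A^T P + P A < 0 (A assumed Hurwitz).
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                                   # strictness margin for > / <
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve(solver=cp.SCS)
print("feasible:", prob.status == cp.OPTIMAL)
```

For Proposition 1, one would build Υ at the four vertices (τ, τ̇) ∈ {h1, h2} × {µ1, µ2} from the block matrices defined in (70)–(76) and add them as `<< 0` constraints in exactly the same way.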

Proof. Taking the time derivative of the chosen LKF (36) along the trajectory of (1) yields

V̇(t, xt, ẋt) := V̇1(t) + V̇2(t) + V̇3(t) + V̇4(t) + V̇5(t)   (77)

where

V̇1(t) = ξᵀ(t)Υ1(τ(t), τ̇(t))ξ(t)   (78)

V̇2(t) = ξᵀ(t)[C5 + C6 + (C5 + C6)ᵀ]ξ(t)   (79)

V̇3(t) = ξᵀ(t)Υ2(τ(t), τ̇(t))ξ(t)   (80)

V̇4(t) = ξᵀ(t)C0ᵀ(h1²Y1 + h1²Y2 + h1³Y3)C0ξ(t) − h1∫_{t−h1}^{t} ẋᵀ(s)Y1ẋ(s)ds   (81)
  − ∫_{t−h1}^{t} 2(h1 − t + s)ẋᵀ(s)Y2ẋ(s)ds − ∫_{t−h1}^{t} 3(h1 − t + s)²ẋᵀ(s)Y3ẋ(s)ds   (82)

V̇5(t) = ξᵀ(t)e8ᵀ(h21²R1 + h21²R2 + h21³R3)e8ξ(t) − (h2 − h1)∫_{t−h2}^{t−h1} ẋᵀ(s)R1ẋ(s)ds   (83)
  − ∫_{t−h2}^{t−h1} 2(h2 − t + s)ẋᵀ(s)R2ẋ(s)ds − ∫_{t−h2}^{t−h1} 3(h2 − t + s)²ẋᵀ(s)R3ẋ(s)ds   (84)

where Υ1(τ(t), τ̇(t)) and Υ2(τ(t), τ̇(t)) are defined in (71) and (72), respectively; and C5 and C6 are defined in (76). Apply Lemma 2 to get

−h1∫_{t−h1}^{t} ẋᵀ(s)Y1ẋ(s)ds ≤ −ξᵀ(t)ψ3ᵀỸ1ψ3ξ(t)   (85)

where Ỹ1 := diag{Y1, 3Y1} and ψ3 is defined in (76). From Lemma 6, one obtains

−∫_{t−h1}^{t} 2(h1 − t + s)ẋᵀ(s)Y2ẋ(s)ds ≤ ξᵀ(t)[h1²M2 + 4h1N2(E1e1 − e5)]ξ(t)   (86)

−∫_{t−h1}^{t} 3(h1 − t + s)²ẋᵀ(s)Y3ẋ(s)ds ≤ −3h1ξᵀ(t)[E1e1 − e5]ᵀY3[E1e1 − e5]ξ(t)   (87)

from which it follows that

V̇4(t) ≤ ξᵀ(t)Υ3ξ(t)   (88)

where Υ3 is defined in (73). By Lemmas 3–5, it is true that

V̇5(t) ≤ ξᵀ(t)Υ4(τ(t))ξ(t)   (89)

where Υ4(τ(t)) is defined in (74). On the other hand, from (33), the nonlinear function fi(xi) satisfies for xi ≠ 0

l⁻i ≤ fi(xi)/xi ≤ l⁺i,  i = 1, 2, ..., n.   (90)

Thus, for any ti > 0 (i = 1, 2, ..., n), it follows that

2ti[fi(W2i x(θ)) − l⁻i W2i x(θ)][l⁺i W2i x(θ) − fi(W2i x(θ))] ≥ 0

which leads to

2[fᵀ(W2x(θ)) − xᵀ(θ)W2ᵀL⁻]T[L⁺W2x(θ) − f(W2x(θ))] ≥ 0

where T = diag{t1, t2, ..., tn}. Let θ be t, t − τ(t), t − h1 and t − h2, and replace T with T1, T2, T3 and T4, respectively; then one gets for j = 1, 2, 3, 4

2ξᵀ(t)ejᵀC7ᵀTjC8ejξ(t) ≥ 0

thus

0 ≤ 2ξᵀ(t) Σ_{j=1}^{4} ejᵀC7ᵀTjC8ej ξ(t).   (91)

Another observation from (33) is that for i = 1, 2, ..., n

l⁻i ≤ (fi(W2i x(θ1)) − fi(W2i x(θ2)))/(W2i[x(θ1) − x(θ2)]) ≤ l⁺i.

Then, for any ti > 0 (i = 1, 2, ..., n),

2ti{ℵi − l⁻i W2i[x(θ1) − x(θ2)]}{l⁺i W2i[x(θ1) − x(θ2)] − ℵi} ≥ 0

where ℵi := fi(W2i x(θ1)) − fi(W2i x(θ2)), which leads to

2{ℵ − L⁻W2[x(θ1) − x(θ2)]}ᵀT{L⁺W2[x(θ1) − x(θ2)] − ℵ} ≥ 0

where ℵ = col{ℵ1, ℵ2, ..., ℵn}. Let θ1 and θ2 take values in {t, t − h1, t − h2, t − τ(t)} and replace T with Tij. Then one obtains for i = 1, 2, 3, j = 2, 3, 4 with j > i

2ξᵀ(t)[C7(ei − ej)]ᵀTijC8(ei − ej)ξ(t) ≥ 0

thus

0 ≤ 2ξᵀ(t) Σ_{i=1}^{3} Σ_{j=2, j>i}^{4} (ei − ej)ᵀC7ᵀTijC8(ei − ej)ξ(t).   (92)

From (78)–(80), (88), (89), (91) and (92), one deduces

V̇(t, xt, ẋt) ≤ ξᵀ(t)Υ(τ(t), τ̇(t))ξ(t)   (93)

where Υ(τ(t), τ̇(t)) is defined in (70). It is clear that
• Υ(τ(t), τ̇(t)) is a quadratic convex combination of matrices on τ(t) ∈ [h1, h2]; and
• Υ(τ(t), τ̇(t)) is also a convex combination of matrices on τ̇(t) ∈ [µ1, µ2].
Consequently, by Lemma 7, if the LMIs in (69) are true, then Υ(τ(t), τ̇(t)) < 0. Therefore, there exists a scalar ε3 > 0 such that V̇(t, xt, ẋt) ≤ −ε3xᵀ(t)x(t) < 0 for x(t) ≠ 0. By Theorem 1, the system (1) with (2) and (33) is globally asymptotically stable. □

Remark 6. Proposition 1 presents a novel stability criterion for the NN (1) by using the matrix-based quadratic convex approach in Lemma 7. It should be pointed out that there is a mistake in both Kim (2011) and Zhang et al. (2013) when the quadratic convex approach is used to prove the system stability. In Kim (2011), it


+ f T (W2 x(s))X2 f (W2 x(s)) ds  t  h2 (h2 − t + s)˙xT (s)R1 x˙ (s) +

is claimed that ‘‘the scalar valued function ξtT [Ψ0 + d(t )Ψ1 + Υd ]ξt is a quadratic function on the scalar d(t )’’ in the proof of Theorem 1 therein. Unfortunately, ξt is a vector-valued function implicitly dependent on d(t ). Thus, this claim is incorrect. The same claim is also made in Zhang et al. (2013).

t −h2

 3  j T + (h2 − t + s) x˙ (s)Rj x˙ (s) ds.

Remark 7. Compared with the results in Zhang et al. (2013), where the quadratic convex approach is employed, Proposition 1 has two main advantages, which are given as follows: (a) As stated in Remarks 1 and 2, when estimating the integral  t −h terms ℓ1 (t ) := t −h 1 (h2 − t + s)˙xT (s)R2 x˙ (s)ds and ℓ1 (t ) := 2

 t −h1

(h2 − t + s)2 x˙ T (s)R3 x˙ (s)ds in (84), some useful terms t −h 2

are overly bounded by zero in Zhang et al. (2013). Nevertheless, Lemmas 4 and 5 provide some new upper bounds for the terms ℓ1 (t ) and ℓ2 (t ), respectively, where the over-bounding performed in Zhang et al. (2013) is avoided; and (b) The matrix-based quadratic convex approach is used to prove not only the negative definiteness of V˙ (t , xt , x˙ t ) but also the positive definiteness of V (t , xt , x˙ t ) in Proposition 1. As a result, the constraint P > 0 required in Zhang et al. (2013) is removed from Proposition 1. As expected, Proposition 1 is of less conservatism than the results in Zhang et al. (2013).

Proposition 2. Under Case 1 with h1 = 0, for some prescribed scalars h2 > 0, µ1 and µ2 (µ2 ≥ µ1 ), the origin of the NN (1) with (2) and (33) is globally asymptotically stable if there exist real matriQ

 t −τ (t )

 t −h 1

x˙ T (s)R1 x˙ (s)ds and t −τ (t ) x˙ T (s)R1 x˙ (s)ds; (c) The diagonal matrices Tij in (92) are introduced to ‘enhance’ the feasibility of the obtained LMIs. t −h 2

It may be expected that Proposition 1 is less conservative than those in He et al. (2007), Shao (2008a, 2008b), Zhang and Han (2011), Zhang et al. (2010), Zhu and Yang (2008), and Zuo et al. (2010). Notice that by replacing Ω3 in (48) with Ω3 = 0, Proposition 1 can also be used to test the stability of (1) in the case of h1 = 0. However, from the proof of Proposition 1, one can see clearly that a great number of redundant matrices are introduced for the case of h1 = 0. In the sequel, we present a simplified stability criterion for h1 = 0. In doing so, we choose the following LKF candidate as

 t



T

x(t )

 

t

x(t )



      x(s)ds  x(s)ds   P  t −τ (t )  ˆV (t , xt , x˙ t ) =  t −τ (t )   t −τ (t )   t −τ (t )     x(s)ds x(s)ds t −h2

+2

n 

P11

⋆ ⋆

ρi

+2

σi

W2i x(t )



[fi (s) − l− i s]ds [l+ i s − fi (s)]ds

0

i =1



t



+ t −τ (t )

[xT (t ) xT (s)]Q1 [xT (t ) xT (s)]T

 + f T (W2 x(s))X1 f (W2 x(s)) ds  t −τ (t )  T + [x (t ) xT (s)]Q2 [xT (t ) xT (s)]T t −h2



P13 P23 P33

P

with P11 > 0 and [ ⋆22 Z

dimensions such that Z1 ≥ Z2 , [ ⋆i R˜ 1



⋆ ⋆

S1 R˜ 1



S2 R3



≥ 0, ≥ 0,



h2 Q21





h2 Q11



Fi ] R2

P23 ] P33

≥ 0 of appropriate

≥ 0 (i = 1, 2) and

h2 (P13 + Q22 ) h22 P33 + h2 Q23



h2 (P12 + Q12 ) h22 P22 + h2 Q13



≥0

(95)

≥0

(96)

H (0, µ1 ) + h2 Ξ1 + (1 − µ1 )Ξ3 + Ξ4 < 0

(97)

H (0, µ2 ) + h2 Ξ1 + (1 − µ2 )Ξ3 + Ξ4 < 0

(98)

H (h2 , µ1 ) + h2 Ξ2 + (1 − µ1 )Ξ3 + Ξ4 < 0

(99)

H (h2 , µ2 ) + h2 Ξ2 + (1 − µ2 )Ξ3 + Ξ4 < 0

(100)

where R˜ 1 := diag{R1 , 3R1 }

H (τ (t ), τ˙ (t )) := H1T (τ (t ))P H2 (τ˙ (t )) + H2T (τ˙ (t ))P H1 (τ (t ))

Ξ1 := h2 (Z1 − Z2 ) + 2F1 (E1 ϱ2 − ϱ5 ) + 2(E1 ϱ2 − ϱ5 )T F1T − 3(E1 ϱ2 − ϱ5 )T R3 (E1 ϱ2 − ϱ5 ) + 2(ϱ1 − ϱ2 )T E1T F2T − 3(E1 ϱ1 − ϱ4 )T (R3 − S2 )T E1 (ϱ1 − ϱ2 ) + 2F2 E1 (ϱ1 − ϱ2 ) − 3(E1 (ϱ1 − ϱ2 ))T (R3 − S2 )(E1 ϱ1 − ϱ4 ) + [(E1 ϱ1 )T ϱ5T ]Q2 E1T C0 + C0T E1 Q2 [(E1 ϱ1 )T ϱ5T ]T Ξ2 := 2F2 (E1 ϱ1 − ϱ4 ) + 2(E1 ϱ1 − ϱ4 )T F2T + [(E1 ϱ1 )T ϱ4T ]Q1 E1T C0 + C0T E1 Q1 [(E1 ϱ1 )T ϱ4T ]T Ξ3 := [(E1 ϱ1 )T (E1 ϱ2 )T ](Q2 − Q1 )[(E1 ϱ1 )T (E1 ϱ2 )T ]T + (E2 ϱ2 )T (X2 − X1 )E2 ϱ2 Ξ4 := −C1T R˜ 1 C1 − C2T R˜ 1 C2 − (E2 ϱ3 )T X2 E2 ϱ3 + C3 + C3T + [(E1 ϱ1 )T (E1 ϱ1 )T ]Q1 [(E1 ϱ1 )T (E1 ϱ1 )T ]T + C1T S1 C2 − [(E1 ϱ1 )T (E1 ϱ3 )T ]Q2 [(E1 ϱ1 )T (E1 ϱ3 )T ]T + C2T S2T C1 + h22 C0T (R1 + R2 + h2 R3 )C0 + (E2 ϱ1 )T X1 E2 ϱ1 + h22 Z2 3   T T  + ϱj C5 Tj C6 ϱj + ϱjT C6T Tj C5 ϱj + C4 + C4T j =1

+

2 3  

(ϱi − ϱj )T C5T Tij C6 (ϱi − ϱj )

i=1 j=2,j>i

0

i =1 n 

P12 P22

t −h 2 W2i x(t )



Q

ces Qi = [ ⋆i1 Qi2 ] > 0, Xi > 0 (i = 1, 2), Rj > 0, real diagonal i3 matrices Tj ≥ 0 (j = 1, 2, 3), T12 ≥ 0, T13 ≥ 0, T23 ≥ 0, Λ1 ≥ 0, T Λ  2 ≥ 0, andreal matrices S1 , S2 , Z1 , Z2 , F1 , F2 and P = P =

R3

(a) A different LKF is introduced, which contains some integral  t −h terms as t −h 1 (h2 − t + s)i x˙ T (s)Ri x˙ (s)ds (i = 1, 2, 3); 2 (b) An improved inequality in (6) is used to bound the terms

(94)

j =2

 Remark 8. Although we cannot prove theoretically that Proposition 1 covers some existing results, for example, He, Liu, Rees, and Wu (2007), Shao (2008a, 2008b), Zhang and Han (2011), Zhang, Liu, Huang, and Wang (2010), Zhu and Yang (2008) and Zuo et al. (2010), Proposition 1 has some conspicuous features.

65



+

2 3  

(ϱi − ϱj )T C6T Tij C5 (ϱi − ϱj )

i=1 j=2,j>i

with ϱ1 = [I 0 0 0 0], . . . , ϱ5 = [0 0 0 0 I ]; and

H1 (τ (t )) := col{E1 ϱ1 , τ (t )ϱ4 , (h2 − τ (t ))ϱ5 } H2 (τ˙ (t )) := col{C0 , E1 (ϱ1 − (1 − τ˙ (t ))ϱ2 ), E1 ((1 − τ˙ (t ))ϱ2 − ϱ3 )}

C0 := (−AE1 + W0 E2 )ϱ1 + W1 E2 ϱ2


C1 := col{E1(ϱ2 − ϱ3), E1(ϱ2 + ϱ3) − 2ϱ5}
C2 := col{E1(ϱ1 − ϱ2), E1(ϱ1 + ϱ2) − 2ϱ4}
C3 := ϱ1ᵀE2ᵀ(Λ1 − Λ2)W2C0
C4 := ϱ1ᵀE1ᵀW2ᵀ(L⁺Λ2 − L⁻Λ1)W2C0
C5 := E2 − L⁻W2E1
C6 := L⁺W2E1 − E2.

Proof. The proof is omitted because it is quite similar to that of Proposition 1. □

3.2. Stability criteria for Case 2

In this subsection, we build some stability criteria for the NN (1) under Case 2. In this case, the Lyapunov–Krasovskii functional candidate is chosen as

Ṽ(t, xt, ẋt) = Ṽ1(t) + V2(t) + Ṽ3(t) + V4(t) + V5(t)   (101)

where V2(t), V4(t) and V5(t) are defined in (36); and

Ṽ1(t) := ϑ̃ᵀ(t)Pϑ̃(t) + ∫_{t−h1}^{t} ẋᵀ(s)Uẋ(s)ds

Ṽ3(t) := ∫_{t−h1}^{t} {[xᵀ(t) xᵀ(s)]Q1[xᵀ(t) xᵀ(s)]ᵀ + fᵀ(W2x(s))X1f(W2x(s))}ds + ∫_{t−h2}^{t−h1} {[xᵀ(t) xᵀ(s)]Q2[xᵀ(t) xᵀ(s)]ᵀ + fᵀ(W2x(s))X2f(W2x(s))}ds

with ϑ̃(t) := col{x(t), x(t − h1), ∫_{t−h1}^{t} x(s)ds, ∫_{t−h2}^{t−h1} x(s)ds}.

Similar to Lemma 8, we have the following result.

Lemma 9. For the chosen LKF in (101) and prescribed scalars h2 ≥ h1 > 0, there exist scalars ε̃1 > 0 and ε̃2 > 0 such that

ε̃1∥x∥² ≤ Ṽ(t, xt, ẋt) ≤ ε̃2∥xt∥²_W   (102)

if there exist real matrices M1 and N1 with appropriate dimensions such that

[M1 N1; N1ᵀ Y1] ≥ 0,  Ω̃0 > 0   (103)

where

Ω̃0 := Ω̃1 + Ω̃2 + Π̃1ᵀPΠ̃1 + h1Π̃2ᵀQ1Π̃2 + (h2 − h1)Π̃3ᵀQ2Π̃3   (104)

with Π̃1 := col{ê1, ê2, h1ê3, (h2 − h1)ê4}, Π̃2 := col{ê1, ê3}, Π̃3 := col{ê1, ê4}, ê1 = [I 0 0 0], ê2 = [0 I 0 0], ê3 = [0 0 I 0] and ê4 = [0 0 0 I]; and

Ω̃1 := (Π̃4ᵀUΠ̃4 + 3Π̃5ᵀUΠ̃5)/h1   (105)

Ω̃2 := h1(ê1 − ê3)ᵀY2(ê1 − ê3) − h1²Π̃6ᵀN1(ê1 − ê3) − h1²(ê1 − ê3)ᵀN1ᵀΠ̃6 − (h1³/2)Π̃6ᵀM1Π̃6   (106)

with Π̃4 := ê1 − ê2, Π̃5 := ê1 + ê2 − 2ê3, Π̃6 := col{ê1, ê2, ê3}.

Proof. Denote

℘(t) := col{x(t), x(t − h1), (1/h1)∫_{t−h1}^{t} x(s)ds, (1/(h2 − h1))∫_{t−h2}^{t−h1} x(s)ds}.

Then ϑ̃ᵀ(t)Pϑ̃(t) = ℘ᵀ(t)Π̃1ᵀPΠ̃1℘(t). It follows that

Ṽ1(t) ≥ ℘ᵀ(t)(Π̃1ᵀPΠ̃1 + Ω̃1)℘(t)   (107)

where Ω̃1 is defined in (105). Apply Jensen's inequality to get

Ṽ3(t) ≥ ℘ᵀ(t)(h1Π̃2ᵀQ1Π̃2 + (h2 − h1)Π̃3ᵀQ2Π̃3)℘(t).   (108)

Notice that

V4(t) ≥ ℘ᵀ(t)Ω̃2℘(t)   (109)

where Ω̃2 is defined in (106). From (107)–(109), one has

Ṽ(t, xt, ẋt) ≥ ℘ᵀ(t)Ω̃0℘(t)   (110)

where Ω̃0 is defined in (104). Let ε̃1 = λmin(Ω̃0). Then Ṽ(t, xt, ẋt) ≥ ε̃1∥x∥² if the LMIs in (103) are satisfied. The rest of the proof is quite similar to that of Lemma 8 and thus it is omitted. □

Now we state a stability criterion for Case 2.

Proposition 3. Under Case 2, for some prescribed scalars h1 and h2 (h2 ≥ h1 > 0), the origin of the NN (1) with (2) and (33) is globally asymptotically stable if there exist real matrices U > 0, Qi > 0, Xi > 0 (i = 1, 2), Yj > 0, Rj > 0 (j = 1, 2, 3), real diagonal matrices Λ1 ≥ 0, Λ2 ≥ 0, Ti ≥ 0 (i = 1, 2, 3, 4), Tsk ≥ 0 (s = 1, 2, 3, k = 2, 3, 4, k > s) and real matrices M1, M2, N1, N2, S1, S2, Z1, Z2, F1 and F2 of appropriate dimensions such that (67), (68), (103) and

Υ̃(τ(t))|_{τ(t)=h1} < 0,  Υ̃(τ(t))|_{τ(t)=h2} < 0   (111)

where

Υ̃(τ(t)) = Υ̃1(τ(t)) + Υ̃2(τ(t)) + Υ4(τ(t)) + Υ3 + Υ5

with Υ4(τ(t)), Υ3 and Υ5 defined in (74), (73) and (75), respectively; and

Υ̃1(τ(t)) := [C̃1 + τ(t)C̃2]ᵀPC̃3 + C̃3ᵀP[C̃1 + τ(t)C̃2] + C0ᵀUC0 − e8ᵀUe8   (112)

Υ̃2(τ(t)) := (h2 − τ(t))Υ̃21 + (τ(t) − h1)Υ̃22 + Υ̃23   (113)

where

Υ̃21 := [e1ᵀE1ᵀ e7ᵀ]Q2E1ᵀC0 + C0ᵀE1Q2[e1ᵀE1ᵀ e7ᵀ]ᵀ
Υ̃22 := [e1ᵀE1ᵀ e6ᵀ]Q2E1ᵀC0 + C0ᵀE1Q2[e1ᵀE1ᵀ e6ᵀ]ᵀ
Υ̃23 := [e1ᵀE1ᵀ e1ᵀE1ᵀ]Q1[e1ᵀE1ᵀ e1ᵀE1ᵀ]ᵀ + [e1ᵀE1ᵀ e3ᵀE1ᵀ](Q2 − Q1)[e1ᵀE1ᵀ e3ᵀE1ᵀ]ᵀ − [e1ᵀE1ᵀ e4ᵀE1ᵀ]Q2[e1ᵀE1ᵀ e4ᵀE1ᵀ]ᵀ + (E2e1)ᵀX1E2e1 + (E2e3)ᵀ(X2 − X1)E2e3 − (E2e4)ᵀX2E2e4 + h1[e1ᵀE1ᵀ e5ᵀ]Q1E1ᵀC0 + h1C0ᵀE1Q1[e1ᵀE1ᵀ e5ᵀ]ᵀ

with

C̃1 := col{E1e1, E1e3, h1e5, h2e7 − h1e6}
C̃2 := col{0, 0, 0, e6 − e7}
C̃3 := col{C0, e8, E1(e1 − e3), E1(e3 − e4)}.

Proof. First, we prove that

dṼ3(t)/dt = ξᵀ(t)Υ̃2(τ(t))ξ(t)   (114)

where Υ̃2(τ(t)) is defined in (113). In fact, taking the time derivative of Ṽ3(t) yields

dṼ3(t)/dt = ξᵀ(t)Υ̃23ξ(t) + 2∫_{t−h2}^{t−h1} [xᵀ(t) xᵀ(s)]Q2E1ᵀẋ(t)ds.   (115)


Notice that

2∫_{t−h2}^{t−h1} [xᵀ(t) xᵀ(s)]Q2E1ᵀẋ(t)ds = 2[(h2 − h1)xᵀ(t)  ∫_{t−h2}^{t−h1} xᵀ(s)ds]Q2E1ᵀẋ(t)
= 2[((h2 − τ(t)) + (τ(t) − h1))xᵀ(t)  (h2 − τ(t))ν3ᵀ(t) + (τ(t) − h1)ν2ᵀ(t)]Q2E1ᵀẋ(t)
= ξᵀ(t)[(h2 − τ(t))Υ̃21 + (τ(t) − h1)Υ̃22]ξ(t).   (116)

Substituting (116) into (115) yields (114). Then, following the same line as the proof of Proposition 1, we can draw the conclusion that the origin of the NN (1) subject to (2) and (33) is globally asymptotically stable if the LMIs in (67), (68), (103) and (111) are satisfied. □

For h1 = 0, choose the LKF candidate as

V̌(t, xt, ẋt) = [xᵀ(t)  ∫_{t−h2}^{t} xᵀ(s)ds] P [xᵀ(t)  ∫_{t−h2}^{t} xᵀ(s)ds]ᵀ + 2Σ_{i=1}^{n} ρi ∫_0^{W2i x(t)} [fi(s) − l⁻i s]ds + 2Σ_{i=1}^{n} σi ∫_0^{W2i x(t)} [l⁺i s − fi(s)]ds + ∫_{t−h2}^{t} {[xᵀ(t) xᵀ(s)]Q1[xᵀ(t) xᵀ(s)]ᵀ + fᵀ(W2x(s))X1f(W2x(s))}ds + ∫_{t−h2}^{t} {h2(h2 − t + s)ẋᵀ(s)R1ẋ(s) + Σ_{j=2}^{3} (h2 − t + s)^j ẋᵀ(s)Rjẋ(s)}ds.   (117)

Proposition 4. Under Case 2 with h1 = 0, for a given scalar h2 > 0, the origin of the NN (1) subject to (2) and (33) is globally asymptotically stable if there exist real matrices Q1 > 0, X1 > 0, Rj > 0 (j = 1, 2, 3), real diagonal matrices Λ1 ≥ 0, Λ2 ≥ 0, Tj ≥ 0 (j = 1, 2, 3), T12 ≥ 0, T13 ≥ 0, T23 ≥ 0, and real matrices S1, S2, Z1, Z2, F1, F2 and P = Pᵀ of appropriate dimensions such that Z1 ≥ Z2, [Zi Fi; ⋆ R2] ≥ 0 (i = 1, 2) and

[R̃1 S1; ⋆ R̃1] ≥ 0,  [R3 S2; ⋆ R3] ≥ 0

h2Q1 + [E1ᵀ h2E2ᵀ]P[E1ᵀ h2E2ᵀ]ᵀ > 0

H̃(0) + h2Ξ̃1 + Ξ̃4 < 0,  H̃(h2) + h2Ξ2 + Ξ̃4 < 0

where Ξ̃1 = Ξ1|_{Q2=Q1}, Ξ̃4 = Ξ4|_{Q2=Q1, X2=X1}, and R̃1, Ξ1, Ξ2 and Ξ4 are defined in Proposition 2; and

H̃(τ(t)) := H̃1ᵀ(τ(t))PH̃2 + H̃2ᵀPH̃1(τ(t))
H̃1(τ(t)) := col{E1ϱ1, τ(t)ϱ4 + (h2 − τ(t))ϱ5}
H̃2 := col{C0, E1(ϱ1 − ϱ3)}

where C0 and ϱk (k = 1, ..., 5) are defined in Proposition 2.

Proof. The proof is quite similar to that of Proposition 2, and thus it is omitted. □

Remark 9. Propositions 1–4 present some stability criteria for the NN (1) with (33) for two cases where the time-varying delay τ(t) is a differentiable function satisfying (2) and (32), and where τ(t) is a continuous function satisfying (2), respectively. When τ(t) satisfies (2) and (32), Propositions 1 and 2 can be used to check the stability of the NN (1). When τ(t) is non-differentiable or τ(t) is differentiable but its derivative is unbounded from above, Propositions 3 and 4 are available. It is worth pointing out that these stability criteria are applicable not only to LFNNs but also to SNNs. However, most results reported in the literature are applicable either to LFNNs or to SNNs, not to both.

Remark 10. By employing some novel integral inequalities and a matrix-based quadratic convex approach, it is expected that Propositions 1–4 deliver less conservative results than some existing methods such as Zhang and Han (2011) and Zhang et al. (2013). However, Propositions 1–4 are of higher computational complexity than those in Zhang and Han (2011) and Zhang et al. (2013) because more Lyapunov matrix variables are introduced in the chosen Lyapunov–Krasovskii functional (36).

4. Numerical examples

In this section, two numerical examples are given to demonstrate the effectiveness of the method proposed in this paper.

Example 1. Consider the NN (1) with (33), where W2 = I, L⁻ = 0, L⁺ = diag{0.4, 0.8} and

A = 2I,  W0 = [1 1; −1 −1],  W1 = [0.88 1; 1 1].

It is assumed that the time-varying delay τ(t) satisfies (2) and (32).
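The AMUB values reported below are obtained by a one-dimensional search on h2: for fixed h1 and µ, the LMIs of the relevant proposition are solved repeatedly while bisecting on h2. A minimal sketch of this standard procedure follows; `lmi_feasible` is a hypothetical stand-in for an SDP call that assembles and checks the LMIs of, e.g., Proposition 2 (the paper does not provide such a routine):

```python
def amub(lmi_feasible, tol=1e-4):
    """Bisect for the admissible maximum upper bound (AMUB) of h2.

    lmi_feasible: callable h2 -> bool; True if the stability LMIs of the
    chosen proposition (for fixed h1, mu1, mu2) are feasible.  This is a
    hypothetical oracle, e.g. a cvxpy feasibility problem as sketched earlier.
    """
    lo, hi = 0.0, 1.0
    while lmi_feasible(hi):       # grow until infeasible to bracket the AMUB
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:          # bisection on the bracket [lo, hi)
        mid = 0.5 * (lo + hi)
        if lmi_feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo                     # largest h2 found feasible within tol
```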

For comparison, first, we set h1 = 0. In this case, we calculate the admissible maximum upper bounds (AMUBs) of h2 for different values of µ (µ2 = −µ1 = µ). Table 1 lists the results obtained by Theorem 1 in He et al. (2007), Theorem 2 in Zhu and Yang (2008), Theorem 2 in Zhang et al. (2010), Theorem 1 in Zhang et al. (2013) and Proposition 2 in this paper. It is clear that Proposition 2 achieves larger AMUBs for this example than He et al. (2007), Zhang et al. (2010, 2013) and Zhu and Yang (2008). On the other hand, for h2 = 3.6567 and µ = 0.8, solving the LMIs in Proposition 2 yields the augmented Lyapunov matrix

P = [293.95 −259.33 −87.477 −36.116 −88.964 −39.046;
     −259.33 702.30 72.465 26.952 73.225 27.760;
     −87.477 72.465 8.3319 −1.2110 17.219 1.3397;
     −36.116 26.952 −1.2110 4.9410 −0.9318 8.6603;
     −88.964 73.225 17.219 −0.9318 36.125 5.6161;
     −39.046 27.760 1.3397 8.6603 5.6161 16.893].

Notice that the eigenvalues of P are −26.219, 0.0646, 10.479, 23.173, 196.04 and 859.00. Clearly, P is not a positive definite matrix.

Second, we set h1 = 1. In this case, applying Theorem 1 in He et al. (2007), Proposition 2 in Zhang and Han (2011) and Proposition 1 in this paper yields the respective AMUBs listed in Table 2, where µ = µ2 = −µ1. Apparently, the AMUBs obtained by Proposition 1 are much larger than those by He et al. (2007) and Zhang and Han (2011). It should be pointed out that if we set Tij = 0 (i = 1, 2, 3, j = 2, 3, 4, j > i) in Proposition 1, the obtained AMUBs are 4.2319 for µ = 0.8 and 3.0353 for µ = 0.9, which are smaller than those by Proposition 1. Therefore, it is evident that the introduction of Tij can reduce the conservatism of Proposition 1.
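The claim that P need not be positive definite is easy to verify from the reported matrix; the following sketch recomputes its spectrum with NumPy (entries reproduced from the paper, so small discrepancies would only reflect the four-to-five significant digits shown):

```python
import numpy as np

P = np.array([
    [ 293.95, -259.33, -87.477, -36.116, -88.964, -39.046],
    [-259.33,  702.30,  72.465,  26.952,  73.225,  27.760],
    [-87.477,  72.465,  8.3319, -1.2110,  17.219,  1.3397],
    [-36.116,  26.952, -1.2110,  4.9410, -0.9318,  8.6603],
    [-88.964,  73.225,  17.219, -0.9318,  36.125,  5.6161],
    [-39.046,  27.760,  1.3397,  8.6603,  5.6161,  16.893],
])

eig = np.linalg.eigvalsh(P)    # P is symmetric, so eigvalsh applies
print(np.round(eig, 4))        # one negative eigenvalue: P is indefinite
print("positive definite?", bool(eig.min() > 0))
```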


Table 1
The obtained AMUBs of h2 for different values of µ for Example 1.

Methods | µ = 0.8 | µ = 0.9
Theorem 1 (He et al., 2007) | 2.3534 | 1.6050
Theorem 2 (Zhu & Yang, 2008) | 2.3571 | 1.6050
Theorem 2 (Zhang et al., 2010) | 2.3961 | 1.6270
Theorem 1 (Zhang et al., 2013) | 3.1409 | 1.6375
Proposition 2 in this paper | 3.6567 | 2.5088

Table 2
The obtained AMUBs of h2 for h1 = 1 and different values of µ for Example 1.

Methods | µ = 0.8 | µ = 0.9
Theorem 1 (He et al., 2007) | 3.2575 | 2.4769
Proposition 2 (Zhang & Han, 2011) | 3.3071 | 2.4853
Proposition 1 in this paper | 4.3618 | 3.2667

Table 3
The obtained AMUBs of h2 for h1 = 0 and different values of µ for Example 2.

Methods | µ = 0.00 | µ = 0.10 | µ = 0.50
Theorem 2 (Shao, 2008a) | 1.3323 | 0.8246 | 0.3733
Corollary 2 (Zuo et al., 2010) | 1.3323 | 0.8402 | 0.4264
Theorem 2 (Li et al., 2011) | 1.5330 | 0.9331 | 0.4268
Proposition 2 in this paper | 1.8898 | 1.1114 | 0.4514

Table 4
The obtained AMUBs of h2 for h1 = 0.5 and different values of µ for Example 2.

Methods | µ = 0.1 | µ = 0.2 | µ = 0.3
Theorem 2 (Zuo et al., 2010) | 0.8402 | 0.6551 | 0.5879
Proposition 2 (Zhang & Han, 2011) | 1.0430 | 0.7236 | 0.5886
Proposition 1 in this paper | 1.1313 | 0.8230 | 0.6509

Example 2. Consider the NN (1) subject to (33), where W0 = 0, W1 = I, L⁻ = 0 and

A = diag{7.3458, 6.9987, 5.5949}

W2 = [13.6014 −2.9616 −0.6936;
      7.4736 21.6810 3.2100;
      0.7920 −2.6334 −20.1300]

L⁺ = diag{0.368, 0.1795, 0.2876}.

As shown in Zhang and Han (2011), the NN described by this example is an SNN, whose stability was studied in Li et al. (2011), Shao (2008a), Zhang and Han (2011) and Zuo et al. (2010). In the following, we make a comparison between the methods proposed in Li et al. (2011), Shao (2008a), Zhang and Han (2011) and Zuo et al. (2010) and this paper.

Case 1: The time-delay τ(t) satisfies (2) and (32). In this situation, for h1 = 0 and various µ, where µ2 = −µ1 = µ, the AMUBs of h2 calculated by Theorem 2 in Shao (2008a), Corollary 2 in Zuo et al. (2010) and Theorem 2 in Li et al. (2011) are listed in Table 3. Applying Proposition 2 yields larger AMUBs than those in Li et al. (2011), Shao (2008a) and Zuo et al. (2010), which are also shown in this table. For h1 = 0.5 and µ = 0.1, 0.2, 0.3, the AMUBs obtained by Theorem 2 in Zuo et al. (2010), Proposition 2 in Zhang and Han (2011) and Proposition 1 are given in Table 4. From this table, Proposition 1 is of less conservatism than Li et al. (2011), Shao (2008a) and Zuo et al. (2010) for this example.

Case 2: The time-delay τ(t) satisfies (2). In this case, for h1 = 0.2, the AMUBs of h2 obtained by Corollary 3 in Zuo et al. (2010) and Proposition 3 are 0.3738 and 0.4229, respectively. Clearly, compared with Zuo et al. (2010), the AMUB obtained in this paper is improved by 13.14%. For h1 = 0, the AMUBs of h2 calculated by the methods proposed in Li et al. (2011), Shao (2008a) and Zuo et al. (2010) are 0.2313, 0.3209, and 0.3215, respectively. However, applying Proposition 4 in this paper yields an AMUB of 0.3691, which is larger than those by Li et al. (2011), Shao (2008a) and Zuo et al. (2010).

From the above two numerical examples, it is clear that Propositions 1–4 outperform the methods proposed in He et al. (2007), Shao (2008a), Zhang and Han (2011), Zhang et al. (2010, 2013), Zhu and Yang (2008) and Zuo et al. (2010).

5. Conclusion

A matrix-based quadratic convex approach has been developed to investigate the global asymptotic stability of a class of generalized NNs with interval time-varying delays. Some novel integral inequalities have been established for the integral terms in the form of ∫_{t−h}^{t} (h − t + s)^j ẋᵀ(s)Rjẋ(s)ds (j = 1, 2, 3). Based on the novel integral inequalities, a matrix-based quadratic convex approach has been used to prove not only the negative definiteness of the derivative of the LKF but also the positive definiteness of the chosen LKF. As a result, some novel global asymptotic stability criteria have been derived for two cases, respectively, where the time-varying delay is continuous uniformly bounded, and where the time-varying delay is differentiable uniformly bounded with its derivative bounded by two constants from above and below. Two numerical examples have been given to demonstrate the effectiveness of the proposed results.

Acknowledgments

This work was supported in part by the Australian Research Council Discovery Projects under Grant DP1096780, and the Research Advancement Awards Scheme Program (January 2010–December 2012) at Central Queensland University, Australia.

References

Bao, G., Wen, S., & Zeng, Z. (2012). Robust stability analysis of interval fuzzy Cohen–Grossberg neural networks with piecewise constant argument of generalized type. Neural Networks, 33, 32–41.
Chen, Y. H., & Fang, S. C. (2000). Neurocomputing with time delay analysis for solving convex quadratic programming problems. IEEE Transactions on Neural Networks, 11(1), 230–240.
Chua, L. O., & Yang, L. (1988). Cellular neural networks: applications. IEEE Transactions on Circuits and Systems, CAS-35(10), 1273–1290.
Faydasicok, O., & Arik, S. (2012). Robust stability analysis of a class of neural networks with discrete time delays. Neural Networks, 29–30, 52–59.
Faydasicok, O., & Arik, S. (2013). A new upper bound for the norm of interval matrices with application to robust stability analysis of delayed neural networks. Neural Networks, 44, 64–71.
He, Y., Liu, G., Rees, D., & Wu, M. (2007). Stability analysis for neural networks with time-varying interval delay. IEEE Transactions on Neural Networks, 18(6), 1850–1854.
He, Y., Wu, M., & She, J. H. (2006). An improved global asymptotic stability criterion for delayed cellular neural networks. IEEE Transactions on Neural Networks, 17(1), 250–252.
Kim, J. H. (2011). Note on stability of linear systems with time varying delay. Automatica, 47(9), 2118–2121.
Li, X., Gao, H., & Yu, X. (2011). A unified approach to the stability of generalized static neural networks with linear fractional uncertainties and delays. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41(5), 1275–1286.
Liu, Y., Wang, Z., & Liu, X. (2009). Asymptotic stability for neural networks with mixed time-delays: the discrete-time case. Neural Networks, 22(1), 67–74.
Michel, A., Farrell, J., & Sun, F. (1990). Analysis and synthesis techniques for Hopfield type synchronous discrete-time neural networks with applications to associative memory. IEEE Transactions on Circuits and Systems, 37(11), 1356–1366.
Seuret, A., & Gouaisbaut, F. (2013). Wirtinger-based integral inequality: application to time-delay systems. Automatica, 49(9), 2860–2866.
Shao, H. (2008a). Delay-dependent stability for recurrent neural networks with time-varying delays. IEEE Transactions on Neural Networks, 19(9), 1647–1651.
Shao, H. (2008b). Delay-dependent approaches to globally exponential stability for recurrent neural networks. IEEE Transactions on Circuits and Systems II: Express Briefs, 55(6), 591–595.
Souza, F. (2013). Further improvement in stability criteria for linear systems with interval time-varying delay. IET Control Theory & Applications, 7(3), 440–446.
Wang, D. L. (1995). Emergent synchrony in locally coupled neural oscillators. IEEE Transactions on Neural Networks, 6(4), 941–948.
Wang, L., & Chen, T. (2012). Complete stability of cellular neural networks with unbounded time-varying delays. Neural Networks, 36, 11–17.
Wang, Z., Liu, Y., & Liu, X. (2009). State estimation for jumping recurrent neural networks with discrete and distributed delays. Neural Networks, 22(1), 41–48.
Zeng, H., He, Y., Wu, M., & Zhang, C. (2011). Complete delay-decomposing approach to asymptotic stability for neural networks with time-varying delays. IEEE Transactions on Neural Networks, 22(5), 806–812.
Zeng, Z., & Wang, J. (2010). Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates. Neural Networks, 22(5–6), 651–657.
Zhang, X.-M., & Han, Q.-L. (2011). Global asymptotic stability for a class of generalized neural networks with interval time-varying delays. IEEE Transactions on Neural Networks, 22(8), 1180–1192.
Zhang, X.-M., & Han, Q.-L. (2013). Novel delay-derivative-dependent stability criteria using new bounding techniques. International Journal of Robust and Nonlinear Control, 23(13), 1419–1432.
Zhang, H., Liu, Z., Huang, G., & Wang, Z. (2010). Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay. IEEE Transactions on Neural Networks, 21(1), 91–106.
Zhang, W., Tang, Y., Fang, J., & Wu, X. (2012). Stability of delayed neural networks with time-varying impulses. Neural Networks, 36, 59–63.
Zhang, H., Yang, F., Liu, X., & Zhang, Q. (2013). Stability analysis for neural networks with time-varying delay based on quadratic convex combination. IEEE Transactions on Neural Networks and Learning Systems, 24(4), 513–521.
Zhu, X. L., & Yang, G. H. (2008). New delay-dependent stability results for neural networks with time-varying delay. IEEE Transactions on Neural Networks, 19(10), 1783–1791.
Zuo, Z., Yang, C., & Wang, Y. (2010). A new method for stability analysis of recurrent neural networks with interval time-varying delay. IEEE Transactions on Neural Networks, 21(2), 339–344.

514KB Sizes 0 Downloads 3 Views