Accepted Manuscript Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays R. Rakkiyappan, N. Sakthivel, Jinde Cao PII: DOI: Reference:

S0893-6080(15)00045-3 http://dx.doi.org/10.1016/j.neunet.2015.02.011 NN 3454

To appear in:

Neural Networks

Received date: 23 June 2014 Revised date: 21 February 2015 Accepted date: 22 February 2015 Please cite this article as: Rakkiyappan, R., Sakthivel, N., & Cao, J. Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Neural Networks (2015), http://dx.doi.org/10.1016/j.neunet.2015.02.011 This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Neural Networks Neural Networks 00 (2015) 1–21

Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays R. Rakkiyappana , N. Sakthivela , Jinde Caob,c,∗ a Department

b Department

of Mathematics, Bharathiar University, Coimbatore - 641 046, Tamilnadu, India. of Mathematics, and Research Center for Complex Systems and Network Sciences, Southeast University, Nanjing 210096, Jiangsu, China. c Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia.

Abstract This study examines the exponential synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Additionally, sampled-data controller with time-varying sampling period is considered and is assumed to switch between m different values in a random way with given probability. Then, a novel Lyapunov-Krasovskii functional (LKF) with triple integral terms is constructed and by using Jensen’s inequality and reciprocally convex approach, sufficient conditions under which the dynamical network is exponentially meansquare stable are derived. When applying Jensen’s inequality to partition double integral terms in the derivation of linear matrix inequality (LMI) conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters appears. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and average dwell-time method, sufficient conditions for the synchronization of switched error system are derived in terms of LMIs. Finally, numerical example is employed to show the effectiveness of the proposed methods. Keywords: Complex dynamical networks, Stochastic sampled-data, Control packet loss, Reciprocally convex approach, Additive time-varying delays

1. Introduction During the past few years, complex dynamical networks (CDNs) have become an interesting research topic and appeal to have more attention in different fields from mathematics, biology, engineering sciences, etc. (Wang & Chen,2003; Newman 2003; Boccaletti, Latora, Marenu, Chavez, & Huang, 2006). A complex network is a large set of interconnected nodes, where the nodes and connections can be anything, examples are Internet, Transportation networks, coupled biological and chemical engineering systems, neural networks in human brains and so on. The synchronization is one of the most important dynamical properties of complex networks, number of real world complex networks frequently display the synchronization behaviors among their components, such as the synchronous occurance on the Internet, synchronous transfer of digital or analog signals in communication networks and biological neural networks are also relating with synchronization. Therefore the synchroniza∗ Corresponding

author at: Department of Mathematics, and Research Center for Complex Systems and Network Sciences, Southeast University, Nanjing 210096, Jiangsu, China. Email: [email protected] (Jinde Cao).

tion problem for CDNs has received increasing research attention and a number of results have been made available in the literature ( Li & Chen,2004; Zhou & Chen,2006; Gao, Lam & Chen, 2006; Cao & Li, 2009; Lu & Ho, 2008). Li et al.(2004) established the synchronization criteria for CDN models with coupling delays for both continuous and discrete-time cases. Zhou et al. (2006) analyzed the synchronization dynamics of a general model of complex delayed networks with time delays. Gao et al. (2006) discussed the new delay-dependent conditions for a general CDN model with coupling delays, which guarantee the synchronized states to be asymptotically stable. Lu et al. (2008) investigated local and global synchronization of CDNs with coupling delay and some criteria ensuring delay-independent and delay-dependent synchronization have been derived in terms of linear matrix inequalities. Time delay is an elementary realism in physical systems. In practice, time delays occur naturally due to the finite speed of signal transmission, which may reduce the synchronization performance of the network ( Liang et al. 2008; Kinzel et al. 2009). The typical time-delayed coupling is very common in biological and physical systems. Some of the time delays are trivial

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

and so can be untrained, while some others cannot be disregard such as in long distance communication, traffic congestions and so on. A great number of synchronization results for CDNs with time delays have been reported in the literature, see (Li et al. 2008; Xu & Yang 2009; Yue & Li 2009; Zhou et al. 2013; Ji et al. 2011). Xu & Yang (2009), studied the synchronization problem for a class of CDNs with time delay. Yue & Li (2009) investigated the synchronization problem for continuous/discrete general CDNs with time-varying delays and the delays are assumed to vary in an interval where the lower and upper bounds are known. Zhou et al. (2013) examined the synchronization problem for CDNs with interval time-varying coupling delays. Ji et al. (2011) investigated the synchronization problem for a class of neutral CDNs with coupling time-varying delays. The stability analysis for neural networks with two additive time-varying delay components has been carried out in (Shao & Han (2011); Shao & Han (2012); Zhu et al. (2014); Xiao & Jia (2013); Liu (2014); Cheng et al. (2014)). Shao & Han (2012), derived delay-dependent stability criteria for neural networks with two additive time-varying delay components. Zhu et al. (2014), analyzed the stability of continuous-time systems with additive delay components. Xiao & Jia (2013) investigated the stability problem for neural networks with additive time-varying delay components and some stability criteria have been obtained by considering the relationship between timevarying delays and their upper bounds. Liu (2014) proposed new conditions for the delay-range-dependent stability analysis of time-varying delay systems by using Lyapunov-Krasovskii framework. Cheng et al. (2014) derived the delay-dependent stability criteria for continuous system with two additive timevarying delay components. In recent years, use of sampled-data control scheme has been increasing as the digital hardware and communication technologies are quickly developing. Most of the controllers are digital controller or networked to the system and these control systems can be modeled by sampled-data systems whose control signals are kept constant during the sampling period and are allowed to change only at the sampling instant. Due to this reason, the control signals have discontinuous form and may cause extremity to control or analyze the system. It is worth pointing out that in Fridman et al. (2004) and Fridman et al. (2005), a new approach called the input delay approach has been introduced to deal with the sampled-data control problems. In sampled-data controllers selecting proper sampling interval is important to design suitable controllers. Traditionally, many researchers have analyzed sampled-data control systems with constant sampling period. Therefore the necessity of the controller with varying sampling interval has been discussed in Ozdemir & Townley 1988; Hu & Michel 2000; Li et al. 2011; Lee et al. 2012; Wu et al. 2012; Wu et al. 2013. Lee et al. (2012) analyzed the synchronization of CDNs with coupling time-varying delays via sampled-data controller. Further, Wu et al. (2012), proposed the exponential synchronization problem for CDNs with time-varying coupling delay via a sampleddata controller with variable sampling. Recently, Wu et al. (2013) studied the problem of sampled-data exponential synchronization of CDNs with time-varying coupling delay. Apart

2

from these facts, taking into account the random change in sampling intervals, a further extension of time-varying case called the stochastically varying sampling intervals has been considered in the literature and the results have been discussed in ( Mikheev et al. (1988); Astrom & Wittenmark (1989); Gao et al. (2008); Li et al. (2009); Gao et al.(2009); Kim et al. (2010); Lee et al. (2013); Lee et al. (2013); Shen et al. (2012); Wen & Zeng (2013)). Gao et al.(2009) investigated the robust H∞ control problem for sampled-data with stochastic sampling. Lee et al.(2013) discussed the synchronization of chaotic system with randomly occurring uncertainties using stochastic sampled-data control. Leeet al. (2013) introduced the stochastic sampleddata control for state estimation of time-varying delayed neural networks and a delay-dependent stability criteria has been derived. It is usually assumed that the control packet from the controller to the actuator is transmitted in a perfect way, that is, there is no loss in the control data. However, in practical systems, the control packet can be lost due to several factors, for instance, actuator failures, actuator suspensions for power saving, communication interference or congestion and so on. When the control packet from the controller to the actuator is lost, the actuator input to the plant may set to zero. The linear sampled-data system in the presence of ineffective sampling can be viewed as a switched system consisting of stable sampled-data subsystem with control packet loss or otherwise an unstable linear subsystem. It is clear that frequent control packet loss will inevitably lead to instability and poor performance of systems and it is therefore necessary and important to consider the effect of control packet loss in the sampled-data control systems. Zhang & Yu (2010) used the switched system approach for the stabilization of sampled-data control systems with control inputs missing has been employed and sufficient conditions for the existence of exponential stabilizing state feedback controllers have been derived. Recently, Chen & Zheng (2012) established an improved stabilization method for sampled-data control systems with control packet loss has been developed and the obtained results are proven to be theoretically less conservative than existing ones. Moreover, stability and synchronization analysis of networks using reciprocal convex technique have been discussed in the literature, see Park et al. (2011); Jiang & Li (2012); Li et al. (2012); Zhang et al.(2012); Wang et al. (2012); Lee& Park (2014). Park et al. (2011) introduced reciprocally convex approach to study the stability of systems with time-varying delays. Jiang& Li (2012) analyzed the synchronization of CDNs with interval time-varying delay via pinning control approach and less conservative criteria have been established based on reciprocal convex technique. Zhang et al. (2012) investigated the exponential synchronization in arrays of coupled delayed chaotic neural networks with nonlinear hybrid coupling. Lee & Park (2014) focused on the stability analysis of systems with interval time-varying delays by using second-order reciprocally convex approach, where some triple integral terms in the LKF have been considered. To the best of authors’ knowledge, stochastic sampled-data synchronization control problem for CDNs with control packet loss and additive time-varying delays has not yet

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

been studied in the literature. Motivated by the above discussion, in this paper, a design problem for stochastic sampled-data synchronization of complex dynamical networks with control packet loss and additive time-varying delays is investigated. Unlike, the other studies, the proposed CDNs are studied using the sampled-data controller with stochastic sampling. By constructing a new LKF and by using reciprocal convex technique and free-weighting matrix method, delay-dependent conditions under which the complex networks can achieve desired synchronization are derived. Finally, a numerical example and its simulation results are given to illustrate the effectiveness and reduced conservatism of the proposed results. Notations: Rn denotes the n-dimensional Euclidean space and Rn×m be the set of all n × m real matrices. For real symmetric matrices X and Y, X > Y (respectively, X ≥ Y) means that the matrix X − Y > 0 (X − Y ≥ 0) is a positive definite (respectively, positive-semi definite)matrix. Let AT stands for the transpose of matrix A. The superscript ”T” denotes the transposition of the matrix of appropriate dimension. I is the identity matrix. diag{. . . } stands for block diagonal matrix. λ3 max(.), λ4 max(.) and λ3 min(.), λ4 min(.) denote the largest and smallest eigenvalue of a given matrix, respectively. A ⊗ B means the Kronecker product of matrices A and B. For" an arbitrary matrix # X Y Y and two symmetric matrices X and Z, denotes a ∗ Z symmetric matrix, where ∗ denotes the symmetric terms in a symmetric matrix. E{.} stands for the mathematical expectation operator. 2. Problem formulation and preliminaries

N X j=1

Gi j Ax j (t − d1 (t) − d2 (t)) + ui (t), i = 1, 2, · · · , N,

(1)

where xi (t) is the state variable of the node i and ui (t) is the control input of the node i, c is a constant denoting coupling strength. A = (ai j )n×n ∈ Rn×n is the constant inner coupling matrix between two connected nodes and G = (Gi j )N×N is the outer coupling configuration matrix representing the topological structure of the network, where Gi j is defined as follows: If there is a connection from node j to node i (i , j), then Gi j = 1, otherwise Gi j = 0 (i , j). The diagonal elements of matrix G are defined by Gii = −

N X

j=1, j,i

Gi j ,

i = 1, 2, · · · , N,

and d1 (t) and d2 (t) are two time-varying delays satisfying 0 ≤ d11 ≤ d1 (t) ≤ d12 , 0 ≤ d21 ≤ d2 (t) ≤ d22 ,

d˙2 (t) ≤ ρ2 ,

where d12 ≥ d11 , d22 ≥ d21 and ρ1 , ρ2 are constants. Here, let us denote d(t) = d1 (t) + d2 (t), d1 = d11 + d21 , d2 = d12 + d22 , ρ = ρ1 + ρ2 , g1 = d12 − d11 , g2 = d22 − d21 . Assumption 1: Let f : Rn → Rn be a continuous vector-valued function and satisfies the following sector-bounded condition: [ f (x) − f (y) − U(x − y)]T [ f (x) − f (y) − V(x − y)] ≤ 0,

(2)

(3)

for all x, y ∈ Rn , where U and V are constant matrices of appropriate dimensions. Let zi (t) = xi (t) − s(t) be the synchronization error, where s(t) ∈ Rn is the state trajectory of the unforced isolate node s˙(t) = f (s(t)). It can be shown that z˙i (t) = x˙i (t) − f (s(t)). Then, error dynamics of CDN (1) can be obtained as follows z˙i (t) = g(zi (t)) + c

N X j=1

Gi j Az j (t − d1 (t) − d2 (t)) + ui (t), i = 1, 2, · · · , N,

(4)

where g(zi (t)) = f (xi (t)) − f (s(t)). It is assumed that the state variables of the error system (4) are measurable at the discrete time instants 0 = t0 < t1 < · · · < tk < · · · , lim tk = +∞ and only zi (tk ) are k→∞

available for the interval tk ≤ t < tk+1 . From (4), we design a set of sampled-data feed-back controllers in the form of ui (t) = ui (tk ) = Ki zi (tk ),

Consider the following CDN with additive time-varying delays consisting of N identical coupled nodes with each node being an n-dimensional dynamical system x˙i (t) = f (xi (t)) + c

d˙1 (t) ≤ ρ1 ,

3

tk ≤ t < tk+1 ,

(5)

where Ki , is sampled-data feedback controller gain matrix to be determined, tk is the updating instant time of the Zero-OrderHold (ZOH). The sampling is not required to be periodic and the only assumption is that the distance between any two consecutive sampling instants belong to an interval, specifically it is assumed that tk+1 − tk = h ∈ [h1 , h2 ] for all k ≥ 0, where h1 and h2 satisfying h2 ≥ h1 > 0 represents the upper and lower bounds of sampling periods respectively. When h1 = h2 , the considered sampling reduces to the constant sampling. The sampling intervals h were assumed to take mth values such that tk+1 − tk = h p , where sampling interval integer p occurs stochastically in a set {1, 2, · · · , m} with a value of 0 = h0 < h1 < · · · < hm and the probability of the occurance of each can be expressed as Pr{h = h p } = β p , p = 1, 2, · · · , m, m P where β p ∈ [0, 1] are the known constants and β p = 1. p=1

To design the controller using the sampled-data with stochastic sampling, the concept of the time-varying delayed control input introduced by Astrom and Wittenmark (1989) and Mikheev et al. (1988) has been employed. Therefore by defining τ(t) =

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

t − tk , tk ≤ t < tk+1 , the controller (5) can be represented as follows ui (t) = Ki zi (tk ) = Ki zi (t − τ(t)),

tk ≤ t < tk+1 .

h1 , hm h2 − h1 , Pr{h1 ≤ τ(t) < h2 } = hm .. . hm − hm−1 Pr{hm−1 ≤ τ(t) < hm } = . hm Pr{0 ≤ τ(t) < h1 } =

where p = 1, 2, · · · , m,

m P

To describe the synchronization error system with control packet loss, we introduce the notation σ(t) to specify the control packet loss status, the notation σ(t) : [0, +∞) → S = {1, 2} is a piecewise constant function and right continuous. With the usage of σ(t), the synchronization error system with control packet loss can be described as

(11)

where K1 = K and K2 = 0. Noting that when σ(t) = 1 for t ∈ [tk , tk+1 ), the control packet is not missing during [tk , tk+1 ) and system (9) is active, otherwise, the control packet loss occurs during [tk , tk+1 ) and system (10) is active. Thus σ(t) can be referred to as a switching signal and system (11) is a switched system consisting of two subsystems, that is stable and unstable subsystem (9) and (10), respectively. The following lemmas and definition will be used to derive our main results.

(7) (8)

α p = 1.

p=1

Since α p (t), β p (t) are satisfies the Bernoulli distributed sequences, we have E{α p (t)} = α p , E{(α p (t) − α p )2 } = α p (1 − α p ),

Lemma 1 (Lee and Park,2014). (Lower bound lemma) Let f1 , f2 , · · · , fN : Rm → R have positive values in an open subset D of Rm . Then, the reciprocally convex combination of fi over D satisfies X X X 1 fi (t) = fi (t) + max gi, j (t) min P gi, j (t) {αi |αi >0, i αi =1} αi i i, j i subject to



gi, j : Rm → R, g j,i (t) =gi, j (t),

"

fi (t) gi, j (t) gi, j (t) f j (t)

#

 ≥0 .

Lemma 2 (Lee and Park, et al. 2014). For a positive definite matrix M and any differentiable function ω in [a, b] → Rn , the following inequality holds:

E{β p (t)} = β p , E{(β p (t) − β p ) } = β p (1 − β p ). 2

Z

Furthermore, the system (4) with m sampling intervals can be defined as follows: z˙(t) = g(z(t)) + c(G ⊗ A)z(t − d(t)) m X + α p (t)Kz(t − τ p (t)),

(10)

p=1

Then we have the following probability:

Pr{β p (t) = 1} = Pr{h = h p } = β p ,

z˙(t) = g(z(t)) + c(G ⊗ A)z(t − d(t)), tk ≤ t < tk+1 ,

z˙(t) = g(z(t)) + c(G ⊗ A)z(t − d(t)) m X + α p (t)Kσ(t) z(t − τ p (t)), tk ≤ t < tk+1 ,

Now we introduce the stochastic variables, α p (t) and β p (t) such that ( 1 h p−1 ≤ τ(t) < h p α p (t) = 0 otherwise ( 1 h = hp β p (t) = 0 otherwise, Pr{α p (t) = 1} = Pr{h p−1 ≤ τ(t) < h p } m X h p − h p−1 βq = = αp, hq q=p

(ie) ui (tk ) = 0 and the synchronization error system (9) reduces to the following system

(6)

Here the time-varying delay τ(t) satisfies τ˙ (t) = 1 and the following probability rule:

4

b

a

˙ T (u)M ω(u)du ˙ ω ≥

1 ωT (a, b)Mω(a, b) b−a

where (9)

p=1

where h p−1 ≤ τ p (t) < h p , K = diag{K1 , K2 , . . . , KN } and h iT z(t) = z1 (t)T z2 (t)T · · · zN (t)T , " #T g(z1 (t))T g(z2 (t))T · · · g(zN (t))T g(z(t)) = . The control packet from the controller to the actuator is possibly lost at a sampling time tk . In this case, the actuator does nothing

   ω(a, b) =  

ω(b) ω(a) Rb 1 b−a a ω(u)du

    , 

  M −M  M M =  ∗  ∗ ∗   M M π2  +  ∗ M 4  ∗ ∗

0 0 0

   

−2M −2M 4M

    .

Definition 1 (Lee and Park, 2014). Let Φ1 , Φ2 , · · · ΦN : Rm → R be a given finite number of functions that have positive values

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

in an open subset D of Rm . Then, a second order reciprocally convex combination of these functions over D is a function of the form 1 Φ + α12 Φ2 + · · · + α12 ΦN : D → Rn , α21 1 N 2 P where the real numbers αi satisfy αi > 0 and i αi = 1.

Definition 2 (Liberzon, 2003). The switching signal σ(t) is said to have average dwell-time τα if there exist two scalars N0 > 0 and τα > 0 such that Nσ (T, t) ≤ N0 +

t−T , τα

t≥T ≥0

where Nσ (T, t) denotes the switching numbers of σ(t) over the interval [T, t) and N0 is called the chatter bound respectively. Definition 3 (Wu and Shi et al., 2013). The CDN (1) without control packet loss are said be exponentially synchronized if the error system (9) is exponentially stable, i.e., there exist two constants α > 0 and β > 0 such that

1, 2, · · · , 9), R1d > 0 (d = 1, 2, 3, 4), B f > 0 ( f = 1, 2, · · · , 12), N11 > 0, N12 > 0, N13 > 0, N14 > 0, M11 > 0, M12 > 0, M13 > 0, M14 > 0, U p , W p , L p , S p , T p , J p (p = 1, 2, · · · , m), symmetric matrices C p , D p (p = 1, 2, · · · , m) and any matrices H, E p (p = 1, 2, · · · , m) and a scalar χ > 0 such that the following conditions hold:  1 2 3 m   Φ1 Φ2 Φ2 Φ2 · · · · · · Φ2   ∗ Φ1 Φ2 0 · · · · · · 0  4 3   2 3 ∗ Φ3 Φ4 · · · · · · 0   ∗   . ..  < 0 .. .. .. .. (12) Π =  . . . .  . .  .    ..  ∗  . Φm ∗ ∗ ∗ 4    ∗ ∗ ∗ ∗ ··· Φm 3 "

e−2γ1 h p W p ∗

−d2 ≤θ≤0

where α and β are the decay rate and decay co-efficient respectively.

−αt

2

ke(t)k ≤ βe

e−2γ1 h p W p

Φ1 = Θ − ΓT1a − ΓT2a

Definition 4 (Wu and Shi et al., 2013). The CDN (1) with control packet loss are said be exponentially synchronized if the switched system (11) is exponentially stable, i.e., there exist two constants α > 0 and β > 0 such that

− ΓT3a − ΓT4a

2

sup ke(θ)k ,

#

Ep

and

ke(t)k2 ≤ βe−αt sup ke(θ)k2 ,

5

"

"

# S p Cp ≥ 0, > 0, ∗ Tp " # S p Dp > 0, ∗ Tp

e−2γ1 λ1 M11 ∗

"

B5 + B6 e−2γ1 d12 M11

e−2γ1 λ2 M12 ∗

"

B7 + B8 e−2γ1 d22 M12

e−2γ1 λ1 M13 ∗

"

B9 + B10 e−2γ1 d12 M13

e−2γ1 λ2 M14 ∗

B11 + B12 e−2γ1 d22 M14

#

#

#

#

(13)

Γ1a Γ2a Γ3a Γ4a ,

(a = 1, 2)

−d2 ≤θ≤0

(14)

where α and β are the decay rate and decay co-efficient respectively.  1  Ω1 + α1 X  09n Φ12 =   α1 Xϕ 07n

3. Main results In this section, we derive sufficient conditions for the exponential stability in the mean-square for error system (9). Then, we discuss the design problem for exponentially mean-square synchronization of CDN (1). Before deriving the main results, the following notations are introduced: (IN ⊗ U)T (IN ⊗ V) (IN ⊗ V)T (IN ⊗ U) + , 2 2

U

=

V

= −

λ1

(

λ

= =

 l  Ω1 + αl X  09n l Φ2 =   αl Xϕ 07n

(

(IN ⊗ U)T + (IN ⊗ V)T , 2 d12 , if ρ1 < 1 0, if ρ1 ≥ 1.

λ2 =

(

d22 , if ρ2 < 1 0, if ρ2 ≥ 1.

d2 , if ρ < 1 0, if ρ ≥ 1.

Theorem 1 Given scalars γ1 > 0, ϕ > 0, 0 ≤ d11 ≤ d12 , and 0 ≤ d21 ≤ d22 , the error system (9) is exponentially meansquare stable, if there exist matrices P1 > 0, G1e > 0 (e =

Φ3p

 p  Ω4  =  ∗  ∗

 −2γ1 λ1 M11  2e  ∗   ∗  ∗

Ω7p Ω5p ∗

0 Ω8p Ω6p

Ω2 09n 0 07n

Ω13 09n 0 07n

0 09n 0 07n

Ωl3 09n 0 07n

    0   l l  , Φ4 =  Ω9 0

    , 

    ,  

0 Ωl10 0

0 Ωl11 0

    ,

(l = 2, 3, · · · , m)

0 −2γ1 λ1

e

∗ ∗

M11

B5 0 2e−2γ1 d12 M11 ∗

0 B6 0 e−2γ1 d12 M11

     

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

 −2γ1 λ2 M12  2e  ∗   ∗  ∗

 −2γ1 λ1 M13  2e  ∗   ∗  ∗

 −2γ1 λ2 M14  2e  ∗   ∗  ∗   e−2γ1 λ1 R11 + 

  e−2γ1 λ2 R12 +  "

"

−2γ1 λ1

e

R13



e−2γ1 λ2 R14 ∗ Ωl1 Ω2 Ωl3

0

B7 0

e−2γ1 λ2 M12 ∗ ∗

2e−2γ1 d22 M12 ∗

0 e−2γ1 λ1 M13 ∗ ∗

2e−2γ1 d12 M13 ∗

2e−2γ1 d22 M14 ∗

g21 −2γ λ e 1 1 M11 2

e−2γ1 d12 R11 +

g22 −2γ λ e 1 2 M12 2

e−2γ1 d22 R14

e−2γ1 d12 M13 0 B12 0 e−2γ1 d22 M14

B1



B4

0 B10 0

B11 0

e−2γ1 λ2 M14 ∗ ∗

B3 e−2γ1 d12 R13

e−2γ1 d22 M12

B9 0

0



0 B8 0

g21 −2γ d e 1 12 M13 2

B2

#

#

−2γ1 d22

e

R12 +

g22 −2γ d e 1 22 M14 2

≥ 0,

> 0,      

> 0,      

> 0,      

   ≥ 0,    ≥ 0,

= α1 (e−2γ1 h1 W1 − E1 ) + 2αl Xϕ, 2

π L1 ), 4 1 2 π2 + αl 2 = α1 e−2γ1 h1 L1 e−2γ1 hl 2 ω1 (hl − h2l−1 )

p=1



Θ310

=

Θ44

=

Θ55 Θ56

= −e−2γ1 d1 G14 − e−2γ1 d2 N12 , = e−2γ1 d2 N12 , Θ66 = −e−2γ1 d2 G15 − e−2γ1 d2

=

−α p+1 e−2γ1 h p+1 (L p+1 +

Θ99

=

Θ910

=

π2 1 Lp , 2 ωp

= −α p E p + e−2γ1 h p W p , Ω8p = α p e−2γ1 h p

Ωl9

= αl e−2γ1 hl Wl − El , Ωl10 = αl El + αl e−2γ1 hl

π2 π2 1 Ll ), Ωl11 = αl e−2γ1 hl Ll , 4 2 ωl < 0, where

× (Ll − Θ = (Θi j )18×18

−α1 e−2γ1 h1 W1 + α1 e−2γ1 h1 C1 − α1 e−2γ1 h1 π2 ×(L1 + L1 ), Θ14 = Hc(G ⊗ A), 4 e−2γ1 d2 N11 , Θ111 = P1 − H, Θ118 = H − χV, −(1 − ρ1 )e−2γ1 λ1 G11 − (e−2γ1 λ1 R11 g2 + 1 e−2γ1 λ1 M11 ) + B1 + BT1 − (e−2γ1 d12 R11 2 g21 −2γ1 d12 + e M13 ), Θ27 = (e−2γ1 λ1 R11 2 g2 + 1 e−2γ1 λ1 M11 ) − BT1 , Θ28 = −B1 2 g2 +(e−2γ1 d12 R11 + 1 e−2γ1 d12 M13 ), 2 −(1 − ρ2 )e−2γ1 λ2 G12 − (e−2γ1 λ2 R12 g2 + 2 e−2γ1 λ2 M12 ) + BT2 + B2 − (e−2γ1 d22 R12 2 g22 −2γ1 d22 M14 ), Θ39 = (e−2γ1 λ2 R12 + e 2 g2 + 2 e−2γ1 λ2 M12 ) − BT2 , 2 g2 −B2 + (e−2γ1 d22 R12 + 2 e−2γ1 d22 M14 ), 2 −(1 − ρ)e−2γ1 λG13 , Θ411 = H T cϕ(G ⊗ A),

=

Θ88

Ω7p

2 e−2γ1 h p ω2p J p + α1 U1 h2p − h2p−1

Θ33

π2 L p ) + α p+1 e−2γ1 h p U p+1 4 −α p+1 e−2γ1 h p+1 W p+1 + α p+1 e−2γ1 h p+1 C p+1

π2 L p+1 ), 4 1 2 = −α p π2 L p e−2γ1 h p Jp, − αp 2 ωp h p − h2p−1

p=1

αp

= =

=

= −α p U p e−2γ1 h p + α p e−2γ1 h p W p − α p e−2γ1 h p D p

m X

Θ16 Θ22

Θ77

+E Tp + α p e−2γ1 h p (D p − C p ), −α p e−2γ1 h p (L p −

Ω6p

+G17 + g21 R13 + g22 R14 − e−2γ1 d2 N11 + d22 N13 m X αpωpS p +(d2 − d1 )2 N14 − χU +

(16)

×Jl (hl − hl−1 ), Ω4p = −2α p e−2γ1 h p W p + E p

Ω5p

= 2γ1 P1 + G11 + G12 + G13 + G14 + G15 + G16

> 0, (15)

≥ 0,

= α1 E1 + α1 e−2γ1 h1 (L1 −

Θ11

6

×N11 − e−2γ1 d2 N12 , −(e−2γ1 d11 G16 + e−2γ1 d11 G18 − (e−2γ1 λ1 R11 g2 + 1 e−2γ1 λ1 M11 ), Θ78 = B1 , 2 −e−2γ1 d12 G17 − e−2γ1 d12 G18 − (e−2γ1 d12 R11 g2 +e−2γ1 d12 1 M13 ), 2 g2 e−2γ1 d21 G19 − (e−2γ1 d12 R12 + 2 e−2γ1 λ2 M12 ), 2 B2 , Θ1010 = −e−2γ1 d22 G19 − (e−2γ1 d22 R12 g2 +e−2γ1 d22 2 M14 ), Θ1111 = g21 R11 + g22 R12 2 g4 g4 +d22 N11 + (d2 − d1 )2 N12 + 1 M11 + 2 M12 4 4 m X g41 g42 + M13 + M14 + α p ω2p (W p + L p ) 4 4 p=1

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

+

m X

αpωpT p +

p=1

Θ1118 Θ1313 Θ1415 Θ1616 Θ1717 ωp Γ11

=

m X

αp Jp[

p=1

−2γ1 d2

Hϕ, Θ1212 = −e

=

=

=

=

=

=

=

+ +

9 elements

 0 .....0 −In 0n .....0n  |n {z }n | {z }   8 elements 15 elements  0n 0n g2 In 0n .....0n −In 0n .....0n  | {z } | {z } 7 elements

    ,  

 0n .....0n g2 In 0n .....0n −In 0n .....0n  | | {z } | {z }  {z }  8 elements 8 elements 6 elements  0 .....0 −In 0n .....0n  |n {z }n | {z } 7 elements

 0 .....0 I 0 .....0  |n {z }n n |n {z }n   13 elements 10 elements  0n .....0n −g1 In 0n .....0n In 0n .....0n  | {z } | {z } | {z } 9 elements

 0n .....0n In 0n .....0n  0n − g1 In | {z } | {z }   11 elements 10 elements  0 .....0 I 0 .....0  |n {z }n n |n {z }n 9 elements

    ,  

 0 .....0 I 0 .....0  |n {z }n n |n {z }n   8 elements 15 elements  0n .....0n −g2 In 0n .....0n In 0n .....0n  | {z } | {z } | {z } 9 elements

Γ42

+

9 elements

8 elements

6 elements

7 elements

 0n .....0n In 0n .....0n  0n 0n − g2 In | {z } | {z }   12 elements 8 elements  0 .....0 I 0 .....0 n n n n n  | {z } | {z } 16 elements

7 elements

    ,  

+ + + +

    ,  

    ,  

V13 (t) =g1

    ,  

V1 (t) ≤ e

V1 (tk ),

t ∈ [tk , tk+1 ).

(17)

e2γ1 s zT (s)G13 z(s)ds

t−d(t) Z t

e2γ1 s zT (s)G14 z(s)ds

t−d1 Z t

11 X b=1

V1b (t),

e2γ1 s zT (s)G17 z(s)ds

t−d12 Z t−d11

t−d12 Z t−d21

t−d22 Z −d11 −d12

V14 (t) =g1

Z

−d11

−d12

+ g2

V16 (t) =d2

Z

0

−d2

Z

e2γ1 s zT (s)G18 z(s)ds e2γ1 s zT (s)G19 z(s)ds, Z

t

e2γ1 s z˙T (s)R11 z˙(s)dsdϑ

t+ϑ Z −d21 −d22 Z t

Z

Z

0

−d2

Z

g21 2

Z

V18 (t) =

g21 2

Z

V19 (t) =

m X p=1

t

−d22 t+ϑ t 2γ1 s T

e

e2γ1 s zT (s)R14 z(s)dsdϑ,

z˙ (s)N11 z˙(s)dsdϑ

t+ϑ

Z

t

−d1 −d2

Z

t

e2γ1 s z˙T (s)N12 z˙(s)dsdϑ,

t+ϑ

e2γ1 s zT (s)N13 z(s)dsdϑ t+ϑ

−d11 −d12 g22

+

e2γ1 s z˙T (s)R12 z˙(s)dsdϑ, t+ϑ

Z

Z

+ (d2 − d1 )

V17 (t) =

t

e2γ1 s zT (s)R13 z(s)dsdϑ

t+ϑ Z −d21

+ (d2 − d1 )

2 −d11

−d12 g22

+ (18)

e2γ1 s zT (s)G16 z(s)ds

Z

Proof: Consider the following LKF for system (9): V1 (t) =

e2γ1 s zT (s)G15 z(s)ds

t−d2 Z t

+ g2

V15 (t) =d2

    ,  

t−d2 (t) Z t

t−d11 t

and the remaining terms of Θi, j are zero. The desired control gain is given by K = H −1 X. Moreover, the LKF V1 (t) satisfies −2γ1 (t−tk )

e2γ1 s zT (s)G12 z(s)ds

+

 0n .....0n g1 In 0n .....0n −In 0n .....0n  | | {z } | {z }  {z }  6 elements 10 elements 6 elements  0 .....0 −In 0n .....0n  |n {z }n | {z }

14 elements

Γ41

V11 (t) =e2γ1 t zT (t)P1 z(t), Z t V12 (t) = e2γ1 s zT (s)G11 z(s)ds t−d1 (t) Z t

= h p − h p−1 ,   0 .....0 −In 0n .....0n   |n {z }n | {z }     , 13 elements 10 elements =  0n .....0n −In 0n .....0n   0n g1 In | {z } | {z } 

7 elements

Γ32

where

= −e−2γ1 λ2 R14 , , Θ1617 = −B4 , = −e−2γ1 d22 R14 Θ1818 = −χI,

16 elements

Γ31

] − 2Hϕ,

2

= −e N14 , Θ1414 = −e−2γ1 λ1 R13 , = −B3 , Θ1515 = −e−2γ1 d12 R13 ,

13 elements

Γ22

h2p−1

N13 ,

14 elements

Γ21

2



−2γ1 d2

12 elements

Γ12

h2p

7

2

α p (t)

Z

Z

−d11 ν −d21

−d22 Z ν

Z

−d1

−d2 Z t

Z

Z

Z

−d12 t+ϑ −d21 Z ν

e2γ1 s zT (s)N14 z(s)dsdϑ,

t+ϑ

e2γ1 s z˙T (s)M11 z˙(s)dsdϑdν

t+ϑ −d21

ν t

t

Z

t

e2γ1 s z˙T (s)M13 z˙(s)dsdϑdν Z

t

−d22 −d22 t+ϑ  Z t−h p−1 2γ1 s T

e

t−h p

e2γ1 s z˙T (s)M12 z˙(s)dsdϑdν,

t+ϑ

e2γ1 s z˙T (s)M14 z˙(s)dsdϑdν,

z (s)U p z(s)ds

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

+ ωp

Z

t−h p−1

t−h p

V110 (t) =

m X

α p (t)

m X

α p (t)

t−h p−1

t−h p

p=1

V111 (t) =

Z

p=1

Z

Z

−h p−1

−h p

t s

Z

Z

ν

 e2γ1 u z˙T (u)(W p + L p )˙z(u)duds ,

t s

0

 e2γ1 u zT (u)S p z(u)

t

e2γ1 u z˙T (u)J p z˙(u)dudϑdν. t+ϑ

Setting d12 − d1 (t) d2 (t) − d21 d1 (t) − d11 , ψ= , φ= , g1 g1 g2 d22 − d2 (t) ζ= . g2

δ=

Define the infinitesimal operator L of V(z(t)) defined as follows: 1 LV(z(t)) = lim+ {E{V(z(t + c))|z(t)} − V(z(t))}. c→0 c Then 11 X

E{LV1 (t)} =

E{LV1b (t)},

(19)

b=1

  E{LV11 (t)} = E 2e2γ1 t zT (t)P1 z˙(t) + 2γ1 e2γ1 t zT (t)P1 z(t) ,

(20)



E{LV12 (t)} ≤ E e2γ1 t zT (t)G11 z(t) − (1 − ρ1 )e2γ1 t e−2γ1 λ1 × zT (t − d1 (t))G11 z(t − d1 (t)) + e2γ1 t zT (t)G12

× z(t) − (1 − ρ2 )e2γ1 t e−2γ1 λ2 zT (t − d2 (t))G12 2γ1 t T

× z(t − d2 (t)) + e

z (t)G13 z(t) − (1 − ρ)

2γ1 t −2γ1 λ T

e

z (t − d(t))G13 z(t − d(t))

+ e2γ1 t zT (t)G14 z(t) − e2γ1 t e−2γ1 d1 zT (t − d1 ) 2γ1 t T

× G14 z(t − d1 ) + e

2γ1 t

z (t)G15 z(t) − e

× e−2γ1 d2 zT (t − d2 )G15 z(t − d2 ) + e2γ1 t zT (t) 2γ1 t −2γ1 d11 T

× G16 z(t) − e

e

z (t − d11 )G16 z(t − d11 )

+ e2γ1 t zT (t)G17 z(t) − e2γ1 t e−2γ1 d12 zT (t − d12 )

× G17 z(t − d12 ) + e2γ1 t e−2γ1 d11 zT (t − d11 )G18

z(t − d11 ) − e2γ1 t e−2γ1 d12 zT (t − d12 )G18 z(t − d12 ) + e2γ1 t e−2γ1 d21 zT (t − d21 )G19 z(t − d21 ) − e2γ1 t  × e−2γ1 d22 zT (t − d22 )G19 z(t − d22 ) , 

t−d11

1 z˙(s)ds − e2γ1 t e−2γ1 d12 ψ t−d1 (t) Z t−d1 (t) Z t−d1 (t) × z˙T (s)dsR11 z˙(s)ds + g22 e2γ1 t t−d12

Z 1 2γ1 t −2γ1 λ2 t−d21 T T z˙ (s)ds × z˙ (t)R12 z˙(t) − e e φ t−d2 (t) Z t−d21 Z t−d2 (t) 1 × R12 z˙(s)ds − e2γ1 t e−2γ1 d22 ζ t−d2 (t) t−d22 Z t−d2 (t)  × z˙T (s)dsR12 z˙(s)ds , (22)

E{LV14 (t)} ≤ E



t−d22

g21 e2γ1 t zT (t)R13 z(t) Z

× zT (s)dsR13 ×

Z

t−d1 (t)

1 − e2γ1 t e−2γ1 λ1 δ

t−d11 t−d1 (t)

T

z(s)ds −

z (s)dsR13 t−d12

t−d1 (t) t−d12

1 2γ1 t −2γ1 λ2 e e φ

× zT (t)R14 z(t) − Z

Z

t−d21

t−d11

t−d1 (t)

1 2γ1 t −2γ1 d12 e e ψ z(s)ds + g22 e2γ1 t Z

t−d21

zT (s)ds

t−d2 (t)

1 z(s)ds − e2γ1 t e−2γ1 d22 ζ t−d2 (t) Z t−d2 (t)  × zT (s)dsR14 z(s)ds ,

× R14

Z

Z

t−d2 (t)

t−d22

(23)

t−d22

where

×e

Z

× z˙T (s)dsR11 t−d12

  + z˙T (u)T p z˙(u) duds ,

Z

8

1 E{LV13 (t)} ≤ E g21 e2γ1 t z˙T (t)R11 z˙(t) − e2γ1 t e−2γ1 λ1 δ

Z

(21)

t−d11 t−d1 (t)

 E{LV15 (t)} ≤ E d22 e2γ1 t z˙T (t)N11 z˙(t) − e2γ1 t e−2γ1 d2 [zT (t)

− zT (t − d2 )]N11 [z(t) − z(t − d2 )] + (d2 − d1 )2

× e2γ1 t z˙T (t)N12 z˙(t) − e2γ1 t e−2γ1 d2 [zT (t − d1 )  − zT (t − d2 )]N12 [z(t − d1 ) − z(t − d2 )] , (24) Z  E{LV16 (t)} ≤ E d22 e2γ1 t zT (t)N13 z(t) − e2γ1 t e−2γ1 d2 × dsN13

Z

t−d2

× N14 z(t) − e Z

t−d1 t−d2

 g4

zT (s) t−d2

t

z(s)ds + (d2 − d1 )2 e2γ1 t zT (t)

2γ1 t −2γ1 d2

×

t

e

 z(s)ds ,

Z

t−d1

zT (s)dsN14

t−d2

(25)

g21 (d12 − d1 (t)) 4 2 Z Z t−d11 g2 −d11 e2γ1 s z˙T (s)M11 z˙(s)dsdν − 1 × 2 −d1 (t) t−d1 (t) Z Z t−d11 g21 −d1 (t) 2γ1 s T e z˙ (s)M11 z˙(s)dsdν − × 2 −d12 t+ν Z t−d1 (t) g4 × e2γ1 s z˙T (s)M11 z˙(s)dsdν + 2 e2γ1 t z˙T (t) 4 t+ν

E{LV17 (t)} = E

1 2γ1 t T

e

z˙ (t)M11 z˙(t) −

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

Z t−d21 g22 (d22 − d2 (t)) e2γ1 s z˙T (s) 2 t−d2 (t) Z Z g22 −d21 t−d21 2γ1 s T × M12 z˙(s)ds − e z˙ (s)M12 2 −d2 (t) t+ν Z Z g2 −d2 (t) t−d2 (t) 2γ1 s T × z(s)dsdν − 2 e z˙ (s)M12 2 −d22 t+ν  × z˙(s)dsdν Z  g4 g21 δ 2γ1 t −2γ1 λ1 t−d11 1 2γ1 t T e z˙ (t)M11 z˙(t) − e e ≤E 4 2 ψ t−d1 (t) Z −d11 Z t−d11 1 × z˙T (s)dsM11 z˙(s)ds − 2 e−2γ1 λ1 ψ −d1 (t) t−d1 (t) Z t−d11 Z −d11 Z t−d11 z˙(s)dsdν z˙T (s)dsdνM11 × × M12 z˙(t) −

t+ν

1 − 2 e−2γ1 d12 δ Z −d1 (t) Z ×

Z

−d1 (t)

−d12 t−d1 (t)

Z

−d1 (t) t+ν t−d1 (t) T

z˙ (s)dsdνM11

t+ν

g4 z˙(s)dsdν + 2 e2γ1 t z˙T (t)M12 4 −d12 t+ν Z t−d21 2 g φ z˙T (s)dsM12 × z˙(t) − 2 e2γ1 t e−2γ1 λ2 2 ζ t−d2 (t) Z −d21 Z t−d21 Z t−d21 1 z˙T (s) z˙(s)ds − 2 e−2γ1 λ2 × ζ −d2 (t) t+ν t−d2 (t) Z −d21 Z t−d21 1 × dsdνM12 z˙(s)dsdν − 2 e−2γ1 d22 φ −d2 (t) t+ν Z −d2 (t) Z t−d2 (t) Z −d2 (t) Z t−d2 (t) × z˙T (s)dsdνM12 −d22 t+ν −d22 t+ν  × z˙(s)dsdν , (26) E{LV18 (t)} = E

 g4

× − −

1 2γ1 t T

4 Z

e

z˙ (t)M13 z˙(t) −

t−d1 (t)

g21 (d1 (t) − d11 ) 2

e2γ1 s z˙T (s)M13 z˙(s)dsdν

t−d12 2 Z −d11 g1

2

Z

e2γ1 s z˙T (s)M13 z˙(s)dsdν

e

−d12

z˙ (s)M13 z˙(s)dsdν

t−d12

g22

g42 2γ1 t T

e z˙ (t)M14 z˙(t) − (d2 (t) − d21 ) 4 2 Z t−d2 (t) × e2γ1 s z˙T (s)M14 z˙(s)dsdν +



t−d22 Z g22 −d21

2

−d2 (t)

Z

t+ν

t−d2 (t)

e2γ1 s z˙T (s)M14 z˙(s)dsdν

Z Z  g2 −d2 (t) t+ν 2γ1 s T − 2 e z˙ (s)M14 z˙(s)dsdν 2 −d22 t−d22  g4 g2 ψ ≤ E 1 e2γ1 t z˙T (t)M13 z˙(t) − 1 e2γ1 t e−2γ1 d12 4 2 δ

Z

t−d1 (t)

z˙T (s)dsM13

t−d12

Z

t−d1 (t)

z˙(s)ds

t−d12 Z t+ν

Z 1 2γ1 t −2γ1 λ1 −d11 z˙T (s)dsdν e e ψ2 −d1 (t) t−d1 (t) Z −d11 Z t+ν 1 × M13 z˙(s)dsdν − 2 e2γ1 t δ −d1 (t) t−d1 (t) Z −d1 (t) Z t+ν × e−2γ1 d12 z˙T (s)dsdνM13 −

×

Z

−d1 (t)

−d12

−d12 t+ν

Z

t−d12

z˙(s)dsdν + t−d12

g42 2γ1 t T e z˙ (t)M14 4

Z t−d2 (t) g2 ζ z˙T (s)ds × z˙(t) − 2 e2γ1 t e−2γ1 d22 2 φ t−d22 Z −d21 Z t−d2 (t) 1 × M14 z˙(s)ds − 2 e2γ1 t e−2γ1 λ2 ζ −d2 (t) t−d22 Z t+ν Z −d21 Z t+ν × z˙T (s)dsdνM14 z˙(s) −d2 (t)

t−d2 (t)

t−d2 (t)

Z −d2 (t) Z t+ν 1 × dsdν − 2 e2γ1 t e−2γ1 d22 z˙T (s) φ −d22 t−d22 Z −d2 (t) Z t+ν  × dsdνM14 z˙(s)dsdν , (27) −d22

E{LV19 (t)} ≤ E

m X p=1

t−d22

 α p e2γ1 t e−2γ1 h p−1 zT (t − h p−1 )U p

× z(t − h p−1 ) − e2γ1 t e−2γ1 h p zT (t − h p )U p

× z(t − h p ) + ω2p e2γ1 t z˙T (t)(W p + L p )˙z(t) Z t−h p−1 2γ1 t −2γ1 h p z˙T (s)(W p + L p ) − ωpe e t−h p



× z˙(s)ds ,

(28)

m X

  α p e2γ1 t ω p zT (t)S p z(t) + ω p z˙T (t)T p z˙(t)



α p e2γ1 t e−2γ1 h p

E{LV110 (t)} ≤ E

t+ν

−d1 (t) t−d1 (t) Z Z g21 −d1 (t) t+ν 2γ1 s T

2

×

9

p=1



E{LV111 (t)} ≤ E

m X p=1

Z

t−h p−1

t−h p

m X

×

t−h p−1

−h p−1

−h p

zT (s)S p z(s)ds

t−h p

 z˙T (s)T p z˙(s)ds ,

α p e2γ1 t z˙T (t)J p z˙(t)

p=1

Z

Z

Z

t

t+ν

 h2

p

2

(29)



h2p−1  2



m X

αp

p=1

 e−2γ1 h p z˙T (s)J p z˙(s)dsdν . (30)

From Lemma 1, we can find that there exist matrices B1 , B2 , B3 , B4 such that (16) holds Z Z t−d11 1 2γ1 t −2γ1 λ1 t−d11 T z˙ (s)dsR11 z˙(s)ds − e e δ t−d1 (t) t−d1 (t)

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21



1 2γ1 t −2γ1 d12 e e ψ

Z

t−d1 (t)

Z

z˙T (s)dsR11

t−d12

t−d1 (t)

"

t−d12

Z

t−d11 g2 δ × z˙(s)ds − 1 e2γ1 t e−2γ1 λ1 z˙T (s)dsM11 2 ψ t−d1 (t) Z Z t−d11 g21 ψ 2γ1 t −2γ1 d12 t−d1 (t) e e z˙(s)ds − × 2 δ t−d12 t−d1 (t) Z t−d1 (t) × z˙T (s)dsM13 z˙(s)ds

t−d12

"

g2 e−2γ1 λ1 R11 + 21 e−2γ1 λ1 M11

T      B1 g2 e−2γ1 d12 R11 + 21 e−2γ1 d12 M13



 Z t−d11  z˙(s)ds   1 (t) ×  Z t−d t−d1 (t)   z˙(s)ds t−d12

and

Z

1 − e2γ1 t e−2γ1 λ2 φ

1 − e2γ1 t e−2γ1 d22 ζ −

t−d21

     

T

z˙ (s)dsR12

t−d2 (t) Z t−d2 (t)

g22 φ 2γ1 t −2γ1 λ2 e e 2 ζ

Z

T

t−d2 (t)

"



zT (s)dsR13

t−d1 (t) Z t−d1 (t)

z˙(s)ds

t−d22 Z t−d21

− z˙(s)ds

Z

B4 e−2γ1 d22 R14

−d11 −d1 (t)

1 −2γ1 d12 e δ2

−d12

B2

Z

z(s)ds t−d2 (t)

 Z t−d21  #   t−d (t) z(s)ds  Z t−d22 (t)   z(s)ds

Z

g22 −2γ1 d22 M14 2e

×

  

(32)

t+ν −d1 (t)

−d12

z˙T (s)dsdνM11

Z

t−d1 (t)

"

and

z(s)ds

Z

−d11

−d1 (t)

z˙T (s)dsdνM11

t+ν

e−2γ1 λ1 M11 ∗

T     

B5 + B6 e−2γ1 d12 M11

     

(34)

1 − 2 e−2γ1 λ2 ζ

Z

−d21 −d2 (t)

t+ν

Z

t−d21

T

z˙ (s)dsdνM12 t+ν

Z

−d2 (t)

1 −2γ1 d22 e φ2 −d22 Z −d2 (t) Z t−d2 (t) × M12 z˙(s)dsdν

× z˙(s)dsdν − −d22

t+ν

Z

Z

t−d11

z˙(s)dsdν

t+ν −d1 (t)

Z

−d12

t−d1 (t)

t+ν

#

 Z −d11 Z t−d11  z˙(s)dsdν   1 (t) Zt+ν ×  Z −d −d1 (t) t−d1 (t)   z˙(s)dsdν −d12

z(s)ds

t−d12

t−d11

t+ν

t−d11

t−d1 (t) Z t−d1 (t)

Z

× z˙(s)dsdν  Z −d11 Z t−d11  z˙(s)dsdν   1 (t) Zt+ν ≤ −  Z −d −d1 (t) t−d1 (t)   z˙(s)dsdν

t−d2 (t)

1 − e2γ1 t e−2γ1 d12 zT (s)dsR13 ψ t−d12  Z t−d11 T   z(s)ds    Z t−d1 (t)  ≤ −   t−d1 (t)   z(s)ds  t−d12

e−2γ1 λ2 R14 ∗

1 −2γ1 λ1 e ψ2

t−d22

1 − e2γ1 t e−2γ1 λ1 δ

t−d2 (t)

t−d21

t−d22

z˙(s)ds

t−d2 (t) Z t−d2 (t)

∗ e−2γ1 d22 R12 +  Z t−d21    z˙(s)ds    Z t−d2 (t)  ×   t−d2 (t)   z˙(s)ds  t−d11

z (s)dsR14

Z

(33)

We can find the upper bounds of the second order reciprocally convex combinations in (26) and (27) for the matrices B5 , B6 , B7 , B8 , B9 , B10 , B11 , B12 satisfying (15) as

g22 −2γ1 λ2 M12 2e

Z

T

t−d22

t−d22

and

t−d21

     

Z t−d2 (t) Z t−d2 (t) 1 zT (s)dsR14 z(s)ds − e2γ1 t e−2γ1 d22 ζ t−d22 t−d22 T  Z t−d21   z(s)ds    2 (t) ≤ −  Z t−d  t−d2 (t)   z(s)ds 

#

Z t−d2 (t) Z t−d2 (t) g2 ζ − 2 e2γ1 t e−2γ1 d22 z˙T (s)dsM14 z˙(s)ds 2 φ t−d22 t−d22 T  Z t−d21   z˙(s)ds     2 (t) ≤ −  Z t−d  t−d2 (t)   z˙(s)ds 

  e−2γ1 λ2 R12 + 

Z

1 − e2γ1 t e−2γ1 λ2 φ

t−d21

z˙T (s)dsM12

 Z t−d11  #   t−d (t) z(s)ds  Z t−d11 (t)  z(s)ds t−d12

(31)

z˙ (s)dsR12

t−d22 Z t−d21

B3 e−2γ1 d12 R13

and

t−d12

 Z t−d11  z˙(s)ds   1 (t) ≤ −  Z t−d t−d1 (t)   z˙(s)ds

e−2γ1 λ1 R13 ∗

10

Z

      Z

(35)

−d21

−d2 (t) t−d2 (t) T

t+ν

Z

t−d21

t+ν

z˙ (s)dsdν

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

 Z −d21 Z t−d21  z˙(s)dsdν   Z −d2 (t) Zt+ν ≤ −  −d2 (t) t−d2 (t)   z˙(s)dsdν −d22

t+ν

"

e−2γ1 λ2 M12 ∗

×

T     

 Z −d21 Z t+ν  z˙(s)dsdν   t−d2 (t) Z ×  Z−d−d2 (t) t+ν 2 (t)   z˙(s)dsdν −d22

#

B7 + B8 e−2γ1 d22 M12

 Z −d21 Z t−d21  z˙(s)dsdν   2 (t) Zt+ν ×  Z −d −d2 (t) t−d2 (t)   z˙(s)dsdν −d22

and

1 − 2 e−2γ1 λ1 ψ

Z

−d11

−d1 (t)

× z˙(s)dsdν − ×

Z

−d1 (t) Z

−d12

t+ν T

z˙ (s)dsdνM13

t−d1 (t)

1 −2γ1 d12 e δ2

t+ν

Z

−d12

×

−d1 (t) −d12

Z

t+ν

Z

(36)

−d11

−d1 (t)

Z

z˙T (s)dsdνM13

t−d12

and

t−d12

e−2γ1 λ1 M13 ∗

T     

B9 + B10 e−2γ1 d12 M13



Z

−d21

−d2 (t)

Z

t−d12

t+ν T

z˙ (s)dsdνM14

     

(37)

t−d2 (t)

Z

−d21

−d2 (t)

Z

t+ν t−d2 (t)

Z −d2 (t) Z t+ν 1 × z˙(s)dsdν − 2 e−2γ1 d22 z˙T (s)dsdν φ −d22 t−d22 Z −d2 (t) Z t+ν × M14 z˙(s)dsdν −d22

t−d22

 Z −d21 Z t+ν  z˙(s)dsdν   t−d2 (t) Z ≤ −  Z−d−d2 (t) t+ν 2 (t)   z˙(s)dsdν −d22

×

"

t−d22

e−2γ1 λ2 M14 ∗

T     

B11 + B12 e−2γ1 d22 M14

Z

t−h p−1

Ep −E p + e−2γ1 h p W p −2γ 1hp Wp −e

# (39)

e−2γ1 h p z˙T (s)L p z˙(s)ds

t−h p

p=1

m X p=1

α p e−2γ1 h p N pT (t)H p N p (t)

N pT (t) = [zT (t − h p−1 ) zT (t − h p )

and 1 − 2 e−2γ1 λ2 ζ

αpωp

where

 Z −d11 Z t+ν  z˙(s)dsdν   t−d1 (t) Z ×  Z−d−d1 (t) t+ν 1 (t)   z˙(s)dsdν −d12

m X

≤ #

(38)

t−h p

 T  z(t − h p−1 )    ≤ αi  z(t − τ p (t))    z(t − h p ) " −2γ1 h p −e Wp e−2γ1 h p W p − E p ∗ −2e−2γ1 h p W p + E p + E T × p ∗ ∗    z(t − h p−1 )  !   ×  z(t − τ p (t))  .   z(t − h p )

t−d1 (t)

z˙(s)dsdν

t−d22

     . 

According to Park, Ko and Jeong (2011), then we get Z t−h p−1 −α p ω p z˙(s)T e−2γ1 h p W p z˙(s)ds

t+ν

t−d12

 Z −d11 Z t+ν  z˙(s)dsdν   Z−d1 (t) Zt−d1 (t) ≤ −  −d1 (t) t+ν   z˙(s)dsdν "

     

t+ν

Z

11

  L p  H p = −  ∗  ∗

−L p Lp ∗

0 0 0

Z

t−h p−1

(40)

zT (s)ds],

t−h p

   2   L p π   ∗  − 4  ∗

Lp Lp ∗

−2L p −2L p 4L p

    .

In the existing work proposed by Kim, Park, and Jeong (2010) , the following zero equalities with any symmetric matrices C p and D p were considered  0 = α p eγ1 t zT (t − h p−1 )e−2γ1 h p C p z(t − h p−1 ) − zT (t − τ p (t)) Z t−h p−1  e−2γ1 h p zT (s)C p z˙(s)ds , × e−2γ1 h p C p z(t − τ p (t)) − 2 t−τ p (t)



0 = α p eγ1 t zT (t − τ p (t))e−2γ1 h p D p z(t − τ p (t)) − zT (t − h p )e−2γ1 h p Z t−τ p (t)  × D p z(t − h p ) − 2 e−2γ1 h p zT (s)D p z˙(s)ds . t−h p

It follows from the above zero equalities that if (13) hold, then the upper bound of LV110 (t) can be expressed as #

E{LV110 (t)} m X  α p e−2γ1 t ω p zT (t)S p z(t) + zT (t − h p−1 ) ≤E p=1

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

× e−2γ1 h p C p z(t − h p−1 ) + zT (t − τ p (t))e−2γ1 h p −2γ1 h p

T

× (D p − C p )z(t − τ p (t)) − z (t − h p )e  × z(t − h p ) + ω p z˙T (t)T p z˙(t) .

It is noted that Z −h p−1 Z m X − αp −h p

p=1

with

Dp (41)

e−2γ1 h p z˙T (s)J p z˙(s)dsdν

 −2 −2γ1 h p e ω2p zT (t)J p z(t) − ω p zT (t)J p ≤ αp 2 2 − h h p p−1 p=1 Z t−h p−1 Z t−h p−1 zT (s)dsJ p z(t) z(s)ds − ω p × +

Z

t−h p

t−h p−1

zT (s)dsJ p

t−h p

Z

t−h p−1

t−h p

 z(s)ds .

(42)

Moreover, for any appropriately dimensioned matrix H, the following equation holds E{2e

T

T

[z (t)H + ϕ˙z (t)H][−˙z(t) + g(z(t)) + c(G ⊗ A) m X × z(t − d(t)) + Kz(t − τ p (t))] = 0. (43) p=1

Based on (3) ,we have that for any χ > 0 " #T " # #" z(t) z(t) U V r(t) = χ ≤ 0. g(z(t)) g(z(t)) ∗ I

E{LV1 (t)}

" −2γ λ  e 1 1 M11 B5 + B6 T ≤ E{ξ (t) Θ − Γ1 (t) ∗ e−2γ1 d12 M11 " −2γ λ e 1 2 M12 B7 + B8 − ΓT2 (t) ∗ e−2γ1 d22 M12 " −2γ λ e 1 1 M13 B9 + B10 − ΓT3 (t) ∗ e−2γ1 d12 M13 " −2γ λ e 1 2 M14 B11 + B12 − ΓT4 (t) ∗ e−2γ1 d22 M14  − r(t) ξ(t)}, T

(44)

#

#

#

#

Γ1 (t) Γ2 (t) Γ3 (t) Γ4 (t) (45)

 ξT (t) = zT (t) zT (t − d1 (t)) zT (t − d2 (t)) zT (t − d(t)) T

z (t − d1 )

T

t−d1

t−d2 Z t−d21

z (t − d2 )

T

T

z (t − d11 ) z (t − d12 ) Z t zT (t − d21 ) zT (t − d22 ) z˙T (t) zT (s)ds Z

zT (s)ds

t−d2 (t)

Z

zT (s)ds

Z

t−d11

zT (s)ds

t−d1 (t) Z t−d2 (t) t−d22

t−d2 t−d1 (t)

Z

t−d12

zT (s)ds gT (z(t)) ηTm (t)

t−h1 t−h1

zT (s)ds · · · · · ·

t−h2 Z t−hm−1

zT (s)ds

t−hm

| {z }



| {z }

12 elements

9 elements

 n (d2 (t) − d21 )In 0n .....0n −In 0n .....0n  0|n .....0 | {z } | {z }  {z } 8 elements 8 elements 6 elements Γ2 (t) =   0n 0n (d22 − d2 (t))In 0n .....0n −In 0n .....0n | {z }

| {z }

13 elements

7 elements

 0n .....0n In 0n .....0n  0n − (d1 (t) − d11 )In | {z } | {z }  11 elements 10 elements Γ3 (t) =   0n .....0n −(d12 − d1 (t))In 0n .....0n In 0n .....0n | {z }

| {z }

| {z }

9 elements

6 elements

7 elements

6 elements

    , 

    , 

9 elements

    , 

  n In 0n .....0n   0n 0n − (d2 (t) − d21 )In 0|n .....0  {z } | {z }   . 12 elements 8 elements  Γ4 (t) =   n −(d22 − d2 (t))In 0n .....0n In 0n .....0n   0|n .....0 {z } | {z }  | {z }

Moreover, the following condition holds: " −2γ λ e 1 1 M11 B5 + B6 Θ−ΓT1 (t) ∗ e−2γ1 d12 M11 " −2γ λ e 1 2 M12 B7 + B8 −ΓT2 (t) ∗ e−2γ1 d22 M12 " −2γ λ e 1 1 M13 B9 + B10 −ΓT3 (t) ∗ e−2γ1 d12 M13 " −2γ λ e 1 2 M14 B11 + B12 −ΓT4 (t) ∗ e−2γ1 d22 M14

#

#

#

#

Γ1 (t) Γ2 (t) Γ3 (t) Γ4 (t) < 0

(46)

From (45), we have E{LV1 (t)} ≤ e2γ1 t E{ξT (t)Rξ(t)},

(47)

where R= Θ−



zT (s)ds

 n (d1 (t) − d11 )In 0n .....0n −In 0n .....0n  0|n .....0 | {z } | {z }  {z } 10 elements 6 elements 6 elements Γ1 (t) =   0n (d12 − d1 (t))In 0n .....0n −In 0n .....0n



zT (s)ds

t

and

7 elements

Subsitituting (20)-(42) into (19), combining (43) and subtracting (44) from (19), we get

where

Z

 = zT (t − τ1 (t)) zT (t − h1 )

zT (t − τm (t)) zT (t − hm )

t+ν

m X

2γ1 t

ηTm (t)

zT (t − τ2 (t)) zT (t − h2 )

t

t−h p

12

ΓT1 (t)

− ΓT2 (t) − ΓT3 (t)

"

"

"

e−2γ1 λ1 M11 ∗

e−2γ1 λ2 M12 ∗

e−2γ1 λ1 M13 ∗

B5 + B6 e−2γ1 d12 M11 B7 + B8 e−2γ1 d22 M12 B9 + B10 e−2γ1 d12 M13

#

#

#

Γ1 (t) Γ2 (t) Γ3 (t)

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

− ΓT4 (t)  − r(t) .

"

e−2γ1 λ2 M14 ∗

B11 + B12 e−2γ1 d22 M14

#

From the definition of V1 (z(t)), we have

Γ4 (t)

e2γ1 t Ekz(t)k2 {λmin (P1 )} ≤ E{V1 (z(t))} ≤ E{V1 (z(0))},

(49)

E{V1 (z(0))}

From (12), (15), (16), we have E{LV1 (t)} ≤ 0,

13

t ∈ [tk , tk+1 ).

It follows from (48) and the generalized Ito’s formula that Z t E{V1 (z(t))} − E{V1 (z(0))} = E{LV1 (z(s))ds} ≤ 0,

(48)

≤ [λmax (P1 ) + λmax (G14 )X1 + λmax (G15 )X2 + λmax (G16 )X3 + λmax (G17 )X4 + λmax (G18 )X5 + λmax (G19 )X6 + g1 λmax (R11 )X7 + λmax (R12 )X8 + λmax (R13 )X7

+ λmax (R14 )X8 + λmax (N11 )X9 + λmax (N12 )X10 + λmax (N13 )X9 + λmax (N14 )X10 + λmax (M11 )X11 + λmax (M12 )X12 + λmax (M13 )X13 + λmax (M14 )X14 ]

0

which implies E{V1 (z(t))} ≤ E{V1 (z(0))} < ∞, t ≥ 0. # " # " 1 − e−2γ1 d2 1 − e−2γ1 d1 , X2 = , Let X1 = 2γ1 2γ1 " # " # 1 − e−2γ1 d11 1 − e−2γ1 d12 X3 = , X4 = , 2γ1 2γ1 " −2γ1 d11 # e − e−2γ1 d12 X5 = , 2γ1 " −2γ1 d21 # e − e−2γ1 d22 X6 = , 2γ1    d12 e−2γ1 d12 − e−2γ1 d11 d11   , + − X7 =  2γ1 2γ1 4γ12    d22 e−2γ1 d22 − e−2γ1 d21 d21  − X8 =  +  , 2γ1 2γ1 4γ12    d2 e−2γ1 d2 − 1   , + X9 =  2γ1 4γ12    d2 − d1 e−2γ1 d2 − e−2γ1 d1   , X10 =  + 2γ1 4γ12   2  d11 − d11 d12 e−2γ1 d11 d11 − e−2γ1 d11 d12   + X11 =  2γ1 4γ12  2  2  d − d11 e−2γ1 d11 − e−2γ1 d12   , + +  12 4γ1 8γ13   2  d − d21 d22 e−2γ1 d21 d21 − e−2γ1 d21 d22   + X12 =  21 2γ1 4γ12  2  2  d22 − d21 e−2γ1 d21 − e−2γ1 d22  + +   , 4γ1 8γ13   2 2  d11 − d12 e−2γ1 d12 d12 − e−2γ1 d12 d11  + X13 =   2γ1 4γ12   2  d12 − d12 d11 e−2γ1 d12 − e−2γ1 d11   ,  + + 4γ1 8γ13  2   d − d21 d22 e−2γ1 d22 d21 − e−2γ1 d21 d22   X14 =  22 + 2γ1 4γ12   2 2  d22 − d21 e−2γ1 d21 − e−2γ1 d22  + +   , 4γ1 8γ13

sup Ekz(s)k2 .

−d2 ≤s≤0

By using the Definition 3, we get E{kz(t)k2 } ≤ e−γ1 t M sup Ekz(s)k2 , −d2 ≤s≤0

where M =[λmax (P1 ) + λmax (G14 )X1 + λmax (G15 )X2 + λmax (G16 )X3 + λmax (G17 )X4 + λmax (G18 )X5 + λmax (G19 )X6 + g1 λmax (R11 )X7 + λmax (R12 )X8 + λmax (R13 )X7 + λmax (R14 )X8 + λmax (N11 )X9 + λmax (N12 )X10 + λmax (N13 )X9 + λmax (N14 )X10 + λmax (M11 )X11 + λmax (M12 )X12 + λmax (M13 )X13 + λmax (M14 )X14 ]. Hence, it can be concluded that the error system (9) is exponentially stable in the mean square without control packet loss. This completes the proof.  Remark 1 In general, the sampling periods may vary in probabilistic way, such a phenomena of sampling process is called stochastic sampling. In this paper, we study the controller design problem using sampled-data with stochastically varying sampling period. Moreover, the sampling period is allowed to randomly switch between m different values. In equations (9) and (11) variables α p (t) are stochastic variables representing the probabilistic changes of the sampling periods. Theorem 2 Given scalars γ2 < 0, ϕ > 0, 0 ≤ d11 ≤ d12 , and 0 ≤ d21 ≤ d22 , if there exist matrices P2 > 0, G2e > 0 (e = 1, 2, · · · , 9), R2d > 0 (d = 1, 2, 3, 4), B f > 0 ( f = 1, 2, · · · , 12), N21 > 0, N22 > 0, N23 > 0, N24 > 0, M21 > 0, M22 > 0, M23 > 0, M24 > 0, and any matrix H with appropriate dimension and a scalar χ > 0 such that the following conditions hold: ˜1 = Θ ˜ − Γ˜ T1a Φ − Γ˜ T2a

"

"

e−2γ2 λ1 M21 ∗

e−2γ2 λ2 M22 ∗

B5 + B6 e−2γ2 d12 M21 B7 + B8 e−2γ2 d22 M22

#

#

Γ˜ 1a Γ˜ 2a

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

− Γ˜ T3a − Γ˜ T4a

 −2γ2 λ1 M21  2e  ∗   ∗

−2γ2 λ2

e

 −2γ2 λ1 M23  2e  ∗   ∗

 −2γ2 λ2 M24  2e  ∗   ∗

"

e−2γ2 λ2 R24 ∗

M22

B9 + B10 e−2γ2 d12 M23 B11 + B12 e−2γ2 d22 M24 (a = 1, 2)

B5 0

0

B9 0

−2γ2 d22

e

e−2γ2 d12 M23 0 B12 0

2e−2γ2 d22 M24 ∗

e−2γ2 d22 M24

B1

g22 −2γ λ e 2 2 M22 2

e−2γ2 d22 R22 +

B4

e−2γ2 d22 R24

g21 −2γ d e 2 12 M23 2

B2



#

≥ 0,

#

≥ 0,

Γ˜ 21 (50)

    > 0, 

    > 0, 

    > 0, 

    > 0, 

g22 −2γ d e 2 22 M24 2

   ≥ 0,    ≥ 0,

14

  0 .....0 −In 0n 0n   |n {z }n    ,  15 elements =  0n .....0n −In 0n   0n 0n g2 In | {z }  13 elements

Γ˜ 22

  0n .....0n g2 In 0n .....0n −In 0n 0n   |  | {z }  {z }  6 elements =  8 elements  , 0 .....0 −I 0   n n |n {z }n 16 elements

Γ˜ 31

  0 .....0 I 0 .....0   |n {z }n n |n {z }n    , 13 elements 4 elements =  0n .....0n −g1 In 0n .....0n In 0n   | {z } | {z }  7 elements

Γ˜ 32

8 elements

 0n .....0n In 0n .....0n  0n − g1 In | {z } | {z }   11 elements 4 elements  =  0n .....0n In 0n .....0n  | {z } | {z } 14 elements

Γ˜ 41

Γ˜ 42

3 elements

    ,  

  0 .....0 I 0 0   |n {z }n n n n    , 15 elements =  0n .....0n −g2 In 0n .....0n In 0n   | {z } | {z }  9 elements

(51)

e−2γ2 d12 R21 +

e−2γ2 d12 R23

M22

0 B10 0

B11 0



B3

Γ˜ 4a ,

0 B8 0

2e−2γ2 d12 M23 ∗

g21 −2γ λ e 2 1 M21 2

#

Γ˜ 3a

e−2γ2 d12 M21

B7 0 2e−2γ2 d22 M22 ∗

#

0 B6 0

2e−2γ2 d12 M21 ∗

∗ ∗

0 e−2γ2 λ2 M24 ∗ ∗



M24



e−2γ2 λ1 M23 ∗ ∗



e−2γ2 λ1 R23 ∗

e

0



"

−2γ2 λ2

e−2γ2 λ1 M21 ∗ ∗

 −2γ2 λ2 M22  2e  ∗   ∗

  e−2γ2 λ2 R22 + 

"

e−2γ2 λ1 M23 ∗

0



  e−2γ2 λ1 R21 + 

"

6 elements

  0n .....0n In 0n 0n   0n 0n − g2 In |  {z }    12 elements  =   , 0 .....0 I 0   |n {z }n n n 16 elements

where ˜ = (Θ ˜ i j )18×18 , with Θ ˜ 11 = 2γ2 P2 + G21 + G22 + G23 + G24 + G25 + G26 + G27 Θ + g21 R23 + g22 R24 + d22 N23 + (d2 − d1 )2 N24 − χU

(52)

− e−2γ2 d2 N21 , ˜ 14 = Hc(G ⊗ A), Θ ˜ 16 = e−2γ2 d2 N21 , Θ ˜ 111 = P2 − H, Θ −2γ λ ˜ 118 = H − χV, Θ ˜ 22 = −(1 − ρ1 )e 2 1 G21 − (e−2γ2 λ1 R21 Θ g2 g21 −2γ2 λ1 e M21 ) + B1 + BT1 − (e−2γ2 d12 R21 + 1 2 2 × e−2γ2 d12 M23 ),

+

Γ˜ 11

  0n .....0n −In 0n .....0n   | {z } | {z }    , 13 elements 4 elements =  0n .....0n −In 0n .....0n   0n g1 In | {z } | {z }  12 elements

Γ˜ 12

3 elements

 0n .....0n g1 In 0n .....0n −In 0n .....0n  | | {z } | {z }  {z }  4 elements 6 elements =  6 elements 0 .....0 −In 0n .....0n  |n {z }n | {z } 14 elements

3 elements

g21 −2γ2 λ1 e M21 ) − BT1 , 2 g2 = −B1 + (e−2γ2 d12 R21 + 1 e−2γ2 d12 M23 ), 2 g2 = −(1 − ρ2 )e−2γ2 λ2 G22 − (e−2γ2 λ2 R22 + 2 e−2γ2 λ2 M22 ) 2 2 g + BT2 + B2 − (e−2γ2 d22 R22 + 2 e−2γ2 d22 M24 ), 2

˜ 27 = (e−2γ2 λ1 R21 + Θ ˜ 28 Θ     ,  

˜ 33 Θ

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

g22 −2γ2 λ2 M22 ) − BT2 , e 2 g2 ˜ 310 = −B2 + (e−2γ2 d22 R22 + 2 e−2γ2 d22 M24 ), Θ 2 ˜ 44 = −(1 − ρ)e−2γ2 λG23 , Θ ˜ 411 = H T c(G ⊗ A)ϕ, Θ ˜ 55 = −e−2γ2 d1 G24 − e−2γ2 d2 N22 , Θ ˜ 56 = e−2γ2 d2 N22 , Θ

˜ 39 = (e−2γ2 λ2 R22 + Θ

˜ 66 = −e−2γ2 d2 G25 − e−2γ2 d2 N21 − e−2γ2 d2 N22 , Θ

˜ 77 = −(e−2γ2 d11 G26 + e−2γ2 d11 G28 − (e−2γ2 λ1 R21 + Θ ˜ 78 = B1 , × M21 ), Θ −2γ2 d12

˜ 88 = −e Θ

−2γ2 d12

G27 − e

−2γ2 d12

G28 − (e

+ + +

˜ 1111 Θ

˜ 1118 Θ ˜ 1414 Θ

g21 −2γ2 λ1 e 2

V23 (t) = g1

R21 + e

˜ 1616 = −e−2γ2 λ2 R24 , Θ ˜ 1617 = −B4 , Θ ˜ 1717 = −e−2γ2 d22 R24 , Θ ˜ 1818 = −χI, Θ

V24 (t) = g1

t ∈ [tk , tk+1 ).

(53)

Proof: Consider the following LKF for system (10): V2 (t) =

V25 (t) = d2

V26 (t) = d2

V21 (t) = e2γ2 t zT (t)P2 z(t), Z t V22 (t) = e2γ2 s zT (s)G21 z(s)ds t−d1 (t) Z t

e2γ2 s zT (s)G22 z(s)ds

+ +

+ + +

t−d2 (t) Z t

e2γ2 s zT (s)G23 z(s)ds

t−d(t) Z t

e2γ2 s zT (s)G24 z(s)ds

Z

t−d1 t

e2γ2 s zT (s)G25 z(s)ds

t−d2 Z t

t−d11

e2γ2 s zT (s)G26 z(s)ds

Z

e2γ2 s zT (s)G28 z(s)ds e2γ2 s zT (s)G29 z(s)ds,

Z

Z

Z

−d11 −d12

Z

t

e2γ2 s z˙T (s)R21 z˙(s)dsdϑ t+ϑ

Z

−d21

−d22

−d12

Z

−d2

Z

0 −d2

g2 V27 (t) = 1 2

(54)

Z

e2γ2 s zT (s)R23 z(s)dsdϑ

t+ϑ −d21 Z t

Z

t

e2γ2 s z˙T (s)N21 z˙(s)dsdϑ

t+ϑ

Z

−d2

Z

Z

g2 + 2 2

Z

−d11

−d12 −d21

−d22

−d2

t

e2γ2 s z˙T (s)N22 z˙(s)dsdϑ,

t+ϑ

e2γ2 s zT (s)N23 z(s)dsdϑ

Z

−d22

Z

−d1

t

Z

g2 V28 (t) = 1 2

Z

t+ϑ Z −d1

−d11

e2γ2 s zT (s)R24 z(s)dsdϑ,

t+ϑ

−d12 Z g22 −d21

2

e2γ2 s z˙T (s)R22 z˙(s)dsdϑ,

t

−d22

0

t

t+ϑ

Z

−d11

+ (d2 − d1 )

b=1

where

t−d12 Z t−d21

+ (d2 − d1 )

+ V2b (t),

e2γ2 s zT (s)G27 z(s)ds

t−d12 Z t−d11

+ g2

˜ i, j are zero. Then Lyapunov funcand the remaining terms of Θ tional V2 (t) satisfies

8 X

t

+ g2

−2γ2 d12

g21 ˜ 99 = e−2γ2 d21 G29 − (e−2γ2 λ2 R22 M23 ), Θ 2 g2 ˜ 910 = B2 , + 2 e−2γ2 λ2 M22 ), Θ 2 g2 = −e−2γ2 d22 G29 − (e−2γ2 d22 R22 + e−2γ2 d22 2 M24 ), 2 g4 = −g21 R21 + g22 R22 + d22 N21 + (d2 − d1 )2 N22 + 1 M21 4 g42 g41 g42 + M22 + M23 + M24 − 2Hϕ, 4 4 4 ˜ 1212 = −e−2γ2 d2 N23 , Θ ˜ 1313 = −e−2γ2 d2 N24 , = Hϕ, Θ ˜ 1415 = −B3 , Θ ˜ 1515 = −e−2γ2 d12 R23 , = −e−2γ2 λ1 R23 , Θ

V2 (t) ≤ e−2γ2 (t−tk ) V2 (tk ),

15

t−d22

×

˜ 1010 Θ

Z

Z

Z

−d11 ν −d21

ν

t

e2γ2 s zT (s)N24 z(s)dsdϑ, t+ϑ

Z

t

e2γ2 s z˙T (s)M21 z˙(s)dsdϑdν

t+ϑ Z t

e2γ2 s z˙T (s)M22 z˙(s)dsdϑdν,

t+ϑ

ν

Z

t

−d12 t+ϑ ν Z t

−d22

e2γ2 s z˙T (s)M23 z˙(s)dsdϑdν e2γ2 s z˙T (s)M24 z˙(s)dsdϑdν,

t+ϑ

Setting d12 − d1 (t) d2 (t) − d21 d1 (t) − d11 , ψ= , φ= , g1 g1 g2 d22 − d2 (t) ζ= . g2 δ=

Define the infinitesimal operator L of V(z(t)) defined as follows: 1 LV(z(t)) = lim+ {{V(z(t + c))|z(t)} − V(z(t))}. c→0 c

R. Rakkiyappan et al. / Neural Networks 00 (2015) 1–21

Then LV2 (t) =

8 X b=1

− LV1b (t),

(55)

16

where (with $[\cdots]$ denoting the repetition of the immediately preceding bracketed factor)
$$\mathcal{L}V_{21}(t) = 2e^{2\gamma_2 t}z^T(t)P_2\dot z(t) + 2\gamma_2 e^{2\gamma_2 t}z^T(t)P_2 z(t), \tag{56}$$

$$\begin{aligned}
\mathcal{L}V_{22}(t) \le{}& e^{2\gamma_2 t}z^T(t)\big(G_{21}+G_{22}+G_{23}+G_{24}+G_{25}+G_{26}+G_{27}\big)z(t)\\
&- (1-\rho_1)e^{2\gamma_2 t}e^{-2\gamma_2\lambda_1}z^T(t-d_1(t))G_{21}z(t-d_1(t)) - (1-\rho_2)e^{2\gamma_2 t}e^{-2\gamma_2\lambda_2}z^T(t-d_2(t))G_{22}z(t-d_2(t))\\
&- (1-\rho)e^{2\gamma_2 t}e^{-2\gamma_2\lambda}z^T(t-d(t))G_{23}z(t-d(t)) - e^{2\gamma_2 t}e^{-2\gamma_2 d_1}z^T(t-d_1)G_{24}z(t-d_1)\\
&- e^{2\gamma_2 t}e^{-2\gamma_2 d_2}z^T(t-d_2)G_{25}z(t-d_2) - e^{2\gamma_2 t}e^{-2\gamma_2 d_{11}}z^T(t-d_{11})\big(G_{26}-G_{28}\big)z(t-d_{11})\\
&- e^{2\gamma_2 t}e^{-2\gamma_2 d_{12}}z^T(t-d_{12})\big(G_{27}+G_{28}\big)z(t-d_{12}) + e^{2\gamma_2 t}e^{-2\gamma_2 d_{21}}z^T(t-d_{21})G_{29}z(t-d_{21})\\
&- e^{2\gamma_2 t}e^{-2\gamma_2 d_{22}}z^T(t-d_{22})G_{29}z(t-d_{22}), \tag{57}
\end{aligned}$$

$$\begin{aligned}
\mathcal{L}V_{23}(t) \le{}& g_1^2 e^{2\gamma_2 t}\dot z^T(t)R_{21}\dot z(t)
- \frac{1}{\delta}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_1}\Big[\int_{t-d_1(t)}^{t-d_{11}}\dot z(s)\,ds\Big]^T R_{21}\Big[\cdots\Big]
- \frac{1}{\psi}e^{2\gamma_2 t}e^{-2\gamma_2 d_{12}}\Big[\int_{t-d_{12}}^{t-d_1(t)}\dot z(s)\,ds\Big]^T R_{21}\Big[\cdots\Big]\\
&+ g_2^2 e^{2\gamma_2 t}\dot z^T(t)R_{22}\dot z(t)
- \frac{1}{\phi}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_2}\Big[\int_{t-d_2(t)}^{t-d_{21}}\dot z(s)\,ds\Big]^T R_{22}\Big[\cdots\Big]
- \frac{1}{\zeta}e^{2\gamma_2 t}e^{-2\gamma_2 d_{22}}\Big[\int_{t-d_{22}}^{t-d_2(t)}\dot z(s)\,ds\Big]^T R_{22}\Big[\cdots\Big], \tag{58}
\end{aligned}$$

$$\begin{aligned}
\mathcal{L}V_{24}(t) \le{}& g_1^2 e^{2\gamma_2 t}z^T(t)R_{23}z(t)
- \frac{1}{\delta}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_1}\Big[\int_{t-d_1(t)}^{t-d_{11}}z(s)\,ds\Big]^T R_{23}\Big[\cdots\Big]
- \frac{1}{\psi}e^{2\gamma_2 t}e^{-2\gamma_2 d_{12}}\Big[\int_{t-d_{12}}^{t-d_1(t)}z(s)\,ds\Big]^T R_{23}\Big[\cdots\Big]\\
&+ g_2^2 e^{2\gamma_2 t}z^T(t)R_{24}z(t)
- \frac{1}{\phi}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_2}\Big[\int_{t-d_2(t)}^{t-d_{21}}z(s)\,ds\Big]^T R_{24}\Big[\cdots\Big]
- \frac{1}{\zeta}e^{2\gamma_2 t}e^{-2\gamma_2 d_{22}}\Big[\int_{t-d_{22}}^{t-d_2(t)}z(s)\,ds\Big]^T R_{24}\Big[\cdots\Big], \tag{59}
\end{aligned}$$

$$\begin{aligned}
\mathcal{L}V_{25}(t) \le{}& d_2^2 e^{2\gamma_2 t}\dot z^T(t)N_{21}\dot z(t) - e^{2\gamma_2 t}e^{-2\gamma_2 d_2}\big[z(t)-z(t-d_2)\big]^T N_{21}\big[z(t)-z(t-d_2)\big]\\
&+ (d_2-d_1)^2 e^{2\gamma_2 t}\dot z^T(t)N_{22}\dot z(t) - e^{2\gamma_2 t}e^{-2\gamma_2 d_2}\big[z(t-d_1)-z(t-d_2)\big]^T N_{22}\big[z(t-d_1)-z(t-d_2)\big], \tag{60}
\end{aligned}$$

$$\begin{aligned}
\mathcal{L}V_{26}(t) \le{}& d_2^2 e^{2\gamma_2 t}z^T(t)N_{23}z(t) - e^{2\gamma_2 t}e^{-2\gamma_2 d_2}\Big[\int_{t-d_2}^{t}z(s)\,ds\Big]^T N_{23}\Big[\cdots\Big]\\
&+ (d_2-d_1)^2 e^{2\gamma_2 t}z^T(t)N_{24}z(t) - e^{2\gamma_2 t}e^{-2\gamma_2 d_2}\Big[\int_{t-d_2}^{t-d_1}z(s)\,ds\Big]^T N_{24}\Big[\cdots\Big], \tag{61}
\end{aligned}$$

$$\begin{aligned}
\mathcal{L}V_{27}(t) \le{}& \frac{g_1^4}{4}e^{2\gamma_2 t}\dot z^T(t)M_{21}\dot z(t)
- \frac{g_1^2\psi}{2\delta}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_1}\Big[\int_{t-d_1(t)}^{t-d_{11}}\dot z(s)\,ds\Big]^T M_{21}\Big[\cdots\Big]\\
&- \frac{1}{\delta^2}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_1}\Big[\int_{-d_1(t)}^{-d_{11}}\!\!\int_{t+\nu}^{t-d_{11}}\dot z(s)\,ds\,d\nu\Big]^T M_{21}\Big[\cdots\Big]
- \frac{1}{\psi^2}e^{2\gamma_2 t}e^{-2\gamma_2 d_{12}}\Big[\int_{-d_{12}}^{-d_1(t)}\!\!\int_{t+\nu}^{t-d_1(t)}\dot z(s)\,ds\,d\nu\Big]^T M_{21}\Big[\cdots\Big]\\
&+ \frac{g_2^4}{4}e^{2\gamma_2 t}\dot z^T(t)M_{22}\dot z(t)
- \frac{g_2^2\zeta}{2\phi}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_2}\Big[\int_{t-d_2(t)}^{t-d_{21}}\dot z(s)\,ds\Big]^T M_{22}\Big[\cdots\Big]\\
&- \frac{1}{\phi^2}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_2}\Big[\int_{-d_2(t)}^{-d_{21}}\!\!\int_{t+\nu}^{t-d_{21}}\dot z(s)\,ds\,d\nu\Big]^T M_{22}\Big[\cdots\Big]
- \frac{1}{\zeta^2}e^{2\gamma_2 t}e^{-2\gamma_2 d_{22}}\Big[\int_{-d_{22}}^{-d_2(t)}\!\!\int_{t+\nu}^{t-d_2(t)}\dot z(s)\,ds\,d\nu\Big]^T M_{22}\Big[\cdots\Big], \tag{62}
\end{aligned}$$

$$\begin{aligned}
\mathcal{L}V_{28}(t) \le{}& \frac{g_1^4}{4}e^{2\gamma_2 t}\dot z^T(t)M_{23}\dot z(t)
- \frac{g_1^2\delta}{2\psi}e^{2\gamma_2 t}e^{-2\gamma_2 d_{12}}\Big[\int_{t-d_{12}}^{t-d_1(t)}\dot z(s)\,ds\Big]^T M_{23}\Big[\cdots\Big]\\
&- \frac{1}{\delta^2}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_1}\Big[\int_{-d_1(t)}^{-d_{11}}\!\!\int_{t-d_1(t)}^{t+\nu}\dot z(s)\,ds\,d\nu\Big]^T M_{23}\Big[\cdots\Big]
- \frac{1}{\psi^2}e^{2\gamma_2 t}e^{-2\gamma_2 d_{12}}\Big[\int_{-d_{12}}^{-d_1(t)}\!\!\int_{t-d_{12}}^{t+\nu}\dot z(s)\,ds\,d\nu\Big]^T M_{23}\Big[\cdots\Big]\\
&+ \frac{g_2^4}{4}e^{2\gamma_2 t}\dot z^T(t)M_{24}\dot z(t)
- \frac{g_2^2\phi}{2\zeta}e^{2\gamma_2 t}e^{-2\gamma_2 d_{22}}\Big[\int_{t-d_{22}}^{t-d_2(t)}\dot z(s)\,ds\Big]^T M_{24}\Big[\cdots\Big]\\
&- \frac{1}{\phi^2}e^{2\gamma_2 t}e^{-2\gamma_2\lambda_2}\Big[\int_{-d_2(t)}^{-d_{21}}\!\!\int_{t-d_2(t)}^{t+\nu}\dot z(s)\,ds\,d\nu\Big]^T M_{24}\Big[\cdots\Big]
- \frac{1}{\zeta^2}e^{2\gamma_2 t}e^{-2\gamma_2 d_{22}}\Big[\int_{-d_{22}}^{-d_2(t)}\!\!\int_{t-d_{22}}^{t+\nu}\dot z(s)\,ds\,d\nu\Big]^T M_{24}\Big[\cdots\Big]. \tag{63}
\end{aligned}$$

The remaining proof follows from Theorem 1. Furthermore, for any appropriately dimensioned matrix $H$, the following equation holds:
$$2e^{2\gamma_2 t}\big[z^T(t)H + \varphi\dot z^T(t)H\big]\big[-\dot z(t) + g(z(t)) + c(G\otimes A)z(t-d(t))\big] = 0. \tag{64}$$

Based on (3), we have that for any $\chi>0$,
$$\tilde r(t) = \chi\begin{bmatrix} z(t)\\ g(z(t))\end{bmatrix}^T\begin{bmatrix} U & V\\ * & I\end{bmatrix}\begin{bmatrix} z(t)\\ g(z(t))\end{bmatrix} \le 0. \tag{65}$$

Substituting (56)-(63) into (55), combining (64) and subtracting (65), we get
$$\begin{aligned}
\mathcal{L}V_2(t) \le{}& \tilde\xi^T(t)\Bigg\{\tilde\Theta
- \tilde\Gamma_1^T(t)\begin{bmatrix} e^{-2\gamma_2\lambda_1}M_{21} & B_5+B_6\\ * & e^{-2\gamma_2 d_{12}}M_{21}\end{bmatrix}\tilde\Gamma_1(t)
- \tilde\Gamma_2^T(t)\begin{bmatrix} e^{-2\gamma_2\lambda_2}M_{22} & B_7+B_8\\ * & e^{-2\gamma_2 d_{22}}M_{22}\end{bmatrix}\tilde\Gamma_2(t)\\
&- \tilde\Gamma_3^T(t)\begin{bmatrix} e^{-2\gamma_2\lambda_1}M_{23} & B_9+B_{10}\\ * & e^{-2\gamma_2 d_{12}}M_{23}\end{bmatrix}\tilde\Gamma_3(t)
- \tilde\Gamma_4^T(t)\begin{bmatrix} e^{-2\gamma_2\lambda_2}M_{24} & B_{11}+B_{12}\\ * & e^{-2\gamma_2 d_{22}}M_{24}\end{bmatrix}\tilde\Gamma_4(t)
- \tilde r(t)\Bigg\}\tilde\xi(t), \tag{66}
\end{aligned}$$
where
$$\tilde\Gamma_1(t) = \begin{bmatrix} \underbrace{0_n \cdots 0_n}_{6} & (d_1(t)-d_{11})I_n & \underbrace{0_n \cdots 0_n}_{6} & -I_n & \underbrace{0_n \cdots 0_n}_{4}\\[2pt] 0_n & (d_{12}-d_1(t))I_n & \underbrace{0_n \cdots 0_n}_{12} & -I_n & \underbrace{0_n \cdots 0_n}_{3}\end{bmatrix},$$
$$\tilde\Gamma_2(t) = \begin{bmatrix} \underbrace{0_n \cdots 0_n}_{8} & (d_2(t)-d_{21})I_n & \underbrace{0_n \cdots 0_n}_{6} & -I_n & \underbrace{0_n \cdots 0_n}_{2}\\[2pt] 0_n\;\; 0_n & (d_{22}-d_2(t))I_n & \underbrace{0_n \cdots 0_n}_{13} & -I_n & 0_n\end{bmatrix},$$
$$\tilde\Gamma_3(t) = \begin{bmatrix} 0_n & -(d_1(t)-d_{11})I_n & \underbrace{0_n \cdots 0_n}_{11} & I_n & \underbrace{0_n \cdots 0_n}_{4}\\[2pt] \underbrace{0_n \cdots 0_n}_{7} & -(d_{12}-d_1(t))I_n & \underbrace{0_n \cdots 0_n}_{6} & I_n & \underbrace{0_n \cdots 0_n}_{3}\end{bmatrix},$$
$$\tilde\Gamma_4(t) = \begin{bmatrix} 0_n\;\; 0_n & -(d_2(t)-d_{21})I_n & \underbrace{0_n \cdots 0_n}_{12} & I_n & \underbrace{0_n \cdots 0_n}_{2}\\[2pt] \underbrace{0_n \cdots 0_n}_{9} & -(d_{22}-d_2(t))I_n & \underbrace{0_n \cdots 0_n}_{6} & I_n & 0_n\end{bmatrix},$$
and
$$\begin{aligned}
\tilde\xi^T(t) = \Big[\,&z^T(t)\;\; z^T(t-d_1(t))\;\; z^T(t-d_2(t))\;\; z^T(t-d(t))\;\; z^T(t-d_1)\;\; z^T(t-d_2)\;\; z^T(t-d_{11})\;\; z^T(t-d_{12})\\
&z^T(t-d_{21})\;\; z^T(t-d_{22})\;\; \dot z^T(t)\;\; \int_{t-d_2}^{t}z^T(s)\,ds\;\; \int_{t-d_2}^{t-d_1}z^T(s)\,ds\;\; \int_{t-d_1(t)}^{t-d_{11}}z^T(s)\,ds\\
&\int_{t-d_{12}}^{t-d_1(t)}z^T(s)\,ds\;\; \int_{t-d_2(t)}^{t-d_{21}}z^T(s)\,ds\;\; \int_{t-d_{22}}^{t-d_2(t)}z^T(s)\,ds\;\; g^T(z(t))\,\Big].
\end{aligned}$$
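The rows of $\tilde\Gamma_1(t)$-$\tilde\Gamma_4(t)$ encode, in terms of $\tilde\xi(t)$, the double integrals produced above. For instance (a one-line verification, obtained by interchanging the order of integration),
$$\int_{-d_1(t)}^{-d_{11}}\int_{t+\nu}^{t-d_{11}}\dot z(s)\,ds\,d\nu \;=\; (d_1(t)-d_{11})\,z(t-d_{11}) \;-\; \int_{t-d_1(t)}^{t-d_{11}}z(s)\,ds,$$
which is exactly the combination selected by the first row of $\tilde\Gamma_1(t)$ (entries at positions 7 and 14 of $\tilde\xi(t)$).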

M22 ≤ µ2 M12 ,



τγ > τ∗γ =

Integrating the above inequality, it follows (53). This completes the proof.  In the following, based on Theorem 1 and Theorem 2, we will propose a condition to guarantee that CDN (1) are exponentially synchronous with control packet loss. For the sake of convenience, denote by T u (T, t) the total activation time of subsystem (10) and by T s (T, t) the total activation time of subsystem (9) during the time interval [T, t), where 0 ≤ T < t. It is clear that (67)

Moreover, the loss rate of control packet over the time interval [T, t) is defined by T u (T, t) θ= >0 t−T

Theorem 3 Given scalars γ1 > 0, γ2 < 0, µ1 , µ2 ≥ 1, 0 ≤ d11 ≤ d12 , and 0 ≤ d21 ≤ d22 , if there exist matrices P2 > 0, G2e > 0 (e = 1, 2, · · · , 9), R2d > 0 (d = 1, 2, 3, 4), B f > 0 ( f = 1, 2, · · · , 12), N21 > 0, N22 > 0, N23 > 0, N24 > 0, M21 > 0, M22 > 0, M23 > 0, M24 > 0, and a scalar χ > 0, the diagonal matrices V1 > 0 and V2 > 0 such that (12), (15), (16), (50)-(52) and the following LMIs hold:

V(t) = Vσ(t) (t),

G27 ≤ µ2G17 , G19 ≤ µ1G29 , R21 ≤ µ2 R11 , R13 ≤ µ1 R23 ,

R24 ≤ µ2 R14 , N12 ≤ µ1 N22 ,

G18 ≤ µ1G28 , G29 ≤ µ2G19 ,

R12 ≤ µ1 R22 , R23 ≤ µ2 R13 ,

N11 ≤ µ1 N21 , N22 ≤ µ2 N12 ,

G14 ≤ µ1G24 , G25 ≤ µ2G15 , G17 ≤ µ1G27 ,

G28 ≤ µ2G18 , R11 ≤ µ1 R21 ,

R22 ≤ µ2 R12 , R14 ≤ µ1 R24 ,

N21 ≤ µ2 N11 , N13 ≤ µ1 N23 ,

N23 ≤ µ2 N13 , N14 ≤ µ1 N24 , N24 ≤ µ2 N14 , M11 ≤ µ1 M21 , M21 ≤ µ2 M11 , M12 ≤ µ1 M22 ,

(69)

(70)

t ∈ [tk , tk+1 )

(71)

where V1 (t) and V2 (t) are given as in (18) and (54) respectively. By (17), (53) and (69), we have if σ(t) = 1, then V(t) ≤ e−2γ1 (t−tk ) V(tk ),

t ∈ [tk , tk+1 )

(72)

and V1 (tk ) ≤ µ1 V2 (tk− )

(73)

and if σ(t) = 2, then V(t) ≤ e−2γ2 (t−tk ) V(tk ),

t ∈ [tk , tk+1 )

(74)

and V2 (tk ) ≤ µ2 V1 (tk− )

(75)

Noting that for any t ≥ 0 there must exist a scalar k ≥ 0 such that t ∈ [tk , tk+1 ). Denote by T 1 , T 2 , · · · , T j the switching instants of σ(t) on the interval [0, t). It is assumed that 0 < T 1 < T 2 < · · · < T j . Then for each T i , there exists ς ∈ {1, 2, · · · , k} such that T i = tς . Thus for any t ∈ [tk , tk+1 ), we have V(t) ≤ e−2γσ(T j ) (t−T j ) Vσ(T j ) (T j )

≤ e−2γσ(T j ) (t−T j ) (µ1 µ2 )Vσ(T −j ) (T −j ) .. .

P1 ≤ µ1 P2 , P2 ≤ µ2 P1 , G11 ≤ µ1G21 , G21 ≤ µ2G11 , G12 ≤ µ1G22 , G22 ≤ µ2G12 , G23 ≤ µ2G13 , G15 ≤ µ1G25 , G26 ≤ µ2G16 ,

ln(µ1 µ2 ) γ1 , θ < θ∗ = 2(γ1 − (γ1 − γ2 )θ) γ1 − γ2

(68)

Then, we have the following Theorem

G13 ≤ µ1G23 , G24 ≤ µ2G14 , G16 ≤ µ1G26 ,

M23 ≤ µ2 M13 ,

then the CDNs (1) with control packet loss are exponentially synchronous for all h ∈ [h1 , h2 ] and the sampled-data controller gain matrix K = H −1 X. Proof: Consider the following LKF for system (11):

t ∈ [tk , tk+1 ).

T s (T, t) + T u (T, t) = t − T

M24 ≤ µ2 M14

and the switching signal σ(t) has average dwell-time τγ satisfying

From (50)-(52), we have LV2 (t) ≤ 0,

M13 ≤ µ1 M23 ,

M14 ≤ µ1 M24 ,

zT (s)ds

t−d12

18

$$\begin{aligned}
V(t) &\le e^{-2\gamma_{\sigma(T_j)}(t-T_j)}V_{\sigma(T_j)}(T_j)
\le e^{-2\gamma_{\sigma(T_j)}(t-T_j)}(\mu_1\mu_2)V_{\sigma(T_j^-)}(T_j^-)\\
&\;\;\vdots\\
&\le e^{-2\gamma_1 T_s(0,t)-2\gamma_2 T_u(0,t)}(\mu_1\mu_2)^{N_\sigma(0,t)}V_{\sigma(0)}(0)\\
&\le e^{-2\gamma_1(1-\theta)t-2\gamma_2\theta t}\,e^{\big(N_0+\frac{t}{\tau_\gamma}\big)\ln(\mu_1\mu_2)}\,\upsilon_1\|z(0)\|^2
= e^{N_0\ln(\mu_1\mu_2)}e^{-2\gamma t}\,\upsilon_1\|z(0)\|^2, \tag{76}
\end{aligned}$$
where
$$\upsilon_1 = \max\{\lambda_{\max}(P_1),\lambda_{\max}(P_2)\} \tag{77}$$
and
$$\gamma = \gamma_1-(\gamma_1-\gamma_2)\theta-\frac{\ln(\mu_1\mu_2)}{2\tau_\gamma} \in \big(0,\,\gamma_1-(\gamma_1-\gamma_2)\theta\big]. \tag{78}$$
Thus,
$$\upsilon_2\|z(t_k)\|^2 \le e^{N_0\ln(\mu_1\mu_2)}e^{-2\gamma t_k}\upsilon_1\|z(0)\|^2, \tag{79}$$


where
$$\upsilon_2 = \min\{\lambda_{\min}(P_1),\lambda_{\min}(P_2)\}, \tag{80}$$
which leads to
$$\|z(t_k)\| \le \sqrt{\frac{\upsilon_1}{\upsilon_2}}\,e^{\frac{N_0\ln(\mu_1\mu_2)}{2}}e^{-\gamma t_k}\|z(0)\|. \tag{81}$$

According to Definition 4, the CDNs (1) with control packet loss are exponentially synchronous. This completes the proof. □

Remark 2. It can be seen from (70) that the lower bound of the average dwell-time $\tau_\gamma$ depends not only on the convergence rate $\gamma_1$ and the divergence rate $\gamma_2$ but also on the packet loss rate $\theta$. It reflects the admissible switching frequency between the cases of packet missing and non-packet missing. From (70) we also see that $\theta^*$, the upper bound of the control packet loss rate, depends on both the convergence rate $\gamma_1$ and the divergence rate $\gamma_2$. Specifically, when $\gamma_2$ is fixed, a larger convergence rate $\gamma_1$ corresponds to a larger $\theta^*$, which implies that, for fixed $\gamma_2$, more control packet loss can be tolerated when the error system (9) has a faster convergence speed. When $\gamma_1$ is fixed, a smaller divergence rate $\gamma_2$ corresponds to a smaller $\theta^*$, which implies that, for fixed $\gamma_1$, fewer control packets may be missing if system (10) diverges faster. Since the exponential decay rate $\gamma$ of the synchronization error system (11) is given by (78) and depends on $\tau_\gamma$ and $\theta$, a larger $\theta$ and a smaller $\tau_\gamma$ lead to a smaller exponential decay rate $\gamma$; that is, the loss of more control packets and frequent switching between the cases of packet missing and non-packet missing degrade the exponential stability of system (11).

Remark 3. It should be noted that only a few existing works address the stabilization of sampled-data control systems with control packet loss; see (Zhang & Yu, 2010; Chen & Zheng, 2012). Zhang & Yu (2010) analyzed the stabilization problem for sampled-data control systems with missing control inputs, where the obtained condition establishes several quantitative relations among system parameters (such as the sampling period and the exponential decay rate), the actual data missing rate, and the admissible data missing rate bound. Chen & Zheng (2012) studied the stability analysis and stabilization of sampled-data systems with control packet loss. Further, Shao & Han (2012) investigated the stability and stabilization of systems with two additive time-varying input delays arising from networked control systems. Recently, Zhu, Wang & Du (2014) discussed the problem of stability analysis for continuous-time systems with two additive time-varying delay components. Different from the existing literature, this is the first work to deal with the problem of stochastic sampled-data control for synchronization of CDNs with control packet loss and additive time-varying coupling delays. In addition, the sampling period is assumed to be time-varying and to switch between m different values in a random way with given probability. By constructing a novel LKF and using the second-order reciprocally convex technique and Jensen's inequality, sufficient conditions for the error system to be exponentially mean-square stable are derived in terms of linear matrix inequalities (LMIs). When the Jensen inequality is applied to partition double integral terms in the derivation of the LMI conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters emerges. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller with stochastic sampling, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and the average dwell-time method, sufficient conditions for the synchronization of the switched error system are derived in terms of LMIs.
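As an illustration of how the dwell-time and packet-loss conditions interact, the following short Python script (not part of the original manuscript) evaluates the thresholds and the resulting decay rate. It assumes the reconstructed forms of (70) and (78) above; the constants used are illustrative and are not those of Section 4.

```python
import math

def admissibility_thresholds(gamma1, gamma2, mu1, mu2, theta):
    """Thresholds from (70), as reconstructed above (an assumption):
    theta* = gamma1/(gamma1 - gamma2),
    tau*   = ln(mu1*mu2) / (2*(gamma1 - (gamma1 - gamma2)*theta))."""
    theta_star = gamma1 / (gamma1 - gamma2)   # upper bound on the loss rate
    tau_star = math.log(mu1 * mu2) / (2.0 * (gamma1 - (gamma1 - gamma2) * theta))
    return theta_star, tau_star

def decay_rate(gamma1, gamma2, theta, tau_gamma, mu1, mu2):
    """Exponential decay rate gamma of the error system, from (78)."""
    return gamma1 - (gamma1 - gamma2) * theta - math.log(mu1 * mu2) / (2.0 * tau_gamma)

# Illustrative constants: a larger theta or a smaller tau_gamma visibly
# shrinks gamma, in line with the discussion in Remark 2.
theta_star, tau_star = admissibility_thresholds(1.0, -0.5, 1.1, 1.1, 0.2)
print(theta_star, tau_star)                       # 0.667, 0.136 approximately
print(decay_rate(1.0, -0.5, 0.2, 2.0, 1.1, 1.1))  # about 0.65
```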

4. Numerical example

Consider the CDN (1) with three nodes. The outer-coupling matrix is assumed to be $G = (G_{ij})_{N\times N}$ with
$$G = \begin{bmatrix} -1 & 0 & 1\\ 0 & -1 & 1\\ 1 & 1 & -2 \end{bmatrix}.$$

The nonlinear function $f$ is taken as
$$f(x_i(t)) = \begin{bmatrix} -0.5x_{i1} + \tanh(0.2x_{i1}) + 0.2x_{i2}\\ 0.65x_{i2} - \tanh(0.45x_{i2}) \end{bmatrix}.$$
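For concreteness, a minimal NumPy sketch of these ingredients follows. It is illustrative only: it assumes the uncontrolled error dynamics take the form $\dot z(t) = g(z(t)) + c(G\otimes A)z(t-d(t))$ appearing in (64), with $g(z_i) = f(s+z_i)-f(s)$ as the error nonlinearity (an assumption about the notation of system (10)).

```python
import numpy as np

# Outer-coupling matrix G (each row sums to zero) and inner coupling A = 0.1*I.
G = np.array([[-1.0,  0.0,  1.0],
              [ 0.0, -1.0,  1.0],
              [ 1.0,  1.0, -2.0]])
A = 0.1 * np.eye(2)
c = 0.8  # coupling strength used later in the simulations

def f(x):
    # Node nonlinearity f(x_i) for x = [x_i1, x_i2], exactly as defined above.
    return np.array([-0.5 * x[0] + np.tanh(0.2 * x[0]) + 0.2 * x[1],
                      0.65 * x[1] - np.tanh(0.45 * x[1])])

def error_drift(z, s, z_delayed):
    """Drift g(z(t)) + c*(G kron A) @ z(t - d(t)) of the uncontrolled error
    system; z and z_delayed are node-wise stacked errors of shape (6,),
    and s is the target state of shape (2,)."""
    g = np.concatenate([f(s + z[2*i:2*i + 2]) - f(s) for i in range(3)])
    return g + c * np.kron(G, A) @ z_delayed
```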

This nonlinear function satisfies the sector-bounded condition (3) with
$$U = \begin{bmatrix} -0.5 & 0.2\\ 0 & 0.65 \end{bmatrix}, \qquad V = \begin{bmatrix} -0.3 & 0.2\\ 0 & 0.2 \end{bmatrix}.$$
The inner coupling matrix is $A = 0.1I$. Here we assume that the control packet from the controller to the actuator is lost and the actuator input to the system is set to zero. Let $\gamma_1 = 0.2$, $\gamma_2 = -0.8$, $\mu_1 = 1$, $\mu_2 = 1.2$, the average dwell-time $\tau_\gamma = 0.41$, and the control packet loss rate $\theta = 0.1$. In this example we consider only two sampling intervals for simplicity and assume $h_1 = 0.2$, $h_2 = 0.4$. From these values we have $\tau_\gamma > \tau_\gamma^* = 0.3959$, $\theta < \theta^* = 0.2$, and the exponential convergence rate $\gamma = 0.4075$. The time-varying delays are chosen as $d_1(t) = 0.3 + 0.2\sin(t)$ and $d_2(t) = 0.6 + 0.4\sin(t)$, so that the derivative bounds of the time-varying delays are $\rho_1 = 0.2$ and $\rho_2 = 0.4$, and the probabilities of the sampling intervals are $\beta_1 = \beta_2 = 0.5$. Using the MATLAB LMI Control Toolbox to solve the LMIs (12), (15), (16), (50)-(52) and (69) in Theorem 3, we obtain the following gain matrices:
$$K_1 = \begin{bmatrix} -0.2551 & -0.0664\\ -0.0316 & -0.5706 \end{bmatrix}, \qquad K_2 = \begin{bmatrix} -0.2551 & -0.0664\\ -0.0316 & -0.5706 \end{bmatrix}, \qquad K_3 = \begin{bmatrix} -0.2507 & -0.0644\\ -0.0223 & -0.5833 \end{bmatrix}.$$
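The LMIs above were solved with the MATLAB LMI Control Toolbox. As a hedged, open-source illustration of the same kind of semidefinite feasibility problem, the following Python/CVXPY sketch solves only a toy Lyapunov-type LMI; it does not assemble the paper's actual conditions (12), (15), (16), (50)-(52) and (69), whose construction is lengthy, and the system matrix used is an assumption for demonstration.

```python
import numpy as np
import cvxpy as cp

n = 2
A_sys = np.array([[-0.5, 0.2],
                  [ 0.0, -0.65]])  # illustrative stable matrix (assumption)

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                          # P > 0
    A_sys.T @ P + P @ A_sys << -eps * np.eye(n),   # A^T P + P A < 0
]
prob = cp.Problem(cp.Minimize(0), constraints)     # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status)  # 'optimal' indicates the LMI is feasible
```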


Using the above parameters, Figure 1 depicts the error responses of the controlled CDN with initial values $x_1(0) = [-3\;\;5]^T$, $x_2(0) = [2.1\;\;3.6]^T$, $x_3(0) = [-2.2\;\;1]^T$, $s(0) = [3\;\;2]^T$; here $z_{i1}(t), z_{i2}(t)$ for $i = 1, 2, 3$ denote the error components of the first, second and third nodes, respectively. We see from Figure 1 that the synchronization errors converge to zero. Figure 2 displays the responses of the control inputs $u_i(t)$ with $c = 0.8$ when there is no loss in the control packet; the control signals also converge to zero as time elapses. The switching signal $\sigma(t)$ is shown in Figure 3. Figure 4 displays the stochastic parameters $h$, and the mean square error plot is presented in Figure 5 for the same initial values. The mean square error $z(t)$ is calculated by
$$z(t) = \sqrt{\frac{1}{3}\sum_{i=1}^{3}\big(x_i(t)-s(t)\big)^2}.$$
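A small NumPy helper for this quantity follows; the array layout (node trajectories `xs` of shape (3, T, 2) and target `s` of shape (T, 2)) is a hypothetical convention chosen for illustration.

```python
import numpy as np

def mean_square_error(xs, s):
    """Root-mean-square synchronization error over the three nodes,
    z(t) = sqrt((1/3) * sum_i ||x_i(t) - s(t)||^2), per time step.

    xs: array of shape (N, T, n) with node trajectories (here N=3, n=2).
    s:  array of shape (T, n) with the target trajectory.
    Returns an array of shape (T,)."""
    err = xs - s[None, :, :]                       # broadcast target over nodes
    return np.sqrt(np.mean(np.sum(err**2, axis=2), axis=0))
```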

[Figure 1. State trajectories of the error system (11) with c = 0.8.]
[Figure 2. Responses of the control inputs ui(t) with c = 0.8.]
[Figure 3. Switching signal σ(t).]
[Figure 4. Stochastic parameters h.]
[Figure 5. Mean square error z(t).]


5. Conclusion


This paper has studied the stochastic sampled-data synchronization of complex dynamical networks with control packet loss and additive time-varying coupling delays. The synchronization error system with control packet loss has been modeled as a switched system. By defining an appropriate LKF with triple integral terms and employing the reciprocally convex approach, new delay-dependent synchronization criteria have been derived in terms of LMIs. Some variations on the lower bound lemma, which handle several kinds of function combinations arising from the triple integral terms in the derivation of the LMI conditions, have been introduced. Based on the synchronization criterion and the average dwell-time method, a sufficient condition has been provided to guarantee the exponential synchronization of the considered system with control packet loss, and the design method of the sampled-data feedback controller has also been given. The theoretical results were validated by a numerical example and simulations. It should be pointed out that the proposed method can also be extended to Markovian jumping CDNs with parameter uncertainties, partially unknown transition probabilities and random packet loss. This work will be done in the near future.


References


Astrom, K., & Wittenmark, B. (1989). Adaptive control. MA: Addison-Wesley.
Boccaletti, S., Latora, V., Marenu, Y., Chavez, M., & Huang, D. U. (2006). Complex networks: structure and dynamics. Physics Reports, 424, 175-308.
Cao, J., & Li, L. (2009). Cluster synchronization in an array of hybrid coupled neural networks with delay. Neural Networks, 22, 335-342.
Chen, W., & Zheng, W. X. (2012). An improved stabilization method for sampled-data control systems with control packet loss. IEEE Transactions on Automatic Control, 57, 2378-2384.
Cheng, J., Zhu, H., Zhong, S., Zhang, Y., & Zeng, Y. (2014). Improved delay-dependent stability criteria for continuous system with two additive time-varying delay components. Communications in Nonlinear Science and Numerical Simulation, 19, 210-215.
Fridman, E., Seuret, A., & Richard, J. P. (2004). Robust sampled-data stabilization of linear systems: an input delay approach. Automatica, 40, 1441-1446.
Fridman, E., Shaked, U., & Suplin, V. (2005). Input/output delay approach to robust sampled-data H∞ control. Systems and Control Letters, 54, 271-282.


Gao, H., Lam, J., & Chen, G. (2006). New criteria for synchronization stability of general complex dynamical networks with coupling delays. Physics Letters A, 360, 263-273.
Gao, H., Meng, X., & Chen, T. (2008). Stabilization of networked control systems with new delay characterization. IEEE Transactions on Automatic Control, 53, 2142-2148.
Gao, H., Wu, J., & Shi, P. (2009). Robust sampled-data H∞ control with stochastic sampling. Automatica, 45, 1729-1736.
Hu, B., & Michel, A. N. (2000). Stability analysis of digital feedback control systems with time-varying sampling periods. Automatica, 36, 897-905.
Ji, D. H., Lee, D. W., Koo, J. H., Won, S. C., Lee, S. M., & Park, Ju H. (2011). Synchronization of neutral complex dynamical networks with coupling time-varying delays. Nonlinear Dynamics, 65, 349-358.
Jiang, H. F., & Li, T. (2012). Synchronization and pinning control in complex networks with interval time-varying delay. Mathematical Problems in Engineering, 2012, 1-13.
Kim, S. H., Park, P. G., & Jeong, C. (2010). Robust H∞ stabilization of networked control systems with packet analyzer. IET Control Theory and Applications, 4, 1828-1837.
Kinzel, W., Englert, A., Reents, G., Zigzag, M., & Kanter, I. (2009). Synchronization of networks of chaotic units with time-delayed couplings. Physical Review E, 79, 056207.
Lee, W. L., & Park, P. G. (2014). Second-order reciprocally convex approach to stability of systems with interval time-varying delays. Applied Mathematics and Computation, 229, 245-253.
Lee, T. H., Wu, Z. G., & Park, Ju H. (2012). Synchronization of a complex dynamical network with coupling time-varying delays via sampled-data control. Applied Mathematics and Computation, 219, 1354-1366.
Lee, T. H., Park, Ju H., Kwon, O. M., & Lee, S. M. (2013). Stochastic sampled-data control for state estimation of time-varying delayed neural networks. Neural Networks, 46, 99-108.
Lee, T. H., Park, Ju H., Lee, S. M., & Kwon, O. M. (2013). Robust synchronization of chaotic systems with randomly occurring uncertainties via stochastic sampled-data control. International Journal of Control, 86, 107-119.
Lee, T. H., Park, Ju H., Lee, S. M., & Kwon, O. M. (2014). Robust sampled-data control with random missing data scenario. International Journal of Control, 87, 1957-1969.
Li, C., & Chen, G. (2004). Synchronization in general complex dynamical networks with coupling delays. Physica A, 343, 263-278.
Li, K., Guan, S., Gong, X., & Lai, C. H. (2008). Synchronization stability of general complex dynamical networks with time-varying delays. Physics Letters A, 372, 7133-7139.
Li, N., Zhang, Y., Hu, J., & Nie, Z. (2011). Synchronization for general complex dynamical networks with sampled-data. Neurocomputing, 74, 805-811.
Li, Y., Zhang, Q., & Jing, C. (2009). Stochastic stability of networked control systems with time-varying sampling periods. International Journal of Information and Systems Sciences, 5, 494-502.
Li, T., Wang, T., Yang, X., & Fei, S. M. (2012). Cluster synchronization in hybrid coupled discrete-time delayed complex networks. Communications in Theoretical Physics, 58, 686-696.
Liang, J., Wang, Z., & Liu, X. (2008). Exponential synchronization of stochastic delayed discrete-time complex networks. Nonlinear Dynamics, 53, 153-165.
Liberzon, D. (2003). Switching in systems and control. Boston, MA: Birkhauser.
Liu, P. L. (2014). Further results on delay-range-dependent stability with additive time-varying delay systems. ISA Transactions, 53, 258-266.
Lu, J., & Ho, D. (2008). Local and global synchronization in general complex dynamical networks with delay coupling. Chaos Solitons and Fractals, 37, 1497-1510.
Mikheev, Y., Sobolev, B., & Fridman, E. (1988). Asymptotic analysis of digital control systems. Automation and Remote Control, 49, 1175-1180.
Newman, M. E. J. (2003). The structure and function of complex networks. SIAM Review, 45, 167-256.
Ozdemir, N., & Townley, T. (1988). Integral control by variable sampling based on steady-state data. Automatica, 39, 135-140.
Park, P. G., Ko, J. W., & Jeong, C. (2011). Reciprocally convex approach to stability of systems with time-varying delays. Automatica, 47, 235-238.
Shao, H., & Han, Q. L. (2011). New delay-dependent stability criteria for neural networks with two additive time-varying delay components. IEEE Transactions on Neural Networks, 22, 812-818.
Shao, H., & Han, Q. L. (2012). On stabilization for systems with two additive time-varying input delays arising from networked control systems. Journal of The Franklin Institute, 349, 2033-2046.
Shen, B., Wang, Z., & Liu, X. (2012). Sampled-data synchronization control of dynamical networks with stochastic sampling. IEEE Transactions on Automatic Control, 57, 2644-2650.
Wang, X., & Chen, G. (2003). Complex networks: small world, scale free and beyond. IEEE Circuits and Systems Magazine, 3, 6-20.
Wang, T., Li, T., Yang, X., & Fei, S. M. (2012). Cluster synchronization for delayed Lur'e dynamical networks based on pinning control. Neurocomputing, 83, 72-82.
Wen, S., & Zeng, Z. (2013). Robust sampled-data H∞ output tracking control for a class of nonlinear networked systems with stochastic sampling. International Journal of Systems Science, 44, 1626-1638.
Wu, Z. G., Park, Ju H., Su, H., Song, B., & Chu, J. (2012). Exponential synchronization for complex dynamical networks with sampled-data. Journal of the Franklin Institute, 349, 2735-2749.
Wu, Z. G., Shi, P., Su, H., & Chu, J. (2013). Sampled-data exponential synchronization of complex dynamical networks with time-varying coupling delay. IEEE Transactions on Neural Networks and Learning Systems, 24, 1177-1187.
Xiao, N., & Jia, Y. (2013). New approaches on stability criteria for neural networks with two additive time-varying delay components. Neurocomputing, 118, 150-156.
Xu, S., & Yang, Y. (2009). Synchronization for a class of complex dynamical networks with time delay. Communications in Nonlinear Science and Numerical Simulation, 14, 3230-3238.
Yue, D., & Li, H. (2009). Synchronization stability of continuous/discrete complex dynamical networks with interval time-varying delays. Neurocomputing, 73, 809-819.
Zhang, W., & Yu, L. (2010). Stabilization of sampled-data control systems with control inputs missing. IEEE Transactions on Automatic Control, 55, 447-452.
Zhang, G., Wang, T., Li, T., & Fei, S. M. (2012). Exponential synchronization for delayed chaotic neural networks with nonlinear hybrid coupling. Neurocomputing, 85, 53-61.
Zhou, J., & Chen, T. (2006). Synchronization in general delayed dynamical networks. IEEE Transactions on Circuits and Systems I, 53, 733-744.
Zhou, J., Wang, Z., Wang, Y., & Kong, Q. (2013). Synchronization for complex dynamical networks with interval time-varying coupling delays. Nonlinear Dynamics, 72, 377-388.
Zhu, X. L., Wang, Y., & Du, X. (2014). Stability criteria for continuous-time systems with additive time-varying delays. Optimal Control Applications and Methods, 35, 166-178.
