IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 26, NO. 12, DECEMBER 2015


Synchronization of Neural Networks With Control Packet Loss and Time-Varying Delay via Stochastic Sampled-Data Controller

Rajan Rakkiyappan, Shanmugavel Dharani, and Jinde Cao, Senior Member, IEEE

Abstract— This paper addresses the problem of exponential synchronization of neural networks with time-varying delays. A sampled-data controller with stochastically varying sampling intervals is considered. The novelty of this paper lies in the fact that control packet loss from the controller to the actuator, which may occur in many real-world situations, is taken into account. Sufficient conditions for exponential synchronization in the mean square sense are derived in terms of linear matrix inequalities (LMIs) by constructing a proper Lyapunov–Krasovskii functional that involves more information about the delay bounds and by employing some inequality techniques. Moreover, the feasibility of the obtained LMIs can be easily checked through any of the available MATLAB toolboxes. Numerical examples are provided to validate the theoretical results.

Index Terms— Control packet loss, exponential synchronization, neural networks, stochastic sampled-data control, switched systems.

I. INTRODUCTION

SINCE neural networks play an indispensable role in various fields, such as signal processing, associative memories, image processing, combinatorial optimization, and other areas [1]–[4], there has been a wealth of research on the qualitative analysis of neural networks in the past decades [5]–[10]. Moreover, it is well known that time delays often occur in the electronic implementation of neural networks, and the introduction of time delays into neural networks makes their dynamic behavior much more complicated, causing instability, oscillation, or poor performance. Most of the studies in the previous literature have focused on the stability analysis of neural networks [11]–[14]. In [11] and [12], a delay partitioning approach has been employed to study the stability of recurrent neural networks with time-varying delays.

Manuscript received May 30, 2014; revised January 22, 2015 and April 21, 2015; accepted April 21, 2015. Date of publication May 8, 2015; date of current version November 16, 2015. This work was supported by the National Natural Science Foundation of China under Grants 11072059 and 61272530, by the Specialized Research Fund for the Doctoral Program of Higher Education under Grants 20110092110017 and 20130092110017, and by the Natural Science Foundation of Jiangsu Province of China under Grant BK2012741. R. Rakkiyappan and S. Dharani are with the Department of Mathematics, Bharathiar University, Coimbatore 641046, India (e-mail: [email protected]; [email protected]). J. Cao is with the Research Center for Complex Systems and Network Sciences, Department of Mathematics, Southeast University, Nanjing 210096, China, and also with the Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2015.2425881

Moreover, delay-dependent stability criteria for neutral-type neural networks with mixed delays have been derived using a delay partitioning approach in [14]. Since it has been shown that such networks can exhibit complicated dynamics and even chaotic behaviors, synchronization of neural networks has become a hot area of research [15]–[26]. The problem of optimal exponential synchronization of general chaotic neural networks has been investigated in [18] using Lyapunov–Krasovskii stability theory and the linear matrix inequality (LMI) approach, where time-delay feedback controllers have been designed to synchronize two identical chaotic neural networks. The problem of robust exponential synchronization of chaotic neural networks with time delay and different parametric uncertainties has been considered in [22], and an impulsive control scheme has been proposed. An adaptive synchronization scheme between two different chaotic neural networks with time delays has been proposed in [19], and an adaptive controller has been designed to guarantee the global asymptotic synchronization of two different kinds of chaotic neural networks. In [24], the exponential synchronization problem for chaotic memristive neural networks with time-varying delays has been analyzed using the Lyapunov functional method, where the design laws ensuring the synchronization of the neural networks are given through state or output coupling. In addition, since modern society uses digital technology in implementing controllers, the sampled-data control scheme has received significant attention [27]–[32]. It should be noted that in sampled-data controllers, control signals are updated only at the sampling instants and are held constant at the updated values during a sampling interval. For this reason, control signals in sampled-data controllers have a stepwise form, and these discontinuous control signals may disturb system stability.
Thus, a new concept to overcome this difficulty has been introduced in [33] and [34], where discontinuous control signals are treated as continuous control signals with a time-varying delay by introducing a sawtooth structural function for the time-varying delay, although the actual control signals applied are discontinuous. This approach has been further developed in [35] and [36] for the robust sampled-data stabilization of linear systems. In addition, in sampled-data controllers, selecting a proper sampling interval is a demanding task, and traditionally, many researchers have concentrated on constant sampling. However, to deal with problems such as changes in the network situation, limitations in the computing speed of hardware, and so on, constant sampling does not

2162-237X © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

help, and thus the concept of time-varying sampling has come to the fore [37]–[41]. Apart from these facts, considering random changes in the sampling intervals, a further extension of the time-varying case, called stochastically varying sampling intervals, has been considered in the literature and corresponding results have been proposed [42]–[45]. In [43], the problem of robust H∞ control for sampled-data systems with probabilistic sampling has been investigated, where, for simplicity, only two sampling periods have been considered. The robust synchronization problem for uncertain nonlinear chaotic systems has been analyzed through a stochastic sampled-data controller with randomly varying sampling intervals in [44]. Moreover, the state estimation problem for neural networks with time-varying delays has been investigated in [45], where a stochastic sampled-data controller with m sampling intervals has been designed. It is worth pointing out that in all the above-mentioned works, it is assumed that the control packet from the controller to the actuator is received completely. However, this assumption may fail in many practical situations due to temporary controller or actuator failures, intermittent unavailability of controllers, communication interference, or congestion. In such cases, the control packet from the controller to the actuator will be lost, and so the actuator input to the plant will be zero. Frequent occurrence of such missing inputs leads to difficulties in determining the stability of the systems under study. This issue in sampled-data control systems gives rise to certain queries, such as: what is the admissible data missing rate that guarantees the existence of a stabilizing controller for the sampled-data control system? Moreover, what is the quantitative relationship between the system stability performance and the data missing rate for a sampled-data control system? There have been only a few results on the aforementioned problems in the literature.
Recently, sampled-data control systems with control inputs missing have been investigated in the context of networked control systems with packet losses. A few relationships between the data transmission rate and the stability of networked control systems have been proposed in [46] and [47]. Some quantitative relations between the packet loss rate and the stability of networked control systems have been given in [48] and [49]. However, all the above-mentioned results have been presented in the discrete-time framework, and the approaches therein cannot be applied to general sampled-data control systems. Next, in [50] and [51], a sampled-data control approach for networked control systems with control packet losses and delays has been employed, where the relation between stability and the sampling period has been established. However, the quantitative effects of the sampling period and the data missing rate on the stability performance are not established in these works. Therefore, in [52], the stabilization of sampled-data control systems with control inputs missing has been investigated using a switched system approach, and sufficient conditions for the existence of exponentially stabilizing state feedback controllers have been derived. Moreover, in [53], an improved stabilization method for sampled-data control systems with control packet loss has been developed, and the obtained results are proved to be theoretically less conservative than the existing ones. Although

control packet loss is unavoidable in natural circumstances, unfortunately, there has been no work in the literature that answers the above queries in the case of synchronization of neural networks with control packet loss and time-varying delays. This shows that there is room for further improvement. Motivated by the above discussion, in this paper, the exponential synchronization problem for neural networks with time-varying delays and control packet loss is investigated using a switched system approach. A stochastic sampled-data controller with m sampling intervals is considered, and a synchronization error system with one stable subsystem and one unstable subsystem is constructed. By constructing a proper Lyapunov–Krasovskii functional and using some inequality techniques, synchronization criteria for the error system without control packet loss are derived in terms of LMIs. Based on the derived criteria and the average dwell-time method, synchronization criteria for the synchronization error system with control packet loss are then derived. Moreover, conditions that establish the relationship between the packet loss rate and the stability performance of the error system are obtained. The proposed LMIs can be easily checked for feasibility with the help of available standard software. Two numerical examples are provided to illustrate the effectiveness of the proposed results.

The remainder of this paper is organized as follows. Section II deals with the model description and introduces some lemmas and definitions to be used later. Section III proposes the main results of this paper. Section IV illustrates the effectiveness of the proposed results through numerical simulations. Finally, the conclusion is drawn in Section V.

Notations: The following notations will be used throughout this paper. Rⁿ and R^{m×n} denote the n-dimensional Euclidean space and the set of all m × n real matrices, respectively.
X > 0 (X ≥ 0) means that the matrix X is real symmetric positive definite (positive semidefinite). λ_max(W) (λ_min(W)) denotes the maximum (minimum) eigenvalue of a real symmetric matrix W. The symbol ⋆ in a matrix denotes the entries below the main diagonal of a symmetric matrix. diag{⋯} represents a diagonal matrix. The superscript T represents the transpose of a matrix. Matrices whose dimensions are not explicitly stated are assumed to have compatible dimensions for algebraic operations. E{x} and E{x|y} denote the expectation of a stochastic variable x and the expectation of x conditional on the stochastic variable y, respectively. Prob{α} is the occurrence probability of an event α.

II. PROBLEM FORMULATION AND PRELIMINARIES

Consider the following neural network:

  ẋ(t) = −Ax(t) + B₁η(x(t)) + B₂η(x(t − d(t))) + J(t)
  r(t) = Cx(t)   (1)

where

  x(t) = [x₁(t)  x₂(t)  ⋯  xₙ(t)]ᵀ,  η(x(t)) = [η₁(x₁(t))  η₂(x₂(t))  ⋯  ηₙ(xₙ(t))]ᵀ


with xₗ(t) as the state of the lth neuron at time t, r(t) ∈ R^l̂ is the output, C ∈ R^{l̂×n}, η(·) denotes the neuron activation function, A = diag{aᵢ} > 0 ∈ R^{n×n}, B₁ = (b¹ᵢⱼ) ∈ R^{n×n} and B₂ = (b²ᵢⱼ) ∈ R^{n×n} are the connection weight matrices, and J(t) = [J₁(t)  J₂(t)  …  Jₙ(t)]ᵀ is the external input vector. In addition, it is assumed that ηₗ(·) belongs to the sector [sₗ⁻, sₗ⁺] for l = 1, 2, …, n, that is, for any l = 1, 2, …, n, we have

  sₗ⁻ ≤ (ηₗ(ξ) − ηₗ(ν))/(ξ − ν) ≤ sₗ⁺,  ξ ≠ ν ∈ R   (2)

where sₗ⁻ and sₗ⁺ are known real scalars. The delay d(t) is a time-varying continuous function satisfying

  d₁ ≤ d(t) ≤ d₂,  ḋ(t) ≤ μ

where d₁, d₂, and μ are known constants. Considering system (1) as the master system, we introduce a slave system for (1) as

  ẏ(t) = −Ay(t) + B₁η(y(t)) + B₂η(y(t − d(t))) + J(t) + u(t)
  s(t) = Cy(t)   (3)
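The sector condition (2) can be checked numerically for a concrete activation. The following sketch (our own illustration, not part of the paper; the activation is the piecewise-linear function used later in Example 1, and the sampling grid is an arbitrary choice) estimates the sector bounds from difference quotients:

```python
import numpy as np

def eta(x):
    # Piecewise-linear activation used in Example 1 of the paper.
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

# Difference quotients (eta(xi) - eta(nu)) / (xi - nu) over a grid of
# distinct points; their range estimates the sector [s_minus, s_plus].
grid = np.linspace(-3.0, 3.0, 601)
xi, nu = np.meshgrid(grid, grid)
mask = xi != nu
quot = (eta(xi)[mask] - eta(nu)[mask]) / (xi[mask] - nu[mask])

s_minus, s_plus = quot.min(), quot.max()
print(s_minus, s_plus)  # close to 0 and 1, i.e., the sector [0, 1]
```

The empirical range confirms that this activation satisfies (2) with sₗ⁻ = 0 and sₗ⁺ = 1.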

where A, B₁, B₂, C, η(·), and J(t) are the same as defined in (1); y(t) ∈ Rⁿ is the state, s(t) ∈ R^l̂ is the output, and u(t) ∈ Rⁿ is the control input. This paper is concerned with a controller that uses sampled data with stochastic sampling, which takes the following form:

  u(t) = K(r(tₖ) − s(tₖ)) = KCe(tₖ),  tₖ ≤ t < tₖ₊₁,  k = 0, 1, 2, …   (4)

where K is the gain matrix of the feedback controller to be determined, tₖ is the updating instant of the zero-order hold, and the sampling interval is defined as tₖ₊₁ − tₖ = h. In order to deal with the discontinuous signals in the continuous system, controller (4) is rewritten, using the concept of time-varying delayed control input proposed in [33] and [34] and further developed in [35] and [36], as

  u(t) = KCe(tₖ) = KCe(t − τ(t)),  tₖ ≤ t < tₖ₊₁   (5)

where the time-varying delay τ(t) = t − tₖ is piecewise linear and satisfies τ(t) ≤ tₖ₊₁ − tₖ. Define e(t) = x(t) − y(t) as the synchronization error between the master system (1) and the slave system (3). With this definition of the error state, we obtain the synchronization error system for (1) and (3) as

  ė(t) = −Ae(t) + B₁φ(e(t), y(t)) + B₂φ(e(t − d(t)), y(t − d(t))) − KCe(t − τ(t)),  tₖ ≤ t < tₖ₊₁   (6)

where

  φ(e(t), y(t)) = η(e(t) + y(t)) − η(y(t)).   (7)

The time-varying delay τ(t) in controller (5) satisfies τ̇(t) = 1 and the following probability rule:

  Prob{0 ≤ τ(t) < h₁} = h₁/hₘ
  Prob{h₁ ≤ τ(t) < h₂} = (h₂ − h₁)/hₘ
  ⋮
  Prob{hₘ₋₁ ≤ τ(t) < hₘ} = (hₘ − hₘ₋₁)/hₘ.

The stochastic variables αᵢ(t) and βᵢ(t) are defined such that

  αᵢ(t) = 1 if hᵢ₋₁ ≤ τ(t) < hᵢ, and 0 otherwise,  i = 1, 2, …, m
  βᵢ(t) = 1 if h = hᵢ, and 0 otherwise,  i = 1, 2, …, m

with the following probabilities:

  Prob{αᵢ(t) = 1} = Prob{hᵢ₋₁ ≤ τ(t) < hᵢ} = Σ_{j=i}^{m} βⱼ(hᵢ − hᵢ₋₁)/hⱼ = αᵢ
  Prob{βᵢ(t) = 1} = Prob{h = hᵢ} = βᵢ   (8)

where i = 1, 2, …, m and Σ_{i=1}^{m} αᵢ = 1. Since αᵢ(t) satisfies the Bernoulli distribution, as reported in [43], we have

  E{αᵢ(t)} = αᵢ,  E{(αᵢ(t) − αᵢ)²} = αᵢ(1 − αᵢ).

Therefore, system (6) with m sampling intervals can be expressed as

  ė(t) = −Ae(t) + B₁φ(e(t), y(t)) + B₂φ(e(t − d(t)), y(t − d(t))) − Σ_{i=1}^{m} αᵢ(t)KCe(t − τᵢ(t))   (9)

where hᵢ₋₁ ≤ τᵢ(t) < hᵢ. In addition, it cannot be assured that the control packet from the controller to the actuator will always be received completely, and so, in this paper, we consider the problem of synchronization of neural networks with control packet loss. If the control packet from the controller to the actuator is lost, the actuator does nothing, that is, u(tₖ) = 0, and the synchronization error system (6) reduces to the following system:

  ė(t) = −Ae(t) + B₁φ(e(t), y(t)) + B₂φ(e(t − d(t)), y(t − d(t))).   (10)

To specify the control packet loss status, we introduce the notation σ(t) such that σ(t): [0, +∞) → {1, 2} is a piecewise constant and right-continuous function. The synchronization error system with control packet loss can then be described as

  ė(t) = −Ae(t) + B₁φ(e(t), y(t)) + B₂φ(e(t − d(t)), y(t − d(t))) − Σ_{i=1}^{m} αᵢ(t)K_{σ(t)}Ce(t − τᵢ(t))   (11)

where K₁ = K and K₂ = 0.
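The stochastic-sampling probabilities in (8) can be reproduced by simulation. In the sketch below (our own illustration; the two candidate periods h₁ = 0.1, h₂ = 0.2 and β₁ = β₂ = 0.5 are assumed values), an interval type is drawn with probability βⱼ, the delay τ is drawn uniformly over the corresponding interval, and the empirical frequency of τ ∈ [0, h₁) is compared with α₁ = Σⱼ βⱼh₁/hⱼ:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.1, 0.2])       # candidate sampling periods h_1 < h_2 (assumed)
beta = np.array([0.5, 0.5])    # Prob{h = h_i} = beta_i (assumed)

# alpha_1 from (8): sum_j beta_j * (h_1 - h_0)/h_j with h_0 = 0.
alpha1 = np.sum(beta * h[0] / h)   # = 0.5*1 + 0.5*0.5 = 0.75

# Monte Carlo: pick an interval type, draw tau uniformly on [0, h_j),
# and check whether tau falls in [0, h_1).
n = 200_000
j = rng.choice(len(h), size=n, p=beta)
tau = rng.uniform(0.0, h[j])
est = np.mean(tau < h[0])

print(alpha1, est)  # est should be close to 0.75
```

The empirical frequency matches the closed-form value of α₁, illustrating the Bernoulli structure used in (9).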


From the above, it can be easily seen that when σ(t) = 1 for t ∈ [tₖ, tₖ₊₁), the control packet is not missing during [tₖ, tₖ₊₁) and system (6) is active; otherwise, the control packet loss occurs during [tₖ, tₖ₊₁) and system (10) is active. Thus, σ(t) can be referred to as a switching signal, and system (11) is a switched system consisting of two subsystems, that is, the stable subsystem (6) and the unstable subsystem (10).

III. MAIN RESULTS

In this section, our aim is to derive criteria for the master system (1) and the slave system (3) to be exponentially synchronous in the mean square with control packet loss, based on the average dwell-time method. Before that, some lemmas and definitions that are necessary in proving our main results are presented.

Lemma 1 [54]: For a positive definite matrix M and any differentiable function ω: [a, b] → Rⁿ, the following inequality holds:

  ∫ₐᵇ ω̇ᵀ(u)Mω̇(u)du ≥ (1/(b − a)) ω̃ᵀ(a, b) M̄ ω̃(a, b)

where

  ω̃(a, b) = [ω(b); ω(a); (1/(b − a))∫ₐᵇ ω(u)du]

  M̄ = [M  −M  0; ⋆  M  0; ⋆  ⋆  0] + (π²/4)[M  M  −2M; ⋆  M  −2M; ⋆  ⋆  4M].
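Lemma 1 can be sanity-checked numerically. The sketch below (an independent verification; the function ω, the weight M, and the interval [0, 1] are arbitrary choices of ours) evaluates both sides of the inequality by dense trapezoidal quadrature; the two quadratic forms of M̄ reduce to (ω(b) − ω(a))ᵀM(ω(b) − ω(a)) and (π²/4)(ω(b) + ω(a) − 2ω̄)ᵀM(ω(b) + ω(a) − 2ω̄):

```python
import numpy as np

def trapezoid(y, x):
    # Trapezoidal rule along the last axis (version-proof helper).
    return np.sum((y[..., 1:] + y[..., :-1]) * 0.5 * np.diff(x), axis=-1)

M = np.array([[2.0, 0.3],
              [0.3, 1.0]])          # positive definite weight (our choice)
a, b = 0.0, 1.0
u = np.linspace(a, b, 20001)

# omega(u) = [sin(2u), u^2], with its analytic derivative.
omega = np.stack([np.sin(2 * u), u ** 2])
domega = np.stack([2 * np.cos(2 * u), 2 * u])

# Left-hand side: integral of domega^T M domega.
lhs = trapezoid(np.einsum('iu,ij,ju->u', domega, M, domega), u)

# Right-hand side: (1/(b-a)) * omega_tilde^T Mbar omega_tilde.
wb, wa = omega[:, -1], omega[:, 0]
wbar = trapezoid(omega, u) / (b - a)
d1 = wb - wa
d2 = wb + wa - 2 * wbar
rhs = (d1 @ M @ d1 + (np.pi ** 2 / 4) * (d2 @ M @ d2)) / (b - a)

print(lhs >= rhs)  # the lemma guarantees this is True
```

For this choice of ω the inequality holds with a visible margin, which is the typical situation for non-polynomial functions.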

Definition 1 [55]: The switching signal σ(t) is said to have an average dwell-time τₐ if there exist two scalars N₀ > 0 and τₐ > 0 such that

  N_σ(T, t) ≤ N₀ + (t − T)/τₐ  ∀t ≥ T ≥ 0   (12)

where N_σ(T, t) denotes the number of switchings of σ(t) over the interval [T, t) and N₀ is called the chatter bound.

Remark 1: The average dwell time in Definition 1 means that the time interval between consecutive switchings is at least τₐ on average. A basic problem for the neural networks (1) is then to specify the minimal τₐ and thereby obtain the admissible switching signals for which the neural networks (1) are stable.

Definition 2 [56]: The master system (1) and the slave system (3) without control packet loss are said to be exponentially synchronous in the mean square if the synchronization error system (6) is mean square exponentially stable, that is, if there exist positive scalars ν and δ such that

  E|e(t, ϕ)|² ≤ νe^{−δt} E{ sup_{−2d* ≤ θ ≤ 0} |ϕ(θ)|² }.   (13)

Definition 3 [56]: The master system (1) and the slave system (3) with control packet loss are said to be exponentially synchronous in the mean square if the switched system (11) is mean square exponentially stable, that is, if there exist positive scalars ν and δ such that (13) holds.
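Definition 1 can be illustrated with a short script. Below (our own example; the switching instants are arbitrary, chosen with consecutive gaps of at least τₐ = 0.5, and the chatter bound is N₀ = 1), the switching count N_σ(T, t) is verified against bound (12) on a grid of windows:

```python
import numpy as np

# Switching instants (our example): consecutive gaps are all >= tau_a = 0.5.
switches = np.array([0.6, 1.2, 2.0, 2.5, 3.3, 4.1])
tau_a, N0 = 0.5, 1.0

def N_sigma(T, t):
    # Number of switchings of sigma over the interval [T, t).
    return int(np.sum((switches >= T) & (switches < t)))

# Verify the average dwell-time bound (12) over a grid of windows.
ok = all(
    N_sigma(T, t) <= N0 + (t - T) / tau_a
    for T in np.linspace(0.0, 4.5, 46)
    for t in np.linspace(0.0, 4.5, 46)
    if t >= T
)
print(ok)  # True
```

Since every pair of consecutive switches is at least τₐ apart, at most 1 + (t − T)/τₐ switches can fall in any window, which is exactly what the check confirms.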

For simplicity, we denote

  E₁ = diag{s₁⁻s₁⁺, s₂⁻s₂⁺, …, sₙ⁻sₙ⁺}
  E₂ = diag{s₁⁻ + s₁⁺, s₂⁻ + s₂⁺, …, sₙ⁻ + sₙ⁺}
  ψ(t) = [eᵀ(t)  φᵀ(e(t), y(t))]ᵀ.

To discuss the exponential synchronization of the master system (1) and the slave system (3) with control packet loss, we first investigate the same problem without control packet loss. That is, we first derive conditions for the exponential stability of the synchronization error system (6).

Theorem 1: For a given scalar δ₁ > 0 and diagonal matrices E₁ > 0, E₂ > 0, the error system (9) is mean square exponentially stable if there exist matrices P₁ > 0, S₁ > 0, R₁ > 0, U₁ > 0, X₁ > 0, Y₁ > 0, Fᵢ > 0, Tᵢ > 0, Zᵢ > 0 (i = 1, 2, …, m), Q₁ = [Q₁₁  Q₁₂; ⋆  Q₁₃] > 0, W₁ = [W₁₁  W₁₂; ⋆  W₁₃] > 0, H₁ = [H₁₁  H₁₂; ⋆  H₁₃] > 0, diagonal matrices D₁ > 0, D₂ > 0, and any matrices G, H, Vᵢ (i = 1, …, m) such that the following LMIs hold:

  Θ < 0   (14)
  [Tᵢ  Vᵢ; ⋆  Tᵢ] ≥ 0  ∀i = 1, …, m   (15)

where Θ is the symmetric block matrix whose first block row is [Ψ  Σ₂¹  Σ₂²  ⋯  Σ₂ᵐ  Φ₁  Φ₂], whose middle diagonal chain consists of Σ₃¹, Σ₃², …, Σ₃ᵐ with the superdiagonal blocks Σ₄², …, Σ₄ᵐ, and whose trailing blocks are Φ₃, Φ₅, and Φ₆ on the diagonal and Φ₄ off the diagonal,

with

  Ψ = [Ψ₁₁  Ψ₁₂; ⋆  Ψ₂₂],  Σ₂¹ = [Σ₁₂¹¹  Σ₁₂¹²  Σ₁₂¹³; Σ₁₂²¹  0  0]
  Σ₃ⁱ = [Σ₃ⁱ¹¹  Σ₃ⁱ¹²  −αᵢGKC; ⋆  Σ₃ⁱ²²  Σ₃ⁱ²³; ⋆  ⋆  Σ₃ⁱ³³],  Σ₄ⁱ = [Σ₄ⁱ²¹  Σ₄ⁱ²²  Σ₄ⁱ²³; 0  0  0; 0  0  0]
  Φ₁ = [Φ₁¹¹  0  Φ₁¹³; 0  0  0],  Φ₂ = [Φ₂¹¹  GB₁  GB₂]
  Φ₃ = [Φ₃¹¹  0  Φ₃¹³; ⋆  Φ₃²²  0; ⋆  ⋆  Φ₃³³],  Φ₄ = [Φ₄¹²  0  0; 0  Φ₄²³  0; 0  0  Φ₄³⁴]
  Φ₅ = [Φ₅¹¹  0  0  0; ⋆  Φ₅²²  0  0; ⋆  ⋆  Φ₅³³  0; ⋆  ⋆  ⋆  Φ₅⁴⁴],  Φ₆ = [Φ₆¹¹  0; ⋆  Φ₆²²]

where

  Ψ₁₁ = Q₁₁ + W₁₁ + H₁₁ − α₁e^{−2δ₁h₁}(Z₁ + (π²/4)Z₁) + 2δ₁P₁ − 2GA − 2D₁E₁ − e^{−2δ₁d₁}R₁ − e^{−2δ₁d₂}U₁
  Ψ₁₂ = P₁ − G − GA
  Ψ₂₂ = d₁²R₁ + d₂²U₁ + Σ_{i=1}^{m} pᵢ²(Tᵢ + Zᵢ) + S₁ − 2G
  Σ₁₂¹¹ = e^{−2δ₁h₁}α₁(T₁ − V₁) − α₁GKC,  Σ₁₂²¹ = −α₁GKC
  Σ₁₂¹² = e^{−2δ₁h₁}α₁(V₁ + Z₁ − (π²/4)Z₁),  Σ₁₂¹³ = e^{−2δ₁h₁}α₁(π²/2)Z₁
  Σ₃ⁱ¹¹ = e^{−2δ₁hᵢ}αᵢ(−2Tᵢ + Vᵢ + Vᵢᵀ),  Σ₃ⁱ¹² = e^{−2δ₁hᵢ}αᵢ(Tᵢ − Vᵢ)
  Σ₃ⁱ²² = −e^{−2δ₁hᵢ}αᵢ(Tᵢ + Zᵢ + (π²/4)Zᵢ + Fᵢ) − e^{−2δ₁hᵢ₊₁}αᵢ₊₁(Tᵢ₊₁ − Zᵢ₊₁ − (π²/4)Zᵢ₊₁) + e^{−2δ₁hᵢ}αᵢ₊₁Fᵢ₊₁
  Σ₃ⁱ²³ = e^{−2δ₁hᵢ}αᵢ(π²/2)Zᵢ,  Σ₃ⁱ³³ = −e^{−2δ₁hᵢ}αᵢπ²Zᵢ
  Σ₄ⁱ²¹ = −αᵢGKC,  Σ₄ⁱ²² = e^{−2δ₁hᵢ}αᵢ(Tᵢ − Vᵢ),  Σ₄ⁱ²³ = e^{−2δ₁hᵢ}αᵢ(Vᵢ + Zᵢ − (π²/4)Zᵢ)
  Φ₁¹¹ = e^{−2δ₁d₁}R₁,  Φ₁¹³ = e^{−2δ₁d₂}U₁
  Φ₂¹¹ = D₁E₂ + GB₁ + Q₁₂ + W₁₂ + H₁₂
  Φ₃¹¹ = −e^{−2δ₁d₁}Q₁₁ + d₁₂²e^{−2δ₁d₁}X₁ − e^{−2δ₁d₂}Y₁ − e^{−2δ₁d₁}R₁
  Φ₃¹³ = e^{−2δ₁d₂}Y₁,  Φ₃²² = −e^{−2δ₁d₂}(1 − μ)H₁₁ − 2D₂E₁,  Φ₃³³ = −e^{−2δ₁d₂}(W₁₁ + Y₁ + U₁)
  Φ₄¹² = e^{−2δ₁d₁}Q₁₂,  Φ₄²³ = e^{−2δ₁d₂}(1 − μ)H₁₂ + 2D₂E₂,  Φ₄³⁴ = e^{−2δ₁d₂}W₁₂
  Φ₅¹¹ = Q₁₃ + W₁₃ + H₁₃ − 2D₁,  Φ₅²² = −e^{−2δ₁d₁}Q₁₃
  Φ₅³³ = −e^{−2δ₁d₂}H₁₃ − 2D₂,  Φ₅⁴⁴ = −e^{−2δ₁d₂}W₁₃
  Φ₆¹¹ = d₁₂²e^{−2δ₁d₁}Y₁ − e^{−2δ₁d₁}S₁,  Φ₆²² = −e^{−2δ₁d₂}X₁.

The desired control gain is given by K = G⁻¹H. Moreover, the Lyapunov–Krasovskii functional (17) satisfies

  V₁(t) ≤ e^{−2δ₁(t−tₖ)}V₁(tₖ),  t ∈ [tₖ, tₖ₊₁).   (16)
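In practice, the feasibility of LMIs such as (14) and (15) is checked with an SDP/LMI solver (the paper relies on MATLAB toolboxes). As a minimal stand-in, the sketch below (our own toy problem, not the paper's LMI) certifies feasibility of the scalar-rate exponential-stability LMI AᵀP + PA + 2δP < 0 by solving a Lyapunov equation with SciPy and testing definiteness:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 0.5],
              [0.0, -3.0]])    # Hurwitz system matrix (assumed example)
delta = 0.4                    # desired exponential rate (assumed)

# Solve (A + delta*I)^T P + P (A + delta*I) = -I.  If P > 0, then
# A^T P + P A + 2*delta*P = -I < 0, i.e., the LMI is feasible.
Ad = A + delta * np.eye(2)
P = solve_continuous_lyapunov(Ad.T, -np.eye(2))

lmi = A.T @ P + P @ A + 2 * delta * P
feasible = bool(np.all(np.linalg.eigvalsh(P) > 0)
                and np.all(np.linalg.eigvalsh(lmi) < 0))
print(feasible)  # True
```

The same pattern (construct the matrix inequality, hand it to a solver, inspect definiteness of the certificate) applies to the full conditions (14) and (15), only with a general-purpose SDP solver in place of the Lyapunov equation.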

Proof: Consider the Lyapunov–Krasovskii functional

  V₁(t) = Σ_{ν=1}^{5} V₁ν(t)   (17)

where

  V₁₁(t) = eᵀ(t)P₁e(t)
  V₁₂(t) = ∫_{t−d₁}^{t} e^{2δ₁(s−t)} ψᵀ(s)Q₁ψ(s) ds + ∫_{t−d₂}^{t} e^{2δ₁(s−t)} ψᵀ(s)W₁ψ(s) ds + ∫_{t−d(t)}^{t} e^{2δ₁(s−t)} ψᵀ(s)H₁ψ(s) ds + ∫_{t−d₁}^{t} e^{2δ₁(s−t)} ėᵀ(s)S₁ė(s) ds
  V₁₃(t) = d₁ ∫_{−d₁}^{0} ∫_{t+θ}^{t} e^{2δ₁(s−t)} ėᵀ(s)R₁ė(s) ds dθ + d₂ ∫_{−d₂}^{0} ∫_{t+θ}^{t} e^{2δ₁(s−t)} ėᵀ(s)U₁ė(s) ds dθ
  V₁₄(t) = d₁₂ ∫_{−d₂}^{−d₁} ∫_{t+θ}^{t−d₁} e^{2δ₁(s−t)} eᵀ(s)X₁e(s) ds dθ + d₁₂ ∫_{−d₂}^{−d₁} ∫_{t+θ}^{t−d₁} e^{2δ₁(s−t)} ėᵀ(s)Y₁ė(s) ds dθ
  V₁₅(t) = Σ_{i=1}^{m} αᵢ [ ∫_{t−hᵢ}^{t−hᵢ₋₁} e^{2δ₁(s−t)} eᵀ(s)Fᵢe(s) ds + pᵢ ∫_{−hᵢ}^{−hᵢ₋₁} ∫_{t+θ}^{t} e^{2δ₁(s−t)} ėᵀ(s)(Tᵢ + Zᵢ)ė(s) ds dθ ]

with d₁₂ = d₂ − d₁ and pᵢ = hᵢ − hᵢ₋₁. Define the infinitesimal operator L as

  LV(t) = lim_{h→0⁺} (1/h){E{V(t + h) | t} − V(t)}.   (18)

Calculating the time derivative of V₁(t), we obtain

  LV₁₁(t) + 2δ₁V₁₁(t) = 2eᵀ(t)P₁ė(t) + 2δ₁eᵀ(t)P₁e(t)   (19)

  LV₁₂(t) + 2δ₁V₁₂(t) = ψᵀ(t)Q₁ψ(t) − e^{−2δ₁d₁}ψᵀ(t − d₁)Q₁ψ(t − d₁) + ψᵀ(t)W₁ψ(t) − e^{−2δ₁d₂}ψᵀ(t − d₂)W₁ψ(t − d₂) + ψᵀ(t)H₁ψ(t) − e^{−2δ₁d₂}(1 − μ)ψᵀ(t − d(t))H₁ψ(t − d(t)) + ėᵀ(t)S₁ė(t) − e^{−2δ₁d₁}ėᵀ(t − d₁)S₁ė(t − d₁)   (20)

  LV₁₃(t) + 2δ₁V₁₃(t) ≤ d₁²ėᵀ(t)R₁ė(t) + d₂²ėᵀ(t)U₁ė(t) − e^{−2δ₁d₁}(∫_{t−d₁}^{t} ė(s)ds)ᵀ R₁ (∫_{t−d₁}^{t} ė(s)ds) − e^{−2δ₁d₂}(∫_{t−d₂}^{t} ė(s)ds)ᵀ U₁ (∫_{t−d₂}^{t} ė(s)ds)   (21)

  LV₁₄(t) + 2δ₁V₁₄(t) ≤ d₁₂²e^{−2δ₁d₁}eᵀ(t − d₁)X₁e(t − d₁) − e^{−2δ₁d₂}(∫_{t−d₂}^{t−d₁} e(s)ds)ᵀ X₁ (∫_{t−d₂}^{t−d₁} e(s)ds) + d₁₂²e^{−2δ₁d₁}ėᵀ(t − d₁)Y₁ė(t − d₁) − e^{−2δ₁d₂}(∫_{t−d₂}^{t−d₁} ė(s)ds)ᵀ Y₁ (∫_{t−d₂}^{t−d₁} ė(s)ds).   (22)


Similarly,

  LV₁₅(t) + 2δ₁V₁₅(t) ≤ Σ_{i=1}^{m} αᵢ [ e^{−2δ₁hᵢ₋₁}eᵀ(t − hᵢ₋₁)Fᵢe(t − hᵢ₋₁) − e^{−2δ₁hᵢ}eᵀ(t − hᵢ)Fᵢe(t − hᵢ) + pᵢ²ėᵀ(t)(Tᵢ + Zᵢ)ė(t) − pᵢe^{−2δ₁hᵢ} ∫_{t−hᵢ}^{t−hᵢ₋₁} ėᵀ(s)(Tᵢ + Zᵢ)ė(s) ds ].   (23)

The last integral term of LV₁₅(t) can be bounded, as in [57, Th. 1, Lemmas 1 and 3], by

  −Σ_{i=1}^{m} αᵢpᵢ ∫_{t−hᵢ}^{t−hᵢ₋₁} ėᵀ(s)Tᵢė(s) ds ≤ Σ_{i=1}^{m} αᵢ ϕᵢᵀ(t)Hᵢϕᵢ(t)   (24)
  −Σ_{i=1}^{m} αᵢpᵢ ∫_{t−hᵢ}^{t−hᵢ₋₁} ėᵀ(s)Zᵢė(s) ds ≤ Σ_{i=1}^{m} αᵢ ϕ̃ᵢᵀ(t)Gᵢϕ̃ᵢ(t).   (25)

For any appropriately dimensioned matrix G, the following equality holds:

  2[eᵀ(t)G + ėᵀ(t)G] × [−ė(t) − Ae(t) + B₁φ(e(t), y(t)) + B₂φ(e(t − d(t)), y(t − d(t))) − Σ_{i=1}^{m} αᵢKCe(t − τᵢ(t))] = 0.   (26)

In addition, it follows from (2) that, for any l = 1, 2, …, n,

  [φₗ(eₗ, yₗ) − sₗ⁻eₗ][φₗ(eₗ, yₗ) − sₗ⁺eₗ] ≤ 0  ∀eₗ, yₗ   (27)

which implies that, for any matrix D₁ = diag{d₁₁, d₁₂, …, d₁ₙ} > 0, the following inequality holds:

  −2 Σ_{j=1}^{n} d₁ⱼ[φⱼ(eⱼ, yⱼ) − sⱼ⁻eⱼ][φⱼ(eⱼ, yⱼ) − sⱼ⁺eⱼ] ≥ 0.   (28)

Rewriting (28), we obtain

  −2eᵀ(t)D₁E₁e(t) + 2eᵀ(t)D₁E₂φ(e(t), y(t)) − 2φᵀ(e(t), y(t))D₁φ(e(t), y(t)) ≥ 0.   (29)

Similarly, for any matrix D₂ = diag{d₂₁, d₂₂, …, d₂ₙ} > 0, one has

  −2eᵀ(t − d(t))D₂E₁e(t − d(t)) + 2eᵀ(t − d(t))D₂E₂φ(e(t − d(t)), y(t − d(t))) − 2φᵀ(e(t − d(t)), y(t − d(t)))D₂φ(e(t − d(t)), y(t − d(t))) ≥ 0.   (30)

Combining (19)–(30), we have

  LV₁(t) + 2δ₁V₁(t) ≤ κᵀ(t)Θκ(t) ≤ 0   (31)

where the matrix Θ is defined in (14) and

  κ(t) = [eᵀ(t)  eᵢᵀ(t)  eᵀ(t − d₁)  eᵀ(t − d(t))  eᵀ(t − d₂)  φᵀ(e(t), y(t))  φᵀ(e(t − d₁), y(t − d₁))  φᵀ(e(t − d(t)), y(t − d(t)))  φᵀ(e(t − d₂), y(t − d₂))  ėᵀ(t − d₁)  (∫_{t−d₂}^{t−d₁} e(s) ds)ᵀ]ᵀ

with

  eᵢ(t) = [eᵀ(t − τ₁(t))  eᵀ(t − h₁)  (1/p₁)∫_{t−h₁}^{t} eᵀ(s) ds  eᵀ(t − τ₂(t))  eᵀ(t − h₂)  (1/p₂)∫_{t−h₂}^{t−h₁} eᵀ(s) ds  ⋯  eᵀ(t − τₘ(t))  eᵀ(t − hₘ)  (1/pₘ)∫_{t−hₘ}^{t−hₘ₋₁} eᵀ(s) ds].

From (31), we have

  V₁(t) ≤ e^{−2δ₁(t−tₖ)}V₁(tₖ) ≤ e^{−2δ₁(t−tₖ₋₁)}V₁(tₖ₋₁) ≤ ⋯ ≤ e^{−2δ₁t}V₁(0).   (32)

Moreover, from the definition of V₁(t), we have

  e^{2δ₁t}E{|e(t)|²}λ_min(P₁) ≤ E{V₁(t)} ≤ E{V₁(0)}.   (33)

On the other hand, let

  ρ₁ = (1 − e^{−2δ₁d₁})/(2δ₁),  ρ₂ = (1 − e^{−2δ₁d₂})/(2δ₁)
  ρ₃ = d₁[d₁/(2δ₁) − (1 − e^{−2δ₁d₁})/(4δ₁²)]
  ρ₄ = d₂[d₂/(2δ₁) − (1 − e^{−2δ₁d₂})/(4δ₁²)]
  ρ₅ = d₁₂[e^{−2δ₁d₁}(d₂ − d₁)/(2δ₁) − (e^{−2δ₁d₁} − e^{−2δ₁d₂})/(4δ₁²)].

Then, from (17), one can deduce that

  E{V₁(0)} ≤ ζ E{ sup_{−2d* ≤ θ ≤ 0} |χ(θ)|² }   (34)

with

  ζ = λ_max(P₁) + ρ₁[λ_max(Q₁₁) + λ_max(Q₁₂ᵀ) + λ_max(Q₁₂) + λ_max(Q₁₃) + λ_max(S₁)] + ρ₂[λ_max(W₁₁) + λ_max(W₁₂ᵀ) + λ_max(W₁₂) + λ_max(W₁₃) + λ_max(H₁₁) + λ_max(H₁₂ᵀ) + λ_max(H₁₂) + λ_max(H₁₃)] + ρ₃λ_max(R₁) + ρ₄λ_max(U₁) + ρ₅(λ_max(X₁) + λ_max(Y₁)).

From (32)–(34), one can immediately obtain

  E{|e(t, χ)|²} ≤ (ζ/λ_min(P₁)) e^{−2δ₁t} E{ sup_{−2d* ≤ θ ≤ 0} |χ(θ)|² }.   (35)

Thus, according to Definition 2, the master system (1) and the slave system (3) without control packet loss are exponentially synchronous in the mean square. This completes the proof.

Theorem 2 below proposes sufficient conditions for the exponential stability of the error system (10), using the Lyapunov–Krasovskii functional

  V₂(t) = Σ_{ξ=1}^{5} V₂ξ(t)   (36)

where V₂₁, V₂₂, V₂₃, V₂₄, and V₂₅ are the same as V₁₁, V₁₂, V₁₃, V₁₄, and V₁₅ with the matrices P₁, Q₁, W₁, H₁, R₁, U₁, X₁,


and Y₁ replaced by P₂, Q₂, W₂, H₂, R₂, U₂, X₂, and Y₂, respectively.

Theorem 2: For a given scalar δ₂ < 0 and diagonal matrices E₁ > 0, E₂ > 0, the error system (10) is mean square exponentially stable if there exist matrices P₂ > 0, S₂ > 0, Y₂ > 0, R₂ > 0, U₂ > 0, X₂ > 0, Q₂ = [Q₂₁  Q₂₂; ⋆  Q₂₃] > 0, W₂ = [W₂₁  W₂₂; ⋆  W₂₃] > 0, H₂ = [H₂₁  H₂₂; ⋆  H₂₃] > 0, and any matrices G, Vᵢ (i = 1, …, m) such that the following LMI holds:

  Ξ = [Ξ₁  Ξ₂  0; ⋆  Ξ₃  Ξ₄; ⋆  ⋆  Ξ₅] < 0   (37)

where the blocks Ξ₁, …, Ξ₅ are assembled analogously to the blocks of Θ in (14) from the matrices P₂, Q₂, W₂, H₂, S₂, R₂, U₂, X₂, Y₂ and the rate δ₂, with the controller-dependent blocks removed. Moreover, the Lyapunov–Krasovskii functional (36) satisfies

  V₂(t) ≤ e^{−2δ₂(t−tₖ)}V₂(tₖ),  t ∈ [tₖ, tₖ₊₁).   (38)

Proof: Proceeding with the same procedure as in Theorem 1 and adding the left-hand sides of the inequalities

  −2eᵀ(t)D₃E₁e(t) + 2eᵀ(t)D₃E₂φ(e(t), y(t)) − 2φᵀ(e(t), y(t))D₃φ(e(t), y(t)) ≥ 0   (39)

and

  −2eᵀ(t − d(t))D₄E₁e(t − d(t)) + 2eᵀ(t − d(t))D₄E₂φ(e(t − d(t)), y(t − d(t))) − 2φᵀ(e(t − d(t)), y(t − d(t)))D₄φ(e(t − d(t)), y(t − d(t))) ≥ 0   (40)

to LV₂(t) + 2δ₂V₂(t), we obtain

  LV₂(t) + 2δ₂V₂(t) ≤ ωᵀ(t)Ξω(t)   (41)

and, from (37), we conclude that

  LV₂(t) + 2δ₂V₂(t) ≤ 0.   (42)

From (42), it follows (38). This completes the proof.

Now, we are in a position to establish the conditions under which the master system (1) and the slave system (3) with control packet loss are exponentially synchronous in the mean square, by means of an appropriate Lyapunov–Krasovskii functional and the average dwell-time method. For simplicity, let Tᵘ(T, t) be the total activation time of subsystem (10) and Tˢ(T, t) be the total activation time of subsystem (9) during the time interval [T, t), where 0 ≤ T < t. It is obvious that

  Tˢ(T, t) + Tᵘ(T, t) = t − T.   (43)

Moreover, the loss rate of control packets over the time interval [T, t) is defined by

  ε = Tᵘ(T, t)/(t − T) ≥ 0.   (44)

Theorem 3: For given scalars δ₁ > 0, δ₂ < 0, and μ₁, μ₂ ≥ 1, suppose there exist matrices P₁ > 0, P₂ > 0, S₁ > 0, S₂ > 0, R₁ > 0, R₂ > 0, U₁ > 0, U₂ > 0, X₁ > 0, X₂ > 0, Y₁ > 0, Y₂ > 0, Fᵢ > 0, Tᵢ > 0, Zᵢ > 0 (i = 1, 2, …, m), Q₁ = [Q₁ⱼ]₂×₂ > 0, Q₂ = [Q₂ⱼ]₂×₂ > 0, W₁ = [W₁ⱼ]₂×₂ > 0, W₂ = [W₂ⱼ]₂×₂ > 0, H₁ = [H₁ⱼ]₂×₂ > 0, H₂ = [H₂ⱼ]₂×₂ > 0, and any matrices G, Vᵢ (i = 1, …, m) such that (14)–(16), (37), and the following LMIs hold:

  P₁ ≤ μ₁P₂,  Q₁ ≤ μ₁Q₂,  W₁ ≤ μ₁W₂,  H₁ ≤ μ₁H₂,  S₁ ≤ μ₁S₂,  R₁ ≤ μ₁R₂,  U₁ ≤ μ₁U₂,  X₁ ≤ μ₁X₂,  Y₁ ≤ μ₁Y₂
  P₂ ≤ μ₂P₁,  Q₂ ≤ μ₂Q₁,  W₂ ≤ μ₂W₁,  H₂ ≤ μ₂H₁,  S₂ ≤ μ₂S₁,  R₂ ≤ μ₂R₁,  U₂ ≤ μ₂U₁,  X₂ ≤ μ₂X₁,  Y₂ ≤ μ₂Y₁.   (45)

If, in addition, the switching signal σ(t) has an average dwell-time τδ and the control packet loss rate ε satisfies

  τδ > τδ* = ln(μ₁μ₂)/(2(δ₁ − ε(δ₁ − δ₂))),  ε < ε* = δ₁/(δ₁ − δ₂)   (46)

then the master system (1) and the slave system (3) with control packet loss are exponentially synchronous in the mean square. The control gain matrix in this case is again given by K = G⁻¹H.

Proof: Consider the Lyapunov–Krasovskii functional

  V(t) = V_{σ(t)}(t)   (47)

where V₁(t) and V₂(t) are given as in (17) and (36), respectively. When σ(t) = 1, we have

  V(t) ≤ e^{−2δ₁(t−tₖ)}V(tₖ)   (48)

and

  V₁(tₖ) ≤ μ₁V₂(tₖ⁻)   (49)

and if σ(t) = 2, then

  V(t) ≤ e^{−2δ₂(t−tₖ)}V(tₖ)   (50)

and

  V₂(tₖ) ≤ μ₂V₁(tₖ⁻).   (51)

Let T₁, T₂, …, T_p be the switching instants of σ(t) on the interval [0, t), with 0 < T₁ < T₂ < ⋯ < T_p. Then, for each T_q, there exists b ∈ {1, 2, …, k} such that T_q = t_b. Thus, for any t ∈ [tₖ, tₖ₊₁), we have

  V(t) ≤ e^{−2δ_{σ(T_p)}(t−T_p)} V_{σ(T_p)}(T_p)
       ≤ e^{−2δ_{σ(T_p)}(t−T_p)} μ V_{σ(T_p⁻)}(T_p⁻)
       ≤ ⋯
       ≤ e^{−2δ₁Tˢ(0,t) − 2δ₂Tᵘ(0,t)} μ^{N_σ(0,t)} V_{σ(0)}(0)
       ≤ e^{−2δ₁(1−ε)t − 2δ₂εt} e^{(N₀ + t/τδ) ln μ} V_{σ(0)}(0)
       ≤ e^{N₀ ln μ} e^{−2δt} V_{σ(0)}(0)   (52)

where μ stands for μ₁ or μ₂ according to the subsystem activated at the corresponding switching instant


and

  δ = δ₁ − ε(δ₁ − δ₂) − ln(μ₁μ₂)/(2τδ) ∈ (0, δ₁ − ε(δ₁ − δ₂)].   (53)
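The admissibility bounds in (46) and the decay rate in (53) can be evaluated directly. The sketch below (our own arithmetic; δ₁, δ₂, μ₁, μ₂, ε, and τδ are illustrative values in the spirit of Example 1, and the outputs follow the formulas as written here rather than reproducing the paper's reported numbers) computes ε*, τδ*, and δ:

```python
import math

delta1, delta2 = 1.1, -1.7    # convergence / divergence rates (assumed)
mu1, mu2 = 1.0, 1.1           # Lyapunov jump factors (assumed)
eps, tau_d = 0.1, 0.25        # packet loss rate and average dwell-time (assumed)

# Bounds from (46) and decay rate from (53).
eps_star = delta1 / (delta1 - delta2)
tau_star = math.log(mu1 * mu2) / (2 * (delta1 - eps * (delta1 - delta2)))
delta = delta1 - eps * (delta1 - delta2) - math.log(mu1 * mu2) / (2 * tau_d)

print(eps_star, tau_star, delta)
print(eps < eps_star and tau_d > tau_star and delta > 0)  # True: admissible
```

With these values the loss rate and the dwell time are admissible and the resulting decay rate δ is positive; increasing ε or decreasing τδ shrinks δ, in line with Remark 2.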

From the definition of V(t), we have

  a₁E{|e(t)|²} ≤ e^{N₀ ln(μ₁μ₂)} e^{−2δt} E{V_{σ(0)}(0)}.   (54)

Then, following the same procedure as in Theorem 1, we arrive at

  E{|e(t, χ)|²} ≤ (a₂/a₁) e^{N₀ ln(μ₁μ₂)} e^{−2δt} E{ sup_{−2d* ≤ θ ≤ 0} |χ(θ)|² }   (55)

with

  a₂ = λ_max(P₁) + ρ₁[λ_max(Q₁₁) + λ_max(Q₁₂ᵀ) + λ_max(Q₁₂) + λ_max(Q₁₃) + λ_max(S₁)] + ρ₂[λ_max(W₁₁) + λ_max(W₁₂ᵀ) + λ_max(W₁₂) + λ_max(W₁₃) + λ_max(H₁₁) + λ_max(H₁₂ᵀ) + λ_max(H₁₂) + λ_max(H₁₃)] + ρ₃λ_max(R₁) + ρ₄λ_max(U₁) + ρ₅(λ_max(X₁) + λ_max(Y₁)) + λ_max(P₂) + ρ₁[λ_max(Q₂₁) + λ_max(Q₂₂ᵀ) + λ_max(Q₂₂) + λ_max(Q₂₃) + λ_max(S₂)] + ρ₂[λ_max(W₂₁) + λ_max(W₂₂ᵀ) + λ_max(W₂₂) + λ_max(W₂₃) + λ_max(H₂₁) + λ_max(H₂₂ᵀ) + λ_max(H₂₂) + λ_max(H₂₃)] + ρ₃λ_max(R₂) + ρ₄λ_max(U₂) + ρ₅(λ_max(X₂) + λ_max(Y₂)).

Then, by Definition 2, we conclude that the master system (1) and the slave system (3) with control packet loss are synchronous in the mean square. Remark 2: It can be viewed from (46) that the lower bound of average dwell-time τδ∗ not only depends on the convergence rate δ1 and the divergence rate δ2 but also on the packet loss rate . It also reflects the admissible switching frequency between the cases of packet-missing and nonpacket-missing. In addition, from (53), it should be noted that the upper bound of the control packet loss rate  ∗ not only depends on the convergence rate δ1 but also on the divergence rate δ2 . Since the exponential decay rate δ of the synchronization error system (11) given by (53) depends on τδ and , we can say that a larger  and a smaller τδ lead to a smaller exponential decay rate δ, that is, the loss of more control packet and frequent switching between the cases of packet and nonpacket-missing will degrade the exponential stability of system (11). Remark 3: It is worth pointing out that, in the existing literature, there were only a very few works based on the stabilization of sampled-data control systems with control packet loss, namely, [52] and [53] and no results have been found for neural networks with control packet loss. However, in all the above-mentioned works, the sampling interval does not vary stochastically. Thus, the main contribution of this paper is to fill such gaps by making the first attempt to deal with the synchronization of neural networks with time-varying delays under a sampled-data controller with stochastically varying sampling periods and control packet loss. Thus, the

Fig. 1. Chaotic behavior of master system.

theoretical results proposed in this paper enrich the study of synchronization of neural networks with time-varying delays.

IV. NUMERICAL EXAMPLES

In this section, we provide two numerical examples with simulation results to demonstrate the applicability and effectiveness of the proposed controller.

Example 1: Consider the neural network (1) with

A = [1 0 0; 0 1 0; 0 0 1], B1 = [1.2 −1.6 0; 1.25 1 0.9; 0 2.2 1.5]

B2 = [−0.009 0.002 0.001; 0.002 0.001 0.003; 0.001 0.002 −0.001], C = [1 0 0; 0 1 0]

and J(t) = 1/t. The neuron activation function is chosen as

ηl(xl(t)) = (1/2)(|xl(t) + 1| − |xl(t) − 1|), l = 1, 2, 3.  (56)

A straightforward calculation yields E1 = 0.1I and E2 = 0.3I. Here, we assume that the control packet from the controller to the actuator is lost and the actuator input to the system is then set to zero. Thus, the closed-loop system can be represented in the form of (11). We first let δ1 = 1.1, δ2 = −1.7, μ1 = 1, μ2 = 1.1, the average dwell-time τδ = 0.25, and the control packet loss rate ϱ = 0.1. The time-varying delay is chosen as d(t) = e^t/(e^t + 1). In this example, we consider only two sampling intervals for simplicity and assume h1 = 0.1 and h2 = 0.2. From these values, we have τδ > τδ∗ = 0.0391, ϱ < ϱ∗ = 0.3929, and the exponential convergence rate δ = 0.8081. Moreover, solving the LMIs (14), (37), and (45) in Theorem 3 for the given values d1 = 0.5, d2 = 1, and μ = 0.25, we obtain the control gain matrix K of the controller as

K = [1.1654 −0.0695; 0.0779 1.1178; 0.0004 0.0655].
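Two of the data choices in Example 1 can be checked numerically. The sketch below (illustrative, in Python) first verifies that d(t) = e^t/(e^t + 1) satisfies the bounds d1 = 0.5 ≤ d(t) ≤ d2 = 1 and ḋ(t) ≤ μ = 0.25 used in Theorem 3, and then integrates the uncontrolled master system with a fixed-step Euler scheme and a history buffer for the delayed state; the delayed-Hopfield form ẋ(t) = −Ax(t) + B1 η(x(t)) + B2 η(x(t − d(t))) + J(t) is assumed from context rather than quoted from the paper.

```python
import math

def d(t):
    # time-varying delay d(t) = e^t / (e^t + 1), i.e., the logistic function
    return 1.0 / (1.0 + math.exp(-t))

# verify the delay bounds used in Theorem 3: d1 <= d(t) <= d2 and d'(t) <= mu
for k in range(2001):
    t = 0.01 * k
    ddot = d(t) * (1.0 - d(t))      # derivative of the logistic function
    assert 0.5 <= d(t) < 1.0        # d1 = 0.5, d2 = 1
    assert ddot <= 0.25 + 1e-12     # mu = 0.25 (attained at t = 0)

def eta(x):
    # activation (56): 0.5 * (|x + 1| - |x - 1|), applied componentwise
    return [0.5 * (abs(v + 1.0) - abs(v - 1.0)) for v in x]

def simulate(T=20.0, dt=0.001):
    """Euler integration of the uncontrolled master system; the
    delayed-Hopfield form is assumed from context, not quoted."""
    A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    B1 = [[1.2, -1.6, 0.0], [1.25, 1.0, 0.9], [0.0, 2.2, 1.5]]
    B2 = [[-0.009, 0.002, 0.001], [0.002, 0.001, 0.003],
          [0.001, 0.002, -0.001]]
    x = [-0.7, -0.3, 0.4]           # initial condition from Example 1
    hist = [x[:]]                   # history buffer for the delayed state
    for k in range(int(T / dt)):
        t = k * dt
        lag = max(0, k - int(d(t) / dt))
        xd = eta(hist[lag])         # eta(x(t - d(t)))
        fx = eta(x)
        J = 1.0 / (t + dt)          # external input J(t) = 1/t, for t > 0
        xnew = []
        for i in range(3):
            rhs = (-sum(A[i][j] * x[j] for j in range(3))
                   + sum(B1[i][j] * fx[j] for j in range(3))
                   + sum(B2[i][j] * xd[j] for j in range(3)) + J)
            xnew.append(x[i] + dt * rhs)
        x = xnew
        hist.append(x[:])
    return x

state = simulate()
assert all(abs(v) < 20.0 for v in state)   # trajectory stays bounded
```

The trajectory remains bounded while never settling, which is what the chaotic behavior reported in Fig. 1 would suggest.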

Set the initial conditions of the master system and the slave system to x(0) = [−0.7 −0.3 0.4]^T and y(0) = [0.5 0.3 0.2]^T, respectively. The chaotic behaviors of the master system and the slave system without the controller are shown in Figs. 1 and 2, respectively. Under the given controller gain,

Fig. 2. Chaotic behavior of slave system.

Fig. 3. State responses of error system.

Fig. 4. Stochastic parameter h.

Fig. 5. Switching signal σ(t).

Fig. 6. Chaotic behavior of master system.

Fig. 7. Chaotic behavior of slave system.

the simulation result for the controlled error signals is presented in Fig. 3. That is, the master system (1) is synchronized with the slave system (3). Fig. 4 shows the stochastic parameter h, and the switching signal σ(t) is shown in Fig. 5.

Example 2: Consider the neural network model (1) with

A = diag{1, 1, 1}, B1 = [1.28 −1.65 0; 1.24 1 0.9; 0 2.2 1.5]

B2 = [0.001 0.002 0.001; 0.002 0.004 0.003; 0.001 0.002 −0.001], C = [1 0 0; 0 1 0]

and J(t) = 1/t. The nonlinear activation function is chosen as ηl(xl(t)) = tanh(xl), l = 1, 2, 3. In addition, letting h1 = 0.4, h2 = 0.5, d1 = 0.5, d2 = 1, and μ = 0.6 and solving the LMIs (14), (37), and (45) using the MATLAB LMI toolbox,

it can be found that these LMIs are feasible, and the controller gain is

K = [0.3957 −0.0494; 0.0571 0.4105; 0.0006 0.0057].

Choose the initial conditions for the master system as x(0) = [−0.5 −0.4 −1.4]^T and for the slave system as y(0) = [0.5 0.3 0.2]^T. Then, the chaotic behaviors of the master system (1) and the slave system (3) without the control input are shown in Figs. 6 and 7, respectively. Under the controller gain K given above, the trajectories of the error system are well stabilized, as shown in Fig. 8. Fig. 9 shows the stochastic parameter h, and Fig. 10 shows the switching signal σ(t).

Fig. 8. State responses of error system.

Fig. 9. Stochastic parameter h.

Fig. 10. Switching signal σ(t).

V. CONCLUSION

In this paper, the synchronization problem for neural networks with time-varying delay and control packet loss has been studied. A sampled-data controller with m stochastically varying sampling intervals has been considered. A Lyapunov–Krasovskii functional that fully uses the information of the delay bounds has been constructed, and sufficient conditions for the exponential stability of the error system in the mean square sense have been derived in terms of LMIs. These LMIs have been solved with the MATLAB LMI toolbox to obtain the controller gain matrix, and simulation results for two numerical examples have been presented.

REFERENCES

[1] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing. Chichester, U.K.: Wiley, 1993.
[2] Z. Zeng and J. Wang, "Design and analysis of high-capacity associative memories based on a class of discrete-time recurrent neural networks," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 6, pp. 1525–1536, Dec. 2008.
[3] S. S. Young, P. D. Scott, and N. M. Nasrabadi, "Object recognition using multilayer Hopfield neural network," IEEE Trans. Image Process., vol. 6, no. 3, pp. 357–372, Mar. 1997.
[4] M. Atencia, G. Joya, and F. Sandoval, "Dynamical analysis of continuous higher-order Hopfield networks for combinatorial optimization," Neural Comput., vol. 17, no. 8, pp. 1802–1819, 2005.
[5] R. Yang, H. Gao, and P. Shi, "Novel robust stability criteria for stochastic Hopfield neural networks with time delays," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 2, pp. 467–474, Mar. 2009.
[6] Z. Wu, P. Shi, H. Su, and J. Chu, "Delay-dependent stability analysis for switched neural networks with time-varying delay," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 6, pp. 1522–1530, Nov. 2011.
[7] X. Le and J. Wang, "Robust pole assignment for synthesizing feedback control systems using recurrent neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 2, pp. 383–393, Jan. 2014.
[8] Y. Liu, Z. Wang, J. Liang, and X. Liu, "Synchronization of coupled neutral-type neural networks with jumping-mode-dependent discrete and unbounded distributed delays," IEEE Trans. Cybern., vol. 43, no. 1, pp. 102–114, Feb. 2013.
[9] P.-L. Liu, "Delay-dependent robust stability analysis for recurrent neural networks with time-varying delay," Int. J. Innov. Comput., Inf. Control, vol. 9, no. 8, pp. 3341–3356, 2013.
[10] Z. Guo, J. Wang, and Z. Yan, "Attractivity analysis of memristor-based cellular neural networks with time-varying delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 4, pp. 704–717, Apr. 2014.
[11] H. Zhang, Z. Liu, G.-B. Huang, and Z. Wang, "Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay," IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 91–106, Jan. 2010.
[12] H. Huang and G. Feng, "State estimation of recurrent neural networks with time-varying delay: A novel delay partition approach," Neurocomputing, vol. 74, no. 5, pp. 792–796, 2011.
[13] Z. Wang, H. Zhang, and B. Jiang, "LMI-based approach for global asymptotic stability analysis of recurrent neural networks with various delays and structures," IEEE Trans. Neural Netw., vol. 22, no. 7, pp. 1032–1045, Jul. 2011.
[14] S. Lakshmanan, J. H. Park, H. Y. Jung, O. M. Kwon, and R. Rakkiyappan, "A delay partitioning approach to delay-dependent stability analysis for neutral type neural networks with discrete and distributed delays," Neurocomputing, vol. 111, pp. 81–89, Jul. 2013.
[15] H. Lu, "Chaotic attractors in delayed neural networks," Phys. Lett. A, vol. 298, nos. 2–3, pp. 109–116, 2002.
[16] C.-J. Cheng, T.-L. Liao, and C.-H. Hwang, "Exponential synchronization of a class of chaotic neural networks," Chaos, Solitons Fractals, vol. 24, no. 1, pp. 197–206, 2005.
[17] J.-J. Yan, J.-S. Lin, M.-L. Hung, and T.-L. Liao, "On the synchronization of neural networks containing time-varying delays and sector nonlinearity," Phys. Lett. A, vol. 361, nos. 1–2, pp. 70–77, 2007.
[18] M. Liu, "Optimal exponential synchronization of general chaotic delayed neural networks: An LMI approach," Neural Netw., vol. 22, no. 7, pp. 949–957, 2009.
[19] H. Zhang, Y. Xie, Z. Wang, and C. Zheng, "Adaptive synchronization between two different chaotic neural networks with time delay," IEEE Trans. Neural Netw., vol. 18, no. 6, pp. 1841–1845, Nov. 2007.
[20] S. C. Jeong, D. H. Ji, J. H. Park, and S. C. Won, "Adaptive synchronization for uncertain chaotic neural networks with mixed time delays using fuzzy disturbance observer," Appl. Math. Comput., vol. 219, no. 11, pp. 5984–5995, 2013.
[21] Z.-G. Wu, J. Lam, H. Su, and J. Chu, "Stability and dissipativity analysis of static neural networks with time delay," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 2, pp. 199–210, Feb. 2012.
[22] H. Zhang, T. Ma, G.-B. Huang, and Z. Wang, "Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 3, pp. 831–844, Jun. 2010.
[23] H. R. Karimi and H. Gao, "New delay-dependent exponential H∞ synchronization for uncertain neural networks with mixed time delays," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 1, pp. 173–185, Feb. 2010.
[24] G. Zhang and Y. Shen, "New algebraic criteria for synchronization stability of chaotic memristive neural networks with time-varying delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 10, pp. 1701–1707, Oct. 2013.
[25] X. Yang, J. Cao, and J. Lu, "Synchronization of Markovian coupled neural networks with nonidentical node-delays and random coupling strengths," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 1, pp. 60–71, Jan. 2012.
[26] W. He, F. Qian, Q.-L. Han, and J. Cao, "Synchronization error estimation and controller design for delayed Lur'e systems with parameter mismatches," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 10, pp. 1551–1563, Oct. 2012.
[27] P. Shi, "Filtering on sampled-data systems with parametric uncertainty," IEEE Trans. Autom. Control, vol. 43, no. 7, pp. 1022–1027, Jul. 1998.
[28] S. K. Nguang and P. Shi, "Fuzzy H∞ output feedback control of nonlinear systems under sampled measurements," Automatica, vol. 39, no. 12, pp. 2169–2174, 2003.
[29] H. Gao, T. Chen, and J. Lam, "A new delay system approach to network-based control," Automatica, vol. 44, no. 1, pp. 39–52, 2008.
[30] Z.-G. Wu, P. Shi, H. Su, and J. Chu, "Sampled-data fuzzy control of chaotic systems based on a T–S fuzzy model," IEEE Trans. Fuzzy Syst., vol. 22, no. 1, pp. 153–163, Feb. 2014.
[31] H. Zhang, G. Hui, and Y. Wang, "Stabilization of networked control systems with piecewise constant generalized sampled-data hold function," Int. J. Innov. Comput., Inf. Control, vol. 9, no. 3, pp. 1159–1170, 2013.
[32] Z.-G. Wu, P. Shi, H. Su, and J. Chu, "Sampled-data synchronization of chaotic Lur'e systems with time delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 3, pp. 410–421, Mar. 2013.
[33] K. Astrom and B. Wittenmark, Adaptive Control. Reading, MA, USA: Addison-Wesley, 1989.
[34] Y. V. Miheev, V. A. Sobolev, and E. M. Fridman, "Asymptotic analysis of digital control systems," Autom. Remote Control, vol. 49, no. 9, pp. 1175–1180, 1988.
[35] E. Fridman, A. Seuret, and J.-P. Richard, "Robust sampled-data stabilization of linear systems: An input delay approach," Automatica, vol. 40, no. 8, pp. 1441–1446, 2004.
[36] E. Fridman, U. Shaked, and V. Suplin, "Input/output delay approach to robust sampled-data H∞ control," Syst. Control Lett., vol. 54, no. 3, pp. 271–282, 2005.
[37] B. Hu and A. N. Michel, "Stability analysis of digital feedback control systems with time-varying sampling periods," Automatica, vol. 36, no. 6, pp. 897–905, 2000.
[38] S. Tahara, T. Fujii, and T. Yokoyama, "Variable sampling quasi multirate deadbeat control method for single phase PWM inverter in low carrier frequency," in Proc. Power Convers. Conf., Nagoya, Japan, Apr. 2007, pp. 804–809.
[39] Y. Li, Q. Zhang, and C. Jing, "Stochastic stability of networked control systems with time-varying sampling periods," Int. J. Inf. Syst. Sci., vol. 5, nos. 3–4, pp. 494–502, 2009.
[40] N. Özdemir and T. Townley, "Integral control by variable sampling based on steady-state data," Automatica, vol. 39, no. 1, pp. 135–140, 2003.
[41] Z.-G. Wu, P. Shi, H. Su, and J. Chu, "Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 9, pp. 1368–1376, Sep. 2012.
[42] B. Shen, Z. Wang, and X. Liu, "Sampled-data synchronization control of dynamical networks with stochastic sampling," IEEE Trans. Autom. Control, vol. 57, no. 10, pp. 2644–2650, Oct. 2012.
[43] H. Gao, J. Wu, and P. Shi, "Robust sampled-data H∞ control with stochastic sampling," Automatica, vol. 45, no. 7, pp. 1729–1736, 2009.
[44] T. H. Lee, J. H. Park, S. M. Lee, and O. M. Kwon, "Robust synchronisation of chaotic systems with randomly occurring uncertainties via stochastic sampled-data control," Int. J. Control, vol. 86, no. 1, pp. 107–119, 2013.
[45] T. H. Lee, J. H. Park, O. M. Kwon, and S. M. Lee, "Stochastic sampled-data control for state estimation of time-varying delayed neural networks," Neural Netw., vol. 46, pp. 99–108, Oct. 2013.
[46] A. S. Matveev and A. V. Savkin, "Multirate stabilization of linear multiple sensor systems via limited capacity communication channels," SIAM J. Control Optim., vol. 44, no. 2, pp. 584–617, 2005.
[47] A. S. Matveev and A. V. Savkin, "An analogue of Shannon information theory for detection and stabilization via noisy discrete communication channels," SIAM J. Control Optim., vol. 46, no. 4, pp. 1323–1361, 2007.
[48] W.-A. Zhang and L. Yu, "Output feedback stabilization of networked control systems with packet dropouts," IEEE Trans. Autom. Control, vol. 52, no. 9, pp. 1705–1710, Sep. 2007.
[49] W.-A. Zhang and L. Yu, "Modelling and control of networked control systems with both network-induced delay and packet-dropout," Automatica, vol. 44, no. 12, pp. 3206–3210, 2008.
[50] L.-S. Hu, T. Bai, P. Shi, and Z. Wu, "Sampled-data control of networked linear control systems," Automatica, vol. 43, no. 5, pp. 903–911, 2007.
[51] D. Yue, Q.-L. Han, and P. Chen, "State feedback controller design of networked control systems," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 51, no. 11, pp. 640–644, Nov. 2004.
[52] W.-A. Zhang and L. Yu, "Stabilization of sampled-data control systems with control inputs missing," IEEE Trans. Autom. Control, vol. 55, no. 2, pp. 447–452, Feb. 2010.
[53] W.-H. Chen and W. X. Zheng, "An improved stabilization method for sampled-data control systems with control packet loss," IEEE Trans. Autom. Control, vol. 57, no. 9, pp. 2378–2384, Sep. 2012.
[54] A. Seuret and F. Gouaisbaut, "Jensen's and Wirtinger's inequalities for time-delay systems," in Proc. 11th IFAC Workshop Time-Delay Syst., Grenoble, France, Feb. 2013, pp. 343–348.
[55] D. Liberzon, Switching in Systems and Control. Boston, MA, USA: Birkhäuser, 2003.
[56] Y. Tang, J.-A. Fang, and Q. Miao, "On the exponential synchronization of stochastic jumping chaotic neural networks with mixed delays and sector-bounded non-linearities," Neurocomputing, vol. 72, nos. 7–9, pp. 1694–1701, 2009.
[57] P. G. Park, J. W. Ko, and C. Jeong, "Reciprocally convex approach to stability of systems with time-varying delays," Automatica, vol. 47, no. 1, pp. 235–238, 2011.

Rajan Rakkiyappan received the bachelor’s degree in mathematics from the Sri Ramakrishna Mission Vidyalaya College of Arts and Science, Coimbatore, India, in 2002, the master’s degree in mathematics from the PSG College of Arts and Science, Bharathiar University, Coimbatore, in 2004, and the D.Phil. degree from the Department of Mathematics, Gandhigram Rural University, Gandhigram, India, in 2011. He is currently an Assistant Professor with the Department of Mathematics, Bharathiar University. He has authored over 90 papers in international journals. His current research interests include qualitative theory of stochastic and impulsive systems, neural networks, and delay differential systems.


Shanmugavel Dharani was born in 1990. She received the B.Sc. degree in mathematics from Nallamuthu Gounder Mahalingam College, Bharathiar University, Coimbatore, India, in 2010, the M.Sc. degree in mathematics from Bharathiar University, in 2012, and the M.Phil. degree from the Department of Mathematics, Bharathiar University, in 2013, where she is currently pursuing the Ph.D. degree in mathematics. Her current research interests include stability and synchronization of neural networks.

Jinde Cao (M'07–SM'07) received the B.S. degree from Anhui Normal University, Wuhu, China, in 1986, the M.S. degree from Yunnan University, Kunming, China, in 1989, and the Ph.D. degree from Sichuan University, Chengdu, China, in 1998, all in applied mathematics. He was with Yunnan University from 1989 to 2000, where he was a Professor from 1996 to 2000. In 2000, he joined the Department of Mathematics, Southeast University, Nanjing, China. From 2001 to 2002, he was a Post-Doctoral Research Fellow with the Department of Automation and Computer-Aided Engineering, Chinese University of Hong Kong, Hong Kong. From 2006 to 2008, he was a Visiting Research Fellow and a Visiting Professor with the School of Information Systems, Computing and Mathematics, Brunel University, Middlesex, U.K. He is currently a Distinguished Professor and Ph.D. Advisor with Southeast University, and also a Distinguished Adjunct Professor with King Abdulaziz University, Jeddah, Saudi Arabia. He has authored or co-authored over 400 journal papers and five edited books. His current research interests include nonlinear systems, neural networks, complex systems, complex networks, stability theory, and applied mathematics. Dr. Cao was an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS, the Journal of the Franklin Institute, and Neurocomputing. He is an Associate Editor of the IEEE TRANSACTIONS ON CYBERNETICS, Neural Networks, Differential Equations and Dynamical Systems, and Mathematics and Computers in Simulation.
