New Algebraic Criteria for Synchronization Stability of Chaotic Memristive Neural Networks With Time-Varying Delays

Guodong Zhang and Yi Shen

Abstract— In this brief, we consider the exponential synchronization of chaotic memristive neural networks with time-varying delays using the Lyapunov functional method and the inequality technique. The dynamic analysis here employs the theory of differential equations with a discontinuous right-hand side as introduced by Filippov. Control laws for the synchronization of the neural networks are designed via state or output coupling. In addition, the new algebraic criteria proposed here are very easy to verify, and they also enrich and improve the earlier publications. Finally, an example is given to show the effectiveness of the obtained results.

Index Terms— Exponential synchronization, memristor, neural networks, time delay.

Manuscript received September 12, 2012; accepted May 11, 2013. Date of publication June 13, 2013; date of current version September 27, 2013. The work of Y. Shen was supported in part by the National Science Foundation of China under Grant 11271146, the Science and Technology Program of Wuhan under Grant 2013010501010117, and the Key Program of the National Natural Science Foundation of China under Grant 61134012. The authors are with the School of Automation and the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Huazhong University of Science and Technology, Wuhan 430074, China (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/TNNLS.2013.2264106

I. INTRODUCTION

After Prof. L. O. Chua [1] postulated the existence of a new two-terminal circuit element called the memristor (a contraction of memory and resistor) in 1971, it took scientists almost 40 years to build a practical memristor device, which was announced by scientists at Hewlett-Packard Laboratories in the May 1, 2008 issue of Nature [2]. The memristor exhibits a feature much like the neurons in the human brain; because of this feature, broad potential applications of the memristor were identified [2]–[5], one of which is to use this device to build a new model of neural networks that emulates the human brain [3], [5], with potential applications in next-generation computers and powerful brain-like neural computers [2]–[4], [9]. More recently, Wu and Zeng [6], [7] studied a new model (the memristor-based neural network) whose parameters change according to its state, that is, a state-dependent switching neural network of the form

    dx_i(t)/dt = −x_i(t) + Σ_{j=1}^n a_ij(x_i(t)) f_j(x_j(t)) + Σ_{j=1}^n b_ij(x_i(t)) g_j(x_j(t − τ_ij(t))) + I_i,   t ≥ 0, i = 1, 2, ..., n    (1)

where

    a_ij(x_i(t)) = { a*_ij,  |x_i(t)| < T_i
                   { a**_ij, |x_i(t)| > T_i

    b_ij(x_i(t)) = { b*_ij,  |x_i(t)| < T_i
                   { b**_ij, |x_i(t)| > T_i

in which the switching jumps T_i > 0 and a*_ij, a**_ij, b*_ij, b**_ij, i, j = 1, 2, ..., n, are all constants, f_j, g_j : R → R are bounded continuous functions, I_i is a constant external input, and τ_ij(t) denotes the transmission delay and satisfies 0 ≤ τ_ij(t) ≤ τ (τ is a positive constant, i, j = 1, 2, ..., n).

As is well known, stability and synchronization of chaotic systems are very important because of their potential applications in many different areas, including secure communication, information science, biological systems, and optics, and recent years have seen many good results [10]–[26] on the synchronization of chaotic systems; however, few results on the synchronization of system (1) are found in the literature. In this brief, different from the previous works [5]–[9], we provide some new algebraic criteria for the exponential synchronization of the chaotic memristor-based neural networks with time-varying delays (1) via state or output coupling. The new algebraic criteria proposed here are very easy to verify, and they also enrich and improve the earlier publications.

The organization of this brief is as follows. Some preliminaries and the model description are introduced in Section II. In Section III, sufficient conditions for exponential synchronization are derived by constructing a suitable Lyapunov-like functional. Numerical simulations are given to demonstrate the effectiveness of the proposed approach in Section IV. Finally, the conclusion is given in Section V.
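To make the state-dependent switching in (1) concrete, the following Python sketch (ours, for illustration only; the function names are hypothetical, and a single common delay stands in for the per-pair delays τ_ij(t)) evaluates the right-hand side of (1):

    import numpy as np

    def switching_matrix(x, W1, W2, T):
        """Elementwise w_ij(x_i): row i takes the W1 branch when |x_i| < T_i
        and the W2 branch when |x_i| > T_i; on the surface |x_i| = T_i the
        weight is set-valued in Filippov's sense (here the W2 branch is kept)."""
        return np.where((np.abs(x) < T)[:, None], W1, W2)

    def model_rhs(x, x_del, A1, A2, B1, B2, T, I, f, g):
        """Right-hand side of (1): -x + A(x) f(x) + B(x) g(x_del) + I, where
        x_del holds the delayed states under one common delay (our
        simplification; the brief allows a different tau_ij per pair)."""
        A = switching_matrix(x, A1, A2, T)
        B = switching_matrix(x, B1, B2, T)
        return -x + A @ f(x) + B @ g(x_del) + I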

II. PRELIMINARIES AND MODEL DESCRIPTION

In this brief, solutions of all systems considered in the following are intended in Filippov's sense [27]. We define ‖φ‖ = sup_{−τ≤t≤0} [Σ_{i=1}^n |φ_i(t)|^r]^{1/r}, where r > 1 is a constant, for all φ = (φ_1(t), φ_2(t), ..., φ_n(t)) ∈ C([−τ, 0], R^n). co[ξ̲_i, ξ̄_i] denotes the convex hull of [ξ̲_i, ξ̄_i]. Let a̲_ij = min{a*_ij, a**_ij}, ā_ij = max{a*_ij, a**_ij}, b̲_ij = min{b*_ij, b**_ij}, and b̄_ij = max{b*_ij, b**_ij}. For a continuous function k(t) : R → R, D⁺k(t) is called the upper right Dini derivative and is defined as D⁺k(t) = lim sup_{h→0⁺} (1/h)(k(t + h) − k(t)). System (1) has an initial condition of the form x(s) = ψ(s) = (ψ_1(s), ψ_2(s), ..., ψ_n(s))^T ∈ C([−τ, 0], R^n).
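As a quick illustration of the norm just defined (our sketch, not part of the original brief), ‖φ‖ can be approximated by taking the supremum over a sampled history segment:

    import numpy as np

    def history_norm(phi_samples, r=2.0):
        """Approximate ||phi|| = sup_{-tau<=t<=0} (sum_i |phi_i(t)|^r)^(1/r).

        phi_samples: array of shape (m, n) holding phi(t_k) on a grid of
        m time points in [-tau, 0]; the sup becomes a max over the grid.
        """
        pointwise = np.sum(np.abs(phi_samples) ** r, axis=1) ** (1.0 / r)
        return pointwise.max()

    # Example: constant history phi(t) = (0.45, 0.65) on [-1, 0]
    samples = np.tile([0.45, 0.65], (50, 1))
    print(history_norm(samples, r=2.0))  # ~ 0.79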


Now, we introduce the following definitions about set-valued maps and differential inclusions [27], [28].

Definition 1: Let E ⊂ R^n. The map x ↦ F(x) is called a set-valued map from E to R^n if to each point x of the set E there corresponds a nonempty set F(x) ⊂ R^n. A set-valued map F with nonempty values is said to be upper semicontinuous at x_0 ∈ E if, for any open set N containing F(x_0), there exists a neighborhood M of x_0 such that F(M) ⊂ N. F(x) is said to have a closed (convex, compact) image if, for each x ∈ E, F(x) is closed (convex, compact).

Definition 2: For the system dx/dt = F(x), x ∈ R^n, with a discontinuous right-hand side, a set-valued map is defined as

    Φ(x) = ∩_{δ>0} ∩_{μ(N)=0} co[F(B(x, δ) \ N)]

where co[E] is the closure of the convex hull of the set E, B(x, δ) = {y : ‖y − x‖ ≤ δ}, and μ(N) is the Lebesgue measure of the set N.

By applying the theories of set-valued maps and differential inclusions [27], [28], the memristor-based neural network (1) can be written as the following differential inclusion:

    dx_i(t)/dt ∈ −x_i(t) + Σ_{j=1}^n co[a̲_ij, ā_ij] f_j(x_j(t)) + Σ_{j=1}^n co[b̲_ij, b̄_ij] g_j(x_j(t − τ_ij(t))) + I_i,
                 for a.e. t ≥ 0, i = 1, 2, ..., n.    (2a)

From [27] and [28], we know that the differential inclusion (2a) means that there exist ã_ij ∈ co[a̲_ij, ā_ij] and b̃_ij ∈ co[b̲_ij, b̄_ij] such that

    dx_i(t)/dt = −x_i(t) + Σ_{j=1}^n ã_ij f_j(x_j(t)) + Σ_{j=1}^n b̃_ij g_j(x_j(t − τ_ij(t))) + I_i,   t ≥ 0, i = 1, ..., n.    (2b)

Throughout this brief, we consider (2a) or (2b) as the drive system, with the corresponding response system given by

    dy_i(t)/dt ∈ −y_i(t) + Σ_{j=1}^n co[a̲_ij, ā_ij] f_j(y_j(t)) + Σ_{j=1}^n co[b̲_ij, b̄_ij] g_j(y_j(t − τ_ij(t))) + u_i(t) + I_i,
                 for a.e. t ≥ 0, i = 1, 2, ..., n    (3a)

or equivalently, there exist ã_ij ∈ co[a̲_ij, ā_ij] and b̃_ij ∈ co[b̲_ij, b̄_ij], i = 1, 2, ..., n, such that

    dy_i(t)/dt = −y_i(t) + Σ_{j=1}^n ã_ij f_j(y_j(t)) + Σ_{j=1}^n b̃_ij g_j(y_j(t − τ_ij(t))) + u_i(t) + I_i,   t ≥ 0    (3b)

where u_i(t), i = 1, 2, ..., n, are the appropriate control inputs designed to achieve a certain control objective. The initial conditions of (3a) or (3b) are y_i(s) = ψ_i(s) ∈ C([−τ, 0], R), and x(t) = (x_1(t), x_2(t), ..., x_n(t))^T and y(t) = (y_1(t), y_2(t), ..., y_n(t))^T are two solutions of (1) with initial conditions φ(s) = (φ_1(s), ..., φ_n(s))^T and ψ(s) = (ψ_1(s), ..., ψ_n(s))^T ∈ C([−τ, 0], R^n), respectively.

Definition 3: A function x*(t) = (x*_1(t), x*_2(t), ..., x*_n(t))^T is a solution of (1) (in Filippov's sense), with the initial condition φ(s) = (φ_1(s), φ_2(s), ..., φ_n(s))^T ∈ C([−τ, 0], R^n), if x*(t) is an absolutely continuous function and satisfies the differential inclusion

    dx*_i(t)/dt ∈ −x*_i(t) + Σ_{j=1}^n co[a̲_ij, ā_ij] f_j(x*_j(t)) + Σ_{j=1}^n co[b̲_ij, b̄_ij] g_j(x*_j(t − τ_ij(t))) + I_i,   t ≥ 0    (4)

or equivalently, there exist ã_ij ∈ co[a̲_ij, ā_ij] and b̃_ij ∈ co[b̲_ij, b̄_ij] such that

    dx*_i(t)/dt = −x*_i(t) + Σ_{j=1}^n ã_ij f_j(x*_j(t)) + Σ_{j=1}^n b̃_ij g_j(x*_j(t − τ_ij(t))) + I_i,   for a.e. t ≥ 0, i = 1, 2, ..., n.    (5)

Definition 4: Systems (2a) and (3a), or (2b) and (3b), are said to be exponentially synchronized if there exist constants β ≥ 1 and ε > 0 such that

    [Σ_{i=1}^n |y_i(t) − x_i(t)|^r]^{1/r} ≤ β e^{−εt} ‖ψ − φ‖,   ∀ t ≥ 0

where the constant ε is said to be the degree of exponential synchronization.

Now, we make the following assumptions for (1).

H1: For i = 1, 2, ..., n and ∀ s_1, s_2 ∈ R, s_1 ≠ s_2, the neuron activation functions f_i, g_i are bounded and satisfy the Lipschitz conditions

    |f_i(s_1) − f_i(s_2)| ≤ ρ_i |s_1 − s_2|,   |g_i(s_1) − g_i(s_2)| ≤ σ_i |s_1 − s_2|

where ρ_i > 0 and σ_i > 0.

H1′: For i = 1, 2, ..., n and ∀ s_1, s_2 ∈ R, s_1 ≠ s_2, the neuron activation functions f_i, g_i are bounded and satisfy

    0 < ρ*_i ≤ (f_i(s_1) − f_i(s_2))/(s_1 − s_2) ≤ ρ_i,   |g_i(s_1) − g_i(s_2)| ≤ σ_i |s_1 − s_2|

where ρ_i > 0, ρ*_i > 0, and σ_i > 0.

H2: For the differential inclusions (2a) and (3a), the following conditions hold:

    co[a̲_ij, ā_ij] f_j(y_j(t)) − co[a̲_ij, ā_ij] f_j(x_j(t)) ⊆ co[a̲_ij, ā_ij] (f_j(y_j(t)) − f_j(x_j(t)))
    co[b̲_ij, b̄_ij] g_j(y_j(t)) − co[b̲_ij, b̄_ij] g_j(x_j(t)) ⊆ co[b̲_ij, b̄_ij] (g_j(y_j(t)) − g_j(x_j(t))).
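For intuition about the interval notation (an illustrative sketch of ours, not part of the brief), the bounds a̲_ij, ā_ij and one admissible selection ã_ij ∈ co[a̲_ij, ā_ij] appearing in (2b) can be coded directly; the numerical values echo the entry a_11 of Example 1 in Section IV:

    # Bounds from Section II: a_lo = min(a*, a**), a_hi = max(a*, a**)
    a_star, a_star2, T = 2.0, 1.95, 1.0        # values echo a_11 in Example 1
    a_lo, a_hi = min(a_star, a_star2), max(a_star, a_star2)

    def a_selection(x_i, theta=0.5):
        """One admissible value of a_11(x_i) under the inclusion (2a):
        off the surface |x_i| = T the set co[a_lo, a_hi] collapses to a
        single branch; on the surface any convex combination qualifies."""
        if abs(x_i) < T:
            return a_star
        if abs(x_i) > T:
            return a_star2
        return theta * a_lo + (1.0 - theta) * a_hi   # a point of co[a_lo, a_hi]

    assert a_lo <= a_selection(1.0) <= a_hi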


Lemma 1: Under assumption H1 or H1′, the local existence of a solution x(t) of (1) with initial condition φ(s) = (φ_1(s), φ_2(s), ..., φ_n(s))^T ∈ C([−τ, 0], R^n) can be obtained from [27]. Because the activation functions f_i, g_i, i = 1, 2, ..., n, in (1) are bounded and satisfy Lipschitz conditions, the local solution x(t) can be extended to the interval [0, +∞) in the sense of Filippov [27].

In this brief, the control inputs in the response system (3a) are considered as follows:

    u_i(t) = ω_i (y_i(t) − x_i(t)),   i = 1, 2, ..., n    (6)

or

    u_i(t) = Σ_{j=1}^n ℓ_ij (f_j(y_j(t)) − f_j(x_j(t))),   i = 1, 2, ..., n.    (7)

Let Ω = diag(ω_1, ω_2, ..., ω_n) and L = (ℓ_ij)_{n×n}; Ω or L is a constant gain matrix to be determined for synchronizing both the drive system and the response system.

Now, define the synchronization error e(t) = (e_1(t), e_2(t), ..., e_n(t))^T, where e_i(t) = y_i(t) − x_i(t). By the theories of set-valued maps and differential inclusions [27], [28] and assumption H2, combining the systems (2a) and (3a), or (2b) and (3b), we obtain the following synchronization error system:

    de_i(t)/dt ∈ −e_i(t) + Σ_{j=1}^n co[a̲_ij, ā_ij] F_j(e_j(t)) + Σ_{j=1}^n co[b̲_ij, b̄_ij] G_j(e_j(t − τ_ij(t))) + u_i(t),
                 for a.e. t ≥ 0, i = 1, 2, ..., n    (8a)

or equivalently, there exist Ã_ij ∈ co[a̲_ij, ā_ij] and B̃_ij ∈ co[b̲_ij, b̄_ij] such that

    de_i(t)/dt = −e_i(t) + Σ_{j=1}^n Ã_ij F_j(e_j(t)) + Σ_{j=1}^n B̃_ij G_j(e_j(t − τ_ij(t))) + u_i(t),   t ≥ 0    (8b)

where F_j(e_j(t)) = f_j(y_j(t)) − f_j(x_j(t)) and G_j(e_j(t − τ_ij(t))) = g_j(y_j(t − τ_ij(t))) − g_j(x_j(t − τ_ij(t))).

In the following section, this brief aims to find sufficient synchronization criteria for exponentially synchronizing the unidirectionally coupled identical systems (2a) and (3a), or (2b) and (3b).

III. MAIN RESULTS

Theorem 1: Under assumptions H1 and H2, and with τ_ij(t) satisfying τ̇_ij(t) ≤ δ < 1, systems (2a) and (3a), or (2b) and (3b), are exponentially synchronized under the control inputs (6) if there exist constants λ_i > 0, ξ_ij, ξ̄_ij, ζ_ij, ζ̄_ij ∈ R, and r > 1 such that

    −r(1 − ω_i) + Σ_{j=1}^n [ (r − 1)(A_ij^{r−ξ_ij} ρ_j^{r−ζ_ij})^{1/(r−1)} + (λ_j/λ_i) A_ji^{ξ_ji} ρ_i^{ζ_ji}
        + (r − 1)(B_ij^{r−ξ̄_ij} σ_j^{r−ζ̄_ij})^{1/(r−1)} + (λ_j/(λ_i(1 − δ))) B_ji^{ξ̄_ji} σ_i^{ζ̄_ji} ] < 0    (9)

where A_ij = max{|a̲_ij|, |ā_ij|} and B_ij = max{|b̲_ij|, |b̄_ij|}, i, j = 1, 2, ..., n.
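Before turning to the proof, note that condition (9) is a finite set of scalar inequalities in the network parameters, so it can be checked mechanically. A minimal Python sketch (ours), specialized to the simplest choice r = 2, λ_i = 1, ξ_ij = ξ̄_ij = ζ_ij = ζ̄_ij = 1, under which (9) reduces to −2(1 − ω_i) + Σ_j [A_ij ρ_j + A_ji ρ_i + B_ij σ_j + B_ji σ_i/(1 − δ)] < 0:

    import numpy as np

    def theorem1_holds(A, B, rho, sigma, omega, delta):
        """Check condition (9) for every i with r = 2, lambda_i = 1 and all
        exponents xi, zeta set to 1. A[i, j] = max(|a_lo_ij|, |a_hi_ij|)
        and B likewise (numpy arrays); omega holds the gains of (6)."""
        n = A.shape[0]
        for i in range(n):
            lhs = -2.0 * (1.0 - omega[i]) + sum(
                A[i, j] * rho[j] + A[j, i] * rho[i]
                + B[i, j] * sigma[j] + B[j, i] * sigma[i] / (1.0 - delta)
                for j in range(n))
            if lhs >= 0.0:
                return False
        return True

    # Hypothetical one-neuron check: A = [[2]], B = [[1.5]], omega_1 = -8
    print(theorem1_holds(np.array([[2.0]]), np.array([[1.5]]),
                         np.ones(1), np.ones(1), np.array([-8.0]), 0.25))  # True

With the data of Example 1 in Section IV, this check reproduces the values reported there.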

Proof: It follows from (8a) or (8b) and (6) that

    ė_i(t) ∈ −e_i(t) + Σ_{j=1}^n co[a̲_ij, ā_ij] F_j(e_j(t)) + Σ_{j=1}^n co[b̲_ij, b̄_ij] G_j(e_j(t − τ_ij(t))) + ω_i e_i(t),
             for a.e. t ≥ 0, i, j = 1, 2, ..., n.    (10)

For i, j = 1, 2, ..., n, by (9) we can choose a small ε > 0 such that

    r(ε − 1 + ω_i) + Σ_{j=1}^n [ (r − 1)(A_ij^{r−ξ_ij} ρ_j^{r−ζ_ij})^{1/(r−1)} + (λ_j/λ_i) A_ji^{ξ_ji} ρ_i^{ζ_ji}
        + (r − 1)(B_ij^{r−ξ̄_ij} σ_j^{r−ζ̄_ij})^{1/(r−1)} + (λ_j e^{rετ}/(λ_i(1 − δ))) B_ji^{ξ̄_ji} σ_i^{ζ̄_ji} ] < 0.    (11)

Now, we consider a Lyapunov functional of the form

    V(t) = Σ_{i=1}^n λ_i [ |e_i(t)|^r e^{rεt} + (1/(1 − δ)) Σ_{j=1}^n B_ij^{ξ̄_ij} σ_j^{ζ̄_ij} ∫_{t−τ_ij(t)}^{t} |e_j(s)|^r e^{rε(s+τ_ij(s))} ds ].    (12)
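To make the functional (12) concrete, it can be evaluated numerically along a sampled error trajectory; the following sketch (ours) uses trapezoidal quadrature and a constant delay in place of τ_ij(t), both simplifications of our own:

    import numpy as np

    def lyapunov_V(k, e, h, lam, Bbar, eps, r, delta, d):
        """Numerically evaluate (12) at step k (requires k >= d).

        e: sampled errors, shape (m, n); h: step size; lam: weights
        lambda_i; Bbar[i, j]: precomputed B_ij^{xibar_ij} * sigma_j^{zetabar_ij};
        d: delay in steps (a constant delay stands in for tau_ij(t)).
        """
        t = k * h
        V = np.exp(r * eps * t) * np.sum(lam * np.abs(e[k]) ** r)
        s = np.arange(k - d, k + 1) * h                  # grid on [t - d*h, t]
        for i in range(e.shape[1]):
            for j in range(e.shape[1]):
                integrand = np.abs(e[k - d:k + 1, j]) ** r \
                            * np.exp(r * eps * (s + d * h))
                V += lam[i] * Bbar[i, j] / (1.0 - delta) * np.trapz(integrand, dx=h)
        return V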

Under assumptions H1 and H2, calculating the upper right Dini derivative D⁺V(t) of V(t) along the solutions of (8a) or (8b) yields the following:


    D⁺V(t)|_(8a) or (8b)
      = e^{rεt} Σ_{i=1}^n λ_i [ rε|e_i(t)|^r + r|e_i(t)|^{r−1} sgn(e_i(t)) D⁺e_i(t) ]
        + e^{rεt} Σ_{i=1}^n Σ_{j=1}^n (λ_i/(1 − δ)) B_ij^{ξ̄_ij} σ_j^{ζ̄_ij} [ e^{rετ_ij(t)} |e_j(t)|^r − (1 − τ̇_ij(t)) |e_j(t − τ_ij(t))|^r ]
      = e^{rεt} Σ_{i=1}^n λ_i { rε|e_i(t)|^r + r|e_i(t)|^{r−1} sgn(e_i(t)) [ −(1 − ω_i) e_i(t) + Σ_{j=1}^n Ã_ij F_j(e_j(t)) + Σ_{j=1}^n B̃_ij G_j(e_j(t − τ_ij(t))) ] }
        + e^{rεt} Σ_{i=1}^n Σ_{j=1}^n (λ_i/(1 − δ)) B_ij^{ξ̄_ij} σ_j^{ζ̄_ij} [ e^{rετ_ij(t)} |e_j(t)|^r − (1 − τ̇_ij(t)) |e_j(t − τ_ij(t))|^r ]
      ≤ e^{rεt} Σ_{i=1}^n r λ_i [ (ε − 1 + ω_i)|e_i(t)|^r + Σ_{j=1}^n A_ij |F_j(e_j(t))| |e_i(t)|^{r−1} + Σ_{j=1}^n B_ij |G_j(e_j(t − τ_ij(t)))| |e_i(t)|^{r−1} ]
        + e^{rεt} Σ_{i=1}^n Σ_{j=1}^n λ_i B_ij^{ξ̄_ij} σ_j^{ζ̄_ij} [ (e^{rετ}/(1 − δ)) |e_j(t)|^r − |e_j(t − τ_ij(t))|^r ].    (13)

Now, using the Young inequality u^γ v^{1−γ} ≤ γu + (1 − γ)v, where u > 0, v > 0, 0 < γ < 1, together with H1, each cross term in (13) is split as

    r λ_i A_ij ρ_j |e_i(t)|^{r−1} |e_j(t)| ≤ λ_i (r − 1)(A_ij^{r−ξ_ij} ρ_j^{r−ζ_ij})^{1/(r−1)} |e_i(t)|^r + λ_i A_ij^{ξ_ij} ρ_j^{ζ_ij} |e_j(t)|^r

and similarly for the delayed terms, whose |e_j(t − τ_ij(t))|^r parts cancel against the last sum in (13). After exchanging the summation indices i and j, we obtain the following estimate for the right-hand side of (13):

    D⁺V(t)|_(8a) or (8b) ≤ e^{rεt} Σ_{i=1}^n λ_i { r(ε − 1 + ω_i) + Σ_{j=1}^n [ (r − 1)(A_ij^{r−ξ_ij} ρ_j^{r−ζ_ij})^{1/(r−1)} + (λ_j/λ_i) A_ji^{ξ_ji} ρ_i^{ζ_ji}
        + (r − 1)(B_ij^{r−ξ̄_ij} σ_j^{r−ζ̄_ij})^{1/(r−1)} + (λ_j e^{rετ}/(λ_i(1 − δ))) B_ji^{ξ̄_ji} σ_i^{ζ̄_ji} ] } |e_i(t)|^r ≤ 0    (14)

where the last inequality follows from (11).
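The splitting step above rests on nothing more than the stated Young inequality; a quick numerical sanity check (ours):

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(0.1, 10.0, 1000)
    v = rng.uniform(0.1, 10.0, 1000)
    g = rng.uniform(0.01, 0.99, 1000)
    # u^g * v^(1-g) <= g*u + (1-g)*v for u, v > 0 and 0 < g < 1
    assert np.all(u ** g * v ** (1 - g) <= g * u + (1 - g) * v + 1e-12)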

Since D⁺V(t) ≤ 0, we thus have

    V(t) ≤ V(0),   for t ≥ 0.    (15)

From (12), we have

    V(0) = Σ_{i=1}^n λ_i [ |ψ_i(0) − φ_i(0)|^r + (1/(1 − δ)) Σ_{j=1}^n B_ij^{ξ̄_ij} σ_j^{ζ̄_ij} ∫_{−τ_ij(0)}^{0} |e_j(s)|^r e^{rε(s+τ_ij(s))} ds ]
         ≤ [ max_{1≤i≤n} λ_i + (τ e^{rετ}/(1 − δ)) Σ_{j=1}^n λ_j max_{1≤i≤n} (B_ji^{ξ̄_ji} σ_i^{ζ̄_ji}) ] sup_{−τ≤t≤0} Σ_{i=1}^n |ψ_i(t) − φ_i(t)|^r.    (16)

From (12), for t ≥ 0, we also have

    V(t) ≥ Σ_{i=1}^n λ_i e^{rεt} |e_i(t)|^r ≥ (min_{1≤i≤n} λ_i) e^{rεt} Σ_{i=1}^n |e_i(t)|^r.    (17)

Then, from (15)–(17), we obtain

    [Σ_{i=1}^n |y_i(t, ψ) − x_i(t, φ)|^r]^{1/r} ≤ β e^{−εt} sup_{−τ≤t≤0} [Σ_{i=1}^n |ψ_i(t) − φ_i(t)|^r]^{1/r}

that is,

    ‖y(t, ψ) − x(t, φ)‖ ≤ β e^{−εt} ‖ψ − φ‖    (18)

where

    β = [ ( max_{1≤i≤n} λ_i + (τ e^{rετ}/(1 − δ)) Σ_{j=1}^n λ_j max_{1≤i≤n} (B_ji^{ξ̄_ji} σ_i^{ζ̄_ji}) ) / min_{1≤i≤n} λ_i ]^{1/r} ≥ 1.    (19)

The proof is completed.

Corollary 1: Under assumptions H1 and H2, when τ_ij(t) = τ_ij (constant delays), systems (2a) and (3a), or (2b) and (3b), are exponentially synchronized under the control inputs (6) if the following inequality holds:

    −2(1 − ω_i) + Σ_{j=1}^n [ A_ij ρ_j + A_ji ρ_i + B_ij σ_j + B_ji σ_i ] < 0    (20)

where A_ij = max{|a̲_ij|, |ā_ij|} and B_ij = max{|b̲_ij|, |b̄_ij|}, i, j = 1, 2, ..., n.

Proof: Corollary 1 can be obtained from Theorem 1 by taking r = 2, λ_i = 1, and ξ_ij = ξ̄_ij = ζ_ij = ζ̄_ij = 1, i, j = 1, 2, ..., n.


Remark 1: For memristor-based neural networks, sufficient conditions were previously obtained for global uniform asymptotic stability [5], exponential stability [6], [8], and exponential anti-synchronization [7]. Compared with those results, the main results of this brief concern exponential synchronization and are based on the r-norm (r > 1), which improves the main results obtained under the 2-norm in [6]–[9]. Hence, the results of this brief enrich and complement the earlier publications. In addition, compared with works that use the LMI technique to obtain conditions for exponential synchronization, such as [14] and [22], the conditions in this brief can be derived directly from the parameters of the neural networks and are thus very easy to verify.

Theorem 2: Under assumptions H1′ and H2, and with τ_ij(t) satisfying τ̇_ij(t) ≤ δ < 1, systems (2a) and (3a), or (2b) and (3b), are exponentially synchronized under the control inputs (7) if there exist constants λ_i > 0, ξ_ij, ξ̄_ij, ζ_ij, ζ̄_ij ∈ R, and r > 1 such that

    −r + r ρ*_i A_ii + Σ_{j=1, j≠i}^n [ (r − 1)(A_ij^{r−ξ_ij} ρ_j^{r−ζ_ij})^{1/(r−1)} + (λ_j/λ_i) A_ji^{ξ_ji} ρ_i^{ζ_ji} ]
        + Σ_{j=1}^n [ (r − 1)(B_ij^{r−ξ̄_ij} σ_j^{r−ζ̄_ij})^{1/(r−1)} + (λ_j/(λ_i(1 − δ))) B_ji^{ξ̄_ji} σ_i^{ζ̄_ji} ] < 0    (21)

where A_ii = max{a*_ii + ℓ_ii, a**_ii + ℓ_ii} < 0, A_ij = max{|a̲_ij + ℓ_ij|, |ā_ij + ℓ_ij|} (i ≠ j), and B_ij = max{|b̲_ij|, |b̄_ij|}, i, j = 1, 2, ..., n.

Proof: Theorem 2 can be proved similarly to Theorem 1; hence the process is omitted here.

Corollary 2: Under assumptions H1′ and H2, when τ_ij(t) = τ_ij, systems (2a) and (3a), or (2b) and (3b), are exponentially synchronized under the control inputs (7) if the following inequality holds:

    2ρ*_i A_ii + Σ_{j=1, j≠i}^n [ A_ij ρ_j + A_ji ρ_i ] + Σ_{j=1}^n [ B_ij σ_j + B_ji σ_i ] < 2    (22)

where A_ii = max{a*_ii + ℓ_ii, a**_ii + ℓ_ii} < 0, A_ij = max{|a̲_ij + ℓ_ij|, |ā_ij + ℓ_ij|} (i ≠ j), and B_ij = max{|b̲_ij|, |b̄_ij|}, i, j = 1, 2, ..., n.

Proof: Corollary 2 can be obtained from Theorem 2 by taking r = 2, λ_i = 1, and ξ_ij = ξ̄_ij = ζ_ij = ζ̄_ij = 1, i, j = 1, 2, ..., n.

Remark 2: Different from the previous research in [9], in this brief we refer to (7) as output coupling in Theorem 2 and Corollary 2. As we know, synchronization via output coupling is important because in many real systems only output signals can be measured.
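Like (20), condition (22) can be verified directly from the parameters. A small Python sketch (ours; the argument names are our own) for the output-coupling case:

    import numpy as np

    def corollary2_holds(A_lo, A_hi, B, rho, rho_star, sigma, L):
        """Check (22) for the output coupling u_i = sum_j l_ij (f_j(y_j) - f_j(x_j)).

        A_lo, A_hi: elementwise min/max of the two switching branches a*, a**;
        L: gain matrix (l_ij); B[i, j] = max(|b_lo_ij|, |b_hi_ij|)."""
        n = B.shape[0]
        AL_lo, AL_hi = A_lo + L, A_hi + L            # a_ij + l_ij on both branches
        A = np.maximum(np.abs(AL_lo), np.abs(AL_hi)) # A_ij for i != j
        a_diag = np.diag(AL_hi)                      # A_ii = max branch + l_ii
        if np.any(a_diag >= 0.0):
            return False                             # (22) requires A_ii < 0
        for i in range(n):
            lhs = 2.0 * rho_star[i] * a_diag[i]
            lhs += sum(A[i, j] * rho[j] + A[j, i] * rho[i]
                       for j in range(n) if j != i)
            lhs += sum(B[i, j] * sigma[j] + B[j, i] * sigma[i] for j in range(n))
            if lhs >= 2.0:
                return False
        return True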

IV. NUMERICAL EXAMPLES

Now, we perform some numerical simulations to illustrate our analysis.

Example 1: Consider the two-dimensional memristor-based neural network

    dx_1(t)/dt = −x_1(t) + a_11(x_1(t)) f_1(x_1(t)) + a_12(x_1(t)) f_2(x_2(t)) + b_11(x_1(t)) g_1(x_1(t − τ_11(t))) + b_12(x_1(t)) g_2(x_2(t − τ_12(t))) + I_1
    dx_2(t)/dt = −x_2(t) + a_21(x_2(t)) f_1(x_1(t)) + a_22(x_2(t)) f_2(x_2(t)) + b_21(x_2(t)) g_1(x_1(t − τ_21(t))) + b_22(x_2(t)) g_2(x_2(t − τ_22(t))) + I_2    (23)

where

    a_11(x_1(t)) = { 2,     |x_1(t)| < 1        a_12(x_1(t)) = { −0.1,  |x_1(t)| < 1
                   { 1.95,  |x_1(t)| > 1                       { −0.09, |x_1(t)| > 1

    a_21(x_2(t)) = { −4.9,  |x_2(t)| < 1        a_22(x_2(t)) = { 3,     |x_2(t)| < 1
                   { −5,    |x_2(t)| > 1                       { 2.5,   |x_2(t)| > 1

    b_11(x_1(t)) = { −1.48, |x_1(t)| < 1        b_12(x_1(t)) = { −0.1,  |x_1(t)| < 1
                   { −1.5,  |x_1(t)| > 1                       { −0.09, |x_1(t)| > 1

    b_21(x_2(t)) = { −0.2,  |x_2(t)| < 1        b_22(x_2(t)) = { −2.49, |x_2(t)| < 1
                   { −0.15, |x_2(t)| > 1                       { −2.5,  |x_2(t)| > 1

τ_ij(t) = e^t/(1 + e^t), I = (I_1, I_2)^T = (0, 0)^T, and we take the activation functions as f_i(x_i) = g_i(x_i) = tanh(x_i), i, j = 1, 2. Model (23) has chaotic attractors with the initial condition x_1(θ) = 0.45, x_2(θ) = 0.65, ∀θ ∈ [−1, 0), as shown in Fig. 1.

[Fig. 1. Phase plot of the drive system (23) in the (x_1(t), x_2(t)) plane. The initial condition is x_1(θ) = 0.45, x_2(θ) = 0.65, ∀θ ∈ [−1, 0).]
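A drive–response simulation of Example 1 can be sketched as follows (our illustrative code, not the authors' original scripts): a fixed-step Euler scheme with a history buffer handles the delay τ(t) = e^t/(1 + e^t), and the state-coupling gains ω_1 = ω_2 = −8 used in the verification below are applied to the response system. The brief gives y_1(θ) = −0.5 (see Fig. 3); the y_2 history and the step/horizon choices are our own.

    import numpy as np

    # Switching parameters of (23): first matrix for |x_i| < 1, second for |x_i| > 1
    A1 = np.array([[2.0, -0.1], [-4.9, 3.0]])
    A2 = np.array([[1.95, -0.09], [-5.0, 2.5]])
    B1 = np.array([[-1.48, -0.1], [-0.2, -2.49]])
    B2 = np.array([[-1.5, -0.09], [-0.15, -2.5]])
    T, omega = 1.0, -8.0                       # threshold T_i; gain of (6)
    h, steps, hist = 0.001, 15000, 1000        # Euler step, horizon, history (tau <= 1)

    x = np.full((steps + hist + 1, 2), [0.45, 0.65])   # drive history on [-1, 0)
    y = np.full((steps + hist + 1, 2), [-0.5, 0.60])   # response; y_2 is our choice

    def switching(z, W1, W2):
        # Row i takes the W1 branch iff |z_i| < T (one Filippov convention at |z_i| = T)
        return np.where((np.abs(z) < T)[:, None], W1, W2)

    f = np.tanh                                # f_i = g_i = tanh, I = (0, 0)^T
    for k in range(hist, steps + hist):
        t = (k - hist) * h
        d = int(round(np.exp(t) / (1.0 + np.exp(t)) / h))   # tau(t) in steps
        u = omega * (y[k] - x[k])              # state-coupling controller (6)
        for z, zd, uu in ((x, x[k - d], 0.0), (y, y[k - d], u)):
            A, B = switching(z[k], A1, A2), switching(z[k], B1, B2)
            z[k + 1] = z[k] + h * (-z[k] + A @ f(z[k]) + B @ f(zd) + uu)

    print("|e(15)| =", np.abs(y[-1] - x[-1]))  # error should have decayed toward zero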


Now, we consider the controller gains in (6) as ω_1 = ω_2 = −8. By simple computation, we obtain A_11 = 2, A_12 = 0.1, A_21 = 5, A_22 = 3, B_11 = 1.5, B_12 = 0.1, B_21 = 0.2, B_22 = 2.5, ρ_i = σ_i = 1, and τ̇_ij(t) ≤ δ = 1/4 < 1, i, j = 1, 2. Now, let r = 2, λ_i = 1, and ξ_ij = ξ̄_ij = ζ_ij = ζ̄_ij = 1; then

    −18 + 2A_11 + A_12 + A_21 + 7B_11/3 + B_12 + 4B_21/3 ≈ −5 < 0
    −18 + 2A_22 + A_12 + A_21 + 7B_22/3 + 4B_12/3 + B_21 ≈ −0.73 < 0.

Hence, the conditions of Theorem 1 are satisfied, and therefore the drive system (23) and the corresponding response system achieve exponential synchronization. Define the errors e_i(t) = y_i(t) − x_i(t), i = 1, 2; the errors are shown in Fig. 2.

[Fig. 2. Synchronization errors e_1(t) and e_2(t) between the drive system (23) and the corresponding response system.]
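The computation above can be reproduced mechanically. A short check (ours) of the two inequalities, which are condition (9) with r = 2, λ_i = 1, unit exponents, and δ = 1/4:

    import numpy as np

    A = np.array([[2.0, 0.1], [5.0, 3.0]])      # A_ij = max(|a_lo_ij|, |a_hi_ij|)
    B = np.array([[1.5, 0.1], [0.2, 2.5]])      # B_ij = max(|b_lo_ij|, |b_hi_ij|)
    rho = sigma = np.ones(2)                    # Lipschitz constants of tanh
    omega, delta = np.array([-8.0, -8.0]), 0.25

    for i in range(2):
        lhs = -2.0 * (1.0 - omega[i]) + sum(
            A[i, j] * rho[j] + A[j, i] * rho[i]
            + B[i, j] * sigma[j] + B[j, i] * sigma[i] / (1.0 - delta)
            for j in range(2))
        print(f"i = {i + 1}: LHS = {lhs:.2f}")  # ~ -5.03 and -0.73, both < 0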

The state trajectories of the variables x_1(t) and y_1(t) are shown in Fig. 3; owing to the page limit of this brief, the figure of the state trajectories of x_2(t) and y_2(t) is omitted. Examples can also be given for the other theorems; they are omitted here.

[Fig. 3. State trajectories of the variables x_1(t) and y_1(t) with the initial condition x_1(θ) = 0.45, y_1(θ) = −0.5, ∀θ ∈ [−1, 0).]

Remark 3: In (1), since a_ij(x_i(t)) and b_ij(x_i(t)), i, j = 1, 2, ..., n, are discontinuous, our results are different from those of the research on exponential synchronization of neural networks with a continuous right-hand side [12]–[14]. In addition, the results obtained in [6]–[8] cannot be used here.

V. CONCLUSION

In this brief, under the framework of Filippov solutions, by building a useful Lyapunov functional and using the inequality technique, we obtained new testable algebraic criteria that ensure the exponential stability of the error system, and thus the synchronization of the drive system with the response system. The new results enrich and improve the earlier publications and are very easy to verify. Finally, with further analysis of the dynamic properties of memristive neural networks in practice or from an experimental point of view, we believe more precise mathematical or physical architectures of memristive neural networks will be proposed and studied in future research.

REFERENCES

[1] L. O. Chua, "Memristor—The missing circuit element," IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507–519, Sep. 1971.
[2] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, "The missing memristor found," Nature, vol. 453, pp. 80–83, May 2008.
[3] F. Corinto, A. Ascoli, and M. Gilli, "Nonlinear dynamics of memristor oscillators," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 58, no. 6, pp. 1323–1336, Jun. 2011.
[4] M. Itoh and L. O. Chua, "Memristor oscillators," Int. J. Bifurcation Chaos, vol. 18, no. 11, pp. 3183–3206, Nov. 2008.
[5] J. Hu and J. Wang, "Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays," in Proc. Int. Joint Conf. Neural Netw., Jul. 2010, pp. 1–8.
[6] A. L. Wu and Z. G. Zeng, "Exponential stabilization of memristive neural networks with time delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 12, pp. 1919–1929, Dec. 2012.
[7] A. L. Wu and Z. G. Zeng, "Anti-synchronization control of a class of memristive recurrent neural networks," Commun. Nonlinear Sci. Numer. Simul., vol. 18, no. 2, pp. 373–385, Feb. 2013.
[8] G. D. Zhang, Y. Shen, and J. W. Sun, "Global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays," Neurocomputing, vol. 97, no. 15, pp. 149–154, Nov. 2012.
[9] A. L. Wu, S. P. Wen, and Z. G. Zeng, "Synchronization control of a class of memristor-based recurrent neural networks," Inf. Sci., vol. 183, no. 1, pp. 106–116, Jan. 2012.
[10] J. Cao and L. Li, "Cluster synchronization in an array of hybrid coupled neural networks with delay," Neural Netw., vol. 22, no. 4, pp. 335–342, May 2009.
[11] Y. Yang and J. Cao, "Exponential lag synchronization of a class of chaotic delayed neural networks with impulsive effects," Phys. A, vol. 386, no. 1, pp. 492–502, 2007.
[12] T. Huang, A. Chan, Y. Huang, and J. Cao, "Stability of Cohen–Grossberg neural networks with time-varying delays," Neural Netw., vol. 20, no. 6, pp. 868–873, 2007.
[13] W. Lu and T. Chen, "Synchronization of coupled connected neural networks with delays," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 51, no. 12, pp. 2491–2503, Dec. 2004.
[14] T. Li, S. Fei, and K. Zhang, "Synchronization control of recurrent neural networks with distributed delays," Phys. A, vol. 387, no. 4, pp. 982–996, Feb. 2008.
[15] C. Zhang, Y. He, and M. Wu, "Exponential synchronization of neural networks with time-varying mixed delays and sampled-data," Neurocomputing, vol. 74, nos. 1–3, pp. 265–273, Dec. 2010.


[16] M. Dong, "Global exponential stability and existence of periodic solutions of CNNs with delays," Phys. Lett. A, vol. 300, no. 1, pp. 49–57, Jul. 2002.
[17] D. Liu, S. Hu, and J. Wang, "Global output convergence of a class of continuous-time recurrent neural networks with time-varying thresholds," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 51, no. 4, pp. 161–167, Apr. 2004.
[18] L. Zhang, Z. Yi, S. L. Zhang, and P. A. Heng, "Activity invariant sets and exponentially stable attractors of linear threshold discrete-time recurrent neural networks," IEEE Trans. Autom. Control, vol. 54, no. 6, pp. 1341–1347, Jun. 2008.
[19] R. Yang, Z. Zhang, and P. Shi, "Exponential stability on stochastic neural networks with discrete interval and distributed delays," IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 169–175, Jan. 2010.
[20] Y. Liu, Z. Wang, J. Liang, and X. H. Liu, "Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays," IEEE Trans. Neural Netw., vol. 20, no. 7, pp. 1102–1116, Jul. 2009.
[21] H. T. Lu and C. van Leeuwen, "Synchronization of chaotic neural networks via output or state coupling," Chaos, Solitons Fractals, vol. 30, no. 1, pp. 166–176, Oct. 2006.


[22] M. Q. Liu, "Optimal exponential synchronization of general chaotic delayed neural networks: An LMI approach," Neural Netw., vol. 22, no. 7, pp. 949–957, Sep. 2009.
[23] H. Zhang, Z. Liu, G. Huang, and Z. Wang, "Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay," IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 91–106, Jan. 2010.
[24] H. Zhang, Y. H. Xie, Z. L. Wang, and C. D. Zheng, "Adaptive synchronization between two different chaotic neural networks with time delay," IEEE Trans. Neural Netw., vol. 18, no. 6, pp. 1841–1845, Nov. 2007.
[25] G. Chen, J. Zhou, and Z. Liu, "Global synchronization of coupled delayed neural networks and applications to chaotic CNN models," Int. J. Bifurcation Chaos, vol. 14, no. 7, pp. 2229–2240, Jul. 2004.
[26] Y. Shen and J. Wang, "Almost sure exponential stability of recurrent neural networks with Markovian switching," IEEE Trans. Neural Netw., vol. 20, no. 5, pp. 840–855, May 2009.
[27] A. F. Filippov, Differential Equations with Discontinuous Right-Hand Sides. Boston, MA, USA: Kluwer, 1988.
[28] F. H. Clarke, Y. S. Ledyaev, R. J. Stern, and R. R. Wolenski, Nonsmooth Analysis and Control Theory. New York, NY, USA: Springer-Verlag, 1998.
