
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 26, NO. 6, JUNE 2015

Global Exponential Synchronization of Multiple Memristive Neural Networks With Time Delay via Nonlinear Coupling

Zhenyuan Guo, Shaofu Yang, and Jun Wang, Fellow, IEEE

Abstract— This paper presents theoretical results on the global exponential synchronization of multiple memristive neural networks with time delays. A novel coupling scheme is introduced, in a general topological structure described by a directed or undirected graph, with a linear diffusive term and a discontinuous sign term. Several criteria are derived based on the Lyapunov stability theory to ascertain the global exponential stability of the synchronization manifold under the proposed coupling scheme. Simulation results for several examples are given to substantiate the effectiveness of the theoretical results.

Index Terms— Global exponential synchronization, memristive neural network (MNN), nonlinear coupling, synchronization manifold.

I. INTRODUCTION

CONCEIVED in [1] and prototyped in [2], the memristor is gaining worldwide attention in view of its potential applications. Unlike the resistance of a resistor, the memristance (the value of a memristor) depends on the quantity of charge that has passed through it. Hence, the memristor can remember its past dynamic history. Owing to this memory property, the memristor is considered a potential candidate for emulating biological synapses. By substituting for the resistors in artificial neural networks, a memristor-based neural network can be constructed. This model can exhibit complex behaviors, including chaos [3], and provides an important means to better understand neural processes in the human brain [4].

Some effort has been devoted to studying the dynamic behavior of memristive neural networks (MNNs). In [5], a memristor-based recurrent neural network was proposed in which the feature of the memristor was characterized by a simplified mathematical model, and its global stability was studied using differential inclusion theory. Many further investigations have followed [6]–[12]. In particular, it has been revealed that the number

Manuscript received February 27, 2014; revised July 20, 2014; accepted August 27, 2014. Date of publication September 12, 2014; date of current version May 15, 2015. This work was supported in part by the Research Grants Council, Hong Kong, under Project CUHK416811E, in part by the China Post-Doctoral Science Foundation under Grant 2013M542104 and Grant 2012T50714, in part by the Science and Technology Plan of Hunan Province under Grant 2014RS4030, and in part by the National Natural Science Foundation of China under Grant 11101133. Z. Guo is with the College of Mathematics and Econometrics, Hunan University, Changsha 410082, China, and also with the Department of Mechanical and Automation Engineering, Chinese University of Hong Kong, Hong Kong (e-mail: [email protected]). S. Yang and J. Wang are with the Department of Mechanical and Automation Engineering, Chinese University of Hong Kong, Hong Kong (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2014.2354432

of equilibria in an n-neuron memristor-based neural network can be significantly increased to 2^(2n²+n) [9], which substantially enlarges the storage capacity. Hence, compared with conventional neural networks, MNNs have more attractive applications.

The collective behaviors, especially synchronization, of multiple neural networks have attracted tremendous attention from researchers due to their wide applications in various fields. For instance, drive-response synchronization of two identical coupled chaotic systems [13] can be employed in secret communication [14], [15]. Synchronization of an array of coupled neural networks has been shown to play a very important role in information processing and cognitive behaviors in the brain [16]. Up to now, many results have been obtained on the synchronization of two neural networks [17]–[20]. In recent years, there have also been some works considering this issue for two coupled MNNs [21]–[25]. By designing a proper control law for the response system, the response system can synchronize with the drive system under some conditions.

Compared with drive-response synchronization, synchronization of multiple (more than two) neural networks is more complicated. This is because the synchronization criteria depend not only on the coupling protocol, but also on the coupling structure. Generally, the coupling structure is described by a graph. The coupling structure of a drive-response system is the simplest one, i.e., a directed graph with two nodes. For multiple neural networks, however, the coupling structure can be connected or unconnected, directed or undirected. In [26], global synchronization in an array of identical delayed neural networks under an undirected connected coupling topology was investigated; the restriction on the coupling structure was then removed in [27], where synchronization criteria were obtained without assuming that the coupling structure was undirected or connected.
Subsequently, the synchronization issue was studied under various other settings, such as time delay in the coupling [28], time-varying coupling topology [29], and the influence of stochastic noise [30]. In addition, some works consider synchronization of other types of neural networks, such as reaction-diffusion neural networks [31]. However, to the best of our knowledge, few results to date focus on the synchronization problem of multiple MNNs, and the coupling structures in the references mentioned above cannot synchronize multiple MNNs.

Motivated by the above discussions, this paper addresses the problem of global synchronization of an array of coupled MNNs with a general coupling structure. Unlike the conventional

2162-237X © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


neural networks mentioned before, an MNN is a state-dependent switched system. Therefore, the MNNs switch asynchronously due to their different states, and thus they are nonidentical before reaching synchronization. Obviously, this is quite different from previous works, where all the neural networks were assumed to be identical at all times. Hence, the traditional linear diffusive coupling scheme may be unable to ensure complete synchronization of multiple MNNs. To overcome this difficulty, we propose a novel nonlinear coupling scheme, which is composed of a linear diffusive term and a discontinuous sign function term. Using an approach similar to that in [27] and [32], the synchronization problem is transformed into the stability problem of the synchronization manifold. Based on the Lyapunov stability theory and by constructing suitable Lyapunov–Krasovskii functionals, several global exponential synchronization criteria are derived. Moreover, some criteria in the form of linear matrix inequalities (LMIs) are also obtained, which are easily verified using the MATLAB LMI Toolbox.

The rest of this paper is organized as follows. In Section II, the problem is formulated and some preliminaries are presented. In Section III, the main results for global exponential synchronization of multiple MNNs are derived. In Section IV, some illustrative examples and their simulations are provided to demonstrate the effectiveness of the obtained criteria. In Section V, concluding remarks are given.

II. PRELIMINARIES

In this section, we first introduce the memristor-based neural network (MNN) model and then formulate the synchronization problem of an array of MNNs. Some necessary definitions and useful lemmas are also briefly outlined.

A. Model Description

Using memristors to replace the resistors in the circuit realization of the connections of a neural network, a single MNN model with time delay can be described by

    dx(t)/dt = −C x(t) + A(x) f(x(t)) + B(x) f(x(t − τ)) + u(t)    (1)

where x(t) ∈ R^n is the state vector; C = diag{c_1, c_2, ..., c_n} is a real positive definite diagonal matrix representing the neuron self-inhibitions; u(t) ∈ R^n represents the input or bias; τ > 0 is the transmission delay among neurons; f(x(t)) = (f_1(x_1(t)), f_2(x_2(t)), ..., f_n(x_n(t)))^T is the vector of neuronal activation functions; and A(x) = [a_ij(f_j(x_j(t)) − x_i(t))]_{n×n} and B(x) = [b_ij(f_j(x_j(t − τ)) − x_i(t))]_{n×n} are the weight matrix and the delayed weight matrix, respectively.

Due to the pinched hysteretic feature of the memristor (Fig. 1), the weight between every two neurons is time varying. Here, we consider a simplified model (Fig. 2), which was first proposed in [5]. The connection weights a_ij(·) and b_ij(·) depend on the variation directions of f_j(x_j(t)) − x_i(t) and f_j(x_j(t − τ)) − x_i(t) along time t, respectively. They can be described by the


Fig. 1. Typical current–voltage characteristic of a memristor driven by a sinusoidal current source: a pinched hysteresis loop (from [11]).

Fig. 2. Two-state pinched hysteresis loop resulting from driving a piecewise linear charge-controlled memristor with a sinusoidal current source (from [11]).
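The loops in Figs. 1 and 2 can be reproduced numerically. The sketch below is our own side illustration, not part of the paper: it drives a two-state charge-controlled memristor with a sinusoidal current; R_ON, R_OFF, and the charge threshold Q_TH are arbitrary illustrative values. The final check confirms the defining "pinched" property, i.e., the voltage is zero whenever the current is zero.

```python
import numpy as np

# Two-state charge-controlled memristor driven by i(t) = sin(t).
# The memristance M(q) switches between two values at an arbitrary
# charge threshold, mimicking the two-state loop in Fig. 2.
R_ON, R_OFF, Q_TH = 1.0, 5.0, 1.0  # illustrative values only

def memristance(q):
    return R_ON if q < Q_TH else R_OFF

t = np.linspace(0.0, 4.0 * np.pi, 40001)
i = np.sin(t)
q = np.cumsum(i) * (t[1] - t[0])          # q(t) = ∫ i dt (charge)
v = np.array([memristance(qk) * ik for qk, ik in zip(q, i)])

# Pinched hysteresis: whenever the current is zero, the voltage is
# zero as well, so every v-i loop passes through the origin.
assert np.all(np.abs(v[np.abs(i) < 1e-4]) < 1e-3)
```

Plotting v against i over one period would trace a loop pinched at the origin, as in Fig. 1.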

following function:

    w(v(t)) = { ŵ,         if v(s) is decreasing on (t − dt, t]
              { w̌,         if v(s) is increasing on (t − dt, t]    (2)
              { w(v(t⁻)),  if v(s) is unchanged on (t − dt, t]

where dt is a sufficiently small positive constant. It is easy to see that each weight switches between two different constant values. Hence, the MNN is a switched system. In this paper, we denote by â_ij, b̂_ij the larger values of a_ij and b_ij, and by ǎ_ij, b̌_ij the smaller ones. Meanwhile, denote d^a_ij = â_ij − ǎ_ij and d^b_ij = b̂_ij − b̌_ij.

Consider N identical MNNs with time delay via nonlinear coupling. The state equations of this system are as follows:

    dx_i(t)/dt = −C x_i(t) + A(x_i) f(x_i(t)) + B(x_i) f(x_i(t − τ)) + u(t)
                 + Σ_{j=1}^N g_ij Γ φ(x_j − x_i),   i = 1, 2, ..., N    (3)

where x_i(t) = (x_i1(t), x_i2(t), ..., x_in(t))^T ∈ R^n is the state of the ith MNN; Γ = diag{γ_1, γ_2, ..., γ_n} ∈ R^{n×n} is a nonnegative diagonal matrix representing the coupling strengths between corresponding state components; and the coupling matrix G = [g_ij]_{N×N} represents the coupling configuration of the system, satisfying g_ii = 0 and g_ij ≥ 0 for i ≠ j. Note that G is not required to be symmetric in this paper; namely, it can represent either a directed graph or an undirected one. The nonlinear coupling function φ : R^n → R^n is defined as

    φ(v) = v + sign(v)    (4)


where sign(v) = (sign(v_1), sign(v_2), ..., sign(v_n))^T and

    sign(v_i) = { −1,  v_i < 0
                {  0,  v_i = 0
                {  1,  v_i > 0.

Remark 1: Different from the conventional linear diffusive coupling scheme, the sign function in φ is introduced to make the coupled MNNs (3) synchronize.

Define the matrix L ∈ R^{N×N} as

    l_ij = g_ij,                    i ≠ j
    l_ii = −Σ_{k=1, k≠i}^N g_ik;    (5)

then −L is the Laplacian matrix corresponding to G. Given a vector x ∈ R^N, it is easy to obtain that the ith component of Lx satisfies (Lx)_i = Σ_{j=1}^N g_ij (x_j − x_i). Denote

    𝒞 = I_N ⊗ C,  ℒ = L ⊗ Γ
    x = (x_1^T, x_2^T, ..., x_N^T)^T
    𝒜(x) = diag{A(x_1), A(x_2), ..., A(x_N)}
    ℬ(x) = diag{B(x_1), B(x_2), ..., B(x_N)}
    𝒖(t) = (u^T(t), u^T(t), ..., u^T(t))^T
    𝒇(x) = (f^T(x_1), f^T(x_2), ..., f^T(x_N))^T

where ⊗ denotes the Kronecker product of two matrices and I_N represents the identity matrix of order N. Then, the coupled MNNs (3) can be written in matrix form as

    ẋ(t) = −𝒞 x(t) + 𝒜(x) 𝒇(x(t)) + ℬ(x) 𝒇(x(t − τ)) + 𝒖(t) + ℒ x(t) + ψ    (6)

where

    ψ = ((Γ Σ_{j=1}^N g_1j sign(x_j − x_1))^T, (Γ Σ_{j=1}^N g_2j sign(x_j − x_2))^T, ..., (Γ Σ_{j=1}^N g_Nj sign(x_j − x_N))^T)^T ∈ R^{nN}.    (7)

B. Definitions and Lemmas

Let C([−τ, 0], R^n) be the Banach space of continuous functions mapping the interval [−τ, 0] into R^n with norm ‖φ‖ = max_{1≤i≤n} {sup_{−τ≤θ≤0} |φ_i(θ)|}. The initial value of (1) is an element of C([−τ, 0], R^n). Here, we denote by x(t; t_0, φ_0) the solution of (1) with given initial condition (t_0, φ_0), where φ_0 ∈ C([−τ, 0], R^n) and t_0 ∈ R.

Next, we present some definitions and lemmas that are useful in the analysis of our results.

Definition 1: The set S = {x(s) = (x_1^T(s), x_2^T(s), ..., x_N^T(s))^T : x_i(s) ∈ C([−τ, +∞), R^n), x_i(s) = x_j(s), i, j = 1, 2, ..., N} is called the synchronization manifold of (3).

Definition 2: The multiple MNNs (3) or (6) are said to be globally exponentially synchronizable if there exist ε > 0, T > t_0, and K > 0 such that

    ‖x_i(t) − x_j(t)‖ ≤ K e^{−εt}

holds for every initial condition φ_0 = (φ_10^T, φ_20^T, ..., φ_N0^T)^T ∈ C([−τ, 0], R^n)^N, all t > T, and all i, j = 1, 2, ..., N. Here, ε is called the convergence rate of synchronization of MNNs (3) or (6).

Definition 3 [27]: Let M_1^N be the set of matrices with N columns having the following property: for each row i of any M ∈ M_1^N, there exist i_1 and i_2 such that M_{i,i_1} = −M_{i,i_2} = α_i.

Definition 4 [27]: Let M_2^N be the subset of M_1^N with the following property: for any pair of indexes i and j, there exist columns with indexes i_1, i_2, ..., i_l in M ∈ M_2^N such that i_1 = i and i_l = j, and there also exist rows with indexes p_1, ..., p_{l−1} in M such that the entries M_{p_q, i_q} ≠ 0 and M_{p_q, i_{q+1}} ≠ 0 for q = 1, ..., l − 1.

Definition 5 [27]: Let T_1^N(0) ⊂ R^{N×N} be the set of symmetric matrices with the following property: for any matrix T ∈ T_1^N(0), the off-diagonal elements are nonnegative and the sum of the entries in each row is 0.

Lemma 1 [32]: If the matrix A ∈ T_1^N(0), then A is irreducible if and only if there exists an m × N matrix M ∈ M_2^N such that A = −M^T M.

Throughout this paper, the following assumption on the activation functions will be used.

Assumption 1: Each activation function f_i(·) is globally Lipschitz continuous and bounded; namely, there exist constants l_i > 0 and K̄_i > 0 such that

    |f_i(u) − f_i(v)| ≤ l_i |u − v|    (8)

and

    |f_i(u)| ≤ K̄_i    (9)

hold for all u, v ∈ R, i = 1, 2, ..., n.

III. MAIN RESULTS

In this section, we present sufficient conditions that guarantee the global synchronization of the coupled MNNs. Denote

    ς_ij = g_ij + g_ji − Σ_{k=1, k≠i,j}^N (g_ik + g_jk)

and

    κ_i = 2 Σ_{s=1}^n K̄_s (d^a_is + d^b_is)

for i, j = 1, 2, ..., N. The main results of this paper are as follows.

Theorem 1: Let Assumption 1 hold. The multiple MNNs (3) are globally exponentially synchronizable with convergence rate ε if there exist a real number ε > 0, a positive definite diagonal matrix P = diag{p_1, p_2, ..., p_n}, a diagonal matrix Θ = diag{θ_1, θ_2, ..., θ_n}, r_1 ∈ [0, 1], r_2 ∈ [0, 1], and an m × N matrix M ∈ M_2^N such that

    {M^T M (γ_r L + θ_r I_N)}^s ≤ 0    (10)

    (ε/2 − c_r − θ_r) p_r + (1/2) Σ_{s=1}^n (p_r |â_rs| l_s^{2r_1} + p_s |â_sr| l_r^{2−2r_1})
      + (1/2) Σ_{s=1}^n (p_r |b̂_rs| l_s^{2r_2} + p_s |b̂_sr| l_r^{2−2r_2}) e^{ετ} ≤ 0    (11)

and

    κ_r − γ_r ς_{i_1 i_2} ≤ 0    (12)

where {A}^s ≜ (1/2)(A + A^T), r = 1, 2, ..., n, and i_1, i_2 are the column indexes of the nonzero entries in row i (i = 1, 2, ..., m) of matrix M.

Proof: Denote ℳ = M ⊗ I_n and y(t) = ℳ x(t) = (y_1^T(t), y_2^T(t), ..., y_m^T(t))^T. According to the structure of M, it is easy to see that y_i = α_i (x_{i_1} − x_{i_2}) for all i = 1, 2, ..., m. Note that y = ℳ x can be used to measure the distance from x to the synchronization manifold S, since y = 0 if and only if x ∈ S.

Let 𝒫 = I_m ⊗ P. Consider the following Lyapunov functional with respect to (6):

    V(t, x) = (1/2) x^T(t) ℳ^T 𝒫 ℳ x(t) e^{εt}
              + (1/2) Σ_{i=1}^m Σ_{r=1}^n Σ_{s=1}^n p_r |b̂_rs| l_s^{2(1−r_2)} ∫_{t−τ}^{t} y_is^2(u) e^{ε(u+τ)} du.    (13)

Then, we have

    V(t, x) ≥ (1/2) p_min e^{εt} ‖y(t)‖^2    (14)

where p_min = min_{1≤r≤n} {p_r} > 0. Denote Θ̄ = I_N ⊗ Θ and calculate the upper right Dini derivative of V along the solution of (6); one has

    D^+V(t, x) = (ε/2) e^{εt} x^T(t) ℳ^T 𝒫 ℳ x(t)
        + e^{εt} x^T(t) ℳ^T 𝒫 ℳ (−𝒞 x(t) − Θ̄ x(t) + 𝒜(x) 𝒇(x(t)) + ℬ(x) 𝒇(x(t − τ)) + 𝒖(t) + ℒ x(t) + Θ̄ x(t) + ψ)
        + (1/2) Σ_{i=1}^m Σ_{r=1}^n Σ_{s=1}^n p_r |b̂_rs| l_s^{2(1−r_2)} (y_is^2(t) e^{ε(t+τ)} − y_is^2(t − τ) e^{εt}).    (15)

Denote C_1 = I_m ⊗ C, Â = [â_ij]_{n×n}, B̂ = [b̂_ij]_{n×n}, Â_1 = I_m ⊗ Â, B̂_1 = I_m ⊗ B̂, and Θ̄_1 = I_m ⊗ Θ. It is easy to verify that

    ℳ 𝒞 = C_1 ℳ,  ℳ Θ̄ = Θ̄_1 ℳ,  ℳ 𝒖(t) = 0    (16)

    ℳ (I_N ⊗ Â) = Â_1 ℳ,  ℳ (I_N ⊗ B̂) = B̂_1 ℳ    (17)

    ℳ 𝒜(x) 𝒇(x) = Â_1 ℳ 𝒇(x) + ℳ (𝒜(x) − I_N ⊗ Â) 𝒇(x)    (18)

    ℳ ℬ(x) 𝒇(x(t − τ)) = B̂_1 ℳ 𝒇(x(t − τ)) + ℳ (ℬ(x) − I_N ⊗ B̂) 𝒇(x(t − τ))    (19)

and

    ℳ 𝒇(x) = (α_1 (f(x_{1_1}) − f(x_{1_2}))^T, ..., α_m (f(x_{m_1}) − f(x_{m_2}))^T)^T
            = (α_1 f̃(α_1^{−1} y_1)^T, ..., α_m f̃(α_m^{−1} y_m)^T)^T ≜ (h(y_1)^T, ..., h(y_m)^T)^T = h(y)    (20)

where f̃(x_1 − x_2) ≜ f(x_1) − f(x_2) and h(y_i) = (h_1(y_i1), h_2(y_i2), ..., h_n(y_in))^T. It is easy to verify that |h_j(y_ij)| ≤ l_j |y_ij| for i = 1, 2, ..., m and j = 1, 2, ..., n.

Next, we have

    x^T(t) ℳ^T 𝒫 ℳ [(𝒜(x) − I_N ⊗ Â) 𝒇(x) + (ℬ(x) − I_N ⊗ B̂) 𝒇(x(t − τ))]
    = Σ_{i=1}^m Σ_{r=1}^n α_i p_r y_ir(t) Σ_{s=1}^n ((a_rs(x_{i_1}) − â_rs) f_s(x_{i_1 s}(t)) − (a_rs(x_{i_2}) − â_rs) f_s(x_{i_2 s}(t))
        + (b_rs(x_{i_1}) − b̂_rs) f_s(x_{i_1 s}(t − τ)) − (b_rs(x_{i_2}) − b̂_rs) f_s(x_{i_2 s}(t − τ)))
    ≤ Σ_{i=1}^m Σ_{r=1}^n α_i p_r |y_ir(t)| Σ_{s=1}^n 2 K̄_s (d^a_rs + d^b_rs)
    = Σ_{i=1}^m Σ_{r=1}^n α_i p_r κ_r |y_ir(t)|    (21)

and

    x^T(t) ℳ^T 𝒫 ℳ ψ
    = Σ_{i=1}^m Σ_{r=1}^n α_i p_r γ_r (x_{i_1 r} − x_{i_2 r}) [Σ_{j=1}^N g_{i_1 j} sign(x_jr − x_{i_1 r}) − Σ_{j=1}^N g_{i_2 j} sign(x_jr − x_{i_2 r})]
    = Σ_{i=1}^m Σ_{r=1}^n α_i p_r γ_r [−(g_{i_1 i_2} + g_{i_2 i_1}) |x_{i_1 r} − x_{i_2 r}|
        + (x_{i_1 r} − x_{i_2 r}) (Σ_{j=1, j≠i_2}^N g_{i_1 j} sign(x_jr − x_{i_1 r}) − Σ_{j=1, j≠i_1}^N g_{i_2 j} sign(x_jr − x_{i_2 r}))]
    ≤ Σ_{i=1}^m Σ_{r=1}^n α_i p_r γ_r |x_{i_1 r} − x_{i_2 r}| (−(g_{i_1 i_2} + g_{i_2 i_1}) + Σ_{j=1, j≠i_1,i_2}^N (g_{i_1 j} + g_{i_2 j}))
    ≤ − Σ_{i=1}^m Σ_{r=1}^n α_i p_r γ_r ς_{i_1 i_2} |y_ir(t)|.    (22)


Substituting (16)–(22) into (15), we have

    D^+V(t, x) ≤ Σ_{i=1}^m Σ_{r=1}^n p_r e^{εt} [ (ε/2) y_ir^2(t) + y_ir(t) (−(c_r + θ_r) y_ir(t) + Σ_{s=1}^n (â_rs h_s(y_is(t)) + b̂_rs h_s(y_is(t − τ))))
        + α_i (κ_r − γ_r ς_{i_1 i_2}) |y_ir| + (1/2) Σ_{s=1}^n |b̂_rs| l_s^{2(1−r_2)} (y_is^2(t) e^{ετ} − y_is^2(t − τ)) ]
        + e^{εt} x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t)

    ≤ Σ_{i=1}^m Σ_{r=1}^n p_r e^{εt} [ (ε/2 − c_r − θ_r) y_ir^2(t) + Σ_{s=1}^n (â_rs y_ir(t) h_s(y_is(t)) + b̂_rs y_ir(t) h_s(y_is(t − τ)))
        + (1/2) Σ_{s=1}^n |b̂_rs| l_s^{2(1−r_2)} (y_is^2(t) e^{ετ} − y_is^2(t − τ)) ]
        + e^{εt} x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t)

    ≤ Σ_{i=1}^m Σ_{r=1}^n p_r e^{εt} [ (ε/2 − c_r − θ_r) y_ir^2(t) + Σ_{s=1}^n (|â_rs| |y_ir(t)| l_s |y_is(t)| + |b̂_rs| |y_ir(t)| l_s |y_is(t − τ)|)
        + (1/2) Σ_{s=1}^n |b̂_rs| l_s^{2(1−r_2)} (y_is^2(t) e^{ετ} − y_is^2(t − τ)) ]
        + e^{εt} x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t)

    ≤ Σ_{i=1}^m Σ_{r=1}^n p_r e^{εt} [ (ε/2 − c_r − θ_r) y_ir^2(t)
        + Σ_{s=1}^n ( (1/2) |â_rs| ((|y_ir(t)| l_s^{r_1})^2 + (|y_is(t)| l_s^{1−r_1})^2) + (1/2) |b̂_rs| ((|y_ir(t)| l_s^{r_2})^2 + (|y_is(t − τ)| l_s^{1−r_2})^2) )
        + (1/2) Σ_{s=1}^n |b̂_rs| l_s^{2(1−r_2)} (y_is^2(t) e^{ετ} − y_is^2(t − τ)) ]
        + e^{εt} x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t)

    ≤ e^{εt} Σ_{i=1}^m Σ_{r=1}^n [ (ε/2 − c_r − θ_r) p_r + (1/2) Σ_{s=1}^n (p_r |â_rs| l_s^{2r_1} + p_s |â_sr| l_r^{2(1−r_1)})
        + (1/2) Σ_{s=1}^n (p_r |b̂_rs| l_s^{2r_2} + p_s |b̂_sr| l_r^{2(1−r_2)}) e^{ετ} ] y_ir^2(t)
        + e^{εt} x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t)

    ≤ e^{εt} x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t)    (23)

where the second inequality uses (12) and the last uses (11). Denote ȳ_j(t) = (y_1j(t), y_2j(t), ..., y_mj(t))^T and x̄_j(t) = (x_1j(t), x_2j(t), ..., x_Nj(t))^T. It is easy to obtain that ȳ_j(t) = M x̄_j(t), j = 1, 2, ..., n. Therefore

    x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t) = Σ_{i=1}^n p_i x̄_i^T(t) M^T M (γ_i L + θ_i I_N) x̄_i(t) ≤ 0.    (24)

It follows from (10) that

    D^+V(t, x) ≤ 0.    (25)

This implies that

    V(t) ≤ V(0)  ∀ t ≥ 0.    (26)

It follows from (14) that

    ‖y(t)‖^2 ≤ 2 p_min^{−1} V(t) e^{−εt} ≤ 2 p_min^{−1} V(0) e^{−εt}    (27)

which completes the proof.

Remark 2: According to the proof, it is easy to find that the sign function in the coupling term is necessary. Each MNN switches depending on its own state, so the MNNs are nonidentical before they synchronize with each other. The sign function is used to suppress the influence of the difference between two MNNs, so that the whole system can reach complete synchronization. This is also the main difference between MNNs and conventional neural networks.

In the proof of Theorem 1, the key point is to find a proper matrix M ∈ M_2^N; then the Lyapunov functional can be constructed. Note that if G represents an undirected graph, it is easy to verify that L ∈ T_1^N(0). Furthermore, if the graph is also connected, i.e., L is also irreducible, then there exists an m × N matrix M ∈ M_2^N such that L = −M^T M by Lemma 1. Meanwhile, the eigenvalues of L satisfy 0 = λ_1 > λ_2 ≥ λ_3 ≥ ··· ≥ λ_N, where 0 is a simple eigenvalue and λ_2 is the second largest eigenvalue of L. If we replace M^T M in Theorem 1 by −L, the following corollary can be obtained.

Corollary 1: Let Assumption 1 hold and G be symmetric and irreducible. The multiple MNNs (3) are globally exponentially synchronizable with convergence rate ε if there exist ε > 0, a positive definite diagonal matrix P = diag{p_1, p_2, ..., p_n}, r_1 ∈ [0, 1], and r_2 ∈ [0, 1] such that

    p_r γ_r λ_2^2 + δ_r λ_2 ≥ 0    (28)

and

    κ_r − γ_r ς_{i_1 i_2} ≤ 0    (29)

where r = 1, 2, ..., n; i_1, i_2 are the column indexes of the nonzero entries in row i (i = 1, 2, ..., m) of the matrix M obtained from L by Lemma 1; and

    δ_r = (ε/2 − c_r) p_r + (1/2) Σ_{s=1}^n (p_r |â_rs| l_s^{2r_1} + p_s |â_sr| l_r^{2−2r_1})
          + (1/2) Σ_{s=1}^n (p_r |b̂_rs| l_s^{2r_2} + p_s |b̂_sr| l_r^{2−2r_2}) e^{ετ}.

Proof: Since L = −M^T M, we consider the same Lyapunov functional as (13). Let Θ = 0; by an analysis similar to that


in Theorem 1, (24) should be

    x^T(t) ℳ^T 𝒫 ℳ ℒ x(t) = Σ_{i=1}^n p_i x̄_i^T(t) M^T M γ_i L x̄_i(t) = − Σ_{i=1}^n p_i γ_i x̄_i^T(t) L^2 x̄_i(t) ≤ 0    (30)

and

    Σ_{i=1}^m Σ_{r=1}^n δ_r y_ir^2(t) = Σ_{i=1}^n δ_i ȳ_i^T(t) ȳ_i(t) = Σ_{i=1}^n δ_i x̄_i^T(t) M^T M x̄_i(t) = − Σ_{i=1}^n δ_i x̄_i^T(t) L x̄_i(t).

Therefore

    D^+V(t, x) ≤ − Σ_{i=1}^n x̄_i^T(t) [p_i γ_i L^2 + δ_i L] x̄_i(t).    (31)

According to (28), the eigenvalues of p_i γ_i L^2 + δ_i L are no less than 0, which indicates

    D^+V(t, x) ≤ 0.    (32)

Then, by the same procedure as in Theorem 1, we complete the proof.

The activation functions in classical neural networks are usually monotonically nondecreasing; for example, sigmoid activation functions are used in Hopfield networks [33], piecewise linear activation functions in cellular neural networks [34], and the hard comparator (signum) function in neural networks with discontinuous activation functions [35], [36]. If the activation functions are monotonically nondecreasing, then

    â_rr y_ir(t) h_r(y_ir(t)) ≤ (â_rr)^+ l_r y_ir^2(t)    (33)

where (â_rr)^+ = max{â_rr, 0}; therefore

    Σ_{s=1}^n (â_rs y_ir(t) h_s(y_is(t)) + b̂_rs y_ir(t) h_s(y_is(t − τ)))
    ≤ (â_rr)^+ l_r y_ir^2(t) + Σ_{s=1, s≠r}^n |â_rs| |y_ir(t)| l_s |y_is(t)| + Σ_{s=1}^n |b̂_rs| |y_ir(t)| l_s |y_is(t − τ)|.

Applying (33) to the third inequality of (23) yields the following corollary, whose conditions are more relaxed than those of Theorem 1.

Corollary 2: Let Assumption 1 hold and the activation functions be monotonically nondecreasing. The multiple MNNs (3) are globally exponentially synchronizable with convergence rate ε if there exist a real number ε > 0, a positive definite diagonal matrix P = diag{p_1, p_2, ..., p_n}, a diagonal matrix Θ = diag{θ_1, θ_2, ..., θ_n}, r_1 ∈ [0, 1], r_2 ∈ [0, 1], and an m × N matrix M ∈ M_2^N such that

    {M^T M (γ_r L + θ_r I_N)}^s ≤ 0    (34)

    (ε/2 − c_r − θ_r + (â_rr)^+ l_r) p_r + (1/2) Σ_{s=1, s≠r}^n (p_r |â_rs| l_s^{2r_1} + p_s |â_sr| l_r^{2−2r_1})
      + (1/2) Σ_{s=1}^n (p_r |b̂_rs| l_s^{2r_2} + p_s |b̂_sr| l_r^{2−2r_2}) e^{ετ} ≤ 0    (35)

and

    κ_r − γ_r ς_{i_1 i_2} ≤ 0    (36)

where r = 1, 2, ..., n, and i_1, i_2 are the column indexes of the nonzero entries in row i (i = 1, 2, ..., m) of matrix M.

Next, we construct a new Lyapunov functional and give another result, which is easily verified using the LMI toolbox.

Theorem 2: Denote L̄ = diag{l_1, l_2, ..., l_n} and let Assumption 1 hold. The multiple MNNs (3) are globally exponentially synchronizable with convergence rate ε if there exist a positive constant ε > 0, positive definite diagonal matrices P = diag{p_1, p_2, ..., p_n} and Λ = diag{σ_1, σ_2, ..., σ_n}, a diagonal matrix Θ = diag{θ_1, θ_2, ..., θ_n}, a positive semidefinite matrix Q ∈ R^{n×n}, and an m × N matrix M ∈ M_2^N such that

    [ 2P(εI_n − C − Θ) + L̄ Λ L̄    P Â               P B̂ ]
    [ Â^T P                        −Λ + Q e^{2ετ}     0   ]  < 0    (37)
    [ B̂^T P                        0                 −Q  ]

    {M^T M (γ_r L + θ_r I_N)}^s ≤ 0    (38)

and

    κ_r − γ_r ς_{i_1 i_2} ≤ 0    (39)

where r = 1, 2, ..., n, and i_1, i_2 are the column indexes of the nonzero entries in row i (i = 1, 2, ..., m) of matrix M.

Proof: Construct a Lyapunov functional as

    V(t, x(t)) = x^T(t) ℳ^T 𝒫 ℳ x(t) e^{2εt} + ∫_{t−τ}^{t} 𝒇(x(s))^T ℳ^T 𝒬 ℳ 𝒇(x(s)) e^{2ε(s+τ)} ds    (40)

where 𝒬 = I_m ⊗ Q. The upper right Dini derivative of V(t, x(t)) along the solution is

    D^+V(t, x(t)) = 2ε e^{2εt} x^T(t) ℳ^T 𝒫 ℳ x(t)
        + 2 e^{2εt} x^T(t) ℳ^T 𝒫 ℳ [−𝒞 x(t) + 𝒜(x) 𝒇(x(t)) + ℬ(x) 𝒇(x(t − τ)) + 𝒖(t) + ℒ x(t) + ψ − Θ̄ x + Θ̄ x]
        + e^{2ε(t+τ)} 𝒇^T(x(t)) ℳ^T 𝒬 ℳ 𝒇(x(t)) − e^{2εt} 𝒇^T(x(t − τ)) ℳ^T 𝒬 ℳ 𝒇(x(t − τ))


where Θ̄ = I_N ⊗ Θ. Denote Θ̄_1 = I_m ⊗ Θ and Λ_1 = I_m ⊗ Λ. According to the proof of Theorem 1, we have

    D^+V(t, x(t)) ≤ 2 e^{2εt} x^T(t) ℳ^T 𝒫 [(ε I_{nm} − C_1 − Θ̄_1) ℳ x(t) + Â_1 ℳ 𝒇(x) + B̂_1 ℳ 𝒇(x(t − τ))]
        + 2 e^{2εt} x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t)
        + e^{2εt} 𝒇^T(x(t)) ℳ^T Λ_1 ℳ 𝒇(x(t)) − e^{2εt} 𝒇^T(x(t)) ℳ^T Λ_1 ℳ 𝒇(x(t))
        + e^{2ε(t+τ)} 𝒇^T(x(t)) ℳ^T 𝒬 ℳ 𝒇(x(t)) − e^{2εt} 𝒇^T(x(t − τ)) ℳ^T 𝒬 ℳ 𝒇(x(t − τ)).

From (20) and Assumption 1, we have

    𝒇^T(x(t)) ℳ^T Λ_1 ℳ 𝒇(x(t)) = Σ_{i=1}^m h^T(y_i(t)) Λ h(y_i(t)) ≤ Σ_{i=1}^m y_i^T(t) L̄ Λ L̄ y_i(t).

Let η_i = [y_i^T(t), h^T(y_i(t)), h^T(y_i(t − τ))]^T and denote the block matrix on the left side of (37) by Ω; then

    D^+V(x(t)) ≤ 2 e^{2εt} x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t) + Σ_{i=1}^m e^{2εt} η_i^T Ω η_i.

From (24) and (38), we have

    x^T(t) ℳ^T 𝒫 ℳ (ℒ + Θ̄) x(t) = Σ_{i=1}^n p_i x̄_i^T(t) M^T M (γ_i L + θ_i I_N) x̄_i(t) ≤ 0.    (41)

Since Ω < 0, it follows from (41) that

    D^+V(x(t)) ≤ 0.

By an analysis similar to that of Theorem 1, we complete the proof.

Remark 3: Note that in (18) and (19), if we let

    ℳ 𝒜(x) 𝒇(x) = Ã_1 ℳ 𝒇(x) + ℳ (𝒜(x) − I_N ⊗ Ã) 𝒇(x)    (42)

and

    ℳ ℬ(x) 𝒇(x(t − τ)) = B̃_1 ℳ 𝒇(x(t − τ)) + ℳ (ℬ(x) − I_N ⊗ B̃) 𝒇(x(t − τ))    (43)

where Ã = [ã_ij]_{n×n}, B̃ = [b̃_ij]_{n×n}, ã_ij ∈ [ǎ_ij, â_ij], b̃_ij ∈ [b̌_ij, b̂_ij], Ã_1 = I_m ⊗ Ã, and B̃_1 = I_m ⊗ B̃, then, by a similar proof, Theorems 1 and 2 also hold if Â and B̂ therein are replaced by Ã and B̃, respectively. Therefore, we can seek the best choice of Ã and B̃ so that the conditions become more relaxed. Substituting â_ij and b̂_ij with ã_ij = min{|â_ij|, |ǎ_ij|} and b̃_ij = min{|b̂_ij|, |b̌_ij|}, respectively, in (11) relaxes the conditions of Theorem 1. This is because

    (ε/2 − c_r − θ_r) p_r + (1/2) Σ_{s=1}^n (p_r |ã_rs| l_s^{2r_1} + p_s |ã_sr| l_r^{2−2r_1}) + (1/2) Σ_{s=1}^n (p_r |b̃_rs| l_s^{2r_2} + p_s |b̃_sr| l_r^{2−2r_2}) e^{ετ}
    ≤ (ε/2 − c_r − θ_r) p_r + (1/2) Σ_{s=1}^n (p_r |â_rs| l_s^{2r_1} + p_s |â_sr| l_r^{2−2r_1}) + (1/2) Σ_{s=1}^n (p_r |b̂_rs| l_s^{2r_2} + p_s |b̂_sr| l_r^{2−2r_2}) e^{ετ}.

Remark 4: The procedures for determining whether the multiple MNNs (3) are globally exponentially synchronizable by Theorem 1, Corollary 1, or Theorem 2 can be summarized as follows.

For Theorem 1, we have the following.
1) According to the coupling configuration and the value of Γ, determine the locations of the nonzero elements of M, i.e., the structure of M, so that (12) holds:

    M = D M_1    (44)

where D = diag{α_1, α_2, ..., α_m} and M_1 ∈ M_1^N, which is now determined, encodes the structure of M; every nonzero entry of M_1 is −1 or 1.
2) Fix the positive definite diagonal matrix P, ε, r_1, and r_2 (e.g., P = I_n, ε = 0.1, and r_1 = r_2 = 1/2); then find a proper Θ such that (11) holds, i.e.,

    θ_r ≥ ε/2 − c_r + (1/2) Σ_{s=1}^n (|â_rs| l_s^{2r_1} + (p_s/p_r) |â_sr| l_r^{2−2r_1}) + (1/2) Σ_{s=1}^n (|b̂_rs| l_s^{2r_2} + (p_s/p_r) |b̂_sr| l_r^{2−2r_2}) e^{ετ}.    (45)

3) Based on the obtained Θ, find a proper M such that (10) holds using the MATLAB LMI Toolbox. From (10), we have

    M_1^T D^2 M_1 (γ_r L + θ_r I_N) + (γ_r L + θ_r I_N)^T M_1^T D^2 M_1 ≤ 0    (46)

which is linear in D^2 = diag{α_1^2, α_2^2, ..., α_m^2}.

For Corollary 1, we have the following.
1) Solve for the second largest eigenvalue λ_2 of L.
2) Fix ε, r_1, and r_2; then find proper p_i > 0 (i = 1, 2, ..., n) such that condition (28) holds.
3) Find a proper M such that (29) holds, based on the coupling configuration and the value of Γ.

For Theorem 2, we have the following.
1) According to the coupling configuration and the value of Γ, determine the structure of M such that condition (39) holds.
2) Fix the positive definite diagonal matrix P and ε (e.g., P = I_n and ε = 0.1); then find proper matrices Λ, Θ, and Q using the MATLAB LMI Toolbox so that (37) holds.
3) Based on the obtained Θ, find a proper M such that (38) holds using the MATLAB LMI Toolbox.
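Step 1 of the Corollary 1 procedure can be sketched in a few lines. The example below is our own illustration (not from the paper), using an arbitrary undirected 4-node ring as G; it builds L as defined in (5) and extracts λ_2 as the second largest eigenvalue.

```python
import numpy as np

# Arbitrary symmetric coupling matrix G: an undirected 4-node ring
# with unit weights (illustrative choice only).
G = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# L as in (5): l_ij = g_ij for i != j, l_ii = -sum_k g_ik.
L = G - np.diag(G.sum(axis=1))

# For a connected undirected graph, the eigenvalues of L satisfy
# 0 = lam_1 > lam_2 >= ... >= lam_N; lam_2 is the second largest.
lam = np.sort(np.linalg.eigvalsh(L))[::-1]   # descending order
lam2 = lam[1]                                # here lam2 = -2
```

For this ring, L = G − 2I has eigenvalues 0, −2, −2, −4, so λ_2 = −2, which would then be substituted into condition (28).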


IV. NUMERICAL SIMULATION

In this section, we present three numerical examples to substantiate the theoretical results. Consider the following two-neuron memristor-based cellular neural network with delay:

    dx(t)/dt = −C x(t) + A f(x(t)) + B f(x(t − τ))    (47)

where x(t) = (x_1(t), x_2(t))^T ∈ R^2, C = I_2, τ = 0.85, and f(x) = (f_1(x_1), f_2(x_2))^T with f_1(s) = f_2(s) = (|s + 1| − |s − 1|)/2. Obviously, the activation functions f_i(s) (i = 1, 2) satisfy Assumption 1 with l_1 = l_2 = 1 and K̄_1 = K̄_2 = 1. The resistors in each neuronal self-feedback loop of this neural network are substituted by memristors. As a result, the weight matrices A and B take the forms

    A = [ a_11   20   ]      B = [ b_11   0.1  ]
        [ 0.1    a_22 ]          [ 0.1    b_22 ]

where the values of a_ii and b_ii depend on the variation directions of f_i(x_i(t)) − x_i(t) and f_i(x_i(t − τ)) − x_i(t), i = 1, 2, respectively. Denote f_ii(t) = f_i(x_i(t)) − x_i(t) and f_ii(t − τ) = f_i(x_i(t − τ)) − x_i(t); then a_ii and b_ii are given by

    a_11(t) = { 1.7854,     f_11(s) decreasing on (t − dt, t]
              { 2.0472,     f_11(s) increasing on (t − dt, t]
              { a_11(t⁻),   f_11(s) unchanged on (t − dt, t]

    a_22(t) = { 1.5236,     f_22(s) decreasing on (t − dt, t]
              { 1.6283,     f_22(s) increasing on (t − dt, t]
              { a_22(t⁻),   f_22(s) unchanged on (t − dt, t]

    b_11(t) = { −2.2214,    f_11(s − 0.85) decreasing on (t − dt, t]
              { −1.3329,    f_11(s − 0.85) increasing on (t − dt, t]
              { b_11(t⁻),   f_11(s − 0.85) unchanged on (t − dt, t]

and

    b_22(t) = { −1.9252,    f_22(s − 0.85) decreasing on (t − dt, t]
              { −1.6661,    f_22(s − 0.85) increasing on (t − dt, t]
              { b_22(t⁻),   f_22(s − 0.85) unchanged on (t − dt, t].

It is easy to see that

    Â = [ 2.0472   20     ]    Ǎ = [ 1.7854   20     ]
        [ 0.1      1.6283 ]        [ 0.1      1.5236 ]

    B̂ = [ −1.3329   0.1     ]    B̌ = [ −2.2214   0.1     ]
        [ 0.1       −1.6661 ]        [ 0.1       −1.9252 ].

Hence, we can obtain κ_1 = 2.3006 and κ_2 = 0.7276. This MNN can exhibit chaotic behavior or a limit cycle, as shown in Fig. 3.

In the following Examples 1–3, we verify the theoretical results under different coupling structures. Since Theorem 1 applies to coupled networks with either a directed or an undirected coupling structure, we present both a directed and an undirected structure in Example 1. Since Corollary 1 applies only to undirected coupling structures, we give an undirected coupling structure in Example 2. Because the conditions of Theorem 2 can be

Fig. 3. Phase plot of MNN (47) with various initial conditions. (a) (x1 (t), x2 (t))T = (0.1, 0.1)T for t ∈ [−1, 0]. (b) (x1 (t), x2 (t))T = (2, 5)T for t ∈ [−1, 0]. (c) (x1 (t), x2 (t))T = (sin t, cos t)T for t ∈ [−1, 0]. (d) (x1 (t), x2 (t))T = (0.2t, −0.1 cos(2t))T for t ∈ [−1, 0].

Fig. 4. Ring coupling structure in Example 1. Left: directed. Right: undirected.

easily verified via the MATLAB LMI Toolbox, we give a more complex structure than those in Theorem 1 and Corollary 1 to show the advantage of Theorem 2 in Example 3.
Example 1: Consider five coupled MNNs, defined by (47), in a directed ring structure (see the left subplot in Fig. 4). The corresponding adjacency matrix is

G = \begin{bmatrix} 0 & 0 & 0 & 0 & 0.8 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1.2 & 0 & 0 & 0 \\ 0 & 0 & 1.6 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 \end{bmatrix} \qquad (48)

and \Gamma = diag{18, 18}. First, according to the coupling matrix G and the values of \kappa_r, \gamma_r (r = 1, 2), it is easy to verify that \varsigma_{i,i+1} (i = 1, 2, 3, 4) make (12) hold; hence, we can obtain the locations of the nonzero elements in each row of M:

M = \begin{bmatrix} \alpha_1 & & & \\ & \alpha_2 & & \\ & & \alpha_3 & \\ & & & \alpha_4 \end{bmatrix} \begin{bmatrix} 1 & -1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 & -1 \end{bmatrix}.

Second, choose p_1 = p_2 = 1 and \varepsilon = 0.1; then we obtain a feasible solution of \theta_i (i = 1, 2) according to (45): \theta_1 \ge 12.5801, \theta_2 \ge 12.4944.
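The structured matrix M above, a diagonal gain matrix times a first-difference matrix, can be formed numerically as follows. This is a sketch with helper names of our own choosing; the \alpha values plugged in are the ones the LMI solution of this example produces.

```python
import numpy as np

def difference_matrix(n):
    """First-difference matrix D: row i is e_i - e_{i+1}, so D @ x = 0
    exactly when all components of x are equal (the synchronization
    manifold lies in the kernel)."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = 1.0, -1.0
    return D

def build_M(alpha):
    """M = diag(alpha) @ D, the nonzero-element pattern used in Example 1."""
    return np.diag(alpha) @ difference_matrix(len(alpha) + 1)

# With the alphas corresponding to the M reported for the directed ring:
M = build_M([0.09, 0.13, 0.12, 0.06])
```

By construction every row of M sums to zero, so M annihilates any state in which all five nodes agree — the defining property the synchronization analysis relies on.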


Fig. 5. Transient behaviors of states x_i(t), i = 1, 2, for the directed ring structure in Example 1.
Fig. 6. Transient behaviors of states x_i(t), i = 1, 2, for the undirected ring structure in Example 1.

If we choose \theta_1 = \theta_2 = 12.8, it is easy to verify from (11) that \delta_1 = -0.1563 and \delta_2 = -0.2273, respectively; therefore, (11) is also satisfied. Last, check whether there exists a proper M such that condition (10) holds. Since the values of \theta_i (i = 1, 2) and the structure of M are known, we only need to determine the exact values of the nonzero elements of M. This is equivalent to checking the existence of a solution of the LMI (46) using the MATLAB LMI Toolbox. Here, we obtain

M = \begin{bmatrix} 0.09 & -0.09 & 0 & 0 & 0 \\ 0 & 0.13 & -0.13 & 0 & 0 \\ 0 & 0 & 0.12 & -0.12 & 0 \\ 0 & 0 & 0 & 0.06 & -0.06 \end{bmatrix}.

By computing the eigenvalues of the matrix \{M^T M(\gamma_i L + \theta_i I_N)\}^s, which equal 0, -0.0053, -0.0393, -0.4625, and -1.3379, the third condition in Theorem 1 can be verified. It follows from Theorem 1 that the MNNs are synchronized. The simulation results are shown in Fig. 5, where the initial condition of each MNN is a constant function on [-1, 0] whose value is randomly selected as

x_1 = \begin{bmatrix} -27.0207 \\ 24.1630 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 26.6872 \\ -0.5482 \end{bmatrix}, \quad x_3 = \begin{bmatrix} -0.6448 \\ -9.7368 \end{bmatrix}, \quad x_4 = \begin{bmatrix} 24.0032 \\ -7.8452 \end{bmatrix}, \quad x_5 = \begin{bmatrix} -23.3278 \\ 16.8151 \end{bmatrix}.

Consider the undirected coupling structure (the right subplot in Fig. 4), which can be viewed as a transformation of the directed one obtained by adding, for each directed edge, a reverse edge with the same weight. \Gamma is the same as before. By a similar discussion, it is easy to find that (11) and (12) in Theorem 1 also hold if we choose the same parameters as before. However, since the Laplacian matrix L differs from that of the directed graph, a different matrix M is obtained so that condition (10) holds,
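The third condition involves the symmetric part \{\cdot\}^s of a matrix having only nonpositive eigenvalues. A small helper for this check might look as follows; this is a sketch, and the matrix used in the illustration is ours, not the one from the example.

```python
import numpy as np

def sym_part(A):
    """Symmetric part {A}^s = (A + A^T) / 2."""
    return (A + A.T) / 2.0

def is_negative_semidefinite(A, tol=1e-9):
    """True if every eigenvalue of the symmetric part is <= 0 (up to tol)."""
    eigvals = np.linalg.eigvalsh(sym_part(A))
    return bool(np.all(eigvals <= tol))

# Illustrative check: {A}^s of [[-1, 2], [0, -1]] is [[-1, 1], [1, -1]],
# whose eigenvalues are 0 and -2, hence negative semidefinite.
A = np.array([[-1.0, 2.0], [0.0, -1.0]])
```

In the example above, one would apply such a check to \{M^T M(\gamma_i L + \theta_i I_N)\}^s for i = 1, 2.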

Fig. 7. Undirected line coupling structure in Example 2.

which is given by

M = \begin{bmatrix} 0.1 & -0.1 & 0 & 0 & 0 \\ 0 & 0.08 & -0.08 & 0 & 0 \\ 0 & 0 & 0.07 & -0.07 & 0 \\ 0 & 0 & 0 & 0.05 & -0.05 \end{bmatrix}.

Then, all the conditions in Theorem 1 hold. The coupled system achieves synchronization, and the simulation results are shown in Fig. 6, where the same initial values are selected as before.
Example 2: Consider five MNNs coupled in an undirected line structure (Fig. 7). The corresponding adjacency matrix is

G = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1.6 & 0 & 0 \\ 0 & 1.6 & 0 & 2 & 0 \\ 0 & 0 & 2 & 0 & 1.4 \\ 0 & 0 & 0 & 1.4 & 0 \end{bmatrix} \qquad (49)

and \Gamma = diag{18, 18}. First, the Laplacian matrix of G is

L = \begin{bmatrix} 1 & -1 & 0 & 0 & 0 \\ -1 & 2.6 & -1.6 & 0 & 0 \\ 0 & -1.6 & 3.6 & -2 & 0 \\ 0 & 0 & -2 & 3.4 & -1.4 \\ 0 & 0 & 0 & -1.4 & 1.4 \end{bmatrix}. \qquad (50)

Then, the second largest eigenvalue of L is computed as \lambda_2(L) = -0.5838. Second, choose \varepsilon = 0.1, r_1 = 0, and r_2 = 1; we find that p_1 = p_2 = 1 is a feasible solution of (28) in Corollary 1. This is because p_i \gamma_i \lambda_2^2 + \delta_i \lambda_2 equals 6.2261 and 6.2675 for i = 1 and 2, respectively.
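As a sanity check, the Laplacian in (50) can be recomputed from the adjacency matrix in (49) as L = D - G, with D the diagonal matrix of weighted degrees. This sketch uses the standard positive-semidefinite sign convention (the paper reports \lambda_2 with the opposite sign):

```python
import numpy as np

# Adjacency matrix of the undirected line structure, Eq. (49).
G = np.array([
    [0,   1,   0,   0,   0  ],
    [1,   0,   1.6, 0,   0  ],
    [0,   1.6, 0,   2,   0  ],
    [0,   0,   2,   0,   1.4],
    [0,   0,   0,   1.4, 0  ],
])

# Graph Laplacian L = D - G; this reproduces Eq. (50).
L = np.diag(G.sum(axis=1)) - G

# For a connected undirected graph, L is symmetric positive semidefinite
# with a simple zero eigenvalue whose eigenvector is the all-ones vector.
eigvals = np.linalg.eigvalsh(L)   # returned in ascending order
```

The second entry of `eigvals` is the algebraic connectivity, whose magnitude is the |\lambda_2| used in Corollary 1.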


Fig. 10. Coupling structure in Example 3.

Fig. 8. Transient behaviors of states x_i(t), i = 1, 2, for the undirected line structure in Example 2.

Fig. 11. Transient behaviors of states x_i(t), i = 1, 2, for the coupling structure in Example 3.

Fig. 9. Distance \|My(t)\| to the synchronization manifold for the three coupling structures in Examples 1 and 2.

Finally, calculate M according to L:

M = \begin{bmatrix} 1 & -1 & 0 & 0 & 0 \\ 0 & 1.2649 & -1.2649 & 0 & 0 \\ 0 & 0 & 1.4142 & -1.4142 & 0 \\ 0 & 0 & 0 & 1.1832 & -1.1832 \end{bmatrix}.

Then, we can verify that (29) holds. By Corollary 1, the coupled system reaches synchronization; the simulation results are shown in Fig. 8.
We use \|My(t)\| to measure the distance from the system state x(t) to the synchronization manifold S. Fig. 9 shows the transient behaviors of \|My(t)\| for the above three topologies. We can see that \|My(t)\| approaches 0 in each case. Note that the synchronization rates corresponding to the different coupling configurations differ; here, the undirected ring structure has the highest synchronization rate.
Example 3: Consider seven MNNs (3) coupled in the more general topology shown in Fig. 10, with \Gamma = diag{13, 13}. First, determine the structure of M according to the coupling configuration and the values of \gamma_1, \gamma_2, so that (39) holds.
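The distance measure \|My(t)\| plotted in Fig. 9 can be computed directly: stack the node states and take the norm of the image under M. A minimal sketch, with a helper name of our own, using the M obtained for Example 2:

```python
import numpy as np

# M computed from the Laplacian in Example 2.
M = np.array([
    [1.0, -1.0,     0.0,     0.0,     0.0    ],
    [0.0,  1.2649, -1.2649,  0.0,     0.0    ],
    [0.0,  0.0,     1.4142, -1.4142,  0.0    ],
    [0.0,  0.0,     0.0,     1.1832, -1.1832 ],
])

def sync_distance(M, X):
    """Frobenius norm of M @ X, where row i of X is the state of node i.

    Each row of M is a weighted difference of two consecutive nodes, so
    the result is zero exactly when all node states coincide."""
    return float(np.linalg.norm(M @ X))

# On the synchronization manifold (all five nodes share one state)
# the distance is zero; off the manifold it is positive.
X_sync = np.tile([0.3, -0.7], (5, 1))
```
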

Fig. 12. Distance \|My(t)\| to the synchronization manifold in Example 3.

We have that

M = \Lambda \begin{bmatrix} 1 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & -1 \end{bmatrix}

where \Lambda = diag{\alpha_1, \alpha_2, \ldots, \alpha_6}. Next, choose p_1 = p_2 = 1 and \varepsilon = 0.1; then find the matrices \Theta, \Sigma, and Q using the MATLAB LMI Toolbox so


that (37) holds. We obtain \theta_1 = 12.6408, \theta_2 = 12.6347, \sigma_1 = 4.0746, \sigma_2 = 23.6133, and

Q = \begin{bmatrix} 1.9972 & -0.6293 \\ -0.6293 & 2.2066 \end{bmatrix}.

Last, substituting the obtained values into (38) and using the MATLAB LMI Toolbox, the matrix M is obtained as

M = \begin{bmatrix} 2.9267 & -2.9267 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1.9647 & -1.9647 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1.4122 & -1.4122 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.1026 & -0.1026 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.0795 & -0.0795 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.0785 & -0.0785 \end{bmatrix}.

Then, all the conditions in Theorem 2 are satisfied. As a result, all the MNNs synchronize, as shown in Figs. 11 and 12.

V. CONCLUSION

In this paper, we address the global exponential synchronization of multiple MNNs via a novel nonlinear coupling scheme containing a discontinuous sign term. First, by constructing a proper Lyapunov functional and using inequality techniques, a set of sufficient conditions is derived to ascertain the synchronization of multiple MNNs whose coupling structure is a directed or an undirected graph. In particular, for the case in which the adjacency matrix is symmetric and irreducible, by analyzing the eigenvalues of the corresponding Laplacian matrix, we present a criterion for assuring the synchronization of multiple MNNs with an undirected coupling structure. Next, using the LMI technique, we also derive a set of sufficient conditions for the stability of the synchronization manifold. Finally, three examples are presented to substantiate the results.
Many avenues are open for future research on the synchronization of multiple MNNs. Further investigations may aim at analyzing the synchronization of multiple MNNs under other coupling structures arising in practice, such as event-triggered coupling, pinning coupling, impulsive coupling, and other coupling schemes. Moreover, the methodology used here in discussing the global synchronization of an array of nonlinearly coupled memristive systems can be extended to more general delayed switched systems.

REFERENCES
[1] L. O. Chua, "Memristor—The missing circuit element," IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507–519, Sep. 1971.
[2] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, "The missing memristor found," Nature, vol. 453, no. 7191, pp. 80–83, 2008.
[3] A. Buscarino, L. Fortuna, M. Frasca, and L. V. Gambuzza, "A chaotic circuit based on Hewlett–Packard memristor," Chaos, Interdiscipl. J. Nonlinear Sci., vol. 22, no. 2, p. 023136, 2012.
[4] Y. V. Pershin and M. Di Ventra, "Experimental demonstration of associative memory with memristive neural networks," Neural Netw., vol. 23, no. 7, pp. 881–886, Sep. 2010.
[5] J. Hu and J. Wang, "Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2010, pp. 1–8.
[6] G. Zhang, Y. Shen, and J. Sun, "Global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays," Neurocomputing, vol. 97, pp. 149–154, Nov. 2012.
[7] A. Wu and Z. Zeng, "Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays," Neural Netw., vol. 36, pp. 1–10, Dec. 2012.
[8] G. Zhang, Y. Shen, Q. Yin, and J. Sun, "Global exponential periodicity and stability of a class of memristor-based recurrent neural networks with multiple delays," Inf. Sci., vol. 232, no. 20, pp. 386–396, 2013.
[9] Z. Guo, J. Wang, and Z. Yan, "Attractivity analysis of memristor-based cellular neural networks with time-varying delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 4, pp. 704–717, Apr. 2014.
[10] S. Wen, Z. Zeng, and T. Huang, "Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays," Neurocomputing, vol. 97, pp. 233–240, Nov. 2012.
[11] Z. Guo, J. Wang, and Z. Yan, "Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays," Neural Netw., vol. 48, pp. 158–172, Dec. 2013.
[12] Z. Guo, J. Wang, and Z. Yan, "Passivity and passification of memristor-based recurrent neural networks with time-varying delays," IEEE Trans. Neural Netw. Learn. Syst., doi: 10.1109/TNNLS.2014.2305440.
[13] L. M. Pecora and T. L. Carroll, "Synchronization in chaotic systems," Phys. Rev. Lett., vol. 64, no. 8, pp. 821–824, 1990.
[14] K. M. Cuomo, A. V. Oppenheim, and S. H. Strogatz, "Synchronization of Lorenz-based chaotic circuits with applications to communications," IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol. 40, no. 10, pp. 626–633, Oct. 1993.
[15] T. Yang and L. O. Chua, "Impulsive stabilization for control and synchronization of chaotic systems: Theory and application to secure communication," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 44, no. 10, pp. 976–988, Oct. 1997.
[16] C. M. Gray, "Synchronous oscillations in neuronal systems: Mechanisms and functions," J. Comput. Neurosci., vol. 1, nos. 1–2, pp. 11–38, 1994.
[17] J. Cao and J. Lu, "Adaptive synchronization of neural networks with or without time-varying delay," Chaos, Interdiscipl. J. Nonlinear Sci., vol. 16, no. 1, p. 013133, 2006.
[18] H. Zhang, T. Ma, G.-B. Huang, and C. Wang, "Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 3, pp. 831–844, Jun. 2010.
[19] C.-J. Cheng, T.-L. Liao, J.-J. Yan, and C.-C. Hwang, "Exponential synchronization of a class of neural networks with time-varying delays," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 36, no. 1, pp. 209–215, Feb. 2006.
[20] Y. Zhang and Q.-L. Han, "Network-based synchronization of delayed neural networks," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 60, no. 3, pp. 676–689, Mar. 2013.
[21] A. Wu, S. Wen, and Z. Zeng, "Synchronization control of a class of memristor-based recurrent neural networks," Inf. Sci., vol. 183, no. 1, pp. 106–116, 2012.
[22] G. Zhang, Y. Shen, and L. Wang, "Global anti-synchronization of a class of chaotic memristive neural networks with time-varying delays," Neural Netw., vol. 46, pp. 1–8, Oct. 2013.
[23] G. Zhang and Y. Shen, "New algebraic criteria for synchronization stability of chaotic memristive neural networks with time-varying delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 10, pp. 1701–1707, Oct. 2013.
[24] S. Wen, G. Bao, Z. Zeng, Y. Chen, and T. Huang, "Global exponential synchronization of memristor-based recurrent neural networks with time-varying delays," Neural Netw., vol. 48, pp. 195–203, Dec. 2013.
[25] G. Wang, S. Yi, and Y. Quan, "Exponential synchronization of coupled memristive neural networks via pinning control," Chin. Phys. B, vol. 22, no. 5, p. 050504, 2013.
[26] G. Chen, J. Zhou, and Z. Liu, "Global synchronization of coupled delayed neural networks and applications to chaotic CNN models," Int. J. Bifurcation Chaos, vol. 14, no. 7, pp. 2229–2240, 2004.
[27] W. Lu and T. Chen, "Synchronization of coupled connected neural networks with delays," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 51, no. 12, pp. 2491–2503, Dec. 2004.
[28] J. Cao, G. Chen, and P. Li, "Global synchronization in an array of delayed neural networks with hybrid coupling," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 2, pp. 488–498, Apr. 2008.

[29] W. Wu and T. Chen, "Global synchronization criteria of linearly coupled neural network systems with time-varying coupling," IEEE Trans. Neural Netw., vol. 19, no. 2, pp. 319–332, Feb. 2008.
[30] J. Liang, Z. Wang, Y. Liu, and X. Liu, "Robust synchronization of an array of coupled stochastic discrete-time delayed neural networks," IEEE Trans. Neural Netw., vol. 19, no. 11, pp. 1910–1921, Nov. 2008.
[31] X. Yang, J. Cao, and Z. Yang, "Synchronization of coupled reaction-diffusion neural networks with time-varying delays via pinning-impulsive controller," SIAM J. Control Optim., vol. 51, no. 5, pp. 3486–3510, 2013.
[32] C. W. Wu and L. O. Chua, "Synchronization in an array of linearly coupled dynamical systems," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 42, no. 8, pp. 430–447, Aug. 1995.
[33] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Nat. Acad. Sci. USA, vol. 81, no. 10, pp. 3088–3092, 1984.
[34] L. O. Chua and L. Yang, "Cellular neural networks: Theory," IEEE Trans. Circuits Syst., vol. 35, no. 10, pp. 1257–1272, Oct. 1988.
[35] Q. Liu and J. Wang, "A one-layer recurrent neural network with a discontinuous hard-limiting activation function for quadratic programming," IEEE Trans. Neural Netw., vol. 19, no. 4, pp. 558–570, Apr. 2008.
[36] Q. Liu and J. Wang, "A one-layer recurrent neural network with a discontinuous activation function for linear programming," Neural Comput., vol. 20, no. 5, pp. 1366–1383, May 2008.

Zhenyuan Guo received the B.S. degree in mathematics and applied mathematics and the Ph.D. degree in applied mathematics from the College of Mathematics and Econometrics, Hunan University, Changsha, China, in 2004 and 2009, respectively. He was a visiting joint Ph.D. student with the Department of Applied Mathematics, University of Western Ontario, London, ON, Canada, from 2008 to 2009. He is currently a Post-Doctoral Research Fellow with the Department of Mechanical and Automation Engineering, Chinese University of Hong Kong, Hong Kong. He is also an Associate Professor with the College of Mathematics and Econometrics, Hunan University. His current research interests include the theory of functional differential equations and differential equations with discontinuous right-hand sides, and their applications to the dynamics of neural networks, memristive systems, and control systems.


Shaofu Yang received the B.S. and M.S. degrees in applied mathematics from Southeast University, Nanjing, China, in 2010 and 2013, respectively. He is currently pursuing the Ph.D. degree with the Department of Mechanical and Automation Engineering, Chinese University of Hong Kong, Hong Kong. His current research interests include the dynamics of neural networks and their application in optimization.

Jun Wang (S'89–M'90–SM'93–F'07) received the B.S. degree in electrical engineering and the M.S. degree in systems engineering from the Dalian University of Technology, Dalian, China, in 1982 and 1985, respectively, and the Ph.D. degree in systems engineering from Case Western Reserve University, Cleveland, OH, USA, in 1991. He held various academic positions with the Dalian University of Technology, Case Western Reserve University, and the University of North Dakota, Grand Forks, ND, USA. He also held various short-term or part-time visiting positions with the U.S. Air Force Armstrong Laboratory in 1995, the RIKEN Brain Science Institute, Wako, Japan, in 2001, the Université catholique de Louvain, Louvain-la-Neuve, Belgium, in 2001, the Chinese Academy of Sciences, Beijing, China, in 2002, the Huazhong University of Science and Technology, Wuhan, China, from 2006 to 2007, Shanghai Jiao Tong University, Shanghai, China, from 2008 to 2011, as a Cheung Kong Chair Professor, and the Dalian University of Technology, since 2011, as a National Thousand-Talent Chair Professor. He is currently a Professor with the Department of Mechanical and Automation Engineering, Chinese University of Hong Kong, Hong Kong. His current research interests include neural networks and their applications. Prof. Wang has been the Editor-in-Chief of the IEEE TRANSACTIONS ON CYBERNETICS since 2014, and served as an Associate Editor of the journal and its predecessor from 2003 to 2013. He was an Editorial Board Member of Neural Networks from 2012 to 2014. He also served as an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS from 1999 to 2009 and the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C from 2002 to 2005, and an Editorial Advisory Board Member of the International Journal of Neural Systems from 2006 to 2012.
He was a Guest Editor of special issues of the European Journal of Operational Research in 1996, the International Journal of Neural Systems in 2007, and Neurocomputing in 2008. He served as the President of the Asia Pacific Neural Network Assembly (APNNA) in 2006, the General Chair of the 13th International Conference on Neural Information Processing in 2006 and the IEEE World Congress on Computational Intelligence in 2008, and the Program Chair of the IEEE International Conference on Systems, Man, and Cybernetics in 2012. In addition, he has served on many committees, such as the IEEE Fellow Committee. He was/is an IEEE Computational Intelligence Society Distinguished Lecturer from 2010 to 2012 and 2014 to 2016. He was a recipient of the Research Excellence Award from the Chinese University of Hong Kong from 2008 to 2009, two Natural Science Awards (first class), respectively, from the Shanghai Municipal Government in 2009 and the Ministry of Education of China in 2011, the Outstanding Achievement Award from APNNA, the IEEE TRANSACTIONS ON NEURAL NETWORKS Outstanding Paper Award (with Q. Liu) in 2011, and the Neural Networks Pioneer Award from the IEEE Computational Intelligence Society in 2014.
