Cogn Neurodyn DOI 10.1007/s11571-013-9277-6

RESEARCH ARTICLE

Exponential synchronization of memristive Cohen–Grossberg neural networks with mixed delays

Xinsong Yang · Jinde Cao · Wenwu Yu

Received: 9 September 2013 / Revised: 19 November 2013 / Accepted: 24 December 2013
© Springer Science+Business Media Dordrecht 2014

Abstract This paper concerns the problem of global exponential synchronization for a class of memristor-based Cohen–Grossberg neural networks with time-varying discrete delays and unbounded distributed delays. The drive-response (master-slave) synchronization scheme is discussed. A novel controller is designed such that the response (slave) system can be controlled to synchronize with the drive (master) system. Through a nonlinear transformation, we obtain an alternative system from the considered memristor-based Cohen–Grossberg neural networks. By investigating the global exponential synchronization of the alternative system, we obtain the corresponding synchronization criteria for the considered memristor-based Cohen–Grossberg neural networks. Moreover, the conditions established in this paper are easy to verify and improve the conditions derived in most existing papers concerning stability and synchronization of memristor-based neural networks. Numerical simulations are given to show the effectiveness of the theoretical results.

X. Yang (corresponding author)
Department of Mathematics, Chongqing Normal University, Chongqing 401331, China
e-mail: [email protected]

J. Cao · W. Yu
Department of Mathematics and Research Center for Complex Systems and Network Sciences, Southeast University, Nanjing 210096, China
e-mail: [email protected] (J. Cao); [email protected] (W. Yu)

J. Cao
Department of Mathematics, Faculty of Science, King Abdulaziz University, Jidda 21589, Saudi Arabia

W. Yu
School of Electrical and Computer Engineering, RMIT University, Melbourne, VIC 3001, Australia

Keywords Exponential synchronization · Memristor · Cohen–Grossberg neural networks · Unbounded distributed delay · Control

Introduction

Memristor is a contraction of "memory resistor"; the device was first postulated by Chua (1971) and Chua and Kang (1976). However, the memristor did not attract much attention from researchers until Strukov et al. (2008) and Tour and He (2008) announced that a nanometer-scale, solid-state, two-terminal memristor had been fabricated by a team at the Hewlett-Packard Company. In this device, the value (memristance) depends on the magnitude and polarity of the voltage applied to it and on the length of time that the voltage has been applied. When the voltage is turned off, the memristor remembers its most recent value until it is turned on next time. Because of this feature, this passive electronic device has generated unprecedented worldwide interest owing to its potential applications (Itoh and Chua 2008; Wang et al. 2010; Kvatinsky et al. 2013). For instance, based on the memristor technique, the next generation of computers may be powerful brain-like "neural" computers that turn on instantly, without the usual "booting time" currently required by personal computers (Itoh and Chua 2008).

Neural networks can be constructed from nonlinear circuits and have been extensively studied because of their immense potential for applications in different areas such as pattern recognition, parallel computing, signal and image processing, and associative memory (Balasubramaniam et al. 2011; Yang et al. 2010; Zhu and Cao 2010; Chen and Song 2010;



Tsukada et al. 2013). Recently, the dynamical behaviors of memristor-based neural networks have attracted increasing attention because this class of neural networks is a new model for emulating the human brain (Itoh and Chua 2009; Thomas 2013). It is known that memristor-based neural networks are a special kind of differential equation with a discontinuous right-hand side (Liu and Cao 2011), which indicates that this class of systems may not have any solution in the classical sense. Filippov proposed a method, namely Filippov regularization (Filippov 1988), that transforms a differential equation with a discontinuous right-hand side into a differential inclusion (Aubin and Cellina 1984). By utilizing the theory of differential inclusions, the dynamical behaviors of differential equations with discontinuous right-hand sides can be investigated within the framework of Filippov solutions (Shen and Cao 2011). Recently, the authors of Wu and Zeng (2012, 2013), Zhang et al. (2012, 2013a, b), and Wu et al. (2011, 2012) studied the stability and synchronization of a class of memristor-based neural networks with discrete time delays by using the differential inclusion method. However, the main conditions in Wu and Zeng (2012, 2013), Wu et al. (2011, 2012), and Zhang et al. (2012, 2013a, b) were not correct.

The study of synchronization for neural networks with discontinuous right-hand sides is not easy, since the conditions for the stability of such networks cannot simply be reused to study synchronization. Liu and Cao (2011) attempted to investigate the synchronization of neural networks with discontinuous activation functions, but the obtained synchronization criterion is local. Moreover, when the usual control techniques are considered, only quasi-synchronization results can be derived (Liu and Yu 2012; Liu et al. 2012). Recently, Yang and Cao (2013) investigated exponential synchronization of delayed neural networks with discontinuous activations by designing a discontinuous state feedback controller and an adaptive controller, and Yang et al. (2013) studied finite-time synchronization of complex networks with nonidentical discontinuous nodes by designing special discontinuous state feedback controllers. Although memristor-based neural networks also belong to the class of nonlinear systems with discontinuous right-hand sides, their discontinuities are different from those of neural networks with discontinuous activations, and hence the analytical techniques used to study the synchronization of neural networks with discontinuous activations may not be applicable to memristor-based neural networks. Therefore, in this paper we study the drive-response synchronization of memristor-based neural networks and improve the results in Wu and Zeng (2012, 2013), Wu et al. (2011, 2012), and Zhang et al. (2012, 2013a, b).

As an important neural network model, Cohen–Grossberg neural networks were first introduced by Cohen and


Grossberg (1983). The Cohen–Grossberg neural network is one of the most popular and typical neural network models. Some other models, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory neural networks, are special cases of this model (Kamel and Xia 2009; Mahdavi and Kurths 2013; Yang et al. 2008, 2011). Stability and synchronization of continuous Cohen–Grossberg neural networks with or without discrete and distributed delays have been studied in the literature (Zhu and Cao 2010; He and Cao 2008; Song and Wang 2008). Recently, the stability of Cohen–Grossberg neural networks with discontinuous activations was considered in Chen and Song (2010) and Lu and Chen (2008). However, to the best of our knowledge, few published papers have considered synchronization control of memristor-based Cohen–Grossberg neural networks. Moreover, according to our study, results on the stability of neural networks with discontinuous activations and memristors cannot be extended to the synchronization of discontinuous chaotic systems, because of the special requirements that stability places on the connection weight matrices of the neurons. For example, the connection matrices of networks with discontinuous activations must satisfy the Lyapunov diagonally stable (LDS) condition (Chen and Song 2010; Cheng et al. 2007; Di Marco et al. 2012; Forti et al. 2006; Lu and Chen 2008; Wu et al. 2010). However, when these special and strict conditions are satisfied, neural networks usually do not exhibit chaotic behaviors. Therefore, the methods applicable to the stability of neural networks with discontinuous activations and memristors cannot be directly employed to study the synchronization of memristor-based Cohen–Grossberg neural networks.

Motivated by the above analysis, this paper proposes a memristor-based Cohen–Grossberg neural network model with time-varying discrete delays and unbounded distributed delays, and then investigates the global exponential synchronization of the model. Using the sign function, we design a novel state feedback controller, which is added to the slave (response) system so that its states globally exponentially synchronize with those of the master (drive) system. Based on the properties of the amplification function and the behaved function and on the derivative theorem for inverse functions, we first obtain an alternative system from the considered memristor-based Cohen–Grossberg neural networks. By investigating the global exponential synchronization of the alternative system, we obtain the corresponding synchronization criteria for the considered model. The convergence rate is explicitly estimated. Moreover, the conditions used in this paper are easy to verify and improve the conditions derived in Wu and Zeng (2012, 2013), Wu et al. (2011, 2012), and Zhang et al. (2012, 2013a, b). Numerical simulations are given to show the effectiveness of the theoretical results.

The rest of this paper is organized as follows. In Sect. 2, the model of memristor-based Cohen–Grossberg neural


networks with mixed delays is described. Some necessary assumptions, definitions, and lemmas are also given in this section. Exponential synchronization of the considered model under state feedback control is studied in Sect. 3. In Sect. 4, a numerical example is given to show the effectiveness of our results. Conclusions are drawn in Sect. 5.

Model description and some preliminaries

Referring to the relevant models in Wu and Zeng (2012, 2013), Wu et al. (2011, 2012), and Zhang et al. (2012, 2013a, b) for memristor-based recurrent neural networks, in this paper we consider a memristor-based Cohen–Grossberg neural network model with mixed delays described as follows:

\dot{u}_i(t) = -a_i(u_i(t)) \Big[ b_i(u_i(t)) - \sum_{j=1}^{n} \big( w_{ij}(u_i(t)) f_j(u_j(t)) + c_{ij}(u_i(t)) f_j(u_j(t-\tau_{ij}(t))) + d_{ij}(u_i(t)) \int_{-\infty}^{t} K_{ij}(t-s) f_j(u_j(s)) \, ds \big) - I_i \Big], \quad i = 1, 2, \ldots, n,   (1)

where u_i(t) denotes the state variable of the ith neuron at time t; I_i is the external input to the ith neuron; a_i(u_i(t)) and b_i(u_i(t)) represent the amplification function and the appropriately behaved function, respectively; the time-varying delay \tau_{ij}(t) corresponds to the finite speed of axonal signal transmission; K_{ij}(t) is a non-negative bounded scalar function defined on [0, +\infty) describing the delay kernel of the unbounded distributed delay; and w_{ij}(u_i(t)), c_{ij}(u_i(t)), d_{ij}(u_i(t)) are connection weights of the neural network satisfying the following conditions:

w_{ij}(u_i(t)) = \hat{w}_{ij} if |u_i(t)| < T_i, and w_{ij}(u_i(t)) = \check{w}_{ij} if |u_i(t)| > T_i,   (2)
c_{ij}(u_i(t)) = \hat{c}_{ij} if |u_i(t)| < T_i, and c_{ij}(u_i(t)) = \check{c}_{ij} if |u_i(t)| > T_i,   (3)
d_{ij}(u_i(t)) = \hat{d}_{ij} if |u_i(t)| < T_i, and d_{ij}(u_i(t)) = \check{d}_{ij} if |u_i(t)| > T_i,   (4)

where the switching jumps T_i > 0 and \hat{w}_{ij}, \check{w}_{ij}, \hat{c}_{ij}, \check{c}_{ij}, \hat{d}_{ij}, \check{d}_{ij}, i, j = 1, 2, \ldots, n, are constants.
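For readers who wish to experiment with the switching rule (2)-(4), the short Python sketch below evaluates one state-dependent weight. It is only an illustration of the definition; the numerical values are borrowed from the weight w_{11}(u_1) of the example in the last section.

```python
import numpy as np

def switching_weight(u_i, w_hat, w_check, T_i):
    """Evaluate a memristive connection weight of the form (2)-(4):
    w_hat when |u_i| < T_i and w_check when |u_i| > T_i.
    On the switching surface |u_i| = T_i the weight is set-valued in the
    Filippov sense, so the whole interval co[min, max] is returned there."""
    if abs(u_i) < T_i:
        return w_hat
    if abs(u_i) > T_i:
        return w_check
    return (min(w_hat, w_check), max(w_hat, w_check))

# Values of w_11(u_1) from the numerical example (threshold T_1 = 0.3)
print(switching_weight(0.1, 1.81, 2.2, 0.3))   # 1.81, since |u_1| < 0.3
print(switching_weight(0.5, 1.81, 2.2, 0.3))   # 2.2,  since |u_1| > 0.3
```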

Remark 1 Model (1) includes the memristor-based recurrent neural networks studied in Wu and Zeng (2012, 2013), Wu et al. (2011, 2012), and Zhang et al. (2012, 2013a, b) as special cases, since unbounded distributed delays are also considered in model (1). When w_{ij}(u_i(t)), c_{ij}(u_i(t)), and d_{ij}(u_i(t)) are deterministic constants and K_{ij}(t) = 1 for t \in [0, h] (h a positive constant) with K_{ij}(t) = 0 for t > h, model (1) reduces to the systems studied in Gan (2012), Zhu and Cao (2010), and Song and Wang (2008). Therefore, the models in this paper are general.

In order to obtain our main results, the following assumptions are needed:

(H1) The parameters w_{ij}(u_i(t)), c_{ij}(u_i(t)), and d_{ij}(u_i(t)) satisfy conditions (2)-(4), and there exist constants \tau_{ij} > 0 such that 0 \le \tau_{ij}(t) \le \tau_{ij}, i, j = 1, 2, \ldots, n, t \in R.

(H2) a_i(u) is continuous and there exist positive constants \underline{a}_i and \bar{a}_i such that 0 < \underline{a}_i \le a_i(u) \le \bar{a}_i for all u \in R, i = 1, 2, \ldots, n.

(H3) There exist positive constants \mu_i such that b_i'(u) \ge \mu_i, where b_i'(u) denotes the derivative of b_i(u), u \in R, and b_i(0) = 0, i = 1, 2, \ldots, n.

(H4) There exist constants L_i such that |f_i(x) - f_i(y)| \le L_i |x - y| for all x, y \in R, x \ne y, i = 1, 2, \ldots, n.

(H5) There exist constants M_i such that |f_i(x)| \le M_i for bounded x \in R, i = 1, 2, \ldots, n.

(H6) The delay kernels K_{ij}: [0, +\infty) \to [0, +\infty) are real-valued, non-negative, continuous functions, and there exist positive numbers \beta_{ij} such that \int_0^{+\infty} K_{ij}(s) \, ds \le \beta_{ij}, i, j = 1, 2, \ldots, n.

From (H2), 1/a_i(u_i) has an antiderivative. We choose an antiderivative h_i(u_i) of 1/a_i(u_i) that satisfies h_i(0) = 0. Obviously, \frac{d}{du_i} h_i(u_i) = \frac{1}{a_i(u_i)}. Since a_i(u_i) > 0, h_i(u_i) is strictly monotonically increasing in u_i. In view of the derivative theorem for inverse functions, the inverse function h_i^{-1}(\cdot) of h_i(\cdot) is differentiable with \frac{d}{dx} h_i^{-1}(x) = a_i(h_i^{-1}(x)). By (H3), the composite function b_i(h_i^{-1}(z)) is differentiable. Denote x_i(t) = h_i(u_i(t)). It is easy to see that

\dot{x}_i(t) = \frac{\dot{u}_i(t)}{a_i(u_i(t))} and u_i(t) = h_i^{-1}(x_i(t)). Substituting these equalities into system (1), we get

\dot{x}_i(t) = -b_i(h_i^{-1}(x_i(t))) + \sum_{j=1}^{n} \Big[ w_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t))) + c_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t-\tau_{ij}(t)))) + d_{ij}(h_i^{-1}(x_i(t))) \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(x_j(s))) \, ds \Big] + I_i.   (5)
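The transformation x_i = h_i(u_i) can also be carried out numerically. The sketch below is a minimal illustration, not part of the paper's analysis: it builds h_1 by numerically integrating 1/a_1 on a grid and inverts it by interpolation, using the amplification function a_1(u) = 6 + 1/(1+u^2) from the example in Sect. 4; the grid range and resolution are assumptions of this sketch.

```python
import numpy as np

def a1(u):
    """Amplification function a_1(u) = 6 + 1/(1 + u^2) from the example section."""
    return 6.0 + 1.0 / (1.0 + u**2)

# Grid on which h_1 is tabulated
u_grid = np.linspace(-10.0, 10.0, 20001)
integrand = 1.0 / a1(u_grid)

# Cumulative trapezoidal integral of 1/a_1, then shifted so that h_1(0) = 0
h_vals = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0
                                           * np.diff(u_grid))))
h_vals -= np.interp(0.0, u_grid, h_vals)

def h1(u):
    """h_1(u) = integral from 0 to u of du'/a_1(u')."""
    return np.interp(u, u_grid, h_vals)

def h1_inv(x):
    """Inverse of h_1; well defined because h_1 is strictly increasing."""
    return np.interp(x, h_vals, u_grid)

# Sanity check: h_1^{-1}(h_1(u)) should recover u
u_test = 1.234
print(h1_inv(h1(u_test)))   # approximately 1.234
```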

From conditions (2), (3), and (4), one can see that system (1) is a differential equation with discontinuous right-hand side. In this case, the solution of (1) in the conventional



sense does not exist. However, we can discuss the dynamical behaviors of system (1) by means of Filippov solutions. In the following, we first recall the notion of a set-valued map, which is needed to define the Filippov solution.

Definition 1 (Filippov 1960) The Filippov set-valued map of f(x) at x \in R^n is defined as follows:

F(x) = \bigcap_{\delta > 0} \; \bigcap_{\mu(\Omega) = 0} co[f(B(x, \delta) \setminus \Omega)],

where co[E] is the closure of the convex hull of the set E, B(x, \delta) = \{ y : \|y - x\| \le \delta \}, and \mu(\Omega) is the Lebesgue measure of the set \Omega.

For the convenience of later study, we introduce the following notations: \bar{w}_{ij} = \max\{\hat{w}_{ij}, \check{w}_{ij}\}, \underline{w}_{ij} = \min\{\hat{w}_{ij}, \check{w}_{ij}\}, \bar{c}_{ij} = \max\{\hat{c}_{ij}, \check{c}_{ij}\}, \underline{c}_{ij} = \min\{\hat{c}_{ij}, \check{c}_{ij}\}, \bar{d}_{ij} = \max\{\hat{d}_{ij}, \check{d}_{ij}\}, \underline{d}_{ij} = \min\{\hat{d}_{ij}, \check{d}_{ij}\}.

Based on Definition 1 and the theory of differential inclusions, it can be obtained from (5) that

\dot{x}_i(t) \in -b_i(h_i^{-1}(x_i(t))) + \sum_{j=1}^{n} \Big[ co[\underline{w}_{ij}, \bar{w}_{ij}] f_j(h_j^{-1}(x_j(t))) + co[\underline{c}_{ij}, \bar{c}_{ij}] f_j(h_j^{-1}(x_j(t-\tau_{ij}(t)))) + co[\underline{d}_{ij}, \bar{d}_{ij}] \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(x_j(s))) \, ds \Big] + I_i,   (6)

or, equivalently, there exist \gamma_{ij}(t) \in co[\underline{w}_{ij}, \bar{w}_{ij}], \delta_{ij}(t) \in co[\underline{c}_{ij}, \bar{c}_{ij}], \zeta_{ij}(t) \in co[\underline{d}_{ij}, \bar{d}_{ij}] such that

\dot{x}_i(t) = -b_i(h_i^{-1}(x_i(t))) + \sum_{j=1}^{n} \Big[ \gamma_{ij}(t) f_j(h_j^{-1}(x_j(t))) + \delta_{ij}(t) f_j(h_j^{-1}(x_j(t-\tau_{ij}(t)))) + \zeta_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(x_j(s))) \, ds \Big] + I_i.   (7)

Let system (1) be the drive system. We construct a controlled response system described by

\dot{v}_i(t) = -a_i(v_i(t)) \Big[ b_i(v_i(t)) - \sum_{j=1}^{n} \big( w_{ij}(v_i(t)) f_j(v_j(t)) + c_{ij}(v_i(t)) f_j(v_j(t-\tau_{ij}(t))) + d_{ij}(v_i(t)) \int_{-\infty}^{t} K_{ij}(t-s) f_j(v_j(s)) \, ds \big) - I_i \Big] + R_i(t),   (8)

where the feedback control term R_i(t) is

R_i(t) = -p_i (v_i(t) - u_i(t)) - \eta_i \, \mathrm{sign}(v_i(t) - u_i(t)), \quad t \ge 0,   (9)

and p_i, \eta_i are the control gains to be determined. Similar to the analysis of (5), (6), and (7), we have from (8) and (9) that

\dot{y}_i(t) = -b_i(h_i^{-1}(y_i(t))) + \sum_{j=1}^{n} \Big[ \tilde{\gamma}_{ij}(t) f_j(h_j^{-1}(y_j(t))) + \tilde{\delta}_{ij}(t) f_j(h_j^{-1}(y_j(t-\tau_{ij}(t)))) + \tilde{\zeta}_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(y_j(s))) \, ds \Big] + I_i - \frac{p_i}{a_i(h_i^{-1}(y_i(t)))} \big( h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t)) \big) - \frac{\eta_i}{a_i(h_i^{-1}(y_i(t)))} \mathrm{sign}\big( h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t)) \big),   (10)

where y_i(t) = h_i(v_i(t)), \tilde{\gamma}_{ij}(t) \in co[\underline{w}_{ij}, \bar{w}_{ij}], \tilde{\delta}_{ij}(t) \in co[\underline{c}_{ij}, \bar{c}_{ij}], \tilde{\zeta}_{ij}(t) \in co[\underline{d}_{ij}, \bar{d}_{ij}].

The initial values \varphi(s) = (\varphi_1(s), \varphi_2(s), \ldots, \varphi_n(s))^T of (1) and \psi(s) = (\psi_1(s), \psi_2(s), \ldots, \psi_n(s))^T of (8) are of the following form:

u_i(s) = \varphi_i(s), \quad v_i(s) = \psi_i(s), \quad s \le 0,   (11)

where \varphi_i(s) and \psi_i(s) are continuous functions, i = 1, 2, \ldots, n. Then the initial values h(\varphi(s)) = (h_1(\varphi_1(s)), \ldots, h_n(\varphi_n(s)))^T of (7) and h(\psi(s)) = (h_1(\psi_1(s)), \ldots, h_n(\psi_n(s)))^T of (10) are of the following form:

x_i(s) = h_i(\varphi_i(s)), \quad y_i(s) = h_i(\psi_i(s)), \quad s \le 0, \quad i = 1, 2, \ldots, n.   (12)
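The controller (9) is simple enough to state directly in code. The following Python sketch is an illustrative implementation of R(t) for a whole state vector; the gain values in the example call are those used in the numerical example of Sect. 4 (Theorem 1 below gives the lower bounds they must satisfy).

```python
import numpy as np

def feedback_control(v, u, p, eta):
    """Discontinuous state feedback controller (9):
    R_i(t) = -p_i (v_i - u_i) - eta_i * sign(v_i - u_i).

    v, u  : response and drive state vectors at time t
    p, eta: vectors of control gains (p_i, eta_i)
    """
    e = v - u                       # synchronization error in the original coordinates
    return -p * e - eta * np.sign(e)

# Example call for a 2-neuron network, with the gains of Sect. 4
u = np.array([0.3, -1.1])
v = np.array([0.5, -0.9])
print(feedback_control(v, u, p=np.array([67.0, 64.0]), eta=np.array([9.0763, 1.9456])))
```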

Remark 2 The assumptions in this paper are very mild and can be verified easily. Recently, the authors of Wu and Zeng (2012, 2013), Wu et al. (2011, 2012), and Zhang et al. (2012, 2013a, b) investigated the stability and synchronization of neural networks with memristors under the following condition (H*):

co[\underline{w}_{ij}, \bar{w}_{ij}] f_j(x_j) - co[\underline{w}_{ij}, \bar{w}_{ij}] f_j(y_j) \subseteq co[\underline{w}_{ij}, \bar{w}_{ij}] (f_j(x_j) - f_j(y_j)),
co[\underline{c}_{ij}, \bar{c}_{ij}] f_j(x_j) - co[\underline{c}_{ij}, \bar{c}_{ij}] f_j(y_j) \subseteq co[\underline{c}_{ij}, \bar{c}_{ij}] (f_j(x_j) - f_j(y_j)),
co[\underline{d}_{ij}, \bar{d}_{ij}] f_j(x_j) - co[\underline{d}_{ij}, \bar{d}_{ij}] f_j(y_j) \subseteq co[\underline{d}_{ij}, \bar{d}_{ij}] (f_j(x_j) - f_j(y_j)).


However, the condition (H*) is not correct. We take the first inclusion relation as an example. Taking \xi_1 = \underline{w}_{ij} and \xi_2 = \bar{w}_{ij}, the relation asserts that, for any x_j, y_j \in R, there exists \xi \in co[\underline{w}_{ij}, \bar{w}_{ij}] such that

\xi_1 f_j(x_j) - \xi_2 f_j(y_j) = \underline{w}_{ij} f_j(x_j) - \bar{w}_{ij} f_j(y_j) = \xi \, (f_j(x_j) - f_j(y_j)).   (13)

Obviously, \xi \ne \underline{w}_{ij} and \xi \ne \bar{w}_{ij}; otherwise \xi = \underline{w}_{ij} = \bar{w}_{ij}, which contradicts the condition \underline{w}_{ij} < \bar{w}_{ij}. Hence,

\underline{w}_{ij} < \xi < \bar{w}_{ij}.   (14)

It follows from (13) and (14) that

f_j(x_j) = \frac{\bar{w}_{ij} - \xi}{\underline{w}_{ij} - \xi} f_j(y_j),   (15)

which means that

f_j(x_j) - f_j(y_j) = \Big[ \frac{\bar{w}_{ij} - \xi}{\underline{w}_{ij} - \xi} - 1 \Big] f_j(y_j).   (16)

On the other hand, one has from (14) that (\bar{w}_{ij} - \xi)/(\underline{w}_{ij} - \xi) < 0. So, equality (16) implies that f_j(x_j) < 0. From the arbitrariness of x_j \in R we get that f_j(u) is a negative-valued function on R. However, considering f_j(x_j) < 0 and (\bar{w}_{ij} - \xi)/(\underline{w}_{ij} - \xi) < 0, we obtain from (15) that f_j(y_j) > 0, which implies that f_j(u) is a positive-valued function on R; this is a contradiction. Hence, there is no \xi \in co[\underline{w}_{ij}, \bar{w}_{ij}] such that equality (13) holds, and so the first inclusion relation of (H*) is not correct.

If f_j(u), j = 1, 2, \ldots, n, u \in R, satisfy the Lipschitz condition, i.e., there exist positive constants L_j such that |f_j(x_j) - f_j(y_j)| \le L_j |x_j - y_j| for all x_j, y_j \in R, then another contradiction can also be derived. It is obtained from (16) that

|f_j(x_j) - f_j(y_j)| = \Big| \Big( \frac{\bar{w}_{ij} - \xi}{\underline{w}_{ij} - \xi} - 1 \Big) f_j(y_j) \Big| \le L_j |x_j - y_j|.   (17)

Letting x_j \to y_j, the above inequality implies that f_j(y_j) = 0, which contradicts the practical meaning of neural networks. This contradiction also means that the Lipschitz condition on the activations and the condition (H*) cannot coexist.

Definition 2 The controlled system (8) is said to be globally exponentially synchronized with system (1) if there exist positive constants M and \alpha such that

|v_i(t) - u_i(t)| \le M \|\varphi(s) - \psi(s)\| \exp(-\alpha t), \quad i = 1, 2, \ldots, n,

hold for t \ge 0, where \|\varphi(s) - \psi(s)\| = \sup_{s \le 0} \max_{1 \le i \le n} |\varphi_i(s) - \psi_i(s)|.

Lemma 1 (Chain rule, Clarke 1987) If V(x): R^n \to R is C-regular and x(t) is absolutely continuous on any compact subinterval of [0, +\infty), then x(t) and V(x(t)): [0, +\infty) \to R are differentiable for a.a. t \in [0, +\infty), and

\frac{d}{dt} V(x(t)) = \gamma(t)^T \dot{x}(t), \quad \forall \gamma(t) \in \partial V(x(t)),   (18)

where \partial V(x(t)) is the Clarke generalized gradient of V at x(t).
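The failure of (H*) can also be seen numerically. The following sketch is an illustration only, not part of the argument above: it uses f_j = tanh (the activation of the example section) together with the interval co[1.81, 2.2] (the two switching levels of w_{11} in Sect. 4), solves (13) for \xi, and checks whether \xi lies in that interval.

```python
import numpy as np

def xi_from_eq13(w_low, w_high, fx, fy):
    """Solve w_low*f(x) - w_high*f(y) = xi*(f(x) - f(y)) for xi (requires f(x) != f(y))."""
    return (w_low * fx - w_high * fy) / (fx - fy)

f = np.tanh
w_low, w_high = 1.81, 2.2          # interval endpoints used for the check

for x, y in [(0.55, 0.5), (1.0, 0.2), (2.0, 1.5)]:
    xi = xi_from_eq13(w_low, w_high, f(x), f(y))
    print(f"x={x:4.2f}, y={y:4.2f}: xi = {xi:7.3f}, "
          f"xi in co[{w_low}, {w_high}]? {w_low <= xi <= w_high}")

# For all three pairs the required xi lies outside co[1.81, 2.2], so no admissible
# selection satisfies (13); this is exactly the contradiction established above.
```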

Synchronization control of memristor-based Cohen–Grossberg neural networks

In this section, synchronization criteria for memristor-based Cohen–Grossberg neural networks with time-varying delays and unbounded distributed delays under the controller (9) are derived by rigorous mathematical proof. A corollary, which is applicable to memristor-based recurrent neural networks, is also derived.

Theorem 1 Assume that (H1)-(H6) hold and the following inequalities are satisfied:

p_i > \frac{\bar{a}_i}{\underline{a}_i} \Big( -\underline{a}_i \mu_i + \sum_{j=1}^{n} \bar{a}_j L_j \big( \bar{w}_{ij} + \bar{c}_{ij} + \bar{d}_{ij} \beta_{ij} \big) \Big) = N_i,   (19)

\eta_i \ge \bar{a}_i \sum_{j=1}^{n} \big[ |\hat{w}_{ij} - \check{w}_{ij}| + |\hat{c}_{ij} - \check{c}_{ij}| + |\hat{d}_{ij} - \check{d}_{ij}| \beta_{ij} \big] M_j = K_i, \quad i = 1, 2, \ldots, n.   (20)

Then the controlled system (8) is globally exponentially synchronized with system (1) under the controller (9).
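The bounds N_i and K_i in (19)-(20) are finite expressions in the model constants, so checking Theorem 1 for a concrete network is mechanical. The helper below is a sketch of that computation under the notation of (H1)-(H6); the argument names and array layout are choices of this sketch, not of the paper.

```python
import numpy as np

def theorem1_bounds(a_lo, a_hi, mu, L, M, w_hat, w_chk, c_hat, c_chk, d_hat, d_chk, beta):
    """Return the lower bounds N_i (for p_i) and K_i (for eta_i) of Theorem 1.

    a_lo, a_hi : underline{a}_i, bar{a}_i       (length-n arrays)
    mu, L, M   : mu_i, L_i, M_i                 (length-n arrays)
    *_hat/_chk : the two switching levels of w, c, d   (n x n arrays)
    beta       : kernel bounds beta_ij          (n x n array)
    """
    w_bar = np.maximum(w_hat, w_chk)
    c_bar = np.maximum(c_hat, c_chk)
    d_bar = np.maximum(d_hat, d_chk)

    # N_i = (a_hi_i / a_lo_i) * ( -a_lo_i*mu_i + sum_j a_hi_j*L_j*(w_bar + c_bar + d_bar*beta)_ij )
    N = (a_hi / a_lo) * (-a_lo * mu
                         + (w_bar + c_bar + d_bar * beta) @ (a_hi * L))
    # K_i = a_hi_i * sum_j ( |w_hat - w_chk| + |c_hat - c_chk| + |d_hat - d_chk|*beta )_ij * M_j
    K = a_hi * ((np.abs(w_hat - w_chk) + np.abs(c_hat - c_chk)
                 + np.abs(d_hat - d_chk) * beta) @ M)
    return N, K
```

Gains chosen with p_i > N_i and \eta_i \ge K_i then satisfy (19) and (20).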


Proof. Set z(t) = (z_1(t), z_2(t), \ldots, z_n(t))^T = y(t) - x(t). It follows from systems (7) and (10) that

\dot{z}_i(t) = -[b_i(h_i^{-1}(y_i(t))) - b_i(h_i^{-1}(x_i(t)))] + \sum_{j=1}^{n} [\tilde{\gamma}_{ij}(t) f_j(h_j^{-1}(y_j(t))) - \gamma_{ij}(t) f_j(h_j^{-1}(x_j(t)))] + \sum_{j=1}^{n} [\tilde{\delta}_{ij}(t) f_j(h_j^{-1}(y_j(t-\tau_{ij}(t)))) - \delta_{ij}(t) f_j(h_j^{-1}(x_j(t-\tau_{ij}(t))))] + \sum_{j=1}^{n} \Big[ \tilde{\zeta}_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(y_j(s))) \, ds - \zeta_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(x_j(s))) \, ds \Big] - \frac{p_i}{a_i(h_i^{-1}(y_i(t)))} \big( h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t)) \big) - \frac{\eta_i}{a_i(h_i^{-1}(y_i(t)))} \mathrm{sign}\big( h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t)) \big).   (21)

Construct the functions q_i(\lambda) as follows:

q_i(\lambda) = \lambda - \underline{a}_i \mu_i - \frac{p_i \underline{a}_i}{\bar{a}_i} + \sum_{j=1}^{n} \bar{a}_j L_j \Big( \bar{w}_{ij} + \bar{c}_{ij} e^{\lambda \tau_{ij}} + \bar{d}_{ij} \int_0^{+\infty} K_{ij}(s) e^{\lambda s} \, ds \Big), \quad i = 1, 2, \ldots, n.   (22)

It follows from (19) that q_i(0) < 0. Moreover, q_i(\lambda), i = 1, 2, \ldots, n, are continuous functions of \lambda \in R, with

q_i'(\lambda) = 1 + \sum_{j=1}^{n} \bar{a}_j L_j \Big( \tau_{ij} \bar{c}_{ij} e^{\lambda \tau_{ij}} + \bar{d}_{ij} \int_0^{+\infty} K_{ij}(s) \, s \, e^{\lambda s} \, ds \Big) > 0,

and q_i(+\infty) = +\infty; hence q_i(\lambda), i = 1, 2, \ldots, n, are strictly monotonically increasing functions. Therefore, for any i \in \{1, 2, \ldots, n\}, there is a unique \lambda_i > 0 such that

\lambda_i - \underline{a}_i \mu_i - \frac{p_i \underline{a}_i}{\bar{a}_i} + \sum_{j=1}^{n} \bar{a}_j L_j \Big( \bar{w}_{ij} + \bar{c}_{ij} e^{\lambda_i \tau_{ij}} + \bar{d}_{ij} \int_0^{+\infty} K_{ij}(s) e^{\lambda_i s} \, ds \Big) = 0.

Taking \alpha = \min\{\lambda_1, \lambda_2, \ldots, \lambda_n\} yields

q_i(\alpha) = \alpha - \underline{a}_i \mu_i - \frac{p_i \underline{a}_i}{\bar{a}_i} + \sum_{j=1}^{n} \bar{a}_j L_j \Big( \bar{w}_{ij} + \bar{c}_{ij} e^{\alpha \tau_{ij}} + \bar{d}_{ij} \int_0^{+\infty} K_{ij}(s) e^{\alpha s} \, ds \Big) \le 0, \quad i = 1, 2, \ldots, n.   (23)
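Because each q_i is strictly increasing with q_i(0) < 0, the root \lambda_i, and hence the decay rate \alpha = \min_i \lambda_i, can be located numerically by bisection. The sketch below is illustrative only: it specializes (22) to the exponential kernel K_{ij}(s) = e^{-\kappa s} used later in the example (for which the integral has the closed form 1/(\kappa - \lambda) when \lambda < \kappa), and the bisection bracket is an assumption of the sketch.

```python
import numpy as np

def q(lam, a_lo_i, a_hi_i, mu_i, p_i, a_hi, L, w_bar_row, c_bar_row, d_bar_row, tau_row, kappa=0.5):
    """q_i(lambda) from (22), specialized to the kernel K_ij(s) = exp(-kappa*s)."""
    if lam >= kappa:
        return np.inf                          # the kernel integral diverges beyond kappa
    kernel_int = 1.0 / (kappa - lam)           # int_0^inf e^{-kappa s} e^{lambda s} ds
    coupling = np.sum(a_hi * L * (w_bar_row + c_bar_row * np.exp(lam * tau_row)
                                  + d_bar_row * kernel_int))
    return lam - a_lo_i * mu_i - p_i * a_lo_i / a_hi_i + coupling

def bisect_root(g, lo, hi, iters=80):
    """Root of a strictly increasing scalar function g on [lo, hi] with g(lo) < 0 < g(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Usage (with the model constants and chosen gains p_i filling the remaining arguments):
#   lam_i = bisect_root(lambda l: q(l, ...), 0.0, 0.5 - 1e-9)
#   alpha = min over i of lam_i, the rate appearing in (23) and in (37)-(38).
```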

Define a Lyapunov functional by V_i(t) = e^{\alpha t} |z_i(t)|, i = 1, 2, \ldots, n. In view of Lemma 1, it can be obtained from system (21) that

\dot{V}_i(t) = e^{\alpha t} \, \mathrm{sign}(z_i(t)) \Big\{ -[b_i(h_i^{-1}(y_i(t))) - b_i(h_i^{-1}(x_i(t)))] + \sum_{j=1}^{n} [\tilde{\gamma}_{ij}(t) f_j(h_j^{-1}(y_j(t))) - \gamma_{ij}(t) f_j(h_j^{-1}(x_j(t)))] + \sum_{j=1}^{n} [\tilde{\delta}_{ij}(t) f_j(h_j^{-1}(y_j(t-\tau_{ij}(t)))) - \delta_{ij}(t) f_j(h_j^{-1}(x_j(t-\tau_{ij}(t))))] + \sum_{j=1}^{n} \Big[ \tilde{\zeta}_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(y_j(s))) \, ds - \zeta_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(x_j(s))) \, ds \Big] - \frac{p_i}{a_i(h_i^{-1}(y_i(t)))} \big( h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t)) \big) - \frac{\eta_i}{a_i(h_i^{-1}(y_i(t)))} \mathrm{sign}\big( h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t)) \big) \Big\} + \alpha e^{\alpha t} |z_i(t)|.   (24)

Since b_i(u) and h_i^{-1}(\lambda) are strictly monotonically increasing and differentiable, b_i(0) = 0, and h_i^{-1}(0) = 0, the function b_i(h_i^{-1}(\lambda)) is strictly monotonically increasing and differentiable in \lambda \in R, and

b_i(h_i^{-1}(y_i(t))) - b_i(h_i^{-1}(x_i(t))) = b_i'(h_i^{-1}(\lambda))|_{\lambda=\xi} \, (y_i(t) - x_i(t)),   (25)

where b_i'(h_i^{-1}(\lambda))|_{\lambda=\xi} denotes the derivative of b_i(h_i^{-1}(\lambda)) at the point \lambda = \xi, and \xi lies between y_i(t) and x_i(t). It is obvious that b_i'(h_i^{-1}(\lambda))|_{\lambda=\xi} is unique for any y_i(t) and x_i(t). Moreover, b_i'(h_i^{-1}(\lambda))|_{\lambda=\xi} \ge \underline{a}_i \mu_i. Therefore, it is obtained from (25) that

-\mathrm{sign}(z_i(t)) \, [b_i(h_i^{-1}(y_i(t))) - b_i(h_i^{-1}(x_i(t)))] = -b_i'(h_i^{-1}(\lambda))|_{\lambda=\xi} \, |z_i(t)| \le -\underline{a}_i \mu_i |z_i(t)|,   (26)

and

-\mathrm{sign}(z_i(t)) \, \frac{p_i}{a_i(h_i^{-1}(y_i(t)))} \big( h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t)) \big) \le -\frac{p_i \underline{a}_i}{\bar{a}_i} |z_i(t)|.   (27)

Moreover, since h_i^{-1}(\lambda) is strictly monotonically increasing and h_i^{-1}(0) = 0, we get \mathrm{sign}(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))) = \mathrm{sign}(z_i(t)). Thus,

-\mathrm{sign}(z_i(t)) \, \frac{\eta_i}{a_i(h_i^{-1}(y_i(t)))} \mathrm{sign}\big( h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t)) \big) = -\frac{\eta_i}{a_i(h_i^{-1}(y_i(t)))} \le -\frac{\eta_i}{\bar{a}_i}.   (28)

It is derived from (H1), (H4), (H5), and (H6) that

|\tilde{\gamma}_{ij}(t) f_j(h_j^{-1}(y_j(t))) - \gamma_{ij}(t) f_j(h_j^{-1}(x_j(t)))| \le |\tilde{\gamma}_{ij}(t) f_j(h_j^{-1}(y_j(t))) - \tilde{\gamma}_{ij}(t) f_j(h_j^{-1}(x_j(t)))| + |\tilde{\gamma}_{ij}(t) f_j(h_j^{-1}(x_j(t))) - \gamma_{ij}(t) f_j(h_j^{-1}(x_j(t)))| \le \bar{w}_{ij} L_j \bar{a}_j |z_j(t)| + |\hat{w}_{ij} - \check{w}_{ij}| M_j,   (29)

|\tilde{\delta}_{ij}(t) f_j(h_j^{-1}(y_j(t-\tau_{ij}(t)))) - \delta_{ij}(t) f_j(h_j^{-1}(x_j(t-\tau_{ij}(t))))| \le \bar{c}_{ij} L_j \bar{a}_j |z_j(t-\tau_{ij}(t))| + |\hat{c}_{ij} - \check{c}_{ij}| M_j,   (30)

and

\Big| \tilde{\zeta}_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(y_j(s))) \, ds - \zeta_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s) f_j(h_j^{-1}(x_j(s))) \, ds \Big| \le \bar{d}_{ij} L_j \bar{a}_j \int_{-\infty}^{t} K_{ij}(t-s) |z_j(s)| \, ds + |\hat{d}_{ij} - \check{d}_{ij}| \beta_{ij} M_j.   (31)

Substituting (26)-(31) into (24) produces the following inequality:

\dot{V}_i(t) \le e^{\alpha t} \Big\{ \Big( \alpha - \underline{a}_i \mu_i - \frac{p_i \underline{a}_i}{\bar{a}_i} \Big) |z_i(t)| + \sum_{j=1}^{n} \Big[ \bar{w}_{ij} L_j \bar{a}_j |z_j(t)| + \bar{c}_{ij} L_j \bar{a}_j |z_j(t-\tau_{ij}(t))| + \bar{d}_{ij} L_j \bar{a}_j \int_{-\infty}^{t} K_{ij}(t-s) |z_j(s)| \, ds \Big] + \sum_{j=1}^{n} \big[ |\hat{w}_{ij} - \check{w}_{ij}| + |\hat{c}_{ij} - \check{c}_{ij}| + |\hat{d}_{ij} - \check{d}_{ij}| \beta_{ij} \big] M_j - \frac{\eta_i}{\bar{a}_i} \Big\}.   (32)

Considering the condition (20), we get from (32) that

\dot{V}_i(t) \le \Big( \alpha - \underline{a}_i \mu_i - \frac{p_i \underline{a}_i}{\bar{a}_i} \Big) V_i(t) + \sum_{j=1}^{n} \Big[ \bar{w}_{ij} L_j \bar{a}_j V_j(t) + \bar{c}_{ij} e^{\alpha \tau_{ij}} L_j \bar{a}_j V_j(t-\tau_{ij}(t)) + \bar{d}_{ij} L_j \bar{a}_j \int_{-\infty}^{t} K_{ij}(t-s) e^{\alpha(t-s)} V_j(s) \, ds \Big], \quad i = 1, 2, \ldots, n.   (33)

It is obvious that

|z_i(t)| \le \|h(\varphi) - h(\psi)\| \quad \text{and} \quad V_i(t) = |z_i(t)| e^{\alpha t} \le \|h(\varphi) - h(\psi)\|

for t \le 0, i = 1, 2, \ldots, n. We claim that

V_i(t) \le \|h(\varphi) - h(\psi)\|   (34)

for all t \ge 0, i = 1, 2, \ldots, n. Otherwise, there must exist i_0 \in \{1, 2, \ldots, n\} and \tilde{t} > 0 such that

V_{i_0}(\tilde{t}) = \|h(\varphi) - h(\psi)\|, \quad \dot{V}_{i_0}(\tilde{t}) > 0,   (35)

and

V_i(t) \le \|h(\varphi) - h(\psi)\|, \quad \forall t \le \tilde{t}, \quad i = 1, 2, \ldots, n.   (36)

Together with (33), (35), and (36), we obtain that

0 < \dot{V}_{i_0}(\tilde{t}) \le \Big( \alpha - \underline{a}_{i_0} \mu_{i_0} - \frac{p_{i_0} \underline{a}_{i_0}}{\bar{a}_{i_0}} \Big) V_{i_0}(\tilde{t}) + \sum_{j=1}^{n} \Big[ \bar{w}_{i_0 j} L_j \bar{a}_j V_j(\tilde{t}) + \bar{c}_{i_0 j} e^{\alpha \tau_{i_0 j}} L_j \bar{a}_j V_j(\tilde{t}-\tau_{i_0 j}(\tilde{t})) + \bar{d}_{i_0 j} L_j \bar{a}_j \int_{-\infty}^{\tilde{t}} K_{i_0 j}(\tilde{t}-s) e^{\alpha(\tilde{t}-s)} V_j(s) \, ds \Big]
\le \|h(\varphi) - h(\psi)\| \Big\{ \alpha - \underline{a}_{i_0} \mu_{i_0} - \frac{p_{i_0} \underline{a}_{i_0}}{\bar{a}_{i_0}} + \sum_{j=1}^{n} \bar{a}_j L_j \Big( \bar{w}_{i_0 j} + \bar{c}_{i_0 j} e^{\alpha \tau_{i_0 j}} + \bar{d}_{i_0 j} \int_0^{+\infty} K_{i_0 j}(s) e^{\alpha s} \, ds \Big) \Big\}.

Hence,

0 < \alpha - \underline{a}_{i_0} \mu_{i_0} - \frac{p_{i_0} \underline{a}_{i_0}}{\bar{a}_{i_0}} + \sum_{j=1}^{n} \bar{a}_j L_j \Big( \bar{w}_{i_0 j} + \bar{c}_{i_0 j} e^{\alpha \tau_{i_0 j}} + \bar{d}_{i_0 j} \int_0^{+\infty} K_{i_0 j}(s) e^{\alpha s} \, ds \Big),

which contradicts (23). Hence (34) holds. It follows that

|z_i(t)| \le \|h(\varphi) - h(\psi)\| e^{-\alpha t}   (37)

for all t \ge 0, i = 1, 2, \ldots, n. It follows from (37) that

|v_i(t) - u_i(t)| = |h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))| \le \bar{a}_i |z_i(t)| \le \frac{\bar{a}}{\underline{a}} \|\varphi - \psi\| e^{-\alpha t}   (38)



for all t \ge 0, i = 1, 2, \ldots, n, where \bar{a} = \max\{\bar{a}_i : i = 1, 2, \ldots, n\}, \underline{a} = \min\{\underline{a}_i : i = 1, 2, \ldots, n\}, and we have used \|h(\varphi) - h(\psi)\| \le \|\varphi - \psi\|/\underline{a} (since h_i'(u) = 1/a_i(u) \le 1/\underline{a}_i). According to Definition 2, the controlled system (8) is globally exponentially synchronized with system (1) under the controller (9). This completes the proof.

Remark 3 We first transform the Cohen–Grossberg neural networks (1) and (8) into (7) and (10), respectively, and then obtain the exponential synchronization criteria between (1) and (8) by investigating the exponential synchronization between (7) and (10). In this way, we do not need to introduce special conditions, such as those in Gan (2012), on the Cohen–Grossberg neural network in order to derive synchronization criteria. Moreover, the transformation technique used in this paper also allows us to use a simple Lyapunov function to prove our main results, whereas the conventional Lyapunov functions for studying stability and synchronization of Cohen–Grossberg neural networks are not so simple.

Remark 4 How to deal with a general amplification function a_i(u_i(t)) is a key issue in studying synchronization of Cohen–Grossberg neural networks. In Zhu and Cao (2010) and He and Cao (2008), the amplification function a_i(u_i(t)) was constant, which greatly simplified the analysis; hence the methods in Zhu and Cao (2010) and He and Cao (2008) are invalid for the models of this paper. When \underline{a}_i = \bar{a}_i = 1 in (H2), the memristor-based Cohen–Grossberg neural networks in this paper reduce to the models considered in Wu and Zeng (2012, 2013), Wu et al. (2011, 2012), and Zhang et al. (2012, 2013a, b). We derive the following corollary from Theorem 1.

Corollary 1 Assume that (H1)-(H6) hold with \underline{a}_i = \bar{a}_i = 1, and the following inequalities are satisfied:

p_i > -\mu_i + \sum_{j=1}^{n} L_j \big( \bar{w}_{ij} + \bar{c}_{ij} + \bar{d}_{ij} \beta_{ij} \big),   (39)

\eta_i \ge \sum_{j=1}^{n} \big[ |\hat{w}_{ij} - \check{w}_{ij}| + |\hat{c}_{ij} - \check{c}_{ij}| + |\hat{d}_{ij} - \check{d}_{ij}| \beta_{ij} \big] M_j, \quad i = 1, 2, \ldots, n.   (40)

Then the controlled system (8) is globally exponentially synchronized with system (1) under the controller (9).

Remark 5 The designed controller (9) synchronizes the memristor-based Cohen–Grossberg neural networks effectively. One may notice that the designed controller consists of two parts: -p_i(v_i(t) - u_i(t)) and -\eta_i \, \mathrm{sign}(v_i(t) - u_i(t)). It can be seen from the proof of Theorem 1 that the part -\eta_i \, \mathrm{sign}(v_i(t) - u_i(t)) plays an important role in dealing with the uncertain differences between the Filippov solutions of the drive and response systems, while the other part, -p_i(v_i(t) - u_i(t)), drives the state of the slave system to synchronize with that of the master system.

Examples and simulations

In this section, a numerical example is given to show the effectiveness of the theoretical results obtained above. Consider the following memristor-based Cohen–Grossberg neural network model with mixed delays:

\dot{u}_i(t) = -a_i(u_i(t)) \Big[ b_i(u_i(t)) - \sum_{j=1}^{2} \big( w_{ij}(u_i(t)) f_j(u_j(t)) + c_{ij}(u_i(t)) f_j(u_j(t-\tau_{ij}(t))) + d_{ij}(u_i(t)) \int_{-\infty}^{t} K_{ij}(t-s) f_j(u_j(s)) \, ds \big) - I_i \Big], \quad i = 1, 2,   (41)

where a_1(u_1) = 6 + \frac{1}{1+u_1^2}, a_2(u_2) = 3 - \frac{1}{1+u_2^2}, b_1(u_1) = 1.61 u_1 + \sin(u_1), b_2(u_2) = 1.45 u_2 + \sin(u_2), I_1 = 0.01, I_2 = 0.12, \tau_{11}(t) = 1 - 0.2|\sin t|, \tau_{12}(t) = 0.9 - 0.1|\cos t|, \tau_{21}(t) = |\sin t|, \tau_{22}(t) = |\cos t|, K_{ij}(t) = e^{-0.5 t}, f_i(u_i) = \tanh(u_i), i, j = 1, 2, and

w_{11}(u_1) = 1.81 if |u_1| < 0.3 and 2.2 if |u_1| > 0.3;
w_{12}(u_1) = 0.14 if |u_1| < 0.3 and 0.12 if |u_1| > 0.3;
w_{21}(u_2) = 1.9 if |u_2| < 1 and 2.2 if |u_2| > 1;
w_{22}(u_2) = 5 if |u_2| < 1 and 5.2 if |u_2| > 1;
c_{11}(u_1) = 0.95 if |u_1| < 0.3 and 1.3 if |u_1| > 0.3;
c_{12}(u_1) = 0.08 if |u_1| < 0.3 and 0.15 if |u_1| > 0.3;
c_{21}(u_2) = 0.2 if |u_2| < 1 and 0.18 if |u_2| > 1;
c_{22}(u_2) = 2.5 if |u_2| < 1 and 2.3 if |u_2| > 1;



Fig. 1 Trajectories of system (41) (u_1(t)-u_2(t) phase plane) with different initial values: (a) \varphi(t) = (0.2, 1.2)^T for t \in [-5, 0] and \varphi(t) = 0 for t \in (-\infty, -5); (b) \varphi(t) = (0.4, 0.6)^T for t \in [-5, 0] and \varphi(t) = 0 for t \in (-\infty, -5)

d_{11}(u_1) = 0.6 if |u_1| < 0.3 and 0.65 if |u_1| > 0.3;
d_{12}(u_1) = 0.12 if |u_1| < 0.3 and 0.12 if |u_1| > 0.3;
d_{21}(u_2) = 0.2 if |u_2| < 1 and 0.18 if |u_2| > 1;
d_{22}(u_2) = 0.1 if |u_2| < 1 and 0.12 if |u_2| > 1.

Obviously, the assumptions (H1)-(H6) are satisfied with \tau_{11} = 1.2, \tau_{12} = 1, \tau_{21} = \tau_{22} = 1, \underline{a}_1 = 6, \bar{a}_1 = 7, \underline{a}_2 = 2, \bar{a}_2 = 3, \mu_1 = 0.61, \mu_2 = 0.45, L_i = 1, and \beta_{ij} = \int_0^{+\infty} e^{-0.5 s} \, ds = 2, i, j = 1, 2. Figure 1 shows trajectories of (41) with different initial values. Moreover, when \varphi(t) = (0.2, 1.2)^T for t \in [-5, 0] and \varphi(t) = 0 for t \in (-\infty, -5), we have M_1 = 0.6206 and M_2 = 1.

Fig. 2 Time responses of the synchronization errors e_1(t) (upper) and e_2(t) (lower) between (41) and (42) under the controller (9)

The controlled response system is described by

\dot{v}_i(t) = -a_i(v_i(t)) \Big[ b_i(v_i(t)) - \sum_{j=1}^{2} \big( w_{ij}(v_i(t)) f_j(v_j(t)) + c_{ij}(v_i(t)) f_j(v_j(t-\tau_{ij}(t))) + d_{ij}(v_i(t)) \int_{-\infty}^{t} K_{ij}(t-s) f_j(v_j(s)) \, ds \big) - I_i \Big] + R_i(t), \quad i = 1, 2,   (42)

where the feedback control R_i(t) is defined in (9). By simple computation, we have N_1 = 36.8550, N_2 = 63.78, K_1 = 9.0763, and K_2 = 1.9456. According to Theorem 1, the response system (42) globally exponentially synchronizes with the drive system (41) if we take p_1 = 67, p_2 = 64, \eta_1 = K_1, and \eta_2 = K_2. In the simulations, the initial conditions of systems (41) and (42) are the same as those of (a) and (b) in Fig. 1, respectively. Figure 2 shows the time responses of the synchronization



errors, which shows that the states of the two systems achieve synchronization quickly as time evolves.

Remark 6 Figure 1 shows that initial values have important effects on the trajectories of memristor-based neural networks. This is because the parameters of memristor-based neural networks depend on the states. Moreover, the numerical example demonstrates that the designed controller (9) is powerful enough to synchronize memristor-based neural networks even though the coupled memristor-based neural networks start from different trajectories.
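Figures 1 and 2 can be reproduced qualitatively with a simple forward Euler integration of the drive system (41) and the controlled response system (42). The sketch below is a simplified illustration rather than the authors' simulation code: it keeps only the discrete delay (a single constant lag is used for all \tau_{ij}(t), and the distributed-delay term is dropped for brevity), while the weight levels, thresholds, gains, and initial histories are taken from the example above.

```python
import numpy as np

# Functions of the example (41): amplification, behaved function, activation, inputs, thresholds
a = [lambda u: 6 + 1/(1 + u**2), lambda u: 3 - 1/(1 + u**2)]
b = [lambda u: 1.61*u + np.sin(u), lambda u: 1.45*u + np.sin(u)]
f = np.tanh
I = np.array([0.01, 0.12])
T = np.array([0.3, 1.0])

def weights(u):
    """State-dependent weight matrices w_ij(u_i), c_ij(u_i); row i switches with |u_i|."""
    w_hat = np.array([[1.81, 0.14], [1.9, 5.0]]); w_chk = np.array([[2.2, 0.12], [2.2, 5.2]])
    c_hat = np.array([[0.95, 0.08], [0.2, 2.5]]); c_chk = np.array([[1.3, 0.15], [0.18, 2.3]])
    row = (np.abs(u) > T)[:, None]
    return np.where(row, w_chk, w_hat), np.where(row, c_chk, c_hat)

dt, tau, steps = 0.001, 1.2, 20000          # single lag tau for every tau_ij(t) (simplification)
lag = int(tau / dt)
u_hist = np.tile([0.2, 1.2], (lag + 1, 1))  # constant history of the drive system on [-tau, 0]
v_hist = np.tile([0.4, 0.6], (lag + 1, 1))  # constant history of the response system
p, eta = np.array([67.0, 64.0]), np.array([9.0763, 1.9456])

for _ in range(steps):
    u, v = u_hist[-1], v_hist[-1]
    u_d, v_d = u_hist[-1 - lag], v_hist[-1 - lag]
    Wu, Cu = weights(u); Wv, Cv = weights(v)
    du = np.array([-a[i](u[i]) * (b[i](u[i]) - Wu[i] @ f(u) - Cu[i] @ f(u_d) - I[i]) for i in range(2)])
    R = -p * (v - u) - eta * np.sign(v - u)               # controller (9)
    dv = np.array([-a[i](v[i]) * (b[i](v[i]) - Wv[i] @ f(v) - Cv[i] @ f(v_d) - I[i]) for i in range(2)]) + R
    u_hist = np.vstack([u_hist, u + dt * du])
    v_hist = np.vstack([v_hist, v + dt * dv])

# The error |v(t) - u(t)| decays rapidly; the small residual reflects chattering of the
# sign term at this step size rather than a failure of synchronization.
print("final error:", np.abs(v_hist[-1] - u_hist[-1]))
```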

Conclusions

In this paper, the global exponential synchronization of a memristor-based Cohen–Grossberg neural network model with time-varying discrete delays and unbounded distributed delays has been studied. The considered model is general and covers most existing neural network models. By adding a new controller to the response system, this paper shows theoretically and numerically that the response system can globally exponentially synchronize with the drive system, and the synchronization criteria are easy to verify. As a by-product, the numerical simulations also show that the initial values of memristor-based Cohen–Grossberg neural networks have key effects on their trajectories.

Acknowledgments This work was jointly supported by the National Natural Science Foundation of China (NSFC) under Grants Nos. 61263020, 11101053, 61322302, 61104145, 61272530, and 11072059, the Scientific Research Fund of Chongqing Normal University under Grants Nos. 12XLB031 and 940115, the Scientific Research Fund of Chongqing Municipal Education Commission under Grant No. KJ130613, the Program of Chongqing Innovation Team Project in University under Grant No. KJTD201308, the Natural Science Foundation of Jiangsu Province of China under Grant BK2012741, and the Specialized Research Fund for the Doctoral Program of Higher Education under Grants 20110092110017 and 20130092110017.

References

Aubin J, Cellina A (1984) Differential inclusions. Springer, Berlin
Balasubramaniam P, Chandran R, Theesar SJS (2011) Synchronization of chaotic nonlinear continuous neural networks with time-varying delay. Cogn Neurodyn 5(4):361–371
Chen X, Song Q (2010) Global exponential stability of the periodic solution of delayed Cohen–Grossberg neural networks with discontinuous activations. Neurocomputing 73(16):3097–3104
Cheng CY, Lin KH, Shih CW (2007) Multistability and convergence in delayed neural networks. Phys D 225(1):61–74
Chua L (1971) Memristor—the missing circuit element. IEEE Trans Circuit Theory 18(5):507–519
Chua LO, Kang SM (1976) Memristive devices and systems. Proc IEEE 64(2):209–223
Clarke F (1987) Optimization and nonsmooth analysis, vol 5. Society for Industrial and Applied Mathematics


Cohen MA, Grossberg S (1983) Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans Syst Man Cybern 13(5):815–826
Di Marco M, Forti M, Grazzini M, Pancioni L (2012) Limit set dichotomy and multistability for a class of cooperative neural networks with delays. IEEE Trans Neural Netw Learn Syst 23(9):1473–1485
Filippov A (1960) Differential equations with discontinuous right-hand side. Matematicheskii Sbornik 93(1):99–128
Filippov A (1988) Differential equations with discontinuous right-hand sides. Kluwer, Dordrecht
Forti M, Grazzini M, Nistri P, Pancioni L (2006) Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations. Phys D 214(1):88–99
Gan Q (2012) Adaptive synchronization of Cohen–Grossberg neural networks with unknown parameters and mixed time-varying delays. Commun Nonlinear Sci Numer Simul 17(7):3040–3049
He W, Cao J (2008) Adaptive synchronization of a class of chaotic neural networks with known or unknown parameters. Phys Lett A 372(4):408–416
Itoh M, Chua LO (2008) Memristor oscillators. Int J Bifurcation Chaos 18(11):3183–3206
Itoh M, Chua LO (2009) Memristor cellular automata and memristor discrete-time cellular neural networks. Int J Bifurcation Chaos 19(11):3605–3656
Kamel MS, Xia Y (2009) Cooperative recurrent modular neural networks for constrained optimization: a survey of models and applications. Cogn Neurodyn 3(1):47–81
Kvatinsky S, Friedman EG, Kolodny A, Weiser UC (2013) TEAM: threshold adaptive memristor model. IEEE Trans Circuits Syst 60(1):211–221
Liu X, Cao J (2011) Local synchronization of one-to-one coupled neural networks with discontinuous activations. Cogn Neurodyn 5(1):13–20
Liu X, Yu W (2012) Quasi-synchronization of delayed coupled networks with non-identical discontinuous nodes. Adv Neural Netw–ISNN 7367:274–284
Liu X, Cao J, Yu W (2012) Filippov systems and quasi-synchronization control for switched networks. Chaos 22(3):033110
Lu W, Chen T (2008) Almost periodic dynamics of a class of delayed neural networks with discontinuous activations. Neural Comput 20(4):1065–1090
Mahdavi N, Kurths J (2013) Synchrony-based learning rule of Hopfield-like chaotic neural networks with desirable structure. Cogn Neurodyn. doi:10.1007/s11571-013-9260-2
Shen J, Cao J (2011) Finite-time synchronization of coupled neural networks via discontinuous controllers. Cogn Neurodyn 5(4):373–385
Song Q, Wang Z (2008) Stability analysis of impulsive stochastic Cohen–Grossberg neural networks with mixed time delays. Phys A 387(13):3314–3326
Strukov DB, Snider GS, Stewart DR, Williams RS (2008) The missing memristor found. Nature 453(7191):80–83
Thomas A (2013) Memristor-based neural networks. J Phys D 46(9):093001
Tour JM, He T (2008) The fourth element. Nature 453(7191):42–43
Tsukada H, Yamaguti Y, Tsuda I (2013) Transitory memory retrieval in a biologically plausible neural network model. Cogn Neurodyn. doi:10.1007/s11571-013-9244-2
Wang FZ, Helian N, Wu S, Lim MG, Guo Y, Parker MA (2010) Delayed switching in memristors and memristive systems. IEEE Electron Device Lett 31(7):755–757
Wu A, Zeng Z (2012) Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays. Neural Netw 36:1–10

Wu A, Zeng Z (2013) Anti-synchronization control of a class of memristive recurrent neural networks. Commun Nonlinear Sci Numer Simul 18:373–385
Wu X, Wang Y, Huang L, Zuo Y (2010) Robust stability analysis of delayed Takagi–Sugeno fuzzy Hopfield neural networks with discontinuous activation functions. Cogn Neurodyn 4(4):347–354
Wu A, Zeng Z, Zhu X, Zhang J (2011) Exponential synchronization of memristor-based recurrent neural networks with time delays. Neurocomputing 74(17):3043–3050
Wu A, Wen S, Zeng Z (2012) Synchronization control of a class of memristor-based recurrent neural networks. Inf Sci 183(1):106–116
Yang X, Cao J (2013) Exponential synchronization of delayed neural networks with discontinuous activations. IEEE Trans Circuits Syst I 60(9):2431–2439
Yang X, Huang C, Zhang D, Long Y (2008) Dynamics of Cohen–Grossberg neural networks with mixed delays and impulses. Abstract Appl Anal 2008:14. Article ID 432341
Yang X, Cao J, Long Y, Rui W (2010) Adaptive lag synchronization for competitive neural networks with mixed delays and uncertain hybrid perturbations. IEEE Trans Neural Netw 21(10):1656–1667

Yang X, Zhu Q, Huang C (2011) Lag stochastic synchronization of chaotic mixed time-delayed neural networks with uncertain parameters or perturbations. Neurocomputing 74(10):1617–1625
Yang X, Wu Z, Cao J (2013) Finite-time synchronization of complex networks with nonidentical discontinuous nodes. Nonlinear Dyn 73(4):2313–2327
Zhang G, Shen Y, Sun J (2012) Global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays. Neurocomputing 97:149–154
Zhang G, Shen Y, Wang L (2013a) Global anti-synchronization of a class of chaotic memristive neural networks with time-varying delays. Neural Netw 46:1–8
Zhang G, Shen Y, Yin Q, Sun J (2013b) Global exponential periodicity and stability of a class of memristor-based recurrent neural networks with multiple delays. Inf Sci 232:386–396
Zhu Q, Cao J (2010) Adaptive synchronization of chaotic Cohen–Grossberg neural networks with mixed time delays. Nonlinear Dyn 61(3):517–534

