
Lagrange Stability of Memristive Neural Networks With Discrete and Distributed Delays

Ailong Wu and Zhigang Zeng, Senior Member, IEEE

Abstract— Memristive neuromorphic systems are good candidates for creating artificial brains. In this paper, a general class of memristive neural networks with discrete and distributed delays is introduced and studied. Some Lagrange stability criteria dependent on the network parameters are derived via nonsmooth analysis and control theory. In particular, several succinct criteria are provided to ascertain the Lagrange stability of memristive neural networks with and without delays. The proposed Lagrange stability criteria improve and extend the existing results in the literature. Three numerical examples are given to show the superiority of the theoretical results.

Index Terms— Hybrid systems, Lagrange stability, memristive neural networks, nonsmooth analysis.

Manuscript received April 4, 2013; accepted August 27, 2013. Date of publication September 16, 2013; date of current version March 10, 2014. This work was supported by the Natural Science Foundation of China under Grants 61304057 and 61125303, and by the 973 Program of China under Grant 2011CB710606. The work of A. Wu was supported by the School of Automation, Huazhong University of Science and Technology, Wuhan, China. A. Wu is with the College of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China, also with the Institute for Information and System Science, Xi'an Jiaotong University, Xi'an 710049, China, and also with the School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China (e-mail: [email protected]). Z. Zeng is with the School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China, and also with the Key Laboratory of Image Processing and Intelligent Control of the Education Ministry of China, Wuhan 430074, China (e-mail: [email protected]). Digital Object Identifier 10.1109/TNNLS.2013.2280458

I. INTRODUCTION

MEMRISTIVE neural networks made of hybrid complementary metal-oxide-semiconductors have a very wide range of uses in bioinspired engineering [1]-[9]. Memristive neural networks are well suited to characterize the nonvolatile feature of the memory cell because of hysteresis effects. The analysis and synthesis of memristive neural networks are very attractive for neuromorphic systems in which bionic memories are appropriate for innovative designs. The development of high-performance memristive neural networks would benefit a number of important applications in neural learning circuits [1]-[4], associative memories [5], new classes of artificial neural systems [6]-[9], and so forth.

From a systems-theoretic point of view, a memristive neural network is a state-dependent nonlinear system family [7]-[9]. Such a system family can exhibit rich and complex nonlinear behaviors, including coexisting solutions, jumps, and transient chaos. Over the years, many pioneering works on nonlinear systems have been reported [10]-[34]. In the past decades, however, the state-dependent nonlinear system family has not received considerable attention. With the development and application of memristors, the study of such state-dependent nonlinear system families and their various generalizations may become an active area of research, allowing memristors to be readily used in emerging technologies.

It is well known that Lyapunov stability is one of the important issues of dynamic systems [10]-[13], [15]-[17], [20]-[25], [32]-[34]. From the perspective of system cybernetics, globally stable systems in the Lyapunov sense are monostable systems. In network computing, however, monostable systems have been found to be computationally restrictive and are hard pressed to govern multiobject decision-making processes. Specifically, to emulate and explain biological behaviors, multistable dynamics are essential for the desired neural computation. Unlike Lyapunov stability, Lagrange stability refers to the stability of the total system, not the stability of equilibria. A Lagrange stable system may have the multistability property, because Lagrange stability is considered on the basis of the boundedness of solutions and the existence of globally attractive sets [35]-[40]. In addition, with regard to Lagrange stability, outside the globally attractive sets there is no equilibrium point, periodic state, almost periodic state, or chaotic attractor [35]-[40]. No doubt, these basic characteristics of memristive neurodynamic systems are vitally important.

We also note that the feedback functions play important roles in the dynamical analysis of neurodynamic systems. In deducing important properties of neurodynamic systems, the derived conditions depend not only on the network parameters, but also on the features of the feedback functions. Meanwhile, in practical applications, more general feedback functions provide the designer with a variety of properties, richness of flexibility, and opportunities. For the analysis and design of memristive neurodynamic systems, the bounded and Lurie-type feedback functions are the most widespread [7], [8], [41]-[43].

Motivated by the above discussions, in this paper we firstly formulate a class of memristive neural networks with discrete and distributed delays. Secondly, we establish some Lagrange stability criteria under various feedback functions. In addition, as generalizations of the obtained results, the Lagrange stability of memristive neural networks with and without delays under various feedback functions is discussed in detail. Roughly stated, the main advantages of this paper include the following four points.


1) The Lagrange stability of memristive neural networks in the presence of bounded and Lurie-type feedback functions is discussed, respectively. The generalization of feedback functions provides a wider scope for neural network designs and applications.
2) Lagrange stability does not exclude multistability, i.e., Lagrange stable memristive neural networks may have the multistability property.
3) The scope of the attractive domains of memristive neural networks can be obtained, that is, the storage areas can be estimated.
4) The proposed method can be applied to general nonlinear hybrid systems.

The remainder of this paper is organized as follows. In Section II, the model formulation and some preliminaries are described. In Section III, the Lagrange stability of memristive neural networks is stated. In Section IV, three numerical examples are given to demonstrate the validity of the theoretical results. Finally, concluding remarks are made in Section V.

II. PRELIMINARIES

A. Model

In this paper, consider a class of memristive neural networks with discrete and distributed delays described by the following equations:
$$\dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t)) f_j(x_j(t-\tau_j)) + \sum_{j=1}^{n} c_{ij}(x_i(t)) \int_{t-h_j}^{t} f_j(x_j(s))\,ds + I_i, \quad t \ge 0,\ i = 1, 2, \ldots, n \tag{1}$$
where $x_i(t)$ is the voltage of the capacitor $C_i$, the time delays $\tau_j$ and $h_j$ satisfy $0 \le \tau_j \le \tau$ and $0 \le h_j \le h$ ($\tau \ge 0$ and $h \ge 0$ are constants), $I_i$ denotes the external input, $f_j(\cdot)$ is the feedback function, and $a_{ij}(x_i(t))$, $b_{ij}(x_i(t))$, and $c_{ij}(x_i(t))$ represent the memristive synaptic weights, with
$$a_{ij}(x_i(t)) = \frac{M_{ij}}{C_i}\,\mathrm{sgin}_{ij}, \quad b_{ij}(x_i(t)) = \frac{\widetilde{M}_{ij}}{C_i}\,\mathrm{sgin}_{ij}, \quad c_{ij}(x_i(t)) = \frac{\widehat{M}_{ij}}{C_i}\,\mathrm{sgin}_{ij}, \quad \mathrm{sgin}_{ij} = \begin{cases} 1, & i \ne j \\ -1, & i = j \end{cases}$$
in which $M_{ij}$, $\widetilde{M}_{ij}$, and $\widehat{M}_{ij}$ denote the memductances of the memristors $R_{ij}$, $\widetilde{R}_{ij}$, and $\widehat{R}_{ij}$, respectively. Here, $R_{ij}$ represents the memristor between the feedback function $f_i(x_i(t))$ and $x_i(t)$, $\widetilde{R}_{ij}$ represents the memristor between the feedback function $f_i(x_i(t-\tau_i))$ and $x_i(t)$, and $\widehat{R}_{ij}$ represents the memristor between the feedback function $\int_{t-h_i}^{t} f_i(x_i(s))\,ds$ and $x_i(t)$. As we know, the capacitor $C_i$ is changeless, whereas the memductances $M_{ij}$, $\widetilde{M}_{ij}$, $\widehat{M}_{ij}$ respond to changes in the pinched hysteresis loops. Thus, $a_{ij}(x_i(t))$, $b_{ij}(x_i(t))$, and $c_{ij}(x_i(t))$ change as the pinched hysteresis loops change [8], [9], [41]-[43]. According to the feature of the memristor and its current-voltage characteristic,
$$a_{ij}(x_i(t)) = \begin{cases} \hat{a}_{ij}, & \mathrm{sgin}_{ij}\,\dfrac{d f_j(x_j(t))}{dt} - \dfrac{d x_i(t)}{dt} \le 0 \\[4pt] \check{a}_{ij}, & \mathrm{sgin}_{ij}\,\dfrac{d f_j(x_j(t))}{dt} - \dfrac{d x_i(t)}{dt} > 0 \end{cases} \tag{2}$$
$$b_{ij}(x_i(t)) = \begin{cases} \hat{b}_{ij}, & \mathrm{sgin}_{ij}\,\dfrac{d f_j(x_j(t-\tau_j))}{dt} - \dfrac{d x_i(t)}{dt} \le 0 \\[4pt] \check{b}_{ij}, & \mathrm{sgin}_{ij}\,\dfrac{d f_j(x_j(t-\tau_j))}{dt} - \dfrac{d x_i(t)}{dt} > 0 \end{cases} \tag{3}$$
$$c_{ij}(x_i(t)) = \begin{cases} \hat{c}_{ij}, & \mathrm{sgin}_{ij}\,\big[f_j(x_j(t)) - f_j(x_j(t-h_j))\big] - \dfrac{d x_i(t)}{dt} \le 0 \\[4pt] \check{c}_{ij}, & \mathrm{sgin}_{ij}\,\big[f_j(x_j(t)) - f_j(x_j(t-h_j))\big] - \dfrac{d x_i(t)}{dt} > 0 \end{cases} \tag{4}$$
for $i, j = 1, 2, \ldots, n$, where $\hat{a}_{ij}$, $\check{a}_{ij}$, $\hat{b}_{ij}$, $\check{b}_{ij}$, $\hat{c}_{ij}$, and $\check{c}_{ij}$ are constants. In fact, (2)-(4) on $a_{ij}(x_i(t))$, $b_{ij}(x_i(t))$, $c_{ij}(x_i(t))$ can be derived via the voltage difference [7], [41].

B. Notations

Throughout this paper, solutions of all the systems considered are intended in Filippov's sense. $[\cdot, \cdot]$ denotes a closed interval. $\mathrm{co}\{\xi, \tilde{\xi}\}$ denotes the closure of the convex hull generated by real numbers $\xi$ and $\tilde{\xi}$. $E_n$ is the $n \times n$ identity matrix. $Q^T$ represents the transpose of a matrix $Q$. $D^+$ denotes the upper right Dini derivative. Let $\overline{a}_{ij} = \max\{\hat{a}_{ij}, \check{a}_{ij}\}$, $\underline{a}_{ij} = \min\{\hat{a}_{ij}, \check{a}_{ij}\}$, $\overline{b}_{ij} = \max\{\hat{b}_{ij}, \check{b}_{ij}\}$, $\underline{b}_{ij} = \min\{\hat{b}_{ij}, \check{b}_{ij}\}$, $\overline{c}_{ij} = \max\{\hat{c}_{ij}, \check{c}_{ij}\}$, $\underline{c}_{ij} = \min\{\hat{c}_{ij}, \check{c}_{ij}\}$, $\tilde{a}_{ij} = \max\{|\hat{a}_{ij}|, |\check{a}_{ij}|\}$, $\tilde{b}_{ij} = \max\{|\hat{b}_{ij}|, |\check{b}_{ij}|\}$, $\tilde{c}_{ij} = \max\{|\hat{c}_{ij}|, |\check{c}_{ij}|\}$, for $i, j = 1, 2, \ldots, n$. Denote $\widetilde{A} = (\tilde{a}_{ij})_{n \times n}$, $\widetilde{B} = (\tilde{b}_{ij})_{n \times n}$, $\widetilde{C} = (\tilde{c}_{ij})_{n \times n}$, $\Upsilon = \mathrm{diag}\{\tau_1, \tau_2, \ldots, \tau_n\}$, and $\Lambda = \mathrm{diag}\{h_1, h_2, \ldots, h_n\}$. Let $C$ be the Banach space of continuous functions $\sigma: [-\zeta, 0] \to \mathbb{R}^n$ with the norm $\|\sigma\| = \sup_{s \in [-\zeta, 0]} |\sigma(s)|$, where $\zeta = \max\{\tau, h\}$. For a given constant $T > 0$, $C_T$ is defined as the subset $\{\sigma \in C : \|\sigma\| \le T\}$. Let $\mathscr{C}$ be the set of all nonnegative functionals $\mathcal{K}: C \to [0, +\infty)$ mapping bounded sets in $C$ into bounded sets in $[0, +\infty)$. For any initial condition $\phi \in C$, the solution of network (1) that starts from the initial condition $\phi$ will be denoted by $x(t; \phi)$. If there is no need to emphasize the initial condition, any solution of network (1) will also simply be denoted by $x(t)$.
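The switching rules (2)-(4) make network (1) a state-dependent hybrid system that standard smooth ODE solvers do not handle directly. The following minimal sketch (ours, not part of the paper) integrates a two-neuron instance of (1) with a fixed-step Euler scheme and a history buffer for the discrete and distributed delays. The step size, horizon, and initial function are assumptions; the parameter values match those of Example 1 in Section IV, the feedback is the Lurie-type function used there, and the derivatives in (2)-(4) are approximated by one-step finite differences.

```python
import numpy as np

# Illustrative two-neuron instance of the hybrid system (1)-(4).
n, dt, T = 2, 0.001, 5.0
tau = np.array([0.1, 0.2])                    # discrete delays tau_j
h = np.array([0.01, 0.01])                    # distributed-delay windows h_j
I = np.array([0.0001, 0.0001])                # external inputs I_i
a_hat = np.array([[-0.5, 0.9], [-0.2, 2.6]]); a_chk = np.array([[-0.45, 0.85], [-0.15, 2.55]])
b_hat = np.array([[2.3, -0.95], [0.25, -0.8]]); b_chk = np.array([[2.25, -0.9], [0.2, -0.75]])
c_hat = np.diag([0.001, 0.001]); c_chk = np.diag([0.005, 0.005])
sgin = np.where(np.eye(n, dtype=bool), -1.0, 1.0)   # sgin_ij = -1 if i = j, else 1
f = lambda v: (v + np.tanh(v)) / 16.0               # Lurie-type feedback, see (17)

steps, hist = int(T / dt), int(max(tau.max(), h.max()) / dt) + 2
x = np.zeros((steps + hist, n)); x[:hist] = 0.5      # constant initial function phi
for m in range(hist, steps + hist):
    t = m - 1
    dx = (x[t] - x[t - 1]) / dt                      # finite-difference estimate of dx_i/dt
    rhs = -x[t] + I
    for i in range(n):
        for j in range(n):
            kd, kh = t - int(tau[j] / dt), t - int(h[j] / dt)
            dfj = (f(x[t, j]) - f(x[t - 1, j])) / dt         # d f_j(x_j(t))/dt, for (2)
            dfjd = (f(x[kd, j]) - f(x[kd - 1, j])) / dt      # delayed version, for (3)
            a = a_hat[i, j] if sgin[i, j] * dfj - dx[i] <= 0 else a_chk[i, j]
            b = b_hat[i, j] if sgin[i, j] * dfjd - dx[i] <= 0 else b_chk[i, j]
            c = c_hat[i, j] if sgin[i, j] * (f(x[t, j]) - f(x[kh, j])) - dx[i] <= 0 else c_chk[i, j]
            # distributed-delay integral by a rectangle rule over the window [t-h_j, t]
            rhs[i] += a * f(x[t, j]) + b * f(x[kd, j]) + c * f(x[kh:t + 1, j]).sum() * dt
    x[m] = x[t] + dt * rhs
print("state at T:", x[-1])                          # trajectories stay bounded, as Theorem 2 predicts
```

The if/else selection of the weight at each step mirrors the conditional-statement treatment of the nonsmooth characteristic mentioned for the MATLAB simulations in Section IV.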

In the electronic implementation of memristive neural networks, the bounded and Lurie-type feedback functions are widely adopted. Based on this, we will consider these two kinds of feedback functions for network (1). To this end, we define the vector function $f \in C(\mathbb{R}^n, \mathbb{R}^n)$ by $f(x) := (f_1(x_1), f_2(x_2), \ldots, f_n(x_n))^T$, where $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$.

Firstly, consider the bounded feedback functions, which can be given in the form
$$\mathcal{B} \triangleq \big\{ f(\cdot) \mid f_i \in C(\mathbb{R}, \mathbb{R}),\ \exists\, k_i > 0,\ |f_i(x_i)| \le k_i\ \ \forall x_i \in \mathbb{R},\ i = 1, 2, \ldots, n \big\}$$
where the constants $k_i$, $i = 1, 2, \ldots, n$, are generally called saturation constants.

Secondly, consider the Lurie-type feedback functions, which can be given in the form
$$\mathcal{L} \triangleq \big\{ f(\cdot) \mid f_i \in \mathcal{M},\ \exists\, k_i > 0,\ x_i f_i(x_i) \le k_i x_i^2\ \ \forall x_i \in \mathbb{R},\ i = 1, 2, \ldots, n \big\}$$
where the constants $k_i$, $i = 1, 2, \ldots, n$, are generally called Lurie constants, and
$$\mathcal{M} \triangleq \big\{ G \in C(\mathbb{R}, \mathbb{R}) \mid sG(s) \ge 0 \text{ and } D^+G(s) \ge 0,\ s \in \mathbb{R} \big\}.$$

The initial conditions of network (1) are assumed to be
$$x_i(t) = \phi_i(t), \quad t \in [-\zeta, 0],\ i = 1, 2, \ldots, n$$
where $\phi_i(t) \in C([-\zeta, 0], \mathbb{R})$ and $\zeta = \max\{\tau, h\}$.
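Both classes are easy to check numerically. As a small illustration (ours, not from the paper), take the saturation function $f(\vartheta) = \frac{1}{2}(|\vartheta+1| - |\vartheta-1|)$ mentioned in Remark 5 below and the function $f(\vartheta) = (\vartheta + \tanh\vartheta)/16$ used later in Example 1: the first lies in $\mathcal{B}$ with $k = 1$, and the second is unbounded but lies in $\mathcal{L}$ with Lurie constant $k = 1/8$, since $\vartheta\tanh\vartheta \le \vartheta^2$.

```python
import numpy as np

f_sat = lambda v: 0.5 * (np.abs(v + 1) - np.abs(v - 1))   # bounded: |f(v)| <= 1
f_lurie = lambda v: (v + np.tanh(v)) / 16.0               # Lurie-type: v*f(v) <= v^2/8

v = np.linspace(-50.0, 50.0, 100001)
print(np.max(np.abs(f_sat(v))) <= 1.0)                    # True: f_sat is in B with k = 1
print(np.all(v * f_lurie(v) <= v**2 / 8 + 1e-12))         # True: f_lurie is in L with k = 1/8
```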

C. Properties

Through the theories of differential inclusions and set-valued maps, from (1) it follows that
$$\dot{x}_i(t) \in -x_i(t) + \sum_{j=1}^{n} \mathrm{co}\{\hat{a}_{ij}, \check{a}_{ij}\} f_j(x_j(t)) + \sum_{j=1}^{n} \mathrm{co}\{\hat{b}_{ij}, \check{b}_{ij}\} f_j(x_j(t-\tau_j)) + \sum_{j=1}^{n} \mathrm{co}\{\hat{c}_{ij}, \check{c}_{ij}\} \int_{t-h_j}^{t} f_j(x_j(s))\,ds + I_i, \quad t \ge 0,\ i = 1, 2, \ldots, n \tag{5}$$
or, equivalently, for $i, j = 1, 2, \ldots, n$, there exist $\alpha_{ij} \in \mathrm{co}\{\hat{a}_{ij}, \check{a}_{ij}\}$, $\beta_{ij} \in \mathrm{co}\{\hat{b}_{ij}, \check{b}_{ij}\}$, and $\gamma_{ij} \in \mathrm{co}\{\hat{c}_{ij}, \check{c}_{ij}\}$ such that
$$\dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{n} \alpha_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} \beta_{ij} f_j(x_j(t-\tau_j)) + \sum_{j=1}^{n} \gamma_{ij} \int_{t-h_j}^{t} f_j(x_j(s))\,ds + I_i, \quad t \ge 0,\ i = 1, 2, \ldots, n. \tag{6}$$
Clearly, for $i, j = 1, 2, \ldots, n$,
$$\mathrm{co}\{\hat{a}_{ij}, \check{a}_{ij}\} = [\underline{a}_{ij}, \overline{a}_{ij}], \quad \mathrm{co}\{\hat{b}_{ij}, \check{b}_{ij}\} = [\underline{b}_{ij}, \overline{b}_{ij}], \quad \mathrm{co}\{\hat{c}_{ij}, \check{c}_{ij}\} = [\underline{c}_{ij}, \overline{c}_{ij}].$$
Of course, the above parameters $\alpha_{ij}$, $\beta_{ij}$, and $\gamma_{ij}$ ($i, j = 1, 2, \ldots, n$) in (6) depend on the initial conditions of network (1) and on the time $t$. It is obvious that, for $i \in \{1, 2, \ldots, n\}$, the set-valued map
$$x_i(t) \mapsto -x_i(t) + \sum_{j=1}^{n} \mathrm{co}\{\hat{a}_{ij}, \check{a}_{ij}\} f_j(x_j(t)) + \sum_{j=1}^{n} \mathrm{co}\{\hat{b}_{ij}, \check{b}_{ij}\} f_j(x_j(t-\tau_j)) + \sum_{j=1}^{n} \mathrm{co}\{\hat{c}_{ij}, \check{c}_{ij}\} \int_{t-h_j}^{t} f_j(x_j(s))\,ds + I_i$$
has nonempty compact convex values. Furthermore, it is upper semicontinuous. A solution $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$ (in the sense of Filippov) of network (1) with initial conditions $x_i(s) = \phi_i(s)$, $s \in [-\zeta, 0]$, is absolutely continuous on any compact interval of $[0, +\infty)$ and satisfies (5).

Now we define some concepts that are needed later.

Definition 1: The trajectory of network (1) is said to be uniformly stable in the Lagrange sense (or uniformly bounded) if, for any $H > 0$, there exists a constant $K = K(H) > 0$ such that $|x(t; \phi)| < K$ for all $\phi \in C_H$ and $t \ge 0$.

Definition 2: If there exist a radially unbounded and positive definite function $V(x)$, a functional $\mathcal{K} \in \mathscr{C}$, and positive constants $\ell$ and $\alpha$ such that, for any solution $x(t) = x(t; \phi)$ of network (1), $V(x(t)) > \ell$, $t \ge 0$, implies
$$V(x(t)) - \ell \le \mathcal{K}(\phi) \exp\{-\alpha t\}$$
then the trajectory of network (1) is said to be globally exponentially attractive with respect to $V$, and the compact set $\Omega := \{x \in \mathbb{R}^n \mid V(x) \le \ell\}$ is called a globally exponentially attractive set of network (1).

Definition 3: The trajectory of network (1) is called globally exponentially stable in the Lagrange sense if it is both uniformly stable in the Lagrange sense and globally exponentially attractive.

We end this section with three basic lemmas, which will be used in the proofs of the main results.

Lemma 1 (Young inequality): Let $\rho_1 > 0$, $\rho_2 > 0$, $\rho_3 > 1$, $\rho_4 > 1$, and $1/\rho_3 + 1/\rho_4 = 1$. Then, for any $\epsilon > 0$, we have
$$\rho_1 \rho_2 \le \frac{1}{\rho_3}\Big(\frac{\rho_1}{\epsilon}\Big)^{\rho_3} + \frac{1}{\rho_4}(\epsilon \rho_2)^{\rho_4}.$$
The equality holds if and only if $(\rho_1/\epsilon)^{\rho_3} = (\epsilon \rho_2)^{\rho_4}$.
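As a one-line numeric sanity check of Lemma 1 (our illustration, not part of the paper): with $\rho_3 = \rho_4 = 2$ the bound reads $\rho_1\rho_2 \le \frac{1}{2}(\rho_1/\epsilon)^2 + \frac{1}{2}(\epsilon\rho_2)^2$, with equality exactly when $\rho_1/\epsilon = \epsilon\rho_2$, i.e., $\epsilon = \sqrt{\rho_1/\rho_2}$.

```python
import numpy as np

rho1, rho2, rho3, rho4 = 3.0, 5.0, 2.0, 2.0            # 1/rho3 + 1/rho4 = 1
for eps in (0.5, 1.0, np.sqrt(rho1 / rho2)):            # the last eps attains equality
    lhs = rho1 * rho2
    rhs = (rho1 / eps) ** rho3 / rho3 + (eps * rho2) ** rho4 / rho4
    print(f"eps = {eps:.4f}: {lhs:.4f} <= {rhs:.4f}")   # final line prints 15 <= 15
```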

Lemma 2 [44]: Let $G \in C([t_0, +\infty), \mathbb{R})$, and suppose there exist positive constants $\kappa_1$ and $\kappa_2$ such that
$$D^+G(t) \le -\kappa_1 G(t) + \kappa_2, \quad t \ge t_0.$$
Then
$$G(t) - \frac{\kappa_2}{\kappa_1} \le \Big(G(t_0) - \frac{\kappa_2}{\kappa_1}\Big) \exp\{-\kappa_1(t - t_0)\}, \quad t \ge t_0.$$
In particular, if $G(t) \ge \kappa_2/\kappa_1$ for $t \ge t_0$, then $G(t)$ exponentially approaches $\kappa_2/\kappa_1$ as $t$ increases.

Lemma 3 [45]: For any constant matrix $X \in \mathbb{R}^{n \times n}$ with $X = X^T \ge 0$, any scalar $\nu > 0$, and any vector function $\omega: [0, \nu] \to \mathbb{R}^n$ such that the integrations concerned are well defined,
$$\nu \int_0^\nu \omega^T(s) X \omega(s)\,ds \ge \Big(\int_0^\nu \omega(s)\,ds\Big)^T X \Big(\int_0^\nu \omega(s)\,ds\Big).$$
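To see Lemma 3 concretely (our illustration), take the scalar case $X = 1$: it reduces to $\nu \int_0^\nu \omega(s)^2\,ds \ge (\int_0^\nu \omega(s)\,ds)^2$, the Cauchy-Schwarz form used at inequality (29) in Appendix IV. The window length $\nu = 0.01$ and the test function below are arbitrary choices.

```python
import numpy as np

nu = 0.01                                    # integration window, like h_i in Example 1
s = np.linspace(0.0, nu, 2001)
ds = s[1] - s[0]
omega = np.sin(400.0 * s) + 0.3              # any integrable test function
lhs = nu * np.sum(omega**2) * ds             # nu * integral of omega^2 (Riemann sum)
rhs = (np.sum(omega) * ds) ** 2              # (integral of omega)^2
print(lhs >= rhs, lhs, rhs)                  # True: the integral inequality of Lemma 3
```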

III. MAIN RESULTS

A. Bounded Feedback Functions

In this section, we consider network (1) with bounded feedback functions $f_i$, $i = 1, 2, \ldots, n$, i.e., $f \in \mathcal{B}$.

Theorem 1: Assume that $f(\cdot) \in \mathcal{B}$. Then the trajectory of network (1) is globally exponentially stable in the Lagrange sense. In addition, the compact sets $\Omega_i$ ($i = 1, 2$) are globally exponentially attractive sets of network (1), where
$$\Omega_1 = \Bigg\{ x \in \mathbb{R}^n \ \Bigg|\ \sum_{i=1}^{n} \frac{|x_i|^p}{p} \le \frac{\sum_{i=1}^{n} \frac{1}{p}\frac{1}{\epsilon_i^p} M_i^p}{\min_{1 \le i \le n}\big\{p - (p-1)\epsilon_i^{p/(p-1)}\big\}} \Bigg\}$$
where $p > 1$, $\epsilon_i > 0$, and $\max_{1 \le i \le n}\{\epsilon_i^{p/(p-1)}\} < p/(p-1)$, and
$$\Omega_2 = \Bigg\{ x \in \mathbb{R}^n \ \Bigg|\ \sum_{i=1}^{n} |x_i| \le \sum_{i=1}^{n} M_i \Bigg\}$$
with
$$M_i = \sum_{j=1}^{n} (\tilde{a}_{ij} + \tilde{b}_{ij} + h_j \tilde{c}_{ij}) k_j + |I_i|$$
in which $k_i > 0$, $i = 1, 2, \ldots, n$, are the saturation constants of $f(\cdot)$.

Proof: See Appendix II.

Remark 1: In Theorem 1 it is interesting to illustrate the relationship between $\Omega_1$ and $\Omega_2$. Clearly, the globally exponentially attractive set $\Omega_1$ only applies to the case $p > 1$ and cannot deal with the case $p = 1$; the globally exponentially attractive set $\Omega_2$ fills this gap. Thus, the sets $\Omega_1$ and $\Omega_2$ complement and enrich each other.

Remark 2: According to Theorem 1, the global exponential stability in the Lagrange sense of the trajectory of network (1) with bounded feedback functions is derived without any external conditions. In addition, the globally exponentially attractive sets can be easily computed. Therefore, for the analysis and design of memristive neurodynamic systems via neurodynamic approaches, the obtained results can narrow the search scope of attractors, which may bring a great advantage in practical applications.
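The bounds in Theorem 1 are directly computable from the network data. Below is a small helper (our sketch, not part of the paper) that evaluates $M_i$ and the radii of $\Omega_1$ and $\Omega_2$; the parameter matrices are the $\widetilde{A}$, $\widetilde{B}$, $\widetilde{C}$ data that will reappear in Example 1 of Section IV, with $p = 2$ and $\epsilon_i = 1$ as chosen there.

```python
import numpy as np

def lagrange_sets(a_t, b_t, c_t, h, k, I, p=2.0, eps=1.0):
    """Radii of the attractive sets Omega_1 and Omega_2 of Theorem 1.
    a_t, b_t, c_t: tilde-a/b/c matrices; h: windows h_j; k: saturation constants; I: inputs."""
    M = (a_t + b_t + c_t * h) @ k + np.abs(I)              # M_i = sum_j (...) k_j + |I_i|
    denom = np.min(p - (p - 1) * eps ** (p / (p - 1)))     # min_i {p - (p-1) eps_i^{p/(p-1)}}
    r1 = np.sum(M ** p / (p * eps ** p)) / denom           # Omega_1: sum_i |x_i|^p / p <= r1
    r2 = M.sum()                                           # Omega_2: sum_i |x_i| <= r2
    return M, r1, r2

# Data matching Example 1 with the bounded feedback (16), k_1 = k_2 = 3:
a_t = np.array([[0.5, 0.9], [0.2, 2.6]])
b_t = np.array([[2.3, 0.95], [0.25, 0.8]])
c_t = np.array([[0.005, 0.0], [0.0, 0.005]])
M, r1, r2 = lagrange_sets(a_t, b_t, c_t, h=np.array([0.01, 0.01]),
                          k=np.array([3.0, 3.0]), I=np.array([0.0001, 0.0001]))
print(M, r1, r2)   # [13.95025 11.55025], 164.0089..., 25.5005 (the values quoted in Example 1)
```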


B. Lurie-Type Feedback Functions

In this section, we consider network (1) with Lurie-type feedback functions $f_i$, $i = 1, 2, \ldots, n$, i.e., $f \in \mathcal{L}$.

Theorem 2: Assume that $f(\cdot) \in \mathcal{L}$. If the matrix $Q$ given in (7) is negative definite, then the trajectory of network (1) is globally exponentially stable in the Lagrange sense with respect to $W(x) = \sum_{i=1}^{n} \int_0^{x_i} f_i(y)\,dy$, where
$$Q = \begin{pmatrix} \widetilde{Q} & \dfrac{\widetilde{B}}{2} & \dfrac{\widetilde{C}}{2} \\[6pt] \dfrac{\widetilde{B}^T}{2} & -E_n + \Upsilon & 0 \\[6pt] \dfrac{\widetilde{C}^T}{2} & 0 & -E_n + \Lambda \end{pmatrix}_{3n \times 3n} \tag{7}$$
with
$$\widetilde{Q} = \frac{\widetilde{A} + \widetilde{A}^T}{2} + \Lambda^2 + E_n - \mathrm{diag}\Big\{\frac{1}{k_1}, \frac{1}{k_2}, \ldots, \frac{1}{k_n}\Big\} \tag{8}$$
in which $k_i > 0$, $i = 1, 2, \ldots, n$, are the Lurie constants of $f(\cdot)$. In addition, there exist positive constants $\gamma$ and $\eta$ such that the set $\Omega(\gamma, \eta)$ is a globally exponentially attractive set of network (1), where
$$\Omega(\gamma, \eta) = \Big\{ x \in \mathbb{R}^n \ \Big|\ W(x) \le \frac{\gamma}{\eta} \Big\}. \tag{9}$$
Proof: See Appendix IV.

Remark 3: In Theorem 2, how to choose appropriate parameters $\gamma$ and $\eta$ is an essential problem: only with suitable $\gamma$ and $\eta$ can we obtain the globally exponentially attractive set $\Omega(\gamma, \eta)$ of network (1). We provide a specific computational method in Appendix III.

Remark 4: According to Theorem 2, the global exponential stability in the Lagrange sense of the trajectory of network (1) with Lurie-type feedback functions depends only on the parameters of the network. Thus, the computational burden is greatly reduced. In addition, the globally exponentially attractive set can be effectively estimated. When searching for the ideal patterns via neurodynamic approaches, the time cost of the obtained results is therefore low.

Remark 5: When a feedback function is both bounded and Lurie-type, for instance $f(\vartheta) = \frac{1}{2}(|\vartheta + 1| - |\vartheta - 1|)$ or $f(\vartheta) = \vartheta/(\vartheta + 1)$, we can derive different attractive sets according to Theorems 1 and 2. One may, however, wonder which of Theorem 1 or 2 is better for finding the attractive sets. Currently, this is an open issue and will be a topic of future research.

Remark 6: In the existing literature, the Lagrange stability of conventional neural networks is guaranteed [35]-[40]. However, a memristive neural network model is a system family, and the existing methods do not yield any Lagrange stability criteria for it, whereas our criteria guarantee the global exponential stability in the Lagrange sense of memristive neural networks with various feedback functions. Therefore, our method is less conservative.

Remark 7: In [9] and [41]-[43], monostability analyses have been carried out for memristive neural networks with bounded feedback functions. In this paper, by contrast, two characteristics of the presented results are that: 1) memristive neural networks with general feedback functions (bounded and Lurie-type) are studied and 2) some new criteria for global exponential stability in the Lagrange sense are established. Lagrange stable memristive neurodynamic systems may have the multistability property, and the usual monostability conditions are generally not applicable to multistable systems. Hence, Theorems 1 and 2 improve and extend the existing results in the literature.

Remark 8: In [8], the multistability of linear threshold memristive neural networks was investigated. It is, however, worth observing that the linear threshold feedback function is only a special case of the Lurie-type feedback function. Furthermore, the Lyapunov functional constructed in this paper makes full use of the information of the feedback functions and the involved mixed delays. Thus, the Lyapunov functional presented in this paper is more general and desirable than that in [8], and the result in this paper is less conservative than that in [8].
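For a concrete network, checking the hypothesis of Theorem 2 reduces to assembling $Q$ from (7)-(8) and testing its eigenvalues. The sketch below is ours, not part of the paper; for concreteness it uses the parameter data that will reappear in Example 1 of Section IV, with Lurie constants $k_i = 1/8$, and reproduces the eigenvalues quoted there.

```python
import numpy as np

def theorem2_Q(A_t, B_t, C_t, tau, h, k):
    """Assemble the 3n x 3n matrix Q of (7)-(8) and report negative definiteness."""
    n = len(k)
    Ups, Lam, Z = np.diag(tau), np.diag(h), np.zeros((len(k), len(k)))
    Q_tilde = (A_t + A_t.T) / 2 + Lam @ Lam + np.eye(n) - np.diag(1.0 / np.asarray(k))
    Q = np.block([[Q_tilde,   B_t / 2,          C_t / 2],
                  [B_t.T / 2, -np.eye(n) + Ups, Z],
                  [C_t.T / 2, Z,                -np.eye(n) + Lam]])
    eig = np.linalg.eigvalsh(Q)          # Q is symmetric by construction
    return Q, eig, bool(np.all(eig < 0))

A_t = np.array([[0.5, 0.9], [0.2, 2.6]])
B_t = np.array([[2.3, 0.95], [0.25, 0.8]])
C_t = np.diag([0.005, 0.005])
Q, eig, nd = theorem2_Q(A_t, B_t, C_t, tau=[0.1, 0.2], h=[0.01, 0.01], k=[1/8, 1/8])
print(np.round(eig, 4), nd)   # approx [-6.8597 -4.3687 -0.99 -0.99 -0.8065 -0.5649], True
```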

C. Extension to Memristive Neural Networks With Discrete Delays

As an application of the results obtained in the preceding section, a class of memristive neural networks with discrete delays is discussed in this subsection:
$$\dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t)) f_j(x_j(t - \tau_j)) + I_i, \quad t \ge 0,\ i = 1, 2, \ldots, n. \tag{10}$$

Corollary 1: Assume that $f(\cdot) \in \mathcal{B}$. Then the trajectory of network (10) is globally exponentially stable in the Lagrange sense. In addition, the compact sets $\Omega_1$ and $\Omega_2$ defined in Theorem 1 remain globally exponentially attractive sets of network (10), now with
$$M_i = \sum_{j=1}^{n} (\tilde{a}_{ij} + \tilde{b}_{ij}) k_j + |I_i|$$
in which $k_i > 0$, $i = 1, 2, \ldots, n$, are the saturation constants of $f(\cdot)$.

Proof: The proof is an immediate consequence of Theorem 1.

Corollary 2: Assume that $f(\cdot) \in \mathcal{L}$. If the matrix $Q$ given in (11) is negative definite, then the trajectory of network (10) is globally exponentially stable in the Lagrange sense with respect to $W(x) = \sum_{i=1}^{n} \int_0^{x_i} f_i(y)\,dy$, where
$$Q = \begin{pmatrix} \widetilde{Q} & \dfrac{\widetilde{B}}{2} \\[6pt] \dfrac{\widetilde{B}^T}{2} & -E_n + \Upsilon \end{pmatrix}_{2n \times 2n} \tag{11}$$
with
$$\widetilde{Q} = \frac{\widetilde{A} + \widetilde{A}^T}{2} + E_n - \mathrm{diag}\Big\{\frac{1}{k_1}, \frac{1}{k_2}, \ldots, \frac{1}{k_n}\Big\} \tag{12}$$
in which $k_i > 0$, $i = 1, 2, \ldots, n$, are the Lurie constants of $f(\cdot)$. In addition, there exist positive constants $\gamma$ and $\eta$ such that the set
$$\Omega(\gamma, \eta) = \Big\{ x \in \mathbb{R}^n \ \Big|\ W(x) \le \frac{\gamma}{\eta} \Big\} \tag{13}$$
is a globally exponentially attractive set of network (10).

Proof: The proof is a direct result of Theorem 2.

D. Extension to Memristive Neural Networks Without Delays

In this subsection, we extend the obtained results to a class of memristive neural networks without delays:
$$\dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + I_i, \quad t \ge 0,\ i = 1, 2, \ldots, n. \tag{14}$$

Corollary 3: Assume that $f(\cdot) \in \mathcal{B}$. Then the trajectory of network (14) is globally exponentially stable in the Lagrange sense. In addition, the compact sets $\Omega_1$ and $\Omega_2$ defined in Theorem 1 remain globally exponentially attractive sets of network (14), now with
$$M_i = \sum_{j=1}^{n} \tilde{a}_{ij} k_j + |I_i|$$
in which $k_i > 0$, $i = 1, 2, \ldots, n$, are the saturation constants of $f(\cdot)$.

Corollary 4: Assume that $f(\cdot) \in \mathcal{L}$. If the matrix
$$Q = \frac{\widetilde{A} + \widetilde{A}^T}{2} + E_n - \mathrm{diag}\Big\{\frac{1}{k_1}, \frac{1}{k_2}, \ldots, \frac{1}{k_n}\Big\}$$
is negative definite, then the trajectory of network (14) is globally exponentially stable in the Lagrange sense with respect to $W(x) = \sum_{i=1}^{n} \int_0^{x_i} f_i(y)\,dy$, in which $k_i > 0$, $i = 1, 2, \ldots, n$, are the Lurie constants of $f(\cdot)$. In addition, there exist positive constants $\gamma$ and $\eta$ such that the set $\Omega(\gamma, \eta) = \{ x \in \mathbb{R}^n \mid W(x) \le \gamma/\eta \}$ is a globally exponentially attractive set of network (14).

IV. ILLUSTRATIVE EXAMPLES

In this section, we give three examples to illustrate the effectiveness of the proposed Lagrange stability criteria.

Example 1: Zeng and Zheng [46] introduced an analogous example for multistability analysis. Here, consider its generalized form with memristor characteristics:
$$\begin{cases} \dot{x}_1(t) = -x_1(t) + a_{11}(x_1(t)) f(x_1(t)) + a_{12}(x_1(t)) f(x_2(t)) + b_{11}(x_1(t)) f(x_1(t-0.1)) + b_{12}(x_1(t)) f(x_2(t-0.1)) + c_{11}(x_1(t)) \displaystyle\int_{t-0.01}^{t} f(x_1(s))\,ds + 0.0001 \\[8pt] \dot{x}_2(t) = -x_2(t) + a_{21}(x_2(t)) f(x_1(t)) + a_{22}(x_2(t)) f(x_2(t)) + b_{21}(x_2(t)) f(x_1(t-0.2)) + b_{22}(x_2(t)) f(x_2(t-0.2)) + c_{22}(x_2(t)) \displaystyle\int_{t-0.01}^{t} f(x_2(s))\,ds + 0.0001 \end{cases} \tag{15}$$
where the memristive weights switch according to (2)-(4):
$$a_{11}(x_1(t)) = \begin{cases} -0.5, & -\dfrac{d f(x_1(t))}{dt} - \dfrac{d x_1(t)}{dt} \le 0 \\ -0.45, & \text{otherwise} \end{cases} \qquad a_{12}(x_1(t)) = \begin{cases} 0.9, & \dfrac{d f(x_2(t))}{dt} - \dfrac{d x_1(t)}{dt} \le 0 \\ 0.85, & \text{otherwise} \end{cases}$$
$$b_{11}(x_1(t)) = \begin{cases} 2.3, & -\dfrac{d f(x_1(t-0.1))}{dt} - \dfrac{d x_1(t)}{dt} \le 0 \\ 2.25, & \text{otherwise} \end{cases} \qquad b_{12}(x_1(t)) = \begin{cases} -0.95, & \dfrac{d f(x_2(t-0.1))}{dt} - \dfrac{d x_1(t)}{dt} \le 0 \\ -0.9, & \text{otherwise} \end{cases}$$
$$c_{11}(x_1(t)) = \begin{cases} 0.001, & -\{f(x_1(t)) - f(x_1(t-0.01))\} - \dfrac{d x_1(t)}{dt} \le 0 \\ 0.005, & \text{otherwise} \end{cases}$$
$$a_{21}(x_2(t)) = \begin{cases} -0.2, & \dfrac{d f(x_1(t))}{dt} - \dfrac{d x_2(t)}{dt} \le 0 \\ -0.15, & \text{otherwise} \end{cases} \qquad a_{22}(x_2(t)) = \begin{cases} 2.6, & -\dfrac{d f(x_2(t))}{dt} - \dfrac{d x_2(t)}{dt} \le 0 \\ 2.55, & \text{otherwise} \end{cases}$$
$$b_{21}(x_2(t)) = \begin{cases} 0.25, & \dfrac{d f(x_1(t-0.2))}{dt} - \dfrac{d x_2(t)}{dt} \le 0 \\ 0.2, & \text{otherwise} \end{cases} \qquad b_{22}(x_2(t)) = \begin{cases} -0.8, & -\dfrac{d f(x_2(t-0.2))}{dt} - \dfrac{d x_2(t)}{dt} \le 0 \\ -0.75, & \text{otherwise} \end{cases}$$
$$c_{22}(x_2(t)) = \begin{cases} 0.001, & -\{f(x_2(t)) - f(x_2(t-0.01))\} - \dfrac{d x_2(t)}{dt} \le 0 \\ 0.005, & \text{otherwise.} \end{cases}$$

If we choose the feedback function
$$f(\vartheta) = \begin{cases} 3, & \vartheta \in [5, +\infty) \\ \vartheta - 2, & \vartheta \in [4, 5) \\ 2, & \vartheta \in [3, 4) \\ \vartheta - 1, & \vartheta \in [2, 3) \\ 1, & \vartheta \in [1, 2) \\ \vartheta, & \vartheta \in (-1, 1) \\ -1, & \vartheta \in (-2, -1] \\ \vartheta + 1, & \vartheta \in (-3, -2] \\ -2, & \vartheta \in (-\infty, -3] \end{cases} \tag{16}$$
then obviously $f \in \mathcal{B}$, so the trajectory of network (15) is globally exponentially stable in the Lagrange sense by Theorem 1. With the saturation constants $k_1 = k_2 = 3$, we calculate the parameters $M_1 = 13.95025$ and $M_2 = 11.55025$; letting $\epsilon_1 = \epsilon_2 = 1$ and $p = 2$, we obtain the globally exponentially attractive sets
$$\Omega_1 = \Big\{ x \in \mathbb{R}^2 \ \Big|\ \frac{x_1^2 + x_2^2}{2} \le 164.0089 \Big\}, \qquad \Omega_2 = \big\{ x \in \mathbb{R}^2 \ \big|\ |x_1| + |x_2| \le 25.5005 \big\}.$$
The simulation result of network (15) with the feedback function (16) from 500 initial values is shown in Fig. 1. By comparison, the results of Zeng and Zheng [46] yield a similar number of multistable equilibria for network (15) with the feedback function (16); here, however, the scope of the attractive domains is also obtained. Therefore, for the analysis and design of associative memories based on neural networks, one can easily see that the desired memory patterns must be located in the attractive sets; meanwhile, the attractive domains of such stable memory patterns can be easily estimated, which may provide a highly efficient design strategy.

On the other hand, suppose we choose the feedback function
$$f(\vartheta) = \frac{\vartheta + \tanh(\vartheta)}{16} \tag{17}$$

Fig. 1. Transient behaviors of network (15) with the feedback function (16).

Fig. 2. Transient behaviors of network (15) with the feedback function (17).

which is unbounded, with Lurie constants $k_1 = k_2 = 1/8$. Then
$$Q = \begin{pmatrix} -6.4999 & 0.55 & 1.15 & 0.475 & 0.0025 & 0 \\ 0.55 & -4.3999 & 0.125 & 0.4 & 0 & 0.0025 \\ 1.15 & 0.125 & -0.9 & 0 & 0 & 0 \\ 0.475 & 0.4 & 0 & -0.8 & 0 & 0 \\ 0.0025 & 0 & 0 & 0 & -0.99 & 0 \\ 0 & 0.0025 & 0 & 0 & 0 & -0.99 \end{pmatrix}$$
whose eigenvalues are $-6.8597$, $-4.3687$, $-0.99$, $-0.99$, $-0.8065$, and $-0.5649$, so the matrix $Q$ is negative definite. Hence, we analogously obtain that the trajectory of network (15) is globally exponentially stable in the Lagrange sense by Theorem 2. In addition, choosing $\xi = 0.6$,
$$Q(\xi) = \begin{pmatrix} -3.2999 & 0.55 & 1.15 & 0.475 & 0.0025 & 0 \\ 0.55 & -1.1999 & 0.125 & 0.4 & 0 & 0.0025 \\ 1.15 & 0.125 & -0.9 & 0 & 0 & 0 \\ 0.475 & 0.4 & 0 & -0.8 & 0 & 0 \\ 0.0025 & 0 & 0 & 0 & -0.99 & 0 \\ 0 & 0.0025 & 0 & 0 & 0 & -0.99 \end{pmatrix}$$
whose eigenvalues are $-3.8891$, $-1.4672$, $-0.99$, $-0.99$, $-0.7366$, and $-0.1069$, so we can take $\mu = 0.1069$. Choosing $\tilde{\epsilon} = 0.01$, one can easily calculate the parameter $\gamma = 0.005$, and we then select an appropriate $\eta = 0.01$. Therefore, the globally exponentially attractive set is
$$\Omega = \Big\{ x \in \mathbb{R}^2 \ \Big|\ \int_0^{x_1} \frac{\vartheta + \tanh(\vartheta)}{16}\,d\vartheta + \int_0^{x_2} \frac{\vartheta + \tanh(\vartheta)}{16}\,d\vartheta \le 0.5 \Big\}.$$
The simulation result of network (15) with the feedback function (17) from 80 initial values is shown in Fig. 2. In Figs. 1 and 2, the program DDE23.m in MATLAB is used to integrate the delay differential equations, and the state-dependent nonsmooth characteristic is handled numerically via conditional control statements (if-else).

Example 2: Consider the linear saturation memristive neurodynamic system
$$\dot{x}(t) = -x(t) + b(x(t)) f(x(t - 0.1)) + 0.001 \tag{18}$$
where $f(\vartheta) = \frac{1}{4}(|\vartheta + 1| - |\vartheta - 1|)$ and
$$b(x(t)) = \begin{cases} 1.3, & -\dfrac{d f(x(t-0.1))}{dt} - \dfrac{d x(t)}{dt} \le 0 \\[4pt] -1.3, & -\dfrac{d f(x(t-0.1))}{dt} - \dfrac{d x(t)}{dt} > 0. \end{cases}$$
It is not difficult to see that this feedback function is both bounded and Lurie-type. Obviously, the trajectory of network (18) is globally exponentially stable in the Lagrange sense by Corollary 1. Calculating the parameter $M = 0.651$ and taking $\epsilon = 1$, $p = 2$, we obtain the globally exponentially attractive sets
$$\Omega_1 = \Big\{ x \in \mathbb{R} \ \Big|\ \frac{x^2}{2} \le 0.2119 \Big\}, \qquad \Omega_2 = \{ x \in \mathbb{R} \mid |x| \le 0.651 \}.$$
According to Corollary 2, choosing $\xi = 0.75$ and $\tilde{\epsilon} = 0.01$, we can get $\gamma = 0.000025$ and $\eta = 0.001$. Thus, the globally exponentially attractive set is $\Omega = \{ x \in \mathbb{R} \mid \int_0^{x} \frac{1}{4}(|\vartheta + 1| - |\vartheta - 1|)\,d\vartheta \le 0.025 \}$. Clearly, one cannot easily compare the pros and cons of the different attractive sets $\{\Omega_1, \Omega_2\}$ and $\Omega$. The general rule is that the attractive sets $\Omega_1$ and $\Omega_2$ are simple and explicit while, relatively speaking, the attractive set $\Omega$ is somewhat complicated. The simulation result of network (18) from 60 initial values is described in Fig. 3.

Fig. 3. Transient behaviors of network (18) with 60 initial values.
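As a cross-check (ours, not part of the paper), the closed-form quantities reported in Examples 1 and 2 can be reproduced in a few lines; for Example 2, the saturation constant is $k = 1/2$ since $|f| \le 1/2$.

```python
import numpy as np

# Example 1 (bounded case): attractive-set radii from M_1, M_2
M = np.array([13.95025, 11.55025])
print(np.sum(M**2) / 2)            # 164.0089... -> the Omega_1 bound
print(M.sum())                     # 25.5005    -> the Omega_2 bound

# Example 2: M = b_tilde * k + |I| with b_tilde = 1.3, k = 1/2, I = 0.001
M2 = 1.3 * 0.5 + 0.001
print(M2, M2**2 / 2)               # 0.651 and 0.21190..., matching Omega_2 and Omega_1
```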

Example 3: Consider an Ikeda-type oscillator with memristor characteristics
$$\dot{x}(t) = -x(t) + b(x(t)) \sin(x(t - 1))$$
where
$$b(x(t)) = \begin{cases} 1.3, & -\dfrac{d \sin(x(t-1))}{dt} - \dfrac{d x(t)}{dt} \le 0 \\[4pt] -1.3, & -\dfrac{d \sin(x(t-1))}{dt} - \dfrac{d x(t)}{dt} > 0. \end{cases}$$
Obviously, the trajectory of the network in Example 3 is globally exponentially stable in the Lagrange sense by Corollary 1. Calculating the parameter $M = 1.3$ and taking $\epsilon = 1$, $p = 2$,

we obtain the globally exponentially attractive sets
$$\Omega_1 = \Big\{ x \in \mathbb{R} \ \Big|\ \frac{x^2}{2} \le 0.845 \Big\}, \qquad \Omega_2 = \{ x \in \mathbb{R} \mid |x| \le 1.3 \}.$$
The simulation result of the network in Example 3 from 80 initial values is shown in Fig. 4. As is well known, a plethora of complex nonlinear behaviors, including chaos, appears in Ikeda-type oscillators, so an analytical study of the basic oscillator is necessary. Here, the location characteristic of the oscillating attractors and the scope of the attractive domains are obtained; consequently, these theoretical results can narrow the search field of optimization computation and chaos control.

Fig. 4. Transient behaviors of the network in Example 3 with 80 initial values.

Remark 9: It is worth pointing out that the results above in Examples 1-3 cannot be obtained using any existing results.

V. CONCLUSION

Memristive neurodynamic systems have received considerable attention over the last few years. Differently from traditional neural systems, memristive neural systems are characterized by state-dependent nonlinear system families. In this paper, we exploit the Lagrange stability of a class of memristive neural networks with discrete and distributed delays. It is clear that the proposed methodology is potentially applicable to the analysis of other general nonlinear hybrid systems. Memristive neural networks have demonstrated high efficiency in numerous applications, and they remain an interesting and important topic for further investigation.

APPENDIX I
LEMMA 4 AND ITS PROOF

Lemma 4: Let $x(t) := x(t; \phi)$ be a solution of network (1), and let the $f_i$ satisfy $x_i f_i(x_i) \ge 0$, $x_i \in \mathbb{R}$, $i = 1, 2, \ldots, n$. Define the function
$$V(t) = \sum_{i=1}^{n} \int_0^{x_i(t)} f_i(y)\,dy + \sum_{i=1}^{n} \int_{t-\tau_i}^{t} f_i^2(x_i(s))\,ds + \sum_{i=1}^{n} h_i \int_{-h_i}^{0}\!\!\int_{t+\theta}^{t} f_i^2(x_i(s))\,ds\,d\theta, \quad t \ge 0$$
where $0 \le \tau_i \le \tau$ and $0 \le h_i \le h$, $i = 1, 2, \ldots, n$, are constants. If there exist positive constants $\hat{\alpha}$, $\hat{\beta}$, and $\hat{\gamma}$ such that
$$D^+V(t) \le -\hat{\alpha} \sum_{i=1}^{n} \int_0^{x_i(t)} f_i(y)\,dy - \hat{\beta} \sum_{i=1}^{n} f_i^2(x_i(t)) + \hat{\gamma}, \quad t \ge 0$$
then, for any $\hat{\eta} > 0$ with $\hat{\eta} \le \hat{\alpha}$ and $\max_{1 \le i \le n}\{\hat{\eta}\tau_i \exp\{\hat{\eta}\tau_i\} + \hat{\eta}h_i^3 \exp\{\hat{\eta}h_i\}\} \le \hat{\beta}$, we have
$$\sum_{i=1}^{n} \int_0^{x_i(t)} f_i(y)\,dy - \frac{\hat{\gamma}}{\hat{\eta}} \le \mathcal{K}(\phi) \exp\{-\hat{\eta}t\}, \quad t \ge 0$$
where $\mathcal{K} \in \mathscr{C}$ is given by
$$\mathcal{K}(\phi) = \sum_{i=1}^{n} \int_0^{\phi_i(0)} f_i(y)\,dy + \sum_{i=1}^{n} \int_{-\tau_i}^{0} \big(1 + \hat{\eta}\tau_i \exp\{\hat{\eta}\tau_i\}\big) f_i^2(\phi_i(s))\,ds + \sum_{i=1}^{n} h_i \int_{-h_i}^{0}\!\!\int_{\theta}^{0} f_i^2(\phi_i(s))\,ds\,d\theta + \sum_{i=1}^{n} \hat{\eta}h_i^3 \exp\{\hat{\eta}h_i\} \int_{-h_i}^{0} f_i^2(\phi_i(s))\,ds.$$

Proof: Consider the function
$$W(t) = \exp\{\hat{\eta}t\}\Big(V(t) - \frac{\hat{\gamma}}{\hat{\eta}}\Big), \quad t \ge 0.$$
Then
$$D^+W(t) = \hat{\eta}\exp\{\hat{\eta}t\}V(t) - \exp\{\hat{\eta}t\}\hat{\gamma} + \exp\{\hat{\eta}t\}D^+V(t) \le \hat{\eta}\exp\{\hat{\eta}t\}V(t) - \exp\{\hat{\eta}t\}\hat{\gamma} + \exp\{\hat{\eta}t\}\Big(-\hat{\alpha}\sum_{i=1}^{n}\int_0^{x_i(t)} f_i(y)\,dy - \hat{\beta}\sum_{i=1}^{n} f_i^2(x_i(t)) + \hat{\gamma}\Big). \tag{19}$$
Substituting the definition of $V(t)$ and using $\hat{\eta} \le \hat{\alpha}$ gives
$$D^+W(t) \le \exp\{\hat{\eta}t\}\Big[(\hat{\eta}-\hat{\alpha})\sum_{i=1}^{n}\int_0^{x_i(t)} f_i(y)\,dy - \hat{\beta}\sum_{i=1}^{n} f_i^2(x_i(t)) + \hat{\eta}\sum_{i=1}^{n}\int_{t-\tau_i}^{t} f_i^2(x_i(s))\,ds + \hat{\eta}\sum_{i=1}^{n} h_i\int_{-h_i}^{0}\!\!\int_{t+\theta}^{t} f_i^2(x_i(s))\,ds\,d\theta\Big] \le \exp\{\hat{\eta}t\}\Big[-\hat{\beta}\sum_{i=1}^{n} f_i^2(x_i(t)) + \hat{\eta}\sum_{i=1}^{n}\int_{t-\tau_i}^{t} f_i^2(x_i(s))\,ds + \hat{\eta}\sum_{i=1}^{n} h_i\int_{-h_i}^{0}\!\!\int_{t+\theta}^{t} f_i^2(x_i(s))\,ds\,d\theta\Big].$$
Let $\tilde{t} > 0$ be arbitrarily given. Integrating the above inequality from $0$ to $\tilde{t}$ yields
$$\exp\{\hat{\eta}\tilde{t}\}\Big(V(\tilde{t}) - \frac{\hat{\gamma}}{\hat{\eta}}\Big) \le V(0) - \frac{\hat{\gamma}}{\hat{\eta}} + \sum_{i=1}^{n}\int_0^{\tilde{t}}\hat{\eta}\exp\{\hat{\eta}t\}\int_{t-\tau_i}^{t} f_i^2(x_i(s))\,ds\,dt - \sum_{i=1}^{n}\int_0^{\tilde{t}}\hat{\beta}\exp\{\hat{\eta}t\} f_i^2(x_i(t))\,dt + \sum_{i=1}^{n}\int_0^{\tilde{t}}\hat{\eta}\exp\{\hat{\eta}t\} h_i\int_{-h_i}^{0}\!\!\int_{t+\theta}^{t} f_i^2(x_i(s))\,ds\,d\theta\,dt. \tag{20}$$
Changing the order of integration in the double-integral term $\int_0^{\tilde{t}}\hat{\eta}\exp\{\hat{\eta}t\}\int_{t-\tau_i}^{t} f_i^2(x_i(s))\,ds\,dt$ in (20), it follows that
$$\int_0^{\tilde{t}}\hat{\eta}\exp\{\hat{\eta}t\}\int_{t-\tau_i}^{t} f_i^2(x_i(s))\,ds\,dt = \hat{\eta}\int_{-\tau_i}^{\tilde{t}}\Big(\int_{\max\{s,0\}}^{\min\{s+\tau_i,\tilde{t}\}}\exp\{\hat{\eta}t\}\,dt\Big) f_i^2(x_i(s))\,ds \le \hat{\eta}\int_{-\tau_i}^{\tilde{t}} \tau_i\exp\{\hat{\eta}(s+\tau_i)\} f_i^2(x_i(s))\,ds \le \hat{\eta}\tau_i\exp\{\hat{\eta}\tau_i\}\Big(\int_{-\tau_i}^{0} f_i^2(\phi_i(s))\,ds + \int_0^{\tilde{t}}\exp\{\hat{\eta}s\} f_i^2(x_i(s))\,ds\Big). \tag{21}$$
Similarly to (21),
$$\int_0^{\tilde{t}}\hat{\eta}\exp\{\hat{\eta}t\} h_i\int_{-h_i}^{0}\!\!\int_{t+\theta}^{t} f_i^2(x_i(s))\,ds\,d\theta\,dt \le \int_0^{\tilde{t}}\hat{\eta}\exp\{\hat{\eta}t\} h_i^2\int_{t-h_i}^{t} f_i^2(x_i(s))\,ds\,dt \le \hat{\eta}h_i^3\exp\{\hat{\eta}h_i\}\Big(\int_{-h_i}^{0} f_i^2(\phi_i(s))\,ds + \int_0^{\tilde{t}}\exp\{\hat{\eta}s\} f_i^2(x_i(s))\,ds\Big). \tag{22}$$
Recalling that $x_i(s) = \phi_i(s)$ for $s \in [-\zeta, 0]$ and $\max_{1 \le i \le n}\{\hat{\eta}\tau_i\exp\{\hat{\eta}\tau_i\} + \hat{\eta}h_i^3\exp\{\hat{\eta}h_i\}\} \le \hat{\beta}$, from (20)-(22) we obtain
$$\exp\{\hat{\eta}\tilde{t}\}\Big(V(\tilde{t}) - \frac{\hat{\gamma}}{\hat{\eta}}\Big) \le V(0) - \frac{\hat{\gamma}}{\hat{\eta}} + \sum_{i=1}^{n}\hat{\eta}\tau_i\exp\{\hat{\eta}\tau_i\}\int_{-\tau_i}^{0} f_i^2(\phi_i(s))\,ds + \sum_{i=1}^{n}\hat{\eta}h_i^3\exp\{\hat{\eta}h_i\}\int_{-h_i}^{0} f_i^2(\phi_i(s))\,ds + \sum_{i=1}^{n}\Big(\hat{\eta}\tau_i\exp\{\hat{\eta}\tau_i\} + \hat{\eta}h_i^3\exp\{\hat{\eta}h_i\} - \hat{\beta}\Big)\int_0^{\tilde{t}}\exp\{\hat{\eta}s\} f_i^2(x_i(s))\,ds \le V(0) - \frac{\hat{\gamma}}{\hat{\eta}} + \sum_{i=1}^{n}\hat{\eta}\tau_i\exp\{\hat{\eta}\tau_i\}\int_{-\tau_i}^{0} f_i^2(\phi_i(s))\,ds + \sum_{i=1}^{n}\hat{\eta}h_i^3\exp\{\hat{\eta}h_i\}\int_{-h_i}^{0} f_i^2(\phi_i(s))\,ds.$$
Since $\tilde{t}$ is arbitrary, $\sum_{i=1}^{n}\int_0^{x_i(t)} f_i(y)\,dy \le V(t)$, and
$$V(0) = \sum_{i=1}^{n}\int_0^{\phi_i(0)} f_i(y)\,dy + \sum_{i=1}^{n}\int_{-\tau_i}^{0} f_i^2(\phi_i(s))\,ds + \sum_{i=1}^{n} h_i\int_{-h_i}^{0}\!\!\int_{\theta}^{0} f_i^2(\phi_i(s))\,ds\,d\theta$$
it follows that
$$\sum_{i=1}^{n}\int_0^{x_i(t)} f_i(y)\,dy - \frac{\hat{\gamma}}{\hat{\eta}} \le \mathcal{K}(\phi)\exp\{-\hat{\eta}t\}, \quad t \ge 0 \tag{23}$$
with $\mathcal{K}(\phi)$ as defined in the statement of the lemma. Obviously, $\mathcal{K} \in \mathscr{C}$. The proof is completed.

APPENDIX II
PROOF OF THEOREM 1

We first prove that the trajectory of network (1) is uniformly stable in the Lagrange sense. Let $V(x(t)) = \sum_{i=1}^{n} |x_i(t)|^p / p$, where $p > 1$. Choose $\epsilon_i > 0$ ($i = 1, 2, \ldots, n$) such that $\max_{1 \le i \le n}\{\epsilon_i^{p/(p-1)}\} < p/(p-1)$. Evaluating the upper right Dini derivative of $V$ along the trajectory of (5) or (6) gives
$$D^+V(x(t)) = \sum_{i=1}^{n} |x_i(t)|^{p-1} D^+|x_i(t)| \le -\sum_{i=1}^{n} |x_i(t)|^p + \sum_{i=1}^{n}\Big[\sum_{j=1}^{n}(\tilde{a}_{ij} + \tilde{b}_{ij} + h_j\tilde{c}_{ij})k_j + |I_i|\Big]|x_i(t)|^{p-1} = -\sum_{i=1}^{n} |x_i(t)|^p + \sum_{i=1}^{n} M_i |x_i(t)|^{p-1}.$$
According to Lemma 1,
$$M_i |x_i(t)|^{p-1} \le \frac{p-1}{p}\epsilon_i^{p/(p-1)} |x_i(t)|^p + \frac{1}{p}\frac{1}{\epsilon_i^p} M_i^p$$
and therefore
$$D^+V(x(t)) \le -\sum_{i=1}^{n}\Big(1 - \frac{p-1}{p}\epsilon_i^{p/(p-1)}\Big)|x_i(t)|^p + \sum_{i=1}^{n}\frac{1}{p}\frac{1}{\epsilon_i^p} M_i^p \le -\min_{1 \le i \le n}\big\{p - (p-1)\epsilon_i^{p/(p-1)}\big\} V(x(t)) + \sum_{i=1}^{n}\frac{1}{p}\frac{1}{\epsilon_i^p} M_i^p.$$
Through Lemma 2,
$$V(x(t)) - \frac{\sum_{i=1}^{n}\frac{1}{p}\frac{1}{\epsilon_i^p} M_i^p}{\min_{1 \le i \le n}\{p - (p-1)\epsilon_i^{p/(p-1)}\}} \le \Bigg[V(x(0)) - \frac{\sum_{i=1}^{n}\frac{1}{p}\frac{1}{\epsilon_i^p} M_i^p}{\min_{1 \le i \le n}\{p - (p-1)\epsilon_i^{p/(p-1)}\}}\Bigg]\exp\Big\{-\min_{1 \le i \le n}\big\{p - (p-1)\epsilon_i^{p/(p-1)}\big\}\,t\Big\}, \quad t \ge 0. \tag{24}$$
In particular, (24) implies that, for any $H > 0$ and $\phi \in C_H$,
$$\sum_{i=1}^{n} |x_i(t)|^p \le nH^p + \frac{\sum_{i=1}^{n}\frac{1}{\epsilon_i^p} M_i^p}{\min_{1 \le i \le n}\{p - (p-1)\epsilon_i^{p/(p-1)}\}} =: K^p, \quad t \ge 0$$
which immediately implies the uniform boundedness of the solutions of network (1). Hence, the trajectory of network (1) is uniformly stable in the Lagrange sense.

In addition, noticing that
$$V(x(0)) - \frac{\sum_{i=1}^{n}\frac{1}{p}\frac{1}{\epsilon_i^p} M_i^p}{\min_{1 \le i \le n}\{p - (p-1)\epsilon_i^{p/(p-1)}\}} \le V(x(0)) = \frac{1}{p}\sum_{i=1}^{n}|\phi_i(0)|^p =: \mathcal{K}(\phi)$$
we have $\mathcal{K} \in \mathscr{C}$, and from (24) it follows that
$$V(x(t)) - \frac{\sum_{i=1}^{n}\frac{1}{p}\frac{1}{\epsilon_i^p} M_i^p}{\min_{1 \le i \le n}\{p - (p-1)\epsilon_i^{p/(p-1)}\}} \le \mathcal{K}(\phi)\exp\Big\{-\min_{1 \le i \le n}\big\{p - (p-1)\epsilon_i^{p/(p-1)}\big\}\,t\Big\}, \quad t \ge 0.$$
Through Definition 2, network (1) is globally exponentially attractive and $\Omega_1$ is a globally exponentially attractive set. This proves the global exponential stability in the Lagrange sense of the trajectory of network (1).

Next, we prove that $\Omega_2$ is also a globally exponentially attractive set. Let $V(x(t)) = \sum_{i=1}^{n} |x_i(t)|$. Evaluating the upper right Dini derivative of $V$ along the trajectory of (5) or (6) gives
$$D^+V(x(t)) \le \sum_{i=1}^{n}\Big[-|x_i(t)| + \sum_{j=1}^{n}(\tilde{a}_{ij} + \tilde{b}_{ij} + h_j\tilde{c}_{ij})k_j + |I_i|\Big] = -V(x(t)) + \sum_{i=1}^{n} M_i. \tag{25}$$
Applying Lemma 2 again,
$$V(x(t)) - \sum_{i=1}^{n} M_i \le \Big[V(x(0)) - \sum_{i=1}^{n} M_i\Big]\exp\{-t\}, \quad t \ge 0.$$
Let $\mathcal{K}(\phi) = V(x(0)) = V(\phi(0)) = \sum_{i=1}^{n} |\phi_i(0)|$; then $\mathcal{K} \in \mathscr{C}$, and from (25) it follows that $V(x(t)) - \sum_{i=1}^{n} M_i \le \mathcal{K}(\phi)\exp\{-t\}$, $t \ge 0$. Through Definition 2, $\Omega_2$ is a globally exponentially attractive set. The proof is completed.

APPENDIX III
COMPUTATIONAL METHOD

Since the matrix $Q$ given in (7) is negative definite, $\widetilde{Q}$ is also negative definite, and there exists $0 < \xi < 1$ such that $\widetilde{Q}(\xi)$ and $Q(\xi)$ are also negative definite, where
$$\widetilde{Q}(\xi) = \frac{\widetilde{A} + \widetilde{A}^T}{2} + \Lambda^2 + E_n - \xi\,\mathrm{diag}\Big\{\frac{1}{k_1}, \frac{1}{k_2}, \ldots, \frac{1}{k_n}\Big\} \tag{26}$$
$$Q(\xi) = \begin{pmatrix} \widetilde{Q}(\xi) & \dfrac{\widetilde{B}}{2} & \dfrac{\widetilde{C}}{2} \\[6pt] \dfrac{\widetilde{B}^T}{2} & -E_n + \Upsilon & 0 \\[6pt] \dfrac{\widetilde{C}^T}{2} & 0 & -E_n + \Lambda \end{pmatrix}_{3n \times 3n}. \tag{27}$$
Let $-\mu$ be the maximal eigenvalue of $Q(\xi)$, where $\mu > 0$. Choose $0 < \tilde{\epsilon} < \mu$ and define $\gamma = \sum_{i=1}^{n} I_i^2 / (4\tilde{\epsilon})$. Let $0 < \eta \le (1 - \xi)$ be such that $\max_{1 \le i \le n}\{\eta\tau_i\exp\{\eta\tau_i\} + \eta h_i^3\exp\{\eta h_i\}\} \le \mu - \tilde{\epsilon}$. This completes the construction of the parameters $\gamma$ and $\eta$.

APPENDIX IV
PROOF OF THEOREM 2

In the following, we show that the trajectory of network (1) is uniformly stable in the Lagrange sense and that $\Omega(\gamma, \eta)$ as defined in (9) is a globally exponentially attractive set.

For any given solution $x(t) = x(t; \phi)$ of network (1), consider
$$V(t) = \sum_{i=1}^{n}\int_0^{x_i(t)} f_i(y)\,dy + \sum_{i=1}^{n}\int_{t-\tau_i}^{t} f_i^2(x_i(s))\,ds + \sum_{i=1}^{n} h_i\int_{-h_i}^{0}\!\!\int_{t+\theta}^{t} f_i^2(x_i(s))\,ds\,d\theta, \quad t \ge 0.$$
Evaluating the upper right Dini derivative of $V$ along the trajectory of (5) or (6) gives
$$D^+V(x(t)) \le \sum_{i=1}^{n}\Big[-x_i(t) f_i(x_i(t)) + \sum_{j=1}^{n}\tilde{a}_{ij}|f_i(x_i(t))||f_j(x_j(t))| + \sum_{j=1}^{n}\tilde{b}_{ij}|f_i(x_i(t))||f_j(x_j(t-\tau_j))| + \sum_{j=1}^{n}\tilde{c}_{ij}\int_{t-h_j}^{t}|f_j(x_j(s))||f_i(x_i(t))|\,ds + |f_i(x_i(t))||I_i| + f_i^2(x_i(t)) - f_i^2(x_i(t-\tau_i))\Big] + \sum_{i=1}^{n} h_i^2 f_i^2(x_i(t)) - \sum_{i=1}^{n} h_i\int_{t-h_i}^{t} f_i^2(x_i(s))\,ds$$
in which we split $-x_i(t) f_i(x_i(t)) = -(1-\xi)x_i(t) f_i(x_i(t)) - \xi x_i(t) f_i(x_i(t))$, where the parameter $\xi$ is defined as in the computational method of Appendix III. $\tag{28}$

On the basis of Lemma 3,
$$h_i\int_{t-h_i}^{t} f_i^2(x_i(s))\,ds \ge \Big(\int_{t-h_i}^{t} f_i(x_i(s))\,ds\Big)\Big(\int_{t-h_i}^{t} f_i(x_i(s))\,ds\Big). \tag{29}$$
From (28) and (29), using $\int_0^{x_i(t)} f_i(y)\,dy \le x_i(t) f_i(x_i(t))$ (by the monotonicity of $f_i \in \mathcal{M}$), $\xi x_i(t) f_i(x_i(t)) \ge \xi f_i^2(x_i(t))/k_i$ (by the Lurie sector condition), and $|f_i(x_i(t))||I_i| \le \tilde{\epsilon} f_i^2(x_i(t)) + I_i^2/(4\tilde{\epsilon})$,
$$D^+V(x(t)) \le -(1-\xi)\sum_{i=1}^{n}\int_0^{x_i(t)} f_i(y)\,dy + \gamma + \sum_{i=1}^{n}\tilde{\epsilon} f_i^2(x_i(t)) + \zeta^T(t)\,Q(\xi)\,\zeta(t) \le -(1-\xi)\sum_{i=1}^{n}\int_0^{x_i(t)} f_i(y)\,dy - \sum_{i=1}^{n}(\mu - \tilde{\epsilon}) f_i^2(x_i(t)) + \gamma \tag{30}$$
where the parameters $\tilde{\epsilon}$, $\gamma$, $\mu$ and the matrix $Q(\xi)$ are defined as in the computational method of Appendix III, and
$$\zeta(t) = \Big(|f(x(t))|^T,\ |f(x(t-\tau))|^T,\ \Big(\int_{t-h}^{t}|f(x(s))|\,ds\Big)^T\Big)^T$$
with
$$|f(x(t))| = (|f_1(x_1(t))|, |f_2(x_2(t))|, \ldots, |f_n(x_n(t))|)^T$$
$$|f(x(t-\tau))| = (|f_1(x_1(t-\tau_1))|, |f_2(x_2(t-\tau_2))|, \ldots, |f_n(x_n(t-\tau_n))|)^T$$
$$\int_{t-h}^{t}|f(x(s))|\,ds = \Big(\int_{t-h_1}^{t}|f_1(x_1(s))|\,ds, \int_{t-h_2}^{t}|f_2(x_2(s))|\,ds, \ldots, \int_{t-h_n}^{t}|f_n(x_n(s))|\,ds\Big)^T.$$
Through Lemma 4 (applied with $\hat{\alpha} = 1 - \xi \ge \eta$, $\hat{\beta} = \mu - \tilde{\epsilon}$, $\hat{\gamma} = \gamma$, and $\hat{\eta} = \eta$, which is admissible by the construction in Appendix III), there exists $\mathcal{K} \in \mathscr{C}$ given by
$$\mathcal{K}(\phi) = \sum_{i=1}^{n}\int_0^{\phi_i(0)} f_i(y)\,dy + \sum_{i=1}^{n}\int_{-\tau_i}^{0}\big(1 + \eta\tau_i\exp\{\eta\tau_i\}\big) f_i^2(\phi_i(s))\,ds + \sum_{i=1}^{n} h_i\int_{-h_i}^{0}\!\!\int_{\theta}^{0} f_i^2(\phi_i(s))\,ds\,d\theta + \sum_{i=1}^{n}\eta h_i^3\exp\{\eta h_i\}\int_{-h_i}^{0} f_i^2(\phi_i(s))\,ds$$
such that
$$\sum_{i=1}^{n}\int_0^{x_i(t)} f_i(y)\,dy - \frac{\gamma}{\eta} \le \mathcal{K}(\phi)\exp\{-\eta t\}, \quad t \ge 0 \tag{31}$$
where the positive parameters $\gamma$ and $\eta$ are those constructed in Appendix III.

From (31) we have $\sum_{i=1}^{n}\int_0^{x_i(t)} f_i(y)\,dy \le \mathcal{K}(\phi) + \gamma/\eta$, and for any $H > 0$ and $\phi \in C_H$ the quantity $\mathcal{K}(\phi) + \gamma/\eta$ is bounded. Combining this with $f(\cdot) \in \mathcal{L}$, we get that $x(t)$ is uniformly bounded; hence the trajectory of network (1) is uniformly stable in the Lagrange sense. In addition, by Definition 2 and inequality (31), the set $\Omega(\gamma, \eta)$ as defined in (9) is a globally exponentially attractive set. This proves the global exponential stability in the Lagrange sense of the trajectory of network (1), and the proof is completed.
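The parameter construction of Appendix III is mechanical and can be automated. The sketch below is ours, not part of the paper; it is specialized to the $2n \times 2n$ matrix of Corollary 2 and applied to the data of Example 2 (where $\tilde{a} = 0$, $\tilde{b} = 1.3$, $\tau = 0.1$, Lurie constant $k = 1/2$, $I = 0.001$). The halving search for $\eta$ is an arbitrary choice; any admissible $\eta$ works, and the paper selects $\eta = 0.001$.

```python
import numpy as np

def corollary2_gamma_eta(A_t, B_t, tau, k, I, xi, eps_t):
    """Appendix III construction, specialized to the 2n x 2n matrix Q of Corollary 2."""
    n = len(k)
    Qt = (A_t + A_t.T) / 2 + np.eye(n) - xi * np.diag(1.0 / np.asarray(k))
    Qxi = np.block([[Qt, B_t / 2], [B_t.T / 2, -np.eye(n) + np.diag(tau)]])
    mu = -np.max(np.linalg.eigvalsh(Qxi))               # -mu is the maximal eigenvalue of Q(xi)
    assert mu > 0 and 0 < eps_t < mu
    gamma = np.sum(np.asarray(I) ** 2) / (4 * eps_t)    # gamma = sum_i I_i^2 / (4 eps~)
    eta = 1.0 - xi                                      # start from the largest admissible value
    while np.max(eta * np.asarray(tau) * np.exp(eta * np.asarray(tau))) > mu - eps_t:
        eta /= 2                                        # shrink until the Lemma 4 condition holds
    return mu, gamma, eta

mu, gamma, eta = corollary2_gamma_eta(np.zeros((1, 1)), np.array([[1.3]]),
                                      tau=[0.1], k=[0.5], I=[0.001], xi=0.75, eps_t=0.01)
print(mu, gamma, eta)   # mu ~ 0.02 and gamma = 2.5e-05 (i.e., 0.000025, as in Example 2)
```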

ACKNOWLEDGMENT

The authors would like to thank the Associate Editor and the anonymous reviewers for their constructive comments and suggestions to improve the quality of the paper.

REFERENCES

[1] K. D. Cantley, A. Subramaniam, H. J. Stiegler, R. A. Chapman, and E. M. Vogel, “Hebbian learning in spiking neural networks with nanocrystalline silicon TFTs and memristive synapses,” IEEE Trans. Nanotechnol., vol. 10, no. 5, pp. 1066–1073, Sep. 2011.
[2] K. D. Cantley, A. Subramaniam, H. J. Stiegler, R. A. Chapman, and E. M. Vogel, “Neural learning circuits utilizing nanocrystalline silicon transistors and memristors,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 4, pp. 565–573, Apr. 2012.
[3] M. Itoh and L. O. Chua, “Memristor cellular automata and memristor discrete-time cellular neural networks,” Int. J. Bifurcat. Chaos, vol. 19, no. 11, pp. 3605–3656, Nov. 2009.
[4] H. Kim, M. P. Sah, C. J. Yang, T. Roska, and L. O. Chua, “Neural synaptic weighting with a pulse-based memristor circuit,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 59, no. 1, pp. 148–158, Jan. 2012.
[5] Y. V. Pershin and M. Di Ventra, “Experimental demonstration of associative memory with memristive neural networks,” Neural Netw., vol. 23, no. 7, pp. 881–886, Sep. 2010.
[6] F. Z. Wang, N. Helian, S. N. Wu, X. Yang, Y. K. Guo, G. Lim, and M. M. Rashid, “Delayed switching applied to memristor neural networks,” J. Appl. Phys., vol. 111, no. 7, pp. 07E317-1–07E317-3, 2012.
[7] A. L. Wu, S. P. Wen, and Z. G. Zeng, “Synchronization control of a class of memristor-based recurrent neural networks,” Inf. Sci., vol. 183, no. 1, pp. 106–116, Jan. 2012.
[8] A. L. Wu and Z. G. Zeng, “Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays,” Neural Netw., vol. 36, pp. 1–10, Dec. 2012.
[9] A. L. Wu and Z. G. Zeng, “Exponential stabilization of memristive neural networks with time delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 12, pp. 1919–1929, Dec. 2012.
[10] H. Huang, G. Feng, and J. Cao, “Robust state estimation for uncertain neural networks with time-varying delay,” IEEE Trans. Neural Netw., vol. 19, no. 8, pp. 1329–1339, Aug. 2008.
[11] T. W. Huang, “Robust stability of delayed fuzzy Cohen-Grossberg neural networks,” Comput. Math. Appl., vol. 61, no. 8, pp. 2247–2250, Apr. 2011.
[12] T. W. Huang, C. D. Li, S. K. Duan, and J. A. Starzyk, “Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 6, pp. 866–875, Jun. 2012.
[13] X. Liu and J. Cao, “Robust state estimation for neural networks with discontinuous activations,” IEEE Trans. Syst. Man Cybern. B, Cybern., vol. 40, no. 6, pp. 1425–1437, Dec. 2010.


[14] D. Liu, Z. Pang, and S. R. Lloyd, “A neural network method for detection of obstructive sleep apnea and narcolepsy based on pupil size and EEG,” IEEE Trans. Neural Netw., vol. 19, no. 2, pp. 308–318, Feb. 2008.
[15] Y. Liu, Z. Wang, and X. Liu, “State estimation for discrete-time Markovian jumping neural networks with mixed mode-dependent delays,” Phys. Lett. A, vol. 372, no. 48, pp. 7147–7155, Dec. 2008.
[16] Y. Shen and J. Wang, “Almost sure exponential stability of recurrent neural networks with Markovian switching,” IEEE Trans. Neural Netw., vol. 20, no. 5, pp. 840–855, May 2009.
[17] Y. Shen and J. Wang, “Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 1, pp. 87–96, Jan. 2012.
[18] D. Wang, D. Liu, and Q. Wei, “Finite-horizon neuro-optimal tracking control for a class of discrete-time nonlinear systems using adaptive dynamic programming approach,” Neurocomputing, vol. 78, no. 1, pp. 14–22, Feb. 2012.
[19] Z. Wang and D. Liu, “Data-based controllability and observability analysis of linear discrete-time systems,” IEEE Trans. Neural Netw., vol. 22, no. 12, pp. 2388–2392, Dec. 2011.
[20] Z. Wang, Y. Liu, and X. Liu, “State estimation for jumping recurrent neural networks with discrete and distributed delays,” Neural Netw., vol. 22, no. 1, pp. 41–48, Jan. 2009.
[21] W. Wu and T. P. Chen, “Global synchronization criteria of linearly coupled neural network systems with time-varying coupling,” IEEE Trans. Neural Netw., vol. 19, no. 2, pp. 319–332, Feb. 2008.
[22] W. Wu, W. J. Zhou, and T. P. Chen, “Cluster synchronization of linearly coupled complex networks under pinning control,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 56, no. 4, pp. 829–839, Apr. 2009.
[23] X. Yang, J. Cao, and J. Lu, “Synchronization of Markovian coupled neural networks with nonidentical node-delays and random coupling strengths,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 1, pp. 60–71, Jan. 2012.
[24] X. Yang, J. Cao, and J. Lu, “Stochastic synchronization of complex networks with nonidentical nodes via hybrid adaptive and impulsive control,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 59, no. 2, pp. 371–384, Feb. 2012.
[25] X. Yang, J. Cao, and J. Lu, “Synchronization of randomly coupled neural networks with Markovian jumping and time-delay,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 60, no. 2, pp. 363–376, Feb. 2013.
[26] Z. Yi, “Foundations of implementing the competitive layer model by Lotka-Volterra recurrent neural networks,” IEEE Trans. Neural Netw., vol. 21, no. 3, pp. 494–507, Mar. 2010.
[27] Z. Yi, L. Zhang, J. L. Yu, and K. K. Tan, “Permitted and forbidden sets in discrete-time linear threshold recurrent neural networks,” IEEE Trans. Neural Netw., vol. 20, no. 6, pp. 952–963, Jun. 2009.
[28] W. Yu, P. C. Francisco, and X. Li, “Two-stage neural sliding mode control of magnetic levitation in minimal invasive surgery,” Neural Comput. Appl., vol. 20, no. 8, pp. 1141–1147, Nov. 2011.
[29] W. Yu and X. Li, “Automated nonlinear system modeling with multiple fuzzy neural networks and kernel smoothing,” Int. J. Neural Syst., vol. 20, no. 5, pp. 429–435, Oct. 2010.
[30] H. Zhang, J. Liu, D. Ma, and Z. Wang, “Data-core-based fuzzy min–max neural network for pattern classification,” IEEE Trans. Neural Netw., vol. 22, no. 12, pp. 2339–2352, Dec. 2011.
[31] H. Zhang, Y. Luo, and D. Liu, “Neural-network-based near-optimal control for a class of discrete-time affine nonlinear systems with control constraints,” IEEE Trans. Neural Netw., vol. 20, no. 9, pp. 1490–1503, Sep. 2009.
[32] H. Zhang, T. Ma, G. Huang, and Z. Wang, “Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control,” IEEE Trans. Syst. Man Cybern. B, Cybern., vol. 40, no. 3, pp. 831–844, Jun. 2010.
[33] H. Zhang and Y. Wang, “Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays,” IEEE Trans. Neural Netw., vol. 19, no. 2, pp. 366–370, Feb. 2008.
[34] Q. Zhu and J. Cao, “Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 3, pp. 467–479, Mar. 2012.


[35] X. X. Liao, Q. Luo, and Z. G. Zeng, “Positive invariant and global exponential attractive sets of neural networks with time-varying delays,” Neurocomputing, vol. 71, nos. 4–6, pp. 513–518, Jan. 2008.
[36] X. X. Liao, Q. Luo, Z. G. Zeng, and Y. X. Guo, “Global exponential stability in Lagrange sense for recurrent neural networks with time delays,” Nonlinear Anal. Real World Appl., vol. 9, no. 4, pp. 1535–1557, Sep. 2008.
[37] Q. Luo, Z. G. Zeng, and X. X. Liao, “Global exponential stability in Lagrange sense for neutral type recurrent neural networks,” Neurocomputing, vol. 74, no. 4, pp. 638–645, Jan. 2011.
[38] Z. W. Tu, J. G. Jian, and K. Wang, “Global exponential stability in Lagrange sense for recurrent neural networks with both time-varying delays and general activation functions via LMI approach,” Nonlinear Anal. Real World Appl., vol. 12, no. 4, pp. 2174–2182, Aug. 2011.
[39] B. X. Wang, J. G. Jian, and M. H. Jiang, “Stability in Lagrange sense for Cohen-Grossberg neural networks with time-varying delays and finite distributed delays,” Nonlinear Anal. Hybrid Syst., vol. 4, no. 1, pp. 65–78, Feb. 2010.
[40] X. H. Wang, M. H. Jiang, and S. L. Fang, “Stability analysis in Lagrange sense for a non-autonomous Cohen-Grossberg neural network with mixed delays,” Nonlinear Anal. Theory Methods Appl., vol. 70, no. 12, pp. 4294–4306, Jun. 2009.
[41] S. P. Wen, Z. G. Zeng, and T. W. Huang, “Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays,” Neurocomputing, vol. 97, pp. 233–240, Nov. 2012.
[42] G. D. Zhang, Y. Shen, and J. W. Sun, “Global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays,” Neurocomputing, vol. 97, pp. 149–154, Nov. 2012.
[43] G. D. Zhang, Y. Shen, Q. Yin, and J. W. Sun, “Global exponential periodicity and stability of a class of memristor-based recurrent neural networks with multiple delays,” Inf. Sci., vol. 232, pp. 386–396, May 2013.
[44] X. X. Liao, Theory and Applications of Stability for Dynamical Systems. Beijing, China: National Defence Industry Press, 2000.
[45] K. Q. Gu, “An integral inequality in the stability problem of time-delay systems,” in Proc. 39th IEEE Conf. Decision Control, Sydney, Australia, Dec. 2000, pp. 2805–2810.
[46] Z. G. Zeng and W. X. Zheng, “Multistability of neural networks with time-varying delays and concave-convex characteristics,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 2, pp. 293–305, Feb. 2012.

Ailong Wu received the Ph.D. degree in systems analysis and integration from Huazhong University of Science and Technology, Wuhan, China, in 2013. He has been a Post-Doctoral Research Fellow with the Institute for Information and System Science, Xi’an Jiaotong University, Xi’an, China, since 2013. He is currently a Lecturer with the College of Mathematics and Statistics, Hubei Normal University, Huangshi, China. His current research interests include nonlinear dynamics, hybrid systems, and associative memories.

Zhigang Zeng (SM’07) received the Ph.D. degree in systems analysis and integration from Huazhong University of Science and Technology, Wuhan, China, in 2003. He is currently a Professor with the School of Automation, Huazhong University of Science and Technology, Wuhan, China, and also with the Key Laboratory of Image Processing and Intelligent Control of the Education Ministry of China, Wuhan, China. His current research interests include neural networks, switched systems, computational intelligence, stability analysis of dynamic systems, pattern recognition, and associative memories.
