Neural Networks 60 (2014) 74–83


Noise cancellation of memristive neural networks

Shiping Wen a,b, Zhigang Zeng a,b,∗, Tingwen Huang c, Xinghuo Yu d

a Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
b Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China
c Texas A&M University at Qatar, Doha 5825, Qatar
d Platform Technologies Research Institute, RMIT University, VIC 3001, Australia

Highlights

• A scalable massively parallel architecture improves the hardware functionality.
• The state of A-GST memristive devices decays over long times.
• The computation results can be read out without altering the states of the synapses.
• A-GST memristive devices make the memristive synapses robust.



Article history:
Received 2 May 2014
Received in revised form 20 June 2014
Accepted 31 July 2014
Available online 8 August 2014

Keywords: Memristor; Stability; Neural network

Abstract

This paper investigates the noise cancellation problem of memristive neural networks. Based on reproducible gradual resistance tuning in bipolar mode, a first-order voltage-controlled memristive model with asymmetric voltage thresholds is employed. Since memristive devices are small enough to be densely packed in crossbar-like structures and possess the long-term memory needed by neuromorphic synapses, this paper shows how to approximate the behavior of synapses in neural networks using this memristive device. Certain templates of memristive neural networks are also established to implement the noise cancellation. © 2014 Elsevier Ltd. All rights reserved.

1. Introduction

The sequential fetch, decode, and execute processing of instructions through the classical von Neumann bottleneck of conventional digital computers has resulted in less efficient machines as their ecosystems have grown increasingly complex (Jo et al., 2010). Though current digital computers possess the computing speed and complexity to emulate the brain functionality of animals such as a spider, mouse, or cat (Ananthanarayanan, Eser, Simon, & Modha, 2009; Smith, 2006), the associated energy dissipation grows exponentially along the hierarchy of animal intelligence. For example, to perform certain cortical simulations at the cat scale, even at an 83 times slower firing rate, the IBM team in Ananthanarayanan et al. (2009) had

∗ Corresponding author at: Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China. Tel.: +86 18971124190; fax: +86 27 87543130. E-mail addresses: [email protected] (S. Wen), [email protected] (Z. Zeng), [email protected] (T. Huang), [email protected] (X. Yu). 0893-6080/© 2014 Elsevier Ltd. All rights reserved.

to employ Blue Gene/P (BG/P), a supercomputer equipped with 147,456 CPUs and 144 TB of main memory. On the other hand, the human brain not only contains more than 100 billion neurons, each with more than 20,000 synapses, but also consumes very little power and maintains its memory over long periods, even decades. Therefore, it is very important to build a brain-like machine. Furthermore, the implementation of neuromorphic circuits and chips has long been hindered by area and power consumption restrictions: tens of transistors and capacitors are needed to emulate a single synapse (Rachmuth & Poon, 2008). In particular, when neural connectivity is high, a large part of a neuromorphic chip is devoted to synapses, whereas neurons occupy only a small portion in comparison. However, shrinking the current transistor size further is very difficult. Therefore, it is critical to introduce a more efficient approach to implementing neuromorphic circuits and chips. Memristors, the fourth fundamental circuit element theoretically proposed by Leon Chua in 1971 (Chua, 1971), are two-terminal electronic devices that memorize the charge that has flowed through them. They are resistive in essence, but their resistance can be altered electrically


with nonlinear properties. Since the first realization of a working memristor in a Pt/TiO2/Pt structure was announced by Hewlett–Packard Laboratories (Strukov, Snider, Stewart, & Williams, 2008), memristors and memristive devices have been widely investigated for their prospective applications in nonvolatile memories (Kwon et al., 2010; Muenstermann, Menke, Dittmann, & Waser, 2010), logic devices (Borghetti et al., 2010; Linn, Rosezin, Tappertzhofen, Böttger, & Waser, 2012), neuromorphic devices (Chang, Jo, & Lu, 2011; Jo et al., 2010; Krzysteczko, Münchenberger, Schäfers, Reiss, & Thomas, 2012), and neuromorphic self-organized computation and learning (Snider, 2007, 2011). Furthermore, researchers have packed memristors into crossbars to form dense memories (Shin, Kim, & Kang, 2012) and have designed integrated circuitry compatible with CMOS processes (Xia et al., 2009); memristors can therefore be integrated with conventional integrated circuitry. Although the state of a memristor decays over time, the time constant can be as long as weeks to decades; Kim et al. reported stable retention characteristics for more than 10^4 s at 85 °C (Kim, Park et al., 2011). All this makes memristors candidates for synaptic circuits. Owing to these promising applications, various memristive materials, such as ferroelectric materials (Jiang et al., 2011), chalcogenide materials (Li et al., 2013; Pandian et al., 2009; Soni et al., 2011; Sun, Hou, Wu, & Tang, 2009), and metal oxides (Yang et al., 2010; Yu, Wu, & Wong, 2011), have attracted great attention.
Several physical mechanisms have been proposed to explain the memristive behaviors, such as electronic barrier modulation from the migration of oxygen vacancies (Chang et al., 2011; Kim, Siddik et al., 2011; Krzysteczko et al., 2012; Yang et al., 2008, 2010), voltage-controlled domain configuration (Chanthbouala et al., 2012), formation and annihilation of conducting filaments via diffusion of oxygen vacancies (Kwon et al., 2010; Yan, Guo, Zhang, & Liu, 2011), and trapping of charge carriers (Fujimoto, Koyama, Nishi, & Suzuki, 2007; Peng et al., 2012) and of metal ions from electrodes (Bishop et al., 2011; Huang, Shi, Yeo, Yi, & Zhao, 2012). The model of a voltage-controlled memristor with thresholds was first introduced by Pershin and Ventra (2010). As an example of the feasibility of the memristor model, the memristance of an Ovonic chalcogenide device was selected by Chua (1971) and has been investigated recently, for instance the filamentary bipolar memristive switching in Ge2Sb2Te5 (GST) (Woo et al., 2011). GST has been proved to be an ideal choice among memristive materials; however, the resistance switching behaviors in Sb-rich GST based devices rely on the formation of conductive paths (Pandian, Kooi, Palasantzas, & De Hosson, 2007). The intrinsic memristance of stoichiometric crystalline GST was revealed by Li et al. (2013). A voltage-controlled memristive system with symmetric voltage thresholds can be viewed as a resistance that is modulated within a certain range, depending on the voltage polarity and amplitude, to make the filament conduct or rupture. It is worth noting that neural networks have been widely investigated in recent years for their immense application prospects (Guo, Wang, & Yan, 2013, 2014; He, Li, Huang, & Li, 2014; He, Li, & Shu, 2012; Li, Liao, Li, Huang, & Li, 2011; Liu & Wang, 2008, 2013; Luo & Wu, 2012; Wen, Zeng, & Huang, 2013a, 2013b; Wu & Luo, 2012).
A creative information processing system called the cellular neural network (CNN) was proposed by Chua and Yang in 1988; it grew out of the Hopfield neural network and cellular automata as an effective combination of the characteristics of both (Chua & Yang, 1988a, 1988b). Many applications have been developed in areas such as combinatorial optimization, knowledge acquisition, and pattern recognition. In recent years, increasing attention has been paid to the design and analysis of memristive neural networks, such as memristor-based spiking neuromorphic


networks implementing the Spike-Timing-Dependent Plasticity (STDP) rule (Afifi, Ayatollahi, & Raissi, 2009), memristors implementing the neighborhood connections of a cellular neural network (Ebong & Mazumder, 2012; Lehtonen & Laiho, 2010), and simple associative memory circuits based on memristor-based synapses (Pershin & Ventra, 2010). These pioneering works are meaningful for further investigation of memristive neural networks and provide novel ideas for implementing memristive artificial intelligence. However, several concerns remain in the design of memristive neural networks, such as:

• which specific materials can be used as memristive devices in these networks;
• how to handle memristive device variation;
• how to realize real-time monitoring of the memristor state and off-line training, as directly programming the resistance of a single memristor to a target value is likely impossible (Jo et al., 2010);
• how to design memristive neuromorphic circuits that implement the function of noise cancellation.

To overcome these difficulties, GST memristors are used as binary memories, and corresponding algorithms are proposed to implement the transfer function. Meanwhile, exploiting their stable retention and threshold characteristics, GST memristors are employed to implement the synaptic circuits for neural networks. Furthermore, the state evolution is implemented via the voltage-controlled threshold properties.

2. Memristance of amorphous-Ge2Sb2Te5 (A-GST)

A typical crossbar-structure A-GST based memristor is fabricated by micro/nano processes, as shown in Fig. 1. A-GST is sandwiched between Cu and Cu/Ag electrodes, where the Ag layer serves as the cation source. This two-terminal device can be used as a neuromorphic synapse, and an electrical system is built to perform electrical measurements. The states of this device can be switched between amorphous and crystalline ones. To investigate the memristance of A-GST, the device is first set to the crystalline state. Then, memristive behavior can be observed in crystalline GST under a clockwise voltage sweep 0.6 V → −1.5 V → 0.6 V, as demonstrated in Fig. 1. The resistance can be modulated within a certain range by switching the voltage polarity: the lowest resistance state (LRS) is below 3 kΩ, the highest resistance state (HRS) is over 18 kΩ, and the switching voltage is 0.35 V. To simplify the model of the A-GST memristive device, a threshold memristive model (Pershin & Ventra, 2010) is considered as follows:

I = M^{-1} V_M,    (1)

\dot M = f(V_M)\left[\theta(V_M)\,\theta\!\left(\frac{M}{R_1}-1\right) + \theta(-V_M)\,\theta\!\left(1-\frac{M}{R_2}\right)\right],

f(V) = -\beta V + \frac{\beta-\alpha}{2}\bigl(|V+V_L| - |V-V_R| + V_R - V_L\bigr),


where I and V_M represent the current through and the voltage drop on the device, respectively; M is the internal state variable, i.e., the memristance R; \alpha and \beta characterize the rate of memristance change when |V_M| is below or above the threshold voltage, respectively; V_L and V_R are the threshold voltages; and the unit step functions \theta(\cdot) guarantee that the memristance can change only between R_1 and R_2. According to (1), if V_L \le V_M \le V_R, then \dot M = 0 and the device keeps a constant resistance under relatively low operating voltage. Based on this threshold feature, Querlioz et al. investigated cancellation of device variations in a spiking neural network with



Fig. 1. I-V characteristics of the device, exhibiting a memristive hysteresis loop. The blue arrows show the directions of sweeping voltage, and HRS and LRS represent high and low resistance states respectively.

memristive nanodevices (Querlioz, Bichler, Dollfus, & Gamrat, 2013), and Gao et al. proposed a hybrid CMOS/memristor implementation of a programmable threshold logic gate (Gao, Alibart, & Strukov, 2013). These works open new directions for applying memristive devices. Building on them, this paper proposes a scheme to realize real-time monitoring of the memristor state and off-line training, in the case that directly programming the resistance of a single memristor to a target value is likely impossible.
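A minimal Python sketch of the threshold model (1) may help fix ideas; the parameter values below are illustrative assumptions, not measured device constants:

```python
# Sketch of the threshold memristive model (1); parameters are illustrative.
ALPHA, BETA = 0.0, 1.0e4   # memristance-change rates below/above threshold
V_L = V_R = 0.35           # threshold magnitudes (assumed symmetric here)
R1, R2 = 3.0e3, 18.0e3     # LRS and HRS bounds (ohms)

def f(V):
    return -BETA * V + (BETA - ALPHA) / 2 * (abs(V + V_L) - abs(V - V_R) + V_R - V_L)

def theta(x):              # unit step function
    return 1.0 if x > 0 else 0.0

def step(M, V, dt=1e-2):
    # one forward-Euler step of dM/dt; clipping keeps M within [R1, R2]
    dM = f(V) * (theta(V) * theta(M / R1 - 1) + theta(-V) * theta(1 - M / R2))
    return min(max(M + dM * dt, R1), R2)

M = R2                     # start in the high-resistance state
for _ in range(1000):      # positive pulse above the 'On' threshold
    M = step(M, 1.0)       # drives M down to the LRS bound R1
```

With \alpha = 0, sub-threshold inputs give f(V) = −\alpha V = 0 and leave the state unchanged, matching the retention property used above.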

Fig. 2. (a) Memristive synapse, (b) discrete approximation implementation of this synapse, much of this circuitry can be shared by multiple synapses.

3. Neuromorphic learning and circuit implementation

As a memristor records the electrical excitations applied to it and undergoes corresponding resistance changes, like a biological synapse, with very little decay over long periods of time, several breakthroughs have been motivated in the design of memristive neuromorphic systems and neural networks (Jo et al., 2010). However, it is difficult to realize real-time monitoring of the memristor state and off-line training, as directly programming the resistance of a single memristor to a target value is likely impossible. It is therefore necessary to implement the memristive synapses discretely, in order to maximize noise cancellation, minimize power dissipation, and so on. It is particularly desirable to implement the weights through binary memristive devices that communicate via spikes rather than analog voltages, as biology itself uses discrete spikes to communicate. Hence, a scheme is needed to approximate continuous signals and models with discrete ones. Based on the threshold A-GST memristive device, each device is used to store a single bit: the HRS represents a logic 0 bit value and the LRS a logic 1 bit value. Because of the threshold voltages, low voltages have a negligible effect on these devices, while larger voltages can rapidly change their resistance. Therefore, a positive voltage above the positive 'On' threshold switches an A-GST memristive device into the LRS, while a negative voltage below the negative 'Off' threshold switches it into the HRS. When an analog voltage input x drops through an A-GST memristive synapse whose memconductance is \omega, we get the output

y = \omega x,    (4)

where y denotes the desired synaptic current. An abstract circuit implementing memristive synapses is given in Fig. 2. This schematic is a clocked (synchronous) circuit that is evaluated at discrete time steps. First, the continuous analog input signal x is transformed into a discrete digital approximation by a 'thermometer' code. This coding method converts analog signals within the range [0, 1] into a discrete version. Although this encoding is less efficient than a binary numerical one, it is much simpler to implement; an example is given in Table 1. In this coding method, we set the first M bits of an N-bit codeword to 1 and the rest to 0, where

M = \lfloor x(N+1) \rfloor

and x is the analog input value. Hence, the precision of the discrete approximation of the analog input value x is determined by the value chosen for N. An auxiliary continuous variable \hat x can be defined as

\hat x \equiv \frac{1}{N}\sum_{i=1}^{N} x_i,    (5)

where \hat x \cong x. In the same way, the weight variable \omega is replaced with N binary variables \omega_1, \ldots, \omega_N, and

\hat\omega \equiv \frac{1}{N}\sum_{i=1}^{N} \omega_i,    (6)

where \hat\omega \cong \omega. To implement the circuit in Fig. 2, an analog input x is encoded by the thermometer code to produce the code vector (x_1, \ldots, x_N), which is sent in parallel to a circular shift register. This shift register is then clocked N times to send spikes through the binary switches \omega_i to an integrator, which accumulates the weighted spikes in the variable y. The algorithm is presented as follows:

Algorithm 1 Discrete implementation of the transfer function
Initialization: encode x → x_1, \ldots, x_N and \omega → \omega_1, \ldots, \omega_N; set y ← 0; i ← 1.
1: while i ≤ N do
2:   j ← 0;
3:   while j ≤ N − 1 do
4:     y ← y + (1/N^2) x_{mod(i+j,N)} \omega_i;  j ← j + 1;
5:   end while
6:   i ← i + 1;
7: end while
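A Python sketch of the thermometer encoding and Algorithm 1 follows (illustrative, not the authors' implementation; the floor in M = ⌊x(N+1)⌋ is inferred from Table 1):

```python
import math

def thermometer(x, N=4):
    # first M bits are 1, the rest 0; M = floor(x*(N+1)), clipped to N
    M = min(int(math.floor(x * (N + 1))), N)
    return [1] * M + [0] * (N - M)

def transfer(x_bits, w_bits):
    # Algorithm 1: y accumulates (1/N^2) * x_{mod(i+j,N)} * w_i over all i, j
    N = len(x_bits)
    y = 0.0
    for i in range(N):
        for j in range(N):
            y += x_bits[(i + j) % N] * w_bits[i] / N ** 2
    return y   # equals (mean of w_bits) * (mean of x_bits), i.e. w_hat * x_hat
```

Because each inner loop cycles through every bit of x once, the double sum factors into the product of the two bit averages, which is exactly the approximation ω̂x̂ ≈ ωx.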



Table 1
An example of 4-bit thermometer code.

Analog in    0.00  0.25  0.35  0.41  0.65  0.80  0.99
Digital out  0000  1000  1000  1100  1110  1111  1111

Fig. 3. Circuit implementation of a single memristive switch.

From this algorithm, we can get

y = \frac{1}{N^2}\sum_{j=0}^{N-1}\sum_{i=1}^{N} x_{\mathrm{mod}(i+j,N)}\,\omega_i
  = \frac{1}{N}\sum_{i=1}^{N}\omega_i\left(\frac{1}{N}\sum_{j=0}^{N-1} x_{\mathrm{mod}(i+j,N)}\right)
  = \frac{1}{N}\sum_{i=1}^{N}\omega_i\,\hat x
  = \hat\omega\,\hat x \approx \omega x.

Fig. 4. Circuitry input signals f(x_i) and g(x_i) (N = 4).


For a single memristive switch, as shown in Fig. 3, during the evaluation of the transfer function, S1 is switched to the 'state' position and S2 is closed. A narrow spike is sent in each step when x_i = 1; no spike is sent when x_i = 0. These spikes are below the threshold voltages and therefore cannot alter the value of the A-GST memristive device. During the learning stage, which occurs last in a major cycle, S1 is switched to the 'learn' position, and f(x_i) and g(x_i) cooperate to refresh the state of \omega_i; S2 is kept closed in order to discharge the integrator and reset the variable y to 0. To implement Fig. 2, each major cycle is divided into N + 1 steps, as shown in Fig. 4. The first N steps are used to evaluate the transfer function, so that an approximation of the desired output signal \omega x can be computed, and the final step is used to implement the synaptic weight update.

Fig. 5. Two-dimensional memristive neural network grid.

4. Memristive neural networks and applications

In this section, a circuit scheme of a two-dimensional memristive neural network (MNN) is designed. An MNN is composed of several processing units, and each unit, denoted U(i, j), is connected to its adjacent ones; therefore, only neighboring units can interact directly with each other. All units are arranged in a grid with M rows and N columns, as shown in Fig. 5. For each processing unit U(i, j), the following set N_r(i, j), named the r-neighborhood, can be defined:

N_r(i,j) = \bigl\{\, U(k,l) \mid \max(|k-i|,\, |l-j|) \le r,\; 1 \le k \le M,\; 1 \le l \le N \,\bigr\},
where r is a positive integer denoting the neighborhood radius of each unit, and the pairs (i, j) and (k, l) are indices which express the position of units in the grid. In practice, the units which belong to the r-neighborhood of U (i, j) are arranged in a maximum (2r + 1) × (2r + 1) grid whose central element coincides with U (i, j).
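The r-neighborhood above can be sketched as a small Python helper (illustrative; it assumes the Chebyshev-distance reading of the definition):

```python
def neighborhood(i, j, r, M, N):
    # units U(k, l) within Chebyshev distance r of U(i, j) on the M x N grid
    return [(k, l)
            for k in range(1, M + 1)
            for l in range(1, N + 1)
            if max(abs(k - i), abs(l - j)) <= r]
```

An interior unit with r = 1 has the full (2r+1)×(2r+1) = 9-unit neighborhood, while corner units have truncated ones.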

One unit of the connected neural network is presented in Fig. 6. The node voltage V_{xij} of U(i, j) denotes the state of the unit, and its initial value is assumed to be no greater than 1 in magnitude. The node voltage V_{uij} is the input of U(i, j) and is assumed to satisfy |V_{uij}| ≤ 1, and V_{yij} is the output of U(i, j). After certain mathematical transformations, the dynamics of each processing unit are given by the following equations:

\dot x_{ij}(t) = -x_{ij}(t) + \sum_{U(k,l)\in N_r(i,j)} A_{ij,kl}\, y_{kl}(t) + \sum_{U(k,l)\in N_r(i,j)} B_{ij,kl}\, u_{kl}(t) + I_{ij},
y_{ij}(t) = f(x_{ij}(t)) = \tfrac{1}{2}\bigl(|x_{ij}(t)+1| - |x_{ij}(t)-1|\bigr),    (8)



where i = 1, \ldots, M, j = 1, \ldots, N; x_{ij}(t), u_{ij}(t) and y_{ij}(t) are the state, the input and the output of the (i, j)th processing unit in the grid. The initial condition satisfies |x_{ij}(0)| \le 1 and the static input satisfies |u_{ij}| \le 1. A_{ij,kl} and B_{ij,kl} denote the connection weights from processing unit U(k, l) to unit U(i, j), and I_{ij} represents the bias of the (i, j)th unit in the grid. From Eq. (8), it follows that the state and the output of each unit are affected by the inputs and the outputs of its adjacent units. The working structure of Eq. (8) and the output function are depicted in Fig. 7. Eq. (8) can be rewritten in the compact matrix form

\dot x = -x(t) + \hat A y(x(t)) + \hat B u + \hat I,
y(x(t)) = f(x(t)) = \tfrac{1}{2}\bigl(|x(t)+1| - |x(t)-1|\bigr),    (9)

or, component-wise,

\dot x_i = -x_i(t) + \sum_{j=1}^{n} \hat a_{ij}\, y_j(x_j(t)) + \sum_{j=1}^{n} \hat b_{ij}\, u_j + \hat I_i,
y_i(x_i(t)) = f(x_i(t)) = \tfrac{1}{2}\bigl(|x_i(t)+1| - |x_i(t)-1|\bigr),

where the state vector x(t) = [x_1(t), \ldots, x_n(t)]^T, the output vector y(x(t)) = [y_1(x_1(t)), \ldots, y_n(x_n(t))]^T, the static input vector u = [u_1, \ldots, u_n]^T, and the output function vector f(x(t)) = [f_1(x_1(t)), \ldots, f_n(x_n(t))]^T; \hat A = \{\hat a_{ij}\} \in \mathbb{R}^{n\times n} and \hat B = \{\hat b_{ij}\} \in \mathbb{R}^{n\times n} are the feedback and control matrices, respectively; n = M \times N; and \hat I = [I, \ldots, I]^T is the vector containing the bias of each unit.

Fig. 6. (a) The structure of the memristive unit circuit, where C, R_x and R_y are the capacitor and resistors; I is an independent current source; I_{xu}(i, j; k, l) and I_{xy}(i, j; k, l) are linear voltage-controlled current sources with the characteristics I_{xy}(i, j; k, l) = A(i, j; k, l)V_{ykl} and I_{xu}(i, j; k, l) = B(i, j; k, l)V_{ukl}, \forall U(k, l) \in N_r(i, j); I_{yx} = f(V_{xij})/R_y is a piecewise-linear voltage-controlled current source. (b) A simplified unit with the basic elements and I_{xy}(i, j; k, l) = A(i, j; k, l)V_{ykl}. (c) A possible circuit implementation of the above unit, in which the controlled current source can be presented as I_{xy}(i, j; k, l) = -M_1 V_{ykl}/(R_1 R_2) under a condition involving R_2 + R_3, R_1, M_1 and M_2; the transmission weights of V_{ykl} can then be adjusted through the memristors M_2 and M_1. The piecewise function can be realized by op amps, with terms V_{xij}(M_3 + R_4)/R_4 and V_{yij}(M_4 + R_5) and saturation at \pm|V_{cc}|, where V_{cc} is the supply voltage.
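A minimal forward-Euler sketch of system (9) is shown below; the sizes and the zero templates are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def f(x):
    # piecewise-linear output function of (9)
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def simulate(Ahat, Bhat, u, I, x0, dt=0.01, steps=4000):
    # forward-Euler integration of x' = -x + Ahat f(x) + Bhat u + I
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + Ahat @ f(x) + Bhat @ u + I)
    return x

# toy 2-unit case with zero templates: the state settles at the bias I
n = 2
x = simulate(np.zeros((n, n)), np.zeros((n, n)), np.zeros(n),
             np.full(n, 0.5), np.zeros(n))
```

With zero templates the dynamics reduce to ẋ = −x + I, so the state converges to the bias vector, which gives a quick sanity check of the integrator.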



Fig. 7. (a) System structure of U (i, j); (b) its corresponding output function.

Suppose the templates A, B and the bias I are given as

A = \begin{bmatrix} a_{-1,-1} & a_{-1,0} & a_{-1,1} \\ a_{0,-1} & a_{0,0} & a_{0,1} \\ a_{1,-1} & a_{1,0} & a_{1,1} \end{bmatrix}, \quad
B = \begin{bmatrix} b_{-1,-1} & b_{-1,0} & b_{-1,1} \\ b_{0,-1} & b_{0,0} & b_{0,1} \\ b_{1,-1} & b_{1,0} & b_{1,1} \end{bmatrix}, \quad I = I_{i,j}.

Then the entries of the matrices \hat A = \{\hat a_{i,j}\} \in \mathbb{R}^{n\times n} and \hat B = \{\hat b_{i,j}\} \in \mathbb{R}^{n\times n} can be computed as follows:

\hat a_{i,j} = \begin{cases} a_{q-k,\,w-l}, & \text{if } |q-k| \le 1 \text{ and } |w-l| \le 1, \\ 0, & \text{otherwise}, \end{cases}

\hat b_{i,j} = \begin{cases} b_{q-k,\,w-l}, & \text{if } |q-k| \le 1 \text{ and } |w-l| \le 1, \\ 0, & \text{otherwise}, \end{cases}

where i, j = 1, \ldots, n, and k, l, q, w are given by

k = \lceil i/N \rceil, \quad l = i - (k-1)N, \quad q = \lceil j/N \rceil, \quad w = j - (q-1)N,

in which \lceil \cdot \rceil denotes the ceiling function and the pairs (k, l) and (q, w) indicate the positions of the interconnected units in the r-neighborhood.

In this paper, an approach is proposed to design templates that reduce the noise in images. The templates are trained with a noisy image (represented as the static input vector u) and a corresponding desired image (represented as the output vector y^* that we want at steady state). The nonlinear output function of the memristive neural network always guarantees the existence of an equilibrium point of system (9). In order to simplify the analysis, we shift the equilibrium point x^* = [x_1^*, \ldots, x_n^*]^T of (9) to the origin. Letting z_i(t) = x_i(t) - x_i^*, system (9) is transformed into

\dot z = -z(t) + \hat A \Phi(z(t)),
y(z(t) + x^*) = f(z(t) + x^*),    (10)

where z(t) = [z_1(t), \ldots, z_n(t)]^T is the state vector, \Phi(z(t)) = [\Phi_1(z_1(t)), \ldots, \Phi_n(z_n(t))]^T represents the output vector of the transformed system, and \Phi_i(z_i(t)) = y_i(z_i(t) + x_i^*) - y_i(x_i^*) with |\Phi_i(z_i(t))| \le |z_i(t)|, which implies

\Phi_i^2(z_i(t)) \le z_i(t)\Phi_i(z_i(t)),

or, in vector form,

\Phi^T(z(t))\Phi(z(t)) \le \Phi^T(z(t))z(t).    (14)
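The template-to-matrix expansion can be sketched in Python as follows (an illustrative helper, not from the paper; `A[dk+1, dl+1]` is assumed to store a_{dk,dl} for dk, dl in {−1, 0, 1}, with row-major unit ordering):

```python
import math
import numpy as np

def build_Ahat(A, M, N):
    # Expand a 3x3 template into the n x n feedback matrix, n = M*N,
    # using k = ceil(i/N), l = i - (k-1)N (and likewise q, w for column j).
    n = M * N
    Ahat = np.zeros((n, n))
    for i in range(1, n + 1):
        k = math.ceil(i / N)
        l = i - (k - 1) * N
        for j in range(1, n + 1):
            q = math.ceil(j / N)
            w = j - (q - 1) * N
            if abs(q - k) <= 1 and abs(w - l) <= 1:
                Ahat[i - 1, j - 1] = A[q - k + 1, w - l + 1]
    return Ahat
```

Each row i of Â then contains at most nine nonzero entries, one per unit in the 1-neighborhood of the unit at position (k, l).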


In order to derive the main result, the Schur complement lemma will be utilized.

Lemma 1 (Boyd, Ghaoui, Feron, & Balakrishnan, 1994). Given constant matrices Y_1, Y_2 and Y_3 of appropriate dimensions, where Y_1 and Y_3 are symmetric and Y_3 > 0, then Y_1 + Y_2^T Y_3^{-1} Y_2 < 0 if and only if

\begin{bmatrix} Y_1 & Y_2^T \\ Y_2 & -Y_3 \end{bmatrix} < 0, \quad \text{or equivalently} \quad \begin{bmatrix} -Y_3 & Y_2 \\ Y_2^T & Y_1 \end{bmatrix} < 0.
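As a small numerical sanity check of Lemma 1 (illustrative, with randomly generated matrices chosen so the scalar condition holds by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Y2 = rng.standard_normal((n, n))
Y3 = Y2 @ Y2.T + n * np.eye(n)                      # symmetric, Y3 > 0
Y1 = -(Y2.T @ np.linalg.inv(Y3) @ Y2) - np.eye(n)   # so Y1 + Y2^T Y3^{-1} Y2 = -I < 0

def neg_def(S):
    # negative definiteness via the largest eigenvalue of the symmetric part
    return bool(np.linalg.eigvalsh((S + S.T) / 2).max() < 0)

scalar_form = neg_def(Y1 + Y2.T @ np.linalg.inv(Y3) @ Y2)
block_form = neg_def(np.block([[Y1, Y2.T], [Y2, -Y3]]))
# Lemma 1 predicts the two conditions agree
```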


In the next step, a criterion for the uniqueness and global asymptotic stability of the equilibrium point of the memristive neural network (9) will be derived on the basis of the Lyapunov stability theorem, from which the template A is determined.



Theorem 1. Consider the dynamical behavior of the memristive neural network (9). If

\hat A^T + \hat A < 0,    (16)

then the equilibrium point x^* of system (9) is unique and globally asymptotically stable for every \hat B u + \hat I.

Proof. The proof is divided into two steps: first, the uniqueness of the equilibrium point is established; second, the global asymptotic stability of the equilibrium is proved.
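Criterion (16) is easy to test numerically by checking the largest eigenvalue of Â^T + Â (an illustrative check, not part of the paper):

```python
import numpy as np

def satisfies_criterion(Ahat):
    # Theorem 1's condition (16): Ahat^T + Ahat negative definite
    S = Ahat.T + Ahat
    return bool(np.linalg.eigvalsh(S).max() < 0)
```

For example, a purely local template with only a_{0,0} < 0 yields Â = a_{0,0} I, which satisfies (16), whereas any Â with a nonnegative diagonal entry cannot.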

• Step 1. The uniqueness of the equilibrium point is proved by contradiction. The equilibrium point z^* of system (10) satisfies

z^* - \hat A \Phi(z^*) = 0,    (17)

which implies that z^* = 0 if \Phi(z^*) = 0. In the following, we assume that \Phi(z^*) \ne 0. From (17), we can get

2\Phi^T(z^*)z^* - 2\Phi^T(z^*)\hat A \Phi(z^*) = 0,    (18)

which can be rewritten as

2\Phi^T(z^*)\hat A \Phi(z^*) = 2\Phi^T(z^*)z^* = 2\Phi^T(z^*)\bigl[z^* - \Phi(z^*)\bigr] + 2\Phi^T(z^*)\Phi(z^*).    (19)

With inequality (14), we can infer from Eq. (19) that

\Phi^T(z^*)(\hat A^T + \hat A)\Phi(z^*) \ge 0.    (20)

Fig. 8. Training images. (a) Corrupted image with 5% noise; (b) corrupted image with 10% noise; (c) desired image.


On the other hand, from the criterion (16) for global asymptotic stability, it follows that

\Phi^T(z^*)(\hat A^T + \hat A)\Phi(z^*) < 0.    (21)

Obviously, (20) contradicts (21), which implies that \Phi(z^*) = 0 and z^* = 0. Thus, Eq. (10) has a unique equilibrium for every u.

• Step 2. In order to prove the global asymptotic stability of the origin, we choose the following positive definite Lyapunov functional:

V(z(t)) = z^T(t)z(t) + 2\phi \sum_{i=1}^{n} \int_0^{z_i(t)} \Phi_i(s)\,ds,

where \phi is a positive constant. The time derivative of V(z(t)) along the trajectories of system (10) is obtained as

\dot V(z(t)) = -2z^T(t)z(t) + 2\Phi^T(z(t))\hat A^T z(t) - 2\phi \Phi^T(z(t))z(t) + 2\phi \Phi^T(z(t))\hat A \Phi(z(t))
\le -2z^T(t)z(t) + 2\Phi^T(z(t))\hat A^T z(t) + 2\phi \Phi^T(z(t))\hat A \Phi(z(t)) - 2\phi \Phi^T(z(t))\Phi(z(t)),    (23)

where the inequality follows from (14). Therefore, if

\begin{bmatrix} -2I & \hat A \\ \hat A^T & \phi(\hat A^T + \hat A) - 2\phi I \end{bmatrix} < 0,

then \dot V(z(t)) < 0 for all z(t) \ne 0. By Lemma 1, this block condition holds for sufficiently large \phi whenever (16) is satisfied, so the origin of system (10), and hence the equilibrium point x^* of system (9), is globally asymptotically stable.
