IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 24, NO. 3, MARCH 2013


Distributed Synchronization of Coupled Neural Networks via Randomly Occurring Control

Yang Tang, Member, IEEE, and Wai Keung Wong

Abstract— In this paper, we study the distributed synchronization and pinning distributed synchronization of stochastic coupled neural networks via randomly occurring control. Two Bernoulli stochastic variables are used to describe the occurrences of distributed adaptive control and updating law according to certain probabilities. Both distributed adaptive control and updating law for each vertex in a network depend on state information on each vertex's neighborhood. By constructing appropriate Lyapunov functions and employing stochastic analysis techniques, we prove that the distributed synchronization and the distributed pinning synchronization of stochastic complex networks can be achieved in mean square. Additionally, randomly occurring distributed control is compared with periodically intermittent control. It is revealed that, although randomly occurring control is an intermediate method among the three types of control in terms of control costs and convergence rates, it has fewer restrictions to implement and can be more easily applied in practice than periodically intermittent control.

Index Terms— Bernoulli stochastic variables, complex dynamical networks, pinning control, randomly occurring control, stochastic disturbances.

Manuscript received April 29, 2012; accepted December 19, 2012. Date of publication January 9, 2013; date of current version January 30, 2013. This work was supported by the General Research Fund of the Research Grants Council of Hong Kong under Project 531708, the National Natural Science Foundation of China under Project 61203235, and the Alexander von Humboldt Foundation of Germany. Y. Tang is with the Institute of Physics, Humboldt University of Berlin, Berlin 12489, Germany, and also with the Potsdam Institute for Climate Impact Research, Potsdam 14415, Germany (e-mail: [email protected]). W. K. Wong is with the Institute of Textiles and Clothing, The Hong Kong Polytechnic University, Hong Kong (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2012.2236355

I. INTRODUCTION

MANY large-scale systems, natural or social, can be modeled by networks, such as genetic regulatory networks, neuronal networks, food webs, and the Internet. Networks can be described as graphs in which each vertex stands for a node therein, and edges represent interactions among them [1]–[4]. Synchronization of large-scale complex networks has been extensively investigated in various fields of science and engineering, since it can describe many natural phenomena and has many potential applications to image processing, neuronal synchronization, secure communication, etc. [5]–[9]. One of the most interesting topics of synchronization of complex networks is enhancement of synchronizability of complex networks [10]. Barahona and Pecora [11] proposed master stability functions to characterize synchronizability of

complex networks. In [12], synchronizability of complex networks was statistically analyzed. In [13], a modified simulated annealing approach was used to detect optimal synchronization networks. Researchers have further studied the synchronization of complex networks with age ordering [14], time-delay coupling [15], [16], weighted coupling matrices [17], community structure [18], linearly coupled form [19], adaptive updating weights [20], randomly occurring nonlinearities [21], stochastic coupling [22], time-delay [23]–[25], and hybrid coupling [26]. In [27], the problem of synchronization of general complex networks was investigated using adaptive strategies when the network topology is slowly time-varying.

However, synchronization of complex networks may not be ensured if control is not introduced [28]. Many control-based approaches have been developed to synchronize complex dynamical networks, such as continuous feedback and discontinuous feedback [28], [29]. When synchronizing and controlling networks, pinning control has been shown to be a simple but effective technique for stabilization and synchronization [28], [30]–[33]. Li et al. [34] investigated the pinning control of complex dynamical networks. Pinning synchronization of complex networks was also investigated by using only one controller [35]–[37]. Some studies have discovered how to effectively control complex networks with a fraction of nodes [38]–[41]. The pinning control of fractional-order complex networks was investigated in [42]. The structural controllability of complex networks was also discussed by utilizing control theory in [43].

Distributed synchronization or filtering of networks has been an ongoing research issue attracting increasing attention from researchers in various areas [44]. Vertex i in a network synchronizes the system state based not only on vertex i's state, but also on its neighboring vertices' states according to a given complex network topology [44]. In [45], an adaptive technique was proposed to synchronize complex networks, where only neighborhood information was used to design the updating law. In [46], distributed filtering of networks was studied with or without considering pinning mechanisms. Although stochastic coupling or stochastic effects play an important role in modeling realistic networks [4], [47], the distributed synchronization of stochastic complex networks has received little academic attention.

Intermittent control arises naturally in a wide variety of real-world applications [48]. An essential advantage of intermittent control is that it has a nonzero control width, and it is easy to implement [29], [48], [49]. However, intermittent control has to be activated periodically [48], which can restrict its practical applications. On the other hand, controlled plants suffer from




disturbances from random abrupt changes, such as random failures, repairs of actuators, packet dropouts, sudden environmental changes, and modification of the operating points of a vertex (see [21], [44], [47] and the references therein). The first reason for proposing randomly occurring control is that signals in networked systems are not transmitted perfectly or control is not available, as in the cases of packet dropouts, random failures, and repairs of actuators [47], [50]. The other reason is a positive one: with consideration for economy or system life, control is suspended from time to time [50]. Therefore, control activation in networked systems may occur in a probabilistic or switching way and be randomly changeable in terms of type and/or intensity [21], [47], [50], [51]. However, research on synchronization of complex networks has mainly focused on fixed controllers [22], [23], [35], [37], [38], [46], and research on stochastic control of complex networks (randomly occurring control) has received little attention despite its practical significance.

This paper examines the distributed synchronization of stochastic complex networks via randomly occurring control. Both controller activation and the updating law of the control gain occur in a probabilistic way. By using Lyapunov functions, we investigate the problem of the distributed synchronization and pinning distributed synchronization of stochastic complex networks in mean square. The coupling strength and the expectations of the two Bernoulli stochastic variables are shown to affect control costs and convergence speed of complex networks. Finally, some comparisons between randomly occurring control and periodically intermittent control are given. The main contributions of this paper are as follows: 1) the distributed synchronization and distributed pinning synchronization of stochastic complex networks are investigated by considering randomly occurring control and updating law; 2) the effects of parameters on distributed stochastic synchronization or distributed stochastic pinning synchronization performance are analyzed mathematically and by simulations; and 3) detailed comparisons between randomly occurring control and periodically intermittent control are presented to show the properties of randomly occurring control.

The rest of this paper is organized as follows. Section II briefly outlines some preliminaries of the distributed synchronization of complex networks. Section III presents the main results of the distributed (pinning) synchronization of stochastic complex networks. In Section IV, simulations are presented to show the effectiveness of the proposed method. Conclusions are presented in Section V.

II. PRELIMINARIES

Notations: Throughout this paper, R^n and R^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n × m real matrices. The superscript “T” denotes matrix transposition, and the notation X ≥ Y (respectively, X > Y), where X and Y are symmetric matrices, means that X − Y is positive semi-definite (respectively, positive definite). E{x} and E{x|y} denote the expectation of x and the expectation of x

conditional on y. ‖·‖ stands for the Euclidean vector norm in R^n. The Kronecker product of matrices Q ∈ R^{m×n} and R ∈ R^{p×q} is a matrix in R^{mp×nq} denoted as Q ⊗ R. λ_min(·) and λ_max(·) represent the minimum and maximum eigenvalues of a matrix. I_n denotes the identity matrix of order n. l = ⌊N · l_p⌋ denotes the number of elements of the finite set M composed of the vertices to be controlled, where ⌊N · l_p⌋ is the largest integer not exceeding N · l_p and l_p is the percentage of vertices to be controlled. δ_M(·) denotes the characteristic function of set M, i.e., δ_M(i) = 1 if i ∈ M; otherwise, δ_M(i) = 0.

Define a graph by G = [V, E], where V = {1, . . . , N} denotes the vertex set and E = {e(i, j)} the edge set. N(i) denotes the neighborhood of vertex i in the sense N(i) = {j ∈ V : e(i, j) ∈ E}. A\B represents the set difference from set A to set B. In this paper, graph G is supposed to be undirected [e(i, j) ∈ E implies e(j, i) ∈ E] and simple (without self-loops and multiple edges). Let L = [l_ij]_{i,j=1}^{N} be the Laplacian matrix of graph G, which is defined as follows: for any pair i ≠ j, l_ij = l_ji = −1 if e(i, j) ∈ E; otherwise, l_ij = l_ji = 0. l_ii = −Σ_{j=1, j≠i}^{N} l_ij stands for the degree of vertex i (i = 1, 2, . . . , N). Let (Ω, F, P) be a complete probability space, where Ω represents a sample space, F is a σ-algebra, and P is a probability measure.

In this paper, we consider the model of an array of linearly coupled stochastic systems, which can be formulated as





dx_i(t) = [ f(x_i, t) + c Σ_{j∈N(i)} Γ (x_j(t) − x_i(t)) ] dt + σ(x_i(t), t) dw(t),   i = 1, . . . , N   (1)

where x_i(t) = [x_i1(t), x_i2(t), . . . , x_in(t)]^T ∈ R^n (i = 1, 2, . . . , N) is the state vector of the i-th vertex, f(x_i, t) = [f_1(x_i, t), . . . , f_n(x_i, t)]^T is a continuous vector function, c is the coupling strength of the network, and Γ > 0 is the inner matrix. Furthermore, σ(·, ·) : R^n × R → R^n is the noise intensity function vector, and w(t) is a scalar Brownian motion defined on (Ω, F, P) satisfying E{dw(t)} = 0 and E{[dw(t)]^2} = dt. Equation (1) can be rewritten as

dx_i(t) = [ f(x_i, t) − c Σ_{j=1}^{N} l_ij Γ x_j(t) ] dt + σ(x_i(t), t) dw(t),   i = 1, . . . , N.   (2)
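To make the model concrete, the following is a minimal simulation sketch, not taken from the paper: it builds the graph Laplacian L of a small undirected graph and integrates (2) with a simple Euler–Maruyama scheme. The node dynamics f, the noise intensity σ, the inner matrix Γ, and all numerical values are illustrative placeholders.

```python
import numpy as np

def laplacian_from_edges(n_nodes, edges):
    """Graph Laplacian L = D - A of a simple undirected graph."""
    A = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def simulate_network(L, f, sigma, x0, c, Gamma, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama integration of dx_i = [f(x_i,t) - c*sum_j l_ij*Gamma*x_j]dt + sigma(x_i,t)dw."""
    rng = np.random.default_rng(seed)
    N, n = x0.shape
    x = x0.copy()
    traj = [x.copy()]
    for k in range(int(T / dt)):
        t = k * dt
        drift = np.array([f(x[i], t) for i in range(N)]) - c * (L @ x) @ Gamma.T
        dw = rng.normal(0.0, np.sqrt(dt), size=N)          # one Brownian increment per node
        noise = np.array([sigma(x[i], t) * dw[i] for i in range(N)])
        x = x + drift * dt + noise
        traj.append(x.copy())
    return np.array(traj)

if __name__ == "__main__":
    # Illustrative 5-node ring; lambda_2(L) > 0 confirms connectedness (Section II).
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    L = laplacian_from_edges(5, edges)
    print("lambda_2(L) =", np.sort(np.linalg.eigvalsh(L))[1])

    f = lambda xi, t: -xi                    # placeholder node dynamics
    sigma = lambda xi, t: 0.1 * xi           # placeholder noise intensity
    Gamma = np.eye(2)                        # inner matrix
    x0 = np.random.default_rng(1).normal(size=(5, 2))
    traj = simulate_network(L, f, sigma, x0, c=0.5, Gamma=Gamma, T=1.0)
    print("final spread:", np.ptp(traj[-1], axis=0))
```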

For a graph G = [V, E], we consider vertex set M ⊆ V. Here, M = V indicates that all the vertices are controlled. M ⊂ V means that only a fraction of nodes are controlled. From the Gershgorin disk theorem, all the eigenvalues of the Laplacian L corresponding to graph G satisfy 0 = λ1 (L) ≤ λ2 (L) ≤ · · · ≤ λ N (L). Additionally, G is connected if and only if λ2 (L) > 0: i.e., L is irreducible. Thus, for M ⊂ V, all vertices V\M can be accessed from vertex set M, i.e., for any vertex i in V\M, there exists a vertex j ∈ M which connects vertex i by an existing path. In order to realize the synchronization of the stochastic complex network in (1) or (2), controllers are added to each


vertex:

dx_i(t) = [ f(x_i, t) + c Σ_{j∈N(i)} Γ (x_j(t) − x_i(t)) + u_i(t) ] dt + σ(x_i(t), t) dw(t),   i ∈ M
dx_i(t) = [ f(x_i, t) + c Σ_{j∈N(i)} Γ (x_j(t) − x_i(t)) ] dt + σ(x_i(t), t) dw(t),   i ∉ M   (3)

where u_i(t) is a distributed adaptive controller and 1 ≤ l ≤ N. For the i-th vertex, u_i(t) is designed as

u_i(t) = ρ(t) ε_i(t) Σ_{j∈N(i)} Γ (x_j(t) − x_i(t)),   i ∈ M   (4)

where ε_i(t) is the control strength of vertex i. In (4), ρ(t) is a stochastic variable that describes the following random events for (3):

Event 1: (3) experiences (4);   Event 2: (3) does not experience (4).   (5)

Let ρ(t) be defined by

ρ(t) = 1, if Event 1 occurs;   ρ(t) = 0, if Event 2 occurs   (6)

where E{ρ(t)} = ρ ∈ [0, 1].

Remark 1: The main characteristic of distributed control algorithms is that a node can utilize local information on its neighbors more efficiently and does not require global information on the whole network [44]. Practical implementation of a controller is always disturbed by various uncertainties arising from internal and external environments [47], [50]. Such disturbances widely exist in control implementation and system design, and are due to random abrupt changes [21], [47]. In this paper, stochastic disturbances are taken into account when designing a more realistic distributed controller (4). The distributed controller u_i(t) occurs in a probabilistic way and uses feedback information on its neighbors. Different from the conventional adaptive controller, the distributed controller u_i(t) is not always implemented and it can model control failure in a stochastic way. In addition, the implementation of the distributed controller u_i(t) does not need to switch at given time points, which is different from periodically intermittent control [48]. To summarize, randomly occurring distributed control can effectively utilize information on neighbors [44], models real-world disturbances, and does not require strict activation at certain points.

Remark 2: As discussed in the introduction, stochastic disturbances arise very often in complex network problems, since the nodes of real-world networks ordinarily involve noise. There are several typical stochastic disturbances used to describe the perturbations in complex networks, such as σ(x_i(t), t)dw(t) and Σ_{j∈N(i)} σ_ij(x_j(t) − x_i(t), t)dw(t). It is worth mentioning that σ(x_i(t), t)dw(t) indicates that vertex i itself is subject to stochastic perturbation. σ(x_i(t), t)dw(t) may result in the networks failing to synchronize, as will be seen later in Theorem 1. Σ_{j∈N(i)} σ_ij(x_j(t) − x_i(t), t)dw(t) can be viewed as stochastic coupling, where the coupling between the nodes suffers from stochastic disturbances arising from the communication. It is certainly interesting to see how Σ_{j∈N(i)} σ_ij(x_j(t) − x_i(t), t)dw(t) would have an overall impact on the synchronization, which remains one of our future research directions.

ε_i(t) in (4) is updated according to the following randomly occurring distributed updating law:

dε_i(t) = ξ(t) α [ Σ_{j∈N(i)} (x_j(t) − x_i(t)) ]^T Γ [ Σ_{j∈N(i)} (x_j(t) − x_i(t)) ] dt,   i ∈ M   (7)

where α > 0 and ξ(t) is a stochastic variable representing the following random events for (7):

Event 3: ε_i(t) experiences (7);   Event 4: ε_i(t) does not experience (7).   (8)

Similarly, let ξ(t) be defined by

ξ(t) = 1, if Event 3 occurs;   ξ(t) = 0, if Event 4 occurs   (9)

where E{ξ(t)} = ξ ∈ [0, 1]. We assume that the stochastic variables w(t), ρ(t), ξ(t), the initial values of x_i(t) (i ∈ V), and ε_i(t) (i ∈ M) are independent of each other.

Here, two measures are used to characterize the performance of synchronization. The first one is used to describe the control cost ε,

ε = (1/N) Σ_{i=1}^{N} ε_{i,∞} (i = 1, 2, . . . , N), for M = V;   ε = (1/l) Σ_{i∈M} ε_{i,∞} (i ∈ M), for M ⊂ V   (10)

where ε_{i,∞} = lim_{t→∞} E ε_i(t). The other measure is the convergence rate γ, which can be calculated as

γ = X, for M = V;   γ = Y, for M ⊂ V   (11)

where

X = E{ (1/(N − 1)) ∫_0^∞ Σ_{i=1}^{N} [x_i(t) − x̄(t)]^T [x_i(t) − x̄(t)] dt },
Y = E{ (1/l) ∫_0^∞ Σ_{i∈M} [ Σ_{j∈N(i)} (x_i − x_j) ]^T [ Σ_{j∈N(i)} (x_i − x_j) ] dt },
x̄(t) = (1/N) Σ_{i=1}^{N} x_i(t).
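The controller (4), the updating law (7), and the measures (10)–(11) can be mimicked numerically as follows. This is a hedged sketch under stated assumptions (independent Bernoulli draws of ρ(t) and ξ(t) at every integration step, Euler–Maruyama discretization, all nodes controlled, finite-horizon approximations of ε and γ); it is not the authors' code.

```python
import numpy as np

def roc_synchronization(adj, f, sigma, x0, c, alpha, rho_bar, xi_bar,
                        Gamma, T=10.0, dt=1e-3, seed=0):
    """Simulate (3)-(4)-(7) with Bernoulli gates rho(t), xi(t); M = V (all nodes controlled).
    Returns final states, gains, and crude estimates of the cost eps (10) and rate gamma (11)."""
    rng = np.random.default_rng(seed)
    N, n = x0.shape
    neigh = [np.flatnonzero(adj[i]) for i in range(N)]
    x, eps = x0.copy(), np.zeros(N)          # eps[i] ~ control gain eps_i(t)
    gamma_int = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        rho_t = rng.random() < rho_bar       # Event 1 / Event 2
        xi_t = rng.random() < xi_bar         # Event 3 / Event 4
        dx = np.zeros_like(x)
        for i in range(N):
            s = (x[neigh[i]] - x[i]).sum(axis=0)         # sum_{j in N(i)} (x_j - x_i)
            coupling = c * Gamma @ s
            u = rho_t * eps[i] * Gamma @ s               # controller (4)
            dx[i] = (f(x[i], t) + coupling + u) * dt \
                    + sigma(x[i], t) * rng.normal(0.0, np.sqrt(dt))
            if xi_t:                                     # updating law (7)
                eps[i] += alpha * s @ (Gamma @ s) * dt
        x = x + dx
        xbar = x.mean(axis=0)
        gamma_int += ((x - xbar) ** 2).sum() / (N - 1) * dt   # integrand of X in (11)
    return x, eps, eps.mean(), gamma_int                      # eps.mean() ~ cost (10) with M = V

if __name__ == "__main__":
    N, n = 20, 2
    rng = np.random.default_rng(3)
    adj = (rng.random((N, N)) < 0.2).astype(float)
    adj = np.triu(adj, 1); adj = adj + adj.T                  # simple undirected graph
    x0 = rng.normal(size=(N, n))
    xT, gains, cost, rate = roc_synchronization(
        adj, lambda xi, t: -xi, lambda xi, t: 0.1 * xi, x0,
        c=0.5, alpha=1.0, rho_bar=0.8, xi_bar=0.8, Gamma=np.eye(n), T=2.0)
    print("estimated control cost      :", cost)
    print("estimated convergence measure:", rate)
```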

From the above measures, it can be concluded that a good synchronization performance means a high convergence rate (a small γ) and a low control cost (a small ε).

Remark 3: When ρ(t) = 1 and ξ(t) = 1, the distributed controller u_i(t) in the form of (4) is activated and the control gain ε_i(t) is updated according to (7). When ρ(t) = 1 and ξ(t) = 0, the distributed controller u_i(t) is activated and ε_i(t)



is not updated. When ρ(t) = 0 and ξ(t) = 1, the distributed controller u_i(t) is not activated in (3) and ε_i(t) is updated according to (7). When ρ(t) = 0 and ξ(t) = 0, the distributed controller u_i(t) is not activated and ε_i(t) is not updated. In addition, binary sequence switching is widely used to describe stochastic events, such as missing measurements and stochastic delay [52]–[54]. The stochastic variables employed in this paper can follow an unknown but exponential distribution of switching, and the probabilities are known a priori [54].

Remark 4: If ρ and ξ are 1, the control and updating rule reduce to a normal control and updating law, as shown in [40], [45]. It is assumed that ρ and ξ cannot be 0 since they appear in the denominators of the upper bounds of ε and γ, as shown in Theorems 1 and 2. Additionally, if ρ = 0 and ξ = 0, the problem considered in this paper reduces to the synchronization of complex networks without a controller. Such models were discussed in [4] and references therein.

Remark 5: Some strategies have been proposed to find effective pinned nodes, such as degree-based methods and betweenness-based methods [28], [46]. More recently, in [43], the structural controllability of complex networks was studied. Although [43] presents interesting results on finding effective pinned nodes, the dynamics of the complex networks there are linear and only structural controllability is studied. Even now, it is difficult to find effective pinned nodes, as pinning synchronization is a complicated combinatorial problem. Therefore, this paper only focuses on how to find the minimum l to synchronize complex networks. Examples are shown in Section IV.

The following definition and assumptions are needed for deriving the main results.

Assumption 1 [45], [55]: A vector-valued continuous function f(x, t) : R^n × R^+ → R^n is said to be uniformly decreasing for a matrix Γ ∈ R^{n×n} if there exist θ > 0 and ϖ > 0 such that

(x − y)^T [ f(x, t) − f(y, t) − θΓ(x − y) ] ≤ −ϖ (x − y)^T (x − y)   (12)

holds for all x, y ∈ R^n and t ≥ 0.

Assumption 2 [21]: f(x_i, t) and σ(x_i, t) are said to be locally uniformly Lipschitz continuous with respect to t if there exist positive constants ϕ and κ such that the following inequalities hold for all x_i, x_j ∈ R^n:

‖f(x_i, t) − f(x_j, t)‖ ≤ ϕ ‖x_i − x_j‖,   ‖σ(x_i, t) − σ(x_j, t)‖ ≤ √κ ‖x_i − x_j‖   (13)

which indicates that (σ(x_i(t), t) − σ(x_j(t), t))^T (σ(x_i(t), t) − σ(x_j(t), t)) ≤ κ (x_i(t) − x_j(t))^T (x_i(t) − x_j(t)).

Assumption 3: f(0, t) = σ(0, t) = 0.

Remark 6: Assumption 2 can describe many real-world systems very well. It has been widely employed or discussed in [21], [56].

Definition 1: Let x_i(t) (1 ≤ i ≤ N) be a solution of the stochastic complex network in (1) or (3), where x_i(0) = (x_10, x_20, . . . , x_n0). If there is a nonempty subset Λ ⊆ R^n, with x_i(0) ∈ Λ (1 ≤ i ≤ N), such that x_i(t) ∈ R^n for all t ≥ t_0, 1 ≤ i ≤ N, and

lim_{t→∞} E Σ_{i=1}^{N} ‖x_i(t) − x_j(t)‖^2 = 0,   i, j = 1, 2, . . . , N,

then the stochastic complex network is said to achieve synchronization in mean square.

III. MAIN RESULTS

In this section, the distributed synchronization of the stochastic complex network in (3) is studied via randomly occurring control and updating law. The pinning problem of the stochastic complex network in (3) via randomly occurring distributed control and updating law is also investigated. In the following, ρ and ξ are assumed to be in the interval (0, 1] for derivation of the upper bounds of ε and γ. First, when M = V, the synchronization of the stochastic complex network in (3) under randomly occurring control and updating law is studied and the following results are obtained.

Theorem 1: Suppose that f(x, t) is continuous on (x, t) ∈ R^n × R^+, locally uniformly decreasing for matrix Γ ≥ 0, f(x, t) and σ(x, t) satisfy Assumptions 1–3, graph G is connected, and a is a positive constant, for any initial data x_i(0) ∈ R^n, i = 1, 2, . . . , N.

1) Then the stochastic complex network in (3) under (4) and (7) will be globally synchronized in mean square and lim_{t→∞} E ε_i(t) = ε_{i,∞}, where ε_{i,∞} ∈ R (i = 1, 2, . . . , N) are constants, if the following conditions are satisfied:

κ/2 − ϖ < 0,   θ − (aρ + c)λ2(L) < 0.   (14)

2) When ε_i(0) = 0 (∀i = 1, . . . , N), the upper bound of ε is

ε ≤ ε̄ = E{ (θ − cλ2(L))/(ρλ2(L)) + √( [(θ − cλ2(L))/(ρλ2(L))]^2 + 2p_0 αξ/(ρN) ) }

where p_0 = (1/4) Σ_{i=1}^{N} Σ_{j∈N(i)} ‖e_ij(0)‖^2. Specifically,

ε̄ ≤ ε̂ = E{ 2|(θ − cλ2(L))/(ρλ2(L))| + √(2p_0 αξ/(ρN)) }, if θ > cλ2(L);   ε̄ ≤ ε̂ = E{ √(2p_0 αξ/(ρN)) }, else.   (15)

3) If Γ = I_n, then the upper bound of γ is

γ ≤ γ̄ = E{ (N/((N − 1)λ2^2(L))) [ Z + √( Z^2 + 2p_0/(ρNαξ) ) ] },   where Z = (θ − cλ2(L))/(ρλ2(L)αξ).

Specifically,

γ̄ ≤ γ̂ = E{ (N/((N − 1)λ2^2(L))) [ 2Z + √(2p_0/(ρNαξ)) ] }, if θ > cλ2(L);   γ̄ ≤ γ̂ = E{ (N/((N − 1)λ2^2(L))) √(2p_0/(ρNαξ)) }, else.   (16)
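For quick numerical checks, the closed-form bounds ε̂ in (15) and γ̂ in (16) can be evaluated directly from the network and control parameters. The helper below is an illustrative sketch; the values passed in the example call are placeholders, and p_0 would be computed from the initial errors e_ij(0) as defined above.

```python
import numpy as np

def theorem1_bounds(lam2, N, theta, c, rho, xi, alpha, p0):
    """Evaluate the upper bounds eps_hat (15) and gamma_hat (16) of Theorem 1.
    lam2 = lambda_2(L); rho, xi are the expectations of rho(t), xi(t)."""
    if theta > c * lam2:
        eps_hat = 2 * abs(theta - c * lam2) / (rho * lam2) + np.sqrt(2 * p0 * alpha * xi / (rho * N))
        Z = (theta - c * lam2) / (rho * lam2 * alpha * xi)
        gam_hat = N / ((N - 1) * lam2**2) * (2 * Z + np.sqrt(2 * p0 / (rho * N * alpha * xi)))
    else:
        eps_hat = np.sqrt(2 * p0 * alpha * xi / (rho * N))
        gam_hat = N / ((N - 1) * lam2**2) * np.sqrt(2 * p0 / (rho * N * alpha * xi))
    return eps_hat, gam_hat

# Illustrative values only (not the paper's data).
print(theorem1_bounds(lam2=0.94, N=100, theta=0.5, c=0.5, rho=0.8, xi=0.8, alpha=1.0, p0=25.0))
```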

Proof: It should be noted that the stochastic dynamical network in (3) with ρ(t) and ξ(t) is a special stochastic system with Markovian switching. Thus, the existence and uniqueness of solutions to (3) can be transformed into the existence and uniqueness of solutions to a stochastic system with Markovian switching. The proof of the existence and uniqueness of solutions to a stochastic system with Markovian switching can be found in [51]. Also, the proof of the existence and uniqueness of solutions to (3) is shown in the Supporting Information.

Part I: Let e_ij = x_i − x_j ∀i, j = 1, . . . , N and x = [x_1^T, . . . , x_N^T]^T ∈ R^{nN}. Note that λ2(L) > 0 since graph G is connected and L is irreducible. Consider the Lyapunov function

V(t) = (1/4) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T e_ij + (ρ/(2ξα)) Σ_{i=1}^{N} (ε_i(t) − a)^2   (17)

where a is a positive constant. By the Itô differential formula [56], the stochastic derivative of V can be obtained as

dV(t) = 𝓛V(t) dt + (1/2) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T [σ(x_i(t), t) − σ(x_j(t), t)] dw(t)   (18)

and the operator 𝓛 is given according to (3) and (4) as

𝓛V(t) = (1/2) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T [ f(x_i, t) − f(x_j, t) + c Σ_{k∈N(i)} Γ(x_k − x_i) − c Σ_{l∈N(j)} Γ(x_l − x_j) + ρ(t)ε_i(t) Σ_{k∈N(i)} Γ(x_k − x_i) − ρ(t)ε_j(t) Σ_{l∈N(j)} Γ(x_l − x_j) ]
    + Σ_{i=1}^{N} (ρ/ξ) ξ(t) ε_i(t) [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ] − Σ_{i=1}^{N} (ρa/ξ) ξ(t) [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ]
    + (1/4) Σ_{i=1}^{N} Σ_{j∈N(i)} [σ(x_i(t), t) − σ(x_j(t), t)]^T [σ(x_i(t), t) − σ(x_j(t), t)].   (19)

From the definitions of e_ij, x, L and taking expectations of ρ(t) and ξ(t), we obtain

(1/2) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T e_ij = x^T (L ⊗ I_n) x,   Σ_{i=1}^{N} [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ] = x^T (L^2 ⊗ Γ) x,

E{ Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T ρ(t)ε_i(t) Σ_{k∈N(i)} Γ e_ki } = E{ −ρ Σ_{i=1}^{N} ε_i(t) [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ] },   (ρ/ξ) E{ξ(t)} = ρ.   (20)

Then, using Assumptions 1 and 2, (19), and (20), we have

E𝓛V(t) = E{ Σ_{i=1}^{N} Σ_{j∈N(i)} (1/2) e_ij^T [ f(x_i, t) − f(x_j, t) − θΓ e_ij ] + (θ/2) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T Γ e_ij
    − c Σ_{i=1}^{N} [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ] − aρ Σ_{i=1}^{N} [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ]
    + (1/4) Σ_{i=1}^{N} Σ_{j∈N(i)} [σ(x_i(t), t) − σ(x_j(t), t)]^T [σ(x_i(t), t) − σ(x_j(t), t)] }
  ≤ E{ −(ϖ/2) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T e_ij + (θ/2) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T Γ e_ij − (c + aρ) Σ_{i=1}^{N} [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ] + (κ/4) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T e_ij }
  = E{ (κ/2 − ϖ) x^T (L ⊗ I_n) x + x^T [ (θ I_N − aρL − cL) L ⊗ Γ ] x }.   (21)

Note that (14) in Theorem 1 yields

θ − (aρ + c)λ2(L) < 0 ⇒ (θ I_N − (aρ + c)L) L ≤ 0,   κ/2 < ϖ.   (22)



Therefore, it follows from (20)–(22) that

E𝓛V(t) ≤ E{ −μ x^T (L ⊗ I_n) x } = E{ −(μ/2) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T e_ij } ≤ 0   (23)

where μ is a positive constant. Thus, the distributed synchronization of the stochastic complex network in (3) via randomly occurring control and updating law can be achieved in mean square.

Part II: In the following, the expectations of the upper bounds of ε and γ will be derived by following the methods in [45]. Let v_i be the eigenvector of L associated with the eigenvalue λ_i(L), ordered by 0 = λ1(L) ≤ λ2(L) ≤ λ3(L) ≤ · · · ≤ λN(L). We pick the eigenvectors that correspond to the same eigenvalue with multiplicity such that v_1, . . . , v_N compose an orthogonal standard basis of R^N. For any v ∈ R^N, there exist r_i such that v = Σ_{i=1}^{N} r_i v_i (i = 1, . . . , N). Thus, one has v_i^T v_j = 0 ∀i ≠ j. We have

v^T [ (aρ + c − θ/λ2(L)) L^2 − ((aρ + c)L − θ I_N) L ] v
  = Σ_{i=1}^{N} r_i^2 v_i^T v_i [ (aρ + c − θ/λ2(L)) λ_i^2(L) − (aρ + c)λ_i^2(L) + θλ_i(L) ]
    + 2 Σ_{i=1}^{N} Σ_{j>i} r_i r_j v_i^T [ (aρ + c − θ/λ2(L)) L^2 − ((aρ + c)L − θ I_N) L ] v_j
  = Σ_{i=2}^{N} v_i^T v_i [ −λ_i(L)/λ2(L) + 1 ] θ λ_i(L) r_i^2 ≤ 0.   (24)

By using (7) and the integral method, we have

E{ Σ_{i=1}^{N} ∫_0^∞ dε_i(t) } = E{ Σ_{i=1}^{N} ∫_0^∞ αξ [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ] dt }.   (25)

Therefore, ε can be obtained from (25) as

ε = E{ (αξ/N) ∫_0^∞ x^T (L^2 ⊗ Γ) x dt }.   (26)

From (21)–(23), one has

E𝓛V ≤ E{ x^T(t) [ (θ I_N − (aρ + c)L) L ⊗ Γ ] x(t) }.   (27)

Thus, using (17), (24), (26), and (27), it can be obtained that

ε = E{ (αξ/N) ∫_0^∞ x^T(t) (L^2 ⊗ Γ) x(t) dt }
  ≤ E{ (αξ λ2(L))/(N[(aρ + c)λ2(L) − θ]) ∫_0^∞ x^T(t) [ ((aρ + c)L − θ I_N) L ⊗ Γ ] x(t) dt }
  ≤ −E{ (αξ λ2(L))/(N[(aρ + c)λ2(L) − θ]) ∫_0^∞ 𝓛V dt }
  = E{ (αξ λ2(L))/(N[(aρ + c)λ2(L) − θ]) [V_0 − V_∞] }
  = E{ (αξ λ2(L))/(N[(aρ + c)λ2(L) − θ]) [ p_0 + (ρ/(2αξ)) Σ_{i=1}^{N} (2a ε_{i,∞} − ε_{i,∞}^2) ] }
  ≤ E{ (αξ λ2(L))/(N[(aρ + c)λ2(L) − θ]) [ p_0 + (aNρ/(αξ)) ε − (Nρ/(2αξ)) ε^2 ] }   (28)

where V_0 = V(0), V_∞ = lim_{t→∞} V(t), and p_0 = (1/4) Σ_{i=1}^{N} Σ_{j∈N(i)} ‖e_ij(0)‖^2. By solving the last inequality in (28), we have the upper bound of ε as

ε ≤ ε̄ = E{ (θ − cλ2(L))/(ρλ2(L)) + √( [(θ − cλ2(L))/(ρλ2(L))]^2 + 2p_0 αξ/(ρN) ) }.   (29)

By using the inequality √(a^2 + b^2) ≤ a + b, where a, b ∈ R ≥ 0, and setting ε̂ = E{ (θ − cλ2(L))/(ρλ2(L)) + |(θ − cλ2(L))/(ρλ2(L))| + √(2p_0 αξ/(ρN)) }, one has ε ≤ ε̄ ≤ ε̂ and

ε̂ = E{ 2|(θ − cλ2(L))/(ρλ2(L))| + √(2p_0 αξ/(ρN)) }, if θ > cλ2(L);   ε̂ = E{ √(2p_0 αξ/(ρN)) }, else.   (30)

Part III: Next, the upper bound of γ will be given. Let U = [u_ij] with u_ij = −1/N if i ≠ j and u_ii = 1 − 1/N (∀i = 1, 2, . . . , N), and W = (1/(N − 1)) U^T U. γ can be represented as follows:

γ = E{ ∫_0^∞ x^T(t) (W ⊗ I_n) x(t) dt }.   (31)

As shown in [45], the following inequality holds:

W ≤ (1/((N − 1)λ2^2(L))) L^2.

Therefore, when Γ = I_n, (31) yields

γ ≤ E{ (1/((N − 1)λ2^2(L))) ∫_0^∞ x^T(t) (L^2 ⊗ I_n) x(t) dt }.

We have from (26) that

γ ≤ (N/((N − 1)λ2^2(L)αξ)) ε.


Thus, by setting γ̄ = (N/((N − 1)λ2^2(L)αξ)) ε̄ and considering the form of ε̄ in (29), one has the upper bound of γ as follows:

γ ≤ γ̄ = (N/((N − 1)λ2^2(L)αξ)) ε̄
      = E{ (N/((N − 1)λ2^2(L)αξ)) [ (θ − cλ2(L))/(ρλ2(L)) + √( [(θ − cλ2(L))/(ρλ2(L))]^2 + 2p_0 αξ/(ρN) ) ] }.   (32)

By utilizing √(a^2 + b^2) ≤ a + b, we have from (32)

γ̄ ≤ E{ (N/((N − 1)λ2^2(L))) [ (θ − cλ2(L))/(ρλ2(L)αξ) + |(θ − cλ2(L))/(ρλ2(L)αξ)| + √(2p_0/(ρNαξ)) ] }.   (33)

By setting γ̂ = E{ (N/((N − 1)λ2^2(L))) [ (θ − cλ2(L))/(ρλ2(L)αξ) + |(θ − cλ2(L))/(ρλ2(L)αξ)| + √(2p_0/(ρNαξ)) ] }, it follows specifically from (33) that

γ̄ ≤ γ̂ = E{ (N/((N − 1)λ2^2(L))) [ 2|(θ − cλ2(L))/(ρλ2(L)αξ)| + √(2p_0/(ρNαξ)) ] }, if θ > cλ2(L);   γ̄ ≤ γ̂ = E{ (N/((N − 1)λ2^2(L))) √(2p_0/(ρNαξ)) }, else.

This completes the proof.

Remark 7: It should be pointed out that, when ρ = 0 or ξ = 0, one can modify the Lyapunov function V(t) as V(t) = (1/4) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T e_ij. Of course, when ρ = 0 or ξ = 0, conditions can also be developed for ensuring the synchronization of coupled neural networks in mean square. The corresponding corollaries are omitted here in order to keep the paper concise.

The following theorem studies the distributed pinning synchronization problem of the stochastic complex network in (3), i.e., M ⊂ V.

Theorem 2: Suppose that f(x, t) is continuous on (x, t) ∈ R^n × R^+, locally uniformly decreasing for matrix Γ ≥ 0, f(x, t) and σ(x, t) satisfy Assumptions 1–3, graph G is connected, and a is a positive constant, for any initial data x_i(0) ∈ R^n, i = 1, 2, . . . , N. Suppose also that all vertices in V \ M can be accessed from M, i.e., for any vertex i ∈ V \ M, there exists a vertex j in M which connects vertex i by an existing path.

1) The stochastic complex network in (3) will be globally synchronized in mean square and lim_{t→∞} E ε_i(t) = ε_{i,∞}, where ε_{i,∞} ∈ R (i ∈ M) are constants, if the following conditions are satisfied: κ/2 − ϖ < 0, and there exists a constant μ > −aρ such that

(aρ + μ) L̃^T L̃ ≤ aρ L̃^T L̃ + cL^2 − θL   (34)

where L̃ = Ĩ_N L and Ĩ_N = diag{δ_M(1), . . . , δ_M(N)}.

2) When ε_i(0) = 0 (∀i ∈ M), the upper bound of ε is

ε ≤ ε̄ = E{ −μ/ρ + √( (μ/ρ)^2 + 2p_0 αξ/(ρl) ) }.   (35)

3) If Γ = I_n, then the upper bound of γ is

γ ≤ γ̄ = E{ (1/(αξ)) [ −μ/ρ + √( (μ/ρ)^2 + 2p_0 αξ/(ρl) ) ] }.   (36)

Proof: Note that λ2(L) > 0 since graph G is connected and L is irreducible. Since L is irreducible, we conclude that (L ⊗ I_n)x = 0 if and only if e_ij = 0 holds for ∀i, j = 1, . . . , N. Consider the Lyapunov function

V(t) = (1/4) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T e_ij + (ρ/(2ξα)) Σ_{i∈M} (ε_i(t) − a)^2   (37)

where a is a positive constant. Then, using the inequalities in Assumptions 1 and 2, (3), (4), and (20), the operator 𝓛 is



given from (37) as follows:

E𝓛V(t) = E{ Σ_{i=1}^{N} Σ_{j∈N(i)} (1/2) e_ij^T [ f(x_i, t) − f(x_j, t) − θΓ e_ij ] + (θ/2) Σ_{i=1}^{N} Σ_{j∈N(i)} e_ij^T Γ e_ij
    − c Σ_{i=1}^{N} [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ] − aρ Σ_{i∈M} [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ]
    + (1/4) Σ_{i=1}^{N} Σ_{j∈N(i)} [σ(x_i(t), t) − σ(x_j(t), t)]^T [σ(x_i(t), t) − σ(x_j(t), t)] }
  ≤ E{ −ϖ x^T (L ⊗ I_n) x + x^T (θL ⊗ Γ) x − c x^T (L^2 ⊗ Γ) x − aρ x^T (L̃^T L̃ ⊗ Γ) x + (κ/2) x^T (L ⊗ I_n) x }
  = E{ (κ/2 − ϖ) x^T (L ⊗ I_n) x + x^T ((θ I_N − cL − aρ Ĩ_N L) L ⊗ Γ) x }   (38)

where L̃ = Ĩ_N L. By utilizing (34) in Theorem 2, we have

E𝓛V ≤ 0.   (39)

Therefore, the distributed pinning synchronization of the stochastic complex network in (3) can be achieved in mean square.

Part II: In the following, we give the upper bound of ε. There exists a constant μ such that

(aρ + μ) L̃^T L̃ ≤ aρ L̃^T L̃ + cL^2 − θL   (40)

where μ > −aρ is related to c, θ, L, and Ĩ_N. It is worth mentioning that μ can be determined by the linear matrix inequality (LMI) approach when the structure of the complex network, the pinned vertices, c, and θ are fixed. According to (7), it can be checked that

E{ Σ_{i∈M} ∫_0^∞ dε_i(t) } = E{ Σ_{i∈M} ∫_0^∞ αξ [ Σ_{j∈N(i)} e_ij ]^T Γ [ Σ_{j∈N(i)} e_ij ] dt }.   (41)

ε can be computed as

ε = E{ (αξ/l) ∫_0^∞ x^T (L̃^T L̃ ⊗ Γ) x dt }.   (42)

From (38) and (39), one has

E𝓛V ≤ E{ x^T(t) [ (θL − cL^2 − aρ L̃^T L̃) ⊗ Γ ] x(t) }.   (43)

Similar to the proof of Theorem 1, we have the following inequalities using (37), (40), (41), and (43):

ε = E{ (αξ/l) ∫_0^∞ x^T(t) (L̃^T L̃ ⊗ Γ) x(t) dt }
  ≤ E{ (αξ/((aρ + μ)l)) ∫_0^∞ x^T(t) [ (cL^2 + aρ L̃^T L̃ − θL) ⊗ Γ ] x(t) dt }
  ≤ −E{ (αξ/((aρ + μ)l)) ∫_0^∞ 𝓛V dt }
  = E{ (αξ/((aρ + μ)l)) [V_0 − V_∞] }
  = E{ (αξ/((aρ + μ)l)) [ p_0 + (ρ/(2αξ)) Σ_{i∈M} (2a ε_{i,∞} − ε_{i,∞}^2) ] }
  ≤ E{ (αξ/((aρ + μ)l)) [ p_0 + (alρ/(αξ)) ε − (lρ/(2αξ)) ε^2 ] }   (44)

where V_0 = V(0), V_∞ = lim_{t→∞} V(t), and p_0 = (1/4) Σ_{i=1}^{N} Σ_{j∈N(i)} ‖e_ij(0)‖^2. By solving the last inequality in (44), we have

ε ≤ ε̄ = E{ −μ/ρ + √( (μ/ρ)^2 + 2p_0 αξ/(ρl) ) }.   (45)

By using the inequality √(a^2 + b^2) ≤ a + b and setting ε̂ = E{ −μ/ρ + |μ/ρ| + √(2p_0 αξ/(ρl)) }, where a and b ∈ R are positive, it follows from (45) that

ε ≤ ε̄ ≤ ε̂

and

ε̂ = E{ √(2p_0 αξ/(ρl)) }, if μ ≥ 0;   ε̂ = E{ −2μ/ρ + √(2p_0 αξ/(ρl)) }, else.   (46)

Part III: When Γ = I_n, we have the following equation from the definition of γ:

γ = E{ (1/l) ∫_0^∞ x^T(t) (L̃^T L̃ ⊗ I_n) x(t) dt } = (1/(αξ)) ε.

Then, by setting γ̄ = (1/(αξ)) ε̄, one has from the above equation and (45)

γ ≤ γ̄ = E{ (1/(αξ)) [ −μ/ρ + √( (μ/ρ)^2 + 2p_0 αξ/(ρl) ) ] }.

Specifically, we have, by letting γ̂ = E{ (1/(αξ)) [ −μ/ρ + |μ/ρ| + √(2p_0 αξ/(ρl)) ] },

γ̄ ≤ γ̂ = E{ √(2p_0/(αξρl)) }, if μ ≥ 0;   γ̄ ≤ γ̂ = E{ −2μ/(αξρ) + √(2p_0/(αξρl)) }, else.

This completes the proof.
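An analogous helper can evaluate the pinning bounds ε̄ in (35)/(45) and γ̄ in (36). Again this is only a sketch: μ is assumed to have been obtained beforehand, e.g., by the LMI step mentioned after (40), and the numbers in the example call are placeholders.

```python
import numpy as np

def theorem2_bounds(mu, rho, xi, alpha, p0, l):
    """Evaluate eps_bar (35)/(45) and gamma_bar (36) of Theorem 2 for a pinned set of size l."""
    eps_bar = -mu / rho + np.sqrt((mu / rho) ** 2 + 2 * p0 * alpha * xi / (rho * l))
    gam_bar = eps_bar / (alpha * xi)
    return eps_bar, gam_bar

# Illustrative numbers only.
print(theorem2_bounds(mu=0.5, rho=0.8, xi=0.8, alpha=1.0, p0=25.0, l=1))
```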


[Fig. 1: Time evolutions of (a) ρ(t) and (b) ξ(t). ρ(t) and ξ(t) switch between the values 0 and 1 according to their expectations.]

IV. EXAMPLES

In this section, several examples are given to verify the performance of the proposed randomly occurring control and updating law. The effects of parameters on synchronization dynamics are also discussed. The following chaotic Hopfield neural network is selected as each node of the stochastic complex network in (3):

dx(t) = f(x, t) dt = [ −Cx + Ag(x) ] dt   (47)

where x(t) = [x_1(t), x_2(t), x_3(t)]^T, and A and C are picked as follows:

A = [ 1.25  −3.2  −3.2 ;  −3.2  1.1  −4.4 ;  −3.2  4.4  1 ],   C = [ 1 0 0 ; 0 1 0 ; 0 0 1 ].

The activation function g(x) = [g(x_1), g(x_2), g(x_3)]^T is chosen as

g(x) = (|x + 1| − |x − 1|)/2.

Under such parameters, the neural network in (47) can display chaotic behavior [57]. The noise intensity function is adopted as σ(x_i(t), t) = 0.1x_i(t) (i ∈ V) and Γ = diag{1, 1, 1} ≥ 0. Therefore, we have κ = 0.01. The simulation time is picked as T = 10 and the step size is 0.001. From the expression of (47), Assumptions 1–3 hold.

A. Example 1

In the first example, the distributed adaptive controller in (4) and the adaptive updating law in (7) are used to synchronize (3). The coupling graph used is a scale-free network, which is constructed according to the algorithm of [2]. The growth starts from three nodes without edges. At each step, a new node with three edges is added to the existing network. Repeating this rule produces a scale-free network. The parameters are set as α = 1, c = 0.5, ρ = 0.8, ξ = 0.8, and N = 100. It should be noted that other parameters can also be chosen, but they will affect convergence speed and control costs. The time evolutions of ρ(t) and ξ(t) are given in Fig. 1, which shows that the variables ρ(t) and ξ(t) switch between the values 0 and 1 according to their expectations. By calculating the eigenvalues of the scale-free network, λ2(L) = 0.9409 is obtained. In order to use Assumption 1, the inequality (29) in [58] should be satisfied. Together with the two inequalities in Theorem 1, the following inequalities can be solved:

κ/2 − ϖ < 0,   θ − (aρ + c)λ2(L) < 0,   [ 2Δ(C + θ I_3) − Δ − 2ϖ I_3   −ΔA ;  −A^T Δ   Δ ] > 0   (48)

where a > 0, θ > 0, ϖ > 0 are constants and Δ > 0 ∈ R^{n×n}. By using the MATLAB LMI toolbox, a feasible solution is obtained as follows:

a = 294.6794,   ϖ = 68.9321,   θ = 213.9404,   Δ = diag{154.5750, 154.5750, 154.5750}.   (49)

Therefore, when M = V, the synchronization of the stochastic complex network in (3) via (4) and (7) can be ensured in mean square. When the distributed adaptive controller in (4) is not added to the network, the synchronization of the stochastic complex network cannot be achieved in mean square, as shown in Fig. 2(a). The synchronization errors under (4) are shown in Fig. 2(b), indicating that the synchronization of the stochastic complex network can be realized in mean square. On the other hand, the effects of α, c, ρ, and ξ on ε and γ are investigated. The network scale is set as N = 25 and α, c, ρ, and ξ are adjusted gradually. Fig. S1 shows that the simulations confirm the upper bounds of (15) and (16) in Theorem 1.
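A sketch of the Example 1 setup is given below, under stated assumptions: the scale-free graph is grown as described (three initially edgeless seed nodes, each new node attaching with three edges; preferential attachment with a degree-plus-one weighting is assumed, since the seed nodes have degree zero), and the Hopfield node uses the A, C, and g(x) given for (47). It is illustrative only and not the authors' code.

```python
import numpy as np

def scale_free_graph(n_total, m0=3, m=3, seed=0):
    """Barabasi-Albert-style growth: start from m0 isolated nodes, then attach each new
    node to m existing nodes with probability proportional to (degree + 1); the +1 is an
    assumption to handle the initially edgeless seed nodes (the paper follows [2])."""
    rng = np.random.default_rng(seed)
    deg = np.zeros(n_total)
    edges = set()
    for new in range(m0, n_total):
        prob = (deg[:new] + 1.0) / (deg[:new] + 1.0).sum()
        targets = rng.choice(new, size=min(m, new), replace=False, p=prob)
        for t in targets:
            edges.add((int(t), new))
            deg[t] += 1
            deg[new] += 1
    A = np.zeros((n_total, n_total))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A

def hopfield_f(x, t=0.0):
    """Chaotic Hopfield node (47): f(x) = -C x + A g(x), g(x) = (|x+1| - |x-1|)/2."""
    A = np.array([[1.25, -3.2, -3.2],
                  [-3.2,  1.1, -4.4],
                  [-3.2,  4.4,  1.0]])
    C = np.eye(3)
    g = (np.abs(x + 1) - np.abs(x - 1)) / 2.0
    return -C @ x + A @ g

if __name__ == "__main__":
    Adj = scale_free_graph(100)
    L = np.diag(Adj.sum(axis=1)) - Adj
    print("lambda_2(L) =", np.sort(np.linalg.eigvalsh(L))[1])   # compare with 0.9409 in the text
    print("f at origin:", hopfield_f(np.zeros(3)))              # Assumption 3: f(0, t) = 0
```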

B. Example 2

In the second example, the pinning controller in (4) and the updating law in (7) are used to synchronize the stochastic complex network in (3). The parameters are set as α = 1, ρ = 0.8, ξ = 0.8, c = 11.1, and N = 100. The coupling graph used is a scale-free network, which is the same as the one in Example 1. Pinning nodes are selected according to degree information in a descending way [28]. Finding the minimum number of pinned nodes for synchronizing the stochastic complex network in (3) can be transformed into the following optimization problem by using the criteria in



[Fig. 2: Synchronization of the stochastic complex network in (3) when N = 100, l_p = 1 (l = 100), c = 0.5, α = 1, ρ = 0.8, ξ = 0.8, and κ = 0.1. The figures show that under the distributed adaptive controller in (4) the synchronization of (3) can be achieved, whereas the synchronization cannot be achieved without distributed adaptive control. (a) x_1j − x_ij, i = 1, 2, . . . , N, j = 1, . . . , n, without distributed control. (b) x_1j − x_ij, i = 1, 2, . . . , N, j = 1, . . . , n, under distributed control.]

[Fig. 3: Pinning synchronization of (3) when N = 100, l_p = 0.01 (l = 1), c = 11.1, α = 1, ρ = 0.8, ξ = 0.8, and κ = 0.1. The figure shows that under the distributed adaptive controller in (4) the pinning synchronization of (3) can be achieved in mean square: x_1j − x_ij, i = 1, 2, . . . , N, j = 1, . . . , n, under distributed adaptive control.]

Theorem 2:

min l   subject to:   κ/2 − ϖ < 0,   θL − cL^2 − aρ L̃^T L̃ ≤ 0,   [ 2Δ(C + θ I_3) − Δ − 2ϖ I_3   −ΔA ;  −A^T Δ   Δ ] > 0   (50)

where a > 0, θ > 0, ϖ > 0 are constants and Δ > 0 ∈ R^{n×n}. Once the pinned nodes are fixed by using degree information, a feasible solution can be obtained by LMI and Yalmip [59]:

l = 1,   a = 558.9486,   ϖ = 0.6011,   θ = 6.9542,   Δ = diag{8.2984, 8.7314, 7.5843}.   (51)

Therefore, according to Theorem 2, the stochastic complex network can become synchronized when only a single distributed controller is used. Synchronization errors arising from adding the distributed controller in (4) to the network are shown in Fig. 3. It can be seen that the synchronization of the stochastic complex network can be realized in mean square. The effects of l_p, ρ, and ξ on ε and γ are studied. The network scale is picked as N = 25 and l_p, ρ, and ξ vary. Simulations shown in Fig. S2 verify the results of (35) and (36) in Theorem 2.
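Example 2's search for the minimum number of pinned nodes can be organized as in the sketch below: nodes are ranked by degree in descending order [28], and l is increased until a user-supplied feasibility test (standing in for the conditions of Theorem 2 / problem (50), which would be checked with an LMI solver) succeeds. The toy predicate in the usage example is a placeholder, not the paper's criterion.

```python
import numpy as np

def smallest_pinned_set(adj, feasible, l_max=None):
    """Pick pinned nodes by descending degree [28] and return the smallest l for which
    the predicate `feasible(pinned_indices)` holds (e.g., the conditions of Theorem 2
    checked by an LMI solver). Sketch only."""
    order = np.argsort(-adj.sum(axis=1))          # node indices, highest degree first
    n = len(order) if l_max is None else l_max
    for l in range(1, n + 1):
        pinned = order[:l]
        if feasible(pinned):
            return l, pinned
    return None, None

# Toy usage: pretend feasibility just requires the pinned nodes to touch 30% of all edge endpoints.
rng = np.random.default_rng(5)
A = (rng.random((30, 30)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T
covers = lambda pinned: A[pinned].sum() >= 0.3 * A.sum()
print(smallest_pinned_set(A, covers))
```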

C. Example 3

In the third example, we compare two different control schemes, i.e., randomly occurring control (ROC) and periodically intermittent control (PIC). Here, two types of PIC are considered. PIC was used to synchronize chaotic systems or complex networks [29], [49]. The first type of PIC, named PIC1, is given as follows:

u_i(t) = ε_i(t) Σ_{j∈M} Γ (x_j(t) − x_i(t)),   t ∈ [kT_p, kT_p + ρT_p);   u_i(t) = 0,   t ∈ [kT_p + ρT_p, (k + 1)T_p).

The periodically intermittent updating law of PIC1 can be written as follows:

dε_i(t) = α φ_i^T(t) Γ φ_i(t) dt,   t ∈ [kT_p, kT_p + ξT_p);   dε_i(t) = 0,   t ∈ [kT_p + ξT_p, (k + 1)T_p)

where φ_i(t) = Σ_{j∈M} (x_j(t) − x_i(t)). The second type of periodically intermittent control, named PIC2, is given as follows:

u_i(t) = 0,   t ∈ [kT_p, kT_p + (1 − ρ)T_p);   u_i(t) = ε_i(t) Σ_{j∈M} Γ (x_j(t) − x_i(t)),   t ∈ [kT_p + (1 − ρ)T_p, (k + 1)T_p)

where k = 0, 1, . . . , m, T_p is a period, and ρ ∈ [0, 1]. The periodically intermittent updating law of PIC2 can be written as follows:

dε_i(t) = 0,   t ∈ [kT_p, kT_p + (1 − ξ)T_p);   dε_i(t) = α φ_i^T(t) Γ φ_i(t) dt,   t ∈ [kT_p + (1 − ξ)T_p, (k + 1)T_p)

where φ_i(t) = Σ_{j∈M} (x_j(t) − x_i(t)), k = 0, 1, . . . , m, T_p is a period, and ξ ∈ [0, 1]. The above equations show that the control and gain updating of PIC1 are implemented in the former part of each period, while the control and gain updating of PIC2 are implemented in the latter part. Fig. 4 shows that, when control occurs periodically or randomly, both ε and γ decrease as ρ ∈ [0, 1] increases.
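To compare the activation patterns of ROC, PIC1, and PIC2, one can generate the three control-activation indicators on a common time grid, as in the hedged sketch below; drawing an independent Bernoulli variable at every grid point for ρ(t), and the values of T_p and dt, are illustrative assumptions.

```python
import numpy as np

def activation_schedules(T=10.0, dt=1e-3, Tp=1.0, rho=0.8, seed=0):
    """Control-activation indicators on a common time grid:
    ROC : Bernoulli gate rho(t) with E{rho(t)} = rho,
    PIC1: active on [k*Tp, k*Tp + rho*Tp),
    PIC2: active on [k*Tp + (1 - rho)*Tp, (k + 1)*Tp)."""
    t = np.arange(0.0, T, dt)
    rng = np.random.default_rng(seed)
    roc = (rng.random(t.size) < rho).astype(float)
    phase = np.mod(t, Tp)
    pic1 = (phase < rho * Tp).astype(float)
    pic2 = (phase >= (1 - rho) * Tp).astype(float)
    return t, roc, pic1, pic2

t, roc, pic1, pic2 = activation_schedules()
print("duty cycles:", roc.mean(), pic1.mean(), pic2.mean())   # all close to rho
```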


[Fig. 4: Comparison between ROC and PIC on ε and γ when T = 10, α = 1, c = 1, N = 100, l = 100, and κ = 0.1. (a) Tuning ρ on ε when ξ = 1. (b) Tuning ρ on γ when ξ = 1. (c) Tuning ξ on ε when ρ = 1. (d) Tuning ξ on γ when ρ = 1. The figure illustrates the effects of ρ and ξ on ε and γ. ROC ranks second among the three control methods in both convergence rates and control costs. Periodically intermittent control 1 (PIC1) achieves the fastest convergence rate while it has the highest control cost. Periodically intermittent control 2 (PIC2) has the slowest convergence rate while it has the smallest control cost.]

Randomly occurring control is an intermediate method among the three types of control. PIC1 performs best on ε and γ. When gain updating occurs periodically or randomly, ε increases when ξ ∈ [0, 1] increases, and γ decreases when ξ ∈ [0, 1] increases. Fig. 4 shows that PIC1 achieves the fastest convergence rate and the highest control cost, and PIC2 produces the slowest convergence rate and the smallest control cost. Randomly occurring gain updating ranks second among the three updating methods in terms of convergence rates and control costs. It should be noted that PIC has to be activated periodically. The advantage of randomly occurring control (ROC) over PIC is that ROC has fewer restrictions on implementation since it does not need to act at certain fixed points. Hence, ROC is more flexible than PIC.

V. CONCLUSION

This paper was devoted to the distributed synchronization of stochastic complex networks by randomly occurring control. Bernoulli stochastic variables were used to describe the occurrences of distributed adaptive control and updating law. For pinning and non-pinning cases, Lyapunov functions and stochastic analysis techniques were used to derive sufficient conditions ensuring the distributed synchronization and the distributed pinning synchronization of stochastic complex networks in mean square. Meanwhile, the upper bounds of the control cost and the convergence speed were analytically derived by Lyapunov functions and inequality techniques.

Finally, simulations verified the feasibility of our theoretical results, showing that randomly occurring control has fewer restrictions than periodically intermittent control in terms of implementation. For further research topics, it is recommended that the occurrences of distributed adaptive control be described using more complex stochastic variables or probability density functions [60]. Also, it is important to extend our results to networks with coupling delay [4], [16] or to directed networks/multiagent systems.

ACKNOWLEDGMENT

The authors would like to thank the Editor-in-Chief, the Associate Editor, and the anonymous reviewers for their careful reading of the manuscript and constructive comments. They would also like to thank Dr. Lu, Mr. Li, and Dr. Zou for helpful discussions.

REFERENCES

[1] D. Watts and S. Strogatz, “Collective dynamics of ‘small-world’ networks,” Nature, vol. 393, pp. 440–442, Jun. 1998. [2] A. Barabási and R. Albert, “Emergence of scaling in random networks,” Science, vol. 286, pp. 509–512, Oct. 1999. [3] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D. Hwang, “Complex networks: Structure and dynamics,” Phys. Rep., vol. 424, nos. 4–5, pp. 175–308, 2006. [4] A. Arenas, A. Guilera, J. Kurths, Y. Moreno, and C. Zhou, “Synchronization in complex networks,” Phys. Rep., vol. 469, no. 3, pp. 93–153, 2008.



[5] E. Ott, Chaos in Dynamical Systems, 2nd ed. Cambridge, U.K.: Cambridge Univ. Press, 2002. [6] A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences. Cambridge, U.K.: Cambridge Univ. Press, 2001. [7] Y. Tang, H. Gao, W. Zou, and J. Kurths, “Identifying controlling nodes in neuronal networks in different scales,” PLoS ONE, vol. 7, no. 7, p. e41375, Jul. 2012. [8] W. Zhang, Y. Tang, J. Fang, and X. Wu, “Stability of delayed neural networks with time-varying impulses,” Neural Netw., vol. 36, pp. 59–63, Dec. 2012. [9] L. Wu, Z. Feng, and W. X. Zheng, “Exponential stability analysis for delayed neural networks with switching parameters: Average dwell time approach,” IEEE Trans. Neural Netw., vol. 21, no. 9, pp. 1396–1407, Sep. 2010. [10] J. Lu and D. Ho, “Globally exponential synchronization and synchronizability for general dynamical networks,” IEEE Trans. Syst., Man Cybern. B, Cybern., vol. 40, no. 2, pp. 350–361, Apr. 2010. [11] M. Barahona and L. Pecora, “Synchronization in small-world systems,” Phys. Rev. Lett., vol. 89, no. 5, pp. 054101-1–054101-4, 2002. [12] T. Nishikawa, A. Motter, Y. Lai, and F. Hoppensteadt, “Heterogeneity in oscillator networks: Are smaller worlds easier to synchronize?” Phys. Rev. Lett., vol. 91, no. 1, pp. 014101-1–014101-4, 2003. [13] L. Donetti, P. Hurtado, and M. Munoz, “Entangled networks, synchronization, and optimal network topology,” Phys. Rev. Lett., vol. 95, no. 18, pp. 188701-1–188701-4, 2005. [14] D. U. Hwang, M. Chavez, A. Amann, and S. Boccaletti, “Synchronization in complex networks with age ordering,” Phys. Rev. Lett., vol. 94, no. 13, pp. 138701-1–138701-4, 2005. [15] T. Oguchi, H. Nijmeijer, and T. Yamamoto, “Synchronization in networks of chaotic systems with time-delay coupling,” Chaos, vol. 18, no. 3, pp. 037108-1–037108-14, 2008. [16] D. Hunt, G. Korniss, and B. Szymanski, “Network synchronization in a noisy environment with time delays: Fundamental limits and trade-offs,” Phys. Rev. Lett., vol. 105, no. 6, pp. 068701-1–068701-4, 2010. [17] M. Chavez, D. Hwang, A. Amann, H. Hentschel, and S. Boccaletti, “Synchronization is enhanced in weighted complex networks,” Phys. Rev. Lett., vol. 94, no. 21, pp. 218701-1–218701-4, 2005. [18] V. Belykh, G. Osipov, V. Petrov, J. Suykens, and J. Vandewalle, “Cluster synchronization in oscillatory networks,” Chaos, vol. 18, no. 3, pp. 037106-1–037106-6, 2008. [19] C. Wu and L. Chua, “Synchronization in an array of linearly coupled dynamical systems,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 42, no. 8, pp. 430–447, Aug. 1995. [20] C. Zhou and J. Kurths, “Dynamical weights and enhanced synchronization in adaptive complex networks,” Phys. Rev. Lett., vol. 96, no. 16, pp. 164102-1–164102-4, 2006. [21] Z. Wang, Y. Wang, and Y. Liu, “Global synchronization for discrete-time stochastic complex networks with randomly occurred nonlinearities and mixed time-delays,” IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 11–25, Jan. 2010. [22] J. Liang, Z. Wang, Y. Liu, and X. Liu, “Robust synchronization of an array of coupled stochastic discrete-time delayed neural networks,” IEEE Trans. Neural Netw., vol. 19, no. 11, pp. 1910–1921, Nov. 2008. [23] H. Zhang, Y. Xie, Z. Wang, and C. Zheng, “Adaptive synchronization between two different chaotic neural networks with time delay,” IEEE Trans. Neural Netw., vol. 18, no. 6, pp. 1841–1845, Nov. 2007. [24] H. 
Li, “New criteria for synchronization stability of continuous complex dynamical networks with non-delayed and delayed coupling,” Commun. Nonlinear Sci. Numer. Simul., vol. 16, no. 2, pp. 1027–1043, 2011. [25] W. Zhang, Y. Tang, J. Fang, and W. Zhu, “Exponential cluster synchronization of impulsive delayed genetic oscillators with external disturbances,” Chaos, vol. 21, no. 4, pp. 043137-1–043137-12, 2011. [26] W. He and J. Cao, “Exponential synchronization of hybrid coupled networks with delayed coupling,” IEEE Trans. Neural Netw., vol. 21, no. 4, pp. 571–583, Apr. 2010. [27] F. Sorrentino and E. Ott, “Adaptive synchronization of dynamics on evolving complex networks,” Phys. Rev. Lett., vol. 100, no. 11, pp. 114101-1–114101-4, 2008. [28] X. Wang and G. Chen, “Pinning control of scale-free dynamical networks,” Phys. A, vol. 310, nos. 3–4, pp. 521–531, 2002. [29] W. Xia and J. Cao, “Pinning synchronization of delayed dynamical networks via periodically intermittent control,” Chaos, vol. 19, no. 1, pp. 013120-1–013120-8, 2009. [30] R. Grigoriev, M. Cross, and H. Schuster, “Pinning control of spatiotemporal chaos,” Phys. Rev. Lett., vol. 79, no. 15, pp. 2795–2798, 1997.

[31] J. Lu, J. Kurths, J. Cao, N. Mahdavi, and C. Huang, “Synchronization control for nonlinear stochastic dynamical networks: Pinning impulsive strategy,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 2, pp. 285–292, Feb. 2012. [32] Y. Tang, H. Gao, J. Kurths, and J. Fang, “Evolutionary pinning control and its application in UAV coordination,” IEEE Trans. Ind. Inf., vol. 8, no. 4, pp. 828–838, Nov. 2012. [33] Y. Tang, Z. Wang, H. Gao, S. Swift, and J. Kurths, “A constrained evolutionary computation method for detecting controlling regions of cortical networks,” IEEE/ACM Trans. Comput. Biol. Bioinf., vol. 9, no. 6, pp. 1569–1581, Nov.–Dec. 2012. [34] X. Li, X. Wang, and G. Chen, “Pinning a complex dynamical network to its equilibrium,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 51, no. 10, pp. 2074–2087, Oct. 2004. [35] T. Chen, X. Liu, and W. Lu, “Pinning complex networks by a single controller,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 54, no. 6, pp. 1317–1326, Jun. 2007. [36] J. Lu, D. Ho, and L. Wu, “Exponential stabilization in switched stochastic dynamical networks,” Nonlinearity, vol. 22, no. 4, pp. 889–911, 2009. [37] J. Lu, D. Ho, and Z. Wang, “Pinning stabilization of linearly coupled stochastic neural networks via minimum number of controllers,” IEEE Trans. Neural Netw., vol. 20, no. 10, pp. 1617–1629, Oct. 2009. [38] J. Zhou, J. Lu, and J. Lu, “Pinning adaptive synchronization of a general complex dynamical network,” Automatica, vol. 44, no. 4, pp. 996–1003, 2008. [39] M. Porfiri and M. Bernardo, “Criteria for global pinning-controllability of complex networks,” Automatica, vol. 44, no. 12, pp. 3100–3106, 2008. [40] W. Yu, G. Chen, and J. Lu, “On pinning synchronization of complex dynamical networks,” Automatica, vol. 45, no. 2, pp. 429–435, 2009. [41] P. Lellis, M. Bernardo, and G. Russo, “On QUAD, Lipschitz, and contracting vector fields for consensus and synchronization of networks,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 58, no. 3, pp. 576–583, Mar. 2011. [42] Y. Tang, Z. D. Wang, and J. Fang, “Pinning control of fractionalorder weighted complex networks,” Chaos, vol. 19, no. 1, pp. 013112-1–013112-9, 2009. [43] Y. Liu, J. Slotine, and A. Barabasi, “Controllability of complex networks,” Nature, vol. 473, pp. 167–173, May 2011. [44] R. Saber, J. Fax, and R. Murray, “Consensus and cooperation in networked multi-agent systems,” Proc. IEEE, vol. 95, no. 1, pp. 215–233, Jan. 2007. [45] W. Lu, “Adaptive dynamical networks via neighborhood information: Synchronization and pinning control,” Chaos, vol. 17, no. 2, pp. 023122-1–023122-18, 2007. [46] W. Yu, G. Chen, Z. Wang, and W. Yang, “Distributed consensus filtering in sensor networks,” IEEE Trans. Syst., Man Cybern. B, Cybern., vol. 39, no. 6, pp. 1568–1577, Dec. 2009. [47] J. Hespanha, P. Naghshtabrizi, and Y. Xu, “A survey of recent results in networked control systems,” Proc. IEEE, vol. 95, no. 1, pp. 138–162, Jan. 2007. [48] M. Zochowski, “Intermittent dynamical control,” Phys. D, vol. 145, nos. 3–4, pp. 181–190, 2000. [49] T. Huang, C. Li, and X. Liu, “Synchronization of chaotic systems with delay using intermittent linear state feedback,” Chaos, vol. 18, no. 3, pp. 033122-1–033122-8, 2008. [50] X. Sun, G. Liu, D. Rees, and W. Wang, “Stability of systems with controller failure and time-varying delay,” IEEE Trans. Autom. Control, vol. 53, no. 10, pp. 2391–2396, Nov. 2008. [51] X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching. London, U.K.: Imperial College Press, 2008. 
[52] Z. Wang, F. Yang, D. Ho, and X. Liu, “Robust H∞ control for networked systems with random packet losses,” IEEE Trans. Syst., Man Cybern. B, Cybern., vol. 37, no. 4, pp. 916–924, Aug. 2007. [53] Z. Wang, D. Ho, and X. Liu, “Variance-constrained filtering for uncertain stochastic systems with missing measurements,” IEEE Trans. Autom. Control, vol. 48, no. 7, pp. 1254–1258, Jul. 2003. [54] D. Yue, E. Tian, Z. Wang, and J. Lam, “Stabilization of systems with probabilistic interval input delays and its applications to networked control systems,” IEEE Trans. Syst., Man Cybern. A, Syst. Humans, vol. 39, no. 4, pp. 939–945, Jul. 2009. [55] W. Yu, P. Lellis, G. Chen, M. Bernardo, and J. Kurths, “Distributed adaptive control of synchronization in complex networks,” IEEE Trans. Autom. Control, vol. 57, no. 8, pp. 2153–2158, Aug. 2012. [56] X. Mao, Stochastic Differential Equations and Applications, 2nd ed. Chichester, U.K.: Horwood, 2007.


[57] F. Zou and J. Nossek, “Bifurcation and chaos in cellular neural networks,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 40, no. 3, pp. 166–173, Mar. 1993. [58] W. Lu and T. Chen, “New approach to synchronization analysis of linearly coupled ordinary differential systems,” Phys. D, vol. 213, no. 2, pp. 214–230, 2006. [59] J. Loefberg, “YALMIP: A toolbox for modeling and optimization in MATLAB,” in Proc. CACSD Conf., Taipei, Taiwan, 2004, pp. 284–289. [60] Y. Tang, W. Zou, J. Lu, and J. Kurths, “Stochastic resonance in an ensemble of bistable systems under stable distribution noises and heterogeneous coupling,” Phys. Rev. E, vol. 85, no. 4, pp. 046207-1–046207-4, 2012.

Yang Tang (M’11) received the B.S. and Ph.D. degrees in electrical engineering from Donghua University, Shanghai, China, in 2006 and 2011, respectively. He was a Research Associate with The Hong Kong Polytechnic University, Kowloon, Hong Kong, from 2008 to 2010. He has been an Alexander von Humboldt Research Fellow with Humboldt University of Berlin, Berlin, Germany, and a Visiting Scientist with the Potsdam Institute for Climate Impact Research, Potsdam, Germany, since 2011. He was a Visiting Research Fellow with Brunel University, Uxbridge, U.K., in 2012. He has authored or co-authored more than 30 papers in refereed international journals and conferences. His current research interests include synchronization and consensus, networked control systems, evolutionary computation, bioinformatics, and their applications. Dr. Tang is an Active Reviewer of many international journals.


Wai Keung Wong received the Ph.D. degree from The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong. He is currently an Associate Professor with The Hong Kong Polytechnic University. He has authored or co-authored more than 60 papers in refereed journals and conferences, including the IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS, Pattern Recognition, Chaos, the International Journal of Production Economics, the European Journal of Operational Research, the International Journal of Production Research, and Computers in Industry, among others. His current research interests include artificial intelligence, pattern recognition, and optimization of manufacturing scheduling, planning, and control.
