PHYSICAL REVIEW E 89, 042705 (2014)

Modular network evolution under selection for robustness to noise

Yusuke Ikemoto
Department of Mechanical and Intellectual Systems Engineering, University of Toyama, 3190 Gofuku, Toyama 930-8555, Japan

Kosuke Sekiyama
Department of Micro-Nano Systems Engineering, Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan

(Received 10 March 2013; revised manuscript received 16 December 2013; published 15 April 2014)

Real networks often exhibit modularity, which is defined as the degree to which a network can be decomposed into several subnetworks. The question of how a modular network arises is still open to discussion. The leading hypothesis is that high modularity evolves under multiple goals, which are decomposable to subproblems, as well as under the evolutionary constraint that selection prefers sparse links in a network. In the present study, we investigate an alternative evolutionary constraint entailing increased robustness to noise. To examine this, we present noise-interfused network models involving an analytically solvable linear system and biologically inspired nonlinear systems. The models demonstrate that it is possible to evolve a modular network under both modularly changing goal orientations and enhancing robustness to noise, thereby reducing sensitivity to noise. By performing theoretical analyses of linear systems, it is shown that the evolutionary constraint enforces the establishment of well-balanced noise sensitivities of multiple noise sources and leads to a modular network underlying a modular structure in goals. Moreover, computer simulations confirm that the presented mechanisms of modular network evolution are robust to variations of nonlinearity in network functions. Our findings suggest a positive role for the presence of noise in network evolution.

DOI: 10.1103/PhysRevE.89.042705

PACS number(s): 87.18.Tt, 89.75.Fb

I. INTRODUCTION

Biological networks favor architectures that are partitioned into subnetwork modules rather than random networks [1,2]. This degree of partitioning is referred to as modularity, which is defined as tight linkages within subnetworks and weak linkages between them [2]. Modular structure can be found in various hierarchical networks such as gene expressions [3], protein-protein interactions [4], and the interactions between working partners [5]. Although it is important to understand how nature designs networks with high modularity, the question about the evolutionary origin of modular networks is still open to discussion. The leading hypothesis is that a modular structure arises because of changing environments that have multiple goals sharing common subproblems [6,7]. This evolutionary scenario is called modularly varying goals (MVG) [7]. Simulation-based research has illustrated that a modular structure can spontaneously evolve under conditions with MVG. This modularly changing environment is considered to be a plausible mechanism for the evolution of modular networks. However, most research dealing with the emergence of modularity under MVG is based on the premise of an evolutionary constraint directly relevant to sparse links in a network. The notable evolutionary constraint is based on the cost, which decreases network fitness because of the burden of producing and maintaining network linkages [6,8], where sparse linkage introduces higher selective advantages. In this framework, the effect of reducing the link cost may prompt evolving networks to be sparse and play a substantial role in sustaining only significant links for satisfying goals. Another evolutionary constraint is based on selection with biased mutations, which enforces limitations on the number of degrees of input to a node [9] and perhaps promotes

sparse linkage in evolving networks. This framework is firmly reinforced by empirical data, wherein a node has two or three input degrees at most in gene regulatory networks [10,11]. The features of these evolutionary constraints are directly implicated in the processes by which nodes develop linkages. In contrast, we focus on an alternative evolutionary constraint that prompts a system to increase its robustness to noise. This is because improving robustness to noise is expected to promote the deletion of insignificant links in a network and to confine the network to the significant links needed for satisfying goals. Recently, it was reported that modular structure affects network robustness to noise [12], which reflects its ability to sustain required outputs [13]. In the real world, noise is ubiquitous, even in genetic circuits [14], synaptic circuits [15], and metabolic pathways [16]. Because networks mediate both nominal signals and noise, the link topology of a network affects its robustness to noise [17]. Several computer simulation studies have shown that robustness increases with the emergence of modular networks [18-20]. However, there have been few discussions regarding a mathematical framework establishing how robustness to noise can contribute to network evolution. In practice, networks are required to handle multiple types of noise and sustain stable outputs. A network must manage noise [14,21] such as source noise and sensor noise [22] acting on the network dynamics. These types of noise are classified in terms of how they originate. Source noise is an internal system noise that is associated with fluctuations in the actional events of a node. Sensor noise is an external system noise that is generated by fluctuations in the mediators used for signal transduction, and it includes noise associated with signals from the environment. In gene expression networks, source and sensor noise may correspond to intrinsic and extrinsic noise, respectively [21].


FIG. 1. Schematic of evolutionary constraints, based on the assumption that robustness to noise leads to the evolution of modular networks. The region in gray represents one of the leading hypotheses based on reduction of link costs. The region with diagonal lines represents the area we are presently investigating wherein the network evolves under the imperative of increasing robustness to noise.

It is possible to increase robustness by reducing noise sensitivity; however, in this approach, the system needs to find a balance among system gains because noise interfused in one signal is easily amplified in another [14]. This trade-off between the sensitivity and complementary sensitivity functions gives rise to inevitable evolutionary constraints. Our interest lies in the mathematical relationships between robustness to noise and the evolution of modular networks. We therefore investigate network evolution, not from the perspective of direct selection that favors sparse networks, but under conditions of increasing robustness to noise in the framework of MVG, as described in Fig. 1. To accomplish our investigation, we present noise-interfused network models, involving an analytically solvable linear system and biologically inspired nonlinear systems using the hyperbolic tangent and Hill function models. The models demonstrate that increasing robustness to noise leads to the evolution of a modular network in every case. We found that the increase of robustness to noise enforces a fair trade-off among the noise sensitivities of multiple noise sources and can be the evolutionary constraint that manifests a modular structure relevant to modular goals.

FIG. 2. (Color online) Network model configurations. The squares and the circles indicate input and output nodes, respectively. The recurrently connected lines in red represent the links with respect to a_{ij}; the lines in blue from inputs to outputs represent the links with respect to b_{ik}. The f_i take the form of ^Lf_i, ^Tf_i, or ^Hf_i depending on the function model.

II. MODEL

A. Network model

We consider the evolution of networks with a fixed number N_O of outputs that receive N_I inputs as shown in Fig. 2. We define x_i as the actual output, p_i as the required output of the output node i (for i = {1, 2, ..., N_O}), and u_k as the kth input (for k = {1, 2, ..., N_I}). The goal comprises pairs of p_i and u_k. In this model, for instance, these pairs are assumed to be the frequency of spiking in nerve systems or the rate of biochemical reactions in gene expressions. The nominal dynamics of a node activity x_i are expressed as

\dot{x}_i = -x_i + f_i(x_1, \ldots, x_{N_O}, u_1, \ldots, u_{N_I}).   (1)

The first term on the right-hand side is assumed to be a damping phenomenon with a coefficient of 1. The second term on the right-hand side expresses the network function determined by recurrent linkages among outputs and feedforward linkages from inputs to outputs. By using the vector notation u = [u_1, ..., u_{N_I}]^T, x = [x_1, ..., x_{N_O}]^T, and f = [f_1, ..., f_{N_O}]^T, the model can be expressed as \dot{x} = -x + f(x, u).

Let a_{ij} and b_{ik} be the link weights from output j to output i and from input k to output i, respectively. A link weight is a continuous value and indicates activation if positive and repression if negative. Using matrix notation, we define network matrices as A (N_O × N_O) with elements a_{ij} and B (N_O × N_I) with elements b_{ik}. In the present study, we investigate network evolution with linear and nonlinear models, as shown in Fig. 3, because there are several types of network function models applicable to biological phenomena. The nonlinear models comprise the hyperbolic tangent and Hill functions. The hyperbolic tangent function is often used to model nervous systems [23]. The Hill function is based on chemical reactions and is often used to model gene regulatory networks [24]. For the linear model [Fig. 3(a)], the action of output node i is expressed as ^Lf_i, defined by

{}^{L}f_i = \sum_{j=1}^{N_O} a_{ij} x_j + \sum_{k=1}^{N_I} b_{ik} u_k.   (2)

This model is regarded as a linear approximation of several nonlinear functions around a convergent point and it allows the x_i to have values within [-∞, ∞]. We define the on and off states of the required output as +1 and -1, respectively. In the hyperbolic tangent model [Fig. 3(b)], the action of output node i is expressed as ^Tf_i, defined by

{}^{T}f_i = \tanh\!\left[\beta\left(\sum_{j=1}^{N_O} a_{ij} x_j + \sum_{k=1}^{N_I} b_{ik} u_k\right)\right],   (3)

where the parameter β determines the precipitousness of the nonlinear function. The limit β → ∞ provides a steplike function of the inputs.
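To make the dynamics concrete, the following sketch integrates Eq. (1) with the linear and hyperbolic tangent function models using a simple Euler scheme. It is not part of the paper; the network sizes, weights, and inputs are placeholder values chosen only for illustration.

```python
import numpy as np

def f_linear(A, B, x, u):
    # Eq. (2): weighted recurrent input plus weighted external input
    return A @ x + B @ u

def f_tanh(A, B, x, u, beta=1.0):
    # Eq. (3): saturating version of the same net input
    return np.tanh(beta * (A @ x + B @ u))

def integrate(A, B, u, f, x0, dt=0.1, t_max=10.0):
    """Euler integration of Eq. (1): x_dot = -x + f(x, u)."""
    x = x0.copy()
    for _ in range(int(t_max / dt)):
        x += dt * (-x + f(A, B, x, u))
    return x

# Toy example (sizes and weights are illustrative, not the paper's benchmark).
rng = np.random.default_rng(0)
N_O, N_I = 3, 2
A = rng.uniform(-0.3, 0.3, (N_O, N_O))
B = rng.uniform(-0.3, 0.3, (N_O, N_I))
u = np.ones(N_I)
x_star = integrate(A, B, u, f_tanh, x0=np.zeros(N_O))
print("asymptotic outputs:", x_star)
```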


In practice, this parameter has an effect on the evolutionary speed rather than on the quality of the evolving network structures, because β also determines the output sensitivity to changes of the link weights. Here we set β = 1 based on preliminary simulations to reduce the computational cost. In this model, the range of the node activity x_i is restricted within (-1, +1) and x_i never attains ±1. Because of the range limitation, the on and off states in goals are defined as +0.9 and -0.9, respectively.

FIG. 3. Network function models: (a) linear function model ^Lf(x), (b) hyperbolic tangent function model ^Tf(x) with β variations, and (c) the Hill function model ^Hf(x) with γ variations.

For the Hill model [Fig. 3(c)], we employ the multiple-input Hill function. The details of its correspondence to biological phenomena are provided elsewhere [24]. Here a brief explanation of the mathematical behavior is provided. In the Hill model, the action of output node i is expressed as ^Hf_i, defined by

{}^{H}f_i = \frac{f_i^{\mathrm{act}}}{1 + f_i^{\mathrm{act}} + f_i^{\mathrm{rep}}},   (4)

where f_i^act expresses the set of activator inputs and f_i^rep expresses the set of repressor inputs to node i:

f_i^{\mathrm{act}} = \sum_{\{j \mid a_{ij} > 0\}} (|a_{ij}| x_j)^{\gamma} + \sum_{\{k \mid b_{ik} > 0\}} (|b_{ik}| u_k)^{\gamma},   (5)

f_i^{\mathrm{rep}} = \sum_{\{j \mid a_{ij} < 0\}} (|a_{ij}| x_j)^{\gamma} + \sum_{\{k \mid b_{ik} < 0\}} (|b_{ik}| u_k)^{\gamma},   (6)

where |·| indicates an absolute value. Here ^Hf_i is close to 1 if f_i^act ≫ f_i^rep because both the denominator and numerator increase with the same order. In contrast, ^Hf_i is close to 0 if f_i^act ≪ f_i^rep because of the increase only in the denominator of ^Hf_i. In the absence of activation, ^Hf_i is fixed at 0 whether or not repression exists. Accordingly, the range of x_i is restricted within [0, +1). Because of the range limitation, the on and off states in goals are defined by +0.9 and 0, respectively. The parameter γ is the Hill coefficient that determines the nonlinearity of activation or repression. The limit γ → ∞ provides a steplike function of the inputs. It is known that this parameter is set to a value from 1 to 4 to fit actual behaviors [24]. Here we choose γ = 4 for the same reason as setting β in the hyperbolic tangent function model. The inverses of |a_{ij}| and |b_{ik}|, i.e., 1/|a_{ij}| and 1/|b_{ik}|, represent the activation coefficient if a_{ij}, b_{ik} > 0 and the repression coefficient if a_{ij}, b_{ik} < 0. These coefficients determine the level of the effectual activity of a node; thus, a large coefficient prompts large activity from small inputs at node i.

B. Network dynamics with noise

Now we introduce several sources of noise to the network dynamics. Specifically, we inject sources of sensor noise ξ_{a_i} and ξ_{b_k} into the signals x_i and u_k, respectively, and introduce source noise ξ_{c_i} at the action events occurring at node i, as shown in Fig. 4. The distributions of all the noise sources are set to Gaussian distributions with a mean of 0 and a variance of 1. Given noise interfusion, the nominal system given by Eq. (1) is modified on the right-hand side such that, for the first term, x → x + σ_c ξ_c, and, for the second term, f(x, u) → f(x + σ_a ξ_a, u + σ_b ξ_b), where ξ_a, ξ_b, and ξ_c are column vectors with elements ξ_{a_i}, ξ_{b_k}, and ξ_{c_i}, respectively. The coefficients σ_a, σ_b, and σ_c are indicative of the noise strengths and were set to 0.1. If we assume that the noise strengths are small, the effect of noise on the network dynamics can be approximately expressed by linear terms via a Taylor expansion around x [25]:

\dot{\mathbf{x}} = -\mathbf{x} + \mathbf{f}(\mathbf{x}, \mathbf{u}) + \sigma_a \mathbf{J}_a \boldsymbol{\xi}_a + \sigma_b \mathbf{J}_b \boldsymbol{\xi}_b - \sigma_c \boldsymbol{\xi}_c,   (7)

where J_a and J_b are Jacobian matrices whose components are derived by [J_a]_{ij} = ∂f_i/∂x_j and [J_b]_{ik} = ∂f_i/∂u_k, evaluated for the function model in use; the explicit expressions [Eqs. (8)-(14)] follow by differentiating Eqs. (2)-(4), and for the Hill model they branch according to whether a_{ij} > 0 (b_{ik} > 0) or not. The effect of noise on the network dynamics is mainly characterized by these Jacobian matrices.
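As a minimal illustration of the noise-interfused dynamics, the sketch below perturbs the state and inputs with fresh unit-variance Gaussian draws at every Euler step, following the substitutions described above. It is only a sketch: the paper's analysis works with the linearized form of Eq. (7), and a rigorous stochastic integrator would also rescale the noise with the step size.

```python
import numpy as np

def noisy_step(x, u, A, B, f, dt=0.1, sigma_a=0.1, sigma_b=0.1, sigma_c=0.1,
               rng=np.random.default_rng()):
    """One Euler step of the noise-interfused dynamics:
    x_dot = -(x + sigma_c*xi_c) + f(x + sigma_a*xi_a, u + sigma_b*xi_b)."""
    xi_a = rng.standard_normal(x.shape)   # sensor noise on outputs
    xi_b = rng.standard_normal(u.shape)   # sensor noise on inputs
    xi_c = rng.standard_normal(x.shape)   # source noise at node actions
    x_dot = -(x + sigma_c * xi_c) + f(A, B, x + sigma_a * xi_a, u + sigma_b * xi_b)
    return x + dt * x_dot
```

Repeated application of noisy_step to the converged nominal state would give samples from which the output variance could be estimated empirically, although the paper computes it from the Lyapunov equation instead.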

C. Fitness

To evaluate how noise affects the output variance, we employed a fitness based on the expectation of a second-order energy function E[\tilde{x}^T\tilde{x}] = \frac{1}{\tau}\int_0^{\tau} \tilde{x}^T\tilde{x}\,dt\,|_{\tau\to\infty}, where \tilde{x} = x - p is the output error. In practice, the fitness contains the sum of -E[\tilde{x}^T\tilde{x}] with respect to every goal. To calculate the fitness, it is necessary to record the value x over an extended period. To reduce computational costs, we considered deriving the fitness from a mathematical framework, as in the following sequence. By the formula x - p = (x - E[x]) + (E[x] - p), the fitness can be separated into two terms: E[\tilde{x}^T\tilde{x}] = Tr[H] + Tr[V], where H = (p - E[x])(p - E[x])^T is the output error in the nominal model and V = (x - E[x])(x - E[x])^T is the covariance matrix of the outputs. In the limit τ → ∞, the covariance matrix satisfies the following Lyapunov equation [25]:

\boldsymbol{\Phi} := (\mathbf{J}_a - \mathbf{I})\mathbf{V} + \mathbf{V}(\mathbf{J}_a - \mathbf{I})^T + \sigma_a^2 \mathbf{J}_a\mathbf{J}_a^T + \sigma_b^2 \mathbf{J}_b\mathbf{J}_b^T + \sigma_c^2 \mathbf{I} = \mathbf{0}.   (15)

Thus, the fitness F of a network is defined as the summation with respect to every goal,

F = -\sum_{g=1}^{N_G} \mathrm{Tr}[\mathbf{H}_g] - \sum_{g=1}^{N_G} \mathrm{Tr}[\mathbf{V}_g],   (16)

where the first term corresponds to the squared errors in the nominal model and the second term represents the sum of variances of the actual outputs introduced by noise.
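The covariance term in Eq. (16) can be obtained numerically from Eq. (15) with a standard continuous-time Lyapunov solver. The sketch below is not from the paper; function names are illustrative, and the per-goal evaluation is abstracted into an assumed callback.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def output_covariance(J_a, J_b, sigma_a=0.1, sigma_b=0.1, sigma_c=0.1):
    """Solve Eq. (15): (Ja - I) V + V (Ja - I)^T + N = 0 for V,
    with N = sigma_a^2 Ja Ja^T + sigma_b^2 Jb Jb^T + sigma_c^2 I."""
    n = J_a.shape[0]
    N = (sigma_a**2) * J_a @ J_a.T + (sigma_b**2) * J_b @ J_b.T + (sigma_c**2) * np.eye(n)
    # solve_continuous_lyapunov solves M X + X M^T = Q, so pass Q = -N.
    return solve_continuous_lyapunov(J_a - np.eye(n), -N)

def fitness(goals, evaluate_goal):
    """Eq. (16): F = -sum_g Tr[H_g] - sum_g Tr[V_g].
    evaluate_goal(goal) is assumed to return (H_g, V_g) for one goal."""
    F = 0.0
    for goal in goals:
        H_g, V_g = evaluate_goal(goal)
        F -= np.trace(H_g) + np.trace(V_g)
    return F
```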

D. Evolutionary simulations

To investigate the evolutionary dynamics, we performed evolutionary simulations wherein selection is based on a standard genetic algorithm with elite preservation. For every model, simulations were performed under the same settings (except for the one-dimensional case discussed in Sec. IV A). The population size was fixed at 100 at each generation. The top 10% of the networks passed to the next generation and reproduced without crossover, with a mutation probability of 2% per link. If a mutation occurred, a Gaussian random value with a mean of 0 and a variance of 0.01 was added to the link weight. It should be noted that biased operations aimed at selection favoring sparse networks were never performed. The initial networks comprised matrices A and B whose elements were set to uniform random values within [-0.3, 0.3]. In each simulation, we recorded the evolving networks for 30 000 generations. The fitness was individually derived based on Eq. (16) at each generation. First, the squared errors in the nominal system H were derived from the asymptotic values of the actual outputs x. They were calculated for every goal by using the Euler method with Δt = 0.1, with Eq. (1) and Eqs. (2)-(4). The numerical integration was continued for 10 s or until the maximum element in |\dot{x}| fell below 0.001. The initial state of x was set to the outputs p plus uniform random values within [-0.1, 0.1]. This initial condition demonstrates convergence to an attractor and robustness to initial perturbations [9]. When the outputs fluctuated away from the required outputs, the network was prevented from passing to the next generation. Moreover, the covariance matrix of the actual outputs V was derived by using the Jacobian matrices, which were obtained by substituting the asymptotic values of x into Eqs. (8)-(13). The covariance matrices were numerically calculated by using Eq. (15) [26]. This sequence was performed for each goal.
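A compact version of the evolutionary loop described above might look like the following. It is a hedged sketch: the fitness evaluation is abstracted into an assumed evaluate_fitness callback, and only the selection and mutation settings stated in the text (population 100, top 10% elite, 2% per-link mutation with Gaussian increments of variance 0.01, initial weights in [-0.3, 0.3]) are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N_O, N_I, POP, ELITE, GENERATIONS = 7, 7, 100, 10, 30000

def random_network():
    A = rng.uniform(-0.3, 0.3, (N_O, N_O))
    B = rng.uniform(-0.3, 0.3, (N_O, N_I))
    return A, B

def mutate(W, p_mut=0.02, std=0.1):
    # Each link mutates with probability 2%; variance 0.01 -> standard deviation 0.1.
    mask = rng.random(W.shape) < p_mut
    return W + mask * rng.normal(0.0, std, W.shape)

def evolve(evaluate_fitness):
    population = [random_network() for _ in range(POP)]
    for _ in range(GENERATIONS):
        scores = [evaluate_fitness(A, B) for A, B in population]
        order = np.argsort(scores)[::-1]                     # higher fitness is better
        elites = [population[i] for i in order[:ELITE]]
        children = [(mutate(A), mutate(B))
                    for A, B in (elites[i % ELITE] for i in range(POP - ELITE))]
        population = elites + children                       # elite preservation, no crossover
    return population[0]
```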

E. Goal setting

The goal was set based on the scenario of MVG, which is defined as sharing subproblems among the several sets of goals. In recent studies on the evolution of modularity, the condition of MVG was mainly applied to logic circuit models with functional components such as AND, OR, and XOR [7,8,20] and to linear mapping models with inputs and outputs [6]. We followed the definition in the linear mapping model because our network model comprises a continuous dynamical system. Here we attempt to implement MVG for goal setting. The set of goals comprises pairs of P and U whose columns correspond to the pairs of required outputs and inputs. The task of the networks is to determine the mapping function from inputs to required outputs. We describe this function as a linear mapping with matrix M (N_O × N_I) satisfying

\mathbf{P} = \mathbf{M}\mathbf{U}.   (17)

For instance, in the case of the linear model, M may correspond to (I - A)^{-1}B if the network fits the required goals. According to the definition given elsewhere [6], the set of goals is composed of MVG if there exists a block diagonal matrix M wherein permutations of the columns are allowed. In the present study, an ill-posed problem is given to the network, where there exists an infinite number of M satisfying Eq. (17). For goal setting, we set P and U first and M is determined subsequently. As a benchmark in the overall evolutionary simulations, we investigated the system in the case of N_O = 7 outputs, N_I = 7 inputs, and N_G = 5 goals, as illustrated in Fig. 5.



FIG. 5. Goal setting under MVG, showing the pairs of outputs P (left) and inputs U (right) in the present model. White squares indicate the on state; gray squares indicate the off state in required outputs and inputs. The goals involve three subproblems, which are determined according to the set of invariance states with regard to variation of the number of goals.

Intuitively, let us determine the common subproblems in the goals by scanning the structural coincidences in the on and off lists of the rows of matrices P and U. We find that three subproblems exist: finding the mapping from inputs {1,2} to outputs {1,2,3}, finding the mapping from inputs {3,4,5} to outputs {4,5}, and finding the mapping from inputs {6,7} to outputs {6,7}. Under these goals, M is defined by

\mathbf{M} = \mathbf{P}\mathbf{U}^{+},   (18)

where U^+ is the pseudoinverse matrix of U. To determine M, we employed the Moore-Penrose inverse matrix based on minimizing the norm (Tr[MM^T])^{1/2} [6]. In this manner, although the on and off values vary according to the given network function model, the same block diagonal matrix resulted in each case:

\mathbf{M} = \begin{bmatrix} \mathbf{M}_{11} & & \\ & \mathbf{M}_{22} & \\ & & \mathbf{M}_{33} \end{bmatrix}.   (19)

Here M_{ii} is the mapping matrix corresponding to subproblem i:

\mathbf{M}_{11} = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix},   (20)

\mathbf{M}_{22} = \begin{bmatrix} 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 \end{bmatrix},   (21)

\mathbf{M}_{33} = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix}.   (22)

The main characteristic of the goals is that the matrix M is expressed as a one-to-one mapping function.
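The mapping matrix of Eq. (18) is a one-line computation with NumPy's Moore-Penrose pseudoinverse. In the sketch below the goal matrices P and U are placeholders (the actual on/off patterns come from Fig. 5), and the block sizes follow Eqs. (19)-(22) as reconstructed above; only the computation itself is meant literally.

```python
import numpy as np
from scipy.linalg import block_diag

def goal_mapping(P, U):
    """Eq. (18): minimum-norm M satisfying P = M U (Moore-Penrose pseudoinverse)."""
    return P @ np.linalg.pinv(U)

# Eqs. (19)-(22): the benchmark's block-diagonal mapping matrix (7 x 7).
M11 = np.full((3, 2), 1 / 2)
M22 = np.full((2, 3), 1 / 3)
M33 = np.full((2, 2), 1 / 2)
M = block_diag(M11, M22, M33)
print(M.shape)   # (7, 7)
```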

III. METRICS OF INTEREST

A. Modularity

To measure the degree of modular structure in an evolving network, we used the modularity index developed by Newman and Girvan [2]. However, implementation of the method in the present case required some adjustments because the network consists of asymmetric matrices A and B with continuous-valued elements. We therefore used the following (N_O + N_I) × (N_O + N_I) interaction matrix [6] to measure modularity:

\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{0}_{N_I \times N_O} & \mathbf{0}_{N_I \times N_I} \end{bmatrix}.   (23)

This interaction matrix properly describes the linkages among the nodes. If a given element was found to be negative, it was replaced by its absolute value. The modularity Q is defined as

Q = \sum_{m=1}^{M} \left[ \frac{w_m}{W} - \frac{d_m^{\mathrm{in}} d_m^{\mathrm{out}}}{W^2} \right],   (24)

where m identifies the module, w_m is the sum of link weights within module m, W is the sum of link weights of the interaction matrix, and d_m^in and d_m^out are the sums of the link weights of the inputs to and outputs from module m, respectively. We used a searching algorithm [27] to identify the network partitioning that maximizes the modularity index.
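A direct implementation of Eq. (24) for a given partition of the interaction matrix is shown below. The partition search itself (the algorithm of Ref. [27]) is not reproduced; module assignments are assumed to be supplied, and the row/column convention (rows collect incoming links) is an assumption of this sketch.

```python
import numpy as np

def interaction_matrix(A, B):
    """Eq. (23): stack A and B over zero blocks."""
    N_O, N_I = B.shape
    top = np.hstack([A, B])
    bottom = np.zeros((N_I, N_O + N_I))
    return np.vstack([top, bottom])

def modularity(W_int, modules):
    """Eq. (24) for a weighted directed interaction matrix.
    W_int[i, j] is the weight of the link from node j to node i;
    `modules` is a list of lists of node indices."""
    W_abs = np.abs(W_int)
    W = W_abs.sum()
    Q = 0.0
    for nodes in modules:
        idx = np.asarray(nodes)
        w_m = W_abs[np.ix_(idx, idx)].sum()      # weight inside module m
        d_in = W_abs[idx, :].sum()               # weight of links entering module m
        d_out = W_abs[:, idx].sum()              # weight of links leaving module m
        Q += w_m / W - d_in * d_out / W**2
    return Q
```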

B. Robustness to noise

The degree of robustness to noise is another important measure in the present study. As seen above, noise induces fluctuations in the output states. Here we define robustness to noise through the degree of noise-induced variance in the outputs of the network. Hence, we use the second term of Eq. (16) as the measure of robustness to noise R:

R = -\sum_{g=1}^{N_G} \mathrm{Tr}[\mathbf{V}_g].   (25)

By definition, R cannot exceed zero because the variances are never negative. To be exact, R has an upper limit less than zero because of the trade-off among low sensitivities to the multiple sources of noise, as we will discuss in the next section.

C. Fairness of trade-off among noise sensitivities

We introduce a means for examining the mathematical structure of the relationship between robustness to noise and network evolution. The analysis particularly focuses on the evolutionary forces introduced by increasing robustness to noise. The fitness definition given in Sec. II C comprises the sum of the errors in the nominal system and the variances of the outputs. In the evolutionary process, the former effect dominates because the variances of the outputs are proportional to the squared (small) noise strengths. Here it is assumed that the errors in the nominal system are approximately zero, such that the first term of Eq. (16) is eliminated. Subsequently, we can extract the consequent evolutionary constraint reflecting the effect of increasing robustness to noise by mathematical programming, using the following Lagrange function:

L = R + \mathrm{Tr}[\mathbf{K}\boldsymbol{\Phi}],   (26)

where R is given in Eq. (25) and K (assumed to be invertible) is a symmetric matrix containing the Lagrange multipliers. By setting the gradients to zero and noting that V and K are symmetric, the partial


derivatives of L can be expressed as

\frac{\partial L}{\partial \mathbf{J}_a} = 2\mathbf{K}\mathbf{V} + 2\sigma_a^2 \mathbf{K}\mathbf{J}_a = \mathbf{0},   (27)

\frac{\partial L}{\partial \mathbf{J}_b} = 2\sigma_b^2 \mathbf{K}\mathbf{J}_b = \mathbf{0},   (28)

\frac{\partial L}{\partial \mathbf{V}} = \mathbf{I} + 2\mathbf{K}(\mathbf{J}_a - \mathbf{I}) = \mathbf{0}.   (29)

From Eq. (29), we derive the symmetric matrix K = -(J_a - I)^{-1}/2; hence, J_a also becomes symmetric. By multiplying Eqs. (27) and (28) on the right-hand side by (J_a - I)^T and J_b^T, we obtain

\mathbf{K}\mathbf{V}(\mathbf{J}_a - \mathbf{I})^T + \sigma_a^2 \mathbf{K}\mathbf{J}_a(\mathbf{J}_a - \mathbf{I})^T = \mathbf{0},   (30)

\sigma_b^2 \mathbf{K}\mathbf{J}_b\mathbf{J}_b^T = \mathbf{0}.   (31)

By adding these expressions and eliminating K, we get

\mathbf{V}(\mathbf{J}_a - \mathbf{I})^T + \sigma_a^2 \mathbf{J}_a(\mathbf{J}_a - \mathbf{I})^T + \sigma_b^2 \mathbf{J}_b\mathbf{J}_b^T = \mathbf{0}.   (32)

Substituting this relation into the original Lyapunov equation given in Eq. (15), we obtain

\tilde{\boldsymbol{\Phi}} := \sigma_a^2 (\mathbf{J}_a - \mathbf{I})(\mathbf{J}_a - \mathbf{I})^T + \sigma_b^2 \mathbf{J}_b\mathbf{J}_b^T - (\sigma_a^2 + \sigma_c^2)\mathbf{I} = \mathbf{0}.   (33)

This condition indicates that there is a complementary relationship between the magnitudes of the system gains (J_a - I)(J_a - I)^T and J_b J_b^T. This analysis indicates that the increase of robustness to noise ensures that \tilde{\Phi} remains close to 0. Thus, the variances at the output induced by noise are decreased by well-balanced system gains, indicating that the network finds a fair trade-off among the noise sensitivities of the multiple noise sources. In this sense, we may define the fairness of the trade-off relationship among noise sensitivities as the degree to which the network decreases the effect of the multiple sources of noise in a balanced manner. To assess how the fairness of the trade-off relationship among noise sensitivities is achieved, we define the measure

C = \mathrm{Tr}[\tilde{\boldsymbol{\Phi}}\tilde{\boldsymbol{\Phi}}^T],   (34)

where C is the sum of squared values of the elements in \tilde{\Phi}. When \tilde{\Phi} is close to zero, C is also close to zero. In common with the robustness to noise, C rarely attains a value of zero because the solutions satisfying the given goals are not always involved in the equality given by Eq. (33). However, this analysis demonstrates that the network may evolve to minimize C while satisfying the given goals. Thus, in simulations, we monitor the dependence of both R and Q versus C in evolving networks.
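Given the Jacobians of a network, the balance condition of Eq. (33) and the fairness measure of Eq. (34) are straightforward to evaluate; a minimal sketch follows, with the matrix symbol as reconstructed above.

```python
import numpy as np

def fairness(J_a, J_b, sigma_a=0.1, sigma_b=0.1, sigma_c=0.1):
    """Eqs. (33)-(34): residual of the balanced-gain condition and C = Tr[Phi Phi^T]."""
    n = J_a.shape[0]
    Phi = (sigma_a**2) * (J_a - np.eye(n)) @ (J_a - np.eye(n)).T \
        + (sigma_b**2) * J_b @ J_b.T \
        - (sigma_a**2 + sigma_c**2) * np.eye(n)
    C = np.trace(Phi @ Phi.T)
    return Phi, C
```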

IV. RESULTS IN LINEAR SYSTEMS

A. One-dimensional linear system

For a better understanding of the evolutionary dynamics, we begin with a simple one-dimensional linear model with the goals P = p = 1 and U = u = 1. The network matrices are expressed as A = a and B = b; thus, the Jacobian matrices are given as J_a = a and J_b = b. The covariance of the output is also the scalar V = v. Although this case is too simple to assess modularity, it allows visualization of the evolutionary dynamics in the space (a, b). Prior to computational simulation, the evolved network was mathematically estimated with approximate solutions. For the nominal system, the asymptotic value of the actual output was derived by setting \dot{x} = 0 in Eq. (1). If a network fits the goal (u, p), the link weights satisfy bu = (1 - a)p. Because we set p = 1 and u = 1, the link weights will evolve to lie close to the line

b = -a + 1.   (35)

This force is exerted by the first term in the fitness expression given by Eq. (16). Another restriction on the evolving network is induced by the effect of robustness to noise, which is expressed in the second term of the fitness. This effect leads the evolving network to the form given in Eq. (33). Hence, the link weights evolve to lie close to the ellipse

\sigma_a^2 (a - 1)^2 + \sigma_b^2 b^2 - (\sigma_a^2 + \sigma_c^2) = 0.   (36)

Thus, the approximate solution a^* and b^* of the evolved network is derived from the intersection of the line given by Eq. (35) and the ellipse given by Eq. (36):

a^* = -(\sigma_a^2 + \sigma_c^2)^{1/2} (\sigma_a^2 + \sigma_b^2)^{-1/2} + 1,   (37)

b^* = (\sigma_a^2 + \sigma_c^2)^{1/2} (\sigma_a^2 + \sigma_b^2)^{-1/2}.   (38)

Here the solution satisfying the stability condition -1 + a < 0 is selected.
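The approximate solution of Eqs. (37) and (38) is easy to check numerically; the sketch below also verifies that (a*, b*) lies on both the goal line of Eq. (35) and the ellipse of Eq. (36), with the noise strengths set to the paper's value of 0.1.

```python
import numpy as np

sigma_a = sigma_b = sigma_c = 0.1

# Eqs. (37)-(38): intersection of the goal line and the robustness ellipse.
scale = np.sqrt((sigma_a**2 + sigma_c**2) / (sigma_a**2 + sigma_b**2))
a_star = 1.0 - scale
b_star = scale

line_residual = b_star - (-a_star + 1.0)                       # Eq. (35)
ellipse_residual = (sigma_a**2 * (a_star - 1.0)**2
                    + sigma_b**2 * b_star**2
                    - (sigma_a**2 + sigma_c**2))               # Eq. (36)
print(a_star, b_star, line_residual, ellipse_residual)         # residuals ~ 0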

Now we move on to evaluations using evolutionary simulation. To investigate the evolution of the outputs when strengthening robustness to noise, we compare how the evolutionary dynamics change with and without the presence of noise. In the absence of noise, the noise strengths are set to σ_a = σ_b = σ_c = 0. To evaluate convergence of the evolutionary dynamics, we choose two initial states of the network, set to (a, b) = (-1, 1) and (a, b) = (0.5, -0.5). Computer simulations confirmed that the evolutionary pathways are well characterized by the line given by Eq. (35) and the ellipse given by Eq. (36), as shown in Fig. 6(a). After 20-30 generations, the link weights converged to the line given by Eq. (35). This result indicates that the networks rapidly evolved to satisfy the goal. The system behavior after convergence to the line differed markedly depending on the presence or absence of noise. Without noise, the evolving network remained on the line; with noise, the network evolved to a^* and b^* over 10 000 generations. To illustrate how increases in the robustness to noise were reflected in the evolution of the actual output, we obtained the density of the output for the network with the highest fitness of the noise-interfused model at 20, 30, 3000, and 10 000 generations, as shown in Fig. 6(b). At each generation, the output was evaluated visually by using the density distribution function, with the actual output of the nominal system E[x] = b/(1 - a)u and variance v. The density distribution was visualized by a function proportional to exp{-(x - E[x])^2/2v}. At 20 generations, the network had reached the halfway point in its evolution toward the goal. From 30 to 10 000 generations, the density distributions became increasingly sharp and robustness


to noise increased while satisfying the goal. In particular, the output variance v decreased gradually to 1.73 × 10^{-2}, 1.06 × 10^{-2}, and 0.99 × 10^{-2} at 30, 3000, and 10 000 generations, respectively. These results highlight that the increase of robustness to noise sharpens the outputs and leads to a fair trade-off among noise sensitivities. This directed evolution selects specific link weights from among an infinite number of solutions satisfying the goal.

FIG. 6. (Color online) Evolutionary dynamics in a one-dimensional linear system. (a) Evolutionary pathway of the network with the highest fitness. Blue (dark gray) lines represent the pathway under evolution with noise; gray (light gray) lines represent the pathway without noise. Open squares and circles indicate the states at the initial generation and after 10 000 generations, respectively. The straight line b = -a + 1 represents the solution set satisfying p = -b/(a - 1)u. The ellipse is determined by σ_a^2(a - 1)^2 + σ_b^2 b^2 - (σ_a^2 + σ_c^2) = 0, which is the condition for maximizing robustness to noise. (b) Stochastic density of the output x at 20, 30, 3000, and 10 000 generations, as visualized by a Gaussian distribution.

B. High-dimensional linear system

Next we investigate the evolution of networks in higher-dimensional linear systems under the goals defined in Sec. II E. The linear model will support an understanding of the mathematical structure of the evolutionary dynamics but will sacrifice biological plausibility. By analogy with the one-dimensional case, the evolved network can be estimated through approximation. In this case, the goals were set to P and U and the Jacobian matrices are expressed as J_a = A and J_b = B. By setting \dot{x} = 0 in Eq. (1), we have E[x] = (I - A)^{-1}BU = P = MU. As a necessary condition, B = (I - A)M must be satisfied by eliminating U, which yields

(\mathbf{I} - \mathbf{A})\mathbf{M} - \mathbf{B} = \mathbf{0}.   (39)

This simultaneous algebraic equation corresponds to the equality of Eq. (35) in the one-dimensional model, and it ensures that the expected value of the outputs corresponds to that of the required outputs. Under these conditions, we obtain

\mathbf{B}\mathbf{B}^T = (\mathbf{I} - \mathbf{A})\mathbf{M}\mathbf{M}^T(\mathbf{I} - \mathbf{A})^T.   (40)

Moreover, we consider the condition for increasing robustness in a higher-dimensional system, which corresponds to the equality of Eq. (36). By substituting A and B into the Jacobian matrices given in Eq. (33), we obtain

\tilde{\boldsymbol{\Phi}} = \sigma_a^2 (\mathbf{I} - \mathbf{A})(\mathbf{I} - \mathbf{A})^T + \sigma_b^2 \mathbf{B}\mathbf{B}^T - (\sigma_a^2 + \sigma_c^2)\mathbf{I} = \mathbf{0}.   (41)

By substituting Eq. (40) into Eq. (41) and noting that A satisfies the requirement that -I + A be negative definite and symmetric, we obtain the evolved A^* and B^* as follows:

\mathbf{A}^* = -(\sigma_a^2 + \sigma_c^2)^{1/2} \{\sigma_a^2 \mathbf{I} + \sigma_b^2 \mathbf{M}\mathbf{M}^T\}^{-1/2} + \mathbf{I},   (42)

\mathbf{B}^* = (\sigma_a^2 + \sigma_c^2)^{1/2} \{\sigma_a^2 \mathbf{I} + \sigma_b^2 \mathbf{M}\mathbf{M}^T\}^{-1/2} \mathbf{M}.   (43)

Here the matrix in curly brackets is positive definite. By substituting this into Eq. (32), the covariance is derived as V^* = -σ_a^2 A^* - σ_b^2 B^* B^{*T}(A^* - I)^{-1}. Now, if the definition of MVG is that M comprises a block diagonal matrix, then MM^T also comprises a block diagonal matrix. In this case, A^* and B^* become block diagonal matrices with the same configurations as MM^T and M, respectively. This analysis shows that the networks evolve to ensure that A and B become block diagonal matrices, i.e., become modular networks. If sensor noise is absent from the system, then whether the evolved networks have a modular structure is not guaranteed. The same holds for the case where either noise strength σ_a or σ_b is equal to zero. If σ_a = 0, Eq. (41) reduces to σ_b^2 BB^T - σ_c^2 I = 0 and A^* becomes indefinite; i.e., there exists an infinite number of A^* satisfying σ_b^2 I = σ_c^2(I - A^*)MM^T(I - A^*)^T, according to Eq. (40). If σ_b = 0, A^* becomes negligible relative to the modular M and it becomes proportional to the identity matrix, as shown in Eq. (42). Obviously, if both types of sensor noise disappear, there exists an infinite number of solutions satisfying only the goals given in Eq. (39). If the source noise is equal to zero, the modular structure can arise if and only if sensor noise exists. However, as shown in Eq. (41), the strength of the source noise has a significant role, and σ_c can regulate the allowable range of the magnitudes of the matrices A^* and B^*. Notably, these analyses do not show that robustness to noise leads to maximizing modularity; they merely reveal modular structure in the evolved networks.
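Under the reconstruction of Eqs. (42) and (43), the analytical solution can be computed directly for any block-diagonal M; the sketch below builds the matrix inverse square root by eigendecomposition, so that the block-diagonality of A* and B* can be inspected numerically. Function and variable names are illustrative, not the paper's.

```python
import numpy as np

def analytic_network(M, sigma_a=0.1, sigma_b=0.1, sigma_c=0.1):
    """Eqs. (42)-(43): evolved A*, B* for a linear network with mapping matrix M."""
    n = M.shape[0]
    S = sigma_a**2 * np.eye(n) + sigma_b**2 * M @ M.T   # symmetric positive definite
    w, Q = np.linalg.eigh(S)
    S_inv_sqrt = Q @ np.diag(w**-0.5) @ Q.T             # S^{-1/2}
    scale = np.sqrt(sigma_a**2 + sigma_c**2)
    A_star = -scale * S_inv_sqrt + np.eye(n)
    B_star = scale * S_inv_sqrt @ M
    return A_star, B_star
```

Feeding in the block-diagonal M of Eqs. (19)-(22) reproduces block-diagonal A* and B*, consistent with the argument in the text.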


To clarify the details, we explore the general solution of evolved modular networks with respect to the condition \tilde{\Phi} = 0 given by Eq. (41). Let A' and B' be the set satisfying the goals, (I - A')M - B' = 0, derived from Eq. (39). From Eq. (41) we obtain

(\mathbf{I} - \mathbf{A}')\{\sigma_a^2 \mathbf{I} + \sigma_b^2 \mathbf{M}\mathbf{M}^T\}(\mathbf{I} - \mathbf{A}')^T = (\sigma_a^2 + \sigma_c^2)\mathbf{I} + \tilde{\boldsymbol{\Phi}}.   (44)

Because both sides of this expression are positive definite and symmetric and I - A' is positive definite, A' satisfies the following equation:

(\mathbf{I} - \mathbf{A}')\{\sigma_a^2 \mathbf{I} + \sigma_b^2 \mathbf{M}\mathbf{M}^T\}^{1/2} = (\sigma_a^2 + \sigma_c^2)^{1/2}\left[\mathbf{I} + \frac{\tilde{\boldsymbol{\Phi}}}{\sigma_a^2 + \sigma_c^2}\right]^{1/2}.   (45)

Hence, the general solution is obtained by using A^*, B^*, and \tilde{\Phi} as

\mathbf{A}' = \left[\mathbf{I} + \frac{\tilde{\boldsymbol{\Phi}}}{\sigma_a^2 + \sigma_c^2}\right]^{1/2}(\mathbf{A}^* - \mathbf{I}) + \mathbf{I},   (46)

\mathbf{B}' = \left[\mathbf{I} + \frac{\tilde{\boldsymbol{\Phi}}}{\sigma_a^2 + \sigma_c^2}\right]^{1/2}\mathbf{B}^*.   (47)

When \tilde{\Phi} = 0 it is confirmed that A' and B' correspond to A^* and B^*, respectively. Now that A^* and B^* have modular

Output node No.

Input node No. 1 2 3 4 5 6 7

1 2 3 4 5 6 7

Without noises

With noises

(b)

Linear model Output node No. 1 2 3 4 5 6 7

(d)

With noises

Without noises

structure, M, A', and B' have modular structure if \tilde{\Phi} has the same modular structure. Furthermore, depending on \tilde{\Phi}, there exist modular solutions A' and B' that represent a higher modularity index than that of A^* and B^*. Thus, the increase of robustness to noise chooses one network from within an infinite number of modular solutions for harnessing the modular structure of the goals. In that sense, the modular structure in evolved networks can be regarded as a by-product of increasing robustness to noise. The relation between robustness to noise and the evolution of modularity is examined in the following evolutionary computer simulations.

We tested the evolution of the network in the linear model by using computer simulations. We followed evolving networks with noise and compared them to networks without noise. The noise-interfused networks evolved higher modular structures compared with the model without noise, as shown in Fig. 7(a). The evolved network was decomposed into three modules, comprising {output/input} nodes {1,2,3/1,2}, {4,5/3,4,5}, and {6,7/6,7}. These modules reflected the configuration of M, corresponding to the mathematical analysis. The absolute error between the analytical solutions and the simulation results was 0.019 per link on average. Moreover, we performed the simulations for ten trials, as shown in Fig. 7(d). It was observed


FIG. 7. (Color online) Results from simulations using different network function models. The evolved networks and evolving metrics were evaluated in (a) and (d) for the linear model, (b) and (e) for the hyperbolic tangent model, and (c) and (f) for the Hill model. (a)-(c) Evolved network with the highest fitness, wherein the lattices represent the link weights (with matrices A on the left-hand side and B on the right-hand side of each panel). Networks evolved with robustness to noise (the upper panes) comprise higher modular structures than those evolved without noise (the lower panes) for each network function model. (d)-(f) Box plots of the final metrics evaluated for ten trials. Boxes in blue represent metric distributions evolved with noise and those in gray represent metric distributions evolved without noise. On each box plot, thick lines represent the median, the box length represents the 25th to 75th data percentile, the whiskers indicate one and a half times the interquartile range, and the plus symbols represent outliers.


FIG. 8. (Color online) Evolutionary pathway results from simulations using different network function models. The blue (dark grey) lines represent evolving metrics in the case with noise and the gray (light grey) lines represent those in the case without noise. Squares and circles indicate the initial state and the state at 30 000 generations. Evolutionary pathways were recorded for (a) the linear model, (b) the hyperbolic tangent model, and (c) the Hill model. The upper and lower figures illustrate the evolution of robustness to noise R and modularity Q versus the fairness of trade-off between noise sensitivities C, respectively.

that, if noise exists, the evolved networks have higher robustness to noise (R = -0.237 [-0.238, -0.237]) and higher modularity (Q = 0.538 [0.532, 0.542]). Subsequently, it was also confirmed that the fairness of the trade-off among noise sensitivities evolved toward 0 (C = 8.829 [7.014, 11.46] × 10^{-5}). Without noise, the same metrics varied widely and the network structures did not settle in each trial. To detail the evolutionary dynamics, we investigated the evolutionary pathways of both the robustness to noise R and the modularity Q versus the fairness of trade-off among noise sensitivities C by recording those metrics associated with the highest-fitness networks, as shown in Fig. 8(a). Until about 1500 generations, the evolutionary pathways fluctuated. The fluctuations are considered to be due to the exploratory behavior of goal orientation, since the forces for fitting goals are predominant during the earlier phase of evolution. From around 1500 generations, both R and Q tended to vary negatively with C. Throughout all generations, the correlation coefficients were -0.917 (R versus C) and -0.987 (Q versus C). For R versus Q, the correlation coefficient was 0.887. These results highlight the positive correlation between robustness to noise and modularity via achieving a fair trade-off among noise sensitivities.

V. RESULTS IN NONLINEAR SYSTEMS

Finally, we tested the evolution of networks in a higher-dimensional nonlinear model by computer simulations. The same settings as in the linear model were used for the simulations here, except for the variation of the network functions. We also compared the evolving networks between the two cases with and without noise. Although mathematical analysis is difficult, the nonlinear model allows evaluation of robustness to noise in the evolutionary dynamics of modular networks, which gives a better perspective for biological systems. The nonlinear models involve the hyperbolic tangent function given by Eq. (3) and the Hill function given by Eq. (4), as defined in Sec. II A. For both the hyperbolic tangent [Fig. 7(b)] and Hill models [Fig. 7(c)], the evolved networks represented a modular structure for the noise-interfused models. The matrices A and B evolved to block diagonal forms similar to those of the linear model. We also performed the evolutionary simulations for ten trials with the hyperbolic tangent [Fig. 7(e)] and Hill [Fig. 7(f)] models. In the noise-interfused model, the evolved networks exhibited higher robustness to noise (for the hyperbolic tangent model, R = -0.163 [-0.164, -0.163]; for the Hill model, R = -0.161 [-0.161, -0.161]); the modularity for the


hyperbolic tangent model was Q = 0.613 [0.612, 0.614] and that for the Hill model was Q = 0.638 [0.635, 0.641]. Subsequently, the fairness of the trade-off among noise sensitivities was suppressed to low levels (for the hyperbolic tangent model C = 5.47 [5.39, 5.63] × 10^{-4} and for the Hill model C = 1.58 [1.58, 1.58] × 10^{-3}). Compared with the linear model [Fig. 7(d)], both nonlinear models [Figs. 7(e) and 7(f)] resulted in higher modularity in the evolved networks. This tendency depended on the evolved matrices displaying higher contrast in the nonlinear models, as visualized in Figs. 7(a)-7(c). This is because the elements of the Jacobian matrices in the nonlinear models are allowed to have larger values. The nonlinear functions possess output that is bounded within the ranges (-1, 1) for the hyperbolic tangent model and [0, 1) for the Hill model and is slightly saturated around those borders. The saturation acts as a suppression of changes in the outputs. This effect holds not only for the actual output of the nominal system but also for fluctuations induced by noise in the output nodes. Therefore, the output fluctuations were buffered by the nonlinearity, permitting larger gains than those allowed in the linear case. Moreover, for the nonlinear models, we tracked the evolutionary pathways of R and Q versus C, as shown in Figs. 8(b) and 8(c). It was observed that the qualitative features of the evolutionary pathways were similar to those of the linear case. The evolutionary pathways fluctuated in the initial phase, until about 300 generations for the hyperbolic tangent model and until about 1700 generations for the Hill model. These fluctuations are considered to be exploratory behaviors for goal orientation, as in the linear model. After the explorations, both robustness to noise and modularity increased in conjunction with achieving a fair trade-off among noise sensitivities. Throughout all generations, the correlation coefficients were -0.890 (R versus C) and -0.987 (Q versus C) for the hyperbolic tangent model and -0.947 (R versus C) and -0.932 (Q versus C) for the Hill model. For R versus Q, the correlation coefficient was 0.891 for the hyperbolic tangent model and 0.848 for the Hill model. These results demonstrate that the characteristic evolutionary dynamics presented by the linear model are robust to variations of nonlinearity in the network functions. Furthermore, they reveal that a positive correlation is present between robustness to noise and modularity in biologically inspired models.

VI. DISCUSSION AND SUMMARY

The focus of our research was to investigate the relationship between robustness to noise and the evolution of modular networks. To accomplish this, we presented models for network evolution with noise-interfused linear and nonlinear dynamical systems. The presented models showed that robustness to noise can be the evolutionary constraint that leads to evolution of modular networks. In addition, computer simulation demonstrated that these features are robust to variations of biologically inspired network functions. The

results showed that there is a positive correlation between robustness to noise and network modularity.

There has been renewed interest in studying the advantages of systems involving noise [28]. One particular class of such systems exhibits stochastic state switching [28-30], where noise plays an essential role in state transitions. As an alternative side effect of noise, our model suggests that low sensitivities to noise contribute to network evolution, where the noise indirectly plays the role of a driving force for the evolution of modular structure.

In the present model, the importance of fitness fluctuations for the evolution of modular networks has been emphasized. In past research, it was observed that modular networks evolve under modularly changing environments. That is, fitness fluctuations arising from outside the system were one of the key factors that led to the emergence of a modular network. In addition, we focused on network evolution under fitness fluctuations caused by noise internal to the system. In terms of genetics, our model suggests that the shape of the phenotypic variance produced by noise can possibly act on the geometry of gene regulatory networks.

In our model, the modular network evolved under selection for the functional trait of robustness to noise. In some recent work, attempts have been made to identify the evolutionary constraints under which modular networks arise. The prominent hypothesis was that the modular structure stands as a by-product of the burden of maintaining network linkages [6,8]. In those scenarios, the evolutionary constraint was explicitly inserted into the fitness in the form of a subtracted link cost, and it was therefore based on selection acting directly on the trait concerned. In contrast, we featured an alternative evolutionary constraint based on the evolutionary forces exerted by noise. Under these conditions, the evolutionary constraint implicitly engages network evolution. Despite this, from the viewpoint of placing evolutionary restrictions on networking, our evolutionary scenario is similar to those involving link cost reduction. However, the remarkable feature of our model is that the fitness is attributable only to output behaviors involving the regulation of the nominal signal and its sharpness. From this aspect, our model emphasizes the possibility of modular network evolution under selection for a trait concerned with the management of noise in the network dynamics.

In summary, we presented a model for the evolution of modular networks under increasing robustness to noise that is either internally generated in the network or externally interfused into the network. Although the proposed model is simple, it enables us to gain a better understanding of how robustness contributes to evolving networks. Our model will aid the understanding of the positive role played by system noise in network evolution.

ACKNOWLEDGMENT

This work was supported by JSPS Grant-in-Aid for Young Scientists (B) No. 25870260.


[1] G. P. Wagner, M. Pavlicev, and J. M. Cheverud, Nat. Rev. Genet. 8, 921 (2007).
[2] M. Girvan and M. E. J. Newman, Proc. Natl. Acad. Sci. USA 99, 7821 (2002).
[3] J. Ihmels, G. Friedlander, S. Bergmann, O. Sarig, Y. Ziv, and N. Barkai, Nat. Genet. 31, 370 (2002).
[4] J.-D. J. Han, N. Bertin, T. Hao, D. S. Goldberg, G. F. Berriz, L. V. Zhang, D. Dupuy, A. J. M. Walhout, M. E. Cusick, F. P. Roth, and M. Vidal, Nature (London) 430, 88 (2004).
[5] T. Takaguchi, M. Nakamura, N. Sato, K. Yano, and N. Masuda, Phys. Rev. X 1, 011008 (2011).
[6] N. Kashtan, A. E. Mayo, T. Kalisky, and U. Alon, PLoS Comput. Biol. 5, e1000355 (2009).
[7] N. Kashtan and U. Alon, Proc. Natl. Acad. Sci. USA 102, 13773 (2005).
[8] J. Clune, J.-B. Mouret, and H. Lipson, Proc. R. Soc. London Ser. B 280, 20122863 (2013).
[9] C. Espinosa-Soto and A. Wagner, PLoS Comput. Biol. 6, e1000719 (2010).
[10] R. D. Leclerc, Mol. Syst. Biol. 4, 213 (2008).
[11] T. H. Oakley, B. Ostman, and A. C. V. Wilson, Proc. Natl. Acad. Sci. USA 103, 11637 (2006).
[12] H. Kitano, Nat. Rev. Genet. 5, 826 (2004).
[13] K. Kaneko, PLoS ONE 2, e434 (2007).
[14] I. Lestas, G. Vinnicombe, and J. Paulsson, Nature (London) 467, 174 (2010).
[15] W. C. Stacey and D. M. Durand, J. Neurophysiol. 86, 1104 (2001).
[16] E. Levine and T. Hwa, Proc. Natl. Acad. Sci. USA 104, 9221 (2007).
[17] F. J. Bruggeman, N. Blüthgen, and H. V. Westerhoff, PLoS Comput. Biol. 5, e1000506 (2009).
[18] R. K. Pan and S. Sinha, Phys. Rev. E 76, 045103(R) (2007).
[19] E. A. Variano, J. H. McCoy, and H. Lipson, Phys. Rev. Lett. 92, 188701 (2004).
[20] B. A. Høverstad, Artif. Life 17(1), 33 (2011).
[21] M. B. Elowitz, A. J. Levine, E. D. Siggia, and P. S. Swain, Science 297, 1183 (2002).
[22] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis (Wiley-Interscience, New York, 2001).
[23] T. P. Trappenberg, Fundamentals of Computational Neuroscience (Oxford University Press, Oxford, 2010).
[24] U. Alon, An Introduction to Systems Biology: Design Principles of Biological Circuits (CRC, Boca Raton, 2006).
[25] N. G. van Kampen, Stochastic Processes in Physics and Chemistry (Elsevier, Amsterdam, 1992).
[26] G. Golub, S. Nash, and C. Van Loan, IEEE Trans. Autom. Control 24, 909 (1979).
[27] M. E. J. Newman and M. Girvan, Phys. Rev. E 69, 026113 (2004).
[28] A. Eldar and M. B. Elowitz, Nature (London) 467, 167 (2010).
[29] A. Kashiwagi, I. Urabe, K. Kaneko, and T. Yomo, PLoS ONE 1, e49 (2006).
[30] C. Davidson and M. Surette, Annu. Rev. Genet. 42, 253 (2008).
