

A Nonhomogeneous Cuckoo Search Algorithm Based on Quantum Mechanism for Real Parameter Optimization

Ngaam J. Cheung, Xue-Ming Ding, and Hong-Bin Shen

Abstract—The cuckoo search (CS) algorithm is a nature-inspired search algorithm in which all the individuals have identical search behaviors. However, this simple homogeneous search behavior is not always optimal for finding the potential solution to a particular problem, and it may trap the individuals in local regions, leading to premature convergence. To overcome this drawback, this paper presents a new variant of the CS algorithm with nonhomogeneous search strategies based on a quantum mechanism to enhance the search ability of the classical CS algorithm. The featured contributions of this paper are: 1) a quantum-based strategy is developed for nonhomogeneous update laws and 2) we, for the first time, present a set of theoretical analyses of the CS algorithm as well as the proposed algorithm, and derive a set of parameter boundaries guaranteeing the convergence of both algorithms. On 24 benchmark functions, we compare our method with five existing CS-based methods and ten other state-of-the-art algorithms. The numerical results demonstrate that the proposed algorithm is significantly better than the original CS algorithm and the rest of the compared methods according to two nonparametric tests.

Index Terms—Convergence analysis, cuckoo search (CS) algorithm, nonhomogeneous cuckoo search (NoCuSa), nonhomogeneous search, quantum mechanism.

I. INTRODUCTION

In mathematics, optimization is to find a global minimum (or maximum) as efficiently as possible on arbitrary fitness landscapes of a given problem.

Manuscript received November 5, 2015; accepted January 10, 2016. This work was supported in part by the China Scholarship Council and the Hujiang Foundation of China under Grant C14002, and in part by the National Natural Science Foundation of China under Grant 61222306 and Grant 61175024. This paper was recommended by Associate Editor F. Herrera.
N. J. Cheung is with the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China, and also with the James Franck Institute, University of Chicago, Chicago, IL 60637 USA (e-mail: [email protected]).
X.-M. Ding is with the School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China (e-mail: [email protected]).
H.-B. Shen is with the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and the Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China (e-mail: [email protected]).
This paper has supplementary downloadable multimedia material available at http://ieeexplore.ieee.org provided by the authors. This includes a PDF file, which contains the proof of Theorem 2, information regarding benchmark functions, and numerical experiments. This material is 3.6 MB in size.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCYB.2016.2517140

However, in realistic problems the global gradient is weak and obscure due to numerous local "noise" sources and/or irregularities [1]. Metaheuristic optimization algorithms use two basic strategies while searching for the global optimum: 1) exploration and 2) exploitation [2]–[4]. Exploration is the ability to reach local optima, while exploitation is the ability to achieve the global optimum. For metaheuristic algorithms, it is necessary to converge to the global optimum rapidly and with high accuracy when dealing with different optimization problems.

The recently developed cuckoo search (CS) algorithm is a nature-inspired search algorithm mimicking the social behavior of cuckoos [5]. Yang and Deb [6] and Mohamad et al. [7] offered excellent reviews of recent progress on the CS algorithm. The reviews show various applications of CS-based algorithms involving engineering, pattern recognition, software testing and data generation, neural networks, job scheduling and data fusion, and wireless sensor networks. Moreover, in those fields, researchers have made numerous contributions to the improvement of the CS algorithm by embedding various mechanisms. Refer to [6] and [7] for further details.

Owing to the attractiveness and potential capability of the CS algorithm, numerous researchers in various scientific and engineering fields have pursued in-depth investigations, both in developing strategies/mechanisms and in applications to different real-world problems. Accordingly, many variants of the CS algorithm have been proposed to improve its performance, which can generally be grouped into the following three categories.
1) Parameter Selection: As an essential issue, proper control parameter settings can significantly improve the search abilities of CS-based algorithms and state-of-the-art algorithms. For instance, Walton et al. [8] developed a nonlinearly decreasing strategy for the Lévy flight step size of the population in the CS algorithm. Similarly, Ong [9] developed an adaptive strategy to select the step size, which aims to balance local and global search. Li and Yin [10] proposed a statistical learning strategy to determine the discovery probability, which reflects the probability of accepting/rejecting a potential solution in the original CS algorithm.
2) Strategies/Mechanisms Design: Another important issue is how to achieve better random walks/flights for generating new and efficient solutions. Many researchers have focused on developing various generalized functions for the walks/flights, such as chaotic Lévy flight [11], Gaussian distribution [12],



and chaos theory [13]. Additionally, numerous studies have proposed/designed different learning/updating strategies involving an orthogonal learning strategy [14], quantum operators and measurements [15], and a frog leaping mechanism [13].
3) Hybridization of CS and Other State-of-the-Art Algorithms: Combining featured strategies and/or mechanisms from two or more methods can also be efficient for enhancing the search ability of the original CS algorithm. For instance, in [16], a hybrid algorithm of particle swarm optimization (PSO) [17] and the CS algorithm was presented to share the learning mechanisms of both methods. Similarly, Chiroma et al. [18] hybridized CS and PSO to predict carbon dioxide emissions. Bhandari et al. [19] proposed a hybrid of the CS algorithm and wind-driven optimization [20] to study satellite image segmentation. Li and Yin [10] applied two mutation rules of differential evolution [21] to the CS algorithm.

Although much progress has been achieved on CS-based algorithms since 2009, significant efforts are still required to further improve their performance in the following respects.
1) Theoretical analysis supporting the convergence of CS-based algorithms.
2) Providing the sufficient and necessary conditions for the control parameter settings.
3) Employing nonhomogeneous search rules to enhance the classical CS algorithm.
To provide theoretical support, we, for the first time, present convergence analyses of the original CS algorithm and propose a variant of CS with nonhomogeneous search laws, termed nonhomogeneous cuckoo search (NoCuSa). We also theoretically analyze the performance of a single individual and derive a set of boundaries for the control parameters in NoCuSa.

The rest of this paper is organized as follows. Section II presents the standard CS algorithm. In Section III, the proposed NoCuSa algorithm is described in detail. The convergence analyses of the CS and NoCuSa algorithms are presented in Section IV, and Section V gives the numerical results and discussion. Conclusions are given in Section VI.

II. CUCKOO SEARCH ALGORITHM

CS is a metaheuristic algorithm developed by Yang and Deb [5], inspired by the aggressive reproduction of cuckoo species together with Lévy flight behavior. The female cuckoo lays her fertilized eggs in the nests of other host birds, so that the host birds unwittingly raise her brood. If a cuckoo's egg in a nest is discovered, the host bird will throw it out or abandon her nest and start her own brood elsewhere. In the CS algorithm, each egg of the host birds in a nest represents a solution, and a cuckoo egg represents a new solution. If a new solution is better than the one in the nest, the worse one is replaced. To formulate this mathematically, only one egg (solution) is assumed in each nest, and the generation of a new solution follows a Lévy flight, since Yang and Deb [5] discovered that random-walk style search is performed better by Lévy flights than by

a simple random walk. Moreover, in order to obtain the simplest model of the CS algorithm, three rules are adopted as follows [22].
1) Each cuckoo lays one egg (a potential solution) at a time and dumps it in a randomly chosen nest.
2) The better nests with high-quality eggs will be carried over to the next generation.
3) The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability p_a ∈ (0, 1).
Accordingly, in the original CS algorithm, the update of position x_i is given as

x_{i,k+1} \leftarrow x_{i,k} + r \cdot s_{i,k}    (1)

where i and k are the indexes of the individual (position) and the current iteration, respectively, and r is subject to a uniform distribution, that is, r ~ U(0, 1). The step size s_i of the ith cuckoo moving to the next nest is defined as

s_{i,k} = \mathrm{rand} \cdot \left( x_{\imath} - x_{\jmath} \right), \qquad \imath, \jmath \in [1, n]    (2)

In (2), x_{\imath} and x_{\jmath} are positions selected randomly from the whole population, \imath and \jmath are the indexes of the individuals x_{\imath} and x_{\jmath}, respectively, and rand is subject to a uniform random distribution on the interval [0, 1]. According to the Lévy flight [5], the new position x_i of an egg (solution) in (1) is generated by

x_{i,k+1} \leftarrow x_{i,k} + \alpha \oplus \mathrm{Lévy}(\beta), \qquad \mathrm{Lévy} \sim u = t^{-1-\beta}, \; 0 < \beta \le 2    (3)

where the product \oplus means entry-wise multiplication and \alpha > 0 is a parameter corresponding to the step size of a cuckoo, which is determined by the Lévy distribution. The step size generating the new nest is different from that in (2), and it is obtained by

s_k = \alpha \left( x_{i,k} - x_{j,k} \right) \oplus \mathrm{Lévy}(\beta) \sim \frac{0.01 \cdot u}{|v|^{1/\beta}} \left( x_{i,k} - x_g \right)    (4)

where \alpha is a constant. In (4), u and v are subject to the normal distributions N(0, \sigma_u^2) and N(0, \sigma_v^2), respectively, in which \sigma_v = 1 and \sigma_u is defined as

\sigma_u = \left[ \frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\,\beta\, 2^{(\beta-1)/2}} \right]^{1/\beta}    (5)

where \Gamma(\cdot) stands for the Gamma function, while \beta is used to control the value of \sigma_u. The CS algorithm employs a discovery probability p_a to replace the nests abandoned by the hosts. The corresponding update law is defined as

x_{i,k+1} = \begin{cases} x_{i,k} + r & \text{if } P > p_a \\ x_{i,k} & \text{otherwise} \end{cases}    (6)

where p_a is the discovery probability to create a new nest, while P is a random number in the interval [0, 1].
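As a concrete illustration of (3)–(5), the Lévy-distributed quantity u/|v|^{1/β} can be generated with Mantegna's algorithm, drawing u ~ N(0, σ_u²) and v ~ N(0, 1). The following Python sketch is only illustrative and is not taken from the authors' implementation; the function name and the use of NumPy are our own assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def mantegna_ratio(beta, size=None):
    """Return u / |v|**(1/beta) with u ~ N(0, sigma_u^2) and v ~ N(0, 1), cf. (4)-(5)."""
    sigma_u = (gamma(1.0 + beta) * sin(pi * beta / 2.0) /
               (gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0))) ** (1.0 / beta)
    u = np.random.normal(0.0, sigma_u, size)   # u ~ N(0, sigma_u^2), Eq. (5)
    v = np.random.normal(0.0, 1.0, size)       # v ~ N(0, 1), sigma_v = 1
    return u / np.abs(v) ** (1.0 / beta)

# Levy-flight step of Eq. (4) toward the best nest x_g (illustration only):
# s = 0.01 * mantegna_ratio(beta, x_i.shape) * (x_i - x_g)
```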


III. NONHOMOGENEOUS CUCKOO SEARCH ALGORITHM

A. Quantum Mechanism

Quantum computing (QC) has been an active research topic over the last decades, covering investigations on quantum computers and quantum algorithms [23]. QC brings a new philosophy to optimization due to its underlying concepts, and more and more researchers have focused on merging evolutionary computation and QC in terms of both theory and applications. Based on QC capabilities, numerous studies have been put forward to enhance both the efficiency and the speed of typical optimization algorithms [24]–[26]. A main motivation for employing QC in the CS algorithm is that each cuckoo can enhance its search ability according to an expected potential well, which can be designed as a target solution or the best exemplar of all the individuals in the CS algorithm. We present details on how to properly develop a quantum-based CS algorithm (NoCuSa) in the following paragraphs.

In quantum mechanics, the Schrödinger equation describes how the quantum state of a physical system changes with time [27]

i\hbar \frac{\partial}{\partial t} \Psi(\gamma, t) = \left[ -\frac{\hbar^2}{2m} \nabla^2 + V(\gamma, t) \right] \Psi(\gamma, t)    (7)

where i is the imaginary unit, \hbar is the reduced Planck constant, \Psi(\gamma, t) is the wave function of the quantum system at position \gamma and time t, m is the particle's mass, and \nabla^2 is the Laplacian operator. In the time-independent case, we consider the delta potential well, in which the wave function \psi(\gamma) of a particle in one dimension is subject to the potential V(\gamma) defined as

V(\gamma) = -\lambda\, \delta(\gamma)    (8)

where \delta(\gamma) is the Dirac delta function and \lambda (> 0) is the intensity of the Dirac delta function. With (7) and (8), we can obtain the solution of the wave function \psi(\gamma) as shown in (9)

\psi(\gamma) = \frac{1}{\sqrt{L}} \exp\left( -\frac{|\gamma|}{L} \right).    (9)

Given that each cuckoo has a quantum behavior, with its quantum state formulated by the wave function \psi, |\psi|^2 is the probability density function of the nest position

|\psi(\gamma)|^2 = \frac{1}{L} \exp\left( -\frac{2|\gamma|}{L} \right).    (10)

In (10), we define \eta = \exp(-2|\gamma|/L), where \eta \in [0, 1] is subject to a uniform distribution; then the solution for |\gamma| is obtained as

|\gamma| = \frac{L}{2} \ln\left( \frac{1}{\eta} \right).    (11)

B. Nonhomogeneous Update Laws

Rooted in the quantum mechanism, we define L through the distance between the ith nest x_i and the mean position \bar{x} of all nest positions in the current iteration. Let L = 2\delta |\bar{x} - x_{i,k}| (k is the iteration number) and s_{i,k} = |\gamma|; then the step size is redefined as

s_{i,k} = \delta\, |\bar{x} - x_{i,k}| \ln\left( \frac{1}{\eta} \right)    (12)

Fig. 1. Schematic of the motion strategies in the proposed nonhomogeneous search laws. The fitness landscape is randomly drawn only for illustration. In the graph, the colored dashed lines are the previous trajectories of the ith individual and the colored solid lines are the trajectories intuitively derived from the proposed nonhomogeneous update rules.

where \delta is a control parameter, which controls the difference between the position x_i and \bar{x}. The mean position \bar{x} is given as

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i.    (13)
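Before assembling the full update laws, the quantum-inspired step of (11)–(13) can be sampled directly: draw η uniformly on (0, 1] and scale ln(1/η) by the distance to the mean position. The short Python sketch below is a minimal illustration under our own naming assumptions, not the authors' implementation.

```python
import numpy as np

def quantum_step(x_i, x_mean, delta):
    """Quantum-inspired step size of (12): delta * |x_mean - x_i| * ln(1/eta).

    eta ~ U(0, 1]; with L = 2*delta*|x_mean - x_i|, the step equals (L/2)*ln(1/eta), cf. (11).
    """
    eta = np.random.uniform(np.finfo(float).tiny, 1.0, size=np.shape(x_i))  # avoid eta = 0
    return delta * np.abs(x_mean - x_i) * np.log(1.0 / eta)
```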

Depending on (1), (6), (12), and (13), we replace the original update rule with the following nonhomogeneous search mechanisms:

x_{i,k+1} = \begin{cases}
x_{i,k} + \dfrac{\alpha \cdot r \cdot u}{|v|^{1/\beta}}\,[x_{i,k} - g_k], & s_r \in (2/3, 1] & (14a)\\
\bar{x} + L_k\,[\bar{x} - x_{i,k}], & s_r \in (1/3, 2/3] & (14b)\\
x_{i,k} + E\,[g_k - x_{i,k}], & \text{otherwise} & (14c)
\end{cases}

where L = \delta \ln(1/\eta) and E = \delta \exp(\eta). Here, we replace the quantum term L of (14b) with E in (14c), because L \to +\infty if \eta \to 0. This design guarantees that there is no large fluctuation around the global best candidate nest. s_r is a random number obtained from a uniform random distribution on the interval [0, 1].

C. Proposed Algorithm

In the proposed NoCuSa algorithm, the population exhibits potential search characteristics (such as surmounting barriers on the fitness landscape) as well as the preserved features of the original CS algorithm, such as a simple topological structure and easy implementation. Rooted in the strategies developed above, an intuitive schematic is illustrated in Fig. 1, in which we conceptually show how the nonhomogeneous update laws can enhance the search abilities of the individuals and get over the barrier on the fitness landscape.

As illustrated in Fig. 1, the fitness landscape has several local minima and barriers on its surface. In the original CS algorithm, although each individual shares information from its neighbors, they are all attracted by the historical global best record, even if it is not the "real global" optimum (law 1 in Fig. 1). This phenomenon results from their homogeneous search behavior and the absence of exploration abilities for creating potential and diverse solutions. As can be seen in Fig. 1, the current global best (best-so-far) is the real optimum, but a cuckoo will lay an egg (solution) when it looks for a new nest on its way to the best-so-far. If the cuckoo is trapped in such a local region, it will not be able to jump out relying only on random perturbations from its neighbors, because its neighbors also obey the same search law and find similar solutions in the region. As a nonhomogeneous strategy, the proposed mechanism can surmount the drawbacks of the homogeneous update rule. As shown in Fig. 1, laws 2 and 3 can create various solutions, diversifying the search and flying in a promising direction toward the real global region. The implementation of NoCuSa is listed in Algorithm 1.

Algorithm 1: NoCuSa Algorithm
begin
    Generate initial host nests x_i (i = 1, 2, ..., N).
    Evaluate the fitness f_i of each nest x_i.
    Determine the global best nest g.
    while the terminal condition is not reached do
        for i = 1, ..., N do
            Calculate the step sizes by Eq. (4) and Eq. (12), respectively.
            Update x_i by Eq. (14) and refresh f(x_i).
            Generate a cuckoo x' randomly by Eq. (2) and (6).
            Evaluate the fitness f' of x'.
            if f' is better than f_i then
                Replace nest x_i with x'.
            end
            Exclude the worst nest according to p_a.
        end
        Get the current best nest g = arg min_i f(x_i).
    end
end
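For readers who prefer code, the nonhomogeneous update in (14) can be sketched as below. This is only an illustrative Python rendering under our own naming conventions (it reuses the hypothetical mantegna_ratio helper sketched in Section II); it is not the authors' released implementation, and details such as boundary handling are omitted.

```python
import numpy as np

def nocusa_update(x, g, alpha, beta, delta):
    """One nonhomogeneous position update per nest, following (14a)-(14c).

    x: (N, D) array of nest positions; g: (D,) global best nest.
    Illustrative sketch only; parameter handling may differ from the paper's code.
    """
    x_mean = x.mean(axis=0)                        # mean position, Eq. (13)
    x_new = np.empty_like(x)
    for i in range(x.shape[0]):
        sr = np.random.uniform()
        eta = np.random.uniform(np.finfo(float).tiny, 1.0)
        if sr > 2.0 / 3.0:                         # law (14a): Levy-style move relative to g
            r = np.random.uniform()
            x_new[i] = x[i] + alpha * r * mantegna_ratio(beta, x[i].shape) * (x[i] - g)
        elif sr > 1.0 / 3.0:                       # law (14b): learn from the mean position
            L = delta * np.log(1.0 / eta)
            x_new[i] = x_mean + L * (x_mean - x[i])
        else:                                      # law (14c): learn from the global best
            E = delta * np.exp(eta)
            x_new[i] = x[i] + E * (g - x[i])
    return x_new
```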

IV. CONVERGENCE ANALYSIS

To analyze convergence, we consider two simple systems, one for CS and one for NoCuSa, each with only one nest (solution) in a 1-D search space, and we show that the quantum mechanism and the nonhomogeneous update laws exhibit convergent characteristics and potential search abilities in improving the standard CS algorithm.

Definition 1: Given a measurable function f : R^n \to R and M \subseteq R^n, where M is the feasible space, the objective is to find a point x \in M which minimizes f on M. Let R_\epsilon = \{ z \in M \mid f(z) < \psi + \epsilon \} be the optimality region, where \psi is the essential infimum of f on M and \epsilon > 0.

A. Convergence of Cuckoo Search Algorithm

Proposition 1 (Markov's Inequality): Let X be a nonnegative random variable, that is, P(X \ge 0) = 1. Then

P(X \ge \vartheta) \le \frac{E[X]}{\vartheta}    (15)

where \vartheta is any positive real number, that is, \vartheta > 0.

Corollary 1 (Chebyshev's Inequality): Let X be a random variable with mean value \bar{\mu} and variance \sigma^2. Then

P(|X - \bar{\mu}| > \vartheta) \le \frac{\sigma^2}{\vartheta^2}.    (16)

Theorem 1: Given a problem satisfying the conditions in Definition 1, let \langle x_n \rangle \subseteq M be the position sequence of a nest in the CS algorithm; then the algorithm can converge in probability to the global position g.

Proof: Let S = \alpha \cdot r \cdot u / |v|^{1/\beta} (in the standard CS algorithm \alpha = 0.01); then (1) reduces to

x_{n+1} = x_n + S_{n+1}(x_n - g)    (17)

where g is the best individual in the CS algorithm. For simplicity of analysis, the following recursion can be obtained:

x_n - g = x_{n-1} - g + S_n(x_{n-1} - g)
        = (1 + S_n)(x_{n-1} - g)
        = (1 + S_n)(1 + S_{n-1})(x_{n-2} - g)
        = \cdots
        = (x_0 - g) \prod_{j=1}^{n} (1 + S_j)

where x_0 is the initial position of an "egg" (solution). As such, we can analyze the convergence of the sequence of random variables \langle x_n \rangle through the infinite product

\lim_{n\to\infty} (x_n - g) = 0 \iff \lim_{n\to\infty} \prod_{j=1}^{n} (1 + S_j) = 0 \iff \lim_{n\to\infty} \sum_{j=1}^{n} \ln(1 + S_j) = -\infty.    (18)

To obtain a fixed value and decrease the randomness of the limit in (18), we compute the expected value of the stochastic variable S:

E[S] = \int \frac{\alpha \cdot r \cdot u}{|v|^{1/\beta}} \, dr\, du\, dv
     = \alpha \int_{0}^{1} r\, dr \cdot \int_{-\infty}^{+\infty} u\, du \cdot \int_{-\infty}^{+\infty} |v|^{-1/\beta}\, dv
     = \frac{\alpha}{2} \left[ \lim_{t\to\infty} \int_{-t}^{0} u\, du + \lim_{t\to\infty} \int_{0}^{t} u\, du \right] \cdot \int_{-\infty}^{+\infty} |v|^{-1/\beta}\, dv
     = \frac{\alpha}{2} \cdot 0 \cdot \int_{-\infty}^{+\infty} |v|^{-1/\beta}\, dv
     = 0.

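As an informal numerical sanity check of the symmetry argument above (ours, not part of the paper), E[S] can also be estimated by direct sampling; the empirical mean of S is close to zero for the standard setting α = 0.01, β = 1.5. The mantegna_ratio helper is the hypothetical one sketched in Section II.

```python
import numpy as np

np.random.seed(0)
alpha, beta, n = 0.01, 1.5, 1_000_000

r = np.random.uniform(0.0, 1.0, n)
S = alpha * r * mantegna_ratio(beta, n)   # S = alpha * r * u / |v|**(1/beta)

print(S.mean())   # close to 0, though heavy tails make the estimate noisy
```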

Let \chi = \ln(1 + S) and let \chi_1, \chi_2, \chi_3, \ldots be a sequence of independent identically distributed random variables, each with mean \bar{\mu} = E[\ln(1 + S)] and variance \sigma^2. Then, \forall \epsilon > 0,

\lim_{n\to\infty} P(|\chi_n - \bar{\mu}| > \epsilon) = 0.    (19)

Since \chi_i is a nonnegative random variable, for any \vartheta_1 > 0 Markov's inequality (Proposition 1) gives

P(\chi \ge \vartheta_1) \le \frac{\bar{\mu}}{\vartheta_1}.    (20)

According to Chebyshev's inequality (Corollary 1), for any \vartheta_2 > 0 we can obtain

P(|\chi - \bar{\mu}| > \vartheta_2) \le \frac{\sigma^2}{\vartheta_2^2}.

Applying Chebyshev's inequality to \frac{1}{n}\sum_{i=1}^{n} \ln(1 + S_i) yields

P\left( \left| \frac{1}{n}\sum_{i=1}^{n} \ln(1 + S_i) - \bar{\mu} \right| > \epsilon \right) \le \frac{\sigma^2}{\epsilon^2 n}.    (21)

Letting n \to \infty in the above inequality, we obtain

\lim_{n\to\infty} P\left( \left| \frac{1}{n}\sum_{i=1}^{n} \ln(1 + S_i) - \bar{\mu} \right| > \epsilon \right) \le \lim_{n\to\infty} \frac{\sigma^2}{\epsilon^2 n} = 0.

This proves that the sequence \{\chi_n\}_{n=1}^{\infty} of random variables converges in probability. This completes the proof.

B. Convergence of Nonhomogeneous Cuckoo Search Algorithm

Theorem 2: If \langle u_n \rangle is a sequence of independent identically distributed random variables with u_n \sim U(0, 1) for all n > 0, then

\lim_{n\to+\infty} \frac{1}{n} \sum_{j=1}^{n} \ln \ln \frac{1}{u_j} = \int_{0}^{1} \ln \ln \frac{1}{u}\, du = -\tau

where \tau \approx 0.577215664901532 (the Euler–Mascheroni constant).
Proof: The proof of Theorem 2 can be found in the supplementary material.
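Theorem 2 can be checked empirically (this snippet is ours, not from the paper): averaging ln ln(1/u_j) over many uniform draws approaches −τ ≈ −0.5772, which is what drives the upper bound e^τ in Theorem 3 below.

```python
import numpy as np

np.random.seed(1)
u = np.random.uniform(np.finfo(float).tiny, 1.0, 1_000_000)
print(np.mean(np.log(np.log(1.0 / u))))   # approximately -0.5772, i.e., -tau
```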

Theorem 3: Given a problem satisfying the conditions in Definition 1, let \langle x_n \rangle \subseteq M be the position sequence of a nest in NoCuSa; then the sufficient and necessary condition guaranteeing that the NoCuSa algorithm converges to the averaged position p of all the nests and to the global position g is \delta \in (1, e^\tau).

Proof: Since the proof for (14a) is similar to that of the CS algorithm, here we only analyze the convergence of the update rules (14b) and (14c). For an egg (solution) in 1-D, letting p = \bar{x}, we can rewrite (14b) and (14c) as

x_n = p + L(p - x_{n-1})    (22a)
x_n = x_{n-1} + E(g - x_{n-1}).    (22b)

Unrolling the two recursions gives

x_n - p = L_n(x_{n-1} - p) = (x_0 - p) \prod_{j=1}^{n} L_j    (23a)
x_n - g = (g - x_0) \prod_{j=1}^{n} (E_j - 1).    (23b)

Accordingly, the convergence of (23a) degenerates to proving the existence of the limit

\lim_{n\to+\infty} (x_n - p) = 0 \iff \lim_{n\to+\infty} \prod_{j=1}^{n} L_j = 0.    (24)

According to (24) and L_j = \delta \ln(1/\eta_j), the limit is deduced as follows:

\lim_{n\to+\infty} \prod_{j=1}^{n} L_j = 0
\iff \lim_{n\to+\infty} \delta^n \prod_{j=1}^{n} \ln\frac{1}{\eta_j} = 0
\iff \lim_{n\to+\infty} \ln\left( \delta^n \prod_{j=1}^{n} \ln\frac{1}{\eta_j} \right) = -\infty
\iff \lim_{n\to+\infty} \left[ n \ln \delta + \sum_{j=1}^{n} \ln \ln\frac{1}{\eta_j} \right] = -\infty
\iff \lim_{n\to+\infty} n \left[ \ln \delta + \frac{1}{n}\sum_{j=1}^{n} \ln \ln\frac{1}{\eta_j} \right] = -\infty.

By Theorem 2, (1/n)\sum_{j=1}^{n} \ln\ln(1/\eta_j) \to -\tau, so the limit holds if and only if \ln \delta - \tau < 0, that is, \delta < e^\tau.

For (23b), with E_j = \delta e^{\eta_j}, the analogous condition on \prod_{j=1}^{n} (E_j - 1) requires, for all \eta \in (0, 1),

\lim_{n\to+\infty} \sum_{j=1}^{n} \ln\frac{1}{\delta e^{\eta_j} - 1} = +\infty
\iff \ln\frac{1}{\delta e^{\eta_{\min}} - 1} > 0
\iff \begin{cases} \delta e^{\eta_{\min}} - 1 > 0 \\ \dfrac{1}{\delta e^{\eta_{\min}} - 1} > 1 \end{cases}
\iff 1 < \delta e^{\eta_{\min}} < 2 \iff 1 < \delta < 2 \quad (\eta_{\min} \to 0).

Combining the two conditions (\delta < e^\tau, with e^\tau < 2, and \delta > 1), we conclude that 1 < \delta < e^\tau is the sufficient and necessary condition for the convergence of NoCuSa. This completes the proof.

V. NUMERICAL STUDIES AND DISCUSSION

To validate the proposed NoCuSa algorithm and fulfill the target of improving the CS algorithm's performance, we choose 24 various benchmark functions [28], [29]. All the compared methods are validated on these functions with 30 dimensions, which are divided into two groups: 1) a unimodal group (F1–F6 and F15–F17) and 2) a multimodal group (F7–F14 and F18–F24), as shown in Table S1.

Fig. 2. Main effect of the four control parameters in NoCuSa concluded from the DoE. Main effect of the (a)–(c) discovery probability pa, (d)–(f) difference factor α, (g)–(i) Lévy factor β, and (j)–(l) learning regulator δ on functions SSF, SSPNF, and SRF, respectively.

A. Parameters Analysis

In the standard CS algorithm, there are three control parameters, pa, α, and β, which are denoted as the discovery probability, the difference factor, and the Lévy factor, respectively. In the proposed NoCuSa algorithm, we employ another control parameter, called the learning regulator (δ). These four parameters may have significant effects on the performance of NoCuSa. Therefore, we discuss the influence of the four control parameters used in the NoCuSa algorithm in this section. The design of experiments (DoE) was conducted on three benchmark functions, the shifted sphere function (SSF), F1, the shifted Schwefel's problem 1.2 with noise in fitness (SSPNF), F4, and the shifted Rosenbrock's function (SRF), F7 [28], with 30 dimensions, to investigate the impact of these parameters. In the DoE, the maximum number of generations was set to 2000, and the population size of NoCuSa was set to 20. All the experiments were run independently 20 times.

Fig. 2 illustrates the effects of the four parameters according to the main effects of the DoE. The discovery probability pa was varied from 0.15 to 0.45 in increments of 0.1, and its main effect on the performance of NoCuSa is shown in the first row of Fig. 2. The difference factor, α, is an important parameter controlling the difference between two potential solutions, and different values of α lead to different performance of NoCuSa. In the DoE, the value of α was varied from 0.5 to 2.0 in increments of 0.2, and its main effect is illustrated in the second row of Fig. 2. We varied the Lévy factor, β, from 1.1 to 1.9 in increments of 0.2, while the learning regulator, δ, was increased from 0.2 to 3.0 in the same increments as β. The main effects of these two parameters are exhibited in the third and fourth rows of Fig. 2, respectively. According to the results of the DoE, the four parameters were set to pa = 0.3, α = 1.1, β = 1.7, and δ = 1.6, respectively.

B. Experimental Settings

In this section, 24 benchmark functions (F1–F24 in Table S1 of the supplementary material) are used to compare the performance of the proposed NoCuSa algorithm with 15 other existing evolutionary- and/or swarm-based algorithms. These algorithms include the standard PSO (SPSO) [30], adaptive PSO (APSO) [31], comprehensive learning PSO (CLPSO) [32], CS [5], modified CS (MCS) [8], heterogeneous PSO (CHPSO) [3],


self-adaptive CS (SACS) [10], particle swarm CS (PSCS) [16], frog leaping and chaotic CS (FLC-CS) [13], hybrid artificial bee colony (HABC) [33], accelerating ABC with an adaptive local search (AABCLS) [34], information learning ABC (ILABC) [35], self-adaptive differential evolution (jDE) [36], auto-enhanced population diversity differential evolution (AEPD-DE) [37], and DE/eig [38]. The parameter settings of each algorithm are listed as follows.
1) SPSO [30]: ω = 1/(2 log(2)) and c1 = c2 = 1.0 + log(2).
2) APSO [31]: ω_start = 0.9 and c1 = c2 = 2.0.
3) CLPSO [32]: ω = 0.9 − 0.7t/tm and c1 = c2 = 1.49445.
4) CHPSO [3]: ω = 0.9 − 0.5t/tm and c1 = c2 = 1.49445.
5) HABC [33]: η = 0.6, Tp = 90, and Fr/Fa = 0.1.
6) AABCLS [34]: φ ∈ [−1, 1], C = 1.5, and Pr = 0.4.
7) ILABC [35]: φ ∈ [−1, 1].
8) jDE [36]: τ1 = τ2 = 0.1, F = 0.5, and CR = 0.9.
9) AEPD-DE [37]: τ1 = τ2 = 0.1, F = 0.5, and CR = 0.9.
10) Differential evolution with eigenvector-based crossover (DE/eig) [38]: F = 0.7 and CR = 0.5.
11) CS [5]: pa = 0.25, α = 0.01, and β = 3/2.
12) SACS [10]: pa = 0.05 + 0.15 × rand or pa = 0.85 + 0.05 × rand.
13) PSCS [16]: ϑ ∼ N(0, 0.5) and φ ∼ N(0.5, 0.5).
14) FLC-CS [13]: pa = 0.25, α = 0.01, and ω = (2/t)^0.3.
15) MCS [8]: pa = 0.7, α = 1 or 2, and β = 3/2.
16) NoCuSa: pa = 0.3, α = 1.1, β = 1.7, and δ = 1.6.
Here, t and tm are the current generation and the maximum number of generations, respectively. In the numerical experiments, the population size of NoCuSa is 20. The maximum number of fitness evaluations (FEs) is set to 2 × 10^5 when solving problems F1–F14, and to 2 × 10^6 on problems F15–F24. All experiments were independently run 30 times for each algorithm on each problem. The mean values and standard deviations of the results throughout each optimization process are recorded and presented in Tables I and II (the best is in bold). Fig. 4 and Fig. S1 (in the supplementary material) illustrate the convergence characteristics of each algorithm on the benchmark functions (more details can be found in the supplementary material).
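For convenience, the NoCuSa settings stated above can be collected in a single configuration object. The snippet below is our own summary of the values given in the text (it is not a configuration file from the authors' code).

```python
# NoCuSa experimental configuration as stated above; values follow the DoE
# outcome and the protocol of Section V-B.
NOCUSA_CONFIG = {
    "population_size": 20,
    "p_a": 0.3,        # discovery probability
    "alpha": 1.1,      # difference factor
    "beta": 1.7,       # Levy factor
    "delta": 1.6,      # learning regulator, inside the convergence interval (1, e^tau)
    "max_fes": {"F1-F14": 2 * 10**5, "F15-F24": 2 * 10**6},
    "independent_runs": 30,
}
```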

C. Dimension Effect on NoCuSa

In this section, we demonstrate that the proposed NoCuSa method is insensitive to, or only slightly dependent on, the dimension of the benchmark problems. The maximum FEs is set to 10^5 in these validation tests, and the other parameters of NoCuSa are the same as the settings in Section V-B. Moreover, to simplify the experimental studies, the tests were conducted on 14 different functions from CEC2005 [28] with the dimension varying from 10 to 100 in increments of 10. As illustrated in Fig. 3, on five benchmark functions (F7, F8, F11, F13, and F14), NoCuSa achieves comparable solutions with only a slight dependence on the dimension. Notably, NoCuSa is insensitive to the dimension of function F1. On the rest of the functions (F2, F4, F5, F9, and F12), NoCuSa is sensitive to the dimension when it is larger than 60 (function F10 being an exception); that is, NoCuSa is robust to the dimension when it is smaller than 60.

Fig. 3. Mean values of NoCuSa varying with the dimension on the 14 benchmark functions.

As an example, this phenomenon can be found on function F12.

D. Comparison Among CS-Based Algorithms

Here, a fair comparison among different variants of the CS algorithm is presented; the variants include CS [5], MCS [8], SACS [10], PSCS [16], and FLC-CS [13]. As can be observed from Table I and Fig. 4, among the nine unimodal functions (F1–F6 and F15–F17), the hybrid algorithm PSCS did much better than the original CS algorithm on three of them (F15–F17), and, taking second place, it was slightly superior to CS on problem F4. A similar behavior can be found for FLC-CS on F15–F17, but it failed on F16 and was worse than PSCS and NoCuSa. On functions F2–F4 and F15–F17, NoCuSa was better than the other four CS-based algorithms by more than three orders of magnitude. On the 15 multimodal functions, PSCS, FLC-CS, and NoCuSa exhibited different performances, as illustrated in Fig. 4. For instance, FLC-CS was slightly better than PSCS on F8, F13, and F17. Noticeably, it obtained a better result on F19, where all the other algorithms were trapped in local regions. Nevertheless, PSCS exceeded FLC-CS on functions F9, F14, and F21–F24 with considerably better mean values (best-so-far). On the other hand, NoCuSa was much better than the rest of the methods on F7, F10–F12, F14, F18, and F24 (slightly better than CS on F7 and F14). Notably, NoCuSa was the only algorithm that achieved the target solutions to both functions F21 and F24. According to Table I and Fig. 4, the proposed NoCuSa is superior to the other five variants of the CS algorithm, followed by PSCS and FLC-CS.


TABLE I R ESULTS OF E ACH CS-BASED A LGORITHM ON THE 24 B ENCHMARK F UNCTIONS

Fig. 4. Mean convergent characteristics of all the CS-based algorithms on the benchmark functions with 30 dimensions. (a) F1 . (b) F2 . (c) F3 . (d) F4 . (e) F5 . (f) F6 . (g) F7 . (h) F8 . (i) F9 . (j) F10 . (k) F11 . (l) F12 . (m) F13 . (n) F14 . (o) F15 . (p) F16 . (q) F17 . (r) F18 . (s) F19 . (t) F20 . (u) F21 . (v) F22 . (w) F23 . (x) F24 .

Although MCS cannot always find good solutions for the unimodal problems, it did better than CS on functions F16, F17, and F20–F24, while SACS failed on the 24 benchmark problems. On functions F2, F16, and F17, NoCuSa is the only algorithm that finds the target solutions. PSCS worked better than the rest of the algorithms on functions F9 and F20–F22, but on the other functions (except F9) it converged too fast to create potential solutions; that is, PSCS lost diversity in its search and


TABLE II C OMPARISON A MONG N O C U S A , A LL PSO-, ABC-, AND DE-BASED A LGORITHMS ON THE 24 B ENCHMARK F UNCTIONS

quickly got lost in local minima. FLC-CS achieved better results than the other methods on functions F8 and F19. As shown in Table I, CS was trapped on functions F15–F17 without the ability to jump out of the local regions, but NoCuSa was always significantly better than the classical CS algorithm on these problems. This phenomenon is the result of the nonhomogeneous update laws employed in NoCuSa, which endow the individuals with potential search abilities in promising regions of the space.

E. Comparison With Other State-of-the-Art Algorithms

As shown in Table II and Fig. S1 of the supplementary material, on the unimodal problems NoCuSa converges faster than the other six methods with considerably better accuracy. Fig. S1 illustrates that NoCuSa surpassed all the other algorithms on functions F2 and F15. In particular, it significantly improved the results on function F2, because it achieved the target solution to this problem. On the unimodal group (F1–F6 and F15–F17) except F4, as shown in Fig. S1, NoCuSa has provided better performance in terms of the mean values and has outperformed all the other six algorithms by a significant margin. On two unimodal problems (F2 and F3), APSO took second place as illustrated in Fig. S1, while CLPSO was not very promising for solving unimodal functions, which is consistent with the conclusions in [32]. SPSO performed better than APSO, CLPSO, CHPSO, HABC, AABCLS, and ILABC on function F4, and these algorithms achieved similar convergence characteristics in median performance. On the same function, AEPD-DE and DE/eig were considerably better than all the other compared algorithms, as illustrated in Fig. S1; these two methods exhibited similar performance on functions F1, F4, and F15–F17. The ABC-based algorithms, involving HABC, AABCLS, and ILABC, were not good enough at finding the solutions to the unimodal functions. As can be seen in Fig. S1 and Table II, HABC and AABCLS were trapped in similar local regions and could not show good performance. Another similarity can be found between CHPSO and NoCuSa in searching for the solution to function F6. Based on these experimental results, it can be inferred that NoCuSa has a good search ability in finding the solutions to the unimodal functions.


The ability to avoid being trapped and to reach the real (optimal) solution on the rugged landscapes of the multimodal functions, such as problems F7–F14 and F18–F24 in this paper, is an important feature. As shown in Table II and Fig. S1, on the 15 multimodal functions the performances of the four PSO-based algorithms varied with the problems they dealt with. For example, CHPSO was apparently better than CLPSO on the four functions F10, F12, F14, and F17, while it was surpassed by CLPSO on F9, F21, F22, and F24. As can be seen, the three ABC-based algorithms (HABC, AABCLS, and ILABC) were always trapped in local regions and could not get good results. For instance, they failed on functions F7 and F18; in detail, the performances of HABC and AABCLS were similar to each other on F7, while on F18 all three algorithms were trapped. Different from the ABC algorithms, the DE-based algorithms had potential abilities on several multimodal problems, such as F9, F18, and F20–F22. As can be seen in Table II and Fig. S1, on function F18 the DE-based algorithms took the top three places when compared with the rest of the methods. Although APSO was considerably better than the standard PSO algorithm and other methods (such as the ABC-based and DE-based algorithms) on functions F10 and F12, it performed worse than other methods on most of the multimodal functions. CLPSO worked well on functions F9, F21, F22, and F24; for example, it took first place on the multimodal function F13, and it achieved the target solution to F22. Due to the complex landscape of function F8, all the algorithms were lost in local regions and exhibited premature convergence, which also occurred on problem F19. Although CLPSO did better than the rest of the algorithms, including NoCuSa, there was no significant difference among them. As reported in [32], CLPSO works well on multimodal functions, which agrees with the results on functions F13 and F23. However, NoCuSa exhibited better performance than CLPSO on most of the multimodal functions, such as F7, F10–F12, F18, F21, F23, and F24; in particular, NoCuSa achieved the real solutions to problems F21 and F24, which is due to the efficient nonhomogeneous laws in the proposed NoCuSa algorithm. As to the ABC-based algorithms, NoCuSa was always better than them on all the multimodal functions in terms of mean values. For example, on F7 both NoCuSa and CS achieved better results than the rest of the methods, as observed from Table II and Fig. S1, in which all the ABC-based algorithms were trapped and worse than these two approaches. AABCLS was slightly better than the other two variants of the ABC algorithm (HABC and ILABC), but it still could not exceed NoCuSa, probably because AABCLS tends to enhance the local search at the cost of global search ability. In contrast, the proposed nonhomogeneous update laws qualify NoCuSa for both global and local search when dealing with various problems with different fitness landscapes. Accordingly, Fig. S1 and Table II show that NoCuSa performed better than the other ten state-of-the-art algorithms on most of the multimodal functions, including F7, F10–F12, F14, F21, F22, and F24, which implies that NoCuSa has a more efficient search ability owing to the nonhomogeneous strategies and the quantum mechanism.


TABLE III AVERAGE R ANKS OF A LL C OMPARED A LGORITHMS ON THE 24 B ENCHMARK F UNCTIONS

In summary, Fig. 4 and Table I demonstrate that NoCuSa is superior to the CS-based algorithms on both unimodal and multimodal functions, while the good performance of NoCuSa is also observed in Fig. S1 and Table II when it is compared with the ten other state-of-the-art algorithms, including PSO-, ABC-, and DE-based methods. Rooted in the theoretical analysis and the experimental results, the search ability of NoCuSa is enhanced by the nonhomogeneous update laws, which lead the individuals to learn from different exemplars (such as the averaged position and the global best position).

F. Statistical Analysis

In order to further assert the significant improvement of the proposed NoCuSa algorithm, we employ two nonparametric methods, the Friedman and Quade tests [39], to analyze and rank all the algorithms. The statistical analysis for ranking the performance was conducted on all the benchmark functions (F1–F24), and the average rankings are listed in Table III. Each algorithm and its ranking are listed in ascending order (the lower the better). The statistics and corresponding p-values are shown at the bottom of the table. As illustrated in Table III, the rankings of the 16 algorithms are almost consistent across the two tests. NoCuSa achieved the best ranking in terms of the minimization measurement, followed by the DE-based algorithms DE/eig, jDE, and AEPD-DE. As can be observed, jDE and AEPD-DE occupied slightly different rankings on the Friedman and Quade tests; similarly, differences in ranking can be found for another two pairs of algorithms, CLPSO–PSCS and FLC-CS–CS. The two nonparametric tests indicate that the proposed NoCuSa algorithm is significantly better than all the other methods, including the CS-based algorithms and the ten other state-of-the-art methods. Moreover, the p-values suggest that NoCuSa achieves a significant improvement over the classical CS algorithm at the 5% significance level.
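For reference, a Friedman-style ranking of this kind can be reproduced with standard statistical tooling; the sketch below (our own, with placeholder array names and data) ranks the algorithms per function and applies SciPy's Friedman test. The Quade test is not available in SciPy and would need a separate implementation or package.

```python
import numpy as np
from scipy import stats

# results[i, j]: mean error of algorithm j on benchmark function i
# (rows = 24 functions, columns = 16 algorithms); placeholder data here.
results = np.random.rand(24, 16)

ranks = stats.rankdata(results, axis=1)      # rank per function, lower error = better
avg_rank = ranks.mean(axis=0)                # average Friedman rank per algorithm
chi2, p_value = stats.friedmanchisquare(*results.T)

print(avg_rank)
print(chi2, p_value)
```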


VI. CONCLUSION

This paper presents the NoCuSa algorithm, which relies on different update laws: the individuals can learn from the difference between pairs of agents, and they can also learn from the averaged information of the whole swarm as well as from the global best information based on the quantum mechanism, for efficient search. The novel nonhomogeneous update laws efficiently enhance the search abilities of the individuals in NoCuSa in both local regions and the larger potential space, and they may also be useful to other swarm-based search algorithms, such as ABC-based algorithms. Another important contribution of this paper is that the search behaviors of the individuals have been theoretically investigated in both the original CS algorithm and the proposed NoCuSa algorithm, and a set of conditions guaranteeing the convergence of a single individual in NoCuSa is also provided. The analyses and experiments show that the proposed nonhomogeneous strategy enables NoCuSa to have a powerful search ability and to generate better-quality solutions more effectively when compared with the fifteen other existing methods. The statistical results demonstrate that NoCuSa is significantly better than the compared algorithms at the 5% significance level. Based on the results on the 24 benchmark functions, it can be concluded that NoCuSa significantly improves the performance of the original CS algorithm. Further work will concern multiswarm strategies with various update laws for enhancing the search abilities of the proposed NoCuSa algorithm.

The source code of NoCuSa is available at: http://godzilla.uchicago.edu/pages/ngaam/NoCuSa/index.html.

REFERENCES

[1] D. Tolkunov and A. V. Morozov, "Single temperature for Monte Carlo optimization on complex landscapes," Phys. Rev. Lett., vol. 108, no. 25, Jun. 2012, Art. ID 250602.
[2] E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, "GSA: A gravitational search algorithm," Inf. Sci., vol. 179, no. 13, pp. 2232–2248, 2009.
[3] N. J. Cheung, X.-M. Ding, and H.-B. Shen, "OptiFel: A convergent heterogeneous particle swarm optimization algorithm for Takagi–Sugeno fuzzy modeling," IEEE Trans. Fuzzy Syst., vol. 22, no. 4, pp. 919–933, Aug. 2014.
[4] N. J. Cheung, Z.-K. Xu, X.-M. Ding, and H.-B. Shen, "Modeling nonlinear dynamic biological systems with human-readable fuzzy rules optimized by convergent heterogeneous particle swarm," Eur. J. Oper. Res., vol. 247, no. 2, pp. 349–358, 2015.
[5] X.-S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Proc. World Congr. Nature Biologically Inspired Comput. (NaBIC), Coimbatore, India, Dec. 2009, pp. 210–214.
[6] X.-S. Yang and S. Deb, "Cuckoo search: Recent advances and applications," Neural Comput. Appl., vol. 24, no. 1, pp. 169–174, 2014.
[7] A. B. Mohamad, A. M. Zain, and N. E. N. Bazin, "Cuckoo search algorithm for optimization problems—A literature review and its applications," Appl. Artif. Intell., vol. 28, no. 5, pp. 419–448, 2014.
[8] S. Walton, O. Hassan, K. Morgan, and M. R. Brown, "Modified cuckoo search: A new gradient free optimisation algorithm," Chaos Soliton Fract., vol. 44, no. 9, pp. 710–718, 2011.
[9] P. Ong, "Adaptive cuckoo search algorithm for unconstrained optimization," Sci. World J., Sep. 2014.
[10] X. Li and M. Yin, "Modified cuckoo search algorithm with self adaptive parameter method," Inf. Sci., vol. 298, pp. 80–97, Mar. 2015.
[11] J.-H. Lin and I.-H. Lee, "Emotional chaotic cuckoo search for the reconstruction of chaotic dynamics," in Latest Advances in Systems Science and Computational Intelligence. Athens, Greece: WSEAS Press, 2012.


[12] H. Zheng and Y. Zhou, "A novel cuckoo search optimization algorithm base on Gauss distribution," J. Comput. Inf. Syst., vol. 8, no. 10, pp. 4193–4200, 2012.
[13] X. Liu and M. Fu, "Cuckoo search algorithm based on frog leaping local search and chaos theory," Appl. Math. Comput., vol. 266, pp. 1083–1092, Sep. 2015.
[14] X. Li, J. Wang, and M. Yin, "Enhancing the performance of cuckoo search algorithm using orthogonal learning method," Neural Comput. Appl., vol. 24, no. 6, pp. 1233–1247, 2014.
[15] A. Layeb, "A novel quantum inspired cuckoo search for knapsack problems," Int. J. Bio-Inspired Comput., vol. 3, pp. 297–305, Sep. 2011.
[16] X. Li and M. Yin, "A particle swarm inspired cuckoo search algorithm for real parameter optimization," Soft Comput., pp. 1–25, Feb. 2015, DOI: 10.1007/s00500-015-1594-8.
[17] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. Neural Netw., vol. 4. Perth, WA, Australia, 1995, pp. 1942–1948.
[18] H. Chiroma et al., "Global warming: Predicting OPEC carbon dioxide emissions from petroleum consumption using neural network and hybrid cuckoo search algorithm," PLoS ONE, vol. 10, Aug. 2015, Art. ID e0136140.
[19] A. K. Bhandari, V. K. Singh, A. Kumar, and G. K. Singh, "Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using Kapur's entropy," Expert Syst. Appl., vol. 41, no. 7, pp. 3538–3560, 2014.
[20] Z. Bayraktar, M. Komurcu, J. A. Bossard, and D. H. Werner, "The wind driven optimization technique and its application in electromagnetics," IEEE Trans. Antennas Propag., vol. 61, no. 5, pp. 2745–2757, May 2013.
[21] R. Storn and K. Price, "Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces," J. Glob. Optim., vol. 11, no. 4, pp. 341–359, 1997.
[22] X.-S. Yang and S. Deb, "Engineering optimisation by cuckoo search," Int. J. Math. Model. Numer. Optim., vol. 1, no. 4, pp. 330–343, 2010.
[23] G. Jaeger, Quantum Information. New York, NY, USA: Springer, 2007, pp. 81–89.
[24] K.-H. Han and J.-H. Kim, "Quantum-inspired evolutionary algorithms with a new termination criterion, Hε gate, and two-phase scheme," IEEE Trans. Evol. Comput., vol. 8, no. 2, pp. 156–169, Apr. 2004.
[25] A. Layeb and D.-E. Saidouni, "A new quantum evolutionary local search algorithm for MAX 3-SAT problem," in Hybrid Artificial Intelligence Systems (LNCS 5271). Heidelberg, Germany: Springer, 2008, pp. 172–179.
[26] A. Draa, S. Meshoul, H. Talbi, and M. A. Batouche, "A quantum-inspired differential evolution algorithm for solving the N-queens problem," Int. Arab J. Inf. Technol., vol. 7, no. 1, pp. 21–27, 2010.
[27] E. Schrödinger, "An undulatory theory of the mechanics of atoms and molecules," Phys. Rev., vol. 28, pp. 1049–1070, Dec. 1926.
[28] P. N. Suganthan et al., "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," School Elect. Electron. Eng., Nanyang Technol. Univ., Singapore, Tech. Rep. 2005005, 2005.
[29] J. J. Liang, B.-Y. Qu, and P. N. Suganthan, "Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization," Comput. Intell. Lab., Zhengzhou Univ., Zhengzhou, China, Tech. Rep. 201311, 2013.
[30] M. Clerc. (2012). Standard Particle Swarm Optimisation. [Online]. Available: http://clerc.maurice.free.fr/pso/SPSO_descriptions.pdf
[31] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 6, pp. 1362–1381, Dec. 2009.
[32] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Trans. Evol. Comput., vol. 10, no. 3, pp. 281–295, Jun. 2006.
[33] L. Ma, Y. Zhu, D. Zhang, and B. Niu, "A hybrid approach to artificial bee colony algorithm," Neural Comput. Appl., pp. 1–23, Mar. 2015, DOI: 10.1007/s00521-015-1851-x.
[34] S. S. Jadon, J. C. Bansal, R. Tiwari, and H. Sharma, "Accelerating artificial bee colony algorithm with adaptive local search," Memetic Comput., vol. 7, no. 3, pp. 215–230, 2015.
[35] W.-F. Gao, L.-L. Huang, S.-Y. Liu, and C. Dai, "Artificial bee colony algorithm based on information learning," IEEE Trans. Cybern., vol. 45, no. 12, pp. 2827–2839, Dec. 2015.
[36] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems," IEEE Trans. Evol. Comput., vol. 10, no. 6, pp. 646–657, Dec. 2006.



[37] M. Yang, C. Li, Z. Cai, and J. Guan, "Differential evolution with auto-enhanced population diversity," IEEE Trans. Cybern., vol. 45, no. 2, pp. 302–315, Feb. 2015.
[38] S.-M. Guo and C.-C. Yang, "Enhancing differential evolution utilizing eigenvector-based crossover operator," IEEE Trans. Evol. Comput., vol. 19, no. 1, pp. 31–49, Feb. 2015.
[39] J. Derrac, S. García, D. Molina, and F. Herrera, "A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms," Swarm Evol. Comput., vol. 1, no. 1, pp. 3–18, 2011.

Ngaam J. Cheung is currently pursuing the Ph.D. degree with Shanghai Jiao Tong University, Shanghai, China. He is a Visiting Scholar with the University of Chicago, Chicago, IL, USA. His current research interests include protein folding pathways, structure prediction, and computational protein design via swarm intelligence, individualized modeling methods for solving biological problems and handling big data in bioinformatics, and fuzzy logics for inferring statistical information from big data.

Xue-Ming Ding received the Ph.D. degree in control science and engineering from the University of Science and Technology of China, Hefei, China, in 2005. He is currently an Associate Professor with the University of Shanghai for Science and Technology, Shanghai, China. His current research interests include system identification, embedded system, and motor control.

Hong-Bin Shen received the Ph.D. degree from Shanghai Jiao Tong University, Shanghai, China, in 2007. He was a Post-Doctoral Research Fellow at Harvard Medical School, Boston, MA, USA, from 2007 to 2008, and a Visiting Professor at the University of Michigan, Ann Arbor, MI, USA, in 2012. He is a Professor at the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University. His current research interests include pattern recognition, bioinformatics, and image processing. He has published over 80 papers and constructed 20 bioinformatics online servers in the above areas. Dr. Shen serves as an editorial member of several international journals.
