BioSystems 132 (2015) 43–53


A self adaptive hybrid enhanced artificial bee colony algorithm for continuous optimization problems Hai Shan, Toshiyuki Yasuda, Kazuhiro Ohkura * Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-852, Japan

ARTICLE INFO

Article history: Received 10 November 2014; Received in revised form 12 April 2015; Accepted 7 May 2015; Available online 14 May 2015.

ABSTRACT

The artificial bee colony (ABC) algorithm is one of the most popular swarm intelligence algorithms, inspired by the foraging behavior of honeybee colonies. To improve its convergence ability and the speed of finding the best solution, and to control the balance between exploration and exploitation, we propose a self adaptive hybrid enhanced ABC algorithm in this paper. To evaluate the performance of the standard ABC, best-so-far ABC (BsfABC), incremental ABC (IABC), and the proposed ABC algorithms, we implemented numerical optimization experiments based on the IEEE Congress on Evolutionary Computation (CEC) 2014 test suite. Our experimental results show the comparative performance of the standard ABC, BsfABC, IABC, and the proposed ABC algorithms. According to the results, we conclude that the proposed ABC algorithm is competitive with state-of-the-art modified ABC algorithms such as the BsfABC and IABC algorithms on the benchmark problems defined by the CEC 2014 test suite with dimension sizes of 10, 30, and 50. © 2015 Elsevier Ireland Ltd. All rights reserved.

Keywords: Artificial bee colony algorithm Swarm intelligence Continuous optimization problems Self adaptive mechanism CEC 2014 test suite

1. Introduction

Optimization is an applied science that determines the best values of given parameters for a given problem. The aim of optimization is to obtain the parameter values that enable an objective function to generate the minimum or maximum value (Civicioglu and Besdok, 2013). In recent years, various novel optimization algorithms have been proposed to solve real-parameter optimization problems, including those of the IEEE Congress on Evolutionary Computation (CEC) 2005 and CEC 2013 test suites (Liang et al., 2013a). Based on the definition of, and comments on, the CEC 2013 test suite, the CEC 2014 benchmark problems were developed (Liang et al., 2013b). In the CEC 2014 test suite, hybrid and composition functions were additionally defined as complex optimization problems. The CEC 2014 test suite is an invaluable resource that includes 30 benchmark functions without making use of surrogates or meta-models. To solve these more complex optimization problems, an effective and efficient swarm intelligence (SI) based or evolutionary optimization algorithm is required. Over the past decade, swarm intelligence (SI), a discipline of artificial intelligence, has attracted the interest of

* Corresponding author. Tel.: +81 82 424 7550. E-mail addresses: [email protected] (H. Shan), [email protected] (T. Yasuda), [email protected] (K. Ohkura). http://dx.doi.org/10.1016/j.biosystems.2015.05.002 0303-2647/© 2015 Elsevier Ireland Ltd. All rights reserved.

many research scientists in related fields. Bonabeau et al. (1999) defined SI as "any attempt to design algorithms or distributed problem-solving devices inspired by the collective behavior of social insect colonies and other animal societies". SI-based algorithms include particle swarm optimization (PSO) (Kennedy and Eberhart, 1995), ant colony optimization (Dorigo et al., 2006), artificial bee colony (ABC) (Karaboga, 2005), and cuckoo search (CS) (Yang and Deb, 2009). The most widely used evolutionary algorithms are the genetic algorithm (GA) (Holland, 1975), evolution strategy (ES) (Beyer, 2001), and differential evolution (DE) (Storn and Price, 1997). A recent study showed that the ABC algorithm performs significantly better than, or at least comparably to, other SI algorithms (Civicioglu and Besdok, 2013; Karaboga and Akay, 2009a). The ABC algorithm was introduced by Karaboga (2005) as a technical report. Its performance was initially measured using benchmark optimization functions (Karaboga and Basturk, 2007, 2008). The ABC algorithm has been applied to several fields in various ways (Karaboga and Akay, 2009b; Karaboga et al., 2014), such as training neural networks (Karaboga and Akay, 2007), protein structure prediction (Benitez and Lopes, 2010), sensor deployment (Udgata et al., 2009), wireless sensor networks (Okdem et al., 2011), the redundancy allocation problem (Yeh and Hsieh, 2011), engineering design optimization (Akay and Karaboga, 2012a), data mining (Celik et al., 2011), and job shop scheduling (Yin et al., 2011).


The ABC algorithm is superior to other SI algorithms in terms of its simplicity, flexibility, and robustness. Karaboga and Akay (2009a) implemented comparison experiments whose results showed that the performance of the ABC algorithm was better than or similar to that of the GA, DE, and PSO algorithms. In addition to these advantageous properties, the ABC algorithm requires fewer control parameters, so combining it with other algorithms is easier. Given its flexibility, the ABC algorithm has been revised in many recent studies. For example, Alatas (2010) proposed a chaotic ABC algorithm, in which chaotic maps for parameters adapted from the standard ABC algorithm were introduced to improve its convergence performance. Zhu and Kwong (2010) proposed a gbest-guided ABC algorithm that incorporates the information of the global best solution into the solution search equation to improve exploitation. Gao and Liu (2012) proposed a modified ABC (MABC) algorithm that uses a modified solution search equation with chaotic initialization; further, MABC excludes the onlooker bee and scout bee phases. Akay and Karaboga (2012b) proposed a modified ABC algorithm to overcome the slow convergence speed of the standard ABC algorithm. Banharnsakun et al. (2011) proposed the BsfABC algorithm, which exploits the best solution found so far; the best solution is used to modify the onlooker bee step, leaving the employed bee step unchanged. Aydin et al. (2011) proposed the incremental ABC (IABC) algorithm, which integrates population growth and local search with the standard ABC algorithm. However, along with the advantages of these improved versions of ABC, a few disadvantages still exist: slow convergence speed for some unimodal problems, easy entrapment in local optima for some complex multimodal problems (Karaboga and Akay, 2009a), and low exploitation ability.
Richards and Ventura (2004) found that uniformity of the initial population plays a more important role in higher-dimensional problems (up to 50 dimensions); in contrast, others claim that uniform initialization methods lose their effectiveness in problems with dimensionality larger than 12, so uniform initialization of the population is not very effective for the larger dimension sizes used with the ABC algorithm. To overcome these disadvantages, we propose a self adaptive hybrid enhanced ABC algorithm inspired by levy flight (Brown et al., 2007; Pavlyukevich, 2007), with a self adaptive mechanism for the employed bee and onlooker bee steps, combined with the DE and PSO algorithms, and finally with chaotic opposition-based learning (OBL) introduced in the scout bee step (Tizhoosh, 2005). We implemented comparative experiments and set up parameters for our proposed ABC algorithm to demonstrate its efficacy; more specifically, we used the CEC 2014 test suite benchmark problems to show the performance of the proposed ABC algorithm. Finally, we implemented comparative experiments using our proposed ABC, the standard ABC, and state-of-the-art ABC algorithms such as the BsfABC and IABC algorithms. In addition to this introductory section, the remainder of this paper is organized as follows. The ABC algorithm is introduced in Section 2. In Section 3, we describe our proposed ABC algorithm. The experimental setup and results are discussed in Section 4, and we conclude the paper in Section 5.

2. The artificial bee colony (ABC) algorithm

The ABC algorithm is a swarm-based meta-heuristic algorithm introduced by Karaboga (2005) that has been successfully applied to numerical optimization problems (Karaboga and Basturk, 2007, 2008; Karaboga and Akay, 2007; Akay and Karaboga, 2012b). In the ABC algorithm, the artificial bee colony comprises three kinds of bees: employed bees, onlooker bees, and scout bees.
Employed bees search for food source sites by modifying the site in their memory, evaluate the nectar amount of each new source, and memorize the more productive site through a selection process; these bees share information related to the quality of the food sources they exploit in the "dance area". Onlooker bees wait in the hive and select a food source to exploit based on the information coming from the employed bees; as such, more beneficial sources have a higher probability of being selected by onlookers. Onlooker bees choose food sources through probabilistic selection based on the given information and then modify these sources. To decide whether a source is to be abandoned, counters updated during the search are used: if the value of a counter is greater than the control number of the ABC algorithm, known as the limit, the source associated with the counter is assumed to be exhausted and is abandoned. When a food source is abandoned, a new food source is randomly selected by a scout bee to replace it. The main steps of the algorithm are given below:

1) Initialize the population of solutions x_{ij} with

x_{ij} = x_{\min,j} + \text{rand}[0,1]\,(x_{\max,j} - x_{\min,j})   (1)

where i \in \{1, 2, ..., SN\} and j \in \{1, 2, ..., D\}, SN is the number of food sources, and D is the dimension size.
2) Evaluate the population.
3) Initialize cycle to 1.
4) Produce new solutions v_i for the employed bees by using the x_{ij} mentioned in Eq. (1):

v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj})   (2)

where \phi_{ij} is a uniformly distributed random number in the range [-1, 1], i, k \in \{1, 2, ..., SN\} are randomly selected indexes with k different from i, and j \in \{1, 2, ..., D\} is a randomly selected index. The fitness is calculated as

fit_i = \begin{cases} \dfrac{1}{1 + f_i}, & f_i \geq 0 \\ 1 + |f_i|, & f_i < 0 \end{cases}   (3)

then the solutions are evaluated according to the fitness value fit_i in the minimization problem, where f_i is the cost value of solution v_i.
5) Apply the greedy selection process for the employed bees.
6) If the solution does not improve, add 1 to its trial counter; otherwise, set the trial counter to 0.
7) Calculate probability values p_i for the solutions using Eq. (4):

p_i = \dfrac{fit_i}{\sum_{n=1}^{SN} fit_n}   (4)

where fit_i is the fitness value of solution i.
8) Produce new solutions for the onlooker bees from solutions x_i, which are selected depending on p_i, then evaluate them.
9) Apply the greedy selection process for the onlooker bees.
10) If the solution does not improve, add 1 to its trial counter; otherwise, set the trial counter to 0.
11) Determine the abandoned solution, if one exists, through the limit for the scout, and replace it with a new random solution using Eq. (1).
12) Memorize the best solution achieved so far.
13) Add 1 to cycle.
14) Repeat steps (4)-(13) until cycle reaches a predefined maximum cycle number (MCN).
To enhance the exploitation and exploration processes, the best-so-far ABC (BsfABC) algorithm was proposed by Banharnsakun et al. (2011). In the BsfABC algorithm, three major changes were introduced. All onlooker bees use the information from all employed bees to make a decision on a new candidate food


source. Thus, the onlookers can compare information from all candidate sources and are able to select the best-so-far position, which will lead toward the optimal solution. The new method used to calculate a candidate food source is shown as

v_{id} = x_{ij} + \phi \, f_b \, (x_{ij} - x_{bj})   (5)

where v_{id} is the new candidate food source for onlooker bee position i and dimension d, d = 1, 2, ..., D; D is the dimension size; x_{ij} is the selected food source position i in a selected dimension j; \phi is a random number between -1 and 1; f_b is the fitness value of the best food source so far; and x_{bj} is the best-so-far food source in the selected dimension j. A global search ability for the scout bee was introduced to resolve the problem of trapping in local optima (Banharnsakun et al., 2011):

v_{ij} = x_{ij} + \phi_{ij} \left[ v_{max} - \frac{iteration}{MCN}\,(v_{max} - v_{min}) \right] x_{ij}   (6)

where v_{ij} is a new feasible solution for a scout bee, modified from the current position of an abandoned food source (x_{ij}), and \phi_{ij} is a random number in [-1, 1]. MCN is the maximum cycle number. The values v_{max} and v_{min} represent the maximum and minimum percentages of the position adjustment for the scout bee and are fixed to 1 and 0.2, respectively. Aydin et al. (2011) proposed another modified ABC algorithm, named the incremental ABC (IABC) algorithm. The IABC algorithm begins with few food sources; new food sources are placed with their locations biased towards the location of the best-so-far solution. This is implemented as shown:

x'_{new,j} = x_{new,j} + \text{rand}[0,1]\,(x_{gbest,j} - x_{new,j})   (7)

where x_{new,j} is the randomly generated new food source location, x'_{new,j} is the updated location of the new food source, and x_{gbest,j} refers to the best-so-far food source location. Another modification is applied in the scout bee step of IABC. The difference is a replacement factor parameter, R_{factor}, that controls how close the new food source locations will be to the best-so-far food source. This modified rule is shown in Eq. (8):

x'_{new,j} = x_{new,j} + R_{factor}\,(x_{gbest,j} - x_{new,j})   (8)

The other difference between the standard ABC algorithm and IABC is that employed bees search in the vicinity of x_{gbest,j} instead of a randomly selected food source. This modification boosts the exploitation behavior of the algorithm and helps it converge quickly towards good solutions. The standard ABC, IABC, and BsfABC algorithms are introduced for comparison with the proposed ABC algorithm described in the next section, and the comparative performance of these algorithms will be evaluated in the CEC 2014 test suite experiments.

3. Proposed artificial bee colony (ABC) algorithm

The ABC algorithm has characteristics that make it attractive among SI algorithms. It has few control parameters (population size, limit, and MCN); it is simple, flexible, and robust; it has a fast convergence speed; and it is easy to combine with other SI algorithms. However, it still has some drawbacks. The ABC algorithm produces a candidate solution from its parent by a simple operation based on taking the difference of randomly determined parts of the parent and a randomly chosen solution from the population. Consequently, it does not achieve good convergence from the initialization step, it easily becomes trapped in local optima during the food search step, and it does not control the balance between exploration and exploitation well during the employed bee and onlooker bee steps. Exploration and exploitation (Rashedi et al., 2009) are two common aspects of population-based heuristic algorithms: exploration is the ability to expand the search space, and exploitation is the ability to find the optima around a good solution. As mentioned above, exploration and exploitation play key roles in SI algorithms; they coexist in the evolutionary process of algorithms such as PSO, DE, and ABC, yet at the same time they contradict each other.
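Before describing the modifications, it may help to see the baseline cycle they act on. The following is an illustrative, minimal Python sketch of the standard ABC cycle from Section 2 (steps 1-14, Eqs. (1)-(4)); it is not the authors' implementation, and the function name `abc_minimize`, the parameter defaults, and the sphere objective used in the example are our own choices.

```python
import random

def abc_minimize(f, dim, bounds, sn=20, limit=50, mcn=200, seed=1):
    """Minimal sketch of the standard ABC cycle (steps 1-14, Eqs. (1)-(4))."""
    rng = random.Random(seed)
    lo, hi = bounds

    def new_source():
        # Eq. (1): x_ij = x_min,j + rand[0,1] * (x_max,j - x_min,j)
        return [lo + rng.random() * (hi - lo) for _ in range(dim)]

    def fitness(cost):
        # Eq. (3): fit_i = 1/(1+f_i) if f_i >= 0, else 1 + |f_i|
        return 1.0 / (1.0 + cost) if cost >= 0 else 1.0 + abs(cost)

    xs = [new_source() for _ in range(sn)]      # food sources
    costs = [f(x) for x in xs]
    trials = [0] * sn                           # abandonment counters
    best = min(xs, key=f)

    def neighbour(i):
        # Eq. (2): v_ij = x_ij + phi_ij * (x_ij - x_kj), k != i, one random j
        k = rng.choice([n for n in range(sn) if n != i])
        j = rng.randrange(dim)
        v = xs[i][:]
        v[j] = xs[i][j] + rng.uniform(-1.0, 1.0) * (xs[i][j] - xs[k][j])
        return v

    def greedy(i, v):
        # greedy selection; update the trial counter (steps 5-6 and 9-10)
        c = f(v)
        if c < costs[i]:
            xs[i], costs[i], trials[i] = v, c, 0
        else:
            trials[i] += 1

    for _ in range(mcn):
        for i in range(sn):                     # employed bee phase
            greedy(i, neighbour(i))
        fits = [fitness(c) for c in costs]
        total = sum(fits)
        for _ in range(sn):                     # onlooker bee phase
            r, acc, i = rng.random() * total, 0.0, 0
            for n, ft in enumerate(fits):       # Eq. (4): roulette wheel
                acc += ft
                if acc >= r:
                    i = n
                    break
            greedy(i, neighbour(i))
        for i in range(sn):                     # scout bee phase (step 11)
            if trials[i] > limit:
                xs[i] = new_source()
                costs[i], trials[i] = f(xs[i]), 0
        best = min(xs + [best], key=f)          # step 12: memorize the best
    return best
```

For example, on a 5-dimensional sphere function, `abc_minimize(lambda x: sum(t * t for t in x), 5, (-100.0, 100.0))` steadily reduces the function error over the cycles.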
In the proposed ABC algorithm, we introduce the following modifications: levy flight initialization, a self adaptive mechanism for the employed bee and onlooker bee steps, and chaotic opposition-based learning (OBL) in the scout bee step, improving the convergence performance of the standard ABC algorithm on the benchmark optimization problems. Population initialization is a crucial step in SI algorithms because it can affect the convergence speed and the quality of the final solution. If no information about the solution is available, then random initialization is the most commonly used method for generating an initial population. A local search method is commonly incorporated into a population-based search algorithm; we introduced a levy flight distribution as a local search method and combined it with the ABC algorithm. The main search algorithm explores the most promising regions of the search space, while the exploitation ability is improved by using the levy flight method to inspect the surroundings of initial solutions. In the past, the flight behaviors of animals and insects that exhibit important properties

Table 1 CEC 2014 test functions.

F1   Rotated High Conditioned Elliptic       F16  Shifted and Rotated Expanded Scaffer's F6
F2   Rotated Bent Cigar                      F17  Hybrid Function 1 (N = 3)
F3   Rotated Discus                          F18  Hybrid Function 2 (N = 3)
F4   Shifted and Rotated Rosenbrock          F19  Hybrid Function 3 (N = 4)
F5   Shifted and Rotated Ackley              F20  Hybrid Function 4 (N = 4)
F6   Shifted and Rotated Weierstrass         F21  Hybrid Function 5 (N = 5)
F7   Shifted and Rotated Griewank            F22  Hybrid Function 6 (N = 5)
F8   Shifted Rastrigin                       F23  Composition Function 1 (N = 5)
F9   Shifted and Rotated Rastrigin           F24  Composition Function 2 (N = 3)
F10  Shifted Schwefel                        F25  Composition Function 3 (N = 3)
F11  Shifted and Rotated Schwefel            F26  Composition Function 4 (N = 5)
F12  Shifted and Rotated Katsuura            F27  Composition Function 5 (N = 5)
F13  Shifted and Rotated HappyCat            F28  Composition Function 6 (N = 5)
F14  Shifted and Rotated HGBat               F29  Composition Function 7 (N = 3)
F15  Expanded F4 plus F7                     F30  Composition Function 8 (N = 3)


of levy flight have been analyzed in various studies. This levy flight behavior has been applied to optimization and search algorithms, and reported results show its importance in the field of solution search algorithms (Yang and Deb, 2009, 2013). Recently, Yang proposed new meta-heuristic algorithms, such as CS, that use levy flight. Levy flight is a random walk in which the step lengths have a heavy-tailed probability distribution. Random step lengths are drawn from a levy distribution (Yang, 2010; Viswanathan et al., 1996) as

L(s) \sim |s|^{-1-\beta}   (9)

where \beta (0 < \beta \leq 2) is an index and s is the step length. Initialization for our proposed ABC using levy flight is calculated as shown in Eq. (10):

x^{t+1}_{ij} = x^{t}_{ij} + \alpha \cdot levy(\beta)   (10)

where i \in \{1, 2, ..., SN\} and j \in \{1, 2, ..., D\} are randomly selected indexes, t is the iteration number (set to 50), and \alpha is a uniformly distributed number selected from U(0, 1). The levy flight step is calculated as

levy(\beta) \sim 0.01 \, \frac{u}{|v|^{1/\beta}} \, (x^{t}_{i,j} - x^{t}_{best,j})   (11)

where x_{best,j} is the best solution found so far, and u and v are drawn from normal distributions as

u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2)   (12)

\sigma_u = \left( \frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\beta\,\Gamma[(1+\beta)/2]\,2^{(\beta-1)/2}} \right)^{1/\beta}, \quad \sigma_v = 1   (13)

To achieve good optimization performance with higher convergence speed and without being trapped in local optima, we introduced a self-adaptive mechanism that changes the search range with the cycle number, and then combined it with DE to improve the performance of the employed bees. In the standard ABC algorithm, a random perturbation is added to the current solution to produce a new solution; this perturbation is weighted by \phi_{ij}, a uniformly distributed real random number selected from [-1, 1]. A low value of \phi_{ij} results in small steps towards the optimal value and therefore slow convergence, while a high value of \phi_{ij} accelerates the search but reduces the exploration ability of the perturbation process. We therefore used a self adaptive mechanism to balance the exploration ability and the convergence speed of the algorithm for the employed bees. The self adaptive ABC approach has a very simple structure and is easy to implement. \phi_{ij} is changed with the cycle number according to a random value rand in the range [0, 1] in the food searching process of the employed bee; \phi_{ij} is determined as

\phi_{ij} = \begin{cases} e^{-3\,cycle/(25\,MCN)}, & 0 \leq rand \leq 0.5 \\ e^{3\,cycle/(25\,MCN)}, & 0.5 < rand \leq 1 \end{cases}   (14)

The DE algorithm has proved to be a simple yet powerful and efficient population-based algorithm for many global optimization problems. To further improve the performance of the DE algorithm, researchers have suggested different schemes of DE (Swagatam and Suganthan, 2011). Like other evolutionary algorithms, DE relies on an initial random population generation and then improves its population via mutation, crossover, and selection processes. The DE equation is shown below as Eq. (15). The food source searching process in the ABC algorithm is similar to the mutation process of DE, and in DE the best solutions in the current population are very useful for improving convergence performance. One scheme of the DE mutations, "DE/best/1", can effectively maintain population diversity. We therefore combined the "DE/best/1" mutation strategy with the food searching process of the ABC algorithm to produce a new search equation (shown below as Eq. (16)) and improve the convergence ability.

Table 2 Parameter adjustment experiments. Entries list the dimension sizes (10/30/50) at which each limit/NP setting was competitive; "–" denotes worse performance.

Limit \ NP   20       50        100        150       200       300
50           –/–/–    –/–/–     –/–/–      –/–/–     –/–/–     –/–/–
100          –/–/–    –/–/–     10/30/–    10/–/50   –/–/–     –/–/–
150          –/–/–    10/30/–   10/30/–    10/–/50   10/–/50   –/–/–
250          –/–/–    10/30/–   10/30/50   –/30/50   –/30/50   –/–/–
400          –/–/–    10/–/–    –/30/50    –/–/50    –/–/–     –/–/–

v_{i,G} = x_{best,G} + F \cdot (x_{r_{i1},G} - x_{r_{i2},G})   (15)

where i \in \{1, 2, ..., SN\} is a randomly selected index, NP is the population number, G is the generation number, v_{i,G} is the donor vector, and r_{i1} and r_{i2} are random indexes chosen from the range [1, NP].

v_{ij} = x_{best,j} + \phi_{ij}\,(x_{ij} - x_{kj})   (16)

where i, k \in \{1, 2, ..., SN\} are randomly selected indexes with k different from i, j \in \{1, 2, ..., D\} is a randomly selected index, and \phi_{ij} is the parameter given in Eq. (14). In order to increase the probability of lower-fitness individuals being selected, we introduce a self adaptive mechanism into P_i. The value of P_i varies with the cycle number; it also depends on the summed fitness and the maximum fitness. P_i is given in Eq. (17):

P_i = e^{-0.15\,cycle/MCN}\,\frac{fit_i}{fit_{max}} + \left(1 - e^{-0.15\,cycle/MCN}\right)\frac{fit_i}{\sum_{n=1}^{SN} fit_n}   (17)

where MCN is the maximum cycle number, fit_i is the fitness value of solution i, and fit_{max} is the maximum fitness value.
To the best of our knowledge, the search ability of the ABC algorithm is good at exploration but poor at exploitation. Specifically, we can view employed bees as focused on exploration and onlooker bees as focused on exploitation: employed bees explore new food sources and send information to onlooker bees, and onlooker bees exploit the food sources explored by the employed bees. In the standard ABC algorithm, much time is required to find the food source owing to poor exploitation ability and low convergence speed. To improve the exploitation ability of the algorithm, we incorporated PSO into the ABC algorithm. PSO is based on the simulation of simplified social animal behaviors. The equation

Table 3 Comparison results of mean values of function error of the modified ABC algorithms versus the standard ABC algorithm on 10D.

Standard ABC vs.   LF   DE   PSO   OBL   Pi
+                   7   11    10    10    9
≈                   5    9     6     7   16
–                  18   10    14    13    5

Standard ABC vs.   LD   LP   LDP   DC   PC
+                  10   10    13   10   12
≈                  11   16    10   10   11
–                   9    4     7   10    7

Standard ABC vs.   DP   DCP   CP   DPP   DPPC
+                   9    9     7    11    11
≈                  10   11    17    10    11
–                  11   10     6     9     8
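To make the operator definitions concrete, here is a hedged Python sketch of the individual modifications compared in Table 3, following our reading of Eqs. (11), (13), (14), (16), and (17). All function and parameter names are our own, and the sign of the exponent in each branch of Eq. (14) is an assumption where the source was ambiguous.

```python
import math
import random

rng = random.Random(0)

def sigma_u(beta):
    # Eq. (13): Mantegna's scale for generating levy-distributed steps
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_step(beta, x_ij, x_best_j):
    # Eqs. (11)-(13): step ~ 0.01 * (u / |v|^(1/beta)) * (x_ij - x_best_j)
    u = rng.gauss(0.0, sigma_u(beta))
    v = rng.gauss(0.0, 1.0)
    return 0.01 * (u / abs(v) ** (1 / beta)) * (x_ij - x_best_j)

def phi(cycle, mcn):
    # Eq. (14): self-adaptive weight; the exponent sign switches on a
    # coin flip (our reading of the piecewise rule)
    e = 3.0 * cycle / (25.0 * mcn)
    return math.exp(-e) if rng.random() <= 0.5 else math.exp(e)

def employed_search(x_i, x_k, x_best, j, cycle, mcn):
    # Eq. (16): DE/best/1-style search around the best-so-far solution
    v = list(x_i)
    v[j] = x_best[j] + phi(cycle, mcn) * (x_i[j] - x_k[j])
    return v

def selection_prob(fits, i, cycle, mcn):
    # Eq. (17): time-varying blend of fit_i/fit_max and fit_i/sum(fit)
    w = math.exp(-0.15 * cycle / mcn)
    return w * fits[i] / max(fits) + (1 - w) * fits[i] / sum(fits)
```

In this sketch, `employed_search` replaces Eq. (2) of the standard algorithm, and `selection_prob` replaces the roulette-wheel probability of Eq. (4).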

Table 4 Comparison of mean values of function error of our proposed ABC algorithm versus the standard ABC, BsfABC, and IABC algorithms.

Proposed ABC on 10D vs.   ABC   BsfABC   IABC
+                          14       21     17
≈                           5        6      9
–                          11        3      4

Proposed ABC on 30D vs.   ABC   BsfABC   IABC
+                          12       18     14
≈                          11        8     10
–                           7        4      6

Proposed ABC on 50D vs.   ABC   BsfABC   IABC
+                          10       16     11
≈                          11       10     13
–                           9        4      6
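The +/≈/– entries in Tables 3 and 4 come from pairwise Wilcoxon rank-sum tests over the 25 runs per function (see Section 4). As a hedged illustration of how such a sign can be computed, the following sketch uses the normal approximation to the rank-sum statistic; the helper names and the two-sided z-threshold of 1.96 (5% level) are our choices, not necessarily the authors' exact procedure.

```python
import math

def _avg_ranks(values):
    # 1-based ranks with ties replaced by the group average rank
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_sign(a, b, z_crit=1.96):
    # '+': sample a has significantly lower errors (better) than b,
    # '-': significantly higher, '≈': no significant difference
    n1, n2 = len(a), len(b)
    ranks = _avg_ranks(list(a) + list(b))
    r1 = sum(ranks[:n1])
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sigma
    if z < -z_crit:
        return "+"
    if z > z_crit:
        return "-"
    return "≈"
```

For two lists of 25 final error values, `rank_sum_sign(proposed_errors, baseline_errors)` yields one table cell.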

governing PSO is shown as Eq. (18) below. We modified the onlooker bee search solution by taking advantage of the search mechanism of PSO; our modified search equation for the onlooker bees is shown as Eq. (19).

v_{i,d} = \omega v_{i,d} + c_1 r_1 (p_{i,d} - x_{i,d}) + c_2 r_2 (p_{g,d} - x_{i,d})   (18)

where d \in \{1, 2, ..., D\}, i \in \{1, 2, ..., M\}, D is the dimension size, M is the total number of particles in the swarm, \omega is the inertia weight, r_1 and r_2 are random numbers in the range [0, 1], c_1 and c_2 are acceleration coefficients, p_{i,d} is the personal best, and p_{g,d} is the global best.

v_{ij} = x_{ij} + \varphi_{ij}\,(x_{ij} - x_{kj}) + \psi_{ij}\,(x_{best,j} - x_{ij})   (19)

where i, k \in \{1, 2, ..., SN\} are randomly selected indexes with k different from i; j \in \{1, 2, ..., D\} is a randomly selected index; x_{best,j} is the jth element of the best solution so far; and \varphi_{ij} \in [-1, 1] and \psi_{ij} \in [0, 1.5] are uniformly distributed random numbers.

The concept of OBL was introduced by Tizhoosh (2005) and has been applied to accelerate reinforcement learning and backpropagation learning in neural networks (Ventresca and Tizhoosh, 2006). The main idea behind OBL is to improve the chance of starting with a closer (fitter) solution by checking the opposite solution simultaneously; the closer of the two (the guess or the opposite guess) can then be chosen as the initial solution. In fact, according to probability theory, in 50% of cases the guess is farther from the solution than the opposite guess, and in these cases starting with the opposite guess accelerates convergence. According to Rahnamayan et al. (2008), introducing OBL into DE improved its convergence performance. Therefore, to accelerate the convergence speed and prevent sticking at a local solution, we introduced an initialization approach for the scout bees that employs chaotic systems (Alatas, 2010) and the OBL method. Here, a sinusoidal iterator is selected as the chaotic map, and its equation is defined as

ch_{k,j} = \sin(\pi\,ch_{k-1,j})   (20)

where ch_k \in (0, 1), k = 1, 2, ..., Max, and j = 1, 2, ..., D. The initialization population for the scout bees is

x_{ij} = x_{min,j} + ch_{k,j}\,(x_{max,j} - x_{min,j})   (21)

and the chaotic OBL equation is

ox_{ij} = x_{min,j} + x_{max,j} - x_{ij}   (22)

where ox indicates the opposition-based population. We selected SN individuals from the set \{x(SN) \cup ox(SN)\} as the initial scout bee population.

For our proposed ABC algorithm, we specifically modified steps 1, 4, 7, 8, and 11 of the standard ABC algorithm: Eq. (10) substitutes for Eq. (1) in step 1, Eqs. (14) and (16) replace Eq. (2) in step 4, Eq. (17) substitutes for Eq. (4) in step 7, Eq. (19) substitutes for Eq. (2) in step 8, and Eqs. (20)-(22) replace Eq. (1) in step 11.

Table 5 Mean values of function error for the proposed (Pro.) ABC, ABC, BsfABC and IABC algorithms, and the signs for Pro. ABC compared with the ABC, BsfABC and IABC algorithms in 10D.

Table 6 Mean values of function error for the proposed (Pro.) ABC, ABC, BsfABC and IABC algorithms, and the signs for Pro. ABC compared with the ABC, BsfABC and IABC algorithms in 30D.

Func. No.   Pro. ABC   ABC        Sign   BsfABC     Sign   IABC       Sign
1           4.35e+04   6.68e+04   ≈      5.87e+04   +      4.58e+04   ≈
2           2.66e+01   1.21e+01   –      4.56e+00   –      2.17e+01   ≈
3           6.75e+01   3.34e+01   –      2.73e+02   +      1.11e+02   +
4           4.41e-03   1.00e-02   +      8.04e-03   +      6.53e-03   +
5           1.35e+01   8.44e+00   ≈      1.83e+01   ≈      1.22e+01   ≈
6           8.87e-01   1.05e+00   ≈      1.93e+00   +      1.46e+00   +
7           5.94e-04   1.42e-03   ≈      4.95e-04   ≈      1.31e-02   ≈
8           0.00e+00   0.00e+00   ≈      0.00e+00   ≈      0.00e+00   ≈
9           3.12e+00   4.47e+00   +      7.44e+00   +      6.03e+00   +
10          0.00e+00   3.01e-01   ≈      3.80e-02   +      0.00e+00   ≈
11          5.84e+01   9.04e+01   +      6.52e+02   +      7.74e+01   +
12          1.30e-01   1.02e-01   –      1.09e-01   –      1.10e-01   –
13          8.21e-02   8.51e-02   ≈      7.21e-02   ≈      1.30e-01   +
14          6.42e-02   1.11e-01   +      1.06e-01   +      1.20e-01   +
15          5.19e-01   4.01e-01   ≈      7.25e-01   +      6.17e-01   +
16          1.38e+00   1.68e+00   +      1.87e+00   +      1.65e+00   +
17          2.95e+04   6.33e+04   +      4.25e+04   +      3.92e+04   ≈
18          5.55e+01   1.07e+02   +      8.53e+01   +      1.03e+02   +
19          2.84e-01   1.86e-01   ≈      2.34e-01   ≈      2.76e-01   ≈
20          2.58e+01   4.20e+01   +      9.32e+01   +      2.36e+01   ≈
21          9.21e+02   3.13e+03   +      2.46e+03   +      1.83e+03   +
22          6.31e-02   1.31e-01   +      1.46e+00   +      4.76e-01   +
23          1.50e+02   1.05e+02   ≈      1.05e+02   ≈      5.64e+00   –
24          1.10e+02   1.10e+02   ≈      1.17e+02   +      1.15e+02   +
25          1.17e+02   1.25e+02   +      1.34e+02   +      1.27e+02   +
26          9.50e+01   9.99e+01   +      1.00e+02   +      9.82e+01   +
27          4.54e+00   6.24e+00   +      1.21e+01   +      5.73e+00   +
28          3.45e+02   3.32e+02   –      3.66e+02   +      3.28e+02   –
29          2.47e+02   2.47e+02   –      2.31e+02   –      2.49e+02   –
30          4.68e+02   5.15e+02   +      4.99e+02   +      5.20e+02   +

Func. No.   Pro. ABC   ABC        Sign   BsfABC     Sign   IABC       Sign
1           6.66e+06   2.42e+06   –      7.14e+06   +      3.01e+06   –
2           7.59e+00   1.15e+01   +      1.39e+01   +      3.20e+01   +
3           2.42e+02   9.40e+01   –      2.18e+02   ≈      3.02e+02   ≈
4           2.08e-01   2.69e-01   +      2.48e-01   +      2.25e-01   +
5           2.02e+01   2.02e+01   ≈      2.00e+01   ≈      2.02e+01   ≈
6           1.30e+01   1.10e+01   –      1.50e+01   +      1.26e+01   ≈
7           0.00e+00   0.00e+00   ≈      0.00e+00   ≈      0.00e+00   ≈
8           0.00e+00   0.00e+00   ≈      0.00e+00   ≈      0.00e+00   ≈
9           3.99e+01   6.32e+01   +      4.85e+01   +      5.52e+01   +
10          0.00e+00   1.53e-01   +      3.21e-02   +      7.21e-01   +
11          1.70e+03   1.54e+04   +      2.09e+03   +      1.49e+03   –
12          2.68e-01   1.89e-01   –      1.80e-01   –      1.95e-01   –
13          2.05e-01   1.94e-01   ≈      1.81e-01   –      2.79e-01   +
14          1.53e-01   1.72e-01   +      1.65e-01   +      1.82e-02   +
15          6.22e+00   5.83e+00   ≈      9.53e+00   +      8.03e+00   +
16          9.03e+00   9.32e+00   +      1.00e+01   +      9.61e+00   +
17          1.17e+06   1.25e+06   +      2.99e+06   +      1.47e+06   +
18          9.34e+03   5.21e+02   –      2.76e+02   –      4.44e+03   –
19          5.88e+00   5.62e+00   ≈      6.50e+00   +      6.27e+00   +
20          2.17e+03   2.77e+03   +      2.45e+03   +      3.08e+03   +
21          2.35e+05   8.50e+04   –      3.59e+05   +      1.08e+05   –
22          1.59e+02   1.19e+02   ≈      3.28e+02   +      1.20e+02   ≈
23          3.15e+02   3.15e+02   ≈      3.15e+02   ≈      2.97e+02   –
24          1.79e+02   2.53e+02   +      2.25e+02   +      2.22e+02   +
25          2.06e+02   2.05e+02   ≈      1.95e+02   ≈      1.86e+02   ≈
26          1.00e+02   1.00e+02   ≈      1.00e+02   ≈      1.00e+02   ≈
27          4.07e+02   4.04e+02   ≈      4.07e+02   ≈      3.89e+02   ≈
28          8.09e+02   8.76e+02   +      8.32e+02   +      8.61e+02   +
29          9.95e+02   8.70e+02   –      7.73e+02   –      9.14e+02   ≈
30          1.42e+03   1.58e+03   +      2.43e+03   +      2.06e+03   +


Table 7 Mean values of function error for the proposed (Pro.) ABC, ABC, BsfABC and IABC algorithms, and the signs for Pro. ABC compared with the ABC, BsfABC and IABC algorithms in 50D.

Func. No.   Pro. ABC   ABC        Sign   BsfABC     Sign   IABC       Sign
1           1.27e+07   7.51e+06   –      1.41e+07   ≈      8.53e+06   –
2           9.40e+03   2.30e+02   –      3.00e+02   –      1.18e+02   –
3           5.85e+03   3.55e+03   –      6.40e+03   +      6.63e+03   +
4           5.52e+00   1.60e+01   +      1.13e+01   +      8.32e+00   +
5           2.03e+01   2.03e+01   ≈      2.03e+01   ≈      2.03e+01   ≈
6           2.74e+01   2.81e+01   ≈      3.32e+01   +      2.81e+01   ≈
7           0.00e+00   0.00e+00   ≈      0.00e+00   ≈      0.00e+00   ≈
8           0.00e+00   0.00e+00   ≈      0.00e+00   ≈      0.00e+00   ≈
9           1.12e+02   1.54e+02   +      1.39e+02   +      1.25e+02   +
10          5.92e-03   5.20e-01   +      1.82e+00   +      3.15e-01   +
11          3.34e+03   3.80e+03   +      4.55e+03   +      3.71e+03   +
12          3.33e-01   2.34e-01   –      2.25e-01   –      2.49e-01   –
13          2.61e-01   2.82e-01   ≈      2.70e-01   ≈      3.39e-01   +
14          1.84e-01   2.03e-01   +      2.11e-01   +      2.17e-01   +
15          1.55e+01   1.58e+01   ≈      2.47e+01   +      2.13e+01   +
16          1.79e+01   1.94e+01   +      1.84e+01   ≈      1.89e+01   +
17          3.68e+06   2.58e+06   ≈      5.82e+02   +      2.74e+06   ≈
18          1.03e+04   1.71e+03   –      5.36e+02   –      1.08e+04   ≈
19          1.38e+01   1.94e+01   +      1.64e+01   +      1.45e+01   ≈
20          1.88e+04   1.29e+04   –      2.88e+04   +      1.26e+04   –
21          1.92e+06   1.44e+06   –      3.93e+06   +      1.40e+06   –
22          6.06e+02   5.01e+02   –      8.42e+02   +      5.30e+02   ≈
23          3.44e+02   3.44e+02   ≈      3.44e+02   ≈      3.34e+02   ≈
24          2.56e+02   2.59e+02   ≈      2.57e+02   ≈      2.58e+02   ≈
25          2.12e+02   2.12e+02   ≈      2.12e+02   ≈      2.12e+02   ≈
26          1.00e+02   1.00e+02   ≈      1.00e+02   ≈      1.00e+02   ≈
27          4.15e+02   4.21e+02   +      4.42e+02   +      4.19e+02   ≈
28          1.22e+03   1.47e+03   +      1.28e+03   +      1.38e+03   +
29          1.82e+03   1.11e+03   –      1.38e+03   –      1.55e+03   –
30          9.09e+03   9.28e+03   +      1.02e+04   +      9.23e+03   +
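The onlooker and scout modifications of Section 3 (Eqs. (19)-(22)), whose combined effect the tables above quantify, can be sketched in Python as follows. This is an illustrative reading with our own function names; in particular, keeping the SN fittest of the union is our interpretation of how the SN individuals are selected from {x(SN) ∪ ox(SN)}.

```python
import math
import random

rng = random.Random(3)

def onlooker_search(x_i, x_k, x_best, j):
    # Eq. (19): PSO-influenced onlooker move, phi in [-1,1], psi in [0,1.5]
    phi = rng.uniform(-1.0, 1.0)
    psi = rng.uniform(0.0, 1.5)
    v = list(x_i)
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (x_best[j] - x_i[j])
    return v

def chaotic_obl_population(sn, dim, lo, hi, f):
    # Eqs. (20)-(22): sinusoidal chaotic map plus opposition-based
    # learning, then keep the SN best of the union (our reading)
    ch = [rng.random() for _ in range(dim)]
    pop = []
    for _ in range(sn):
        ch = [math.sin(math.pi * c) for c in ch]     # Eq. (20)
        x = [lo + c * (hi - lo) for c in ch]         # Eq. (21)
        ox = [lo + hi - xi for xi in x]              # Eq. (22)
        pop.extend([x, ox])
    return sorted(pop, key=f)[:sn]
```

For example, `chaotic_obl_population(5, 3, -100.0, 100.0, lambda x: sum(t * t for t in x))` returns five chaotically initialized, opposition-screened scout solutions within the search bounds.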

Fig. 2. Comparative convergence for function F18 on 10D.

4. Experiments

4.1. Experimental setup

The CEC 2014 test suite extends its predecessor, the CEC 2013 test suite. In the CEC 2014 test suite, the previously proposed composition functions are improved and additional hybrid test functions are introduced. There are 30 numerical test functions, all minimization problems, categorized into the following four groups: unimodal functions (F1–F3), simple multimodal functions (F4–F16), hybrid functions (F17–F22) and composition functions (F23–F30). The functions and their names are summarized in Table 1. A detailed description of the CEC 2014 test suite is available in Liang et al. (2013b). All test functions are minimization problems defined as follows: minimize f(x), where x = [x1, x2, ..., xD]^T and D is the number of dimensions. Given o = [o1, o2, ..., oD]^T, the shifted global optimum is distributed randomly in the range [-80, 80]^D. Unlike CEC 2013, each function has its own shift data in CEC 2014. All test functions are shifted to o and are scalable. For convenience, the same search range [-100, 100]^D is defined for all test functions. Also unlike CEC 2013, a different rotation matrix M_i is assigned to each function and to each basic function in CEC 2014.

Fig. 1. Comparative convergence for function F11 on 10D.

In this paper, we evaluated the standard ABC, BsfABC, IABC and our proposed ABC algorithms on all 30 test functions defined in the CEC 2014 test suite, with the parameters selected by comparing

Fig. 3. Comparative convergence for function F30 on 10D.


Fig. 4. Comparative convergence for function F2 on 30D.

Fig. 6. Comparative convergence for function F24 on 30D.

experiments at three dimension sizes, i.e., 10, 30, and 50. The 30 test functions were executed 25 times for each test function at each problem dimension size. The algorithms were terminated when the MCN was reached for function evaluations or when the error value fell below 10^-8. In our experiments, we set the maximum evaluation sizes to 10,000, 30,000, and 50,000 for problem dimension sizes of 10, 30, and 50, respectively. We also performed Wilcoxon's rank sum test with a significance level of 0.05, and conducted comparative experiments for the standard ABC, BsfABC, IABC and the proposed ABC algorithms. We used the C language for our experiments on a Linux system with an Intel Core i3 CPU 540 @ 3.07 GHz x 4 with 64-bit processing.

4.2. Experimental results and discussion

Table 2 shows our parameter adjustment experiment results, with NP representing population size. In this table, "–/–/–" indicates competitive performance of our algorithm on the three dimension sizes 10, 30, and 50, while "–" indicates that its performance was significantly worse than the others on that dimension size. From Table 2, we observe that ABC is not very sensitive to the choice of parameters given much lower or much higher population sizes and limits. We therefore selected a limit of 250 and a population size of 100. Table 3 covers the single modifications of the standard ABC algorithm, namely levy flight (LF) initialization, combination with DE, combination with PSO, chaotic OBL scout initialization and the probability value Pi, together with combinations of several modifications: LF and DE (LD); LF and PSO (LP); LF, DE and PSO (LDP); DE and chaotic OBL (DC); PSO and chaotic OBL (PC); DE and Pi (DP); DE, chaotic OBL and Pi (DCP); chaotic OBL and Pi (CP); DE, PSO and Pi (DPP); and DE, PSO, Pi and chaotic OBL (DPPC). For each, the number of better, similar and worse performances compared to the standard ABC algorithm is shown, using Wilcoxon's rank sum test with a significance level of

Fig. 5. Comparative convergence for function F9 on 30D.

Fig. 7. Comparative convergence for function F4 on 50D.


Fig. 8. Comparative convergence for function F27 on 50D.

Fig. 10. Boxplot of comparative convergence for function F4.

p = 0.05 for the obtained 25 mean values of function error. According to this comparison, LDP is the best combination, PC is second and DPPC is third. Based on these results, we implemented the proposed ABC algorithm, which combines the levy flight, PSO, chaotic OBL, self adaptive DE and Pi modifications. After running comparative experiments on the standard ABC, BsfABC, IABC and our proposed ABC algorithms, we list in Table 4 the numbers of better, similar and worse mean values of function error of these algorithms on the 30 test functions, denoted by the symbols "+", "≈" and "–". Tables 5–7 give the mean values of function error of the standard ABC, BsfABC, IABC and the proposed ABC algorithms for 10,000, 30,000, and 50,000 evaluations at dimension sizes of 10, 30, and 50, respectively. Figs. 1–8 illustrate the convergence performance, as logarithmic values of the mean function error, of the standard ABC, BsfABC, IABC and our proposed ABC algorithms with increasing

function evaluations on dimension sizes of 10 (10D), 30 (30D), and 50 (50D), respectively. According to Figs. 1–3, our proposed ABC algorithm performs better than the standard ABC, BsfABC and IABC algorithms on 10D. The convergence of the proposed ABC on functions F11 (Fig. 1) and F18 (Fig. 2) is much better than that of the standard ABC, BsfABC and IABC algorithms on 10D; the same holds for functions F6 and F9. The convergence on function F30 (Fig. 3) is also better than that of the standard ABC, BsfABC and IABC algorithms on 10D, as it is on functions F4, F9, F10, F14, F16, F17, F20–F22 and F25–F27. All four ABC algorithms achieve the best possible result on function F8, where the mean value of function error reaches zero. For functions F7 and F10, the proposed ABC algorithm almost achieves the best possible mean function error of zero. According to Figs. 4–6, the convergence performance of our proposed ABC algorithm is better than that of the standard ABC, BsfABC and IABC algorithms on 30D. The convergence speed on functions F2

Fig. 9. Boxplot of comparative convergence for function F1.

Fig. 11. Boxplot of comparative convergence for function F5.


Fig. 12. Boxplot of comparative convergence for function F14.

Fig. 14. Boxplot of comparative convergence for function F20.

(Fig. 4) and F9 (Fig. 5) is much faster than that of the standard ABC, BsfABC and IABC algorithms on 30D. The convergence on function F24 (Fig. 6) is statistically better than that of the standard ABC, BsfABC and IABC algorithms on 30D; the same holds for functions F4, F10–F12, F16, F17, F20, F28 and F30. The standard ABC, BsfABC, IABC and proposed ABC algorithms all achieve the best possible result on functions F7 and F8, with mean values of function error of zero. For function F10, the proposed ABC algorithm almost achieves the best possible mean function error of zero. The standard ABC, BsfABC, IABC and proposed ABC algorithms give statistically similar results for functions F23 and F25–F27 on 30D. According to Figs. 7 and 8, our proposed ABC algorithm achieves competitive convergence compared to the standard ABC, BsfABC and IABC algorithms on 50D. The convergence speed on function F4 (Fig. 7) is much better than that of the standard ABC, BsfABC and IABC algorithms on 50D; functions F9 and F10 show the same behavior as F4. The convergence on function F27 (Fig. 8) is better than that of the standard ABC, BsfABC and IABC algorithms on

50D, though not markedly so. Similar results are observed for functions F9, F11, F14, F16, F19, F28 and F30. The standard ABC, BsfABC, IABC and proposed ABC algorithms all achieve the best possible result on functions F7 and F8, with mean values of function error of zero. For function F10, the proposed ABC algorithm almost achieves the best possible mean function error of zero. The four ABC algorithms perform similarly on functions F5, F6, F13, F15, F17 and F23–F26 on 50D. Figs. 9–16 show boxplots of the mean values of function error of the standard ABC, BsfABC, IABC and proposed ABC algorithms on 10D, 30D and 50D, respectively. In these figures, the numbers 1–4, 5–8 and 9–12 index the mean values of function error of the standard ABC, BsfABC, IABC and proposed ABC algorithms on 10D, 30D and 50D, respectively. According to Fig. 9, the proposed ABC performs better than the BsfABC algorithm on F1 on 10D and 30D, but apart from BsfABC it does not outperform the others on 10D, 30D or 50D. For the remaining unimodal functions, we observe that

Fig. 13. Boxplot of comparative convergence for function F19.

Fig. 15. Boxplot of comparative convergence for function F25.


Fig. 16. Boxplot of comparative convergence for function F27.

the proposed ABC algorithm achieves the best performance on function F2 on 30D and function F3 on 10D and 50D compared to the BsfABC and IABC algorithms. From this analysis of the unimodal functions F1–F3, we conclude that our proposed ABC algorithm is not particularly effective on them. Figs. 10–12 show boxplots of the simple multimodal functions F4, F5 and F14 with the mean values of function error of the standard ABC, BsfABC, IABC and proposed ABC algorithms on 10D, 30D and 50D, respectively. According to Fig. 10, for function F4 the proposed ABC performs best, IABC is second, and BsfABC is better than ABC on 10D, 30D and 50D. The standard ABC algorithm performs best on function F5 on 10D, while all four ABC algorithms perform equally on F5 on 30D and 50D according to Fig. 11. According to Fig. 12, the proposed ABC performs better than the others on function F14 on 10D, 30D and 50D; BsfABC is better than ABC on 10D and 30D, while the standard ABC is no worse than BsfABC on 50D; IABC performs worst on 10D, 30D and 50D. Among the remaining simple multimodal functions, the proposed ABC algorithm reaches the best performance on functions F4, F9 and F14 on 10D, 30D and 50D; on functions F11 and F16 on 10D; on function F10 on 30D; and on functions F10 and F11 on 50D. For functions F7 and F8, all algorithms achieve the best possible performance on 10D, 30D and 50D, because the mean values of function error reach zero. Based on this analysis of the simple multimodal functions, we conclude that the proposed ABC algorithm is highly competitive with the standard ABC, BsfABC and IABC algorithms. Figs. 13 and 14 show boxplots of the hybrid functions F19 and F20 with the mean values of function error of the standard ABC, BsfABC, IABC and proposed ABC algorithms on 10D, 30D and 50D, respectively. According to Fig. 13, the proposed ABC performs best relative to the standard ABC on 10D, to BsfABC on 30D and 50D, and to IABC on 30D; however, the ABC algorithm is slightly better than the others on 10D and 30D. According to Fig. 14, the proposed ABC performs best on 10D and 30D. For the remaining hybrid functions, the proposed ABC also performs best on functions F17 (vs. ABC and BsfABC), F18, F20 and F21 on 10D; on functions F17, F21 (vs. BsfABC) and F22 (vs. BsfABC) on 30D; and on functions F17 (vs. BsfABC), F21 and F22 (vs. BsfABC) on 50D.
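The probability value Pi examined in these comparisons belongs to the onlooker-bee selection step of ABC. A minimal sketch of that step, assuming the standard ABC fitness transform and an illustrative adaptive exponent alpha (not the paper's exact self adaptive equation), is:

```python
def fitness(obj_value):
    """Standard ABC fitness transform for a minimization objective."""
    if obj_value >= 0:
        return 1.0 / (1.0 + obj_value)
    return 1.0 + abs(obj_value)

def onlooker_probabilities(obj_values, alpha=1.0):
    """Selection probabilities for the onlooker-bee step.
    alpha is an illustrative adaptive exponent: alpha > 1 sharpens
    exploitation of good food sources, alpha < 1 flattens the
    distribution toward exploration."""
    fits = [fitness(v) ** alpha for v in obj_values]
    total = sum(fits)
    return [f / total for f in fits]

errors = [0.5, 1.0, 4.0]               # hypothetical function errors
print(onlooker_probabilities(errors))  # better sources get higher probability
print(onlooker_probabilities(errors, alpha=2.0))
```

Onlooker bees then sample food sources according to these probabilities, so any adaptation of the probability equation directly shifts the exploration/exploitation balance.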

Figs. 15 and 16 show boxplots of the composition functions F25 and F27 with the mean values of function error of the standard ABC, BsfABC, IABC and proposed ABC algorithms on 10D, 30D and 50D, respectively. According to Fig. 15, the proposed ABC performs best only on 10D; the convergence performances of all four ABC algorithms are very similar on 30D and 50D. According to Fig. 16, the proposed ABC reaches the best performance on 10D, 30D and 50D. For the remaining composition functions, the proposed ABC performs best among the four algorithms except on function F29 on 10D, 30D and 50D; on function F23 on 10D and 30D (vs. IABC); and on function F28 on 10D (vs. ABC and IABC). The four algorithms achieve similar performance on functions F23 and F24 on 50D and on function F26 on 30D and 50D. Overall, according to the above tables and figures, the proposed ABC algorithm is not particularly effective on the unimodal functions F1–F3, but it is very effective on the simple multimodal functions F4–F16. On the hybrid and composition functions, the performance of the proposed ABC algorithm is the best among the four ABC algorithms as a whole, though not by a large margin; as noted above, the standard ABC, BsfABC and IABC algorithms are better on several functions at certain dimension sizes. On all of 10D, 30D and 50D, functions F7 and F8 reach the best possible mean values of function error.

5. Conclusions

In this paper, we conducted comparative experiments on the standard ABC, BsfABC, IABC and proposed ABC algorithms using the benchmark problems defined by the CEC 2014 test suite. We introduced levy flight initialization and chaotic OBL scout initialization, incorporated DE and PSO into the standard ABC algorithm, and introduced a self adaptive mechanism into the probability equation of the onlooker-bee step to form our proposed ABC algorithm.
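The levy flight initialization mentioned above can be sketched with Mantegna's algorithm for generating Lévy-distributed steps; beta = 1.5 and the 0.01 scale factor are common illustrative choices, not the paper's reported settings:

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step via Mantegna's algorithm:
    step = u / |v|**(1/beta), u ~ N(0, sigma_u**2), v ~ N(0, 1)."""
    num = math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
    den = math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)
    sigma_u = (num / den) ** (1.0 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

def levy_init(pop_size, dim, low, high, scale=0.01):
    """Initialize food sources with heavy-tailed perturbations around
    uniformly random base points, clamped to the search bounds."""
    span = high - low
    pop = []
    for _ in range(pop_size):
        base = [random.uniform(low, high) for _ in range(dim)]
        x = [b + scale * span * levy_step() for b in base]
        pop.append([min(high, max(low, xi)) for xi in x])
    return pop

random.seed(1)
pop = levy_init(pop_size=20, dim=10, low=-100.0, high=100.0)
print(len(pop), len(pop[0]))  # 20 10
```

The heavy tail of the Lévy distribution occasionally produces very long jumps, which is what makes such initialization and scout moves useful for escaping poor regions of the search space.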
We selected the best parameter settings through a number of initial comparative experiments and then evaluated the performance of the standard ABC, BsfABC, IABC and proposed ABC algorithms at dimension sizes of 10, 30, and 50, respectively. From our experimental results, we conclude that the proposed ABC algorithm achieves better performance than the standard ABC, BsfABC and IABC algorithms on the 30 benchmark functions; in particular, it performs significantly well on the simple multimodal functions.

References

Akay, B., Karaboga, D., 2012a. Artificial bee colony algorithm for large-scale problems and engineering design optimization. J. Intell. Manuf. 23 (4), 1001–1014.
Akay, B., Karaboga, D., 2012b. A modified ABC for real-parameter optimization. Inf. Sci. 92, 120–142.
Alatas, B., 2010. Chaotic bee colony algorithms for global numerical optimization. Expert Syst. Appl. 37, 5682–5687.
Aydin, D., Liao, T.J., Marco, A., 2011. Improving performance via population growth and local search: the case of the ABC algorithm. IRIDIA Technical Report TR/IRIDIA/2011-015.
Banharnsakun, A., Achalakul, T., Sirinaovakul, B., 2011. The best-so-far selection in ABC algorithm. Appl. Soft Comput. 11, 2888–2901.
Benitez, C.V., Lopes, H.S., 2010. Parallel artificial bee colony algorithm approaches for protein structure prediction using the 3DHP-SC model. Intelligent Distributed Computing IV. Springer-Verlag, Berlin, Heidelberg, pp. 255–264.
Beyer, H.G., 2001. The theory of evolution strategies. Natural Computing Series. Springer, Berlin.
Bonabeau, E., Dorigo, M., Theraulaz, G., 1999. Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York, pp. 1–21.
Brown, C.T., Liebovitch, L.S., Glendon, R., 2007. Lévy flights in Dobe Ju/'hoansi foraging patterns. Hum. Ecol. 35, 129–138.
Celik, M., Karaboga, D., Koyl, F., 2011. Artificial bee colony data miner (abc-miner). Proc. of IEEE International Symposium on Innovations in Intelligent Systems and Applications (INISTA), 96–100.

Civicioglu, P., Besdok, E., 2013. A conceptual comparison of the cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif. Intell. Rev. 39 (4), 315–346.
Dorigo, M., Birattari, M., Stützle, T., 2006. Ant colony optimization: artificial ants as a computational intelligence technique. IEEE Comput. Intell. Mag. 1 (4), 1–2. IRIDIA Technical Report Series: TR/IRIDIA/2006-023.
Gao, W., Liu, S., 2012. A modified artificial bee colony algorithm. Comput. Oper. Res. 39, 687–697.
Holland, J.H., 1975. Adaption in Nature and Artificial System. University of Michigan Press, Ann Arbor, MI.
Karaboga, D., 2005. An idea based on honey bee swarm for numerical optimization. Technical Report TR06.
Karaboga, D., Akay, B., 2007. An artificial bee colony (ABC) algorithm on training artificial neural networks. 15th IEEE Signal Processing and Communications Applications, 1–4.
Karaboga, D., Basturk, B., 2007. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Global Optim. 39 (3), 459–471.
Karaboga, D., Basturk, B., 2008. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 8, 687–697.
Karaboga, D., Akay, B., 2009a. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 214 (1), 108–132.
Karaboga, D., Akay, B., 2009b. A survey: algorithms simulating bee swarm intelligence. Artif. Intell. Rev. 31 (1), 68–85.
Karaboga, D., Gorkemli, B., Ozturk, C., Karaboga, N., 2014. A comprehensive survey: artificial bee colony (ABC) algorithm and applications. Artif. Intell. Rev. 42 (1), 21–57.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. IEEE Int. Conf. Neural Networks, 1942–1948.
Liang, J.J., Qu, B.Y., Suganthan, P.N., Hernández-Díaz, A.G., 2013a. Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization. Technical Report 201212.
Liang, J.J., Qu, B.Y., Suganthan, P.N., 2013b. Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Technical Report 201311.
Okdem, S., Karaboga, D., Ozturk, C., 2011. An application of wireless sensor network routing based on artificial bee colony algorithm. IEEE Congr. Evol. Comput., 326–330.

Pavlyukevich, I., 2007. Lévy flights, non-local search and simulated annealing. J. Comput. Phys. 226, 1830–1844.
Rahnamayan, S., Tizhoosh, H.R., Salama, M.M.A., 2008. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 12 (1), 64–79.
Rashedi, E., Nezamabadi-pour, H., Saryazdi, S., 2009. GSA: a gravitational search algorithm. Inf. Sci. 179 (13), 2232–2248.
Richards, M., Ventura, D., 2004. Choosing a starting configuration for PSO. IEEE Int. Conf. Neural Networks 23, 2309–2312.
Storn, R., Price, K., 1997. Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11 (4), 341–359.
Das, S., Suganthan, P.N., 2011. Differential evolution: a survey of the state-of-the-art. IEEE Trans. Evol. Comput. 15 (1), 4–31.
Tizhoosh, H.R., 2005. Opposition-based learning: a new scheme for machine intelligence. Proc. of International Conference on Computational Intelligence for Modelling, Control and Automation 1, 695–701.
Udgata, S.K., Sabat, S.L., Mini, S., 2009. Sensor deployment in irregular terrain using artificial bee colony algorithm. IEEE Congress on Nature & Biologically Inspired Computing, 1309–1314.
Ventresca, M., Tizhoosh, H.R., 2006. Improving the convergence of backpropagation by opposite transfer functions. Proc. of IEEE World Congress on Computational Intelligence, 9527–9534.
Viswanathan, G.M., Afanasyev, V., Buldyrev, S.V., 1996. Lévy flight search patterns of wandering albatrosses. Nature 381.
Yang, X.S., Deb, S., 2009. Cuckoo search via Lévy flights. Proc. of World Congress on Nature & Biologically Inspired Computing, 210–214.
Yang, X.S., 2010. Engineering Optimization: An Introduction with Metaheuristic Applications. John Wiley and Sons, New Jersey, pp. 153–161.
Yang, X.S., Deb, S., 2013. Multiobjective cuckoo search for design optimization. Comput. Oper. Res. 40 (6), 1616–1624.
Yeh, W.C., Hsieh, T.J., 2011. Solving reliability redundancy allocation problems using an artificial bee colony algorithm. Comput. Oper. Res. 38 (11), 1465–1473.
Yin, M.H., Li, X.T., Zhou, J.P., 2011. An efficient job shop scheduling algorithm based on artificial bee colony. Sci. Res. Essay 5 (24), 2578–2596.
Zhu, G., Kwong, S., 2010. Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 217, 3166–3173.
