Noise-Tuning-Based Hysteretic Noisy Chaotic Neural Network for Broadcast Scheduling Problem in Wireless Multihop Networks

Ming Sun, Member, IEEE, Yaoqun Xu, Xuefeng Dai, and Yuan Guo
Abstract—Compared with noisy chaotic neural networks (NCNNs), hysteretic noisy chaotic neural networks (HNCNNs) are more likely to exhibit better optimization performance at higher noise levels, but behave worse at lower noise levels. In order to improve the optimization performance of HNCNNs, this paper presents a novel noise-tuning-based hysteretic noisy chaotic neural network (NHNCNN). Using a noise tuning factor to modulate the level of stochastic noises, the proposed NHNCNN not only balances stochastic wandering and chaotic searching, but also exhibits stronger hysteretic dynamics, thereby improving the optimization performance at both lower and higher noise levels. The aim of the broadcast scheduling problem (BSP) in wireless multihop networks (WMNs) is to design an optimal time-division multiple-access frame structure with minimal frame length and maximal channel utilization. A gradual NHNCNN (G-NHNCNN), which combines the NHNCNN with the gradual expansion scheme, is applied to solve the BSP in WMNs to demonstrate the performance of the NHNCNN. Simulation results show that the proposed NHNCNN has a larger probability of finding better solutions compared to both the NCNN and the HNCNN, regardless of whether noise amplitudes are lower or higher.

Index Terms—Broadcast scheduling problem, hysteresis, noise tuning, noisy chaotic neural network, wireless multihop networks.
I. INTRODUCTION
WIRELESS multihop networks (WMNs) can provide easy-to-use wireless data communication services over a broad geographic region through intermediate nodes, which are able to receive and forward packets from adjacent nodes via wireless links [1]-[6]. Since, in order to save radio channel resources, all nodes in WMNs often share a single channel to transmit, uncontrolled transmissions often cause conflicts that result in damaged packets at the destination and, ultimately, increased network delay. In order to avoid transmission conflicts, the time division multiple access (TDMA) protocol has been adopted. In TDMA, time is divided
into frames, and each frame is a collection of time slots. A time slot has the unit time length required for a single packet to be transmitted to or received from adjacent nodes. Effective broadcast scheduling for TDMA is necessary to avoid any conflict and to use channel resources efficiently [4]. The goal of the broadcast scheduling problem (BSP) is to find an optimal TDMA frame structure that fulfills the following two objectives: 1) to find a minimal TDMA frame length able to schedule the transmissions of all nodes without any conflict and 2) to maximize the conflict-free node transmissions, i.e., to maximize the channel utilization.

The BSP has been proven to be NP-complete [7], [8], and various heuristic methods have been proposed to solve it. Most of the earlier methods mainly focus on maximizing the channel utilization under the assumption that the TDMA frame length is fixed and known a priori [8], [9]. In contrast, recent methods tend to find both the minimal frame length and the maximal channel utilization in two separate stages, which better suits the two objectives of the BSP. As a result, there have been extensive research efforts in applying efficient optimization techniques to design two-stage algorithms [1]-[7], [10]-[15] for the BSP.

Neural networks have proven to be powerful tools in many fields, including combinatorial optimization, signal processing, automatic control, pattern recognition, and so on [16]-[20]. Hopfield-type neural networks have been proven feasible and efficient for the BSP by means of two-stage optimization. For example, Funabiki and Kitamichi [3] first proposed a gradual neural network (GNN) by combining a binary Hopfield neural network (HNN) and a gradual expansion scheme (GES) for the BSP. They used the GNN to find the minimal TDMA frame length in the first stage, and then used the binary HNN to maximize the channel utilization in the second stage. After the GNN, Wang et al. [1] proposed the gradual noisy chaotic neural network (G-NCNN) for the BSP by combining the noisy chaotic neural network (NCNN) and the GES. It is noted that the NCNN [1], [20] searches globally using stochastic chaotic simulated annealing (SCSA), which combines the best of both stochastic simulated annealing (SSA) [21] and chaotic simulated annealing (CSA) [22], i.e., stochastic wandering and efficient chaotic searching. Consequently, the NCNN outperforms both the HNN and the transiently chaotic neural network (TCNN) [22] in various combinatorial optimization problems [23]-[25]. Hence, the G-NCNN
is also superior to both the GNN and the gradual TCNN (G-TCNN) in solving the BSP [1]. Besides, the G-NCNN has been proven to be superior to the sequential vertex coloring (SVC) algorithm [4], the hybrid Hopfield neural network-genetic algorithm (HNN-GA) [5], and the mean field annealing (MFA) algorithm [7]. There has been much research interest [6], [15] in improving the NCNN and applying it to the BSP, because of the excellent optimization property of SCSA in the NCNN.

Recently, to overcome the decayed performance of the NCNN at higher initial noise amplitudes, Sun et al. [6] proposed the hysteretic noisy chaotic neural network (HNCNN) by first constructing an equivalent NCNN model and then controlling the noises in the equivalent model; the HNCNN can exhibit both hysteretic dynamics and SCSA without adding any extra parameters to the NCNN. The hysteretic dynamics in the HNCNN at higher noise levels can not only be enhanced to effectively prevent the network from being trapped in local minima, but can also help the network achieve a better tradeoff between the stochastic wandering and the chaotic searching of SCSA. As a result, the gradual HNCNN (G-HNCNN) at higher initial noise amplitudes can obtain better TDMA frame schedules than the G-NCNN [6]. Moreover, the G-HNCNN with higher initial noise amplitudes has also been proven to be superior to other algorithms, such as the hybrid sequential vertex coloring-noisy chaotic neural network (SVC-NCNN) [2], the component permutation genetic algorithm (CPGA) [10], the finite-state machine-based algorithm (FSMA) [11], the co-evolutionary genetic algorithm for the collision-free set problem (GACFS) [12], and the graph coloring algorithms [13], [14].

However, the simulation results of the BSP in [6] suggest that the G-HNCNN often performs worse than the G-NCNN at lower initial noise amplitudes. For one thing, as the initial noise amplitude decreases, the hysteretic dynamics in the HNCNN decay; consequently, at lower noise levels they are too weak to effectively prevent the network from being trapped in local minima. For another, the hysteretic dynamics in the HNCNN have the ability to suppress stochastic noises, which makes stochastic wandering in the HNCNN weaker than that in the NCNN and destroys the balance between the stochastic wandering and the chaotic searching of SCSA in the HNCNN.

In this paper, in order to improve the optimization performance of the HNCNN, we propose a novel noise-tuning-based hysteretic noisy chaotic neural network (NHNCNN). Unlike the HNCNN, the NHNCNN has a noise tuning factor. By using the noise tuning factor to modulate the level of stochastic noises, the NHNCNN can not only balance stochastic wandering and chaotic searching so that SCSA plays its role in optimization more effectively, but can also induce stronger hysteretic dynamics to escape from local minima more effectively. With the help of the noise tuning factor, the NHNCNN exhibits a higher probability of obtaining better solutions than the NCNN and the HNCNN, not only at lower noise levels but also at higher noise levels.

The rest of this paper is organized as follows. In the next section, a general overview of the NCNN and the HNCNN is given.
Our NHNCNN is proposed, and the differences between the NHNCNN and the HNCNN are analyzed in detail, in Section III. Section IV briefly reviews and formulates the BSP in WMNs, and provides the motion equations of the gradual NHNCNN (G-NHNCNN) for the two-stage BSP optimization. Finally, the G-NHNCNN is applied to solve the BSP, with comparisons against the G-NCNN and the G-HNCNN, in Section V. Conclusions are drawn in the last section.

II. OVERVIEW OF NCNN AND HNCNN

A. NCNN

By introducing an exponentially decaying self-feedback connection weight into the original HNN, the TCNN acquires the ability to escape from the local minima of combinatorial optimization problems. The exponentially decaying self-feedback weight of the TCNN causes a reverse bifurcation process known as CSA [22], [26]-[30]. Because CSA controls the state search within a fractal region that is much smaller than the entire state space, CSA has higher searching efficiency. However, CSA has completely deterministic dynamics and is not guaranteed to find the globally optimal solutions for some initial conditions of the TCNN, no matter how slowly the annealing proceeds [31], [32]. In contrast with CSA, SSA can find a globally optimal solution with probability one if the annealing speed is sufficiently slow [1], i.e., SSA is able to find a globally optimal solution so long as the searching time is long enough. In order to combine the best of CSA and SSA, i.e., efficient chaotic searching and stochastic wandering, Wang et al. [1], [20] added decaying stochastic noise to the TCNN and proposed SCSA using the NCNN. Since stochastic wandering works both before and after chaos disappears, the optimization performance of both chaotic searching and gradient-descent searching is greatly improved. As a result, the NCNN is superior to the TCNN.

The NCNN improves optimization performance greatly from the viewpoint of simulated annealing. Nevertheless, higher noises in the NCNN can easily destroy the reverse bifurcations [1], i.e., destroy CSA and weaken the efficiency of chaotic searching. Consequently, the performance of the NCNN easily decays at higher noise levels.

B. HNCNN

In order to overcome the decayed performance of the NCNN at higher noise levels, Sun et al. [6] first constructed an equivalent NCNN model and then controlled the noises of the equivalent model to propose the HNCNN. Compared to the NCNN, the HNCNN can exhibit both hysteretic dynamics and SCSA without adding any extra parameters. In addition, the activation function in the HNCNN is composed of two offset sigmoid functions and engenders a hysteretic loop, known as the hysteretic activation function [6]. The hysteretic dynamics in the HNCNN can cause neurons to jump discontinuously between the two offset sigmoid functions, which can pull neurons out of saturated regions and help the network escape from local minima. If the distance between the two offset sigmoid functions becomes larger, the discontinuous
jump of neurons is aggravated, which is more beneficial for the network to escape from local minima. Note that the distance between the two offset sigmoid centers in the HNCNN is directly proportional to the absolute value of the stochastic noises. Hence, the hysteretic dynamics in the HNCNN can improve the ability to escape from local minima at higher noise levels. In addition, the hysteretic dynamics in the HNCNN cause a neuron to tend to remain in its current state, which gives the network a certain ability to suppress noises [6], [33]. This suppression of noises is beneficial for balancing stochastic wandering and chaotic searching at higher noise levels. As a result, the HNCNN has a higher probability of obtaining better solutions than the NCNN at higher noise levels.

However, the HNCNN is likely to be inferior to the NCNN at lower noise levels [6]. For one thing, the suppression of noises by the hysteretic dynamics makes stochastic wandering in the HNCNN weaker than that in the NCNN, which may destroy the balance between the stochastic wandering and the chaotic searching of SCSA in the HNCNN when the noise level is lower. For another, because the discontinuous jump is too weak to pull neurons out of their saturated regions at lower noise levels, the hysteretic dynamics cannot help neurons escape from local minima. The simulation results of the BSP in [6] confirm this deficiency of the HNCNN.
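To make the hysteretic activation concrete, the following minimal Python sketch (our illustration, not the code of [6]) evaluates the two offset sigmoid branches: the sigmoid center is +|n| on the descending branch and −|n| on the ascending branch, so the same input maps to different outputs depending on the direction of motion, and the gap between the branches grows with the noise amplitude.

```python
import math

def hysteretic_sigmoid(y, y_prev, n_abs, eps=0.004):
    """Hysteretic activation composed of two offset sigmoid branches.

    The sigmoid center is +|n| when the input is decreasing and -|n|
    when it is increasing, so the output jumps discontinuously between
    the two branches whenever the input reverses direction.
    """
    eta = n_abs if y < y_prev else -n_abs
    return 1.0 / (1.0 + math.exp(-(y + eta) / eps))

# The same input y = 0 lands on different branches depending on the
# direction of motion; the jump widens as |n| (the noise level) grows.
print(hysteretic_sigmoid(0.0, +0.01, n_abs=0.002))  # descending branch
print(hysteretic_sigmoid(0.0, -0.01, n_abs=0.002))  # ascending branch
```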
III. PROPOSED NHNCNN

In order to improve the optimization performance of the HNCNN, we propose a NHNCNN in this paper, shown as follows:

$$x_{ij}(t) = \frac{1}{1+\exp\{-[y_{ij}(t)+\eta_{ij}(t)]/\varepsilon\}} \qquad (1)$$

$$y_{ij}(t+1) = k\,y_{ij}(t) + \delta\,\eta_{ij}(t) + \alpha\Bigg[\sum_{\substack{k=1\\k\neq i}}^{N}\sum_{\substack{l=1\\l\neq j}}^{M} w_{ij,kl}\,x_{kl}(t) + I_{ij}\Bigg] - z(t)\big(x_{ij}(t)-I_0\big) \qquad (2)$$

$$z(t+1) = (1-\beta_1)\,z(t) \qquad (3)$$

$$A[n(t+1)] = (1-\beta_2)\,A[n(t)] \qquad (4)$$

$$\eta_{ij}(t) = \begin{cases} 0, & t=0\\ +|n(t-1)|, & t>0,\ y_{ij}(t) < y_{ij}(t-1)\\ -|n(t-1)|, & t>0,\ y_{ij}(t) \ge y_{ij}(t-1) \end{cases} \qquad (5)$$

where x_ij(t) is the output of neuron ij; y_ij(t) is the input of neuron ij; η_ij(t) in (1) acts as the sigmoid center of the activation function; k (0 < k < 1) is a damping factor of the nerve membrane; δ (δ > 0) is the noise tuning factor; z(t) is the self-feedback connection weight; ε is the steepness parameter of the activation function; α is a positive scaling parameter for inputs; w_ij,kl is the connection weight from neuron kl to neuron ij, with w_ij,kl = w_kl,ij and w_ij,ij = 0; I_ij is the input bias of neuron ij; I_0 is a positive parameter; n(t) is stochastic noise in [−A[n(t)], A[n(t)]] with a uniform distribution, and A[n(t)] is the noise amplitude; β_1 (0 < β_1 < 1) is the simulated annealing parameter of z(t); and β_2 (0 < β_2 < 1) is the simulated annealing parameter of n(t). Note that n(t) and n(t + 1) are two independent random variables generated at times t and t + 1, respectively.

Unlike the HNCNN, there exists a noise tuning factor δ in the motion equation (2) of the NHNCNN. The noise tuning factor works as a multiplication coefficient on the stochastic noises in (2). Note that, if we change the value of δ, the stochastic noises added in (2) will change, while the stochastic noises added in the activation function (1) will remain unchanged. Obviously, the NHNCNN model reduces to the HNCNN model if the noise tuning factor equals the damping factor of the nerve membrane (i.e., δ = k).

The exponentially decaying z(t) and A[n(t)] [see (3) and (4)] cause the network to exhibit chaotic searching and stochastic wandering, known as SCSA [1], [6], [20], and the activation function (1) together with (5) causes the network to exhibit hysteretic dynamics. That is to say, the proposed NHNCNN can exhibit SCSA and hysteretic dynamics simultaneously. In the following sections, both SCSA and the hysteretic dynamics of the NHNCNN are analyzed in detail. Before the analyses, we first give the single-neuron model of the NHNCNN:

$$x(t) = \frac{1}{1+\exp\{-[y(t)+\eta(t)]/\varepsilon\}} \qquad (6)$$

$$y(t+1) = k\,y(t) + \delta\,\eta(t) - z(t)\big(x(t)-I_0\big) \qquad (7)$$

$$z(t+1) = (1-\beta_1)\,z(t) \qquad (8)$$

$$A[n(t+1)] = (1-\beta_2)\,A[n(t)] \qquad (9)$$

$$\eta(t) = \begin{cases} 0, & t=0\\ +|n(t-1)|, & t>0,\ y(t) < y(t-1)\\ -|n(t-1)|, & t>0,\ y(t) \ge y(t-1). \end{cases} \qquad (10)$$

A. SCSA of NHNCNN

Under the assumption of the same initial noise amplitude A[n(0)], the level of stochastic noises in the NHNCNN will be higher than that in the HNCNN when δ is larger than k; in this case we call δ (δ > k) a noise increase factor (NIF). Conversely, under the same initial noise amplitude A[n(0)], the level of stochastic noises in the NHNCNN will be lower than that in the HNCNN when δ is smaller than k; in this case we call δ (δ < k) a noise reduction factor (NRF). From these effects of the noise tuning factor on the level of stochastic noises, we can see that an NIF can be used to strengthen stochastic wandering, while an NRF can be used to weaken it. Hence, modulation of the noise tuning factor helps balance the stochastic wandering and the chaotic searching of SCSA. On the one hand, if the magnitude of the stochastic noises is too small to take effect on stochastic wandering, we can use an NIF to strengthen stochastic wandering. On the other hand, if the magnitude of the stochastic noises is so high that the reverse bifurcation process is seriously destroyed, we can use an NRF to weaken stochastic wandering.
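As a concrete reading of (6)–(10), the following Python sketch iterates the single-neuron NHNCNN model. It is a minimal illustration using the parameter values quoted later in Section III-C; the initial input y0 = 0.283 and the random seed are our own assumptions, and this is not the authors' simulation code.

```python
import math
import random

def simulate_nhncnn_neuron(delta, steps=2000, k=0.9, eps=0.004, z0=0.1,
                           I0=0.65, beta1=0.0003, beta2=0.0005,
                           A0=0.002, y0=0.283, seed=0):
    """Iterate the single-neuron NHNCNN model (6)-(10).

    delta is the noise tuning factor: delta > k acts as an NIF,
    delta < k as an NRF, and delta = k recovers the HNCNN neuron.
    """
    rng = random.Random(seed)
    y, z, A, eta = y0, z0, A0, 0.0      # eta(0) = 0 by (10)
    xs = []
    for t in range(steps):
        x = 1.0 / (1.0 + math.exp(-(y + eta) / eps))   # (6)
        y_next = k * y + delta * eta - z * (x - I0)    # (7)
        n = rng.uniform(-A, A)                         # n(t) in [-A, A]
        eta = abs(n) if y_next < y else -abs(n)        # (10), center for t+1
        z *= 1.0 - beta1                               # (8)
        A *= 1.0 - beta2                               # (9)
        y = y_next
        xs.append(x)
    return xs

# With k = 0.9, delta = 0.3 acts as an NRF and delta = 3.0 as an NIF.
for d in (0.3, 0.9, 3.0):
    print(d, simulate_nhncnn_neuron(d)[-1])
```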
Because of the existence of the noise tuning factor, SCSA in the NHNCNN exhibits a new property compared to that in both the HNCNN and the NCNN. Namely, with the help of the noise tuning factor, the NHNCNN can balance the stochastic wandering and the chaotic searching of SCSA regardless of whether the noise amplitude is higher or lower, while both the HNCNN and the NCNN can balance stochastic wandering and chaotic searching only at a certain amplitude of stochastic noises [6].

B. Hysteretic Dynamics of NHNCNN

Since a single neuron was demonstrated to exhibit the hysteretic property [34], hysteretic neural networks have been extensively studied. For example, Takefuji and Lee [35] proposed a binary hysteretic HNN which uses upper/lower trip points to exhibit hysteretic dynamics. Bharitkar and Mendel [36] proposed a multivalued hysteretic HNN which uses sigmoid center parameters to exhibit hysteretic dynamics. Liu and Xiu [37] proposed a multivalued hysteretic chaotic neural network which uses exponentially decaying sigmoid center parameters to exhibit hysteretic dynamics. In addition, Sun et al. [6] proposed the HNCNN by controlling the decaying stochastic noises in an equivalent NCNN model to exhibit hysteretic dynamics.

Hysteretic activation functions can be divided into two types based on the movement directions of their hysteretic loops. Hysteretic loops of the first type move in an anticlockwise direction. Such activation functions, which tend to keep neurons in their current states and help neurons resist random neural responses, have a certain ability to suppress noises [6], [33]. Conversely, hysteretic loops of the second type move in a clockwise direction; they tend to drive neurons away from their current states and can speed up learning [37]. Since hysteretic loops of the second type quicken changes of states, they are incapable of suppressing noises. Among the above-mentioned hysteretic neural networks, the hysteretic activation functions in [6], [33], and [35] belong to the first type, while those in [36] and [37] belong to the second type. However, all these hysteretic activation functions cause neurons to jump discontinuously between two offset activation functions. Therefore, all of them can prevent some or all neurons from prematurely saturating by pulling neurons out of their saturated regions. In this sense, all these hysteretic activation functions help networks escape from local minima.

As for the HNCNN and the NHNCNN, the hysteretic dynamics are related to the amplitude of the sigmoid center in the hysteretic activation function, and different sigmoid-center amplitudes lead to different hysteretic dynamics. In the following, the sigmoid center of the hysteretic activation function in the NHNCNN neuron is analyzed in detail to demonstrate the differences between the NHNCNN and the HNCNN. For easier comparison between the NHNCNN and the HNCNN, we transform the NHNCNN neuron (6)–(10) into the equivalent expression (11)–(15) under the assumption of the same η(t) (t = 0, 1, . . . , n) and the same initial input,
shown below:

$$x'(t) = \frac{1}{1+\exp\{-[y'(t)+\xi(t)]/\varepsilon\}} \qquad (11)$$

$$y'(t+1) = k\,[y'(t)+\eta(t)] - z(t)\big(x'(t)-I_0\big) \qquad (12)$$

$$z(t+1) = (1-\beta_1)\,z(t) \qquad (13)$$

$$A[n(t+1)] = (1-\beta_2)\,A[n(t)] \qquad (14)$$

where

$$\xi(t) = \begin{cases} \eta(0), & t=0\\ (\delta-k)\,\eta(0)+\eta(1), & t=1\\ k(\delta-k)\,\eta(0)+(\delta-k)\,\eta(1)+\eta(2), & t=2\\ \quad\vdots & \vdots\\ \sum_{i=0}^{n-1} k^{\,n-(i+1)}(\delta-k)\,\eta(i)+\eta(n), & t=n. \end{cases} \qquad (15)$$
Note that ξ(t) in (11) acts as the sigmoid center of the activation function. Obviously, ξ(t) reduces to η(t) if δ = k. With the same variable η(t) (t = 0, 1, . . . , n) and the same initial input, the inputs and outputs of the NHNCNN neuron (6)–(10) and of the equivalent expression (11)–(15) satisfy the following equivalence:

$$x(t) = x'(t), \qquad y(t) = y'(t) + \sum_{i=0}^{t-1} k^{\,t-(i+1)}(\delta-k)\,\eta(i) \qquad (16)$$

where x(t) and y(t) denote the output and the input of (6)–(10), x'(t) and y'(t) denote the output and the input of (11)–(15), and $\lim_{t\to\infty}\sum_{i=0}^{t-1} k^{\,t-(i+1)}(\delta-k)\,\eta(i) = 0$ for 0 < k < 1. The equivalence (16) between (6)–(10) and (11)–(15) can be easily proven by mathematical induction.

As seen from the transformation from (6)–(10) to (11)–(15), to ease comparison between the sigmoid centers of the NHNCNN neuron and the HNCNN neuron, the sigmoid center of the NHNCNN neuron is transformed into the variable ξ(t). That is to say, it is the variable ξ(t) that governs the hysteretic dynamics of the NHNCNN neuron, while it is the variable η(t) that governs the hysteretic dynamics of the HNCNN neuron. This suggests that there exist differences in hysteretic dynamics between the NHNCNN neuron and the HNCNN neuron, which can be described as follows. For one thing, the variable ξ(t) obviously differs from the variable η(t) in amplitude. For another, the algebraic sign of ξ(t) can be either the same as or different from that of [y'(t) − y'(t − 1)], where y'(t) is the input of (11)–(15) at time t. Namely, the hysteretic loop with sigmoid center ξ(t) in the NHNCNN neuron can move either in an anticlockwise or in a clockwise direction, which is distinct from the HNCNN neuron, where the hysteretic loop with sigmoid center η(t) moves only in an anticlockwise direction.
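The equivalence (16) can also be checked numerically. The sketch below is our own verification, not part of the paper: it feeds the same arbitrary bounded sequence η(t) to (6)–(8) and to (11)–(13), accumulating ξ(t) through the recursion S(t + 1) = kS(t) + (δ − k)η(t) with S(0) = 0 and ξ(t) = S(t) + η(t), which reproduces (15).

```python
import math
import random

def check_equivalence(delta=3.0, k=0.9, eps=0.004, z0=0.1, I0=0.65,
                      beta1=0.0003, steps=200, seed=1):
    """Verify (16): iterating (6)-(8) with sigmoid center eta(t) matches
    iterating (11)-(13) with center xi(t) = S(t) + eta(t), where
    S(t+1) = k*S(t) + (delta - k)*eta(t) and S(0) = 0."""
    rng = random.Random(seed)
    sig = lambda u: 1.0 / (1.0 + math.exp(-u / eps))
    y = yp = 0.283              # same initial input for both systems
    z, S = z0, 0.0              # S(t) is the correction sum in (16)
    for t in range(steps):
        eta = rng.uniform(-0.002, 0.002)       # any common eta(t) works
        xi = S + eta                           # xi(t) of (15)
        x, xp = sig(y + eta), sig(yp + xi)     # (6) and (11)
        assert abs(x - xp) < 1e-9 and abs(y - (yp + S)) < 1e-9   # (16)
        y = k * y + delta * eta - z * (x - I0)         # (7)
        yp = k * (yp + eta) - z * (xp - I0)            # (12)
        S = k * S + (delta - k) * eta
        z *= 1.0 - beta1                               # (8) and (13)
    print("equivalence (16) holds for", steps, "steps")

check_equivalence()
```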
C. Simulation of SCSA and Hysteretic Dynamics in NHNCNN Neuron

In this section, we demonstrate SCSA and the hysteretic dynamics of the NHNCNN by comparing the neuron dynamics of the NHNCNN with those of the HNCNN.
Fig. 1. Dynamics of the HNCNN neuron with the initial noise amplitude A[n(0)] = 0.002.
Fig. 2. Dynamics of the NHNCNN neuron with different noise tuning factors δ at A[n(0)] = 0.002. (a) NRF: δ = 0.3. (b) NIF: δ = 3.0.
Fig. 3. Dynamics of the sigmoid center ξ(t) with different noise tuning factors and of the sigmoid center η(t) at A[n(0)] = 0.002. (a) Sigmoid center ξ(t) with the NRF δ = 0.3. (b) Sigmoid center ξ(t) with the NIF δ = 3.0.

For convenience, to compare the neuron dynamics of the NHNCNN with those of the HNCNN, we set the parameters k, ε, z(0), I0, β1, and β2 of the NHNCNN neuron to the same values as those of the HNCNN neuron in [6]: k = 0.9, ε = 0.004, z(0) = 0.1, I0 = 0.65, β1 = 0.0003, and β2 = 0.0005. In addition, the initial noise amplitude A[n(0)] is set to 0.002. In order to compare the NHNCNN neuron with the HNCNN neuron, as well as to show the effects of the NRF and the NIF on neuron dynamics, we set the noise tuning factor to 0.3 (NRF) and 3.0 (NIF). The dynamical behaviors of the HNCNN neuron and of the NHNCNN neuron with different noise tuning factors (NRF: δ = 0.3; NIF: δ = 3.0) are shown in Figs. 1 and 2, respectively. In addition, the corresponding dynamics of the sigmoid center ξ(t) with different noise tuning factors (NRF: δ = 0.3; NIF: δ = 3.0) compared to η(t) are plotted in Fig. 3, where the percentages of the movement directions of the hysteretic loops generated from ξ(t) are also provided.

As seen from Fig. 2, all the neuron states can retreat from chaos through the reverse period-doubling bifurcation, and stochastic noises work both before and after chaos disappears. That is to say, the NHNCNN neuron can exhibit the chaotic searching and stochastic wandering of SCSA. As seen from Fig. 3, the sigmoid center ξ(t) can evolve larger amplitudes
more easily than the sigmoid center η(t), regardless of whether the noise tuning factor is an NRF (δ = 0.3) or an NIF (δ = 3.0). This suggests that the NHNCNN neuron can exhibit stronger hysteretic dynamics than the HNCNN neuron. In addition, the movement directions of the hysteretic loops generated from ξ(t), labeled in Fig. 3, show that the hysteretic loop of the NHNCNN neuron can move in either an anticlockwise or a clockwise direction. However, the effects of the NRF (δ = 0.3) and the NIF (δ = 3.0) on both SCSA and the hysteretic dynamics of the NHNCNN neuron are different, as shown below.

1) Effects of the NRF on Neuron Dynamics: The dynamics of the NHNCNN neuron and of the sigmoid center ξ(t) with the NRF δ = 0.3 are shown in Figs. 2(a) and 3(a), respectively. The percentages of the movement directions of the hysteretic loops provided in Fig. 3(a) indicate that the anticlockwise-direction hysteretic loop, which has the ability to suppress noises, dominates the hysteretic dynamics in the NHNCNN neuron with the NRF δ = 0.3. Hence, the NRF causes the NHNCNN neuron to exhibit stronger hysteretic dynamics that are dominant in suppressing noises.

2) Effects of the NIF on Neuron Dynamics: The dynamics of the NHNCNN neuron and of the sigmoid center ξ(t) with the NIF δ = 3.0 are shown in Figs. 2(b) and 3(b), respectively. The percentages of the movement directions of the hysteretic loops provided in Fig. 3(b) imply that the anticlockwise-direction hysteretic loop does not dominate the hysteretic dynamics in the NHNCNN neuron, which greatly reduces the ability to suppress noises. That is to say, the NIF causes the NHNCNN neuron to exhibit stronger hysteretic dynamics that have a certain ability to strengthen noises.

As seen from the above analyses, the hysteretic dynamics in the NHNCNN neuron caused by the NRF or the NIF are stronger, and exhibit an ability similar to that of the NRF or the NIF in suppressing or strengthening noises.

D. Optimization Advantages of NHNCNN Over HNCNN and NCNN

1) Well-Balanced SCSA: It can be seen from the analyses in Section III-A that an NIF can be used to strengthen stochastic wandering if the initial noise amplitude is lower, while an NRF can be used to weaken stochastic wandering if the initial noise amplitude is higher. That is to say, the NHNCNN can use the NRF/NIF to balance the stochastic wandering and the chaotic searching of SCSA regardless of the level of stochastic noises, while both the HNCNN and the NCNN can balance stochastic wandering and chaotic searching only at a certain level of stochastic noises.

2) Stronger Hysteretic Dynamics: It can be seen from the analyses in Section III-C that the NHNCNN has stronger hysteretic dynamics than the HNCNN, no matter whether the noise tuning factor is an NRF or an NIF. In addition, the stronger hysteretic dynamics caused by the NRF or NIF, which exhibit an ability similar to that of the NRF or NIF in suppressing or strengthening noises, can further enhance the role of the NRF or NIF in balancing SCSA. Such an interaction between
stronger hysteretic dynamics and well-balanced SCSA makes it easier for the NHNCNN to escape from local minima than for the HNCNN and the NCNN.

Based on the above two aspects, the NHNCNN, with the help of the noise tuning factor, can achieve better solutions than the HNCNN and the NCNN, no matter whether the initial noise amplitude is higher or lower.

IV. BSP AND NHNCNN-BASED TWO-STAGE OPTIMIZATION

In this section, we briefly describe and formulate the BSP in WMNs, and provide the motion equations of the NHNCNN-based two-stage optimization.

A. BSP

A WMN can be represented by a graph G = (V, E), where the vertices in V = {1, 2, . . . , N} are network nodes, N is the total number of nodes in the WMN, and E is the direct connectivity set of adjacent nodes. If (i, j) ∈ E, nodes i and j are said to be one hop away. If nodes i and j are not one hop away but satisfy (i, k) ∈ E and (k, j) ∈ E for some node k, nodes i and j are said to be two hops away. There are two constraints in the BSP. For one thing, two nodes that are one hop or two hops away cannot transmit in the same slot of a TDMA frame. For another, each node must be scheduled to transmit at least once in a TDMA frame.

We use an N × N symmetric binary matrix C = {c_ij} (i, j = 1, 2, . . . , N), called the connectivity matrix, to represent the topology of a WMN:

$$c_{ij} = \begin{cases} 1, & \text{if } (i,j)\in E \text{ for } i\neq j\\ 0, & \text{otherwise.} \end{cases} \qquad (17)$$

From the matrix C, we can obtain another N × N symmetric binary matrix D = {d_ij} (i, j = 1, 2, . . . , N), called the compatibility matrix:

$$d_{ij} = \begin{cases} 1, & \text{if nodes } i \text{ and } j \text{ are within two hops away for } i\neq j\\ 0, & \text{otherwise.} \end{cases} \qquad (18)$$

We use an N × M binary matrix T = {t_ij} to describe the TDMA frame, where M is the number of time slots:

$$t_{ij} = \begin{cases} 1, & \text{if time slot } j \text{ is assigned to node } i\\ 0, & \text{otherwise.} \end{cases} \qquad (19)$$

The channel utilization ρ over all the nodes and the average time delay η for each node to broadcast packets, which can be used to evaluate different algorithms, are defined as follows:

$$\rho = \frac{1}{NM}\sum_{i=1}^{N}\sum_{j=1}^{M} t_{ij} \qquad (20)$$

$$\eta = \frac{1}{N}\sum_{i=1}^{N}\frac{M}{\sum_{j=1}^{M} t_{ij}}. \qquad (21)$$
The BSP can be formulated as finding an optimal or suboptimal TDMA frame structure T = {t_ij} that has a minimal frame length M_min and a maximal channel utilization ρ_max, satisfying the following two constraints [1], [6]:

$$\sum_{j=1}^{M} t_{ij} > 0 \quad \text{for all } i = 1, 2, \ldots, N \qquad (22)$$

$$\sum_{j=1}^{M}\sum_{i=1}^{N}\sum_{\substack{k=1\\k\neq i}}^{N} d_{ik}\,t_{ij}\,t_{kj} = 0. \qquad (23)$$

The constraint (22) means that each node must transmit at least once in a TDMA frame, while the constraint (23) states that any two nodes within one hop or two hops of each other cannot be scheduled in the same time slot.
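For illustration, the following sketch (ours, not part of the paper) builds the compatibility matrix D of (18) from the connectivity matrix C of (17), checks a candidate frame T against the constraints (22) and (23), and evaluates ρ and η of (20) and (21) on a toy four-node chain.

```python
import numpy as np

def compatibility(C):
    """D[i][k] = 1 if nodes i and k are within two hops, as in (18)."""
    C = np.asarray(C)
    two_hop = (C @ C) > 0                 # reachable via one intermediate node
    D = ((C + two_hop) > 0).astype(int)
    np.fill_diagonal(D, 0)
    return D

def is_feasible(T, D):
    """Constraints (22) and (23): every node transmits at least once,
    and no two conflicting nodes share a time slot."""
    T = np.asarray(T)
    if not (T.sum(axis=1) > 0).all():               # (22)
        return False
    return np.einsum('ik,ij,kj->', D, T, T) == 0    # (23); diag(D) = 0

def metrics(T):
    """Channel utilization rho (20) and average time delay eta (21)."""
    T = np.asarray(T)
    N, M = T.shape
    return T.sum() / (N * M), np.mean(M / T.sum(axis=1))

# Toy chain 1-2-3-4: e.g., nodes 1 and 3 are two hops apart and conflict.
C = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
T = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]   # M = 3 slots
print(is_feasible(T, compatibility(C)), metrics(T))
```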
TABLE I
SPECIFICATIONS OF TWO BENCHMARK PROBLEMS AND THREE RANDOM CASES

Instance | Nodes N | Edges E | Maximum Degree | Minimum Degree
BM 1     | 30      | 70      | 8              | 2
BM 2     | 100     | 200     | 8              | 1
Case 1   | 100     | 201     | 8              | 1
Case 2   | 200     | 1043    | 17             | 1
Case 3   | 400     | 2215    | 23             | 3
B. NHNCNN-Based Two-Stage Optimization

For convenient comparison with the G-NCNN and the G-HNCNN for the BSP, the NHNCNN-based two-stage optimization adopts the same two-stage scheme as [1], [6]. Hence, we do not describe the two-stage optimization in further detail, but briefly provide the relevant motion equations of the NHNCNN-based two-stage optimization. In the first stage, the NHNCNN computes with the following motion equation to minimize the TDMA frame length:

$$y_{ij}(t+1) = k\,y_{ij}(t) + \delta\,\eta_{ij}(t) - z(t)\big(x_{ij}(t)-I_0\big) + \alpha W_1 - \alpha W_1\sum_{k=1}^{M} x_{ik}(t) - \alpha W_2\sum_{\substack{k=1\\k\neq i}}^{N} d_{ik}\,x_{kj}(t) \qquad (24)$$

where W_1 and W_2 are weighting coefficients. In the second stage, the NHNCNN computes with the following motion equation to maximize the channel utilization:

$$y_{ij}(t+1) = k\,y_{ij}(t) + \delta\,\eta_{ij}(t) - z(t)\big(x_{ij}(t)-I_0\big) - \alpha W_3\sum_{\substack{k=1\\k\neq i}}^{N} d_{ik}\,x_{kj}(t) + \alpha W_4\big(1 - x_{ij}(t)\big) \qquad (25)$$

where W_3 and W_4 are weighting coefficients. Note that we use (26) [1], [22] to convert the neuron output x_ij(t) to t_ij (i = 1, 2, . . . , N; j = 1, 2, . . . , M) when the network converges. If T = {t_ij} satisfies the two constraints (22) and (23), then it is a feasible TDMA schedule; otherwise, it is not:

$$t_{ij} = \begin{cases} 1, & \text{if } x_{ij}(t) > \sum_{k=1}^{N}\sum_{l=1}^{M} x_{kl}(t)/(N\times M)\\ 0, & \text{otherwise.} \end{cases} \qquad (26)$$

V. SIMULATION RESULTS AND DISCUSSION

For convenience, we label the NHNCNN-based two-stage optimization for the BSP as the G-NHNCNN in our simulations. In this section, the proposed G-NHNCNN is compared with the G-HNCNN and the G-NCNN. Two benchmark problems and three random cases are used to evaluate these algorithms. The two benchmark problems are 30-node-70-edge [7] and 100-node-200-edge [38], and the three random cases are 100-node-201-edge, 200-node-1043-edge, and 400-node-2215-edge. They are listed in Table I.

In the evaluations, all the simulation results are obtained over 50 different runs. If an algorithm finds a feasible solution in the first stage but fails to converge to a feasible one in the second stage, we take the feasible solution obtained in the first stage as the final one for the BSP. We use the best solution with its frequency, the average runtime, the convergence rate, and the mean ± standard deviation/maximum/minimum (mean ± SD/max/min) of M and η over the 50 different runs as evaluation indices. For convenience, we write the best solution obtained by an algorithm as M_min/P_max/η_min, where M_min is the minimal TDMA frame length of the best solution, P_max (P_max = N M_min ρ_max) is the maximal number of transmissions of the best solution, and η_min is the minimal average time delay of the best solution. We use the following criteria 1)-3) to compare the best solutions obtained by various algorithms. (Assume that the best solution obtained by algorithm A is M^A_min/P^A_max/η^A_min, while the best solution obtained by algorithm B is M^B_min/P^B_max/η^B_min.)

1) If M^A_min < M^B_min and η^A_min < η^B_min, then the best solution obtained by algorithm A is better than that obtained by algorithm B.
2) If M^A_min = M^B_min and P^A_max > P^B_max, then the best solution obtained by algorithm A is better than that obtained by algorithm B, no matter whether η^A_min is smaller than η^B_min or not.
3) If M^A_min = M^B_min, P^A_max = P^B_max, and η^A_min < η^B_min, then the best solution obtained by algorithm A is better than that obtained by algorithm B.

For comparison with the G-HNCNN and the G-NCNN, the parameters of the G-NHNCNN together with the weighting coefficients are set the same as those in [1], [2], [6], and [20], where the parameters of the G-NCNN and the G-HNCNN and the weighting coefficients are discussed in detail. Following these references, the involved parameters are set as: α = 0.015, ε = 0.004, z(0) = 0.08, β1 = 0.001, β2 = 0.0001, k = 0.9, I0 = 0.65, W1 = 1.0, W2 = 1.0, W3 = 1.0, and W4 = 0.6.

A. Demonstrating the Effects of NIFs and NRFs on BM 1 and BM 2

For convenient comparison and analysis, the simulation results obtained by the G-HNCNN and the G-NCNN for the two benchmark problems, i.e., BM 1 and BM 2, are summarized in Tables II and V.
TABLE II
SIMULATION RESULTS OBTAINED BY THE G-HNCNN AND THE G-NCNN FOR THE BENCHMARK PROBLEM BM 1 AT DIFFERENT INITIAL NOISE AMPLITUDES IN 50 DIFFERENT RUNS

Algorithm | A[n(0)] | Best Solutions (Frequency) | Average Runtimes (s) | Convergence Rate
G-HNCNN   | 0.003   | 11/42/8.80 (8)             | 2.12                 | 100%
G-HNCNN   | 0.006   | 11/42/8.80 (10)            | 2.24                 | 100%
G-HNCNN   | 0.008   | 10/36/9.00 (1)             | 2.15                 | 100%
G-HNCNN   | 0.060   | 10/37/8.83 (1)             | 1.85                 | 100%
G-HNCNN   | 0.090   | 10/37/9.72 (2)             | 1.91                 | 100%
G-NCNN    | 0.003   | 11/42/8.80 (21)            | 1.84                 | 100%
G-NCNN    | 0.006   | 10/36/9.00 (6)             | 1.76                 | 100%
G-NCNN    | 0.008   | 10/36/9.00 (15)            | 1.78                 | 100%
G-NCNN    | 0.060   | 10/36/9.00 (1)             | 2.65                 | 100%
G-NCNN    | 0.090   | 14/35/12.83 (1)            | 4.53                 | 2%

TABLE III
SIMULATION RESULTS OBTAINED BY THE G-NHNCNN WITH DIFFERENT NIFs FOR THE BENCHMARK PROBLEM BM 1 AT LOWER INITIAL NOISE AMPLITUDES OF 0.003, 0.006, AND 0.008 IN 50 DIFFERENT RUNS

A[n(0)] | δ   | Best Solutions (Frequency) | Average Runtimes (s)
0.003   | 1.5 | 11/42/8.80 (6)             | 2.15
0.003   | 2.0 | 11/42/8.80 (8)             | 2.16
0.003   | 3.0 | 10/37/8.83 (1)             | 2.06
0.003   | 4.0 | 10/37/8.83 (2)             | 1.93
0.003   | 5.0 | 10/37/9.72 (9)             | 1.76
0.006   | 1.5 | 10/36/9.00 (3)             | 2.32
0.006   | 2.0 | 10/37/8.83 (2)             | 2.22
0.006   | 3.0 | 10/37/8.83 (12)            | 2.04
0.006   | 4.0 | 10/37/8.83 (17)            | 1.60
0.006   | 5.0 | 10/37/8.83 (24)            | 1.44
0.008   | 1.5 | 10/37/8.83 (3)             | 2.23
0.008   | 2.0 | 10/37/8.83 (9)             | 2.12
0.008   | 3.0 | 10/37/8.83 (14)            | 1.72
0.008   | 4.0 | 10/37/8.83 (27)            | 1.52
0.008   | 5.0 | 10/37/8.83 (17)            | 1.52

Convergence rate = 100%.

TABLE IV
SIMULATION RESULTS OBTAINED BY THE G-NHNCNN WITH DIFFERENT NRFs FOR THE BENCHMARK PROBLEM BM 1 AT HIGHER INITIAL NOISE AMPLITUDES OF 0.06 AND 0.09 IN 50 DIFFERENT RUNS

A[n(0)] | δ   | Best Solutions (Frequency) | Average Runtimes (s)
0.06    | 0.8 | 10/37/8.83 (8)             | 1.75
0.06    | 0.7 | 10/37/8.83 (10)            | 1.54
0.06    | 0.6 | 10/37/8.83 (12)            | 1.61
0.06    | 0.5 | 10/37/8.83 (15)            | 1.65
0.06    | 0.4 | 10/37/8.83 (8)             | 1.98
0.09    | 0.8 | 10/37/8.83 (2)             | 1.93
0.09    | 0.7 | 10/37/8.83 (4)             | 1.77
0.09    | 0.6 | 10/37/8.83 (4)             | 1.72
0.09    | 0.5 | 10/37/8.83 (11)            | 1.69
0.09    | 0.4 | 10/37/8.83 (10)            | 1.82

Convergence rate = 100%.
As seen from Tables II and V, the G-HNCNN is inferior to the G-NCNN at the lower initial noise amplitudes of 0.003, 0.006, and 0.008, while it is superior to the G-NCNN at the higher initial noise amplitudes of 0.03, 0.04, 0.06, and 0.09. In the following, we use the G-NHNCNN with NRFs/NIFs to solve the two benchmark problems in order to demonstrate the effects of the NRF/NIF on optimization performance.

First, we use the G-NHNCNN with different NIFs to solve the two benchmark problems at the lower initial noise amplitudes of 0.003, 0.006, and 0.008. The simulation results are summarized in Tables III and VI. For convenient comparison, the evaluation indices of the best optimization results obtained by the G-HNCNN, the G-NCNN, and the G-NHNCNN with NIFs for BM 1 and BM 2 are plotted in Fig. 4. As seen from Table III and Fig. 4(a)-(b), the G-NHNCNN with larger NIFs tends to have a larger frequency and a smaller average runtime in obtaining the maximal transmissions P_max = 37 with the minimal frame length M_min = 10 for BM 1, while both the G-HNCNN and the G-NCNN have difficulty achieving the maximal transmissions at the lower initial noise amplitudes of 0.003, 0.006, and 0.008. As seen from Table VI and Fig. 4(c)-(e), the G-NHNCNN with larger NIFs also tends to take a smaller average runtime to obtain better solutions, together with a smaller mean ± SD of η, for BM 2 than the G-HNCNN and the G-NCNN at most of the lower initial noise amplitudes. In addition, an appropriately larger NIF is needed to achieve the best performance at a smaller initial noise amplitude. For example, in order to achieve the best performance shown in Fig. 4, a larger NIF of 5.0 is needed at A[n(0)] = 0.003 while a smaller NIF of 4.0 suffices at A[n(0)] = 0.008 for BM 1, and a larger NIF of 2.0 is needed at A[n(0)] = 0.003 while a smaller NIF of 1.8 suffices at A[n(0)] = 0.008 for BM 2. Note that the best solution 9/142/7.37 for BM 2 obtained by the G-NHNCNN is better than the results obtained by the algorithms of [10]-[12], [38], as summarized in Table VIII.

Second, we use the G-NHNCNN with different NRFs to solve the two benchmark problems at the higher initial noise amplitudes of 0.03, 0.04, 0.06, and 0.09. The simulation results are summarized in Tables IV and VII. Similarly, the evaluation indices of the best optimization results obtained by the G-HNCNN, the G-NCNN, and the G-NHNCNN with NRFs are plotted in Fig. 5 for convenient comparison. It can be seen from Table IV and Fig. 5(a) and (b) that the G-NHNCNN with NRFs can take a smaller average runtime to improve the frequency of achieving the best solutions for BM 1. Results in Table VII and Fig. 5(c)-(e) show that, although the G-NHNCNN with NRFs can obtain larger transmissions for BM 2, the evaluation indices of both the average runtime and the mean ± SD of η achieved by the G-NHNCNN with NRFs become worse. Besides, the best solution 9/132/7.72 obtained with NRFs is inferior to 9/142/7.37 obtained with NIFs. All of these imply that, as the scale of the BSP increases, the G-NHNCNN with NRFs tends to be inferior to the G-NHNCNN with NIFs. Nevertheless, the G-NHNCNN with NIFs/NRFs is able to obtain better solutions for BM 1 and BM 2 at lower/higher initial noise amplitudes.
Fig. 4. Evaluation indices of the best optimization results obtained by the G-HNCNN, the G-NCNN, and the G-NHNCNN with NIFs for BM 1 and BM 2 at lower initial noise amplitudes of 0.003, 0.006, and 0.008 in 50 different runs. (a) Frequency of the maximal transmissions Pmax = 37 obtained for BM 1. (b) Average runtimes for BM 1. (c) Best transmissions achieved for BM 2. (d) Average runtimes for BM 2. (e) Mean ± SD of η obtained for BM 2.
Fig. 5. Evaluation indices of the best optimization results obtained by the G-HNCNN, the G-NCNN, and the G-NHNCNN with NRFs for BM 1 and BM 2 at higher initial noise amplitudes of 0.06 and 0.09 (BM 1) and 0.03 and 0.04 (BM 2) in 50 different runs, where "N/A" represents that the G-NCNN cannot find any feasible solutions in the first stage. (a) Frequency of the maximal transmissions Pmax = 37 obtained for BM 1. (b) Average runtimes for BM 1. (c) Best transmissions achieved for BM 2. (d) Average runtimes for BM 2. (e) Mean ± SD of η obtained for BM 2.

TABLE V
SIMULATION RESULTS OBTAINED BY THE G-HNCNN AND THE G-NCNN FOR THE BENCHMARK PROBLEM BM 2 AT DIFFERENT INITIAL NOISE AMPLITUDES IN 50 DIFFERENT RUNS

Algorithm | A[n(0)] | Best Solutions | Average Runtimes (s) | Convergence Rate | M (mean ± SD/max/min) | η (mean ± SD/max/min)
G-HNCNN   | 0.003   | 9/134/7.62     | 7.79 | 98%  | 10.41 ± 0.96/13/9 | 7.97 ± 0.23/8.54/7.56
G-HNCNN   | 0.006   | 9/134/7.56     | 6.91 | 100% | 10.24 ± 0.85/12/9 | 7.89 ± 0.18/8.21/7.52
G-HNCNN   | 0.008   | 9/138/7.51     | 5.16 | 100% | 9.78 ± 0.68/11/9  | 7.73 ± 0.12/7.98/7.49
G-HNCNN   | 0.030   | 9/126/7.86     | 2.20 | 100% | 9.00 ± 0.00/9/9   | 8.13 ± 0.14/8.41/7.85
G-HNCNN   | 0.040   | 9/122/8.15     | 2.58 | 100% | 9.00 ± 0.00/9/9   | 8.34 ± 0.12/8.63/8.06
G-NCNN    | 0.003   | 9/136/7.58     | 6.08 | 100% | 10.12 ± 0.85/12/9 | 7.92 ± 0.21/8.38/7.58
G-NCNN    | 0.006   | 9/138/7.49     | 3.41 | 100% | 9.34 ± 0.52/11/9  | 7.74 ± 0.17/8.19/7.47
G-NCNN    | 0.008   | 9/139/7.50     | 2.33 | 100% | 9.08 ± 0.27/10/9  | 7.80 ± 0.13/8.30/7.50
G-NCNN    | 0.030   | 9/120/8.19     | 3.28 | 98%  | 9.10 ± 0.37/11/9  | 8.49 ± 0.27/9.63/8.18
G-NCNN    | 0.040   | N/A            | N/A  | 0%   | N/A               | N/A

"N/A" represents that the G-NCNN cannot find any feasible solutions in the first stage.
B. Using NIFs and NRFs to Determine Whether the Initial Noise Amplitude 0.01 is Lower or Higher for Case 1 and Case 2

In order to apply the noise tuning factor to a new BSP at a certain initial noise amplitude, it is necessary to determine whether that initial noise amplitude is lower or higher. However, there is no direct method to determine whether a certain initial noise amplitude is lower or higher for a new BSP.
Note that both NIFs and NRFs cause stronger hysteretic dynamics but have different effects on stochastic wandering: NIFs strengthen stochastic wandering, while NRFs weaken it. If a certain initial noise amplitude is lower/higher for a new BSP, NIFs/NRFs enable the G-NHNCNN to obtain better solutions than the G-HNCNN. Using these characteristics, the G-NHNCNN can use NIFs and NRFs to determine whether a certain initial noise amplitude is lower or higher for a new BSP.
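A minimal sketch of this decision rule is given below. Here solve_hncnn and solve_nhncnn are hypothetical caller-supplied wrappers around G-HNCNN and G-NHNCNN runs; each is assumed to return the best solution encoded as the tuple (M_min, −P_max, η_min), so that smaller tuples are better (a lexicographic reading of the comparison criteria in Section V).

```python
def probe_noise_level(solve_hncnn, solve_nhncnn, nif=1.5, nrf=0.5):
    """Classify a given initial noise amplitude as 'lower' or 'higher'
    for a new BSP by probing with one NIF and one NRF.

    solve_hncnn() and solve_nhncnn(delta) are hypothetical wrappers;
    each returns (M_min, -P_max, eta_min) so smaller tuples are better.
    """
    base = solve_hncnn()
    if solve_nhncnn(nif) < base:
        return "lower"    # an NIF helps: the amplitude is on the low side
    if solve_nhncnn(nrf) < base:
        return "higher"   # an NRF helps: the amplitude is on the high side
    return "inconclusive"
```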
TABLE VI
SIMULATION RESULTS OBTAINED BY THE G-NHNCNN WITH DIFFERENT NIFs FOR THE BENCHMARK PROBLEM BM 2 AT LOWER INITIAL NOISE AMPLITUDES OF 0.003, 0.006, AND 0.008 IN 50 DIFFERENT RUNS

A[n(0)] | δ   | Best Solutions | Average Runtimes (s) | M (mean ± SD/max/min) | η (mean ± SD/max/min)
0.003   | 1.0 | 9/136/7.58     | 6.88 | 10.12 ± 0.93/13/9 | 7.89 ± 0.20/8.37/7.58
0.003   | 1.3 | 9/136/7.58     | 6.88 | 10.24 ± 0.87/13/9 | 7.92 ± 0.20/8.57/7.58
0.003   | 1.8 | 9/136/7.58     | 5.76 | 10.00 ± 0.70/11/9 | 7.90 ± 0.19/8.40/7.58
0.003   | 2.0 | 9/141/7.45     | 4.88 | 9.78 ± 0.68/11/9  | 7.80 ± 0.18/8.17/7.45
0.003   | 3.0 | 9/139/7.41     | 3.68 | 9.44 ± 0.64/11/9  | 7.71 ± 0.15/8.12/7.41
0.006   | 1.0 | 9/138/7.55     | 5.61 | 9.94 ± 0.71/12/9  | 7.82 ± 0.15/8.19/7.55
0.006   | 1.3 | 9/138/7.41     | 5.39 | 9.86 ± 0.78/11/9  | 7.72 ± 0.15/8.06/7.41
0.006   | 1.8 | 9/141/7.38     | 3.89 | 9.50 ± 0.58/11/9  | 7.62 ± 0.11/7.91/7.38
0.006   | 2.0 | 9/142/7.37     | 3.32 | 9.30 ± 0.51/11/9  | 7.59 ± 0.12/7.93/7.37
0.006   | 3.0 | 9/140/7.46     | 2.01 | 9.04 ± 0.20/10/9  | 7.64 ± 0.10/8.08/7.46
0.008   | 1.0 | 9/140/7.46     | 4.06 | 9.52 ± 0.74/12/9  | 7.67 ± 0.15/8.21/7.46
0.008   | 1.3 | 9/140/7.40     | 2.86 | 9.18 ± 0.39/10/9  | 7.57 ± 0.10/7.90/7.40
0.008   | 1.8 | 9/142/7.40     | 2.60 | 9.12 ± 0.33/10/9  | 7.55 ± 0.09/7.85/7.37
0.008   | 2.0 | 9/140/7.39     | 2.95 | 9.26 ± 0.44/10/9  | 7.55 ± 0.10/7.79/7.39
0.008   | 3.0 | 9/135/7.58     | 1.88 | 9.00 ± 0.00/9/9   | 7.74 ± 0.10/8.03/7.56

Convergence rate = 100%.

TABLE VII
SIMULATION RESULTS OBTAINED BY THE G-NHNCNN WITH DIFFERENT NRFs FOR THE BENCHMARK PROBLEM BM 2 AT HIGHER INITIAL NOISE AMPLITUDES OF 0.03 AND 0.04 IN 50 DIFFERENT RUNS

A[n(0)] | δ   | Best Solutions | Average Runtimes (s) | M (mean ± SD/max/min) | η (mean ± SD/max/min)
0.03    | 0.8 | 9/128/7.83     | 2.31 | 9.02 ± 0.14/10/9 | 8.08 ± 0.15/8.53/7.82
0.03    | 0.7 | 9/130/7.78     | 2.81 | 9.12 ± 0.33/10/9 | 8.11 ± 0.23/8.80/7.78
0.03    | 0.6 | 9/132/7.72     | 3.38 | 9.28 ± 0.45/10/9 | 8.13 ± 0.22/8.66/7.72
0.03    | 0.5 | 9/128/7.87     | 4.03 | 9.44 ± 0.50/10/9 | 8.19 ± 0.22/8.58/7.84
0.03    | 0.4 | 9/130/7.78     | 3.81 | 9.36 ± 0.56/11/9 | 8.15 ± 0.26/8.88/7.78
0.04    | 0.8 | 9/122/8.03     | 2.65 | 9.02 ± 0.14/10/9 | 8.28 ± 0.13/8.60/8.03
0.04    | 0.7 | 9/125/7.97     | 2.64 | 9.02 ± 0.14/10/9 | 8.24 ± 0.12/8.65/7.97
0.04    | 0.6 | 9/125/7.91     | 2.80 | 9.02 ± 0.14/10/9 | 8.20 ± 0.16/8.92/7.91
0.04    | 0.5 | 9/129/7.92     | 5.05 | 9.56 ± 0.61/11/9 | 8.42 ± 0.35/9.33/7.88
0.04    | 0.4 | 9/127/7.85     | 6.05 | 9.78 ± 0.88/12/9 | 8.46 ± 0.45/9.60/7.85

Convergence rate = 100%.

In the following, we use the G-NHNCNN with different noise tuning factors, varied from NIF to NRF, to determine whether the initial noise amplitude 0.01 is lower or higher for Case 1 and Case 2. The G-HNCNN and the G-NCNN are also applied to solve Case 1 and Case 2 at the initial noise amplitude of 0.01 for comparison. The simulation results summarized in Table IX show that, in the evaluation indices of the best solutions together with the mean ± SD/max/min of M and η, the G-NHNCNN with some NIFs outperforms the G-HNCNN, while the G-NHNCNN with all the NRFs is inferior to the G-HNCNN for both Case 1 and Case 2 at the initial noise amplitude of 0.01. This indicates that the initial noise amplitude 0.01 is lower for Case 1 and Case 2. By comparison, we can see that the best transmission schedules of Case 1 and Case 2 obtained by the G-NHNCNN with NIFs, i.e., 9/155/7.14 and 19/292/15.56, are also superior to those obtained by the G-NCNN.

C. NIF-Based Method to Search for the Best Solution for Case 3

From Section V-A we can see that, compared to NRFs, NIFs can more easily help the G-NHNCNN improve its optimization performance. Hence, we should primarily select
the G-NHNCNN with NIFs to better solve a new BSP at lower initial noise amplitudes. For given lower initial noise amplitudes a0, a1, a2, . . . , an, we use the following method to search for both an NIF and an initial noise amplitude that achieve the best solution (assume a0 < a1 < a2 < · · · < an).

Step 1: The variable i is initialized to 0.
Step 2: The NIF δ is initialized to 1.0.
Step 3: The G-NHNCNN with the NIF δ is applied to the BSP in 50 different runs at the initial noise amplitude ai. Take the best solution obtained by the G-NHNCNN with the NIF δ as Si,δ.
Step 4: If Si,δ is the best among Si,δ−0.1, Si,δ−0.2, . . . , Si,1, then increase the NIF δ by 0.1 and go to Step 3. Otherwise, take Si,θi (θi = δ − 0.1) as the best solution at the initial noise amplitude ai.
Step 5: If Si,θi is the current best solution over the initial noise amplitudes ai, ai−1, . . . , a0, then increase the variable i by 1 and go to Step 2. Otherwise, stop the computational process, and output the best solution Si−1,θi−1, the initial noise amplitude ai−1, and the NIF θi−1.
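Steps 1–5 can be written compactly as the following Python sketch; solve(a, delta) is a hypothetical wrapper that runs the G-NHNCNN 50 times at the initial noise amplitude a with the NIF delta and returns the best solution as a comparable tuple (M_min, −P_max, η_min), so that smaller is better.

```python
def search_best_nif(solve, amplitudes, step=0.1):
    """Steps 1-5: for each amplitude a_i (in increasing order), grow the
    NIF from 1.0 in increments of `step` while the best solution keeps
    improving (Steps 2-4); stop over amplitudes once the per-amplitude
    best no longer improves on the running best (Step 5)."""
    overall = None                          # (solution, a_i, theta_i)
    for a in amplitudes:                    # Steps 1 and 5
        delta, best = 1.0, solve(a, 1.0)    # Steps 2 and 3
        while True:                         # Step 4
            cand = solve(a, delta + step)
            if cand < best:
                delta, best = delta + step, cand
            else:
                break                       # theta_i = delta
        if overall is not None and overall[0] <= best:
            return overall                  # Step 5: output S_{i-1}
        overall = (best, a, delta)
    return overall
```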
Fig. 6. Comparisons of simulation results obtained for Case 3 by the G-HNCNN, the G-NCNN, and the G-NHNCNN with different NIFs at the given lower initial noise amplitudes in 50 different runs. (a) Maximal transmissions. (b) Average runtimes. (c) Mean ± SD of η.
Based on the method described above, we use the G-NHNCNN with NIFs to solve Case 3 for the given lower initial noise amplitudes of 0.003, 0.006, 0.008, and 0.010. As a result, the optimal solution 23/659/16.83 is found at A[n(0)] = 0.008 and NIF = 2.3. For comparison, the G-NCNN and the G-HNCNN are also applied to solve Case 3 at the same lower initial noise amplitudes. The simulation results obtained by the G-NCNN, the G-HNCNN, and the G-NHNCNN with NIFs are summarized in Table X, where two groups of NIFs are listed for the G-NHNCNN at each lower initial noise amplitude. For easy comparison, the results in Table X are plotted in Fig. 6.

As seen from Fig. 6, the G-NCNN is superior to the G-HNCNN in the indices of maximal transmissions and mean ± SD of η at the lower initial noise amplitudes. More importantly, the G-NHNCNN with the first group of NIFs is not only superior to the G-HNCNN and the G-NCNN in the indices of maximal transmissions and mean ± SD of η, but also takes smaller average runtimes than the G-NCNN at the lower initial noise amplitudes. In addition, the G-NHNCNN with the second group of NIFs can find the best solution at each lower initial noise amplitude, which suggests that the G-NHNCNN has better global searching ability than both the G-NCNN and the G-HNCNN. In order to further show the differences among the G-NHNCNN with the first group of NIFs, the G-HNCNN, and the G-NCNN, several paired t-tests are performed among the three algorithms, as shown in Table XI. The results in Table XI suggest that, at the lower initial noise amplitudes, the G-NHNCNN with the first group of NIFs performs significantly better than the G-NCNN and the G-HNCNN, and the G-NCNN performs significantly better than the G-HNCNN.
TABLE VIII
BEST SOLUTIONS OBTAINED BY THE G-NHNCNN, THE MGA, THE CPGA, THE GACFS, AND THE FSMA FOR BM 2

Index | G-NHNCNN | MGA  | CPGA | GACFS | FSMA
Mmin  | 9        | 9    | 9    | 9     | 9
Pmax  | 142      | 133  | 136  | 139   | 134
ηmin  | 7.37     | 7.89 | 6.95 | —     | —
D. Discussion

As seen from Figs. 4 and 6, NIFs help the G-NHNCNN achieve the best performance at each lower noise amplitude by strengthening stochastic noises and hysteretic dynamics. As the initial noise amplitude increases, the G-HNCNN can also improve its optimization performance through stronger stochastic noises and hysteretic dynamics. However, the G-HNCNN still cannot reach the optimization performance of the G-NHNCNN with NIFs. The main reason may be that the stochastic noises added in the hysteretic activation function of the G-HNCNN are also amplified as the initial noise amplitude increases, which prevents the G-HNCNN from quickly converging to optimal solutions.

As seen from Fig. 5, the G-NHNCNN with NRFs is able to find larger transmissions than the G-HNCNN and the G-NCNN at higher noise amplitudes thanks to the noise suppression of NRFs, but it is not as powerful as the G-NHNCNN with NIFs at lower noise amplitudes. The main reason may be that the stochastic noises added in the hysteretic activation function of the G-NHNCNN are hardly weakened by NRFs, which prevents the G-NHNCNN from quickly converging to optimal solutions. This shortcoming of the G-NHNCNN with NRFs becomes more and more obvious as the scale of the BSP increases. A feasible improvement may be to introduce an extra factor to modulate the stochastic noises added in the hysteretic activation function of the G-NHNCNN. Nevertheless, NRFs have some positive effects. For example, NRFs can be combined with NIFs to determine whether a certain initial noise amplitude is lower or higher for new BSPs, as confirmed in Section V-B.
TABLE IX
SIMULATION RESULTS OBTAINED BY THE G-NHNCNN WITH NIFs, THE G-NHNCNN WITH NRFs, THE G-NCNN, AND THE G-HNCNN FOR CASE 1 AND CASE 2 AT THE INITIAL NOISE AMPLITUDE 0.01 IN 50 DIFFERENT RUNS

Case   | Algorithm           | δ   | Best Solutions | Average Runtimes (s) | M (mean ± SD/max/min) | η (mean ± SD/max/min)
Case 1 | G-NCNN              | —   | 9/154/7.21     | 5.87  | 9.20 ± 0.45/10/9   | 7.25 ± 0.08/7.45/7.11
Case 1 | G-HNCNN             | 0.9 | 9/154/7.21     | 8.40  | 9.58 ± 0.50/10/9   | 7.26 ± 0.07/7.41/7.10
Case 1 | G-NHNCNN with NIFs  | 1.0 | 9/154/7.14     | 7.28  | 9.48 ± 0.54/11/9   | 7.23 ± 0.07/7.38/7.12
Case 1 | G-NHNCNN with NIFs  | 1.3 | 9/154/7.16     | 6.67  | 9.32 ± 0.47/10/9   | 7.20 ± 0.07/7.34/7.08
Case 1 | G-NHNCNN with NIFs  | 1.5 | 9/155/7.14     | 6.41  | 9.28 ± 0.45/10/9   | 7.19 ± 0.06/7.34/7.10
Case 1 | G-NHNCNN with NIFs  | 1.8 | 9/154/7.13     | 5.84  | 9.08 ± 0.27/10/9   | 7.17 ± 0.05/7.32/7.08
Case 1 | G-NHNCNN with NIFs  | 2.0 | 9/153/7.15     | 5.85  | 9.12 ± 0.33/10/9   | 7.19 ± 0.05/7.30/7.07
Case 1 | G-NHNCNN with NRFs  | 0.8 | 9/154/7.21     | 7.95  | 9.64 ± 0.53/11/9   | 7.26 ± 0.09/7.49/7.13
Case 1 | G-NHNCNN with NRFs  | 0.6 | 9/154/7.23     | 8.91  | 9.90 ± 0.51/11/9   | 7.30 ± 0.08/7.48/7.15
Case 1 | G-NHNCNN with NRFs  | 0.4 | 9/150/7.16     | 9.94  | 10.14 ± 0.50/11/9  | 7.34 ± 0.06/7.52/7.16
Case 1 | G-NHNCNN with NRFs  | 0.2 | 9/150/7.23     | 10.37 | 10.24 ± 0.48/11/9  | 7.38 ± 0.07/7.50/7.23
Case 2 | G-NCNN              | —   | 19/290/15.71   | 71.19 | 19.00 ± 0.00/19/19 | 15.75 ± 0.07/15.87/15.58
Case 2 | G-HNCNN             | 0.9 | 19/290/15.59   | 75.89 | 19.00 ± 0.00/19/19 | 15.78 ± 0.07/15.92/15.59
Case 2 | G-NHNCNN with NIFs  | 1.0 | 19/290/15.63   | 81.96 | 19.02 ± 0.14/20/19 | 15.76 ± 0.09/15.96/15.54
Case 2 | G-NHNCNN with NIFs  | 1.5 | 19/290/15.55   | 81.39 | 19.00 ± 0.00/19/19 | 15.69 ± 0.07/15.90/15.54
Case 2 | G-NHNCNN with NIFs  | 1.8 | 19/292/15.63   | 79.31 | 19.00 ± 0.00/19/19 | 15.69 ± 0.07/15.86/15.53
Case 2 | G-NHNCNN with NIFs  | 2.0 | 19/292/15.56   | 76.85 | 19.00 ± 0.00/19/19 | 15.67 ± 0.08/15.93/15.51
Case 2 | G-NHNCNN with NIFs  | 3.0 | 19/290/15.69   | 51.69 | 19.02 ± 0.14/20/19 | 16.31 ± 0.24/16.71/15.69
Case 2 | G-NHNCNN with NRFs  | 0.7 | 19/288/15.68   | 74.70 | 19.08 ± 0.27/20/19 | 15.86 ± 0.09/16.18/15.68
Case 2 | G-NHNCNN with NRFs  | 0.5 | 19/287/15.74   | 78.34 | 19.18 ± 0.39/20/19 | 15.92 ± 0.17/16.16/15.74
Case 2 | G-NHNCNN with NRFs  | 0.3 | 19/287/15.71   | 82.61 | 19.28 ± 0.45/20/19 | 15.97 ± 0.12/16.26/15.71
Case 2 | G-NHNCNN with NRFs  | 0.1 | 19/285/15.82   | 85.15 | 19.38 ± 0.60/21/19 | 16.04 ± 0.14/16.44/15.81

Convergence rate = 100%.
TABLE X
SIMULATION RESULTS OBTAINED BY THE G-NHNCNN WITH DIFFERENT NIFs, THE G-HNCNN, AND THE G-NCNN FOR CASE 3 FOR THE GIVEN LOWER INITIAL NOISE AMPLITUDES IN 50 DIFFERENT RUNS

A[n(0)] | Algorithm | δ   | Best Solutions | Average Runtimes (s) | Convergence Rate | M (mean ± SD/max/min) | η (mean ± SD/max/min)
0.003   | G-NCNN    | —   | 23/634/17.17   | 313.57 | 100% | 23.04 ± 0.20/24/23 | 17.27 ± 0.08/17.45/17.14
0.003   | G-HNCNN   | 0.9 | 23/629/17.20   | 207.33 | 100% | 23.10 ± 0.30/24/23 | 17.33 ± 0.08/17.52/17.12
0.003   | G-NHNCNN  | 2.0 | 23/637/17.09   | 188.19 | 100% | 23.00 ± 0.00/23/23 | 17.19 ± 0.08/17.38/17.04
0.003   | G-NHNCNN  | 4.7 | 23/651/16.73   | 423.20 | 100% | 23.00 ± 0.00/23/23 | 16.92 ± 0.07/17.14/16.73
0.006   | G-NCNN    | —   | 23/643/16.89   | 344.14 | 100% | 23.00 ± 0.00/23/23 | 17.06 ± 0.08/17.22/16.89
0.006   | G-HNCNN   | 0.9 | 23/638/17.04   | 193.27 | 100% | 23.02 ± 0.14/24/23 | 17.20 ± 0.08/17.40/17.02
0.006   | G-NHNCNN  | 2.0 | 23/646/16.82   | 211.03 | 100% | 23.00 ± 0.00/23/23 | 16.92 ± 0.07/17.08/16.81
0.006   | G-NHNCNN  | 3.0 | 23/658/16.70   | 290.89 | 84%  | 23.10 ± 0.30/24/23 | 16.81 ± 0.08/17.00/16.64
0.008   | G-NCNN    | —   | 23/652/16.79   | 378.16 | 100% | 23.00 ± 0.00/23/23 | 16.90 ± 0.06/17.04/16.74
0.008   | G-HNCNN   | 0.9 | 23/637/16.95   | 367.63 | 100% | 23.00 ± 0.00/23/23 | 17.09 ± 0.08/17.26/16.95
0.008   | G-NHNCNN  | 1.8 | 23/653/16.58   | 372.57 | 100% | 23.06 ± 0.24/24/23 | 16.79 ± 0.07/17.00/16.58
0.008   | G-NHNCNN  | 2.3 | 23/659/16.83   | 484.51 | 84%  | 23.02 ± 0.15/24/23 | 16.80 ± 0.07/16.99/16.67
0.010   | G-NCNN    | —   | 23/654/16.79   | 447.22 | 34%  | 23.06 ± 0.24/24/23 | 16.90 ± 0.07/17.01/16.79
0.010   | G-HNCNN   | 0.9 | 23/642/16.98   | 207.67 | 100% | 23.02 ± 0.14/24/23 | 17.04 ± 0.08/17.21/16.85
0.010   | G-NHNCNN  | 1.6 | 23/655/16.78   | 431.70 | 100% | 23.02 ± 0.14/24/23 | 16.81 ± 0.07/16.99/16.66
0.010   | G-NHNCNN  | 1.9 | 23/657/16.75   | 525.52 | 74%  | 23.22 ± 0.42/24/23 | 16.83 ± 0.06/16.97/16.72
In addition, we should stress that the G-NCNN is inferior to both the G-NHNCNN and the G-HNCNN at higher initial noise amplitudes for lack of noise-suppressing hysteretic dynamics. For example, we find from Tables II, V, and X that the G-NCNN can only achieve convergence rates of 2%, 0%, and 34% for BM 1, BM 2, and Case 3 at the initial noise amplitudes of 0.09, 0.04, and 0.01, respectively, while both the G-NHNCNN and the G-HNCNN obtain 100% convergence rates for all the BSPs at the same initial noise amplitudes.

In order to further confirm the above discussion of the optimization performance of these algorithms, we apply the G-HNCNN, the G-NCNN, and the G-NHNCNN to solve BM 2 as the initial noise amplitude varies from 0.003 to 0.018 in steps of 0.001, and plot the maximal transmissions obtained by the G-HNCNN, the G-NCNN, and the G-NHNCNN, together with the corresponding noise tuning factors, in Fig. 7. As seen from Fig. 7(a), the G-NHNCNN has absolute optimization advantages over the G-HNCNN and the G-NCNN. As the initial noise amplitude increases from 0.009 to 0.018, the G-HNCNN begins to outperform the G-NCNN.
SUN et al.: NOISE-TUNING-BASED HYSTERETIC NCNN FOR BSP IN WMNs
1917
TABLE XI PAIRED t -T ESTS OF AVERAGE T IME D ELAY η A MONG THE G-NHNCNN W ITH THE F IRST G ROUP OF NIFs, THE G-HNCNN, AND THE G-NCNN FOR C ASE 3 AT L OWER I NITIAL N OISE A MPLITUDES OF 0.003, 0.006, AND 0.008 A[n(0)]
Paired t-Test
0.003
P-value (one-tail)
T-value
0.006
0.008
Fig. 7. Maximal transmissions obtained for BM 2 by the G-NCNN, the G-HNCNN, and the G-NHNCNN with corresponding noise tuning factors as the initial noise amplitude varies from 0.003 to 0.018 in the interval of 0.001. (a) Maximal transmissions with the frame length M = 9. (b) Corresponding noise tuning factors used in the G-NHNCNN for the maximal transmissions.
In addition, the G-NHNCNN with NIFs is superior to the G-NHNCNN with NRFs. As seen from Fig. 7(b), noise tuning factors used in the G-NHNCNN for the maximal transmissions decrease with the increasing of the initial noise amplitude. Finally, we propose to primarily use NIFs at lower initial noise amplitudes in applications of the G-NHNCNN to BSPs, because the G-NHNCNN with NIFs more easily achieves better solutions. Note that not all the NIFs can effectively improve the optimization performance at lower initial noise amplitudes. For one thing, an appropriately larger NIF is needed to achieve the best solution at the smaller initial noise amplitude. For another, NIFs that are too large have tendencies to cause a decayed optimization performance, because they can more easily destroy efficient chaotic searching by excessively strengthening stochastic noises. In order to obtain the best solution, we can use the method presented in Section V-C. VI. C ONCLUSION In this paper, we proposed a novel NHNCNN to solve BSP in WMNs. Unlike the HNCNN, the NHNCNN has a noise tuning factor. Modulation of the noise tuning factor can facilitate balancing of stochastic wandering and chaotic searching of SCSA. In addition, the noise tuning factor can cause sigmoid center of the activation function in the NHNCNN to evolve larger amplitudes, which can help the NHNCNN escape from local minima more effectively. Because of these two
aspects, the proposed NHNCNN can increase the probability of obtaining better solutions, regardless of whether the initial noise amplitude is lower or higher. Finally, the G-NHNCNN, which combines the NHNCNN and the GES, was applied to solve two benchmark problems and three random cases of the BSP in WMNs. In the simulations, the effects of the noise tuning factor on the optimization performance were demonstrated, and the noise tuning factor was used to determine whether a certain initial noise amplitude is lower or higher. In addition, an NIF-based method to achieve the best solution at lower initial noise amplitudes was presented. The simulation results showed that, with the help of the noise tuning factor, the proposed G-NHNCNN can find better TDMA solutions than both the G-HNCNN and the G-NCNN.

REFERENCES

[1] L. Wang and H. Shi, "A gradual noisy chaotic neural network for solving the broadcast scheduling problem in packet radio networks," IEEE Trans. Neural Netw., vol. 17, no. 4, pp. 989–1000, Jul. 2006.
[2] H. Shi and L. Wang, "Broadcast scheduling in wireless multihop networks using a neural-network-based hybrid algorithm," Neural Netw., vol. 18, nos. 5–6, pp. 765–771, 2005.
[3] N. Funabiki and J. Kitamichi, "A gradual neural network algorithm for broadcast scheduling problems in packet radio networks," IEICE Trans. Fundam., vol. E82-A, no. 5, pp. 815–824, 1999.
[4] J. Yeo, H. Lee, and S. Kim, "An efficient broadcast scheduling algorithm for TDMA ad-hoc networks," Comput. Oper. Res., vol. 29, no. 13, pp. 1793–1806, 2002.
[5] S. Salcedo-Sanz, C. Bousono-Calzon, and A. R. Figueiras-Vidal, "A mixed neural-genetic algorithm for the broadcast scheduling problem," IEEE Trans. Wireless Commun., vol. 2, no. 2, pp. 277–283, Mar. 2003.
[6] M. Sun, L. Zhao, W. Cao, Y. Xu, X. Dai, and X. Wang, "Novel hysteretic noisy chaotic neural network for broadcast scheduling problems in packet radio networks," IEEE Trans. Neural Netw., vol. 21, no. 9, pp. 1422–1433, Sep. 2010.
[7] G. Wang and N. Ansari, "Optimal broadcast scheduling in packet radio networks using mean field annealing," IEEE J. Sel. Areas Commun., vol. 15, no. 2, pp. 250–260, Feb. 1997.
[8] A. Ephremides and T. V. Truong, "Scheduling broadcast in multihop radio networks," IEEE Trans. Commun., vol. 38, no. 6, pp. 456–460, Jun. 1990.
[9] N. Funabiki and Y. Takefuji, "A parallel algorithm for broadcast scheduling problems in packet radio networks," IEEE Trans. Commun., vol. 41, no. 6, pp. 828–831, Jun. 1993.
[10] X. Wu, B. S. Sharif, O. R. Hinton, and C. C. Tsimenidis, "Solving optimum TDMA broadcast scheduling in mobile ad hoc networks: A competent permutation genetic algorithm approach," IEE Proc. Commun., vol. 152, no. 6, pp. 780–788, Dec. 2005.
[11] I. Ahmad, B. Al-Kazemi, and A. S. Das, "An efficient algorithm to find broadcast schedule in ad hoc TDMA networks," J. Comput. Syst., Netw., Commun., vol. 2008, pp. 1–10, Dec. 2008.
[12] R. Gunasekaran, S. Siddharth, P. Krishnaraj, M. Kalaiarasan, and V. R. Uthariaraj, "Efficient algorithms to solve broadcast scheduling problem in WiMAX mesh networks," Comput. Commun., vol. 33, no. 11, pp. 1325–1333, 2010.
[13] Z. Lü and J. K. Hao, "A memetic algorithm for graph coloring," Eur. J. Oper. Res., vol. 203, no. 1, pp. 241–250, 2010.
[14] D. H. Al-Omari and K. E. Sabri, "New graph coloring algorithms," Amer. J. Math. Stat., vol. 2, no. 4, pp. 739–741, 2006.
[15] M. Gavrilova and K. Ahmadian, "On-demand chaotic neural network for broadcast scheduling problem," J. Supercomput., vol. 59, no. 2, pp. 811–829, 2012.
[16] X. Yang, J. Cao, and J. Lu, "Synchronization of Markovian coupled neural networks with nonidentical node-delays and random coupling strengths," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 1, pp. 60–71, Jan. 2012.
[17] D. Li, M. Han, and J. Wang, "Chaotic time series prediction based on a novel robust echo state network," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 5, pp. 787–799, May 2012.
[18] A. Stuhlsatz, J. Lippel, and T. Zielke, "Feature extraction with deep neural networks by a generalized discriminant analysis," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 4, pp. 596–608, Apr. 2012.
[19] T. Wang, K. Wang, and N. Jia, "Chaos control and associative memory of a time-delay globally coupled neural network using symmetric map," Neurocomputing, vol. 74, no. 10, pp. 1673–1680, 2011.
[20] L. Wang, S. Li, F. Tian, and X. Fu, "A noisy chaotic neural network for solving combinatorial optimization problems: Stochastic chaotic simulated annealing," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 34, no. 5, pp. 2119–2125, Oct. 2004.
[21] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[22] L. Chen and K. Aihara, "Chaotic simulated annealing by a neural network model with transient chaos," Neural Netw., vol. 8, no. 6, pp. 915–930, 1995.
[23] L. Wang, W. Liu, and H. Shi, "Noisy chaotic neural networks with variable thresholds for the frequency assignment problem in satellite communications," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 38, no. 2, pp. 209–217, Mar. 2008.
[24] L. Wang, W. Liu, and H. Shi, "Delay-constrained multicast routing using the noisy chaotic neural networks," IEEE Trans. Comput., vol. 58, no. 1, pp. 82–89, Jan. 2009.
[25] C. Zhao and L. Gan, "Dynamic channel assignment for large-scale cellular networks using noisy chaotic neural network," IEEE Trans. Neural Netw., vol. 22, no. 2, pp. 222–232, Feb. 2011.
[26] L. Chen and K. Aihara, "Global searching ability of chaotic neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 46, no. 8, pp. 974–993, Aug. 1999.
[27] L. Chen and K. Aihara, "Chaos and asymptotical stability in discrete-time neural networks," Phys. D, vol. 104, nos. 3–4, pp. 286–325, 1997.
[28] L. Wang and K. Smith, "On chaotic simulated annealing," IEEE Trans. Neural Netw., vol. 9, no. 4, pp. 716–718, Jul. 1998.
[29] L. Zhao, M. Sun, J. Cheng, and Y. Xu, "A novel chaotic neural network with the ability to characterize local features and its application," IEEE Trans. Neural Netw., vol. 20, no. 4, pp. 735–742, Apr. 2009.
[30] S. Chen, "Chaotic simulated annealing by a neural network with a variable delay: Design and application," IEEE Trans. Neural Netw., vol. 22, no. 10, pp. 1557–1565, Oct. 2011.
[31] L. Wang and H. Shi, "Noisy chaotic neural networks for combinatorial optimization," Studies Comput. Intell., vol. 63, pp. 467–487, Jan. 2007.
[32] I. Tokuda, K. Aihara, and T. Nagashima, "Adaptive annealing for chaotic optimization," Phys. Rev. E, Stat. Phys. Plasmas Fluids Relat. Interdiscip. Top., vol. 58, no. 4, pp. 5157–5160, 1998.
[33] L. Wang and J. Ross, "Synchronous neural networks of nonlinear threshold elements with hysteresis," Proc. Nat. Acad. Sci. USA, vol. 87, no. 3, pp. 988–992, 1990.
[34] G. W. Hoffman and M. W. Benson, "Neurons with hysteresis form a network that can learn without any changes in synaptic strengths," in Proc. Amer. Inst. Phys. Conf. Neural Netw. Comput., Aug. 1986, pp. 219–226.
[35] Y. Takefuji and K. C. Lee, "An artificial hysteresis binary neuron: A model suppressing the oscillatory behaviors of neural dynamics," Biol. Cybern., vol. 64, no. 5, pp. 353–356, 1991.
[36] S. Bharitkar and J. M. Mendel, "The hysteretic Hopfield neural network," IEEE Trans. Neural Netw., vol. 11, no. 4, pp. 879–888, Jul. 2000.
[37] X. Liu and C. Xiu, "A novel hysteretic chaotic neural network and its applications," Neurocomputing, vol. 70, nos. 13–15, pp. 2561–2565, 2007.
[38] G. Chakraborty, "Genetic algorithm to solve optimum TDMA transmission schedule in broadcast packet radio networks," IEEE Trans. Commun., vol. 52, no. 5, pp. 765–777, May 2004.
Ming Sun (M’12) received the B.S. degree in computer science and technology from Heilongjiang University, Harbin, China, the M.S. degree in computer application technology from Harbin University of Commerce, Harbin, and the Ph.D. degree in navigation, guidance, and control from Harbin Engineering University, Harbin, in 2004, 2007, and 2010, respectively. He is currently a Faculty Member with the College of Computer and Control Engineering, Qiqihar University, Qiqihar, Heilongjiang, China. His current research interests include neural networks, chaotic dynamics, and combinatorial optimization.
Yaoqun Xu received the B.S. degree in mathematics from Jilin University, Changchun, China, the M.S. degree in mathematics from the Harbin Institute of Technology, Harbin, China, and the Ph.D. degree in navigation, guidance, and control from Harbin Engineering University, Harbin, in 1993, 1997, and 2002, respectively. He joined the Foundation Department, Heilongjiang Commercial College, Harbin, in 1993. He was a Post-Doctoral Fellow with the Control Science and Control Engineering Department, Harbin Institute of Technology, from 2004 to 2006. He is currently a Professor with the College of Computer and Information Engineering, Harbin University of Commerce, Harbin. His current research interests include chaotic dynamics, neural networks, and intelligent optimization and decisions.
Xuefeng Dai received the B.S. degree in electrical engineering from Liaoning Technical University, Fuxin, China, in 1985, and the M.S. degree in automatic control theory and application and the Ph.D. degree in control theory and control engineering from the College of Automation, Harbin Engineering University, Harbin, China, in 1992 and 2001, respectively. He has been with the College of Computer and Control Engineering, Qiqihar University, Qiqihar, China, since 1992, where he is currently a Professor of control theory and applications. His current research interests include self-organizing fuzzy neural networks, path planning, and simultaneous localization and mapping for mobile robots.
Yuan Guo received the B.S. degree in automation from Qiqihar University, Qiqihar, China, in 1997, and the M.S. and Ph.D. degrees in electrical engineering from Yanshan University, Qinhuangdao, China, in 2004 and 2008, respectively. She has been with the College of Computer and Control Engineering, Qiqihar University, since 1997, where she is currently an Associate Professor of electrical engineering. She is currently a Visiting Scholar with Johns Hopkins University, Baltimore, MD. Her current research interests include photoelectric detection, sensor technology, control theory, and information processing and simulation.