This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination. IEEE TRANSACTIONS ON CYBERNETICS


Multiobjective Optimization of Linear Cooperative Spectrum Sensing: Pareto Solutions and Refinement

Wei Yuan, Member, IEEE, Xinge You, Senior Member, IEEE, Jing Xu, Henry Leung, Fellow, IEEE, Tianhang Zhang, and Chun Lung Philip Chen, Fellow, IEEE

Abstract—In linear cooperative spectrum sensing, the weights of secondary users and the detection threshold should be optimally chosen to minimize the missed detection probability and to maximize the secondary network throughput. Since these two objectives are not completely compatible, we study this problem from the viewpoint of multiple-objective optimization. We aim to obtain a set of evenly distributed Pareto solutions. To this end, we introduce the normal constraint (NC) method to transform the problem into a set of single-objective optimization (SOO) problems, each of which usually results in a Pareto solution. However, NC provides neither a solution method for these SOO problems nor any indication of the optimal number of Pareto solutions. Furthermore, NC has no preference over the Pareto solutions, while a designer may be interested in only some of them. In this paper, we employ a stochastic global optimization algorithm to solve the SOO problems, and then propose a simple method to determine the optimal number of Pareto solutions under a computational complexity constraint. In addition, we extend NC to refine the Pareto solutions and select the ones of interest. Finally, we verify the effectiveness and efficiency of the proposed methods through computer simulations.

Index Terms—Cognitive radio, Pareto optimization, spectrum sensing.

I. INTRODUCTION

COGNITIVE radio (CR) has been proposed to overcome spectrum shortage by improving spectrum utilization [1]–[5]. One critical component of the CR technology is spectrum sensing, which decides whether or not the licensed spectrum band (i.e., channel) is being used by primary users (PUs). In fading environments, however, individual secondary users (SUs) usually cannot achieve satisfactory

Manuscript received August 25, 2013; revised August 26, 2014 and January 10, 2015; accepted January 11, 2015. This work was supported in part by the National Natural Science Foundation of China under Grant 61272203, Grant 61300223, and Grant 61301127, in part by the International Scientific and Technological Cooperation Project under Grant 2011DFA12180, in part by the Fundamental Research Funds for the Central Universities under Grant 2013QN146, and in part by the National Key Technologies Research and Development Program of China under Grant 2012BAK31G00. This paper was recommended by Associate Editor S. Ferrari. (Corresponding author: Xinge You.) W. Yuan, X. You, J. Xu, and T. Zhang are with the Department of Electronics and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074, China (e-mail: [email protected]). H. Leung is with the Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada. C. L. P. Chen is with the Faculty of Science and Technology, University of Macau, Macau 999078, China. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TCYB.2015.2395412

sensing accuracy. To address this problem, SUs can collaborate to perform spectrum sensing, which is termed cooperative spectrum sensing (CSS). CSS is roughly divided into two categories: 1) decision fusion [6]–[10] and 2) data fusion [11]–[19]. For decision fusion, each SU reports its local binary decision, based on individual spectrum sensing, to a fusion center (FC) for a global decision. For data fusion, each SU sends its observation value or summary statistic to the FC for a final decision. In this paper, we focus on data fusion since it can achieve better sensing performance than decision fusion [2].

In [11], an optimal soft combination of the observed energies from different SUs is obtained based on the Neyman–Pearson criterion with the goal of maximizing the detection probability. To reduce the computational complexity, an efficient linear CSS (LCSS) method is proposed in [12], in which a global decision is made based on a weighted linear combination of the local statistics of individual SUs. By appropriately choosing the weights, the sensing performance of CSS can be optimized. To this end, Quan et al. [12] formulated a nonlinear optimization problem to minimize the missed detection probability. Quan et al. [13] proposed using semidefinite programming to solve the problem. It is also noted that a simple and direct analytic method is proposed in [14] to solve the LCSS optimization problem.

Existing works on LCSS usually formulate a single-objective optimization (SOO) problem with the objective of missed detection probability (or detection probability) minimization (or maximization). Their main concern is the protection of primary transmission. In practice, the interests of both PUs and SUs should be considered simultaneously. Hence, a designer often maximizes the throughput of SUs while minimizing the missed detection probability. However, these two design objectives are not completely compatible.
To achieve an optimal balance between them, the designer can use the multiple-objective optimization (MOO) method [20]–[24] to optimize both objectives simultaneously.¹ MOO usually provides a designer with a set of Pareto (or Pareto optimal) solutions [25]. For a Pareto solution, any improvement in one objective can only take place through degrading the performance of at least one other objective. Hence, it indicates the optimal tradeoff among multiple objectives. In this paper, we use MOO to optimize the weights and detection threshold of LCSS for missed detection probability minimization and secondary throughput maximization.

¹For the MOO methods, the objectives need not be completely conflicting.

2168-2267 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


Since there exists an infinite set of Pareto solutions, we aim to obtain some representative ones. The normal constraint (NC) method [25]–[27] sheds some light on this task. It provides a framework for transforming an MOO problem into a set of SOO problems, each of which usually results in a Pareto solution. The Pareto solutions obtained by NC are evenly distributed on the boundary of the feasible (criterion) space (i.e., the Pareto frontier), which facilitates decision making in choosing the most desirable Pareto solution. To apply NC to our MOO problem, however, we need to overcome the following challenges.

1) The generated SOO problems are nonconvex and hence hard to solve. NC does not provide any solution method for them.
2) In general, more Pareto solutions can better characterize the Pareto frontier but result in a heavier computing overhead. Unfortunately, NC does not tell how to optimally choose the number of Pareto solutions.
3) In practice, a designer may be interested only in some specific Pareto solutions (e.g., the Pareto solutions with relatively large system throughput). Accordingly, only a part of the Pareto frontier needs to be explored. However, NC explores the entire Pareto frontier uniformly and has no preference over the Pareto solutions.

In this paper, we propose several methods to address the above issues. The main contributions of this paper are summarized as follows.

1) To solve the generated SOO problems, we employ a stochastic global optimization algorithm to calculate the global optimum.
2) To determine the optimal number of Pareto solutions, we consider the constraint of a maximally allowed computing overhead and propose an integer optimization formulation. We also develop a simple optimization algorithm.
3) To satisfy a designer's preference, we extend NC by providing a Pareto solution refinement method. Our method can identify the preferred part of the Pareto frontier for optimizing the positively weighted sum of two objectives.
The remainder of this paper is organized as follows. Section II presents the system model. Section III develops the NC-based MOO algorithm. We determine the optimal number of Pareto solutions for NC in Section IV, and propose a Pareto solution refinement method in Section V. Numerical results are provided in Section VI, followed by the conclusion in Section VII.

II. PROBLEM FORMULATION

Consider a time-slotted CR network (CRN) with a licensed channel, a secondary base station (BS), and N SUs. The channel is occasionally occupied by its PU with probability PH1. Each slot consists of two phases: 1) a sensing phase and 2) a transmission phase. In a sensing phase, the SUs perform LCSS to detect the PU. If the PU is not detected, the SUs are allowed to transmit in the following transmission phase. As the most common spectrum sensing method, energy detection has low

Fig. 1. LCSS.

computational complexity and does not require any knowledge of the PU's signal [1]. In this paper, we assume energy detection is used at every SU. The essence of spectrum sensing is a binary hypothesis-testing problem

H0 : xi(k) = vi(k), i = 1, 2, . . . , N    (1)
H1 : xi(k) = hi s(k) + vi(k), i = 1, 2, . . . , N    (2)

where H0 represents the PU is absent and H1 indicates the PU is present. In the above equations, k is the time slot index, s(k) denotes the signal transmitted by the PU, xi(k) is the signal received by the ith SU, hi is the block fading gain coefficient, and vi(k) is the zero-mean additive white Gaussian noise, i.e., vi(k) ∼ CN(0, σi²) [12]. Without loss of generality, s(k) and {vi(k)} are assumed to be independent of each other in this paper.

A. LCSS

As shown in Fig. 1, LCSS is performed as follows. 1) Every SU performs local spectrum sensing independently. 2) All SUs send their summary statistics to the BS (i.e., FC) through a common control channel. 3) The BS makes a global decision and broadcasts it to all SUs. In step 2), the summary statistic is

ui = Σ_{k=0}^{M−1} |xi(k)|², i = 1, 2, . . . , N    (3)

where M is the number of samples during a detection interval. Since ui is the sum of squares of M Gaussian random variables, ui/σi² follows a central chi-square (χ²) distribution with M degrees of freedom if H0 is true; otherwise, it follows a noncentral χ² distribution with M degrees of freedom and parameter ηi [12]. That is

ui/σi² ∼ χ²_M under H0    (4)
ui/σi² ∼ χ²_M(ηi) under H1.    (5)

Here, ηi is the local signal-to-noise ratio (SNR) at the ith SU, and it is defined as

ηi = Es|hi|² / σi²    (6)


where Es = Σ_{k=0}^{M−1} |s(k)|² represents the transmitted signal energy over a sequence of M samples during each detection interval. The signals received by the BS are given by

yi = ui + ni, i = 1, 2, . . . , N    (7)

where ni denotes the noise of the common control channel; the ni are assumed to be zero-mean, spatially uncorrelated Gaussian variables with variances δi². In step 3), the BS combines the received summary statistics linearly using a weight vector ω = (ω1, . . . , ωN)^T and obtains the global test statistic

yc = Σ_{i=1}^{N} ωi yi    (8)

where 0 ≤ ωi ≤ 1 for all i and Σ_{i=1}^{N} ωi = 1. Here, the weight for an SU indicates its contribution to the global decision [12]. For example, if an SU experiences deep fading or shadowing, its weight can be set to a small value to alleviate its negative effect. With a threshold γc, the BS makes the global decision according to the decision rule

yc ≷_{H0}^{H1} γc.    (9)
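The local statistic (3), the reporting step (7), and the fusion rule (8)-(9) can be sketched end to end; all numeric values below (noise variances, weights, threshold) are hypothetical, and the samples are modeled as real Gaussians for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 64                        # SUs and samples per detection interval (illustrative)
sigma2 = np.array([1.0, 1.5, 2.0])  # noise variances sigma_i^2 (hypothetical)

# Local energy statistics u_i of Eq. (3) under H0 (noise-only samples)
x = rng.normal(0.0, np.sqrt(sigma2), size=(M, N))
u = np.sum(x**2, axis=0)

# Reporting over a noisy common control channel, Eq. (7)
delta2 = np.array([0.1, 0.1, 0.1])  # control-channel noise variances (hypothetical)
y = u + rng.normal(0.0, np.sqrt(delta2), size=N)

# Linear fusion (8) and threshold test (9) at the BS
w = np.array([0.5, 0.3, 0.2])       # weights: 0 <= w_i <= 1, sum to 1
gamma_c = 1.5 * float(M * sigma2 @ w)   # threshold set well above the H0 mean (arbitrary)
y_c = float(w @ y)
decision = "H1" if y_c > gamma_c else "H0"
```

With noise-only samples and a threshold well above the H0 mean of yc, the test declares H0 with high probability, illustrating how the weights shape the fused statistic.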

B. Sensing Performance Metrics

The key metrics in LCSS include the missed detection probability Pm and the false alarm probability Pf. As shown in [12], yc is a normal random variable and we have

Pm = 1 − Pr(yc > γc | H1) = 1 − Q( (γc − E(yc|H1)) / √Var(yc|H1) )    (10)

and

Pf = Pr(yc > γc | H0) = Q( (γc − E(yc|H0)) / √Var(yc|H0) ).    (11)

Let σ = (σ1², σ2², . . . , σN²)^T, g = (|h1|², |h2|², . . . , |hN|²)^T, δ = (δ1², δ2², . . . , δN²)^T, and y = (y1, y2, . . . , yN)^T. Let diag(·) denote the square diagonal matrix with the elements of a given vector on the diagonal, and let Q(·) be the complementary cumulative distribution function defined as

Q(x) = (1/√(2π)) ∫_x^{+∞} e^{−t²/2} dt.    (12)

According to [12], the mean of yc is

E(yc) = Mσ^T ω under H0    (13)
E(yc) = (Mσ + Es g)^T ω under H1.    (14)

The variances under the two hypotheses are given, respectively, by

Var(yc|H0) = ω^T Σ_{H0} ω    (15)

with

Σ_{H0} = 2M diag²(σ) + diag(δ)    (16)

and

Var(yc|H1) = ω^T Σ_{H1} ω    (17)

with

Σ_{H1} = 2M diag²(σ) + diag(δ) + 4Es diag(g) diag(σ).    (18)

Then, we have

Pm = 1 − Q( (γc − (Mσ + Es g)^T ω) / √(ω^T Σ_{H1} ω) )    (19)

and

Pf = Q( (γc − Mσ^T ω) / √(ω^T Σ_{H0} ω) ).    (20)

To protect the PU, Pm is often required to be below a threshold θ (0 < θ < 1) [7].

C. Optimization Model

Now, we derive the average throughput of the SUs and then propose an LCSS optimization model. The SUs can transmit under two scenarios: 1) the absence of the PU is detected and 2) the presence of the PU is not detected. We denote by R0 and R1 the throughput of the SUs if they are allowed to transmit continuously in the absence and the presence of the PU, respectively. The average throughput can be calculated as

R = (1 − τ/T) [ (1 − PH1)(1 − Pf) R0 + PH1 Pm R1 ]    (21)

where τ is the time for LCSS, T is the slot length, and PH1 is the probability of the PU being present in the channel. From the perspective of the SUs, the average throughput should be maximized. From the perspective of the PU, the missed detection probability should be minimized. However, these two objectives are not completely compatible. Hence, the system design problem can be formulated as

P1 : min_{γc, ω} [Pm, −R]    (22)

subject to

Pm ≤ θ    (23)
0 ≤ ωi ≤ 1, i = 1, 2, . . . , N    (24)
Σ_{i=1}^{N} ωi = 1    (25)
γc⁰ ≤ γc ≤ γc¹    (26)

where γc⁰ and γc¹ correspond to the lower and upper bounds of γc, respectively. In P1, γc and (ω1, . . . , ωN) are the variables. Since Q(·) is monotonically decreasing, constraint (23) is equivalent to

(γc − (Mσ + Es g)^T ω) / √(ω^T Σ_{H1} ω) ≤ Q⁻¹(1 − θ)    (27)

where Q⁻¹ is the inverse of Q(·). For simplicity, let a = (1 − PH1)R0 and b = PH1 R1. Then (22) can be written as

min [μ1, μ2]    (28)
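Before expanding μ1 and μ2, note that the closed-form metrics (19)-(21) are directly computable for a given (γc, ω); a minimal numerical sketch, with all parameter values hypothetical and Q(·) evaluated via the complementary error function:

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    # complementary CDF of the standard normal, Eq. (12)
    return 0.5 * erfc(x / sqrt(2.0))

# Illustrative parameters (all values hypothetical)
N, M, Es = 3, 64, 5.0
sigma = np.array([1.0, 1.5, 2.0])   # noise variances sigma_i^2
delta = np.array([0.1, 0.1, 0.1])   # control-channel noise variances delta_i^2
g = np.array([0.8, 0.5, 0.3])       # channel gains |h_i|^2
w = np.array([0.5, 0.3, 0.2])       # weights, sum to 1
gamma_c = 88.0                      # detection threshold

S0 = 2 * M * np.diag(sigma)**2 + np.diag(delta)       # Sigma_H0, Eq. (16)
S1 = S0 + 4 * Es * np.diag(g) @ np.diag(sigma)        # Sigma_H1, Eq. (18)

Pm = 1 - Q((gamma_c - (M * sigma + Es * g) @ w) / sqrt(w @ S1 @ w))  # Eq. (19)
Pf = Q((gamma_c - M * sigma @ w) / sqrt(w @ S0 @ w))                 # Eq. (20)

tau, T, PH1, R0, R1 = 1.0, 10.0, 0.3, 2.0, 1.0
R = (1 - tau/T) * ((1 - PH1) * (1 - Pf) * R0 + PH1 * Pm * R1)        # Eq. (21)
```

Varying gamma_c trades Pm against Pf (and hence against R), which is exactly the tension that the multiobjective formulation P1 captures.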


where

μ1 = −Q( (γc − (Mσ + Es g)^T ω) / √(ω^T Σ_{H1} ω) )    (29)

μ2 = a Q( (γc − Mσ^T ω) / √(ω^T Σ_{H0} ω) ) + b Q( (γc − (Mσ + Es g)^T ω) / √(ω^T Σ_{H1} ω) ).    (30)

Due to the monotonicity of Q(·), the objective function μ1 can be replaced by ν = (γc − (Mσ + Es g)^T ω) / √(ω^T Σ_{H1} ω). In the remainder of this paper, we mainly consider the objective function μ1. It is noted that our proposed methods still work when ν is chosen as the objective.

III. EVENLY DISTRIBUTED PARETO SOLUTIONS

In this section, we aim to obtain the solutions to P1. For a general MOO problem

min_x μ(x) = [μ1(x), μ2(x), . . . , μm(x)]^T
subject to gi(x) ≤ 0, i = 1, 2, . . . , k
           hj(x) = 0, j = 1, 2, . . . , e

its solution is termed a Pareto solution [22].

Definition 1: Let X denote {x | gi(x) ≤ 0, i = 1, 2, . . . , k; hj(x) = 0, j = 1, 2, . . . , e}. A point x∗ ∈ X is Pareto optimal iff there does not exist another point x ∈ X such that μ(x) ≤ μ(x∗) componentwise with μi(x) < μi(x∗) for at least one i.

In general, the Pareto frontier contains infinitely many Pareto solutions. Accordingly, it is important to obtain an even distribution of Pareto points so as to gain maximum information on the Pareto frontier at minimum computational cost.

A. MOO Methods

A classical MOO technique is to minimize the weighted sums of the objectives. It seeks Pareto solutions one by one by systematically changing the weights among the objective functions. However, an evenly distributed set of weights often fails to produce an even distribution of Pareto solutions [28]. Kim and de Weck [29] proposed to adjust the weights adaptively and impose additional inequality constraints in the objective space. Accordingly, their adaptive weighted-sum (AWS) method can produce evenly distributed Pareto solutions and works well in nonconvex regions. Another class of evolutionary methods, such as genetic-based algorithms, generates a set of Pareto solutions simultaneously [30]. However, the evolutionary methods guarantee neither the generation of evenly distributed Pareto solutions nor the representation of the entire Pareto frontier. The normal boundary intersection (NBI) method proposed by Das and Dennis [31] has a clear geometrical interpretation and gives fairly uniform solutions.
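The dominance relation of Definition 1 leads directly to a Pareto filter of the kind mentioned later for discarding non-Pareto points; a minimal sketch for minimization objectives (the sample points are hypothetical):

```python
def dominates(u, v):
    # u dominates v: no worse in every objective, strictly better in at least one
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_filter(points):
    # keep only the non-dominated points (simple O(n^2) Pareto filter)
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(0.1, 0.9), (0.3, 0.4), (0.5, 0.45), (0.8, 0.1), (0.2, 0.6)]
front = pareto_filter(pts)   # (0.5, 0.45) is dominated by (0.3, 0.4)
```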
Similar to NBI, the NC method proposed by Messac and Mattson [25] also generates the Pareto solutions evenly distributed on the Pareto frontier. Moreover, NC is computationally stable, and is less likely to generate non-Pareto and locally Pareto solutions. Furthermore, it is convenient for a designer to tell NC how many Pareto

Fig. 2. Preferred Pareto solutions.

solutions should be generated.² Therefore, here, we choose NC to solve the MOO problem P1.

B. Applying NC to Our MOO Problem

The NC method provides a simple framework to solve a general MOO problem. It translates an MOO problem into a set of SOO problems, which usually results in an even distribution of Pareto solutions. For illustration purposes, we consider a general biobjective optimization problem and assume that its objectives are normalized. The feasible space of the problem is shown in Fig. 2. The lower boundary of the feasible space corresponds to the Pareto frontier, and the endpoints of the Pareto frontier are termed anchor points. The ith (i = 1, 2) anchor point μ∗i is obtained by minimizing the ith objective independently, that is,

min_x μi(x)
subject to gi(x) ≤ 0, i = 1, 2, . . . , k
           hj(x) = 0, j = 1, 2, . . . , e.

The line joining the anchor points is termed the Utopia line. NC divides the Utopia line into ℓ + 1 (ℓ ∈ Z+) segments of equal length, resulting in ℓ auxiliary points. Each auxiliary point generates a normal line to the Utopia line, which is used to reduce the feasible space. Within a reduced feasible space, we solve the corresponding SOO problem and then obtain a Pareto solution. For example, the normal line through the auxiliary point Pb divides the feasible space into two regions. The upper region is given by

(μ∗2 − μ∗1)^T (μ − Pb) ≤ 0    (31)

where μ is a generic point in the feasible space. If we minimize the second objective (i.e., μ2) under constraint (31), the resulting optimum point is b, which is a Pareto solution of the considered problem.³

²In general, it is hard to accurately predict or determine how many Pareto solutions will be derived by AWS before running AWS.
³Alternatively, we can choose to minimize μ1 under the constraint (μ∗1 − μ∗2)^T (μ − Pb) ≤ 0.

By repeatedly shifting the normal


line along the Utopia line with a fixed step size and solving the corresponding SOO problem, we can get a set of Pareto solutions evenly distributed on the Pareto frontier. Under rare circumstances, NC may generate non-Pareto solutions. Accordingly, we can use a Pareto filter to eliminate the non-Pareto and locally Pareto points from a given set. We refer the readers to [27] for details. To apply NC to our problem P1, we just need to conduct the following steps.

1) Normalization: We first normalize the two objective functions as

μ̄i = (μi − μiU) / (μiN − μiU), i = 1, 2    (32)

where μiN and μiU are the ith elements of the Nadir point and the Utopia point, respectively. Here, the Nadir point is the point with all the objectives at their worst values, and the Utopia point is the point corresponding to all objectives simultaneously being at their best possible values.

2) Deriving the SOO Problems: Then, we calculate the anchor points and the auxiliary points according to a prescribed number ℓ of Pareto solutions. For convenience, we use pk to represent the kth auxiliary point, given by

pk = (1 − k/(ℓ + 1)) μ̄∗1 + (k/(ℓ + 1)) μ̄∗2, k = 1, . . . , ℓ.    (33)

With an auxiliary point pk, we get an SOO problem defined as

P1k : min_{γc, ω} [ −Q( (γc − (Mσ + Es g)^T ω) / √(ω^T Σ_{H1} ω) ) − μ1U ] / (μ1N − μ1U)    (34)

subject to

(γc − (Mσ + Es g)^T ω) / √(ω^T Σ_{H1} ω) ≤ Q⁻¹(1 − θ)    (35)
0 ≤ ωi ≤ 1, i = 1, 2, . . . , N    (36)
Σ_{i=1}^{N} ωi = 1    (37)
γc⁰ ≤ γc ≤ γc¹    (38)
(μ̄∗2 − μ̄∗1)^T ( μ̄(γc, ω) − pk ) ≤ 0.    (39)
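The NC bookkeeping in the normalized objective space, i.e., placing the auxiliary points of (33) on the Utopia line and testing the reduced-space constraint (39), can be sketched as follows (the anchor points and trial points are hypothetical):

```python
import numpy as np

# Normalized anchor points: mu1_star minimizes the first objective,
# mu2_star minimizes the second (hypothetical positions).
mu1_star = np.array([0.0, 1.0])
mu2_star = np.array([1.0, 0.0])
ell = 4                           # prescribed number of Pareto solutions

# Auxiliary points evenly spaced on the Utopia line, Eq. (33)
aux = [(1 - k / (ell + 1)) * mu1_star + (k / (ell + 1)) * mu2_star
       for k in range(1, ell + 1)]

def nc_feasible(mu_bar, p_k):
    # Reduced-space constraint (39): keep the half-plane on one side of the
    # normal line to the Utopia line through the auxiliary point p_k.
    return float((mu2_star - mu1_star) @ (mu_bar - p_k)) <= 0.0
```

Each auxiliary point thus carves out its own reduced feasible region, and solving one SOO problem per region yields the evenly spread Pareto points.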

It is noted that constraint (39) is introduced to reduce the feasible space.

3) Generating Pareto Solutions: Now, we can obtain a set of evenly distributed Pareto solutions by solving the SOO problems P1k (k = 1, 2, . . . , ℓ) independently. In this paper, we mainly consider the case where the Pareto frontier of our problem is strictly convex, which has been verified via extensive simulations. Due to space limitations, we omit the discussion of the Pareto filter in this paper. The remaining question is how to solve the SOO problem P1k.

C. Algorithm Development

Due to the complexity of the objective function, it is usually hard to solve P1k globally. In fact, P1k can remain nonconvex even if we simplify the objective function as ν. Accordingly,


finding its global optimum is NP-hard. A detailed discussion on the nonconvexity of ν can be found in the Appendix. In general, P1k can be solved using deterministic and stochastic algorithms [32]. The complexity of deterministic algorithms often grows exponentially with the number of variables [33]. Stochastic algorithms are usually faster in locating a global optimum than deterministic ones. In this paper, we employ stochastic algorithms to solve the generated SOO problems. Most stochastic algorithms use stochastic methods to search for the locations of local optima and then utilize deterministic methods to solve a local optimization problem [34]. In the first phase, the objective function is evaluated at a number of points randomly sampled from a uniform distribution. In the second phase, the sample points are used as starting points for a local search (i.e., local optimization). Clearly, local search is the most computationally intensive stage. Unfortunately, some local optima may be obtained many times. To reduce the complexity, a stochastic algorithm should try to perform a local search just once in every region of attraction.⁴ This is the motivation behind various versions of clustering methods. Among these clustering methods, multilevel single linkage (MLSL) is one of the most efficient solution methods [34]. In MLSL, only the sample points with small objective function values are considered for local search. Furthermore, these points are grouped into clusters, each of which is initiated by a local minimum that has already been found. In this paper, we employ a special kind of MLSL algorithm, i.e., SobolOPT, to solve P1k. SobolOPT employs Sobol' sequences to generate the sample points, which fill the n-dimensional hypercube as uniformly as possible [33]. The main procedure of SobolOPT is described as follows.

1) Sample a set of points from a uniform distribution using the Sobol' sequence.
2) Select the points with relatively small objective function values.
3) Attempt to assign the selected points to existing clusters. If a point is not assigned to any cluster, then start a local search. If the derived local optimum was not located in the previous steps, then initiate a new cluster with it.
4) Go back to 1) until a stopping criterion is met.

Note that the SobolOPT algorithm described in [33] does not take the constraints into consideration. To apply it to P1k, we introduce a simple penalty function to relax the constraints.⁵ The objective function of the relaxed P1k is given by

μ1′ = [ −Q( (γc − (Mσ + Es g)^T ω) / √(ω^T Σ_{H1} ω) ) − μ1U ] / (μ1N − μ1U) + P(γc, ω)    (40)

⁴Here, the region of attraction of a local optimum is defined as the set of points (or solutions) starting from which a given local search procedure converges to that local optimum.
⁵In our algorithm, the penalty function is used to determine whether a sample point should be discarded or not. For simplicity, we choose a constant penalty function. When conducting a local search, SobolOPT considers the constrained optimization problem P1k and does not use the penalty function.


Algorithm 1 SobolOPT Algorithm
Step 1: Set k = 1, W = 0, and the set of local minima X∗ = ∅.
Step 2: Set i = 0 and sample a set X of S points from a uniform distribution over Hn using the Sobol' sequence.
Step 3: Evaluate the objective function on X to get the ordered sequence μ1′(xi), and select the reduced set Xr = {xi ∈ X | i = 1, . . . , κS} (0 < κ < 1, κS ∈ Z+).
Step 4: Set i = i + 1 and take xi ∈ Xr.
Step 5: Assign xi to a cluster Cl if there exists xj ∈ Cl such that ρ(xi, xj) ≤ rk and μ1′(xj) ≤ μ1′(xi). If xi is not assigned to any cluster, conduct a local search at xi to yield a local minimum x∗ of P1k. If x∗ ∉ X∗, add x∗ to X∗, set W = W + 1, initiate the Wth cluster with x∗, and assign xi to that cluster.
Step 6: If i < κS, go back to Step 4.
Step 7: If the stopping criterion is met, then stop. Otherwise, set k = k + 1 and go back to Step 2.
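The multistart-with-clustering idea behind Algorithm 1 can be sketched on a toy one-dimensional problem. Everything below is an illustrative stand-in: the multimodal objective is invented, plain uniform sampling replaces the Sobol' sequence, numerical gradient descent replaces the local solvers of [35], and the clustering test is a crude distance check against already found minima:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # toy multimodal objective standing in for the relaxed SOO objective (40)
    return np.sin(5 * x) + 0.5 * (x - 0.5) ** 2

def local_search(x0, lr=0.01, steps=200):
    # simple projected gradient descent with a numerical derivative
    x, h = x0, 1e-5
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2 * h)
        x = np.clip(x - lr * grad, 0.0, 1.0)
    return float(x)

# Sample S points, keep the best kappa*S, and start a local search only at
# points not within the critical distance r of a minimum already found.
S, kappa, r = 200, 0.2, 0.05
X = rng.uniform(0.0, 1.0, S)
keep = X[np.argsort(f(X))[: int(kappa * S)]]

minima = []
for x in keep:
    if not any(abs(x - m) <= r for m in minima):   # crude clustering test
        m = local_search(x)
        if not any(abs(m - prev) <= 1e-3 for prev in minima):
            minima.append(m)

best = min(minima, key=f)   # global minimum of f on [0, 1] is near x = 0.925
```

The clustering test spares most of the redundant local searches that would otherwise repeatedly rediscover the same minimum, which is the efficiency argument given for MLSL above.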

where the penalty function is defined as

P(γc, ω) = P if (35)–(39) are not all satisfied; 0 otherwise.    (41)

Here, P (P > 1) is a large positive number. The SobolOPT algorithm designed for P1k is described in Algorithm 1. In this algorithm, Hn = {xi | 0 ≤ xi ≤ 1, i = 1, 2, . . . , n} denotes an n-dimensional unit hypercube, ρ(xi, xj) is the Euclidean distance between xi and xj, and {μ1′(xi) | xi ∈ X} represents the sequence of objective function values ordered so that μ1′(xi) < μ1′(xi+1) for all i < S. rk is the critical distance defined as

rk = [ π^{−(N+1)/2} Γ(1 + (N+1)/2) · (2 log(kS)) / (kS) ]^{1/(N+1)}    (42)

where k is the iteration index [34]. It is noted that we can use classical optimization algorithms such as interior point, active set, trust region, and sequential quadratic programming [35] to implement the local search in Step 5. For simplicity, the stopping criterion of SobolOPT can be set as the maximally allowed number of iterations or sample points considered for clustering. In addition, SobolOPT converges asymptotically to a global optimum; a brief discussion of the asymptotic convergence of SobolOPT can be found in [36]. With SobolOPT, we propose an NC-based MOO algorithm to solve P1. The main skeleton of this algorithm is given in Algorithm 2, in which ℓ denotes the number of Pareto solutions.

IV. OPTIMAL NUMBER OF PARETO SOLUTIONS

The computational complexity of our NC-based method mainly depends on the number of Pareto solutions and the number of iterations (or sample points considered for local search) in SobolOPT. On the one hand, more Pareto solutions often provide a better characterization of the Pareto frontier. On the other hand, more iterations can enhance the probability of

Algorithm 2 NC-Based MOO Algorithm
Step 1: Normalize the objective functions and calculate the anchor points.
Step 2: Obtain the set of auxiliary points.
Step 3: Set k = 1.
Step 4: Solve the SOO problem P1k with SobolOPT and obtain the corresponding Pareto solution.
Step 5: Set k = k + 1. If k > ℓ, then stop. Otherwise, go to Step 4.

obtaining a global optimum. In practice, however, the computational resource is limited. Accordingly, we need to achieve an optimal balance between the quantity and quality of Pareto solutions. Let Λ represent the maximal number of sample points considered for clustering in SobolOPT. Next, we obtain an optimal pair (ℓ, Λ), denoted by (ℓ∗, Λ∗), under the constraint of a maximally allowed computing overhead ϒ. We use a function F(ℓ, Λ) to quantify the designer's satisfaction with the obtained Pareto solutions, and a nonnegative J(Λ) to represent the maximum computing overhead for obtaining one Pareto solution. Generally speaking, F(·) monotonically increases with ℓ and Λ, and J(·) also monotonically increases with Λ.⁶ Let ℓmin and ℓmax be the minimum and maximum of ℓ, respectively, and let Λmax be the maximum of Λ. We can obtain (ℓ∗, Λ∗) by solving

P2 : max_{ℓ, Λ} F(ℓ, Λ)    (43)

subject to

ℓmin ≤ ℓ ≤ ℓmax    (44)
Λ ≤ Λmax    (45)
ℓ · J(Λ) ≤ ϒ.    (46)

The above problem falls in the category of nonlinear integer programming problems since F(·) is usually nonlinear. Fortunately, this problem has the following properties.

1) If ℓmax · J(Λmax) ≤ ϒ, then ℓ∗ = ℓmax and Λ∗ = Λmax.
2) If ℓmax · J(Λmax) > ϒ, then Λ∗ = min{J⁻¹(ϒ/ℓ∗), Λmax}.

In the first case, the computational resource is sufficient and constraint (46) is always met. Accordingly, we can maximize the quantity and quality of solutions at the same time. In the second case, the computational resource is limited and constraint (46) is not always satisfied. Note that the objective function F(ℓ, Λ) monotonically increases with both ℓ and Λ. To maximize F(ℓ, Λ), one needs to enlarge ℓ and Λ as much as possible. Clearly, the increase of ℓ and Λ is bounded by constraint (46). Thus, the optimal solution is reached when ϒ − ℓ · J(Λ) remains nonnegative and cannot be further reduced by unilaterally increasing ℓ or Λ. Accordingly, we have Λ∗ = min{J⁻¹(ϒ/ℓ∗), Λmax}. Based on the properties of P2, we propose a low-complexity algorithm to derive (ℓ∗, Λ∗), given in Algorithm 3. Clearly, the algorithm consists of at most ℓmax − ℓmin + 1 iterations.

⁶Our method does not require the concavity of F(·) or J(·). Different from classic integer programming methods such as branch-and-bound, our method does not need to solve a continuously relaxed problem; hence, it does not require that the continuously relaxed problem be convex.


Algorithm 3 Parameter Optimization Algorithm
Step 1: If ℓmax · J(Λmax) ≤ ϒ, then set ℓ∗ = ℓmax and Λ∗ = Λmax, and stop.
Step 2: Set F∗ = −∞, ℓ∗ = 0, Λ∗ = 0, and k = 0.
Step 3: Set Λ = min{J⁻¹(ϒ/(ℓmax − k)), Λmax}.
Step 4: If F(ℓmax − k, Λ) > F∗, set ℓ∗ = ℓmax − k, Λ∗ = Λ, and F∗ = F(ℓ∗, Λ∗).
Step 5: Set k = k + 1. If ℓmax − k < ℓmin, then stop. Otherwise, go to Step 3.
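Algorithm 3 amounts to sweeping the feasible solution counts from largest to smallest, spending the remaining budget on per-solution quality. A sketch with hypothetical satisfaction and cost models (the linear cost J, its inverse, and the square-root satisfaction F below are invented for illustration):

```python
import math

def optimize_parameters(F, J, Jinv, ell_min, ell_max, Lam_max, budget):
    # Sketch of Algorithm 3: choose the number of Pareto solutions ell and
    # the per-solution sample budget Lam to maximize F(ell, Lam) subject to
    # ell * J(Lam) <= budget.
    if ell_max * J(Lam_max) <= budget:          # resource-sufficient case
        return ell_max, Lam_max
    best_F, best = float("-inf"), None
    for k in range(ell_max - ell_min + 1):      # ell = ell_max, ell_max - 1, ...
        ell = ell_max - k
        Lam = min(Jinv(budget / ell), Lam_max)
        if F(ell, Lam) > best_F:
            best_F, best = F(ell, Lam), (ell, Lam)
    return best

# Hypothetical models: per-solution cost grows linearly with the sample
# budget, and satisfaction grows with both ell and Lam.
J = lambda Lam: Lam
Jinv = lambda c: int(c)
F = lambda ell, Lam: ell * math.sqrt(Lam)

opt = optimize_parameters(F, J, Jinv, 2, 10, 50, 100)
```

Under these toy models the budget-limited branch is taken and the sweep selects the largest feasible solution count, mirroring property 2) of P2.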

V. PARETO SOLUTION REFINEMENT

Our NC-based MOO method generates a set of Pareto solutions evenly distributed on the entire Pareto frontier. In practice, a designer may only want to explore a part of the Pareto frontier. For example, the designer may be interested in the Pareto solutions with a relatively high system throughput. Here, system throughput is defined as the total throughput of the primary and secondary networks. Let R̃0 and R̃1 (R̃0 > R̃1) denote the throughput of the primary network when the presence of the PU is detected and not detected, respectively. Then the system throughput can be calculated as

Rsys = PH1(1 − Pm)R̃0 + PH1 Pm R̃1 + PH1 Pm (1 − τ/T) R1 + (1 − PH1)(1 − τ/T)(1 − Pf) R0.    (47)

Accordingly, the designer desires the Pareto solutions yielding a relatively small value of

PH1(R̃0 − R̃1)Pm − R = PH1(R̃0 − R̃1)μ1 + μ2.    (48)

Here, PH1(R̃0 − R̃1)μ1 + μ2 corresponds to a positively weighted sum of μ1 and μ2. In this situation, the Pareto solutions that cannot satisfy the preference of the designer should be ignored. This raises an interesting issue that must be dealt with: how to identify the preferred Pareto solutions and characterize the Pareto segment of interest according to a certain preference? In this paper, this issue is termed Pareto solution refinement. As shown in Fig. 2, the dotted line represents the preference of a designer, and the bold segment corresponds to the Pareto segment of interest. NC generates the auxiliary points evenly distributed on the entire Utopia line. Accordingly, the Pareto solutions derived by NC (e.g., a, b, and c) cover the entire Pareto frontier. To explore the Pareto segment of interest, we need to identify the corresponding Utopia segment. Based on the above analysis, we propose a method that extends NC to fulfill this goal. Consider a general case where the Pareto solutions with a relatively small value of

αμ1 + βμ2    (49)

are desired (α > 0 and β > 0). In this paper, we consider the case where there exists a unique Pareto solution globally minimizing αμ1 + βμ2, denoted by (μ∗1, μ∗2). Clearly, the Pareto segment of interest should contain (μ∗1, μ∗2). Let L and l denote the lengths of the entire Utopia line and of the Utopia segment of interest, respectively. To identify the Pareto segment of interest, we require l/L ≤ ξ < 1, where ξ is introduced


for precision control. Let (μˇ1, μˆ2) and (μˆ1, μˇ2) (μˇ1 < μˆ1, μˇ2 < μˆ2) be the anchor points. For any two Pareto solutions (μa1, μa2) and (μb1, μb2) (μˇ1 ≤ μa1 < μb1 ≤ μˆ1), we have the following theorem.
Theorem 1: If μa1 < μb1 ≤ μ∗1, we have αμa1 + βμa2 > αμb1 + βμb2. If μ∗1 < μa1 < μb1, we have αμa1 + βμa2 < αμb1 + βμb2.
With Theorem 1, we propose the following corollary.
Corollary 1: 1) If αμa1 + βμa2 > αμb1 + βμb2, then αμc1 + βμc2 > αμb1 + βμb2 holds for any Pareto solution (μc1, μc2) (μˇ1 ≤ μc1 < μa1). 2) If αμa1 + βμa2 < αμb1 + βμb2, then αμa1 + βμa2 < αμc1 + βμc2 holds for any Pareto solution (μc1, μc2) (μb1 < μc1 ≤ μˆ1).
Due to space limits, the proofs of Theorem 1 and Corollary 1 are omitted.7
Inspired by Theorem 1 and Corollary 1, we propose a method that extends NC to support Pareto solution refinement. The method consists of three stages.
1) Stage I: Determine the interested Utopia segment. This is the key step of our method. Initially, the interested segment is set to the entire Utopia line. Then it is shortened iteratively until l/L ≤ ξ is met.
2) Stage II: Generate the auxiliary points on the interested Utopia segment. Similar to the traditional NC method, our method generates uniformly distributed auxiliary points. However, these auxiliary points cover only the interested Utopia segment instead of the entire Utopia line.
3) Stage III: Obtain the interested Pareto solutions by solving the set of SOO problems corresponding to the auxiliary points generated in Stage II. This procedure is similar to that in the traditional NC method.
Assume that the objectives μ1 and μ2 are normalized. Let EL(k) = (μL1(k), μL2(k)) and ER(k) = (μR1(k), μR2(k)) (μL1(k) < μR1(k)) denote the endpoints of the interested Utopia segment in the kth iteration. Let μa(k) = (μa1(k), μa2(k)) and μb(k) = (μb1(k), μb2(k)) be the Pareto solutions corresponding to the auxiliary points pa(k) and pb(k), respectively.
Furthermore, we refer to

l(k) = √[(μL1(k) − μR1(k))² + (μL2(k) − μR2(k))²]  (50)

as the length of the interested Utopia segment. The main procedure of Stage I is described in Algorithm 4, where λ (0.5 < λ < 1) is the reduction ratio of the interested Utopia segment. It is obvious that the maximal number of iterations in Algorithm 4 is ⌈log_λ ξ⌉.

VI. PERFORMANCE EVALUATION

In this section, we evaluate the proposed algorithms through MATLAB simulations. Some system parameters are chosen as follows: M = 20, γc0 = 1, γc1 = 100, δi² = 0.5 (∀i), PH1 = 0.4, R0 = 40 Mb/s, R1 = 10 Mb/s, and θ = 0.1. Similar to [12], s(k) is set to 1. Furthermore, the number of sample points considered for clustering in SobolOPT is 48.

7 The skeleton of the proofs can be found in [36].



Algorithm 4 Pareto Solution Refinement Algorithm
Step 1: Set k = 1, EL(1) = (μˇ1, μˆ2) and ER(1) = (μˆ1, μˇ2).
Step 2: If l(k) ≤ ξL, go to Step 8. If μL1(k) ≥ μR1(k), stop.
Step 3: Set pa(k) = EL(k) + (1 − λ)(ER(k) − EL(k)) and pb(k) = EL(k) + λ(ER(k) − EL(k)).
Step 4: Solve the SOO problems corresponding to pa(k) and pb(k), and obtain the Pareto solutions μa(k) and μb(k).
Step 5: If αμa1(k) + βμa2(k) > αμb1(k) + βμb2(k), set EL(k + 1) = pa(k) and ER(k + 1) = ER(k). Go to Step 7.
Step 6: If αμa1(k) + βμa2(k) ≤ αμb1(k) + βμb2(k), set ER(k + 1) = pb(k) and EL(k + 1) = EL(k).
Step 7: Set k = k + 1, and go to Step 2.
Step 8: EL(k) and ER(k) are the endpoints of the interested Utopia segment and the algorithm stops.
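The shrinking loop of Algorithm 4 can be sketched as follows. This is a toy, hedged illustration, not the paper's implementation: we assume a normalized convex front μ2 = (1 − μ1)², anchor points (0, 1) and (1, 0), and we idealize the NC SOO solve at an auxiliary point p as simply returning the front point whose μ1 coordinate equals p's first coordinate.

```python
import math

# Hedged sketch of Algorithm 4 (Stage I) on a toy, normalized problem.
# Assumptions not from the paper: the Pareto front is mu2 = (1 - mu1)^2 with
# anchors (0, 1) and (1, 0); "solving the SOO problem" at auxiliary point p
# is idealized as returning the front point with mu1 equal to p's abscissa.

def front(mu1):
    """Toy convex Pareto front (stand-in for the real NC SOO solver)."""
    return (mu1, (1.0 - mu1) ** 2)

def refine_segment(alpha, beta, lam=0.7, xi=0.1, max_iter=100):
    EL, ER = (0.0, 1.0), (1.0, 0.0)            # Step 1: start at the anchors
    L = math.dist(EL, ER)                      # length of entire Utopia line
    for _ in range(max_iter):
        if math.dist(EL, ER) <= xi * L:        # Step 2: precision reached
            break
        pa = tuple(e + (1 - lam) * (r - e) for e, r in zip(EL, ER))  # Step 3
        pb = tuple(e + lam * (r - e) for e, r in zip(EL, ER))
        mua, mub = front(pa[0]), front(pb[0])  # Step 4 (idealized SOO solve)
        if alpha * mua[0] + beta * mua[1] > alpha * mub[0] + beta * mub[1]:
            EL = pa                            # Step 5: discard left part
        else:
            ER = pb                            # Step 6: discard right part
    return EL, ER                              # Step 8: interested segment
```

Because Theorem 1 makes αμ1 + βμ2 unimodal along the front, this interval-halving behaves like a bracketing line search: for α = β = 1 the minimizer on this toy front lies at μ1 = 0.5, and the returned segment brackets it with normalized length at most ξ.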

Fig. 3. Pareto solutions, N = 3, low SNR.
Fig. 4. Pareto solutions, N = 3, high SNR.
Fig. 5. Pareto solutions, N = 4.
Fig. 6. Pareto solutions, N = 5.
Fig. 7. Pareto solutions, N = 6.

A. Evenly Distributed Pareto Solutions
We first demonstrate that the proposed NC-based MOO method can characterize the Pareto frontier of our MOO problem in a complete and even fashion. The simulation results are given in Figs. 3–7. In these figures, every cross represents an auxiliary point and every circle denotes a Pareto solution. In Fig. 3, the shaded area indicates the feasible space of our MOO problem, and the lower boundary of this area corresponds to the Pareto frontier. It is obvious that the solutions derived by our method are evenly distributed over the entire Pareto frontier.
Now, we compare the Pareto solutions shown in Figs. 3 and 4. In Fig. 3, the SNRs are (14.10, 13.95, 14.40) in dB. In Fig. 4, the SNRs are enhanced to (15.98, 15.81, 16.32) in dB. It can be seen that higher SNRs usually result in better Pareto solutions.

B. System Performance
To analyze the system performance, we evaluate Pm and R corresponding to the derived Pareto solutions. Here, we only show the values of Pm and R under the scenario of N = 3 and


Fig. 8. Missed detection probability, N = 3.
Fig. 9. Average throughput of SUs, N = 3.
Fig. 10. GA, N = 3.
Fig. 11. PRS, N = 3.
Fig. 12. GA, N = 6.

SNRs = (15.98, 15.81, 16.32) in dB. From Figs. 8 and 9, we draw the following conclusions.
1) As shown in both figures, when the average throughput of SUs is high (low), the missed detection probability is also high (low). That is to say, the objectives of missed detection probability minimization and average throughput maximization are indeed conflicting.
2) In Fig. 8, the missed detection probability corresponding to each Pareto solution is always below θ = 0.1. Therefore, the requirement on missed detection probability [i.e., constraint (23)] is satisfied by our method.
3) In both figures, γc increases from 1.6 to 12.8. Accordingly, Pm increases, Pf decreases, and the average throughput rises.
4) The solutions derived by the SOO method correspond to the anchor points. They are Pareto optimal and lie on the Pareto frontier. However, our MOO method provides flexibility in determining an appropriate design (i.e., ω and γc). If the SUs enhance their average throughput, they also increase Pm, so the PU suffers more frequent interference. Hence, the Pareto solutions represent tradeoffs between the interests of the SUs and the PU. If a designer attaches more importance to the interests of the PU, he/she tends to select a Pareto solution with low Pm such as P1. Otherwise, the designer can choose a Pareto solution with high throughput such as P2.


C. Optimality and Complexity of SobolOPT
Now, we study the performance of SobolOPT with respect to optimality and complexity. For comparison, we show the solutions derived by a genetic algorithm (GA) and pure random search (PRS) in Figs. 10–13. GA is a search heuristic that mimics the process of natural evolution: the evolution starts from a population of randomly generated individuals and proceeds in generations. As for PRS, it evaluates the objective function at a set of sample points drawn from a uniform distribution, and takes the smallest function value found as the candidate optimum. To enhance the probability of capturing the global optimum, the number of evaluations is set to 25 600 in our simulations.
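As a concrete illustration of the PRS baseline described above, the sketch below minimizes an objective by uniform sampling while discarding infeasible draws. The objective, box bounds, and constraint are illustrative placeholders, not the paper's SOO problem.

```python
import random

# Hedged sketch of pure random search (PRS) as described in the text:
# evaluate the objective at uniformly drawn sample points and keep the
# smallest feasible value found. The objective, bounds, and constraint
# below are placeholders, not the paper's SOO problem.

def prs(objective, feasible, bounds, n_evals=25600, seed=0):
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_evals):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if not feasible(x):
            continue                      # infeasible draws are discarded
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f                 # (None, inf) if all draws infeasible

# Example: minimize a quadratic over [0, 1]^2 under a linear constraint.
x, f = prs(lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.6) ** 2,
           lambda x: x[0] + x[1] <= 1.5,
           bounds=[(0.0, 1.0), (0.0, 1.0)])
```

The return convention `(None, inf)` mirrors the failure mode discussed below: on higher-dimensional or tightly constrained problems, all draws may be infeasible and PRS returns no solution at all.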



TABLE III Complexity and Optimality (N = 6,  = 48)

Fig. 13. PRS, N = 6.
TABLE I Complexity and Optimality (N = 3,  = 48)

TABLE II Complexity and Optimality (N = 4,  = 48)

It can be seen that SobolOPT consistently finds the global optimum, while GA and PRS fail to do so. Some observations and analyses are given as follows.
1) In Figs. 10 and 12, GA often misses the global optimum. This is not surprising, since GA may converge toward local optima or even arbitrary points rather than the global optimum.
2) In Figs. 11 and 13, PRS finds some feasible (but not optimal) solutions, and some of them are close to the Pareto frontier. Although PRS cannot guarantee finding global optima, it may find a near-optimal solution close to the global optimum, especially when the number of sample points is sufficiently large.
3) In some cases, the near-optimal solutions derived by PRS are better than the local optima and the feasible solutions derived by GA, as shown in the scenario of N = 3. However, when the dimension of the problem grows, the probability of PRS failing to obtain a feasible solution also increases. For example, PRS fails to find a feasible solution six times under the scenario of N = 6. Accordingly, only 14 solutions are shown in Fig. 13.
Furthermore, we compare SobolOPT with GA and PRS in terms of complexity. We conduct the simulations on a PC with a 3.20-GHz CPU and 4 GB of RAM. Under every scenario, the same twenty SOO problems are solved using SobolOPT, GA, and PRS. The results are shown in

Fig. 14. All possible ( , ).

Tables I–III, respectively. In these tables, the second column represents the average number of local searches for SobolOPT, the function tolerance (i.e., TolFun) and nonlinear constraint tolerance (i.e., TolCon) for GA,8 and the number of objective function evaluations for PRS. The third column shows the average running time of these algorithms for one SOO problem. The simulation results show that the running time of SobolOPT increases with the size of the secondary network. However, the number of LCSS participants cannot be large due to the limited channel bandwidth [37]. Moreover, CSS participant selection mechanisms [8] or clustering methods [38] can be used to reduce the number of CSS participants in a large-scale CRN. Hence, the running time of SobolOPT is acceptable in practice.9 Compared to SobolOPT, GA has a shorter running time in some cases, but it often gets trapped in a local optimum. As for PRS, it has no advantage over GA and SobolOPT with respect to running time.

D. Optimal Number of Pareto Solutions
In this section, we evaluate the efficiency of Algorithm 3. For simplicity, we use two functions: 1) F( , ) = log() and 2) J() =  + 2 in our simulations. It should be pointed out that Algorithm 3 is applicable to more complicated functions. In fact, the number of iterations in Algorithm 3 does not depend on the forms of F(·) and J(·). In our simulations, some parameters are chosen as follows: min = 4, max = 50, max = 25 600, and ϒ = 512 000.
8 The GA runs until the cumulative change in the fitness function value over the stall generations is less than TolFun. TolCon is used to determine feasibility with respect to the nonlinear constraints. Furthermore, the maximal number of generations is set to a large value (100 000) in our simulations.
9 Due to the interpretive execution of MATLAB, the running time of SobolOPT is overestimated compared to that in a practical system.


Fig. 15. Finding ( ∗ , ∗ ) using our algorithm.
Fig. 16. Pareto solution refinement (N = 3, ξ = 0.2, and α/β = 0.5).
Fig. 17. Pareto solution refinement (N = 3, ξ = 0.2, and α/β = 2.0).
Fig. 18. Pareto solution refinement (N = 5, ξ = 0.2, and α/β = 1.0).
Fig. 19. Pareto solution refinement (N = 5, ξ = 0.1, and α/β = 1.0).

The simulation results are given in Figs. 14 and 15. Fig. 14 shows all possible ( , ) and their objective function values. The circle represents the optimal solution, i.e., the optimal configuration ( ∗ , ∗ ). The number of points in Fig. 14 is 1 203 200, which corresponds to a large search space. With Algorithm 3, the search space is greatly reduced. As shown in Fig. 15, Algorithm 3 visits only 47 points and succeeds in finding the optimal configuration.

E. Pareto Solution Refinement
Here, we verify the effectiveness of the proposed Pareto solution refinement method. To this end, we consider a general optimization problem, i.e., min αμ1 + βμ2. Due to space limits,

we only show the results of four scenarios, given in Figs. 16–19. Some observations and analyses are as follows.
1) The obtained Pareto segment contains the solution minimizing αμ1 + βμ2, which corresponds to the tangent point of the line and the Pareto frontier. In addition, the refined Pareto solutions result in relatively small values of αμ1 + βμ2, which is consistent with the designer's preference.
2) In Fig. 16, α/β = 0.5 means that the designer attaches more importance to μ2. To satisfy this preference, our method identifies the Pareto solutions with relatively small values of μ2. Similarly, when α/β = 2.0 holds, our method finds the Pareto solutions with relatively small values of μ1, as shown in Fig. 17.
3) As shown in Figs. 18 and 19, when ξ is reduced from 0.2 to 0.1, the length of the interested Utopia segment is also reduced. Accordingly, our method identifies a smaller part of the Pareto frontier.

VII. CONCLUSION
This paper investigates weight allocation and detection threshold selection for LCSS. We propose an MOO formulation to simultaneously optimize the missed detection probability and the average throughput of SUs. We first introduce the NC method to transform the considered MOO problem into a set of SOO problems. We then employ a stochastic



method to solve the SOO problems. We also develop a simple method to determine the optimal number of Pareto solutions. In addition, we propose a Pareto solution refinement method to identify the interested Pareto solutions according to the preference of a designer. Numerical results validate the effectiveness and efficiency of the proposed methods.

APPENDIX
NONCONVEXITY OF ν

Proof: We verify the nonconvexity of ν by contradiction. Assume that ν is convex with respect to γc and ωi (1 ≤ i ≤ N) [16], [39]. Consequently, the second-order derivative of ν with respect to each individual variable should be nonnegative [16]. Here, we only consider the second-order derivatives of ν with respect to ωi. For simplicity, let (Mσ + Es g)ᵀ = [α1, α2, . . . , αN] and Σ_H1 = diag(β1, β2, . . . , βN). Note that Σ_H1 is a diagonal matrix. ν can be written as

ν = (γc − Σ_{k=1}^N αk ωk) / √(Σ_{k=1}^N βk ωk²).

As a result, we have

∂ν/∂ωi = (Σ_{k=1}^N βk ωk²)^{−3/2} [ −γc βi ωi + βi ωi Σ_{k=1}^N αk ωk − αi Σ_{k=1}^N βk ωk² ]

and

∂²ν/∂ωi² = (Σ_{k=1}^N βk ωk²)^{−5/2} [ 2βi² (γc − Σ_{k≠i} αk ωk) ωi² + 3αi βi (Σ_{k≠i} βk ωk²) ωi + βi (Σ_{k≠i} βk ωk²)(−γc + Σ_{k≠i} αk ωk) ].

In the second-order derivative, the factor (Σ_{k=1}^N βk ωk²)^{−5/2} is obviously positive, and the second factor is a quadratic polynomial in ωi. The discriminant of the second factor is

Δ = 9αi² βi² (Σ_{k≠i} βk ωk²)² + 8βi³ (Σ_{k≠i} βk ωk²)(γc − Σ_{k≠i} αk ωk)²

which is obviously nonnegative. It can be proved that Δ = 0 holds only if ωi = 1 and ωk = 0 for all k ≠ i. That is, Δ is zero at a single point. In the other cases, Δ > 0 and the

graph of the quadratic polynomial crosses the x-axis twice. Hence, ∂²ν/∂ωi² < 0 holds in some cases, which contradicts our convexity assumption. This completes the proof.

REFERENCES
[1] I. F. Akyildiz, B. F. Lo, and R. Balakrishnan, "Cooperative spectrum sensing in cognitive radio networks: A survey," Phys. Commun., vol. 4, no. 1, pp. 40–62, 2011.
[2] S. Chaudhari, J. Lunden, V. Koivunen, and H. V. Poor, "Cooperative sensing with imperfect reporting channels: Hard decisions or soft decisions?" IEEE Trans. Signal Process., vol. 60, no. 1, pp. 18–28, Jan. 2012.
[3] X. Wang et al., "Spectrum sharing in cognitive radio networks—An auction-based approach," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 3, pp. 587–596, Jun. 2010.
[4] B. A. Bastami and E. Saberinia, "A practical multibit data combining strategy for cooperative spectrum sensing," IEEE Trans. Veh. Technol., vol. 62, no. 1, pp. 384–389, Jan. 2013.
[5] A. Mukherjee, "Diffusion of cooperative behavior in decentralized cognitive radio networks with selfish spectrum sensors," IEEE J. Sel. Topics Signal Process., vol. 7, no. 2, pp. 175–183, Apr. 2013.
[6] B. Wang, K. J. R. Liu, and T. C. Clancy, "Evolutionary cooperative spectrum sensing game: How to collaborate?" IEEE Trans. Commun., vol. 58, no. 3, pp. 890–899, Mar. 2010.
[7] E. C. Y. Peh, Y.-C. Liang, Y. Guan, and Y. Zeng, "Optimization of cooperative sensing in cognitive radio networks: A sensing-throughput tradeoff view," IEEE Trans. Veh. Technol., vol. 58, no. 9, pp. 5294–5299, Nov. 2009.
[8] W. Yuan, H. Leung, W. Cheng, S. Chen, and B. Chen, "Participation in repeated cooperative spectrum sensing: A game-theoretic perspective," IEEE Trans. Wireless Commun., vol. 11, no. 3, pp. 1000–1011, Mar. 2012.
[9] A. S. Cacciapuoti, I. F. Akyildiz, and L. Paura, "Correlation-aware user selection for cooperative spectrum sensing in cognitive radio ad hoc networks," IEEE J. Sel. Areas Commun., vol. 30, no. 2, pp. 297–306, Feb. 2012.
[10] S. Maleki and G. Leus, "Censored truncated sequential spectrum sensing for cognitive radio networks," IEEE J. Sel. Areas Commun., vol. 31, no. 3, pp. 364–378, Mar. 2013.
[11] J. Ma, G. Zhao, and Y. Li, "Soft combination and detection for cooperative spectrum sensing in cognitive radio networks," IEEE Trans. Wireless Commun., vol. 7, no. 11, pp. 4502–4507, Nov. 2008.
[12] Z. Quan, S. Cui, and A. H. Sayed, "Optimal linear cooperation for spectrum sensing in cognitive radio networks," IEEE J. Sel. Topics Signal Process., vol. 2, no. 1, pp. 28–40, Feb. 2008.
[13] Z. Quan, W.-K. Ma, S. Cui, and A. H. Sayed, "Optimal linear fusion for distributed detection via semidefinite programming," IEEE Trans. Signal Process., vol. 58, no. 4, pp. 2431–2436, Apr. 2010.
[14] G. Taricco, "Optimization of linear cooperative spectrum sensing for cognitive radio networks," IEEE J. Sel. Topics Signal Process., vol. 5, no. 1, pp. 77–86, Feb. 2011.
[15] Z. Quan, S. Cui, A. H. Sayed, and H. V. Poor, "Optimal multiband joint detection for spectrum sensing in cognitive radio networks," IEEE Trans. Signal Process., vol. 57, no. 3, pp. 1128–1140, Mar. 2009.
[16] R. Fan and H. Jiang, "Optimal multi-channel cooperative sensing in cognitive radio networks," IEEE Trans. Wireless Commun., vol. 9, no. 3, pp. 1128–1138, Mar. 2010.
[17] M. Sanna and M. Murroni, "Optimization of non-convex multiband cooperative sensing with genetic algorithms," IEEE J. Sel. Topics Signal Process., vol. 5, no. 1, pp. 87–96, Feb. 2011.
[18] W. Han, J. Li, Z. Li, J. Si, and Y. Zhang, "Efficient soft decision fusion rule in cooperative spectrum sensing," IEEE Trans. Signal Process., vol. 61, no. 8, pp. 1931–1943, Apr. 2013.
[19] A. S. Cacciapuoti, M. Caleffi, L. Paura, and R. Savoia, "Decision maker approaches for cooperative spectrum sensing: Participate or not participate in sensing?" IEEE Trans. Wireless Commun., vol. 12, no. 5, pp. 2445–2457, May 2013.
[20] E. Masazade et al., "A multiobjective optimization approach to obtain decision thresholds for distributed detection in wireless sensor networks," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 2, pp. 444–457, Apr. 2010.
[21] H. Xia, J. Zhuang, and D. Yu, "Combining crowding estimation in objective and decision space with multiple selection and search strategies for multi-objective evolutionary optimization," IEEE Trans. Cybern., vol. 44, no. 3, pp. 378–393, Mar. 2014.


[22] R. T. Marler and J. S. Arora, "Survey of multi-objective optimization methods for engineering," Struct. Multidiscipl. Optim., vol. 26, no. 6, pp. 369–395, Apr. 2004.
[23] J. Xiao, S. Cui, Z. Luo, and A. J. Goldsmith, "Power scheduling of universal decentralized estimation in sensor networks," IEEE Trans. Signal Process., vol. 54, no. 2, pp. 413–422, Feb. 2006.
[24] F. Sun, V. O. K. Li, and Z. Diao, "Modified bipartite matching for multiobjective optimization: Application to antenna assignments in MIMO systems," IEEE Trans. Wireless Commun., vol. 8, no. 3, pp. 1349–1355, Mar. 2009.
[25] A. Messac and C. A. Mattson, "Normal constraint method with guarantee of even representation of complete Pareto frontier," AIAA J., vol. 42, no. 10, pp. 2101–2111, 2004.
[26] A. Ismail-Yahaya and A. Messac, "Effective generation of the Pareto frontier using the normal constraint method," in Proc. AIAA 40th Aerosp. Sci. Meeting Exhibit., Reno, NV, USA, 2002, pp. 1–12.
[27] A. Messac, A. Ismail-Yahaya, and C. A. Mattson, "The normalized normal constraint method for generating the Pareto frontier," Struct. Multidiscipl. Optim., vol. 25, no. 2, pp. 86–98, Jul. 2003.
[28] I. Das and J. E. Dennis, "A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multicriteria optimization problems," Struct. Optim., vol. 14, no. 1, pp. 63–69, Aug. 1997.
[29] I. Y. Kim and O. L. de Weck, "Adaptive weighted-sum method for bi-objective optimization: Pareto front generation," Struct. Multidiscipl. Optim., vol. 29, no. 2, pp. 149–158, Feb. 2005.
[30] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. Chichester, U.K.: Wiley, 2001.
[31] I. Das and J. E. Dennis, "Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems," SIAM J. Optim., vol. 8, no. 3, pp. 631–657, 1998.
[32] B. G.-Tóth and E. M. T. Hendrix, Introduction to Nonlinear and Global Optimization. New York, NY, USA: Springer, 2010.
[33] S. Kucherenko and Y. Sytsko, "Application of deterministic low-discrepancy sequences in global optimization," Comput. Optim. Appl., vol. 30, no. 3, pp. 297–318, 2005.
[34] L. Liberti and S. Kucherenko, "Comparison of deterministic and stochastic approaches to global optimization," Int. Trans. Oper. Res., vol. 12, no. 3, pp. 263–285, May 2005.
[35] J. Nocedal and S. J. Wright, Numerical Optimization. New York, NY, USA: Springer, 1999.
[36] W. Yuan et al., "Multi-objective optimization of linear cooperative spectrum sensing: Pareto solutions and refinement," IEEE Trans. Cybern., to be published. [Online]. Available: http://ei.hust.edu.cn/aprofessor/yuanwei/2015_MOOforLCSS.pdf
[37] C. Sun, W. Zhang, and K. B. Letaief, "Cooperative spectrum sensing for cognitive radios under bandwidth constraints," in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Hong Kong, 2010, pp. 1–5.
[38] W. Saad, Z. Han, T. Basar, M. Debbah, and A. Hjorungnes, "Coalition formation games for collaborative spectrum sensing," IEEE Trans. Veh. Technol., vol. 60, no. 1, pp. 276–297, Jan. 2011.
[39] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.

Wei Yuan (M’12) received the B.S. degree in electronic engineering from Wuhan University, Wuhan, China, and the Ph.D. degree in electronic engineering from the University of Science and Technology of China, Hefei, China, in 1999 and 2006, respectively. He is currently an Associate Professor with the School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan. His current research interests include wireless networks and the applications of optimization, game theory, and machine learning.


Xinge You (M’08–SM’10) received the B.S. and M.S. degrees in mathematics from Hubei University, Wuhan, China, in 1990 and 2000, respectively, and the Ph.D. degree from the Department of Computer Science, Hong Kong Baptist University, Hong Kong, in 2004. He is currently a Professor with the School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan. His current research interests include wavelets and its application, signal and image processing, pattern recognition, machine learning, and computer vision.

Jing Xu received the B.E. degree in communication engineering and the Ph.D. degree in communication and information systems, both from the Huazhong University of Science and Technology (HUST), Wuhan, China, in 2001 and 2011, respectively. He is currently a Lecturer with the School of Electronics Information and Communications, HUST. His current research interests include cyber and physical layer security, heterogeneous network, and network content analysis.

Henry Leung (F’15) received the Ph.D. degree in electrical and computer engineering from McMaster University, Hamilton, ON, Canada. He was at Defence Research Establishment, Ottawa, ON, Canada, where he was involved in the design of automated systems for air and maritime multisensor surveillance. He is currently a Professor with the Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB, Canada. His current research interests include chaos, computational intelligence, information fusion, data mining, robotics, sensor networks, and wireless communications.

Tianhang Zhang received the B.S. and M.S. degrees in electronic engineering from the Huazhong University of Science and Technology, Wuhan, China, in 2011 and 2014, respectively. He is currently a Communication Engineer with the China Mobile Group, Hubei Company Ltd., Wuhan. His current research interests include vehicle-to-grid interaction in smart grid and the applications of optimization and game theory.

Chun Lung Philip Chen (S’88–M’88–SM’94– F’07) received the M.S. degree in electrical engineering from the University of Michigan, Ann Arbor, MI, USA, and the Ph.D. degree in electrical engineering from Purdue University, West Lafayette, IN, USA, in 1985 and 1988, respectively. He was a Tenured Professor, a Department Head, and an Associate Dean in two different universities in the U.S. for 23 years. He is currently the Dean of the Faculty of Science and Technology, and a Chair Professor with the Department of Computer and Information Science, University of Macau, Macau, China. His current research interests include systems, cybernetics, and computational intelligence. Dr. Chen has been an Editor-in-Chief of the IEEE T RANSACTIONS ON S YSTEMS , M AN , AND C YBERNETICS : S YSTEMS since 2014. He is currently an Associate Editor of several IEEE transactions. He is also the Chair of Technical Committees 9.1 Economic and Business Systems of International Federation of Automatic Control. He is a fellow of the AAAS.
