Hindawi Publishing Corporation
The Scientific World Journal
Volume 2013, Article ID 246596, 6 pages
http://dx.doi.org/10.1155/2013/246596
Research Article

An Accelerated Proximal Gradient Algorithm for Singly Linearly Constrained Quadratic Programs with Box Constraints

Congying Han,1 Mingqiang Li,2 Tong Zhao,1 and Tiande Guo1

1 School of Mathematical Sciences, University of Chinese Academy of Sciences, Shijingshan District, Beijing 100049, China
2 College of Information Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
Correspondence should be addressed to Tong Zhao; [email protected]

Received 4 August 2013; Accepted 23 September 2013

Academic Editors: I. Ahmad and P.-y. Nie

Copyright © 2013 Congying Han et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Recently, proximal gradient algorithms have been used to solve nonsmooth convex optimization problems. As a special nonsmooth convex problem, singly linearly constrained quadratic programs with box constraints appear in a wide range of applications. Hence, we propose an accelerated proximal gradient algorithm for singly linearly constrained quadratic programs with box constraints. At each iteration, the subproblem, whose Hessian matrix is diagonal and positive definite, is an easy model which can be solved efficiently by searching for a root of a piecewise linear function. It is proved that the new algorithm terminates at an $\epsilon$-optimal solution within $O(1/\sqrt{\epsilon})$ iterations. Moreover, no line search is needed in this algorithm, and its global convergence can be proved under mild conditions. Numerical results are reported for quadratic programs arising from the training of support vector machines, and they show that the new algorithm is efficient.
1. Introduction

In this paper, we mainly consider the following quadratic programming problem:

$$\min_{x \in \Omega} f(x) = \frac{1}{2} x^T G x - d^T x, \qquad (1)$$

where $G \in R^{n \times n}$ is symmetric and positive semidefinite, $x, d \in R^n$, and the feasible region $\Omega$ is defined by

$$\Omega = \{x \in R^n : l \le x \le u, \; a^T x = b\}, \qquad (2)$$

where $l, u, a \in R^n$ and $b$ is a scalar.

Singly linearly constrained quadratic programs with box constraints appear in a wide range of applications such as image processing, bioinformatics, and machine learning. In particular, the support vector machine (SVM) is one of the most classical models of the form (1). It is a promising technique for solving a variety of machine learning and function estimation problems. The SVM learning methodology has been shown to give good performance in a wide variety of problems such as face detection, text categorization, and handwritten character recognition.
The number of variables in SVM training is so large that traditional optimization methods cannot be applied directly. Decomposition methods [1–4], whose subproblems are special cases of (1), are the main approach for large-scale SVM problems. Solving the subproblem by the generalized variable projection method (GVPM) and by the projected gradient method (PGM) is introduced in [5] and [6], respectively. Moreover, Zanni et al. propose parallel decomposition algorithms based on these two methods in [7, 8]. For more general large-scale models, some parallel methods [9, 10] have recently been proposed, but so far these methods cannot be applied specifically to SVM.

In this work, we give an accelerated proximal gradient algorithm for (1). First, we consider the following nonsmooth convex optimization problem:

$$\min_{x \in R^n} F(x) = h(x) + g(x), \qquad (3)$$

where $g: R^n \to (-\infty, \infty]$ is a proper, lower semicontinuous (lsc), convex function and $h$ is convex and smooth (i.e., continuously differentiable) on an open subset of $R^n$ containing $\operatorname{dom} g = \{x \mid g(x) < \infty\}$.
Moreover, $\nabla h$ is Lipschitz continuous on $\operatorname{dom} g$; that is,

$$\|\nabla h(x) - \nabla h(y)\| \le L \|x - y\| \quad \forall x, y \in \operatorname{dom} g \qquad (4)$$

for some $L > 0$, where, and in what follows, $\|\cdot\|$ denotes the spectral norm. Obviously, problem (1) is a special case of (3) with $h(x) = f(x)$ and $g(x)$ being the indicator function of the feasible region $\Omega$:

$$g(x) = \begin{cases} 0 & \text{if } x \in \Omega, \\ +\infty & \text{otherwise}. \end{cases} \qquad (5)$$

Recently, great attention has been paid to the solution of (3). Nesterov and Nemirovski [11, 12] study accelerated gradient methods for problems of the form (3) with an attractive iteration complexity of $O(1/\sqrt{\epsilon})$ for achieving $\epsilon$-optimality. Almost at the same time, Beck and Teboulle [13] give a fast iterative shrinkage-thresholding algorithm (FISTA) which achieves the same convergence rate. After that, Tseng [14] summarizes these algorithms and presents a unified treatment of such methods. All these algorithms perform well on large-scale problems such as linear inverse problems, matrix game problems, and matrix completion. Motivated by the successful use of the accelerated proximal gradient method for (3), we extend Beck and Teboulle's algorithm to solve (1). In particular, the subproblem is solved by searching for a root of a piecewise linear continuous function. Numerical results show that the new algorithm is efficient.

The paper is organized as follows. In Section 2, the proximal gradient algorithm and its accelerated version are presented for (1), and the convergence of these algorithms is discussed. In Section 3, the method used to solve the subproblem efficiently is introduced. Numerical results on SVM classification problems generated from random and real-world data sets are shown in Section 4. Section 5 is devoted to some conclusions and further study.
2. Proximal Gradient Algorithms

In this section, we introduce the proximal gradient algorithm and its accelerated version, which can be applied to solve (1). For any $z \in R^n$, consider the following quadratic approximation of $f(x)$ at $z$:

$$Q_L(x, z) = f(z) + \langle x - z, \nabla f(z) \rangle + \frac{L}{2} \|x - z\|^2 \qquad (6)$$
$$= \frac{L}{2} x^T x + (Gz - Lz - d)^T x + \mathrm{const}, \qquad (7)$$

where $L > 0$ is a given parameter and $\mathrm{const} = (L/2) z^T z - \nabla f(z)^T z + f(z)$. Since (6) is a strongly convex function of $x$, $Q_L(x, z)$ has a unique minimizer in $\Omega$, which we denote by

$$p_L(z) = \arg\min \{Q_L(x, z) : x \in \Omega\}. \qquad (8)$$

First, we present a simple result which characterizes the optimality of $p_L(z)$.

Lemma 1. For any $z \in R^n$, $w = p_L(z)$ if and only if there exists $\gamma(z) \in N_\Omega(w)$ such that

$$\nabla f(z) + L(w - z) + \gamma(z) = 0, \qquad (9)$$

where $N_\Omega(w)$ is the normal cone of $\Omega$ at the point $w$, defined by

$$N_\Omega(w) = \{w^* \in R^n \mid \langle w^*, v - w \rangle \le 0 \ \forall v \in \Omega\}. \qquad (10)$$

Proof. This result can be obtained directly from the optimality conditions of (8), which is a strongly convex problem.

Then, similarly to Lemma 2.3 in [13], we have the following key result.

Lemma 2. Let $z \in R^n$ and $L > 0$ be such that
π and πΏ > 0 such that π (ππΏ (π§)) β€ π (ππΏ (π§) , π§) .
(11)
Then for any π₯ β Ξ©, π (π₯) β π (ππΏ (π§)) β₯
πΏ σ΅©σ΅© σ΅©2 σ΅©π (π§) β π§σ΅©σ΅©σ΅© + πΏ β¨π§ β π₯, ππΏ (π§) β π§β© . 2σ΅© πΏ (12)
Proof. Since πΎ(π§) β πΞ© (π€) and π is convex, it follows that π (π₯) β₯ π (π§) + β¨π₯ β π§, βπ (π§)β© + β¨πΎ (π§) , π₯ β ππΏ (π§)β© .
(13)
Hence, from (9), (11), (13), and the definition of ππΏ (π§) we have π (π₯) β π (ππΏ (π§)) β₯ π (π₯) β π (ππΏ (π§) , π§) πΏσ΅© σ΅©2 β₯ β σ΅©σ΅©σ΅©ππΏ (π§) β π§σ΅©σ΅©σ΅© + β¨π₯ β ππΏ (π§) , βπ (π§) + πΎ (π§)β© 2
(14)
πΏσ΅© σ΅©2 = β σ΅©σ΅©σ΅©ππΏ (π§) β π§σ΅©σ΅©σ΅© + πΏ β¨π₯ β ππΏ (π§) , π§ β ππΏ (π§)β© 2 =
πΏ σ΅©σ΅© σ΅©2 σ΅©π (π§) β π§σ΅©σ΅©σ΅© + πΏ β¨π§ β π₯, ππΏ (π§) β π§β© . 2σ΅© πΏ
This completes the proof.

Note that $f(x)$ is a convex quadratic function, so (11) is always satisfied for $p_L(z)$ if $L \ge \lambda_{\max}(G)$, where $\lambda_{\max}(G)$ denotes the maximum eigenvalue of $G$. The proximal gradient algorithm for (1) is as follows.

Algorithm 3. We have the following steps.

Step 1. Choose $x_0 \in R^n$.

Step 2. While ($x_{k-1}$ does not satisfy the terminal conditions)

$$x_k = p_L(x_{k-1}), \quad k = k + 1. \qquad (15)$$
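To make the iteration concrete, the following is a minimal Python sketch of Algorithm 3 (not the authors' Matlab code). It assumes a routine `prox_pl(z)` that returns $p_L(z)$, for instance via the root-finding scheme of Section 3, and it uses a simple tolerance on successive iterates as an illustrative stand-in for the KKT-based stopping rule used in the experiments.

```python
import numpy as np

def proximal_gradient(x0, prox_pl, tol=1e-6, max_iter=100000):
    """Algorithm 3: repeatedly apply x_k = p_L(x_{k-1}).

    prox_pl(z) is assumed to return p_L(z), the minimizer of (8).
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = prox_pl(x)
        # Illustrative stopping test; the paper checks KKT conditions instead.
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k
        x = x_new
    return x, max_iter
```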
The following theorem gives the iteration complexity of Algorithm 3. In what follows, $X^*$ and $f^*$ denote the set of optimal solutions and the optimal value of (1), respectively.
Theorem 4. Let $\{x_k\}$ and $\{f(x_k)\}$ be the sequences generated by Algorithm 3 with $L \ge \lambda_{\max}(G)$. Then, for $k \ge 1$, the following results hold:

(a) $\{f(x_k)\}$ is nonincreasing and

$$f(x_k) - f(x_{k+1}) \ge \frac{L}{2} \|x_k - x_{k+1}\|^2; \qquad (16)$$

(b) for all $x^* \in X^*$,

$$f(x_k) - f(x^*) \le \frac{L \|x_0 - x^*\|^2}{2k}; \qquad (17)$$

(c) if $\{x_{k_j}\}$ is a convergent subsequence of $\{x_k\}$ and $\lim_{j \to \infty} x_{k_j} = x^{**}$, then $x^{**} \in X^*$;

(d) if $G$ is positive definite, then

$$\|x_k - x^*\| \le \sqrt{\frac{L}{2 \lambda_{\min}(G) k}} \, \|x_0 - x^*\|. \qquad (18)$$

Proof. (a) Let $z = x_k$, so that $x_{k+1} = p_L(z)$. From (11) and the optimality of $p_L(z)$ we have

$$f(x_{k+1}) \le Q_L(x_{k+1}, x_k) \le Q_L(x_k, x_k) = f(x_k).$$

Furthermore, taking $x = x_k$ in (12), we get

$$f(x_k) - f(x_{k+1}) \ge \frac{L}{2} \|x_k - x_{k+1}\|^2. \qquad (19)$$

(b) We have

$$k(f(x_k) - f(x^*)) = k f(x_k) - \sum_{i=0}^{k-1} f(x_{i+1}) + \sum_{i=0}^{k-1} f(x_{i+1}) - k f(x^*)$$
$$\le \sum_{i=0}^{k-1} \left( f(x_{i+1}) - f(x^*) \right)$$
$$\le -\sum_{i=0}^{k-1} \left( \frac{L}{2} \|x_{i+1} - x_i\|^2 + L \langle x_i - x^*, x_{i+1} - x_i \rangle \right) \qquad (20)$$
$$= -\frac{L}{2} \sum_{i=0}^{k-1} \left( \|x^* - x_{i+1}\|^2 - \|x^* - x_i\|^2 \right)$$
$$= -\frac{L}{2} \left( \|x^* - x_k\|^2 - \|x_0 - x^*\|^2 \right)$$
$$\le \frac{L}{2} \|x_0 - x^*\|^2,$$

where the first inequality uses the fact that $\{f(x_k)\}$ is nonincreasing and the second inequality is established by taking $x = x^*$, $z = x_i$ in (12). Dividing both sides of the last inequality by $k$, we get

$$f(x_k) - f(x^*) \le \frac{L \|x_0 - x^*\|^2}{2k}. \qquad (21)$$

(c) It follows from (b) that, for all $x^* \in X^*$,

$$f(x_{k_j}) - f(x^*) \le \frac{L \|x_0 - x^*\|^2}{2 k_j}. \qquad (22)$$

Letting $j \to \infty$ in the above inequality, we have $f(x^{**}) \le f(x^*)$. On the other hand, by the definition of $x^*$, we have $f(x^{**}) \ge f(x^*)$. Hence $f(x^{**}) = f(x^*)$, which implies that $x^{**} \in X^*$.

(d) It follows from the definition of $f(x)$ that, for all $x^* \in X^*$,

$$f(x_k) - f(x^*) = \nabla f(x^*)^T (x_k - x^*) + (x_k - x^*)^T G (x_k - x^*)$$
$$\ge (x_k - x^*)^T G (x_k - x^*) \ge \lambda_{\min}(G) \|x_k - x^*\|^2, \qquad (23)$$

where the first inequality is established by the definition of $x^*$ and the second inequality, with $\lambda_{\min}(G)$ denoting the minimum eigenvalue of $G$, is obtained from the positive definiteness of $G$. From (23) and (b), we get

$$\|x_k - x^*\| \le \sqrt{\frac{L}{2 \lambda_{\min}(G) k}} \, \|x_0 - x^*\|. \qquad (24)$$

Theorem 4 shows that Algorithm 3 has a complexity result of $O(1/k)$. Now we introduce a new algorithm with an improved complexity result of $O(1/k^2)$.

Algorithm 5. We have the following steps.

Step 1. Choose $z_1 = x_0 \in R^n$; set $k = 1$, $t_1 = 1$.

Step 2. While ($x_{k-1}$ does not satisfy the terminal conditions)

$$x_k = p_L(z_k), \qquad (25)$$
$$t_{k+1} = \frac{1 + \sqrt{1 + 4 t_k^2}}{2}, \quad z_{k+1} = x_k + \left( \frac{t_k - 1}{t_{k+1}} \right) (x_k - x_{k-1}), \quad k = k + 1. \qquad (26)$$

Remark 6. In Algorithm 5, the operator $p_L(\cdot)$ is applied at the point $z_k$, which is a specific linear combination of the previous two points $x_{k-1}$ and $x_{k-2}$, whereas in Algorithm 3 the operator uses only the previous point $x_{k-1}$. On the other hand, the computational effort of the two algorithms is almost the same, except for the computation of (26) in Algorithm 5, which is negligible.

Now we give the promised improved complexity result for Algorithm 5.

Theorem 7. Let $\{x_k\}$ and $\{f(x_k)\}$ be the sequences generated by Algorithm 5 with $L \ge \lambda_{\max}(G)$. Then, for $k \ge 1$, the following results hold:

(a) for all $x^* \in X^*$,

$$f(x_k) - f(x^*) \le \frac{2 L \|x_0 - x^*\|^2}{(k+1)^2}; \qquad (27)$$

(b) if $\{x_{k_j}\}$ is a convergent subsequence of $\{x_k\}$ and $\lim_{j \to \infty} x_{k_j} = x^{**}$, then $x^{**} \in X^*$;

(c) if $G$ is positive definite, then

$$\|x_k - x^*\| \le \sqrt{\frac{2 L}{\lambda_{\min}(G) (k+1)^2}} \, \|x_0 - x^*\|. \qquad (28)$$

Proof. (a) Using Lemma 2, the proof is similar to that of Theorem 4.4 in [13]. (b) and (c) can be proved in the same way as (c) and (d) of Theorem 4, respectively.
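For concreteness, here is a minimal Python sketch of Algorithm 5 under the same assumptions as the sketch of Algorithm 3 (a supplied `prox_pl` routine and an illustrative stopping test); it differs from that sketch only in the $t$- and $z$-updates of (26).

```python
import numpy as np

def accelerated_proximal_gradient(x0, prox_pl, tol=1e-6, max_iter=100000):
    """Algorithm 5: prox step at z_k followed by the extrapolation (26)."""
    x_prev = np.asarray(x0, dtype=float)
    z = x_prev.copy()
    t = 1.0
    for k in range(1, max_iter + 1):
        x = prox_pl(z)                                   # (25): x_k = p_L(z_k)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x + ((t - 1.0) / t_next) * (x - x_prev)      # (26): extrapolation
        # Illustrative stopping test; the paper checks KKT conditions instead.
        if np.linalg.norm(x - x_prev) <= tol:
            return x, k
        x_prev, t = x, t_next
    return x_prev, max_iter
```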
3. How to Solve $p_L(\cdot)$

In this section, we extend the method studied by Dai and Fletcher [6] to the evaluation of $p_L(\cdot)$. In order to obtain $p_L(z)$ in (8), we need to solve the following problem:

$$\min \ \frac{L}{2} x^T x - c^T x$$
$$\text{s.t.} \ a^T x = b, \quad l \le x \le u, \qquad (29)$$

where $c = Lz + d - Gz$. For a given value of $\lambda$, we consider the following box-constrained QP problem:

$$\min \ \frac{L}{2} x^T x - c^T x - \lambda (a^T x - b)$$
$$\text{s.t.} \ l \le x \le u \qquad (30)$$

and denote the minimizer of (30) by $x(\lambda)$. Then

$$x(\lambda) = \operatorname{mid}(l, \alpha, u), \qquad (31)$$

where $\alpha = \alpha(\lambda)$ has components $\alpha_i = (c_i + \lambda a_i)/L$ $(i = 1, 2, \ldots, n)$ and $\operatorname{mid}(l, \alpha, u)$ is the componentwise operation that supplies the median of its three arguments; that is,

$$[x(\lambda)]_i = l_i + \alpha_i + u_i - \max(l_i, \alpha_i, u_i) - \min(l_i, \alpha_i, u_i), \quad i = 1, 2, \ldots, n. \qquad (32)$$

Define

$$\varphi(\lambda) = a^T x(\lambda) - b; \qquad (33)$$

then from [6] we know that $\varphi(\lambda)$ is a piecewise linear, continuous, and monotonically increasing function of $\lambda$. Furthermore, $x(\lambda^*)$ is the optimal solution of (29) if $\lambda^*$ is located such that $\varphi(\lambda^*) = 0$. Hence, the main task in solving (29) is to find a $\lambda^*$ such that $\varphi(\lambda^*) = 0$. For this purpose, we adopt the algorithm introduced in [6], which consists of a bracketing phase and a secant phase. Numerical experiments in the next section show that this algorithm achieves high efficiency.
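A minimal Python sketch of this procedure follows. The closed form (31)-(32) for $x(\lambda)$ and the definition (33) of $\varphi(\lambda)$ are as in the text; the bracketing and secant logic below is a simplified stand-in for the scheme of [6] (whose exact bracketing rules differ), with a bisection safeguard added, and the function names are ours.

```python
import numpy as np

def x_of_lambda(lam, c, a, l, u, L):
    """Minimizer x(lambda) of the box QP (30): mid(l, alpha, u), see (31)-(32)."""
    alpha = (c + lam * a) / L
    return np.minimum(np.maximum(alpha, l), u)

def prox_pl(z, G, d, a, b, l, u, L, tol=1e-8, max_iter=200):
    """Evaluate p_L(z) by finding a root of phi(lambda) = a^T x(lambda) - b."""
    c = L * z + d - G @ z
    phi = lambda lam: a @ x_of_lambda(lam, c, a, l, u, L) - b
    # Bracketing phase: expand until phi(lo) <= 0 <= phi(hi), using that
    # phi is piecewise linear and nondecreasing in lambda.
    lo, hi, step = 0.0, 0.0, 1.0
    while phi(lo) > 0.0:
        lo -= step
        step *= 2.0
    step = 1.0
    while phi(hi) < 0.0:
        hi += step
        step *= 2.0
    # Secant phase with a bisection safeguard on [lo, hi].
    for _ in range(max_iter):
        flo, fhi = phi(lo), phi(hi)
        if abs(flo) <= tol:
            return x_of_lambda(lo, c, a, l, u, L)
        if abs(fhi) <= tol:
            return x_of_lambda(hi, c, a, l, u, L)
        lam = hi - fhi * (hi - lo) / (fhi - flo) if fhi > flo else 0.5 * (lo + hi)
        if not (lo < lam < hi):        # fall back to bisection if secant stalls
            lam = 0.5 * (lo + hi)
        if phi(lam) < 0.0:
            lo = lam
        else:
            hi = lam
    return x_of_lambda(0.5 * (lo + hi), c, a, l, u, L)
```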
4. Numerical Experiments

As we know, one of the important applications of SVM is classification. Given a training set

$$D = \{(z_i, y_i), \; i = 1, \ldots, m, \; z_i \in R^n, \; y_i \in \{-1, 1\}\}, \qquad (34)$$

we need to find a hyperplane that separates the two classes of points. This can be done by solving the following convex quadratic programming problem:

$$\min \ \frac{1}{2} x^T Q x - e^T x$$
$$\text{s.t.} \ y^T x = 0, \quad 0 \le x \le Ce, \qquad (35)$$

where $y = [y_1, y_2, \ldots, y_m]^T$, $e \in R^m$ is the vector of all ones, and $C$ is a positive scalar. $Q$ is an $m \times m$ symmetric and positive semidefinite matrix with entries $Q_{ij} = y_i y_j K(z_i, z_j)$, $i, j = 1, 2, \ldots, m$, where $K: R^n \times R^n \to R$ is some kernel function.

In this section, we illustrate the performance of Algorithms 3 and 5 with Matlab 7.10 on a Windows XP Professional computer (Pentium R, 2.79 GHz). We conduct tests on SVM classification problems with random data sets and real-world data sets. The generation of the random data sets is based on four parameters $m$, $n$, $\beta$, and $B$, where $m$ is the number of samples and $n$ is the dimension of $z_i$ $(i = 1, \ldots, m)$. Each element of $z_i$ is randomly generated in $[\beta, B]$ $(i = 1, \ldots, m)$, and the $i$th entry of $y$ is set randomly to $-1$ or $1$ $(i = 1, \ldots, m)$. We have generated three random data sets with $\beta = -1$, $B = 1$, $n = 5$, and $m = 200$, $600$, and $1000$, respectively. The two real-world data sets are the UCI adult data set and the heart data set. For the UCI adult data set, the versions with $m = 1605$, $2265$, and $3185$ are considered. The Gaussian radial basis function

$$K(z_i, z_j) = \exp\left( -\frac{\|z_i - z_j\|^2}{\sigma^2} \right) \qquad (36)$$

is used in our tests.
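As an illustration of how a test problem is assembled, here is a hedged Python sketch (the function name `build_svm_qp` is ours, not from the paper) that forms $Q$ from (35)-(36) and the constant $L = \sum_i Q_{ii}$ used below:

```python
import numpy as np

def build_svm_qp(Z, y, sigma):
    """Q_ij = y_i y_j K(z_i, z_j) with the Gaussian kernel (36).

    Z has the samples z_i as rows; y holds the +/-1 labels.
    """
    sq = np.sum(Z * Z, axis=1)
    dist2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (Z @ Z.T), 0.0)
    K = np.exp(-dist2 / sigma ** 2)            # ||z_i - z_j||^2 / sigma^2
    Q = np.outer(y, y) * K
    L = np.trace(Q)                            # L = sum_i Q_ii, see below
    return Q, L
```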
The parameters $C$ in (35) and $\sigma$ in (36) are set to $(10, \sqrt{40})$, $(1, \sqrt{13})$, and $(1, 1000)$ for the random data sets, the heart data set, and the UCI adult data sets, respectively. For all test problems, the initial point is chosen as $x_0 = (0, \ldots, 0)^T$ and the constant $L = \sum_{i=1}^{m} Q_{ii}$. The termination rule for solving $p_L(\cdot)$ is $|\varphi(\lambda)| \le 10^{-8}$. For the test problems with the random data set ($m = 200$) and the heart data set, the algorithms are run for 8000 and 1000 steps, respectively. In order to make a visual comparison between Algorithms 3 and 5, we plot the values of $f(x_k)$ against the iterations in Figure 1. Furthermore, we compute

$$CR(j) = \sum_{i=1}^{s} nr(s \times (j - 1) + i), \quad j = 1, 2, \ldots, J, \qquad (37)$$

where $nr(s \times (j - 1) + i)$ denotes the number of calculations of $\varphi(\lambda)$ in step $s \times (j - 1) + i$. In Figure 2, we plot the values of $CR(j)$ $(j = 1, 2, \ldots, J)$ with $s = 100$, $J = 80$ and $s = 20$, $J = 50$, respectively. For the test problems with the remaining data sets, the termination condition is based on fulfilment of the KKT conditions within 0.001 (see [15]).
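Putting the sketches together, an end-to-end illustration of one random test instance might look as follows (all function names refer to the earlier sketches, not to the authors' code; the data generation follows the description in this section):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, beta, B = 200, 5, -1.0, 1.0
Z = rng.uniform(beta, B, size=(m, n))      # entries of z_i drawn from [beta, B]
y = rng.choice([-1.0, 1.0], size=m)        # random +/-1 labels
C, sigma = 10.0, np.sqrt(40.0)             # settings for the random data sets

Q, L = build_svm_qp(Z, y, sigma)
e = np.ones(m)                             # f(x) = x^T Q x / 2 - e^T x, see (35)
prox = lambda z: prox_pl(z, Q, e, y, 0.0, np.zeros(m), C * e, L)

x, iters = accelerated_proximal_gradient(np.zeros(m), prox)
print(iters, 0.5 * x @ Q @ x - e @ x)      # iteration count and final objective
```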
[Figure 1: Function value curves with iterations for Algorithms 3 and 5. (a) Random data set: function value versus step k (0 to 8000). (b) Heart data set: function value versus step k (0 to 1000).]
[Figure 2: Number of calculations of $\varphi(\lambda)$ per block of $s$ steps, $CR(j)$, for Algorithms 3 and 5. (a) Random data set: CR versus $j$ ($j = 1, \ldots, 80$). (b) Heart data set: CR versus $j$ ($j = 1, \ldots, 50$).]
The numerical results are shown in Tables 1 and 2, where the meanings of the indexes are as follows:

(i) m: the scale of the data set;
(ii) sec: the computing time (in seconds);
(iii) k: the number of iterations;
(iv) raver: the average number of calculations of $\varphi(\lambda)$ in each step.
From Figure 2, we can see that the total number of calculations of $\varphi(\lambda)$ over each block of 100 (respectively, 20) steps in Algorithm 3 is smaller than that in Algorithm 5. However, Figure 1 and Tables 1 and 2 show that Algorithm 5 needs far fewer iterations and much less time than Algorithm 3 to reach an approximate solution. For the subproblem, fewer than 4 iterations on average are needed to reach a very accurate solution, which shows that $p_L(\cdot)$ can be evaluated efficiently. Moreover, from the column raver in Tables 1 and 2, we can also see that the average number of calculations of $\varphi(\lambda)$ on the real-world data sets is slightly smaller than on the random data sets.
Table 1: Numerical results for Algorithm 3.

Problems   m      sec     k       raver
Random     600    19.97   19524   2.98
Random     1000   78.23   32153   2.99
Adult      1605   6.81    1064    1.34
Adult      2265   20.45   1515    1.50
Adult      3185   49.50   2103    1.50

Table 2: Numerical results for Algorithm 5.

Problems   m      sec     k       raver
Random     600    5.33    5021    3.68
Random     1000   19.05   7095    3.30
Adult      1605   1.41    88      2.82
Adult      2265   2.84    106     2.97
Adult      3185   6.30    125     2.98
5. Conclusion and Future Work

We have extended the accelerated proximal gradient method to the solution of singly linearly constrained quadratic programs with box constraints. The new algorithm is proved to be globally convergent. Numerical results also show that the new algorithm performs well on medium-scale quadratic programs. On the other hand, solving the subproblem by searching for a root of a piecewise linear continuous function is very cheap. Considering the good performance of the new Algorithm 5, we can apply it to the solution of the subproblems in decomposition methods for large-scale SVM problems. Moreover, a parallel version of this algorithm combined with the theory in [9, 10] is also a direction of our future research.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This work is supported by the National Natural Science Foundation of China (11101420, 71271204) and the Excellent Young Scientist Foundation of Shandong Province (2010BSE06047).
References

[1] L. Zanni, "An improved gradient projection-based decomposition technique for support vector machines," Computational Management Science, vol. 3, no. 2, pp. 131–145, 2006.
[2] S. Lucidi, L. Palagi, A. Risi, and M. Sciandrone, "A convergent decomposition algorithm for support vector machines," Computational Optimization and Applications, vol. 38, no. 2, pp. 217–234, 2007.
[3] T. Serafini and L. Zanni, "On the working set selection in gradient projection-based decomposition techniques for support vector machines," Optimization Methods and Software, vol. 20, no. 4-5, pp. 583–596, 2005.
[4] P. Tseng and S. Yun, "A coordinate gradient descent method for nonsmooth separable minimization," Mathematical Programming B, vol. 117, no. 1-2, pp. 387–423, 2009.
[5] T. Serafini, G. Zanghirati, and L. Zanni, "Gradient projection methods for large quadratic programs and applications in training support vector machines," Optimization Methods and Software, vol. 20, pp. 353–378, 2003.
[6] Y. Dai and R. Fletcher, "New algorithms for singly linearly constrained quadratic programs subject to lower and upper bounds," Mathematical Programming, vol. 106, no. 3, pp. 403–421, 2006.
[7] G. Zanghirati and L. Zanni, "A parallel solver for large quadratic programs in training support vector machines," Parallel Computing, vol. 29, no. 4, pp. 535–551, 2003.
[8] L. Zanni, T. Serafini, and G. Zanghirati, "Parallel software for training large scale support vector machines on multiprocessor systems," Journal of Machine Learning Research, vol. 7, pp. 1467–1492, 2006.
[9] H. Congying, W. Yongli, and H. Guoping, "On the convergence of asynchronous parallel algorithm for large-scale linearly constrained minimization problem," Applied Mathematics and Computation, vol. 2, pp. 434–441, 2009.
[10] Z. Fangying, H. Congying, and W. Yongli, "Parallel SSLE algorithm for large scale constrained optimization," Applied Mathematics and Computation, vol. 217, no. 12, pp. 5377–5384, 2011.
[11] Y. Nesterov, "Smooth minimization of non-smooth functions," Mathematical Programming, vol. 103, no. 1, pp. 127–152, 2005.
[12] A. Nemirovski, "Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems," SIAM Journal on Optimization, vol. 15, no. 1, pp. 229–251, 2005.
[13] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, pp. 183–202, 2009.
[14] P. Tseng, "On accelerated proximal gradient methods for convex-concave optimization," SIAM Journal on Optimization. In press.
[15] T. Joachims, "Making large-scale SVM learning practical," in Advances in Kernel Methods: Support Vector Learning, B. Scholkopf, C. J. C. Burges, and A. J. Smola, Eds., pp. 169–184, MIT Press, Cambridge, Mass, USA, 1998.