Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 328787, 7 pages, http://dx.doi.org/10.1155/2014/328787

Research Article

Modified Projection Algorithms for Solving the Split Equality Problems

Qiao-Li Dong and Songnian He
College of Science, Civil Aviation University of China, Tianjin 300300, China
Correspondence should be addressed to Qiao-Li Dong; [email protected]

Received 20 August 2013; Accepted 12 November 2013; Published 19 January 2014
Academic Editors: G. Han and X. Zhu

Copyright © 2014 Q.-L. Dong and S. He. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The split equality problem (SEP) has extraordinary utility and broad applicability in many areas of applied mathematics. Recently, Byrne and Moudafi (2013) proposed a CQ algorithm for solving it. In this paper, we propose a modification of the CQ algorithm which computes the stepsize adaptively and performs an additional projection step onto two half-spaces in each iteration. We further propose a relaxation scheme for the self-adaptive projection algorithm that uses projections onto half-spaces instead of projections onto the original convex sets, which is much more practical. Weak convergence results for both algorithms are established.

1. Introduction

The split equality problem (SEP) was introduced by Moudafi [1] and covers many situations of interest, for instance, in domain decomposition for PDEs, game theory, and intensity-modulated radiation therapy (IMRT) (see [2-7] for more details). Let H_1, H_2, and H_3 be real Hilbert spaces; let C ⊂ H_1 and Q ⊂ H_2 be two nonempty closed convex sets; let A : H_1 → H_3 and B : H_2 → H_3 be two bounded linear operators. The SEP can be formulated mathematically as the problem of finding x and y with the property
$$x \in C, \qquad y \in Q, \qquad \text{such that } Ax = By, \tag{1}$$
which allows asymmetric and partial relations between the variables x and y. If H_2 = H_3 and B = I, then the split equality problem (1) reduces to the split feasibility problem (originally introduced by Censor and Elfving [8]), which is to find x ∈ C with Ax ∈ Q.

For solving the SEP (1), Moudafi [1] introduced the following alternating CQ algorithm:
$$x_{k+1} = P_C\bigl(x_k - \gamma_k A^*(Ax_k - By_k)\bigr), \qquad y_{k+1} = P_Q\bigl(y_k + \gamma_k B^*(Ax_{k+1} - By_k)\bigr), \tag{2}$$

where γ_k ∈ (ε, min(1/λ_A, 1/λ_B) − ε), and λ_A and λ_B are the spectral radii of A*A and B*B, respectively. By studying the projected Landweber algorithm for the SEP (1) in a product space, Byrne and Moudafi [7] obtained the following CQ algorithm:
$$x_{k+1} = P_C\bigl(x_k - \gamma_k A^*(Ax_k - By_k)\bigr), \qquad y_{k+1} = P_Q\bigl(y_k + \gamma_k B^*(Ax_k - By_k)\bigr), \tag{3}$$

where γ_k, the stepsize at iteration k, is chosen in the interval (ε, 2/(λ_A + λ_B) − ε). It is easy to see that the alternating CQ algorithm (2) is sequential, whereas algorithm (3) is simultaneous. Observe that in algorithms (2) and (3) the determination of the stepsize γ_k depends on the operator (matrix) norms ‖A‖ and ‖B‖ (or on the largest eigenvalues of A*A and B*B). This means that, in order to implement the alternating CQ algorithm (2), one has first to compute (or at least estimate) the operator norms of A and B, which is in general not an easy task in practice. Considering this, Dong and He [9] proposed algorithms without prior knowledge of the operator norms.

In this paper, we first propose a modification of the CQ algorithm (3), inspired by Tseng [10] (see also [11]). Our modified projection method computes the stepsize adaptively and performs an additional projection step onto two half-spaces, X_k ⊂ H_1 and Y_k ⊂ H_2, in each iteration.


We then give a relaxation scheme for this modification by replacing the orthogonal projections onto the sets C and Q by projections onto two half-spaces C_k and Q_k, respectively. Since projections onto half-spaces can be computed directly, the relaxed scheme is more practical and easily implemented.

The rest of this paper is organized as follows. In the next section, some useful facts and tools are given. The weak convergence theorem for the proposed self-adaptive projection algorithm is obtained in Section 3. In Section 4, we consider a relaxed self-adaptive projection algorithm, where the sets C and Q are level sets of convex functions.
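For readers who want to experiment, the following is a minimal finite-dimensional sketch of the simultaneous CQ iteration (3). The matrices A and B, the projection routines proj_C and proj_Q, the fixed stepsize, and the toy sets below are illustrative assumptions made for this sketch; they are not part of the paper.

```python
import numpy as np

def cq_iterate(A, B, proj_C, proj_Q, x0, y0, n_iter=500):
    """Simultaneous CQ iteration (3) for the SEP (illustrative sketch)."""
    # lambda_A, lambda_B: spectral radii of A^T A and B^T B; any fixed stepsize
    # strictly inside (0, 2/(lambda_A + lambda_B)) is admissible.
    lam_A = np.linalg.norm(A, 2) ** 2
    lam_B = np.linalg.norm(B, 2) ** 2
    gamma = 1.0 / (lam_A + lam_B)

    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(n_iter):
        r = A @ x - B @ y                    # residual Ax_k - By_k
        x = proj_C(x - gamma * A.T @ r)      # x_{k+1} = P_C(x_k - gamma A^*(Ax_k - By_k))
        y = proj_Q(y + gamma * B.T @ r)      # y_{k+1} = P_Q(y_k + gamma B^*(Ax_k - By_k))
    return x, y

# Toy usage: C = nonnegative orthant, Q = unit ball (both have closed-form projections).
proj_C = lambda u: np.maximum(u, 0.0)
proj_Q = lambda u: u if np.linalg.norm(u) <= 1.0 else u / np.linalg.norm(u)
A = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, -1.0]])
B = np.eye(3)
x, y = cq_iterate(A, B, proj_C, proj_Q, np.ones(2), np.zeros(3))
```

The alternating variant (2) would differ only in that the y-update uses the freshly computed x_{k+1}. Note that the stepsize above requires the spectral norms of A and B, which is precisely the practical drawback addressed by the self-adaptive rule of Section 3.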

2. Preliminaries

In this section, we review some definitions and lemmas which will be used in this paper. Let H be a Hilbert space and let I be the identity operator on H. If f : H → R is a differentiable functional, then ∇f denotes the gradient of f. If f : H → R is a subdifferentiable functional, then ∂f denotes the subdifferential of f. Given a sequence (x_k, y_k) in H_1 × H_2, ω_w(x_k, y_k) stands for the set of its cluster points in the weak topology. "x_k → x" (resp., "x_k ⇀ x") denotes the strong (resp., weak) convergence of (x_k) to x.

Definition 1. A sequence (x_k) is said to be asymptotically regular if
$$\lim_{k\to\infty} \|x_{k+1} - x_k\| = 0. \tag{4}$$

Definition 2. The graph of an operator T is said to be weakly-strongly closed if, whenever y_n ∈ T(x_n) with (y_n) converging strongly to y and (x_n) converging weakly to x, it follows that y ∈ T(x).

The next lemma is well known (see [10, 12]) and shows that maximal monotone operators are weakly-strongly closed.

Lemma 3. Let H be a Hilbert space and let T : H ⇉ H be a maximal monotone mapping. If (x_k) is a sequence in H bounded in norm and converging weakly to some x, and (w_k) is a sequence in H converging strongly to some w with w_k ∈ T(x_k) for all k, then w ∈ T(x).

The projection is an important tool for our work in this paper. Let Ω be a closed convex subset of a real Hilbert space H. Recall that the (nearest point or metric) projection from H onto Ω, denoted by P_Ω, is defined in such a way that, for each x ∈ H, P_Ω x is the unique point in Ω such that
$$\|x - P_\Omega x\| = \min\{\|x - z\| : z \in \Omega\}. \tag{5}$$

The following two lemmas are useful characterizations of projections.

Lemma 4. Given x ∈ H and z ∈ Ω, then z = P_Ω x if and only if
$$\langle x - z, y - z\rangle \le 0 \quad \text{for all } y \in \Omega. \tag{6}$$

Lemma 5. For any x, y ∈ H and z ∈ Ω, the following hold:
(i) ‖P_Ω(x) − P_Ω(y)‖² ≤ ⟨P_Ω(x) − P_Ω(y), x − y⟩;
(ii) ‖P_Ω(x) − z‖² ≤ ‖x − z‖² − ‖P_Ω(x) − x‖².
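As a small illustration (not taken from the paper), projections onto half-spaces, the kind of sets used in Sections 3 and 4, have a closed form, and the characterization in Lemma 4 can be checked numerically. The helper name proj_halfspace and the random test data below are assumptions made for this sketch.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Metric projection of x onto the half-space {u : <a, u> <= b}."""
    s = a @ x - b
    return x.copy() if s <= 0 else x - (s / (a @ a)) * a

# Numerical check of Lemma 4: <x - Px, y - Px> <= 0 for every y in the set.
rng = np.random.default_rng(0)
a, b = rng.normal(size=3), 0.5
x = rng.normal(size=3)
Px = proj_halfspace(x, a, b)
for _ in range(100):
    y = proj_halfspace(rng.normal(size=3), a, b)  # an arbitrary point of the half-space
    assert (x - Px) @ (y - Px) <= 1e-10
```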

Throughout this paper, assume that the split equality problem (1) is consistent, and denote by Γ its solution set; that is,
$$\Gamma = \{(x, y) \in C \times Q : Ax = By\}. \tag{7}$$
Then Γ is closed, convex, and nonempty. The split equality problem (1) can be written as the following minimization problem:
$$\min_{x \in H_1,\; y \in H_2} \; \iota_C(x) + \iota_Q(y) + \tfrac{1}{2}\|Ax - By\|^2, \tag{8}$$
where ι_C(x) is the indicator function of the set C, defined by
$$\iota_C(x) = \begin{cases} 0, & x \in C, \\ +\infty, & \text{otherwise.} \end{cases} \tag{9}$$

By writing down the optimality conditions, we obtain
$$0 \in \nabla_x\Bigl\{\tfrac{1}{2}\|Ax-By\|^2\Bigr\} + \partial\iota_C(x) = A^*(Ax-By) + N_C(x),$$
$$0 \in \nabla_y\Bigl\{\tfrac{1}{2}\|Ax-By\|^2\Bigr\} + \partial\iota_Q(y) = -B^*(Ax-By) + N_Q(y), \tag{10}$$
where N_C(x) and N_Q(y) denote the normal cones to C and Q at x and y, respectively. This implies, for γ > 0 and β > 0, that
$$x - \gamma A^*(Ax-By) \in x + \gamma N_C(x), \qquad y + \beta B^*(Ax-By) \in y + \beta N_Q(y), \tag{11}$$
which in turn leads to the fixed point formulation
$$x = (I + \gamma N_C)^{-1}\bigl(x - \gamma A^*(Ax-By)\bigr), \qquad y = (I + \beta N_Q)^{-1}\bigl(y + \beta B^*(Ax-By)\bigr). \tag{12}$$
Since (I + γN_C)^{-1} = P_C and (I + βN_Q)^{-1} = P_Q, we have
$$x = P_C\bigl(x - \gamma A^*(Ax-By)\bigr), \qquad y = P_Q\bigl(y + \beta B^*(Ax-By)\bigr). \tag{13}$$

The following proposition shows that the solutions of the fixed point equations (13) are exactly the solutions of the SEP (1).

Proposition 6 (see [9]). Given x* ∈ H_1 and y* ∈ H_2, the pair (x*, y*) solves the SEP (1) if and only if (x*, y*) solves the fixed point equations (13).
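A toy numerical check of one direction of Proposition 6 (a solution of the SEP is a fixed point of (13)) might look as follows. The instance, with C the nonnegative orthant and Q the whole space, is hypothetical and chosen so that a solution is known by construction.

```python
import numpy as np

# Hypothetical instance: C = nonnegative orthant in R^2, Q = R^3 (so P_Q = identity),
# and a pair (x*, y*) that is feasible by construction.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 2))
B = np.eye(3)
x_star = np.array([1.0, 2.0])          # lies in C
y_star = A @ x_star                    # then A x* = B y*, and y* lies in Q

proj_C = lambda u: np.maximum(u, 0.0)  # projection onto the nonnegative orthant
proj_Q = lambda u: u                   # projection onto Q = R^3

gamma, beta = 0.7, 0.4
lhs_x = proj_C(x_star - gamma * A.T @ (A @ x_star - B @ y_star))
lhs_y = proj_Q(y_star + beta * B.T @ (A @ x_star - B @ y_star))
assert np.allclose(lhs_x, x_star) and np.allclose(lhs_y, y_star)  # fixed point of (13)
```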

3. A Self-Adaptive Projection Algorithm

Based on Proposition 6, we construct a self-adaptive projection algorithm for the fixed point equations (13) and prove the weak convergence of the proposed algorithm.


Define the function F : H_1 × H_2 → H_1 by
$$F(x, y) = A^*(Ax - By) \tag{14}$$
and the function G : H_1 × H_2 → H_2 by
$$G(x, y) = B^*(By - Ax). \tag{15}$$

The self-adaptive projection algorithm is defined as follows.

Algorithm 7. Given constants σ_0 > 0, β ∈ (0, 1), θ ∈ (0, 1), and ρ ∈ (0, 1), let x_0 ∈ H_1 and y_0 ∈ H_2 be arbitrary. For k = 0, 1, 2, ..., compute
$$u_k = P_C\bigl(x_k - \gamma_k F(x_k, y_k)\bigr), \qquad v_k = P_Q\bigl(y_k - \gamma_k G(x_k, y_k)\bigr), \tag{16}$$
where γ_k is chosen to be the largest γ ∈ {σ_k, σ_k β, σ_k β², ...} satisfying
$$\|F(x_k,y_k)-F(u_k,v_k)\|^2 + \|G(x_k,y_k)-G(u_k,v_k)\|^2 \le \theta^2\,\frac{\|x_k-u_k\|^2 + \|y_k-v_k\|^2}{\gamma^2}. \tag{17}$$
Construct the half-spaces X_k and Y_k, whose bounding hyperplanes support C and Q at u_k and v_k, respectively:
$$X_k := \{u \in H_1 \mid \langle x_k-\gamma_k F(x_k,y_k)-u_k,\; u-u_k\rangle \le 0\}, \qquad Y_k := \{v \in H_2 \mid \langle y_k-\gamma_k G(x_k,y_k)-v_k,\; v-v_k\rangle \le 0\}. \tag{18}$$
Set
$$x_{k+1} = P_{X_k}\bigl(u_k - \gamma_k(F(u_k,v_k)-F(x_k,y_k))\bigr), \qquad y_{k+1} = P_{Y_k}\bigl(v_k - \gamma_k(G(u_k,v_k)-G(x_k,y_k))\bigr). \tag{19}$$
If
$$\|F(x_{k+1},y_{k+1})-F(x_k,y_k)\|^2 + \|G(x_{k+1},y_{k+1})-G(x_k,y_k)\|^2 \le \rho^2\,\frac{\|x_{k+1}-x_k\|^2 + \|y_{k+1}-y_k\|^2}{\gamma_k^2}, \tag{20}$$
then set σ_k = σ_0; otherwise, set σ_k = γ_k.

In this algorithm, step (19) involves projections onto the half-spaces X_k (resp., Y_k) rather than onto the set C (resp., Q), and projections onto X_k (resp., Y_k) are very simple to compute. It is easy to show that C ⊂ X_k and Q ⊂ Y_k. The last step is used to reduce the number of inner iterations needed to search for the stepsize γ_k.
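To make the iteration concrete, the following is a rough sketch of one pass of Algorithm 7 in finite dimensions. It assumes that A and B are NumPy matrices and that proj_C and proj_Q are user-supplied exact projections; the parameter defaults and helper names are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def F(A, B, x, y):  # F(x, y) = A^*(Ax - By), cf. (14)
    return A.T @ (A @ x - B @ y)

def G(A, B, x, y):  # G(x, y) = B^*(By - Ax), cf. (15)
    return B.T @ (B @ y - A @ x)

def proj_halfspace(p, a, b):
    """Projection onto the half-space {u : <a, u> <= b} (p is returned if feasible)."""
    s = a @ p - b
    return p if s <= 0 else p - (s / (a @ a)) * a

def algorithm7_step(A, B, proj_C, proj_Q, x, y, sigma, sigma0,
                    beta=0.5, theta=0.9, rho=0.9):
    """One iteration of Algorithm 7 (illustrative sketch)."""
    Fxy, Gxy = F(A, B, x, y), G(A, B, x, y)

    # Armijo-type search (17): largest gamma in {sigma, sigma*beta, sigma*beta^2, ...}.
    gamma = sigma
    while True:
        u, v = proj_C(x - gamma * Fxy), proj_Q(y - gamma * Gxy)
        lhs = (np.linalg.norm(Fxy - F(A, B, u, v)) ** 2
               + np.linalg.norm(Gxy - G(A, B, u, v)) ** 2)
        rhs = theta ** 2 * (np.linalg.norm(x - u) ** 2
                            + np.linalg.norm(y - v) ** 2) / gamma ** 2
        if lhs <= rhs:          # the search terminates by Lemma 8
            break
        gamma *= beta

    # Half-spaces X_k and Y_k supporting C and Q at u and v, cf. (18).
    ax, ay = x - gamma * Fxy - u, y - gamma * Gxy - v

    # Additional projection step (19) onto the half-spaces.
    x_new = proj_halfspace(u - gamma * (F(A, B, u, v) - Fxy), ax, ax @ u)
    y_new = proj_halfspace(v - gamma * (G(A, B, u, v) - Gxy), ay, ay @ v)

    # Stepsize reset rule (20): restart from sigma_0 when the iterates change slowly.
    lhs2 = (np.linalg.norm(F(A, B, x_new, y_new) - Fxy) ** 2
            + np.linalg.norm(G(A, B, x_new, y_new) - Gxy) ** 2)
    rhs2 = rho ** 2 * (np.linalg.norm(x_new - x) ** 2
                       + np.linalg.norm(y_new - y) ** 2) / gamma ** 2
    sigma_next = sigma0 if lhs2 <= rhs2 else gamma
    return x_new, y_new, sigma_next
```

Note that each half-space projection in step (19) reduces to a single inner product and one vector update, which is what makes the extra projection step cheap.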

Lemma 8. The search rule (17) is well defined. Moreover, γ̲ ≤ γ_k ≤ σ_0, where
$$\underline{\gamma} = \min\left\{\sigma_0,\; \frac{\beta\theta}{\|A\|\sqrt{2(\|A\|^2+\|B\|^2)}},\; \frac{\beta\theta}{\|B\|\sqrt{2(\|A\|^2+\|B\|^2)}}\right\}. \tag{21}$$

Proof. Obviously, γ_k ≤ σ_0. If γ_k = σ_0, the lemma is proved; otherwise, if γ_k < σ_0, then by the search rule (17) the stepsize γ_k/β must violate inequality (17); that is,
$$\|F(x_k,y_k)-F(u_k,v_k)\|^2 + \|G(x_k,y_k)-G(u_k,v_k)\|^2 \ge \theta^2\,\frac{\|x_k-u_k\|^2 + \|y_k-v_k\|^2}{\gamma_k^2/\beta^2}. \tag{22}$$
On the other hand, we have
$$\begin{aligned}
\|F(x_k,y_k)-F(u_k,v_k)\|^2 &+ \|G(x_k,y_k)-G(u_k,v_k)\|^2 \\
&= \|A^*(Ax_k-By_k)-A^*(Au_k-Bv_k)\|^2 + \|B^*(By_k-Ax_k)-B^*(Bv_k-Au_k)\|^2 \\
&\le (\|A\|^2+\|B\|^2)\bigl(\|A\|\,\|x_k-u_k\| + \|B\|\,\|y_k-v_k\|\bigr)^2 \\
&\le 2(\|A\|^2+\|B\|^2)\bigl(\|A\|^2\|x_k-u_k\|^2 + \|B\|^2\|y_k-v_k\|^2\bigr) \\
&\le 2(\|A\|^2+\|B\|^2)\max\{\|A\|^2,\|B\|^2\}\bigl(\|x_k-u_k\|^2 + \|y_k-v_k\|^2\bigr).
\end{aligned} \tag{23}$$
Consequently, combining (22) and (23), we get
$$\gamma_k \ge \min\left\{\sigma_0,\; \frac{\beta\theta}{\|A\|\sqrt{2(\|A\|^2+\|B\|^2)}},\; \frac{\beta\theta}{\|B\|\sqrt{2(\|A\|^2+\|B\|^2)}}\right\}, \tag{24}$$
which completes the proof.

Theorem 9. Let (x_k, y_k) be the sequence generated by Algorithm 7 and let X and Y be nonempty closed convex sets in H_1 and H_2 with simple structures, respectively. If (X × Y) ∩ Γ is nonempty, then (x_k, y_k) converges weakly to a solution of the SEP (1).

Proof. Let (x*, y*) ∈ Γ; that is, x* ∈ C, y* ∈ Q, and Ax* = By*. Define s_k = u_k − γ_k(F(u_k, v_k) − F(x_k, y_k)); then we have
$$\begin{aligned}
\|x_{k+1}-x^*\|^2 &\le \|s_k-x^*\|^2 \\
&= \|s_k-u_k+u_k-x_k+x_k-x^*\|^2 \\
&= \|s_k-u_k\|^2 + \|u_k-x_k\|^2 + \|x_k-x^*\|^2 + 2\langle s_k-u_k,\, u_k-x^*\rangle + 2\langle u_k-x_k,\, x_k-x^*\rangle \\
&= \|x_k-x^*\|^2 + \|s_k-u_k\|^2 - \|u_k-x_k\|^2 + 2\langle s_k-x_k,\, u_k-x^*\rangle \\
&= \|x_k-x^*\|^2 + \gamma_k^2\|F(u_k,v_k)-F(x_k,y_k)\|^2 - \|u_k-x_k\|^2 + 2\langle s_k-x_k,\, u_k-x^*\rangle,
\end{aligned} \tag{25}$$
where the first inequality follows from the nonexpansivity of the projection mapping P_{X_k} together with the fact that x* ∈ C ⊂ X_k. Similarly, defining t_k = v_k − γ_k(G(u_k, v_k) − G(x_k, y_k)), we get
$$\|y_{k+1}-y^*\|^2 \le \|y_k-y^*\|^2 + \gamma_k^2\|G(u_k,v_k)-G(x_k,y_k)\|^2 - \|v_k-y_k\|^2 + 2\langle t_k-y_k,\, v_k-y^*\rangle. \tag{26}$$
Adding the above inequalities, we obtain
$$\begin{aligned}
\|x_{k+1}-x^*\|^2 &+ \|y_{k+1}-y^*\|^2 \\
&\le \|x_k-x^*\|^2 + \|y_k-y^*\|^2 + \gamma_k^2\bigl(\|F(u_k,v_k)-F(x_k,y_k)\|^2 + \|G(u_k,v_k)-G(x_k,y_k)\|^2\bigr) \\
&\qquad - \|u_k-x_k\|^2 - \|v_k-y_k\|^2 + 2\langle s_k-x_k,\, u_k-x^*\rangle + 2\langle t_k-y_k,\, v_k-y^*\rangle \\
&\le \|x_k-x^*\|^2 + \|y_k-y^*\|^2 - (1-\theta^2)\bigl(\|u_k-x_k\|^2 + \|v_k-y_k\|^2\bigr) \\
&\qquad + 2\langle s_k-x_k,\, u_k-x^*\rangle + 2\langle t_k-y_k,\, v_k-y^*\rangle \\
&= \|x_k-x^*\|^2 + \|y_k-y^*\|^2 - (1-\theta^2)\bigl(\|u_k-x_k\|^2 + \|v_k-y_k\|^2\bigr) \\
&\qquad + 2\langle u_k-\gamma_k F(u_k,v_k)+\gamma_k F(x_k,y_k)-x_k,\, u_k-x^*\rangle + 2\langle v_k-\gamma_k G(u_k,v_k)+\gamma_k G(x_k,y_k)-y_k,\, v_k-y^*\rangle \\
&\le \|x_k-x^*\|^2 + \|y_k-y^*\|^2 - (1-\theta^2)\bigl(\|u_k-x_k\|^2 + \|v_k-y_k\|^2\bigr) \\
&\qquad - 2\gamma_k\langle F(u_k,v_k),\, u_k-x^*\rangle - 2\gamma_k\langle G(u_k,v_k),\, v_k-y^*\rangle,
\end{aligned} \tag{27}$$
where the equality follows from s_k = u_k − γ_k(F(u_k, v_k) − F(x_k, y_k)) and t_k = v_k − γ_k(G(u_k, v_k) − G(x_k, y_k)), the second inequality follows from (17), and the last follows from (16), Lemma 4, and x* ∈ C, y* ∈ Q. Using this fact and Ax* = By*, we have
$$\begin{aligned}
\langle F(u_k,v_k),\, u_k-x^*\rangle + \langle G(u_k,v_k),\, v_k-y^*\rangle
&= \langle A^*(Au_k-Bv_k),\, u_k-x^*\rangle + \langle B^*(Bv_k-Au_k),\, v_k-y^*\rangle \\
&= \langle Au_k-Bv_k,\, Au_k-Ax^*\rangle + \langle Bv_k-Au_k,\, Bv_k-By^*\rangle \\
&= \|Au_k-Bv_k\|^2,
\end{aligned} \tag{28}$$
which with (27) implies that
$$\|x_{k+1}-x^*\|^2 + \|y_{k+1}-y^*\|^2 \le \|x_k-x^*\|^2 + \|y_k-y^*\|^2 - (1-\theta^2)\bigl(\|u_k-x_k\|^2 + \|v_k-y_k\|^2\bigr) - 2\gamma_k\|Au_k-Bv_k\|^2. \tag{29}$$
Consequently, the sequence Γ_k(x*, y*) := ‖x_k − x*‖² + ‖y_k − y*‖² is decreasing and bounded below by 0, and thus converges to some finite limit, say l(x*, y*). Moreover, (x_k) and (y_k) are bounded. This implies that
$$\lim_{k\to\infty}\|u_k-x_k\| = 0, \qquad \lim_{k\to\infty}\|v_k-y_k\| = 0, \qquad \lim_{k\to\infty}\|Au_k-Bv_k\| = 0. \tag{30}$$
From (30), we get
$$\lim_{k\to\infty}\|Ax_k-By_k\| = 0. \tag{31}$$
Let (x̂, ŷ) ∈ ω_w(x_k, y_k); then there exist two subsequences (x_{k_l}) and (y_{k_l}) of (x_k) and (y_k) which converge weakly to x̂ and ŷ, respectively. We will show that (x̂, ŷ) is a solution of the SEP (1). The weak convergence of (Ax_{k_l} − By_{k_l}) to Ax̂ − Bŷ and the lower semicontinuity of the squared norm imply that
$$\|A\hat{x}-B\hat{y}\| \le \liminf_{l\to\infty}\|Ax_{k_l}-By_{k_l}\| = 0; \tag{32}$$
that is, Ax̂ = Bŷ. By noting that the two equalities in (16) can be rewritten as
$$\frac{x_{k_l}-u_{k_l}}{\gamma_{k_l}} - A^*(Au_{k_l}-Bv_{k_l}) \in N_C(u_{k_l}), \qquad \frac{y_{k_l}-v_{k_l}}{\gamma_{k_l}} - B^*(Bv_{k_l}-Au_{k_l}) \in N_Q(v_{k_l}), \tag{33}$$
and that the graphs of the maximal monotone operators N_C and N_Q are weakly-strongly closed, by passing to the limit in the last inclusions we obtain, from (30), that
$$\hat{x} \in C, \qquad \hat{y} \in Q. \tag{34}$$
Hence (x̂, ŷ) ∈ Γ.


To show the uniqueness of the weak cluster points, we will use the same trick as in the celebrated Opial lemma. Indeed, let (x̄, ȳ) be another weak cluster point of (x_k, y_k). By passing to the limit in the relation
$$\Gamma_k(\hat{x},\hat{y}) = \Gamma_k(\bar{x},\bar{y}) + \|\hat{x}-\bar{x}\|^2 + \|\hat{y}-\bar{y}\|^2 + 2\langle x_k-\bar{x},\, \bar{x}-\hat{x}\rangle + 2\langle y_k-\bar{y},\, \bar{y}-\hat{y}\rangle, \tag{35}$$
we obtain
$$l(\hat{x},\hat{y}) = l(\bar{x},\bar{y}) + \|\hat{x}-\bar{x}\|^2 + \|\hat{y}-\bar{y}\|^2. \tag{36}$$
Reversing the roles of (x̂, ŷ) and (x̄, ȳ), we also have
$$l(\bar{x},\bar{y}) = l(\hat{x},\hat{y}) + \|\hat{x}-\bar{x}\|^2 + \|\hat{y}-\bar{y}\|^2. \tag{37}$$
By adding the two last equalities, we obtain
$$\|\hat{x}-\bar{x}\|^2 + \|\hat{y}-\bar{y}\|^2 = 0. \tag{38}$$
Hence (x̂, ŷ) = (x̄, ȳ); this implies that the whole sequence (x_k, y_k) weakly converges to a solution of the SEP (1), which completes the proof.

4. A Relaxed Self-Adaptive Projection Algorithm

In Algorithm 7, we must calculate the orthogonal projections P_C and P_Q many times, even within one iteration step, so they have to be assumed easy to compute; however, sometimes it is difficult or even impossible to compute them. In this case, one usually turns to relaxed methods [13, 14], which were introduced by Fukushima [15] and are more easily implemented. For solving the SEP (1), Moudafi [16] followed the ideas of Fukushima [15] and introduced a relaxed alternating CQ algorithm, which depends on the norms ‖A‖ and ‖B‖. In this section, we propose a relaxed scheme for the self-adaptive Algorithm 7.

Assume that the convex sets C and Q are given by
$$C = \{x \in H_1 : c(x) \le 0\}, \qquad Q = \{y \in H_2 : q(y) \le 0\}, \tag{39}$$
where c : H_1 → R and q : H_2 → R are convex functions which are subdifferentiable on C and Q, respectively, and we assume that their subdifferentials are bounded on bounded sets. In the kth iteration, let (C_k) and (Q_k) be two sequences of closed convex sets defined by
$$C_k = \{x \in H_1 : c(x_k) + \langle \xi_k,\, x-x_k\rangle \le 0\}, \tag{40}$$
where ξ_k ∈ ∂c(x_k), and
$$Q_k = \{y \in H_2 : q(y_k) + \langle \eta_k,\, y-y_k\rangle \le 0\}, \tag{41}$$
where η_k ∈ ∂q(y_k). It is easy to see that C_k ⊃ C and Q_k ⊃ Q for every k ≥ 0.

Algorithm 10. Given constants σ_0 > 0, β ∈ (0, 1), θ ∈ (0, 1), and ρ ∈ (0, 1), let x_0 ∈ H_1 and y_0 ∈ H_2 be arbitrary. For k = 0, 1, 2, ..., compute
$$u_k = P_{C_k}\bigl(x_k - \gamma_k F(x_k,y_k)\bigr), \qquad v_k = P_{Q_k}\bigl(y_k - \gamma_k G(x_k,y_k)\bigr), \tag{42}$$
where γ_k is chosen to be the largest γ ∈ {σ_k, σ_k β, σ_k β², ...} satisfying
$$\|F(x_k,y_k)-F(u_k,v_k)\|^2 + \|G(x_k,y_k)-G(u_k,v_k)\|^2 \le \theta^2\,\frac{\|x_k-u_k\|^2 + \|y_k-v_k\|^2}{\gamma^2}. \tag{43}$$
Construct the half-spaces X_k and Y_k, whose bounding hyperplanes support C_k and Q_k at u_k and v_k, respectively:
$$X_k := \{u \in H_1 \mid \langle x_k-\gamma_k F(x_k,y_k)-u_k,\; u-u_k\rangle \le 0\}, \qquad Y_k := \{v \in H_2 \mid \langle y_k-\gamma_k G(x_k,y_k)-v_k,\; v-v_k\rangle \le 0\}. \tag{44}$$
Set
$$x_{k+1} = P_{X_k}\bigl(u_k - \gamma_k(F(u_k,v_k)-F(x_k,y_k))\bigr), \qquad y_{k+1} = P_{Y_k}\bigl(v_k - \gamma_k(G(u_k,v_k)-G(x_k,y_k))\bigr). \tag{45}$$
If
$$\|F(x_{k+1},y_{k+1})-F(x_k,y_k)\|^2 + \|G(x_{k+1},y_{k+1})-G(x_k,y_k)\|^2 \le \rho^2\,\frac{\|x_{k+1}-x_k\|^2 + \|y_{k+1}-y_k\|^2}{\gamma_k^2}, \tag{46}$$
then set σ_k = σ_0; otherwise, set σ_k = γ_k.
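Compared with the sketch given after Algorithm 7, the only change needed to obtain Algorithm 10 is to replace the exact projections onto C and Q by the closed-form projections onto the relaxed half-spaces C_k and Q_k of (40)-(41). A minimal sketch of that substitution, assuming c and q are given together with routines returning one subgradient, is shown below; the function names and the toy example are assumptions made for illustration.

```python
import numpy as np

def proj_halfspace(p, a, b):
    """Projection onto {u : <a, u> <= b}; p is returned when feasible or a = 0."""
    s = a @ p - b
    if s <= 0 or a @ a == 0:
        return p
    return p - (s / (a @ a)) * a

def proj_relaxed(p, level_value, subgrad, anchor):
    """Projection onto {x : level_value + <subgrad, x - anchor> <= 0}, cf. (40)-(41):
    a half-space containing the level set {c <= 0} linearized at the anchor point."""
    # Rewrite the constraint as <subgrad, x> <= <subgrad, anchor> - level_value.
    return proj_halfspace(p, subgrad, subgrad @ anchor - level_value)

# Toy example with C = {x : c(x) <= 0}, c(x) = ||x||^2 - 1 (the unit ball), at x_k = (2, 0):
c = lambda x: x @ x - 1.0
grad_c = lambda x: 2.0 * x                 # a subgradient (here the gradient) of c
x_k = np.array([2.0, 0.0])
p = proj_relaxed(np.array([3.0, 1.0]), c(x_k), grad_c(x_k), x_k)   # lands on C_k
```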

Following the proof of Lemma 8, we easily obtain the following.

Lemma 11. The search rule (43) is well defined. Moreover, γ̲ ≤ γ_k ≤ σ_0, where
$$\underline{\gamma} = \min\left\{\sigma_0,\; \frac{\beta\theta}{\|A\|\sqrt{2(\|A\|^2+\|B\|^2)}},\; \frac{\beta\theta}{\|B\|\sqrt{2(\|A\|^2+\|B\|^2)}}\right\}. \tag{47}$$

Theorem 12. Let (x_k, y_k) be the sequence generated by Algorithm 10 and let X and Y be nonempty closed convex sets in H_1 and H_2 with simple structures, respectively. If (X × Y) ∩ Γ is nonempty, then (x_k, y_k) converges weakly to a solution of the SEP (1).


Proof. Let (x*, y*) ∈ Γ; that is, x* ∈ C, y* ∈ Q, and Ax* = By*. Following the proof of Theorem 9, we obtain
$$\|x_{k+1}-x^*\|^2 + \|y_{k+1}-y^*\|^2 \le \|x_k-x^*\|^2 + \|y_k-y^*\|^2 - (1-\theta^2)\bigl(\|u_k-x_k\|^2 + \|v_k-y_k\|^2\bigr) - 2\gamma_k\|Au_k-Bv_k\|^2. \tag{48}$$
Let Γ_k(x*, y*) := ‖x_k − x*‖² + ‖y_k − y*‖². Then the sequence Γ_k(x*, y*) is decreasing and bounded below by 0, and thus converges to some finite limit, say l(x*, y*). Moreover, (x_k) and (y_k) are bounded. This implies that
$$\lim_{k\to\infty}\|u_k-x_k\| = 0, \qquad \lim_{k\to\infty}\|v_k-y_k\| = 0, \tag{49}$$
$$\lim_{k\to\infty}\|Au_k-Bv_k\| = 0. \tag{50}$$
Therefore, we have
$$\lim_{k\to\infty}\|Ax_k-By_k\| = 0. \tag{51}$$
Next we show that the sequence (x_k, y_k) generated by Algorithm 10 weakly converges to a solution of the SEP (1). Let (x̂, ŷ) ∈ ω_w(x_k, y_k); then there exist two subsequences (x_{k_l}) and (y_{k_l}) of (x_k) and (y_k) which converge weakly to x̂ and ŷ, respectively. The weak convergence of (Ax_{k_l} − By_{k_l}) to Ax̂ − Bŷ and the lower semicontinuity of the squared norm imply that
$$\|A\hat{x}-B\hat{y}\| \le \liminf_{l\to\infty}\|Ax_{k_l}-By_{k_l}\| = 0; \tag{52}$$
that is, Ax̂ = Bŷ.

Since u_{k_l} ∈ C_{k_l}, we have
$$c(x_{k_l}) + \langle \xi_{k_l},\, u_{k_l}-x_{k_l}\rangle \le 0. \tag{53}$$
Thus
$$c(x_{k_l}) \le -\langle \xi_{k_l},\, u_{k_l}-x_{k_l}\rangle \le \xi\,\|u_{k_l}-x_{k_l}\|, \tag{54}$$
where ξ satisfies ‖ξ_k‖ ≤ ξ for all k ∈ N. The lower semicontinuity of c and the first formula of (49) lead to
$$c(\hat{x}) \le \liminf_{l\to\infty} c(x_{k_l}) \le 0, \tag{55}$$
and therefore x̂ ∈ C. Likewise, since v_{k_l} ∈ Q_{k_l}, we have
$$q(y_{k_l}) + \langle \eta_{k_l},\, v_{k_l}-y_{k_l}\rangle \le 0. \tag{56}$$
Thus
$$q(y_{k_l}) \le -\langle \eta_{k_l},\, v_{k_l}-y_{k_l}\rangle \le \eta\,\|v_{k_l}-y_{k_l}\|, \tag{57}$$
where η satisfies ‖η_k‖ ≤ η for all k ∈ N. Again, the lower semicontinuity of q and the second formula of (49) lead to
$$q(\hat{y}) \le \liminf_{l\to\infty} q(y_{k_l}) \le 0, \tag{58}$$
and therefore ŷ ∈ Q. Hence (x̂, ŷ) ∈ Γ.

Following the same argument as in the proof of Theorem 9, we can show the uniqueness of the weak cluster points; hence the whole sequence (x_k, y_k) weakly converges to a solution of the SEP (1), which completes the proof.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their thanks to Abdellatif Moudafi for helpful correspondence. The work was supported by the National Natural Science Foundation of China (no. 11201476) and the Fundamental Research Funds for the Central Universities (no. 3122013D017).

References

[1] A. Moudafi, "Alternating CQ-algorithm for convex feasibility and split fixed-point problems," Journal of Nonlinear and Convex Analysis, in press.
[2] H. Attouch, A. Cabot, P. Frankel, and J. Peypouquet, "Alternating proximal algorithms for linearly constrained variational inequalities: application to domain decomposition for PDE's," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 18, pp. 7455-7473, 2011.
[3] H. Attouch, J. Bolte, P. Redont, and A. Soubeyran, "Alternating proximal algorithms for weakly coupled convex minimization problems. Applications to dynamical games and PDE's," Journal of Convex Analysis, vol. 15, no. 3, pp. 485-506, 2008.
[4] Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, "A unified approach for inversion problems in intensity-modulated radiation therapy," Physics in Medicine and Biology, vol. 51, no. 10, pp. 2353-2365, 2006.
[5] H. Attouch, Alternating Minimization and Projection Algorithms. From Convexity to Nonconvexity, Communication in Istituto Nazionale di Alta Matematica, Città Universitaria, Rome, Italy, 2009.
[6] R. Chen, J. Li, and Y. Ren, "Regularization method for the approximate split equality problem in infinite-dimensional Hilbert spaces," Abstract and Applied Analysis, vol. 2013, Article ID 813635, 5 pages, 2013.
[7] C. Byrne and A. Moudafi, "Extensions of the CQ algorithm for the split feasibility and split equality problems," hal-00776640, version 1, 2013.
[8] Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2, pp. 221-239, 1994.
[9] Q. L. Dong and S. He, "Solving the split equality problem without prior knowledge of operator norms," Optimization, in press.
[10] P. Tseng, "Modified forward-backward splitting method for maximal monotone mappings," SIAM Journal on Control and Optimization, vol. 38, no. 2, pp. 431-446, 2000.
[11] J. Zhao and Q. Yang, "Several acceleration schemes for solving the multiple-sets split feasibility problem," Linear Algebra and Its Applications, vol. 437, no. 7, pp. 1648-1657, 2012.

[12] D. Pascali and S. Sburlan, Nonlinear Mappings of Monotone Type, Editura Academiei, Bucharest, Romania, 1978.
[13] Q. Yang, "The relaxed CQ algorithm solving the split feasibility problem," Inverse Problems, vol. 20, no. 4, pp. 1261-1266, 2004.
[14] Q. L. Dong, Y. Yao, and S. He, "Weak convergence theorems of the modified relaxed projection algorithms for the split feasibility problem in Hilbert spaces," Optimization Letters, 2013.
[15] M. Fukushima, "A relaxed projection method for variational inequalities," Mathematical Programming, vol. 35, no. 1, pp. 58-70, 1986.
[16] A. Moudafi, "A relaxed alternating CQ-algorithm for convex feasibility problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 79, pp. 117-121, 2013.

