An Iterative Method for Split Variational Inclusion Problem and Split Fixed Point Problem for Averaged Mappings

Abstract

In this paper, we use the resolvent operator technique to construct a viscosity approximation algorithm for finding a common solution of the split variational inclusion problem and the split fixed point problem for an averaged mapping in real Hilbert spaces. Further, we prove that the sequences generated by the proposed iterative method converge strongly to a common solution of the split variational inclusion problem and the split fixed point problem for averaged mappings, which is also the unique solution of a variational inequality problem. The results presented here improve and extend the corresponding results in this area.

Share and Cite:

Wang, K., Zhao, Y. and Zhao, Z. (2023) An Iterative Method for Split Variational Inclusion Problem and Split Fixed Point Problem for Averaged Mappings. Journal of Applied Mathematics and Physics, 11, 1541-1556. doi: 10.4236/jamp.2023.116101.

1. Introduction

Throughout the paper, unless otherwise stated, let $H_1$ and $H_2$ be real Hilbert spaces with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let $C$ and $Q$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $A:H_1\to H_2$ be a bounded linear operator and let $A^*$ be the corresponding adjoint operator of $A$. A mapping $S:H_1\to H_1$ is called contractive if there exists a constant $\alpha\in(0,1)$ such that

$$\|Sx-Sy\|\le\alpha\|x-y\|,\quad\forall x,y\in H_1.$$

If this inequality holds with $\alpha=1$, then $S$ is called nonexpansive. In addition, let us first review the split feasibility problem (SFP): find $x^*\in C$ such that $Ax^*\in Q$. The split feasibility problem (SFP) originated from phase retrieval and medical image reconstruction [1] [2] [3] , and it has been widely studied, as shown in [4] [5] [6] . When $C$ and $Q$ in the split feasibility problem (SFP) are fixed point sets of nonlinear operators, the split feasibility problem (SFP) is called the split fixed point problem (SFPP) [7] [8] . More precisely: find $x^*\in H_1$ such that

$$x^*\in\mathrm{Fix}(S)\quad\text{and}\quad Ax^*\in\mathrm{Fix}(U),\tag{1.1}$$

where $\mathrm{Fix}(S)$ and $\mathrm{Fix}(U)$ denote the fixed point sets of two nonlinear mappings $S:H_1\to H_1$ and $U:H_2\to H_2$. The solution set of the SFPP is denoted by $F$, that is,

$$F=\{x^*\in H_1 : x^*\in\mathrm{Fix}(S)\ \text{and}\ Ax^*\in\mathrm{Fix}(U)\}.$$

A mapping $T:H_1\to H_1$ is said to be

1) monotone, if

$$\langle Tx-Ty,x-y\rangle\ge 0,\quad\forall x,y\in H_1;$$

2) $\alpha$-strongly monotone, if there exists a constant $\alpha>0$ such that

$$\langle Tx-Ty,x-y\rangle\ge\alpha\|x-y\|^2,\quad\forall x,y\in H_1;$$

3) $\beta$-inverse strongly monotone ($\beta$-ism), if there exists a constant $\beta>0$ such that

$$\langle Tx-Ty,x-y\rangle\ge\beta\|Tx-Ty\|^2,\quad\forall x,y\in H_1;$$

4) firmly nonexpansive, if

$$\langle Tx-Ty,x-y\rangle\ge\|Tx-Ty\|^2,\quad\forall x,y\in H_1.$$

A multivalued mapping $M:H_1\to 2^{H_1}$ is called monotone if, for all $x,y\in H_1$, $u\in Mx$ and $v\in My$ imply $\langle x-y,u-v\rangle\ge 0$. And $M:H_1\to 2^{H_1}$ is maximal if $\mathrm{Graph}(M)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $M$ is maximal if and only if, for $(x,u)\in H_1\times H_1$, $\langle x-y,u-v\rangle\ge 0$ for all $(y,v)\in\mathrm{Graph}(M)$ implies $u\in Mx$. The resolvent mapping $J_\lambda^M:H_1\to H_1$ associated with $M$ is defined by

$$J_\lambda^M(x):=(I+\lambda M)^{-1}(x),\quad x\in H_1,$$

for $\lambda>0$, where $I$ stands for the identity operator on $H_1$. Note that $J_\lambda^M$ is single-valued and firmly nonexpansive.
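As a concrete illustration of the resolvent (not taken from the paper): for the maximal monotone operator $M=\partial|\cdot|$ on $H_1=\mathbb{R}$, the resolvent $(I+\lambda M)^{-1}$ is the well-known soft-thresholding map. The function name below is our own choice.

```python
def resolvent_abs(x: float, lam: float) -> float:
    """Resolvent (I + lam*M)^{-1} for M = subdifferential of |.| on R.

    This is the soft-thresholding map: sign(x) * max(|x| - lam, 0).
    """
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```

Firm nonexpansiveness, $\langle Jx-Jy,x-y\rangle\ge|Jx-Jy|^2$, can be verified numerically on sample points.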

Recently, Moudafi [9] introduced the following split monotone variational inclusion problem (SMVIP): find $x^*\in H_1$ such that

$$0\in f_1(x^*)+B_1(x^*),\tag{1.2}$$

and such that

$$y^*=Ax^*\in H_2\quad\text{solves}\quad 0\in f_2(y^*)+B_2(y^*),\tag{1.3}$$

where $B_1:H_1\to 2^{H_1}$ and $B_2:H_2\to 2^{H_2}$ are multivalued maximal monotone mappings and $f_1:H_1\to H_1$, $f_2:H_2\to H_2$ are given single-valued mappings.

Moudafi [9] introduced an iterative algorithm for solving SMVIP (1.2)-(1.3), which is an important extension of the iterative method given by Censor et al. [10] for the split variational inequality problem. As Moudafi pointed out in [9] , SMVIP (1.2)-(1.3) includes as special cases the split common fixed point problem, the split variational inclusion problem, the split zero point problem and the split feasibility problem [1] [8] - [25] . These problems have been widely studied and serve in practice as a model in intensity-modulated radiation therapy (IMRT) treatment planning, see [1] [25] . They also lie at the core of many inverse problems arising from phase retrieval and other real-world applications; for example, computed tomography and data compression in sensor networks are treated in [2] [26] .

If $f_1\equiv 0$ and $f_2\equiv 0$, then SMVIP (1.2)-(1.3) reduces to the following split variational inclusion problem (SVIP): find $x^*\in H_1$ such that

$$0\in B_1(x^*),\tag{1.4}$$

and such that

$$y^*=Ax^*\in H_2\quad\text{solves}\quad 0\in B_2(y^*).\tag{1.5}$$

Viewed separately, (1.4) is a variational inclusion problem, and we denote its solution set by $\mathrm{SOLVIP}(B_1)$. The SVIP (1.4)-(1.5) constitutes a pair of variational inclusion problems which have to be solved so that the image $y^*=Ax^*$, under the given bounded linear operator $A$, of the solution $x^*$ of (1.4) in $H_1$ is a solution of (1.5) in the second space $H_2$; we denote the solution set of (1.5) by $\mathrm{SOLVIP}(B_2)$. The solution set of SVIP (1.4)-(1.5) is denoted by

$$\Gamma=\{x^*\in H_1 : x^*\in\mathrm{SOLVIP}(B_1)\ \text{and}\ Ax^*\in\mathrm{SOLVIP}(B_2)\}.$$

In 2011, Byrne et al. [24] studied the weak and strong convergence of iterative algorithms for SVIP (1.4)-(1.5): for given $x_0\in H_1$, compute the iterative sequence $\{x_n\}$ generated by

$$x_{n+1}=J_\lambda^{B_1}\big(x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n\big).$$

On the other hand, Censor and Segal [7] studied iterative algorithms for solving the split fixed point problem (SFPP): for given $x_0\in H_1$, compute the sequence $\{x_n\}$ generated by

$$x_{n+1}=\psi\big(x_n-\tau A^*(I-\varphi)Ax_n\big),$$

where $\psi$ and $\varphi$ are two directed operators.

Inspired by Moudafi [9] and Byrne et al. [24], Kazmi and Rizvi [27] proposed the following iterative algorithm for SVIP (1.4)-(1.5) and the fixed point problem of a nonexpansive mapping:

$$\begin{cases}u_n=J_\lambda^{B_1}\big(x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n\big);\\ x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)Su_n,\end{cases}$$

where $\lambda>0$, $\gamma\in(0,\frac{1}{L})$ and $L$ is the spectral radius of the operator $A^*A$.

Motivated and inspired by the above results and the ongoing research in this direction, we suggest and analyze an iterative algorithm to solve the split variational inclusion problem SVIP (1.4)-(1.5) and the split fixed point problem SFPP (1.1) under appropriate conditions. We also prove that the iterative sequences generated by the algorithm converge strongly to a common solution of SVIP (1.4)-(1.5) and SFPP (1.1). The results presented here improve and extend some known results.

2. Preliminaries

We denote the weak and the strong convergence of a sequence $\{x_n\}$ to a point $x$ by $x_n\rightharpoonup x$ and $x_n\to x$, respectively. Let us recall some concepts and results which are needed in the sequel. For every $x\in H_1$, there exists a unique nearest point in $C$, denoted by $P_Cx$, such that

$$\|x-P_Cx\|\le\|x-y\|,\quad\forall y\in C;$$

$P_C$ is called the metric projection of $H_1$ onto $C$. As is well known, $P_C$ is a firmly nonexpansive mapping, that is,

$$\langle x-y,P_Cx-P_Cy\rangle\ge\|P_Cx-P_Cy\|^2,\quad\forall x,y\in H_1.\tag{2.1}$$

In addition, $P_Cx$ is characterized by the facts that $P_Cx\in C$,

$$\langle x-P_Cx,y-P_Cx\rangle\le 0,\tag{2.2}$$

and

$$\|x-P_Cx\|^2+\|y-P_Cx\|^2\le\|x-y\|^2,\quad\forall x\in H_1,\ y\in C.\tag{2.3}$$

In a real Hilbert space, for $x,y\in H_1$ and $\lambda\in\mathbb{R}$, the following identity holds:

$$\|\lambda x+(1-\lambda)y\|^2=\lambda\|x\|^2+(1-\lambda)\|y\|^2-\lambda(1-\lambda)\|x-y\|^2.\tag{2.4}$$
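A quick numerical sanity check of identity (2.4) (our own sketch, not part of the paper), here in $\mathbb{R}^2$ with the Euclidean inner product; note the identity holds for every real $\lambda$, not just $\lambda\in[0,1]$.

```python
import numpy as np

def lhs(lam, x, y):
    # ||lam*x + (1-lam)*y||^2
    return np.linalg.norm(lam * x + (1 - lam) * y) ** 2

def rhs(lam, x, y):
    # lam*||x||^2 + (1-lam)*||y||^2 - lam*(1-lam)*||x-y||^2
    return (lam * np.linalg.norm(x) ** 2 + (1 - lam) * np.linalg.norm(y) ** 2
            - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)

x, y = np.array([1.0, -2.0]), np.array([0.5, 3.0])
for lam in (0.0, 0.25, 0.7, 1.0, -1.0, 2.0):
    assert abs(lhs(lam, x, y) - rhs(lam, x, y)) < 1e-9
```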

Note that every nonexpansive operator $T:H_1\to H_1$ satisfies the inequality

$$\langle(x-T(x))-(y-T(y)),T(y)-T(x)\rangle\le\tfrac12\|(T(x)-x)-(T(y)-y)\|^2,\quad\forall x,y\in H_1.\tag{2.5}$$

As a result, we have

$$\langle x-T(x),y-T(x)\rangle\le\tfrac12\|T(x)-x\|^2,\quad\forall(x,y)\in H_1\times\mathrm{Fix}(T);\tag{2.6}$$

for details, see e.g., ( [28] , Theorem 3.1) and ( [29] , Theorem 2.1).

A mapping $T:H_1\to H_1$ is called averaged if and only if it can be written as the average of the identity mapping and a nonexpansive mapping, i.e., $T:=(1-\alpha)I+\alpha S$, where $\alpha\in(0,1)$, $S:H_1\to H_1$ is nonexpansive and $I$ is the identity operator on $H_1$.

It is easy to see that every averaged mapping is nonexpansive. In addition, every firmly nonexpansive mapping (in particular, the projection onto a nonempty closed convex set and the resolvent operator of a maximal monotone operator) is averaged.

The following are some key properties of averaged operators; see for instance [3] [9] [30] .

Proposition 2.1. (i) If $T=(1-\alpha)S+\alpha V$, where $S:H_1\to H_1$ is averaged, $V:H_1\to H_1$ is nonexpansive and $\alpha\in(0,1)$, then $T$ is averaged.

(ii) The composite of finitely many averaged mappings is averaged.

(iii) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a nonempty common fixed point, then

$$\bigcap_{i=1}^N\mathrm{Fix}(T_i)=\mathrm{Fix}(T_1T_2\cdots T_N).$$

(iv) If $T$ is $\tau$-inverse strongly monotone ($\tau$-ism), then for $\gamma>0$, $\gamma T$ is $\frac{\tau}{\gamma}$-inverse strongly monotone ($\frac{\tau}{\gamma}$-ism).

(v) $T$ is averaged if and only if its complement $I-T$ is $\tau$-inverse strongly monotone ($\tau$-ism) for some $\tau>\frac12$.

Lemma 2.1. [31] Assume that $T$ is a nonexpansive self-mapping of a closed convex subset $C$ of a Hilbert space $H_1$. If $T$ has a fixed point, then $I-T$ is demiclosed; i.e., whenever $\{x_n\}$ is a sequence in $C$ converging weakly to some $x\in C$ and the sequence $\{(I-T)x_n\}$ converges strongly to some $y$, it follows that $(I-T)x=y$. Here $I$ is the identity mapping on $H_1$.

Lemma 2.2. [32] Let $\{a_n\}$ be a sequence of non-negative real numbers such that

$$a_{n+1}\le(1-\beta_n)a_n+\delta_n,\quad n\ge 0,$$

where $\{\beta_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that (i) $\sum_{n=1}^\infty\beta_n=\infty$; (ii) $\limsup_{n\to\infty}\delta_n/\beta_n\le 0$ or $\sum_{n=1}^\infty|\delta_n|<\infty$. Then $\lim_{n\to\infty}a_n=0$.
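Lemma 2.2 can be illustrated numerically (our own sketch, not from the paper): with $\beta_n=\frac{1}{n+2}\in(0,1)$, so that $\sum\beta_n=\infty$, and $\delta_n=\frac{\beta_n}{n+2}$, so that $\delta_n/\beta_n\to 0$, the recursion drives $a_n$ to $0$, although slowly (roughly like $1/n$).

```python
def run_lemma22(a0=1.0, n_iter=100_000):
    """Iterate a_{n+1} = (1 - beta_n) a_n + delta_n under the
    hypotheses of Lemma 2.2 and return the final value."""
    a = a0
    for n in range(n_iter):
        beta = 1.0 / (n + 2)      # in (0,1), sum diverges
        delta = beta / (n + 2)    # delta_n / beta_n -> 0
        a = (1 - beta) * a + delta
    return a
```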

3. Main Results

In this section, we prove a strong convergence theorem based on the proposed iterative algorithm for computing a common approximate solution of SVIP (1.4)-(1.5) and SFPP (1.1).

Theorem 3.1. Let $H_1$ and $H_2$ be two real Hilbert spaces and let $A:H_1\to H_2$ be a bounded linear operator with adjoint operator $A^*$. Let $f:H_1\to H_1$ be a contraction mapping with constant $\alpha\in(0,1)$. Assume that $B_1:H_1\to 2^{H_1}$, $B_2:H_2\to 2^{H_2}$ are maximal monotone mappings, $S:H_1\to H_1$, $U:H_2\to H_2$ are two averaged mappings and $\Gamma\cap F\ne\emptyset$. For a given $x_0\in H_1$, let the iterative sequences $\{u_n\}$, $\{y_n\}$ and $\{x_n\}$ be generated by

$$\begin{cases}u_n=J_\lambda^{B_1}\big(x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n\big);\\ y_n=S\big(u_n-\tau A^*(I-U)Au_n\big);\\ x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)y_n,\end{cases}\tag{3.1}$$

where $\lambda>0$, $\gamma,\tau\in(0,\frac1L)$, $L$ is the spectral radius of the operator $A^*A$, and $\{\alpha_n\}$ is a sequence in $(0,1)$ such that $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^\infty\alpha_n=\infty$ and $\sum_{n=1}^\infty|\alpha_n-\alpha_{n-1}|<\infty$. Then the sequences $\{y_n\}$, $\{u_n\}$ and $\{x_n\}$ all converge strongly to $z\in F\cap\Gamma$, where $z=P_{F\cap\Gamma}f(z)$.
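For finite-dimensional experiments, iteration (3.1) can be transcribed directly from the theorem statement. The helper below is our own illustrative sketch (the name `iterate_31` and the callable-based interface are assumptions, not from the paper): the resolvents, averaged maps and contraction are passed in as functions, and the adjoint of a real matrix is its transpose.

```python
import numpy as np

def iterate_31(x0, A, JB1, JB2, S, U, f, gamma, tau, alpha, n_iter):
    """A sketch of iteration (3.1) in R^n.

    A        : (m, n) matrix, the bounded linear operator (adjoint = A.T)
    JB1, JB2 : resolvents of B1, B2 (the parameter lambda is absorbed into them)
    S, U     : averaged mappings; f : contraction; alpha(n) : step sequence
    gamma, tau should lie in (0, 1/L), L the spectral radius of A.T @ A.
    """
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        Ax = A @ x
        u = JB1(x + gamma * (A.T @ (JB2(Ax) - Ax)))   # u_n
        Au = A @ u
        y = S(u - tau * (A.T @ (Au - U(Au))))         # y_n
        x = alpha(n) * f(x) + (1 - alpha(n)) * y      # x_{n+1}
    return x
```

With maps satisfying the hypotheses of Theorem 3.1, the returned iterate approaches the common solution $z$.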

Proof. We divide the proof into the following steps.

Step 1. Let $p\in\Gamma\cap F$; then $p=J_\lambda^{B_1}p$, $Ap=J_\lambda^{B_2}(Ap)$, $UAp=Ap$ and $Sp=p$. By (3.1) we have

$$\begin{aligned}\|u_n-p\|^2&=\|J_\lambda^{B_1}(x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n)-p\|^2\\&=\|J_\lambda^{B_1}(x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n)-J_\lambda^{B_1}p\|^2\\&\le\|x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n-p\|^2\\&=\|x_n-p\|^2+\gamma^2\|A^*(J_\lambda^{B_2}-I)Ax_n\|^2+2\gamma\langle x_n-p,A^*(J_\lambda^{B_2}-I)Ax_n\rangle\\&\le\|x_n-p\|^2+\gamma^2L\|(J_\lambda^{B_2}-I)Ax_n\|^2+2\gamma\langle x_n-p,A^*(J_\lambda^{B_2}-I)Ax_n\rangle.\end{aligned}\tag{3.2}$$

Denoting $\Lambda=2\gamma\langle x_n-p,A^*(J_\lambda^{B_2}-I)Ax_n\rangle$ and using (2.6), we obtain

$$\begin{aligned}\Lambda&=2\gamma\langle x_n-p,A^*(J_\lambda^{B_2}-I)Ax_n\rangle\\&=2\gamma\langle A(x_n-p)+(J_\lambda^{B_2}-I)Ax_n-(J_\lambda^{B_2}-I)Ax_n,(J_\lambda^{B_2}-I)Ax_n\rangle\\&=2\gamma\big[\langle Ax_n-Ap+J_\lambda^{B_2}Ax_n-Ax_n,(J_\lambda^{B_2}-I)Ax_n\rangle-\|(J_\lambda^{B_2}-I)Ax_n\|^2\big]\\&=2\gamma\big[\langle J_\lambda^{B_2}Ax_n-Ap,(J_\lambda^{B_2}-I)Ax_n\rangle-\|(J_\lambda^{B_2}-I)Ax_n\|^2\big]\\&\le2\gamma\big[\tfrac12\|(J_\lambda^{B_2}-I)Ax_n\|^2-\|(J_\lambda^{B_2}-I)Ax_n\|^2\big]\\&=-\gamma\|(J_\lambda^{B_2}-I)Ax_n\|^2.\end{aligned}\tag{3.3}$$

It follows from (3.2) and (3.3) that

$$\|u_n-p\|^2\le\|x_n-p\|^2+\gamma(L\gamma-1)\|(J_\lambda^{B_2}-I)Ax_n\|^2.\tag{3.4}$$

Since $\gamma\in(0,\frac1L)$, we have $\|u_n-p\|^2\le\|x_n-p\|^2$. Next we prove $\|y_n-p\|^2\le\|u_n-p\|^2$.

By (3.1), we have again

$$\begin{aligned}\|y_n-p\|^2&=\|S(u_n-\tau A^*(I-U)Au_n)-p\|^2\\&=\|S(u_n+\tau A^*(U-I)Au_n)-Sp\|^2\\&\le\|u_n+\tau A^*(U-I)Au_n-p\|^2\\&=\|u_n-p\|^2+\tau^2\|A^*(U-I)Au_n\|^2+2\tau\langle u_n-p,A^*(U-I)Au_n\rangle\\&\le\|u_n-p\|^2+\tau^2L\|(U-I)Au_n\|^2+2\tau\langle u_n-p,A^*(U-I)Au_n\rangle.\end{aligned}\tag{3.5}$$

Denoting $\Theta=2\tau\langle u_n-p,A^*(U-I)Au_n\rangle$; since $U$ is an averaged mapping, it follows from (2.6) that

$$\begin{aligned}\Theta&=2\tau\langle u_n-p,A^*(U-I)Au_n\rangle=2\tau\langle Au_n-Ap,(U-I)Au_n\rangle\\&=2\tau\langle Au_n+(U-I)Au_n-(U-I)Au_n-Ap,(U-I)Au_n\rangle\\&=2\tau\big[\langle UAu_n-Ap,(U-I)Au_n\rangle-\|(U-I)Au_n\|^2\big]\\&\le2\tau\big[\tfrac12\|(U-I)Au_n\|^2-\|(U-I)Au_n\|^2\big]\\&=-\tau\|(U-I)Au_n\|^2.\end{aligned}\tag{3.6}$$

It follows from (3.5) and (3.6) that

$$\|y_n-p\|^2\le\|u_n-p\|^2+L\tau^2\|(U-I)Au_n\|^2-\tau\|(U-I)Au_n\|^2=\|u_n-p\|^2+\tau(\tau L-1)\|(U-I)Au_n\|^2.\tag{3.7}$$

Noting that $\tau\in(0,\frac1L)$, we have $\|y_n-p\|^2\le\|u_n-p\|^2$; thus

$$\|y_n-p\|^2\le\|u_n-p\|^2\le\|x_n-p\|^2.\tag{3.8}$$

Since $f$ is $\alpha$-contractive, it follows from (3.1) and (3.8) that

$$\begin{aligned}\|x_{n+1}-p\|&=\|\alpha_nf(x_n)+(1-\alpha_n)y_n-p\|\\&=\|\alpha_nf(x_n)-\alpha_np+(1-\alpha_n)y_n-(1-\alpha_n)p\|\\&\le\alpha_n\|f(x_n)-p\|+(1-\alpha_n)\|y_n-p\|\\&\le\alpha_n\big[\|f(x_n)-f(p)\|+\|f(p)-p\|\big]+(1-\alpha_n)\|y_n-p\|\\&\le\alpha\alpha_n\|x_n-p\|+\alpha_n\|f(p)-p\|+(1-\alpha_n)\|y_n-p\|\\&\le\alpha\alpha_n\|x_n-p\|+\alpha_n\|f(p)-p\|+(1-\alpha_n)\|x_n-p\|\\&=[1-\alpha_n(1-\alpha)]\|x_n-p\|+\alpha_n\|f(p)-p\|\\&\le\max\Big\{\|x_n-p\|,\frac{\|f(p)-p\|}{1-\alpha}\Big\}\le\cdots\le\max\Big\{\|x_0-p\|,\frac{\|f(p)-p\|}{1-\alpha}\Big\}.\end{aligned}\tag{3.9}$$

Hence $\{x_n\}$ is bounded, and so are $\{u_n\}$ and $\{y_n\}$.

Step 2. Next, we show that $\{x_n\}$ is asymptotically regular, i.e., $\|x_{n+1}-x_n\|\to 0$ as $n\to\infty$. For $\tau\in(0,\frac1L)$, since $S$ and $U$ are both averaged mappings, the mapping $S(I+\tau A^*(U-I)A)$ is nonexpansive (see [9] ). Hence, we obtain

$$\begin{aligned}\|y_n-y_{n-1}\|&=\|S(u_n+\tau A^*(U-I)Au_n)-S(u_{n-1}+\tau A^*(U-I)Au_{n-1})\|\\&=\|S(I+\tau A^*(U-I)A)u_n-S(I+\tau A^*(U-I)A)u_{n-1}\|\\&\le\|u_n-u_{n-1}\|.\end{aligned}\tag{3.10}$$

It follows from (3.1) and (3.10) that

$$\begin{aligned}\|x_{n+1}-x_n\|&=\|\alpha_nf(x_n)+(1-\alpha_n)y_n-[\alpha_{n-1}f(x_{n-1})+(1-\alpha_{n-1})y_{n-1}]\|\\&=\|\alpha_nf(x_n)-\alpha_nf(x_{n-1})+\alpha_nf(x_{n-1})-\alpha_{n-1}f(x_{n-1})\\&\quad+(1-\alpha_n)y_n-(1-\alpha_n)y_{n-1}+(1-\alpha_n)y_{n-1}-(1-\alpha_{n-1})y_{n-1}\|\\&\le\alpha\alpha_n\|x_n-x_{n-1}\|+(1-\alpha_n)\|y_n-y_{n-1}\|+2|\alpha_n-\alpha_{n-1}|K\\&\le\alpha\alpha_n\|x_n-x_{n-1}\|+(1-\alpha_n)\|u_n-u_{n-1}\|+2|\alpha_n-\alpha_{n-1}|K,\end{aligned}\tag{3.11}$$

where $K:=\sup\{\|f(x_n)\|+\|y_n\|:n\in\mathbb{N}\}$. Since, for $\gamma\in(0,\frac1L)$, the mapping $J_\lambda^{B_1}(I+\gamma A^*(J_\lambda^{B_2}-I)A)$ is averaged and hence nonexpansive (see [27] ), we obtain

$$\begin{aligned}\|u_n-u_{n-1}\|&=\|J_\lambda^{B_1}(x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n)-J_\lambda^{B_1}(x_{n-1}+\gamma A^*(J_\lambda^{B_2}-I)Ax_{n-1})\|\\&=\|J_\lambda^{B_1}(I+\gamma A^*(J_\lambda^{B_2}-I)A)x_n-J_\lambda^{B_1}(I+\gamma A^*(J_\lambda^{B_2}-I)A)x_{n-1}\|\\&\le\|x_n-x_{n-1}\|.\end{aligned}$$

It then follows from (3.10) that

$$\|y_n-y_{n-1}\|\le\|u_n-u_{n-1}\|\le\|x_n-x_{n-1}\|.\tag{3.12}$$

Then, from (3.11) and (3.12), we have

$$\|x_{n+1}-x_n\|\le(1-\alpha_n(1-\alpha))\|x_n-x_{n-1}\|+2|\alpha_n-\alpha_{n-1}|K.$$

By applying Lemma 2.2 with $\beta_n:=\alpha_n(1-\alpha)$ and $\delta_n:=2|\alpha_n-\alpha_{n-1}|K$, we have

$$\lim_{n\to\infty}\|x_{n+1}-x_n\|=0.\tag{3.13}$$

Next, since

$$(1-\alpha_n)(y_n-x_n)=x_{n+1}-x_n-\alpha_n(f(x_n)-x_n),$$

we have

$$(1-\alpha_n)\|y_n-x_n\|\le\|x_{n+1}-x_n\|+\alpha_n\|f(x_n)-x_n\|.$$

It follows from (3.13) and $\alpha_n\to 0$ ($n\to\infty$) that

$$\lim_{n\to\infty}\|y_n-x_n\|=0.\tag{3.14}$$

Next, we show that $\|x_n-u_n\|\to 0$ ($n\to\infty$). From (3.8) and (3.4), we have

$$\begin{aligned}\|x_{n+1}-p\|^2&=\|\alpha_nf(x_n)+(1-\alpha_n)y_n-p\|^2=\|\alpha_n(f(x_n)-p)+(1-\alpha_n)(y_n-p)\|^2\\&\le\alpha_n\|f(x_n)-p\|^2+(1-\alpha_n)\|y_n-p\|^2\\&\le\alpha_n\|f(x_n)-p\|^2+(1-\alpha_n)\|u_n-p\|^2\\&\le\alpha_n\|f(x_n)-p\|^2+(1-\alpha_n)\big[\|x_n-p\|^2+\gamma(L\gamma-1)\|(J_\lambda^{B_2}-I)Ax_n\|^2\big]\\&\le\alpha_n\|f(x_n)-p\|^2+\|x_n-p\|^2+(1-\alpha_n)\gamma(L\gamma-1)\|(J_\lambda^{B_2}-I)Ax_n\|^2.\end{aligned}\tag{3.15}$$

Therefore,

$$\begin{aligned}(1-\alpha_n)\gamma(1-L\gamma)\|(J_\lambda^{B_2}-I)Ax_n\|^2&\le\alpha_n\|f(x_n)-p\|^2+\|x_n-p\|^2-\|x_{n+1}-p\|^2\\&\le\alpha_n\|f(x_n)-p\|^2+\|x_{n+1}-x_n\|\big(\|x_n-p\|+\|x_{n+1}-p\|\big).\end{aligned}$$

Since $(1-L\gamma)>0$, $\alpha_n\to 0$ ($n\to\infty$) and (3.13) holds, we obtain

$$\lim_{n\to\infty}\|(J_\lambda^{B_2}-I)Ax_n\|=0.\tag{3.16}$$

From (3.7) and (3.8), we have

$$\begin{aligned}\|x_{n+1}-p\|^2&=\|\alpha_nf(x_n)+(1-\alpha_n)y_n-p\|^2\\&\le\alpha_n\|f(x_n)-p\|^2+(1-\alpha_n)\|y_n-p\|^2\\&\le\alpha_n\|f(x_n)-p\|^2+(1-\alpha_n)\big[\|u_n-p\|^2+\tau(\tau L-1)\|(U-I)Au_n\|^2\big]\\&\le\alpha_n\|f(x_n)-p\|^2+(1-\alpha_n)\big[\|x_n-p\|^2+\tau(\tau L-1)\|(U-I)Au_n\|^2\big]\\&\le\alpha_n\|f(x_n)-p\|^2+\|x_n-p\|^2+(1-\alpha_n)\tau(\tau L-1)\|(U-I)Au_n\|^2.\end{aligned}$$

Therefore,

$$\begin{aligned}(1-\alpha_n)\tau(1-\tau L)\|(U-I)Au_n\|^2&\le\alpha_n\|f(x_n)-p\|^2+\|x_n-p\|^2-\|x_{n+1}-p\|^2\\&\le\alpha_n\|f(x_n)-p\|^2+\|x_{n+1}-x_n\|\big(\|x_n-p\|+\|x_{n+1}-p\|\big).\end{aligned}$$

Since $(1-\tau L)>0$, $\alpha_n\to 0$ ($n\to\infty$) and (3.13) holds, we obtain

$$\lim_{n\to\infty}\|(U-I)Au_n\|=0.\tag{3.17}$$

In addition, using (3.2), (3.8), the firm nonexpansiveness of $J_\lambda^{B_1}$ and $\gamma\in(0,\frac1L)$, we observe that

$$\begin{aligned}\|u_n-p\|^2&=\|J_\lambda^{B_1}(x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n)-p\|^2\\&=\|J_\lambda^{B_1}(x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n)-J_\lambda^{B_1}p\|^2\\&\le\langle u_n-p,x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n-p\rangle\\&=\tfrac12\big[\|u_n-p\|^2+\|x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n-p\|^2\\&\quad-\|(u_n-p)-(x_n+\gamma A^*(J_\lambda^{B_2}-I)Ax_n-p)\|^2\big]\\&\le\tfrac12\big[\|u_n-p\|^2+\|x_n-p\|^2-\|(u_n-x_n)-\gamma A^*(J_\lambda^{B_2}-I)Ax_n\|^2\big]\\&=\tfrac12\big[\|u_n-p\|^2+\|x_n-p\|^2-\|u_n-x_n\|^2-\gamma^2\|A^*(J_\lambda^{B_2}-I)Ax_n\|^2\\&\quad+2\gamma\langle u_n-x_n,A^*(J_\lambda^{B_2}-I)Ax_n\rangle\big]\\&\le\tfrac12\big[\|u_n-p\|^2+\|x_n-p\|^2-\|u_n-x_n\|^2+2\gamma\|A(u_n-x_n)\|\|(J_\lambda^{B_2}-I)Ax_n\|\big].\end{aligned}$$

Therefore,

$$\|u_n-p\|^2\le\|x_n-p\|^2-\|u_n-x_n\|^2+2\gamma\|A(u_n-x_n)\|\|(J_\lambda^{B_2}-I)Ax_n\|.\tag{3.18}$$

It follows from (3.8), (3.15) and (3.18) that

$$\begin{aligned}\|x_{n+1}-p\|^2&\le\alpha_n\|f(x_n)-p\|^2+(1-\alpha_n)\big[\|x_n-p\|^2-\|u_n-x_n\|^2+2\gamma\|A(u_n-x_n)\|\|(J_\lambda^{B_2}-I)Ax_n\|\big]\\&\le\alpha_n\|f(x_n)-p\|^2+\|x_n-p\|^2-(1-\alpha_n)\|u_n-x_n\|^2+2\gamma\|A(u_n-x_n)\|\|(J_\lambda^{B_2}-I)Ax_n\|,\end{aligned}$$

which implies that

$$\begin{aligned}(1-\alpha_n)\|u_n-x_n\|^2&\le\alpha_n\|f(x_n)-p\|^2+\|x_n-p\|^2-\|x_{n+1}-p\|^2+2\gamma\|A(u_n-x_n)\|\|(J_\lambda^{B_2}-I)Ax_n\|\\&\le\alpha_n\|f(x_n)-p\|^2+\big(\|x_n-p\|+\|x_{n+1}-p\|\big)\|x_n-x_{n+1}\|+2\gamma\|A(u_n-x_n)\|\|(J_\lambda^{B_2}-I)Ax_n\|.\end{aligned}$$

Since $\alpha_n\to 0$ ($n\to\infty$), it follows from (3.13) and (3.16) that

$$\lim_{n\to\infty}\|u_n-x_n\|=0.\tag{3.19}$$

Next, we show that $\|y_n-u_n\|\to 0$ ($n\to\infty$). Now, we can write

$$\|y_n-u_n\|=\|y_n-x_n+x_n-u_n\|\le\|y_n-x_n\|+\|x_n-u_n\|.$$

From (3.14) and (3.19), we get

$$\lim_{n\to\infty}\|y_n-u_n\|=0.\tag{3.20}$$

Next, we show that $\lim_{n\to\infty}\|Su_n-u_n\|=0$. Note that from (3.13) and (3.19), we have

$$\lim_{n\to\infty}\|u_n-x_{n+1}\|=0,\tag{3.21}$$

and from (3.13) and (3.14),

$$\lim_{n\to\infty}\|x_{n+1}-y_n\|=0.\tag{3.22}$$

Finally, it follows from (3.1) that

$$\|y_n-Su_n\|=\|S(u_n-\tau A^*(I-U)Au_n)-Su_n\|\le\|u_n-\tau A^*(I-U)Au_n-u_n\|=\tau\|A^*(U-I)Au_n\|\le\tau\|A\|\|(U-I)Au_n\|.$$

From (3.17), we have

$$\lim_{n\to\infty}\|y_n-Su_n\|=0.\tag{3.23}$$

Then, from (3.21)-(3.23), we have

$$\lim_{n\to\infty}\|Su_n-u_n\|=0.\tag{3.24}$$

Step 3. We show that $w\in F\cap\Gamma$. Since $\{u_n\}$ is bounded, we may consider a weak cluster point $w$ of $\{u_n\}$: there exists a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ which converges weakly to $w$. Since $S$ and $U$ are both averaged mappings, they are also both nonexpansive. By (3.17), (3.24) and Lemma 2.1, we have $w\in\mathrm{Fix}(S)$ and $Aw\in\mathrm{Fix}(U)$; thus $w\in F$.

On the other hand, $u_{n_k}=J_\lambda^{B_1}(x_{n_k}+\gamma A^*(J_\lambda^{B_2}-I)Ax_{n_k})$ can be written as

$$(x_{n_k}-u_{n_k})+\gamma A^*(J_\lambda^{B_2}-I)Ax_{n_k}\in\lambda B_1u_{n_k}.\tag{3.25}$$

Passing to the limit as $k\to\infty$ in (3.25), taking into account (3.16) and (3.19) and the fact that the graph of a maximal monotone operator is weakly-strongly closed, we obtain $0\in B_1(w)$, i.e., $w\in\mathrm{SOLVIP}(B_1)$. In addition, since $\{x_n\}$ and $\{u_n\}$ have the same asymptotic behavior, $\{Ax_{n_k}\}$ converges weakly to $Aw$. Again, by (3.16), the fact that the resolvent $J_\lambda^{B_2}$ is nonexpansive, and Lemma 2.1, we obtain $0\in B_2(Aw)$, i.e., $Aw\in\mathrm{SOLVIP}(B_2)$. Thus $w\in\Gamma$, and therefore $w\in F\cap\Gamma$.

Step 4. We show that $x_n\to z$ ($n\to\infty$). First, we claim that $\limsup_{n\to\infty}\langle f(z)-z,x_n-z\rangle\le 0$.

Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j}\rightharpoonup w$ as $j\to\infty$ and $\limsup_{n\to\infty}\langle f(z)-z,x_n-z\rangle=\lim_{j\to\infty}\langle f(z)-z,x_{n_j}-z\rangle$. Since $\lim_{n\to\infty}\|x_n-u_n\|=0$, we have $u_{n_j}\rightharpoonup w$ as $j\to\infty$, and from Step 3 we obtain $w\in F\cap\Gamma$. Indeed, using (2.2), we have

$$\limsup_{n\to\infty}\langle f(z)-z,x_n-z\rangle=\lim_{j\to\infty}\langle f(z)-z,x_{n_j}-z\rangle=\langle f(z)-z,w-z\rangle\le 0,\tag{3.26}$$

where $z=P_{F\cap\Gamma}f(z)$. Next, we show that $x_n\to z$ ($n\to\infty$):

$$\begin{aligned}\|x_{n+1}-z\|^2&=\langle\alpha_nf(x_n)+(1-\alpha_n)y_n-z,x_{n+1}-z\rangle\\&=\alpha_n\langle f(x_n)-z,x_{n+1}-z\rangle+(1-\alpha_n)\langle y_n-z,x_{n+1}-z\rangle\\&=\alpha_n\langle f(x_n)-f(z),x_{n+1}-z\rangle+\alpha_n\langle f(z)-z,x_{n+1}-z\rangle+(1-\alpha_n)\langle y_n-z,x_{n+1}-z\rangle\\&\le\alpha\alpha_n\|x_n-z\|\|x_{n+1}-z\|+\alpha_n\langle f(z)-z,x_{n+1}-z\rangle+(1-\alpha_n)\|x_n-z\|\|x_{n+1}-z\|\\&\le\frac{\alpha\alpha_n}{2}\big[\|x_n-z\|^2+\|x_{n+1}-z\|^2\big]+\alpha_n\langle f(z)-z,x_{n+1}-z\rangle+\frac{1-\alpha_n}{2}\big[\|x_n-z\|^2+\|x_{n+1}-z\|^2\big]\\&=\frac{1-\alpha_n(1-\alpha)}{2}\big[\|x_n-z\|^2+\|x_{n+1}-z\|^2\big]+\alpha_n\langle f(z)-z,x_{n+1}-z\rangle\\&\le\frac{1-\alpha_n(1-\alpha)}{2}\|x_n-z\|^2+\frac12\|x_{n+1}-z\|^2+\alpha_n\langle f(z)-z,x_{n+1}-z\rangle,\end{aligned}$$

which implies that

$$\|x_{n+1}-z\|^2\le[1-\alpha_n(1-\alpha)]\|x_n-z\|^2+2\alpha_n\langle f(z)-z,x_{n+1}-z\rangle.$$

Therefore, according to (3.26) and Lemma 2.2, we obtain $x_n\to z$ ($n\to\infty$). Further, it follows from $\|u_n-x_n\|\to 0$, $u_{n_j}\rightharpoonup w\in F\cap\Gamma$ and $x_n\to z$ ($n\to\infty$) that $z=w$. This completes the proof.

Remark 3.1. Theorem 3.1 improves and extends the corresponding results in [7] [24] .

Remark 3.2. The proposed algorithm is more general than the existing algorithms. A disadvantage is that the spectral radius of the operator $A^*A$ must be computed; however, an adaptive step size can be used to overcome the difficulties caused by computing the spectral radius.

Remark 3.3. Numerical experiments are the direction of our future efforts.

Finally, we give two examples to illustrate the validity of the considered common solution problem for SVIP (1.4)-(1.5) and SFPP (1.1) and the convergence of the proposed algorithm (3.1).

Example 3.1. Let $H=H_1=H_2=\mathbb{R}$ and let $B:H\to 2^H$ be defined by

$$B(x)=\begin{cases}\{1\},&x>0;\\ [0,1],&x=0;\\ \{0\},&x<0.\end{cases}$$

Then $B$ is a maximal monotone mapping. We define the mappings $A,f,S,U:H\to H$ by

$$Ax=\frac{x}{2},\quad f(x)=\frac{x}{3},\quad Sx=\frac{2x}{3}=\frac13x+\frac23\cdot\frac{x}{2},\quad Ux=\frac{3x}{4}=\frac14x+\frac34\cdot\frac{2x}{3},\quad\forall x\in H,$$

respectively. It is easy to check that $A$ is a bounded linear operator, $f$ is a $\frac13$-contractive mapping, and $S$ and $U$ are averaged mappings. Let $B_1(x)=B_2(x)=Bx$; then $B_1,B_2:H\to 2^H$ are maximal monotone mappings. Let $J_\lambda^{B_1}(x)=J_\lambda^{B_2}(x)=\frac{x}{2}$ be the resolvent operators. It is easy to see that $x^*=0\in\Gamma\cap F$ is a common solution to SVIP (1.4)-(1.5) and SFPP (1.1).

Example 3.2. Let $H=H_1=H_2=\mathbb{R}^3$ with the usual inner product and norm. We define the operators $B_1,B_2:H\to H$ by

$$B_1\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}\frac14&0&0\\0&\frac15&0\\0&0&\frac16\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}\quad\text{and}\quad B_2\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}\frac13&0&0\\0&\frac12&0\\0&0&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}.$$

Clearly, $B_1$ and $B_2$ are maximal monotone operators and, for $\lambda>0$, their resolvents are given by

$$J_\lambda^{B_1}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}\frac{4}{4+\lambda}&0&0\\0&\frac{5}{5+\lambda}&0\\0&0&\frac{6}{6+\lambda}\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}\quad\text{and}\quad J_\lambda^{B_2}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}\frac{3}{3+\lambda}&0&0\\0&\frac{2}{2+\lambda}&0\\0&0&\frac{1}{1+\lambda}\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}.$$

We also define the mappings $A,f,S,U:H\to H$ by

$$A\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix},\quad f\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}\frac23&0&0\\0&\frac23&0\\0&0&\frac23\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix},$$

$$S\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}\frac13&0&0\\0&\frac13&0\\0&0&\frac13\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}\quad\text{and}\quad U\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}\frac34&0&0\\0&\frac34&0\\0&0&\frac34\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}.$$

Clearly, $A$ is a bounded linear mapping, $f$ is a $\frac23$-contractive mapping, and $S$ and $U$ are two averaged mappings. It is easy to see that $x^*=(0,0,0)^{\mathrm T}\in\Gamma\cap F$ is the common solution to SVIP (1.4)-(1.5) and SFPP (1.1).
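Iteration (3.1) can be run concretely on the data of Example 3.2. The sketch below is our own illustration (the choices $\lambda=1$, $\alpha_n=\frac{1}{n+2}$ and the starting point are assumptions, not from the paper): the spectral radius $L$ of $A^{\mathrm T}A$ is computed numerically and $\gamma,\tau$ are taken inside $(0,\frac1L)$.

```python
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
L = max(np.linalg.eigvalsh(A.T @ A))   # spectral radius of A^T A (~283.9)
gamma = tau = 0.9 / L                  # gamma, tau in (0, 1/L)

lam = 1.0                              # an assumed choice of lambda
JB1 = np.diag([4 / (4 + lam), 5 / (5 + lam), 6 / (6 + lam)])
JB2 = np.diag([3 / (3 + lam), 2 / (2 + lam), 1 / (1 + lam)])
S, U, f = np.eye(3) / 3, 3 * np.eye(3) / 4, 2 * np.eye(3) / 3

x = np.array([1., 2., 3.])
for n in range(300):
    Ax = A @ x
    u = JB1 @ (x + gamma * (A.T @ (JB2 @ Ax - Ax)))   # u_n
    Au = A @ u
    y = S @ (u - tau * (A.T @ (Au - U @ Au)))         # y_n
    alpha = 1 / (n + 2)
    x = alpha * (f @ x) + (1 - alpha) * y             # x_{n+1}
```

After the loop, `x` is numerically close to the common solution $(0,0,0)^{\mathrm T}$, consistent with Theorem 3.1.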

Acknowledgements

The authors would like to thank the reviewers for their valuable comments, which have helped to improve the quality of this paper.

Funding

This research was supported by Liaoning Provincial Department of Education under project No. LJKMZ20221491.

Authors’ Contributions

The authors carried out the results and read and approved the current version of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Censor, Y. and Elfving, T. (1994) A Multiprojection Algorithm Using Bregman Projections in a Product Space. Numerical Algorithms, 8, 221-239.
https://doi.org/10.1007/BF02142692
[2] Byrne, C. (2002) Iterative Oblique Projection onto Convex Sets and the Split Feasibility Problem. Inverse Problems, 18, 441-453.
https://doi.org/10.1088/0266-5611/18/2/310
[3] Byrne, C. (2004) A Unified Treatment of Some Iterative Algorithms in Signal Processing and Image Reconstruction. Inverse Problems, 20, 103-120.
https://doi.org/10.1088/0266-5611/20/1/006
[4] Wang, F. and Xu, H.-K. (2011) Cyclic Algorithms for Split Feasibility Problems in Hilbert Spaces. Nonlinear Analysis: Theory, Methods & Applications, 74, 4105-4111.
https://doi.org/10.1016/j.na.2011.03.044
[5] Xu, H.-K. (2010) Iterative Methods for the Split Feasibility Problem in Infinite-Dimensional Hilbert Spaces. Inverse Problems, 26, Article ID: 105018.
https://doi.org/10.1088/0266-5611/26/10/105018
[6] Yao, Y., Postolache, M. and Zhu, Z. (2020) Gradient Methods with Selection Technique for the Multiple Sets Split Feasibility Problem. Optimization, 69, 269-281.
https://doi.org/10.1080/02331934.2019.1602772
[7] Censor, Y. and Segal, A. (2009) The Split Common Fixed Point Problem for Directed Operators. Journal of Convex Analysis, 16, 587-600.
[8] Padcharoen, A., Kumam, P. and Cho, Y.J. (2019) Split Common Fixed Point Problems for Demicontractive Operators. Numerical Algorithms, 82, 297-320.
https://doi.org/10.1007/s11075-018-0605-0
[9] Moudafi, A. (2011) Split Monotone Variational Inclusions. Journal of Optimization Theory and Applications, 150, 275-283.
https://doi.org/10.1007/s10957-011-9814-6
[10] Censor, Y., Gibali, A. and Reich, S. (2012) Algorithms for the Split Variational Inequality Problem. Numerical Algorithms, 59, 301-323.
https://doi.org/10.1007/s11075-011-9490-5
[11] Zhao, J. and Hou, D.F. (2019) A Self-Adaptive Iterative Algorithm for the Split Common Fixed Point Problems. Numerical Algorithms, 82, 1047-1063.
https://doi.org/10.1007/s11075-018-0640-x
[12] Thong, D.V. (2017) Viscosity Approximation Methods for Solving Fixed-Point Problems and Split Common Fixed-Point Problems. Journal of Fixed Point Theory and Applications, 19, 1481-1499.
https://doi.org/10.1007/s11784-016-0323-y
[13] Wang, F.H. (2017) A New Iterative Method for the Split Common Fixed Point Problem in Hilbert Spaces. Optimization, 66, 407-415.
https://doi.org/10.1080/02331934.2016.1274991
[14] Boikanyon, O.A. (2015) A Strongly Convergent Algorithm for the Split Common Fixed Point Problem. Applied Mathematics and Computation, 265, 844-853.
https://doi.org/10.1016/j.amc.2015.05.130
[15] Shehu, Y. and Agbebaku, D.F. (2018) On Split Inclusion Problem and Fixed Point Problem for Multi-Valued Mappings. Computational and Applied Mathematics, 37, 1807-1824.
https://doi.org/10.1007/s40314-017-0426-0
[16] Deepho, J., Thounthong, P., Kuman, P. and Phiangsungnoen, S. (2016) A New General Iterative Scheme for Split Variational Inclusion and Fixed Point Problems of k-Strick Pseudo-Contraction Mappings with Convergence Analysis. Journal of Computational and Applied Mathematics, 318, 293-306.
https://doi.org/10.1016/j.cam.2016.09.009
[17] Ogbuisi, F.U. and Mewomo, O.T. (2017) Iterative Solution of Split Variational Inclusion Problem in a Real Banach Spaces. Afrika Matematika, 28, 295-309.
https://doi.org/10.1007/s13370-016-0450-z
[18] Kazmi, K.R., Ali, R. and Furkan, M. (2018) Hybrid Iterative Method for Split Monotone Variational Inclusion Problem and Hierarchical Fixed Point Problem for a Finite Family of Nonexpansive Mappings. Numerical Algorithms, 79, 499-527.
https://doi.org/10.1007/s11075-017-0448-0
[19] Guan, J.-L., Ceng, L.-C. and Hu, B. (2018) Strong Convergence Theorem for Split Monotone Variational Inclusion with Constraints of Variational Inequalities and Fixed Point Problems. Journal of Inequalities and Applications, 2018, Article No. 311.
https://doi.org/10.1186/s13660-018-1905-6
[20] Tuyen, T.M. (2019) A Strong Convergence Theorem for the Split Common Null Point Problem in Banach Spaces. Applied Mathematics & Optimization, 79, 207-227.
https://doi.org/10.1007/s00245-017-9427-z
[21] Zhao, Y.L. and Han, D.X. (2016) Split General Strong Nonlinear Quasi-Variational Inequality Problem. Mathematical Problems in Engineering, 2016, Article ID: 5937016.
https://doi.org/10.1155/2016/5937016
[22] Zhao, Y.L., Liu, X. and Sun, R.N. (2021) Iterative Algorithms of Common Solutions for a Hierarchical Fixed Point Problem, a System of Variational Inequalities, and a Split Equilibrium Problem in Hilbert Spaces. Journal of Inequalities and Applications, 2021, Article No. 111.
https://doi.org/10.1186/s13660-021-02645-4
[23] Moudafi, A. (2010) The Split Common Fixed-Point Problem for Demicontractive Mappings. Inverse Problems, 26, Article ID: 055007.
https://doi.org/10.1088/0266-5611/26/5/055007
[24] Byrne, C., Censor, Y., Gibali, A. and Reich, S. (2012) The Split Common Null Point Problem. Journal of Nonlinear and Convex Analysis, 13, 759-775.
[25] Censor, Y., Bortfeld, T., Martin, B. and Trofimov, A. (2006) A Unified Approach for Inversion Problems in Intensity-Modulated Radiation Therapy. Physics in Medicine and Biology, 51, 2353-2365.
https://doi.org/10.1088/0031-9155/51/10/001
[26] Combettes, P.L. (1996) The Convex Feasibility Problem in Image Recovery. In: Hawkes, P.W., Ed., Advances in Imaging and Electron Physics, Vol. 95, Elsevier, Amsterdam, 155-270.
https://doi.org/10.1016/S1076-5670(08)70157-5
[27] Kazmi, K.R. and Rizvi, S.H. (2014) An Iterative Method for Split Variational Inclusion Problem and Fixed Point Problem for a Nonexpansive Mapping. Optimization Letters, 8, 1113-1124.
https://doi.org/10.1007/s11590-013-0629-2
[28] Crombez, G. (2006) A Hierarchical Presentation of Operators with Fixed Points on Hilbert Spaces. Numerical Functional Analysis & Optimization, 27, 259-277.
https://doi.org/10.1080/01630560600569957
[29] Crombez, G. (2005) A Geometrical Look at Iterative Methods for Operators with Fixed Points. Numerical Functional Analysis and Optimization, 26, 157-175.
https://doi.org/10.1081/NFA-200063882
[30] Bauschke, H.H. and Combettes, P.L. (2011) Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York.
https://doi.org/10.1007/978-1-4419-9467-7
[31] Goebel, K. and Kirk, W.A. (1990) Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511526152
[32] Xu, H.-K. (2004) Viscosity Approximation Methods for Nonexpansive Mappings. Journal of Mathematical Analysis and Applications, 298, 279-291.
https://doi.org/10.1016/j.jmaa.2004.04.059

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.