
Self-Adaptive Algorithms for the Split Common Fixed Point Problem of the Demimetric Mappings

DOI: 10.4236/jamp.2019.710150

ABSTRACT

The split common fixed point problem is an inverse problem that consists in finding an element of one fixed point set whose image under a bounded linear operator belongs to another fixed point set. In this paper, we present new iterative algorithms for solving the split common fixed point problem of demimetric mappings in Hilbert spaces. Moreover, our algorithms do not need any prior information about the operator norm. Weak and strong convergence theorems are established under mild assumptions. The results in this paper extend and improve recent results in the literature.

1. Introduction

Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $S: H_1 \to H_1$ and $T: H_2 \to H_2$ be two nonlinear mappings. We denote the fixed point sets of $S$ and $T$ by $F(S)$ and $F(T)$, respectively. Let $A: H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. Then we consider the following split common fixed point problem:

Find $x \in H_1$ such that $x \in F(S)$ and $Ax \in F(T)$. (1.1)

The split common fixed point problem (1.1) is a generalization of the split feasibility problem arising from signal processing and image restoration; see [1] - [7] for instance. It was first introduced and studied by Censor and Segal [8]. Note that solving (1.1) can be translated into solving the fixed point equation

$$x^* = S\left(x^* - \tau A^*(I - T)Ax^*\right), \quad \tau > 0.$$

Censor and Segal also proposed the following algorithm for directed mappings.

Algorithm 1.1 Initialization: let $x_0 \in H_1 := \mathbb{R}^n$ be arbitrary. Iterative step: let

$$x_{n+1} = S\left(x_n - \tau A^*(I - T)Ax_n\right), \quad n \geq 0,$$

where $S: \mathbb{R}^n \to \mathbb{R}^n$ and $T: \mathbb{R}^m \to \mathbb{R}^m$ are two directed mappings and $\tau \in (0, 2/\lambda)$ with $\lambda$ being the spectral radius of the operator $A^*A$.
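On a concrete finite-dimensional instance, Algorithm 1.1 is a one-line iteration. The sketch below is an illustrative toy problem (not an example from the paper): $S$ and $T$ are taken as metric projections, which are directed mappings, with $C = [0,1]^2$ and $Q = \{0\}$, so the split problem reads "find $x \in [0,1]^2$ with $Ax = 0$" and its unique solution is $x = 0$.

```python
import numpy as np

# Toy instance of Algorithm 1.1: S, T are metric projections (directed maps).
# C = [0,1]^2, Q = {0}; we seek x in [0,1]^2 with Ax = 0, whose solution is 0.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                  # bounded linear operator R^2 -> R^3

S = lambda x: np.clip(x, 0.0, 1.0)          # projection onto C = [0,1]^2
T = lambda y: np.zeros_like(y)              # projection onto Q = {0}

lam = np.linalg.eigvalsh(A.T @ A).max()     # spectral radius of A^T A (= 3 here)
tau = 1.0 / lam                             # any tau in (0, 2/lam)

x = np.array([0.9, 0.7])                    # arbitrary starting point
for _ in range(100):
    x = S(x - tau * A.T @ (A @ x - T(A @ x)))

print(np.linalg.norm(x))                    # -> essentially 0: the iterates reach the solution
```

Note that this classical scheme needs the spectral radius of $A^*A$ to choose $\tau$; removing that requirement is exactly what the self-adaptive stepsizes below achieve.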

Since then, there has been growing interest in the split common fixed point problem; see [9] - [15].

Recently, Wang [16] introduced the following new iterative algorithms for the split common fixed point problem of directed mappings.

Algorithm 1.2 Choose an arbitrary initial guess $x_0$. Assume $x_n$ has been constructed. If

$$x_n - Sx_n + A^*(I - T)Ax_n = 0,$$

then stop; otherwise, continue and construct x n + 1 via the formula:

$$x_{n+1} = x_n - \tau_n\left[x_n - Sx_n + A^*(I - T)Ax_n\right], \quad n \geq 0,$$

where τ n is chosen self-adaptively as

$$\tau_n = \frac{\|x_n - Sx_n\|^2 + \|(I - T)Ax_n\|^2}{\|x_n - Sx_n + A^*(I - T)Ax_n\|^2}.$$

Algorithm 1.3 Let $u \in H_1$ and choose an initial guess $x_0 \in H_1$. Assume $x_n$ has been constructed. If

$$x_n - Sx_n + A^*(I - T)Ax_n = 0,$$

then stop; otherwise, continue and construct x n + 1 via the formula:

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)\left[x_n - \tau_n\left(x_n - Sx_n + A^*(I - T)Ax_n\right)\right], \quad n \geq 0,$$

where the stepsize sequence τ n is chosen self-adaptively as

$$\tau_n = \frac{\|x_n - Sx_n\|^2 + \|(I - T)Ax_n\|^2}{\|x_n - Sx_n + A^*(I - T)Ax_n\|^2}.$$

Wang obtained the weak and strong convergence of Algorithms 1.2 and 1.3, respectively. Inspired by this work, Yao et al. [17] extended Wang's results in [16] from directed mappings to demicontractive mappings. Further, they constructed the following two self-adaptive algorithms for solving the split common fixed point problem (1.1).

Algorithm 1.4. Initialization: let $x_0 \in H_1$ be arbitrary. For $n \geq 0$, assume the current iterate $x_n$ has been constructed. If

$$x_n - Sx_n + A^*(I - T)Ax_n = 0,$$

then stop; otherwise, calculate the next iterate x n + 1 by the following formula

$$\begin{cases} y_n = x_n - Sx_n + A^*(I - T)Ax_n, \\ x_{n+1} = x_n - \gamma \tau_n y_n, \quad n \geq 0, \end{cases}$$

where $\gamma \in (0, \min\{1 - \beta, 1 - \mu\})$ is a positive constant and $\tau_n$ is chosen self-adaptively as

$$\tau_n = \frac{\|x_n - Sx_n\|^2 + \|(I - T)Ax_n\|^2}{\|y_n\|^2}.$$

Algorithm 1.5. Initialization: let $u \in H_1$ be a fixed element and let $x_0 \in H_1$ be arbitrary. Iterative step: for $n \geq 0$, assume the current iterate $x_n$ has been constructed. If

$$x_n - Sx_n + A^*(I - T)Ax_n = 0,$$

then stop; otherwise, calculate the next iterate x n + 1 by the following formula

$$\begin{cases} y_n = x_n - Sx_n + A^*(I - T)Ax_n, \\ x_{n+1} = \alpha_n u + (1 - \alpha_n)(x_n - \gamma \tau_n y_n), \quad n \geq 0, \end{cases}$$

where $\gamma \in (0, \min\{1 - \beta, 1 - \mu\})$ is a positive constant and $\tau_n$ is chosen self-adaptively as

$$\tau_n = \frac{\|x_n - Sx_n\|^2 + \|(I - T)Ax_n\|^2}{\|y_n\|^2}.$$

They also obtained the weak and strong convergence of Algorithms 1.4 and 1.5, respectively. Motivated and inspired by the work in the literature, the main purpose of this paper is to extend the results of Wang [16] and Yao et al. [17] from the directed mappings and the demicontractive mappings to the demimetric mappings. We present two self-adaptive algorithms for solving the split common fixed point problem (1.1). Weak and strong convergence theorems are given under some mild assumptions. Our results essentially improve the corresponding results in [16] [17]. Further, some other results are also improved; see [9] - [22].

2. Preliminaries

Let C be a nonempty closed convex subset of a real Hilbert space H.

Definition 2.1. A mapping $T: C \to C$ is said to be:

1) directed if

$$\|Tx - x^*\|^2 \leq \|x - x^*\|^2 - \|Tx - x\|^2, \quad \forall x \in C, x^* \in F(T);$$

2) $\beta$-demicontractive if there exists a constant $\beta \in [0, 1)$ such that

$$\|Tx - x^*\|^2 \leq \|x - x^*\|^2 + \beta\|Tx - x\|^2, \quad \forall x \in C, x^* \in F(T);$$

3) $k$-demimetric if there exists a constant $k \in (-\infty, 1)$ such that

$$\langle x - x^*, x - Tx \rangle \geq \frac{1 - k}{2}\|x - Tx\|^2, \quad \forall x \in C, x^* \in F(T).$$ (2.1)

Clearly, (2.1) is equivalent to the following:

$$\|Tx - x^*\|^2 \leq \|x - x^*\|^2 + k\|Tx - x\|^2, \quad \forall x \in C, x^* \in F(T).$$

It is obvious that the demimetric mappings include the directed mappings and the demicontractive mappings as special cases. Furthermore, this class of mappings also contains the strict pseudo-contractions, the firmly quasi-nonexpansive mappings, the 2-generalized hybrid mappings and the quasi-nonexpansive mappings. The class of demimetric mappings is fundamental because many common types of mappings arising in optimization belong to this class; see for example [23] [24] and the references therein.
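These inclusions can be checked numerically. The sketch below (an illustrative check with an arbitrary ball and arbitrary sample points) verifies that a metric projection satisfies inequality (2.1) with $k = -1$, i.e. that projections are directed and hence demimetric:

```python
import numpy as np

# The metric projection onto a closed convex set is directed, i.e. (-1)-demimetric:
# <x - x*, x - Tx> >= ((1 - k)/2) ||x - Tx||^2  with  k = -1  (inequality (2.1)).
def proj_ball(x, r=1.0):                    # T = projection onto the ball B(0, r)
    n = np.linalg.norm(x)
    return x if n <= r else r * x / n

rng = np.random.default_rng(1)
x_star = np.zeros(3)                        # a fixed point of T (0 lies in the ball)
k = -1.0
for _ in range(1000):
    x = 5.0 * rng.standard_normal(3)
    Tx = proj_ball(x)
    lhs = np.dot(x - x_star, x - Tx)
    rhs = 0.5 * (1 - k) * np.linalg.norm(x - Tx) ** 2
    assert lhs >= rhs - 1e-9                # (2.1) holds with k = -1 at every sample
```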

Definition 2.2 A sequence $\{x_n\}$ is called Fejér-monotone with respect to a given nonempty set $\Omega$ if, for every $x \in \Omega$,

$$\|x_{n+1} - x\| \leq \|x_n - x\|, \quad \forall n \geq 0.$$

Next we adopt the following notations:

a) $x_n \to x$ and $x_n \rightharpoonup x$ denote the strong and weak convergence of the sequence $\{x_n\}$, respectively;

b) $\omega_w(x_n) := \{x : x_{n_j} \rightharpoonup x \text{ for some subsequence } \{x_{n_j}\} \text{ of } \{x_n\}\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.

Recall that a mapping $f: C \to C$ is said to be contractive if there exists a constant $v \in (0, 1)$ such that

$$\|fx - fy\| \leq v\|x - y\|, \quad \forall x, y \in C.$$

We use $\Pi_C$ to denote the collection of mappings $f$ verifying the above inequality. That is,

$$\Pi_C = \{f : C \to C : f \text{ is a contraction with constant } v\}.$$

Let $D$ be a nonempty subset of $C$. A sequence $\{f_n\}$ of mappings of $C$ into $H$ is said to be stable on $D$ (see [25]) if $\{f_n(x) : n \geq 0\}$ is a singleton for every $x \in D$. It is clear that if $\{f_n\}$ is stable on $D$, then $f_n(x) = f_0(x)$ for all $n \geq 0$ and $x \in D$.

Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted $P_C$, assigns to each $u \in H$ the unique point $P_C(u) \in C$ with the property

$$\|u - P_C(u)\| = \inf\{\|u - v\| : v \in C\}.$$

The metric projection $P_C$ of $H$ onto $C$ is characterized by

$$\langle u - P_C(u), y - P_C(u) \rangle \leq 0, \quad \forall y \in C, u \in H.$$
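For instance, the projection onto a box is coordinatewise clipping, and the variational characterization above can be verified directly (the box and the sample points below are arbitrary illustrative choices):

```python
import numpy as np

# P_C for the box C = [0,1]^3 is coordinatewise clipping; the characterization
# <u - P_C(u), y - P_C(u)> <= 0 must hold for every y in C.
P = lambda u: np.clip(u, 0.0, 1.0)

rng = np.random.default_rng(2)
u = 3.0 * rng.standard_normal(3)            # a point of H = R^3, possibly outside C
Pu = P(u)
for _ in range(1000):
    y = rng.random(3)                       # a random point of C
    assert np.dot(u - Pu, y - Pu) <= 1e-12  # the characterizing inequality
```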

Lemma 2.1 ([26]) Let $\Omega$ be a nonempty closed convex subset of $H$. If the sequence $\{x_n\}$ is Fejér monotone with respect to $\Omega$, then the following hold:

1) $x_n \rightharpoonup x^* \in \Omega$ iff $\omega_w(x_n) \subset \Omega$;

2) the sequence $\{P_\Omega(x_n)\}$ converges strongly;

3) if $x_n \rightharpoonup x^* \in \Omega$, then $x^* = \lim_{n \to \infty} P_\Omega(x_n)$.

Lemma 2.2 ([27]) Let $\{\alpha_n\}$ be a sequence of nonnegative numbers satisfying

$$\alpha_{n+1} \leq (1 - \gamma_n)\alpha_n + \gamma_n c_n, \quad n \geq 0,$$

where $\{\gamma_n\} \subset (0, 1)$ and $\{c_n\}$ satisfy the restrictions:

1) $\sum_{n=1}^{\infty} \gamma_n = \infty$;

2) $\limsup_{n \to \infty} c_n \leq 0$ or $\sum_{n=1}^{\infty} |c_n \gamma_n| < \infty$.

Then $\lim_{n \to \infty} \alpha_n = 0$.
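As a quick numerical illustration of Lemma 2.2, take the arbitrary concrete choices $\gamma_n = c_n = 1/(n+1)$, so that $\sum \gamma_n = \infty$ and $\limsup c_n \leq 0$; the recursion must then drive $\alpha_n$ to zero:

```python
# Lemma 2.2, numerically: gamma_n = c_n = 1/(n + 1) satisfies both restrictions,
# so alpha_{n+1} = (1 - gamma_n) alpha_n + gamma_n c_n forces alpha_n -> 0.
alpha = 1.0
for n in range(200000):
    g = 1.0 / (n + 1)                 # gamma_n
    alpha = (1 - g) * alpha + g * (1.0 / (n + 1))

print(alpha)  # -> a small value close to 0
```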

Lemma 2.3 ([23] [24]) Let $E$ be a smooth, strictly convex and reflexive Banach space and let $k$ be a real number with $k \in (-\infty, 1)$. Let $U$ be a $k$-demimetric mapping of $E$ into itself. Then $F(U)$ is closed and convex.

3. Main Results

Now we study the split common fixed point problem (1.1) under the following hypotheses:

• $H_1$ and $H_2$ are two real Hilbert spaces;

• $S: H_1 \to H_1$ and $T: H_2 \to H_2$ are two demimetric mappings with constants $\beta \in (-\infty, 1)$ and $\mu \in (-\infty, 1)$, respectively;

• $A: H_1 \to H_2$ is a bounded linear operator with adjoint operator $A^*$;

• $\{f_n\} \subset \Pi_C$ is stable on $\Omega$, where $\Omega$ denotes the solution set of problem (1.1).

Lemma 3.1 $z^*$ solves problem (1.1) iff $z^* - Sz^* + A^*(I - T)Az^* = 0$.

Proof. If $z^*$ solves problem (1.1), then $z^* = Sz^*$ and $(I - T)Az^* = 0$. Therefore, we get $z^* - Sz^* + A^*(I - T)Az^* = 0$. To see the converse, suppose that $z^* - Sz^* + A^*(I - T)Az^* = 0$. Then we have, for any $z \in \Omega$, that

$$\begin{aligned} 0 &= \|z^* - Sz^* + A^*(I - T)Az^*\| \, \|z^* - z\| \\ &\geq \langle z^* - Sz^* + A^*(I - T)Az^*, z^* - z \rangle \\ &= \langle z^* - Sz^*, z^* - z \rangle + \langle A^*(I - T)Az^*, z^* - z \rangle \\ &= \langle z^* - Sz^*, z^* - z \rangle + \langle (I - T)Az^*, Az^* - Az \rangle. \end{aligned}$$ (3.1)

Since S and T are demimetric, we have that

$$\langle z^* - Sz^*, z^* - z \rangle \geq \frac{1 - \beta}{2}\|z^* - Sz^*\|^2$$ (3.2)

and

$$\langle (I - T)Az^*, Az^* - Az \rangle \geq \frac{1 - \mu}{2}\|Az^* - TAz^*\|^2.$$ (3.3)

Combining (3.1), (3.2) and (3.3), we obtain that

$$0 \geq \frac{1 - \beta}{2}\|z^* - Sz^*\|^2 + \frac{1 - \mu}{2}\|Az^* - TAz^*\|^2.$$ (3.4)

Since $\beta, \mu \in (-\infty, 1)$, we infer from (3.4) that $z^* \in F(S)$ and $Az^* \in F(T)$. Therefore, $z^*$ solves problem (1.1). This completes the proof.

Next we construct the following self-adaptive algorithm to solve problem (1.1).

Algorithm 3.1. Initialization: let $x_0 \in H_1$ be arbitrary. For $n \geq 0$, assume the current iterate $x_n$ has been constructed. If

$$x_n - Sx_n + A^*(I - T)Ax_n = 0,$$

then stop (in this case x n solves problem (1.1) by Lemma 3.1); otherwise, calculate the next iterate x n + 1 by the following formula

$$\begin{cases} y_n = x_n - Sx_n + A^*(I - T)Ax_n, \\ x_{n+1} = x_n - \gamma \tau_n y_n, \quad n \geq 0, \end{cases}$$ (3.5)

where $\gamma \in (0, \min\{1 - \beta, 1 - \mu\})$ is a positive constant and $\tau_n$ is chosen self-adaptively as

$$\tau_n = \frac{\|x_n - Sx_n\|^2 + \|(I - T)Ax_n\|^2}{\|y_n\|^2}.$$

We assume that the sequence { x n } generated by Algorithm 3.1 is infinite. In other words, Algorithm 3.1 does not terminate in a finite number of iterations.
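To make the scheme concrete, here is a direct transcription of Algorithm 3.1 on a toy problem; all data below are illustrative assumptions. $S$ and $T$ are metric projections, hence directed and thus demimetric with $\beta = \mu = -1$, so $\gamma$ may be chosen anywhere in $(0, 2)$. Note that neither $\|A\|$ nor the spectral radius of $A^*A$ enters the iteration:

```python
import numpy as np

# Algorithm 3.1 on a toy split problem: find x in [0,1]^2 with Ax = 0.
# S, T are projections (directed, hence demimetric with beta = mu = -1),
# so gamma may be taken anywhere in (0, min{1-beta, 1-mu}) = (0, 2).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
S = lambda x: np.clip(x, 0.0, 1.0)          # F(S) = [0,1]^2
T = lambda y: np.zeros_like(y)              # F(T) = {0}; here Omega = {0}

gamma = 1.0
x = np.array([0.9, 0.7])
for _ in range(300):
    u, v = x - S(x), A @ x - T(A @ x)       # residuals x - Sx and (I - T)Ax
    y = u + A.T @ v                         # y_n
    if np.linalg.norm(y) == 0.0:            # x_n already solves (1.1) (Lemma 3.1)
        break
    tau = (np.linalg.norm(u)**2 + np.linalg.norm(v)**2) / np.linalg.norm(y)**2
    x = x - gamma * tau * y                 # x_{n+1} = x_n - gamma * tau_n * y_n

print(np.linalg.norm(x))                    # -> essentially 0: the iterates reach the solution
```

The stepsize $\tau_n$ is computed from the residuals alone, which is exactly the "no prior information of the operator norm" feature claimed in the abstract.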

Theorem 3.2. Assume that $S$ and $T$ are demiclosed at zero. If $\Omega \neq \emptyset$, then the sequence $\{x_n\}$ generated by (3.5) converges weakly to a solution $z^*$ $(= \lim_{n \to \infty} P_\Omega(x_n))$ of problem (1.1).

Proof. Since $A$ is linear and continuous, in view of Lemma 2.3 we see that $\Omega$ is closed and convex. Thus $P_\Omega$ is well defined.

We next prove that the sequence $\{x_n\}$ is Fejér-monotone with respect to $\Omega$. Letting $z \in \Omega$, we obtain

$$\begin{aligned} \langle y_n, x_n - z \rangle &= \langle x_n - Sx_n + A^*(I - T)Ax_n, x_n - z \rangle \\ &= \langle x_n - Sx_n, x_n - z \rangle + \langle A^*(I - T)Ax_n, x_n - z \rangle \\ &\geq \frac{1 - \beta}{2}\|x_n - Sx_n\|^2 + \frac{1 - \mu}{2}\|(I - T)Ax_n\|^2 \\ &\geq \frac{1}{2}\min\{1 - \beta, 1 - \mu\}\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right). \end{aligned}$$ (3.6)

In view of (3.5) and (3.6), we deduce

$$\begin{aligned} \|x_{n+1} - z\|^2 &= \|x_n - \gamma \tau_n y_n - z\|^2 \\ &= \|x_n - z\|^2 - 2\gamma \tau_n \langle y_n, x_n - z \rangle + \gamma^2 \tau_n^2 \|y_n\|^2 \\ &\leq \|x_n - z\|^2 + \gamma^2 \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|y_n\|^2} - \gamma \min\{1 - \beta, 1 - \mu\} \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|y_n\|^2} \\ &= \|x_n - z\|^2 - \gamma\left(\min\{1 - \beta, 1 - \mu\} - \gamma\right) \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|y_n\|^2}. \end{aligned}$$ (3.7)

This implies that the sequence { x n } is Fejér monotone.

Next, we show that every weak cluster point of the sequence { x n } belongs to the solution set of problem (1.1).

From the Fejér-monotonicity of $\{x_n\}$, it follows that the sequence $\{x_n\}$ is bounded. Further, we deduce from (3.7) that

$$\gamma\left(\min\{1 - \beta, 1 - \mu\} - \gamma\right) \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|y_n\|^2} \leq \|x_n - z\|^2 - \|x_{n+1} - z\|^2.$$

Summing up the last inequality over $n$ yields

$$\gamma\left(\min\{1 - \beta, 1 - \mu\} - \gamma\right) \sum_{n=0}^{\infty} \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|y_n\|^2} \leq \|x_0 - z\|^2 < \infty,$$

which implies that

$$\lim_{n \to \infty} \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|y_n\|^2} = 0.$$

Observe that

$$\begin{aligned} \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|y_n\|^2} &= \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|x_n - Sx_n + A^*(I - T)Ax_n\|^2} \\ &\geq \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{2\left(\|x_n - Sx_n\|^2 + \|A\|^2\|(I - T)Ax_n\|^2\right)} \\ &\geq \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{2\max\{1, \|A\|^2\}\left(\|x_n - Sx_n\|^2 + \|(I - T)Ax_n\|^2\right)} \\ &= \frac{\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2}{2\max\{1, \|A\|^2\}}. \end{aligned}$$ (3.8)

Hence, by (3.8), $\lim_{n \to \infty}\|x_n - Sx_n\| = \lim_{n \to \infty}\|Ax_n - TAx_n\| = 0$. By the demiclosedness (at zero) of $S$ and $T$, we deduce immediately that $\omega_w(x_n) \subset \Omega$. Thus the conditions of Lemma 2.1 are all satisfied. Consequently, $x_n \rightharpoonup z^* = \lim_{n \to \infty} P_\Omega(x_n)$. This completes the proof.

Next, we study an iteration with strong convergence for solving problem (1.1).

Algorithm 3.3 Initialization: let $x_0 \in H_1$ be arbitrary. Iterative step: for $n \geq 0$, assume the current iterate $x_n$ has been constructed. If

$$x_n - Sx_n + A^*(I - T)Ax_n = 0,$$

then stop (in this case x n solves problem (1.1) by Lemma 3.1); otherwise, calculate the next iterate x n + 1 by the following formula

$$\begin{cases} y_n = x_n - Sx_n + A^*(I - T)Ax_n, \\ x_{n+1} = \alpha_n f_n x_n + (1 - \alpha_n)(x_n - \gamma \tau_n y_n), \quad n \geq 0, \end{cases}$$ (3.9)

where $\gamma \in (0, \min\{1 - \beta, 1 - \mu\})$ is a positive constant and $\tau_n$ is chosen self-adaptively as

$$\tau_n = \frac{\|x_n - Sx_n\|^2 + \|(I - T)Ax_n\|^2}{\|y_n\|^2}.$$
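Algorithm 3.3 differs from Algorithm 3.1 only in the averaging step. In the sketch below (illustrative data again, on the same toy problem) we take the simplest admissible contraction sequence, the constant maps $f_n \equiv u$, which are trivially stable on $\Omega$, together with $\alpha_n = 1/(n+1)$, which satisfies condition (C3) of Theorem 3.4 below:

```python
import numpy as np

# Algorithm 3.3 on the toy split problem: find x in [0,1]^2 with Ax = 0.
# f_n = f is the constant map x |-> u (a contraction, stable on Omega);
# alpha_n = 1/(n+1).  Here Omega = {0}, so the limit z = P_Omega(f_0 z) = 0.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
S = lambda x: np.clip(x, 0.0, 1.0)          # F(S) = [0,1]^2
T = lambda y: np.zeros_like(y)              # F(T) = {0}

u = np.array([0.5, 0.5])
f = lambda x: u                             # constant contraction, f_n = f for all n
gamma = 1.0                                 # in (0, min{1-beta, 1-mu}) = (0, 2)

x = np.array([0.9, 0.7])
for n in range(5000):
    w, v = x - S(x), A @ x - T(A @ x)       # residuals x - Sx and (I - T)Ax
    y = w + A.T @ v                         # y_n
    if np.linalg.norm(y) == 0.0:
        break
    tau = (np.linalg.norm(w)**2 + np.linalg.norm(v)**2) / np.linalg.norm(y)**2
    alpha = 1.0 / (n + 1)
    x = alpha * f(x) + (1 - alpha) * (x - gamma * tau * y)

print(np.linalg.norm(x))                    # small: x_n tends strongly to z = 0
```

Because of the vanishing anchor term $\alpha_n f_n x_n$, this variant trades some speed for strong (norm) convergence.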

Theorem 3.4 Assume that:

(C1) $\Omega \neq \emptyset$;

(C2) S and T are demiclosed at zero;

(C3) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$.

Then the sequence $\{x_n\}$ generated by (3.9) converges strongly to the solution $z$ $(= P_\Omega f_0 z)$ of problem (1.1).

Proof. Putting $z = P_\Omega f_0 z$, we obtain from (3.7) that

$$\|x_n - \gamma \tau_n y_n - z\|^2 \leq \|x_n - z\|^2 - \gamma\left(\min\{1 - \beta, 1 - \mu\} - \gamma\right) \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|y_n\|^2} \leq \|x_n - z\|^2.$$ (3.10)

Next, we show that the sequence $\{x_n\}$ is bounded. Indeed, we obtain from (3.9) and (3.10) that

$$\begin{aligned} \|x_{n+1} - z\| &= \|\alpha_n f_n x_n + (1 - \alpha_n)(x_n - \gamma \tau_n y_n) - z\| \\ &\leq \alpha_n\|f_n x_n - z\| + (1 - \alpha_n)\|x_n - \gamma \tau_n y_n - z\| \\ &\leq \alpha_n\left(\|f_n x_n - f_n z\| + \|f_n z - z\|\right) + (1 - \alpha_n)\|x_n - z\| \\ &\leq \alpha_n\left(v\|x_n - z\| + \|f_0 z - z\|\right) + (1 - \alpha_n)\|x_n - z\| \\ &= \alpha_n\|f_0 z - z\| + \left(1 - \alpha_n(1 - v)\right)\|x_n - z\|, \end{aligned}$$

where we used that $f_n z = f_0 z$ because $\{f_n\}$ is stable on $\Omega$ and $z \in \Omega$.

By induction, we get

$$\|x_{n+1} - z\| \leq \max\left\{\frac{\|f_0 z - z\|}{1 - v}, \|x_0 - z\|\right\},$$

which shows that the sequence $\{x_n\}$ is bounded.

By virtue of (3.9), we deduce

$$\begin{aligned} \|x_{n+1} - z\|^2 &= \langle \alpha_n f_n x_n + (1 - \alpha_n)(x_n - \gamma \tau_n y_n) - z, x_{n+1} - z \rangle \\ &= (1 - \alpha_n)\langle x_n - \gamma \tau_n y_n - z, x_{n+1} - z \rangle + \alpha_n \langle f_n x_n - f_n z, x_{n+1} - z \rangle + \alpha_n \langle f_0 z - z, x_{n+1} - z \rangle \\ &\leq (1 - \alpha_n)\|x_n - \gamma \tau_n y_n - z\| \, \|x_{n+1} - z\| + \alpha_n\|f_n x_n - f_n z\| \, \|x_{n+1} - z\| + \alpha_n \langle f_0 z - z, x_{n+1} - z \rangle \\ &\leq (1 - \alpha_n)\|x_n - \gamma \tau_n y_n - z\| \, \|x_{n+1} - z\| + \alpha_n v\|x_n - z\| \, \|x_{n+1} - z\| + \alpha_n \langle f_0 z - z, x_{n+1} - z \rangle \\ &\leq (1 - \alpha_n)\left(\frac{1}{2}\|x_n - \gamma \tau_n y_n - z\|^2 + \frac{1}{2}\|x_{n+1} - z\|^2\right) + \alpha_n\left(\frac{1}{2}v\|x_n - z\|^2 + \frac{1}{2}\|x_{n+1} - z\|^2\right) + \alpha_n \langle f_0 z - z, x_{n+1} - z \rangle, \end{aligned}$$

which implies

$$\|x_{n+1} - z\|^2 \leq (1 - \alpha_n)\|x_n - \gamma \tau_n y_n - z\|^2 + \alpha_n v\|x_n - z\|^2 + 2\alpha_n \langle f_0 z - z, x_{n+1} - z \rangle.$$

This together with (3.10) implies that

$$\begin{aligned} \|x_{n+1} - z\|^2 &\leq \left(1 - \alpha_n(1 - v)\right)\|x_n - z\|^2 + 2\alpha_n \langle f_0 z - z, x_{n+1} - z \rangle \\ &\quad - (1 - \alpha_n)\gamma\left(\min\{1 - \beta, 1 - \mu\} - \gamma\right) \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|y_n\|^2} \\ &\leq \left(1 - \alpha_n(1 - v)\right)\|x_n - z\|^2 \\ &\quad + \alpha_n\left(2\langle f_0 z - z, x_{n+1} - z \rangle - \frac{(1 - \alpha_n)\gamma\left(\min\{1 - \beta, 1 - \mu\} - \gamma\right)}{\alpha_n} \cdot \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|x_n - Sx_n + A^*(I - T)Ax_n\|^2}\right). \end{aligned}$$ (3.11)

Set $\delta_n = \|x_n - z\|^2$ and

$$\sigma_n = 2\langle f_0 z - z, x_{n+1} - z \rangle - \frac{(1 - \alpha_n)\gamma\left(\min\{1 - \beta, 1 - \mu\} - \gamma\right)}{\alpha_n} \cdot \frac{\left(\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2\right)^2}{\|x_n - Sx_n + A^*(I - T)Ax_n\|^2}$$ (3.12)

for all $n \geq 0$. Then (3.11) can be rewritten as

$$\delta_{n+1} \leq \left(1 - \alpha_n(1 - v)\right)\delta_n + \alpha_n \sigma_n, \quad n \geq 0.$$ (3.13)

From (3.12), we find

$$\sigma_n \leq 2\langle f_0 z - z, x_{n+1} - z \rangle \leq 2\|f_0 z - z\| \, \|x_{n+1} - z\|.$$

It follows that $\limsup_{n \to \infty} \sigma_n < +\infty$.

Next we show that $\limsup_{n \to \infty} \sigma_n \geq -1$.

Suppose, on the contrary, that $\limsup_{n \to \infty} \sigma_n < -1$. Then there exists $n_0$ such that $\sigma_n \leq -1$ for all $n \geq n_0$. It then follows from (3.13) that

$$\delta_{n+1} \leq \left(1 - \alpha_n(1 - v)\right)\delta_n - \alpha_n \leq \delta_n - \alpha_n$$

for all $n \geq n_0$. By induction, we have

$$\delta_{n+1} \leq \delta_{n_0} - \sum_{i=n_0}^{n} \alpha_i.$$ (3.14)

Taking $\limsup$ as $n \to \infty$ in (3.14), we get

$$\limsup_{n \to \infty} \delta_n \leq \delta_{n_0} - \lim_{n \to \infty} \sum_{i=n_0}^{n} \alpha_i = -\infty,$$

which contradicts $\delta_n \geq 0$. So $-1 \leq \limsup_{n \to \infty} \sigma_n < +\infty$. Thus, we can take a subsequence $\{n_k\}$ such that

$$\limsup_{n \to \infty} \sigma_n = \lim_{k \to \infty} \sigma_{n_k} = \lim_{k \to \infty}\left[2\langle f_0 z - z, x_{n_k+1} - z \rangle - \frac{(1 - \alpha_{n_k})\gamma\left(\min\{1 - \beta, 1 - \mu\} - \gamma\right)}{\alpha_{n_k}} \cdot \frac{\left(\|x_{n_k} - Sx_{n_k}\|^2 + \|Ax_{n_k} - TAx_{n_k}\|^2\right)^2}{\|x_{n_k} - Sx_{n_k} + A^*(I - T)Ax_{n_k}\|^2}\right].$$ (3.15)

Since $\{\langle f_0 z - z, x_{n_k+1} - z \rangle\}$ is a bounded real sequence, without loss of generality we may assume that $\lim_{k \to \infty} \langle f_0 z - z, x_{n_k+1} - z \rangle$ exists. Consequently, from (3.15), the following limit also exists:

$$\lim_{k \to \infty} \frac{(1 - \alpha_{n_k})\gamma\left(\min\{1 - \beta, 1 - \mu\} - \gamma\right)}{\alpha_{n_k}} \cdot \frac{\left(\|x_{n_k} - Sx_{n_k}\|^2 + \|Ax_{n_k} - TAx_{n_k}\|^2\right)^2}{\|x_{n_k} - Sx_{n_k} + A^*(I - T)Ax_{n_k}\|^2}.$$

Since $\alpha_{n_k} \to 0$, the coefficient in front of the fraction tends to $+\infty$, so it turns out that

$$\lim_{k \to \infty} \frac{\left(\|x_{n_k} - Sx_{n_k}\|^2 + \|Ax_{n_k} - TAx_{n_k}\|^2\right)^2}{\|x_{n_k} - Sx_{n_k} + A^*(I - T)Ax_{n_k}\|^2} = 0.$$ (3.16)

Taking into consideration that

$$\|x_{n_k} - Sx_{n_k}\|^2 + \|Ax_{n_k} - TAx_{n_k}\|^2 \leq 2\max\{1, \|A\|^2\} \cdot \frac{\left(\|x_{n_k} - Sx_{n_k}\|^2 + \|Ax_{n_k} - TAx_{n_k}\|^2\right)^2}{\|x_{n_k} - Sx_{n_k} + A^*(I - T)Ax_{n_k}\|^2},$$

we then deduce from (3.16) that

$$\lim_{k \to \infty} \|x_{n_k} - Sx_{n_k}\| = \lim_{k \to \infty} \|Ax_{n_k} - TAx_{n_k}\| = 0.$$ (3.17)

It follows from (3.17) and the demiclosedness of $S$ and $T$ that any weak cluster point of $\{x_{n_k}\}$ belongs to $\Omega$. Observe that

$$\begin{aligned} \|x_{n+1} - x_n\| &\leq \alpha_n\|x_n - f_n x_n\| + (1 - \alpha_n)\gamma \tau_n \|y_n\| \\ &= \alpha_n\|x_n - f_n x_n\| + (1 - \alpha_n)\gamma \cdot \frac{\|x_n - Sx_n\|^2 + \|Ax_n - TAx_n\|^2}{\|x_n - Sx_n + A^*(I - T)Ax_n\|}. \end{aligned}$$

By (C3) and (3.16), we derive

$$\lim_{k \to \infty} \|x_{n_k+1} - x_{n_k}\| = 0.$$

This means that any weak cluster point of $\{x_{n_k+1}\}$ also belongs to $\Omega$. Without loss of generality, we assume that $\{x_{n_k+1}\}$ converges weakly to $\bar{x} \in \Omega$. Hence, we obtain

$$\limsup_{n \to \infty} \sigma_n \leq \lim_{k \to \infty} 2\langle f_0 z - z, x_{n_k+1} - z \rangle = 2\langle f_0 z - z, \bar{x} - z \rangle \leq 0,$$

where the last inequality is due to the characterization of the metric projection and the fact that $z = P_\Omega f_0 z$. Rewriting (3.13) as

$$\delta_{n+1} \leq \left(1 - \alpha_n(1 - v)\right)\delta_n + \alpha_n(1 - v) \cdot \frac{\sigma_n}{1 - v}, \quad n \geq 0,$$

and applying Lemma 2.2, we get $x_n \to z$ as $n \to \infty$. This completes the proof.

Theorem 3.5 Let $S: H_1 \to H_1$ and $T: H_2 \to H_2$ be two demicontractive mappings with constants $\beta \in [0, 1)$ and $\mu \in [0, 1)$, respectively. Then, under the assumptions of Theorem 3.4, the sequence $\{x_n\}$ generated by (3.9) converges strongly to the solution $z$ $(= P_\Omega f_0 z)$ of problem (1.1).

4. Conclusion

In this paper, we considered a class of split common fixed point problems. By extending the results in [16] [17] from the directed mappings and the demicontractive mappings to the demimetric mappings, and by replacing the fixed element $u \in H_1$ with a sequence of mappings $\{f_n\} \subset \Pi_C$, we constructed two self-adaptive algorithms for solving the split common fixed point problem. Further, we established weak and strong convergence theorems under certain appropriate assumptions. The results in this paper extend and improve recent results in the literature.

Acknowledgements

This research was supported by the Key Scientific Research Projects of Higher Education Institutions in Henan Province (20A110038).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Chen, X. , Song, Y. , He, J. and Gong, L. (2019) Self-Adaptive Algorithms for the Split Common Fixed Point Problem of the Demimetric Mappings. Journal of Applied Mathematics and Physics, 7, 2187-2199. doi: 10.4236/jamp.2019.710150.

References

[1] Byrne, C. (2004) A Unified Treatment of Some Iterative Algorithms in Signal Processing and Image Reconstruction. Inverse Problems, 20, 103-120.
https://doi.org/10.1088/0266-5611/20/1/006
[2] Ceng, L.C., Ansari, Q.H. and Yao, J.C. (2012) An Extra Gradient Method for Split Feasibility and Fixed Point Problems. Computers & Mathematics with Applications, 64, 633-642.
https://doi.org/10.1016/j.camwa.2011.12.074
[3] Măruşter, Ş. and Popîrlan, C. (2008) On the Mann-Type Iteration and the Convex Feasibility Problem. Journal of Computational and Applied Mathematics, 212, 390-396.
https://doi.org/10.1016/j.cam.2006.12.012
[4] Qin, X. and Yao, J.C. (2017) Projection Splitting Algorithms for Non-Self Operators. Journal of Nonlinear and Convex Analysis, 18, 925-935.
http://www.ybook.co.jp/online2/opjnca/vol18/p925.html
[5] Sahu, D.R. and Yao, J.C. (2017) A Generalized Hybrid Steepest Descent Method and Applications. Journal of Nonlinear and Variational Analysis, 1, 111-126.
http://jnva.biemdas.com/issues/JNVA2017-1-7.pdf
[6] Tang, J., Chang, S.S. and Dong, J. (2017) Split Equality Fixed Point Problem for Two Quasi-Asymptotically Pseudocontractive Mappings. Journal of Nonlinear Functional Analysis, 2017, Article ID: 26.
http://jnfa.mathres.org/archives/1322
https://doi.org/10.23952/jnfa.2017.26
[7] Wang, F. and Xu, H.K. (2011) Cyclic Algorithms for Split Feasibility Problems in Hilbert Spaces. Nonlinear Analysis, 74, 4105-4111.
https://doi.org/10.1016/j.na.2011.03.044
[8] Censor, Y. and Segal, A. (2009) The Split Common Fixed Point Problem for Directed Operators. Journal of Convex Analysis, 16, 587-600.
http://www.heldermann.de/JCA/JCA16/JCA162/jca16031.htm
[9] Boikanyo, O.A. (2015) A Strongly Convergent Algorithm for the Split Common Fixed Point Problem. Applied Mathematics and Computation, 265, 844-853.
https://doi.org/10.1016/j.amc.2015.05.130
[10] Cegielski, A. (2015) General Method for Solving the Split Common Fixed Point Problem. Journal of Optimization Theory and Applications, 165, 385-404.
https://doi.org/10.1007/s10957-014-0662-z
[11] He, Z. and Du, W.S. (2013) On Hybrid Split Problem and Its Nonlinear Algorithms. Fixed Point Theory and Applications, 2013, Article No. 47.
https://doi.org/10.1186/1687-1812-2013-47
[12] Kraikaew, P. and Saejung, S. (2014) On Split Common Fixed Point Problems. Journal of Mathematical Analysis and Applications, 415, 513-524.
https://doi.org/10.1016/j.jmaa.2014.01.068
[13] Moudafi, A. (2010) The Split Common Fixed-Point Problem for Demicontractive Mappings. Inverse Problems, 26, Article ID: 055007.
https://doi.org/10.1088/0266-5611/26/5/055007
[14] Yao, Y.H., Leng, L.M., Postolache, M. and Zheng, X.X. (2017) Mann-Type Iteration Method for Solving the Split Common Fixed Point Problem. Journal of Nonlinear and Convex Analysis, 18, 875-882.
https://www.researchgate.net/publication/319098874
[15] Yao, Y.H., Yao, J.C., Liou, Y.C. and Postolache, M. (2018) Iterative Algorithms for Split Common Fixed Points of Demicontractive Operators without Priori Knowledge of Operator Norms. Carpathian Journal of Mathematics, 34, 459-466.
https://www.researchgate.net/publication/328598852
[16] Wang, F. (2017) A New Iterative Method for the Split Common Fixed Point Problem in Hilbert Spaces. Optimization, 66, 407-415.
https://doi.org/10.1080/02331934.2016.1274991
[17] Yao, Y.H., Liou, Y.C. and Postolache, M. (2017) Self-Adaptive Algorithms for the Split Problem of the Demicontractive Operators. Optimization, 67, 1309-1319.
[18] Kocourek, P., Takahashi, W. and Yao, J.C. (2010) Fixed Point Theorems and Weak Convergence Theorems for Generalized Hybrid Mappings in Hilbert Spaces. Taiwanese Journal of Mathematics, 14, 2497-2511.
https://www.jstor.org/stable/43834926
https://doi.org/10.11650/twjm/1500406086
[19] Marino, G. and Xu, H.K. (2007) Weak and Strong Convergence Theorems for Strict Pseudo-Contractions in Hilbert Spaces. Journal of Mathematical Analysis and Applications, 329, 336-349.
https://doi.org/10.1016/j.jmaa.2006.06.055
[20] Qin, X.L., Dehaishb, B., Latif, A. and Cho, S.Y. (2016) Strong Convergence Analysis of a Monotone Projection Algorithm in a Banach Space. Journal of Nonlinear Sciences and Applications, 9, 2865-2874.
https://doi.org/10.22436/jnsa.009.05.81
[21] Takahashi, W. (2017) Strong Convergence Theorem for a Finite Family of Demimetric Mappings with Variational Inequality Problems in a Hilbert Space. Japan Journal of Industrial and Applied Mathematics, 34, 41-57.
https://doi.org/10.1007/s13160-017-0237-0
[22] Yao, Y.H., Postolache, M., Liou, Y.C. and Yao, Z.S. (2016) Construction Algorithms for a Class of Monotone Variational Inequalities. Optimization Letters, 10, 1519-1528.
https://doi.org/10.1007/s11590-015-0954-8
[23] Hojo, M. and Takahashi, W. (2016) Strong Convergence Theorems by Hybrid Methods for Demimetric Mappings in Banach Spaces. Journal of Nonlinear and Convex Analysis, 17, 1333-1344.
http://www.ybook.co.jp/online2/opjnca/vol17/p1333.html
[24] Takahashi, W. (2016) The Split Common Null Point Problem and the Shrinking Projection Method in Banach Spaces. Optimization, 65, 281-287.
https://doi.org/10.1080/02331934.2015.1020943
[25] Aoyama, K. and Kohsaka, F. (2014) Viscosity Approximation Process for a Sequence of Quasi-Nonexpansive Mappings. Fixed Point Theory and Applications, 2014, Article No. 17.
https://doi.org/10.1186/1687-1812-2014-17
[26] Bauschke, H.H. and Borwein, J.M. (1996) On Projection Algorithms for Solving Convex Feasibility Problems. SIAM Review, 38, 367-426.
https://doi.org/10.1137/S0036144593251710
[27] Xu, H.K. (2002) Iterative Algorithms for Nonlinear Operators. Journal of the London Mathematical Society, 66, 240-256.
https://doi.org/10.1112/S0024610702003332


Copyright © 2019 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.