Generalized Shift-Splitting Preconditioner for Saddle Point Problems with Block Three-by-Three Structure

Abstract

We propose a generalized shift-splitting iteration method for saddle point problems with block three-by-three structure. The new iteration method converges unconditionally to the unique solution of the saddle point problem. When the splitting is exploited as a preconditioner, the spectral distribution of the preconditioned matrix is investigated. Numerical experiments show that the new variant is efficient in speeding up GMRES for solving the block three-by-three saddle point problem.


1. Introduction

We consider saddle point problems with the following block three-by-three structure:

$$\begin{bmatrix} A & B^{T} & 0 \\ -B & 0 & -C^{T} \\ 0 & C & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} f \\ g \\ h \end{bmatrix} \quad \text{or} \quad \mathcal{A} u = b, \tag{1}$$

where $A \in \mathbb{R}^{n \times n}$ is a symmetric positive definite matrix, $B \in \mathbb{R}^{m \times n}$ and $C \in \mathbb{R}^{s \times m}$ are matrices of full row rank, and $f \in \mathbb{R}^{n}$, $g \in \mathbb{R}^{m}$ and $h \in \mathbb{R}^{s}$ are given vectors. The linear system (1) has many practical applications in scientific computing and engineering, including constrained quadratic optimization, constrained least squares problems and computational fluid dynamics [1].

In practice, the coefficient matrix $\mathcal{A}$ in (1) is generally sparse and of large size. Iterative methods are therefore usually preferred for their low memory requirement per iteration. Among the candidates, Uzawa-type methods [2], matrix splitting methods [3] and Krylov subspace methods [4] have received much attention during the past decades. For instance, the Uzawa method [2] is a classical approach known for its low computational overhead and easy implementation. To improve its numerical performance, several Uzawa-type variants have been proposed recently, including the inexact Uzawa [5] [6] [7], Uzawa-SOR [8] and Uzawa-PSS [9] methods. Another way to solve saddle point problems is to exploit a matrix splitting. Based on the Hermitian and skew-Hermitian splitting (HSS), Bai et al. [3] proposed the HSS iteration method. Following this reasoning, efficient variants such as the accelerated Hermitian and skew-Hermitian splitting (AHSS) [10], the modified Hermitian and skew-Hermitian splitting (MHSS) [11] and the relaxed Hermitian and skew-Hermitian splitting (RHSS) [12] have since been proposed. More recently, sophisticated Krylov subspace methods have been used to solve saddle point problems. Owing to the structure of saddle point matrices, however, slow convergence or even stagnation is often witnessed in actual computations. This drawback can be alleviated by employing an appropriate preconditioner. In the context of saddle point problems, many efficient preconditioners, to name just a few, block diagonal preconditioners, constrained preconditioners and HSS-based preconditioners, have been developed; see [13] [14] [15] [16] and the more recent monograph [17] for details.

In [18], Bai et al. proposed a class of shift-splitting iteration methods for solving the non-Hermitian positive definite linear system $Au = b$. The idea can be recast as follows. Suppose that $A \in \mathbb{R}^{n \times n}$ is a non-Hermitian positive definite matrix. Then the shift-splitting of the coefficient matrix $A$ is given by

$$A = M(\alpha) - N(\alpha) = \frac{1}{2}(\alpha I + A) - \frac{1}{2}(\alpha I - A),$$

which leads naturally to the shift-splitting iteration scheme

$$u^{k+1} = M^{-1}(\alpha) N(\alpha) u^{k} + M^{-1}(\alpha) b, \quad k = 0, 1, 2, \ldots$$
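To make the scheme concrete, here is a minimal Python sketch of the iteration (our own illustration; the test matrix, shift and tolerance are assumptions, not taken from [18]):

```python
import numpy as np

def shift_splitting_solve(A, b, alpha=1.0, tol=1e-10, maxit=500):
    """Stationary shift-splitting iteration with
    M(alpha) = (alpha*I + A)/2 and N(alpha) = (alpha*I - A)/2."""
    n = A.shape[0]
    M = 0.5 * (alpha * np.eye(n) + A)
    N = 0.5 * (alpha * np.eye(n) - A)
    u = np.zeros(n)
    for k in range(1, maxit + 1):
        u = np.linalg.solve(M, N @ u + b)      # u^{k+1} = M^{-1}(N u^k + b)
        if np.linalg.norm(b - A @ u) <= tol * np.linalg.norm(b):
            return u, k
    return u, maxit

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
A = 4.0 * np.eye(4) + (S - S.T)    # non-Hermitian, positive definite real part
b = rng.standard_normal(4)
u, steps = shift_splitting_solve(A, b)
print(steps, np.linalg.norm(b - A @ u))
```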

Theoretical analysis shows that the shift-splitting iteration converges unconditionally. Exploiting this sound convergence property, Cao et al. applied the shift-splitting to saddle point problems and proposed a shift-splitting preconditioner as well as a local shift-splitting variant [19]. Numerical experiments show that the two proposed shift-splitting-type preconditioners are superior to the HSS preconditioner in terms of CPU time and iteration steps. The shift-splitting technique has also been generalized by incorporating two relaxation parameters [20]. Furthermore, Salkuyeh et al. extended it to generalized saddle point problems and proposed a modified generalized shift-splitting preconditioner [21].

Taking into account the special structure of (1), Cao [22] proposed shift-splitting preconditioners for solving (1). Motivated by [20] and [22], we devise a new generalized shift-splitting (GSS) iteration scheme for the structured linear system (1). A corresponding GSS preconditioner is induced to accelerate the convergence of GMRES. Theoretical results on the convergence of the GSS iteration method and on the eigenvalue distribution of the preconditioned matrix are also given.

The organization of this paper is as follows. In Section 2, we introduce the generalized shift-splitting iteration method for solving the block three-by-three saddle point problem and look into its convergence property. In Section 3, we obtain a GSS preconditioner derived from the GSS iteration and analyze the spectrum of the preconditioned matrix. In Section 4, some numerical experiments are carried out to illustrate the effectiveness of GSS by comparing it with other shift-splitting preconditioners given in [22]. Finally, some conclusions are given in Section 5.

2. Generalized Shift-Splitting Iteration Method and Its Convergence

The proposed generalized shift-splitting (GSS) iteration is based on the work of [22]. It is therefore reasonable to briefly recap the shift-splitting used in [22] before introducing the new iteration method in this section.

In its simplest form, the shift-splitting (SS) of the coefficient matrix $\mathcal{A}$ in (1) reads

$$\mathcal{A} = \frac{1}{2}(\alpha I + \mathcal{A}) - \frac{1}{2}(\alpha I - \mathcal{A}) = \frac{1}{2}\begin{bmatrix} \alpha I + A & B^{T} & 0 \\ -B & \alpha I & -C^{T} \\ 0 & C & \alpha I \end{bmatrix} - \frac{1}{2}\begin{bmatrix} \alpha I - A & -B^{T} & 0 \\ B & \alpha I & C^{T} \\ 0 & -C & \alpha I \end{bmatrix}, \tag{2}$$

where $I$ is the identity matrix of conforming size. The parameter $\alpha$ in (2) monitors the convergence of the SS iteration method and hence the spectrum of the preconditioned matrix. In other words, the single parameter $\alpha$ adjusts both block diagonal matrices in (2), namely

$$\begin{bmatrix} A & B^{T} \\ -B & 0 \end{bmatrix} \in \mathbb{R}^{(m+n) \times (m+n)} \quad \text{and} \quad 0 \in \mathbb{R}^{s \times s}. \tag{3}$$

However, the rationale for choosing a single parameter to control these two block diagonal matrices simultaneously remains elusive. It seems more appealing to investigate the convergence when the two block diagonal matrices are governed by two different parameters. This is the main motivation of our work. To this end, we refine the splitting (2) by embedding an extra parameter $\beta$ in the $(3,3)$-block, which yields the following generalized shift-splitting:

$$\mathcal{A} = M(\alpha, \beta) - N(\alpha, \beta) = \frac{1}{2}(\Omega + \mathcal{A}) - \frac{1}{2}(\Omega - \mathcal{A}) = \frac{1}{2}\begin{bmatrix} \alpha I + A & B^{T} & 0 \\ -B & \alpha I & -C^{T} \\ 0 & C & \beta I \end{bmatrix} - \frac{1}{2}\begin{bmatrix} \alpha I - A & -B^{T} & 0 \\ B & \alpha I & C^{T} \\ 0 & -C & \beta I \end{bmatrix}. \tag{4}$$

Here $\alpha > 0$, $\beta > 0$ and $\Omega = \operatorname{diag}(\alpha I, \alpha I, \beta I)$. Clearly, the generalized shift-splitting (4) reduces to the shift-splitting (2) when $\alpha = \beta$.

Using the splitting (4), we obtain the following stationary iteration

$$u^{k+1} = \Gamma_{\mathrm{GSS}} u^{k} + c, \tag{5}$$

where $\Gamma_{\mathrm{GSS}} = M^{-1}(\alpha, \beta) N(\alpha, \beta)$, $c = M^{-1}(\alpha, \beta) b$ and $k = 0, 1, \ldots$. In other words, the generalized shift-splitting (GSS) method for the block three-by-three saddle point problem (1) can be stated as follows. Given an initial vector $u^{0}$, compute

$$\frac{1}{2}\begin{bmatrix} \alpha I + A & B^{T} & 0 \\ -B & \alpha I & -C^{T} \\ 0 & C & \beta I \end{bmatrix} u^{k+1} = \frac{1}{2}\begin{bmatrix} \alpha I - A & -B^{T} & 0 \\ B & \alpha I & C^{T} \\ 0 & -C & \beta I \end{bmatrix} u^{k} + \begin{bmatrix} f \\ g \\ h \end{bmatrix} \tag{6}$$

until $u^{k}$ converges, where $k = 0, 1, \ldots$.
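As an illustration, the following sketch (our own, with randomly generated blocks of illustrative sizes, assuming the nonsymmetric form of (1) stated above) runs the iteration (6) until the relative residual is small. The stationary scheme is convergent, as shown below (Theorem 1), but may be slow, which motivates the preconditioning viewpoint of Section 3.

```python
import numpy as np

def gss_iteration(A, B, C, f, g, h, alpha, beta, tol=1e-8, maxit=10000):
    """GSS stationary iteration (6): M u^{k+1} = N u^k + b, where
    M = (Omega + Acal)/2 and N = (Omega - Acal)/2 (dense sketch)."""
    n, m, s = A.shape[0], B.shape[0], C.shape[0]
    Acal = np.block([
        [A,                B.T,              np.zeros((n, s))],
        [-B,               np.zeros((m, m)), -C.T            ],
        [np.zeros((s, n)), C,                np.zeros((s, s))]])
    Omega = np.diag(np.concatenate([alpha * np.ones(n + m), beta * np.ones(s)]))
    M, N = 0.5 * (Omega + Acal), 0.5 * (Omega - Acal)
    b = np.concatenate([f, g, h])
    u = np.zeros(n + m + s)
    for k in range(1, maxit + 1):
        u = np.linalg.solve(M, N @ u + b)
        if np.linalg.norm(b - Acal @ u) <= tol * np.linalg.norm(b):
            return u, k
    return u, maxit  # tolerance not reached within maxit steps

rng = np.random.default_rng(1)
n, m, s = 6, 4, 2
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)         # symmetric positive definite
B = rng.standard_normal((m, n))     # full row rank (almost surely)
C = rng.standard_normal((s, m))
u, steps = gss_iteration(A, B, C, rng.standard_normal(n),
                         rng.standard_normal(m), rng.standard_normal(s),
                         alpha=0.1, beta=0.1)
print(steps)   # finite by Theorem 1 below, though possibly large
```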

It is known that the GSS iteration method (5) converges for any initial guess if and only if the spectral radius of the iteration matrix $\Gamma_{\mathrm{GSS}}$ is less than 1. The following results, adapted from [22] and [23], are instrumental in proving this unconditional convergence.

Lemma 1. Suppose that $A \in \mathbb{R}^{n \times n}$ is a symmetric positive definite matrix, and that $B \in \mathbb{R}^{m \times n}$ and $C \in \mathbb{R}^{s \times m}$ are of full row rank. Let $\lambda$ be an eigenvalue of the saddle point matrix $\mathcal{A}$. Then

$$\operatorname{Re}(\lambda) > 0.$$

Proof. The proof can be found in ([22], Lemma 2.2).

Lemma 2. Let $W \in \mathbb{R}^{n \times n}$ be a non-Hermitian positive semidefinite matrix and let $E(W, \delta) = (\delta I + W)^{-1}(\delta I - W)$ be the extrapolated Cayley transform with $\delta > 0$. Then

1) the spectral radius of $E(W, \delta)$ is bounded by 1, i.e., $\rho(E(W, \delta)) \le 1$;

2) $\rho(E(W, \delta)) = 1$ if and only if the matrix $W$ has a (reducing) eigenvalue of the form $\iota \xi$, with $\xi \in \mathbb{R}$ and $\iota$ the imaginary unit.

Proof. See ([23], Theorem 2.1) for details.
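A quick numerical illustration of Lemma 2 (our own check, not from [23]): for a randomly generated non-Hermitian positive semidefinite $W$, the spectral radius of the extrapolated Cayley transform stays bounded by 1 for every $\delta > 0$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
G = rng.standard_normal((n, n - 2))
H = G @ G.T                          # singular PSD symmetric part
K = rng.standard_normal((n, n))
W = H + (K - K.T)                    # non-Hermitian positive semidefinite
for delta in (0.1, 1.0, 10.0):
    E = np.linalg.solve(delta * np.eye(n) + W, delta * np.eye(n) - W)
    print(delta, max(abs(np.linalg.eigvals(E))))   # bounded by 1 (Lemma 2)
```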

The following theorem on the convergence of the GSS iteration method follows from Lemma 1 and Lemma 2.

Theorem 1. Suppose that $A \in \mathbb{R}^{n \times n}$ is a symmetric positive definite matrix, and that $B \in \mathbb{R}^{m \times n}$ and $C \in \mathbb{R}^{s \times m}$ are matrices of full row rank. Let $\alpha > 0$ and $\beta > 0$. Then the spectral radius of $\Gamma_{\mathrm{GSS}}$ defined in (5) satisfies

$$\rho(\Gamma_{\mathrm{GSS}}) < 1,$$

i.e., the GSS iteration method converges unconditionally to the unique solution of (1).

Proof. By (4) and (5), we rewrite the iteration matrix $\Gamma_{\mathrm{GSS}}$ as

$$\Gamma_{\mathrm{GSS}} = (\Omega + \mathcal{A})^{-1}(\Omega - \mathcal{A}) = \Omega^{-\frac{1}{2}} \left( I + \Omega^{-\frac{1}{2}} \mathcal{A} \Omega^{-\frac{1}{2}} \right)^{-1} \left( I - \Omega^{-\frac{1}{2}} \mathcal{A} \Omega^{-\frac{1}{2}} \right) \Omega^{\frac{1}{2}}.$$

Let $\hat{\mathcal{A}} = \Omega^{-\frac{1}{2}} \mathcal{A} \Omega^{-\frac{1}{2}}$. It is easy to see that $\Gamma_{\mathrm{GSS}}$ is similar to the matrix $\hat{\Gamma}_{\mathrm{GSS}} = (I + \hat{\mathcal{A}})^{-1}(I - \hat{\mathcal{A}})$. Therefore, it suffices to prove $\rho(\hat{\Gamma}_{\mathrm{GSS}}) < 1$. By the construction of $\hat{\mathcal{A}}$, we have

$$\hat{\mathcal{A}} = \begin{bmatrix} \alpha I & & \\ & \alpha I & \\ & & \beta I \end{bmatrix}^{-\frac{1}{2}} \begin{bmatrix} A & B^{T} & 0 \\ -B & 0 & -C^{T} \\ 0 & C & 0 \end{bmatrix} \begin{bmatrix} \alpha I & & \\ & \alpha I & \\ & & \beta I \end{bmatrix}^{-\frac{1}{2}} = \begin{bmatrix} \alpha^{-1} A & \alpha^{-1} B^{T} & 0 \\ -\alpha^{-1} B & 0 & -(\alpha \beta)^{-\frac{1}{2}} C^{T} \\ 0 & (\alpha \beta)^{-\frac{1}{2}} C & 0 \end{bmatrix}.$$

Since $\alpha$ and $\beta$ are positive, $\alpha^{-1} A$ is symmetric positive definite, and $\alpha^{-1} B$ and $(\alpha \beta)^{-\frac{1}{2}} C$ are of full row rank. Thus $\hat{\mathcal{A}}$ is a nonsymmetric positive semidefinite matrix. Moreover, it follows from Lemma 1 that $\operatorname{Re}(\lambda(\hat{\mathcal{A}})) > 0$. Therefore, Lemma 2 gives $\rho(\hat{\Gamma}_{\mathrm{GSS}}) \le 1$. The equality $\rho(\hat{\Gamma}_{\mathrm{GSS}}) = 1$ would hold if and only if $\hat{\mathcal{A}}$ had a (reducing) eigenvalue of the form $\iota \xi$, which is impossible because $\operatorname{Re}(\lambda(\hat{\mathcal{A}})) > 0$. It follows that $\rho(\hat{\Gamma}_{\mathrm{GSS}}) < 1$, and hence $\rho(\Gamma_{\mathrm{GSS}}) < 1$, which completes the proof.
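Theorem 1 can also be checked numerically on small random instances (our own sanity check; the sizes and parameter pairs below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 8, 5, 3
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)                  # symmetric positive definite
B = rng.standard_normal((m, n))              # full row rank (almost surely)
C = rng.standard_normal((s, m))
Acal = np.block([
    [A,                B.T,              np.zeros((n, s))],
    [-B,               np.zeros((m, m)), -C.T            ],
    [np.zeros((s, n)), C,                np.zeros((s, s))]])
for alpha, beta in [(0.01, 0.001), (0.1, 1.0), (1.0, 0.1)]:
    Omega = np.diag(np.concatenate([alpha * np.ones(n + m), beta * np.ones(s)]))
    Gamma = np.linalg.solve(Omega + Acal, Omega - Acal)  # (Omega+A)^{-1}(Omega-A)
    print(alpha, beta, max(abs(np.linalg.eigvals(Gamma))))  # < 1 in every case
```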

3. GSS Preconditioner and Its Properties

Although the GSS iteration method is proved to converge unconditionally, its convergence can be rather slow compared with Krylov subspace methods [24]. In the context of linear system solvers, it is therefore more interesting to exploit the generalized shift-splitting as a preconditioner to accelerate Krylov subspace methods such as GMRES.

In this section, we present the following generalized shift-splitting (GSS) preconditioner:

$$P_{\mathrm{GSS}} = \frac{1}{2}\begin{bmatrix} \alpha I + A & B^{T} & 0 \\ -B & \alpha I & -C^{T} \\ 0 & C & \beta I \end{bmatrix}. \tag{7}$$

When a preconditioner is applied, the convergence of the preconditioned linear system is closely related to the eigenvalue distribution. The following result shows that the eigenvalues of the GSS-preconditioned matrix $\mathcal{A} P_{\mathrm{GSS}}^{-1}$ lie strictly inside the circle centered at $(1, 0)$ with radius 1. We adopt right preconditioning here so that the residual of the original saddle point problem coincides with that of its right-preconditioned counterpart.

Theorem 2. Suppose that $A \in \mathbb{R}^{n \times n}$ is a symmetric positive definite matrix, and that $B \in \mathbb{R}^{m \times n}$ and $C \in \mathbb{R}^{s \times m}$ are matrices of full row rank. Let $\theta$ be an eigenvalue of $\mathcal{A} P_{\mathrm{GSS}}^{-1}$. Then

$$|1 - \theta| < 1.$$

Proof. Since the preconditioned matrix $\mathcal{A} P_{\mathrm{GSS}}^{-1}$ is similar to $P_{\mathrm{GSS}}^{-1} \mathcal{A}$, it suffices to investigate the spectrum of $P_{\mathrm{GSS}}^{-1} \mathcal{A}$. Since $P_{\mathrm{GSS}} = M(\alpha, \beta)$, it follows from (4) that

$$P_{\mathrm{GSS}}^{-1} \mathcal{A} = P_{\mathrm{GSS}}^{-1} \left( M(\alpha, \beta) - N(\alpha, \beta) \right) = M(\alpha, \beta)^{-1} \left( M(\alpha, \beta) - N(\alpha, \beta) \right) = I - \Gamma_{\mathrm{GSS}}.$$

Hence every eigenvalue $\theta$ of $P_{\mathrm{GSS}}^{-1} \mathcal{A}$ equals $1 - \lambda$ for some eigenvalue $\lambda$ of $\Gamma_{\mathrm{GSS}}$, and Theorem 1 gives $|1 - \theta| = |\lambda| < 1$.

Before ending this section, we give some implementation details for applying the GSS preconditioner. In actual computations, a matrix-vector product $z = P_{\mathrm{GSS}}^{-1} r$ is often needed, which can be obtained by solving the linear system $P_{\mathrm{GSS}} z = r$. Note that the preconditioner $P_{\mathrm{GSS}}$ admits the block decomposition

$$P_{\mathrm{GSS}} = \frac{1}{2}\begin{bmatrix} I & B^{T} \left( \alpha I + \tfrac{1}{\beta} C^{T} C \right)^{-1} & 0 \\ 0 & I & -\tfrac{1}{\beta} C^{T} \\ 0 & 0 & I \end{bmatrix} \begin{bmatrix} \alpha I + A + B^{T} \left( \alpha I + \tfrac{1}{\beta} C^{T} C \right)^{-1} B & 0 & 0 \\ 0 & \alpha I + \tfrac{1}{\beta} C^{T} C & 0 \\ 0 & 0 & \beta I \end{bmatrix} \begin{bmatrix} I & 0 & 0 \\ -\left( \alpha I + \tfrac{1}{\beta} C^{T} C \right)^{-1} B & I & 0 \\ 0 & \tfrac{1}{\beta} C & I \end{bmatrix}. \tag{8}$$

Let $r = [r_1^{T}, r_2^{T}, r_3^{T}]^{T}$ and $z = [z_1^{T}, z_2^{T}, z_3^{T}]^{T}$, where $r_1, z_1 \in \mathbb{R}^{n}$, $r_2, z_2 \in \mathbb{R}^{m}$ and $r_3, z_3 \in \mathbb{R}^{s}$. Using the decomposition (8), we obtain the following subroutine for computing $z = P_{\mathrm{GSS}}^{-1} r$ (a code sketch follows the list).

1) Solve $w_1$ from $\left( \alpha I + \frac{1}{\beta} C^{T} C \right) w_1 = 2 r_2 + \frac{2}{\beta} C^{T} r_3$;

2) Solve $z_1$ from $\left( \alpha I + A + B^{T} \left( \alpha I + \frac{1}{\beta} C^{T} C \right)^{-1} B \right) z_1 = 2 r_1 - B^{T} w_1$;

3) Solve $w_2$ from $\left( \alpha I + \frac{1}{\beta} C^{T} C \right) w_2 = B z_1$;

4) Update $z_2 = w_1 + w_2$;

5) Update $z_3 = \frac{1}{\beta} \left( 2 r_3 - C z_2 \right)$.
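Below is a minimal SciPy sketch of this subroutine (our own implementation, not the authors' MATLAB code). For simplicity the Schur complement in step 2) is formed as a dense matrix, which is only sensible for moderate $n$; the closing check verifies the result against a direct solve with $P_{\mathrm{GSS}}$ from (7).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_gss_apply(A, B, C, alpha, beta):
    """Return a function computing z = P_GSS^{-1} r via steps 1)-5).
    A (n x n), B (m x n), C (s x m) are scipy.sparse matrices; the
    shifted system alpha*I + C^T C / beta is factorized once and reused."""
    n, m, s = A.shape[0], B.shape[0], C.shape[0]
    S = (alpha * sp.eye(m) + (C.T @ C) / beta).tocsc()
    S_lu = spla.splu(S)
    # Schur complement alpha*I + A + B^T S^{-1} B (dense for simplicity):
    Schur = alpha * np.eye(n) + A.toarray() + B.T @ S_lu.solve(B.toarray())
    def apply(r):
        r1, r2, r3 = r[:n], r[n:n + m], r[n + m:]
        w1 = S_lu.solve(2.0 * r2 + (2.0 / beta) * (C.T @ r3))   # step 1
        z1 = np.linalg.solve(Schur, 2.0 * r1 - B.T @ w1)        # step 2
        w2 = S_lu.solve(B @ z1)                                 # step 3
        z2 = w1 + w2                                            # step 4
        z3 = (2.0 * r3 - C @ z2) / beta                         # step 5
        return np.concatenate([z1, z2, z3])
    return apply

# Consistency check against a direct solve with P_GSS from (7):
rng = np.random.default_rng(4)
n, m, s, alpha, beta = 8, 5, 3, 0.01, 0.001
Q = rng.standard_normal((n, n))
A = sp.csr_matrix(Q @ Q.T + n * np.eye(n))
B = sp.csr_matrix(rng.standard_normal((m, n)))
C = sp.csr_matrix(rng.standard_normal((s, m)))
P = 0.5 * sp.bmat([[alpha * sp.eye(n) + A, B.T, None],
                   [-B, alpha * sp.eye(m), -C.T],
                   [None, C, beta * sp.eye(s)]], format="csc")
r = rng.standard_normal(n + m + s)
z = make_gss_apply(A, B, C, alpha, beta)(r)
print(np.linalg.norm(P @ z - r))   # should be at rounding-error level
```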

In computing $P_{\mathrm{GSS}}^{-1} r$, the computational overhead of GSS is the same as that of SS [22]. As indicated by the numerical examples, however, an appropriate choice of $(\alpha, \beta)$ generally yields better numerical performance (in terms of CPU time and iteration steps) than a single parameter $\alpha$ (as in SS). In this sense, GSS can be regarded as a more promising variant than its predecessor SS.

4. Numerical Experiments

In this section, we give some numerical experiments to illustrate the effectiveness of the new preconditioner GSS in terms of the number of iteration steps (iter), the CPU time in seconds (time) and the relative residual norm (res). For completeness, we display both the number of restarts and the number of inner steps in the last restart; see, for instance, Table 3. The unpreconditioned GMRES(k), the shift-splitting-preconditioned GMRES(k) (SS) [22] and the relaxed shift-splitting-preconditioned GMRES(k) (RSS) [22] are employed for comparison, where $k$ is the restarting frequency. In what follows, right preconditioning is used for SS, RSS and GSS, for the reason given in Section 3. We set the restarting frequency $k$ to 5 and the initial guess $u^{0}$ to the zero vector. The right-hand side vector $b$ in (1) is given by $10\,\mathrm{rand}(m+n+s, 1)$. All algorithms terminate once $\|b - \mathcal{A} u_i\|_2 / \|b\|_2 < 10^{-6}$, with at most 1500 restarts, where $u_i$ is the $i$th iterate. The numerical experiments are carried out on a laptop with an Intel Core CPU and 8 GB RAM, running MATLAB 2014b under Windows 10.

The test problem is adapted from [25]. It is a block three-by-three saddle point problem with $A$, $B$ and $C$ of the following structure:

$$A = \begin{bmatrix} I \otimes T + T \otimes I & 0 \\ 0 & I \otimes T + T \otimes I \end{bmatrix} \in \mathbb{R}^{2p^2 \times 2p^2}, \quad B = \begin{bmatrix} I \otimes F & F \otimes I \end{bmatrix} \in \mathbb{R}^{p^2 \times 2p^2}, \quad C = E \otimes F \in \mathbb{R}^{p^2 \times p^2},$$

where $T = \frac{1}{h^2} \operatorname{tridiag}(-1, 2, -1) \in \mathbb{R}^{p \times p}$, $F = \frac{1}{h} \operatorname{tridiag}(0, 1, -1) \in \mathbb{R}^{p \times p}$ and $E = \operatorname{diag}(1, p+1, \ldots, p^2 - p + 1)$, with $\otimes$ the Kronecker product and $h = \frac{1}{p+1}$ the grid size.
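For reproducibility, the following sketch (our own SciPy translation of the setup; the authors worked in MATLAB) assembles the test matrices via Kronecker products and runs GMRES(5) with the GSS preconditioner. Note that SciPy's gmres handles the preconditioner internally (historically on the left), which differs from the right preconditioning used in this paper, so iteration counts need not match the tables.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

p = 16
h = 1.0 / (p + 1)
Ip = sp.eye(p)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(p, p)) / h**2
F = sp.diags([1.0, -1.0], [0, 1], shape=(p, p)) / h
E = sp.diags(np.arange(1.0, p**2 - p + 2.0, p))   # diag(1, p+1, ..., p^2-p+1)

Lap = sp.kron(Ip, T) + sp.kron(T, Ip)             # I (x) T + T (x) I
A = sp.block_diag([Lap, Lap]).tocsr()             # 2p^2 x 2p^2
B = sp.hstack([sp.kron(Ip, F), sp.kron(F, Ip)]).tocsr()   # p^2 x 2p^2
C = sp.kron(E, F).tocsr()                         # p^2 x p^2
n, m, s = 2 * p**2, p**2, p**2

Acal = sp.bmat([[A, B.T, None],
                [-B, None, -C.T],
                [None, C, None]], format="csr")
rng = np.random.default_rng(0)
b = 10.0 * rng.random(n + m + s)                  # mimics 10*rand(m+n+s,1)

alpha, beta = 0.01, 0.001
P = 0.5 * sp.bmat([[alpha * sp.eye(n) + A, B.T, None],
                   [-B, alpha * sp.eye(m), -C.T],
                   [None, C, beta * sp.eye(s)]], format="csc")
P_lu = spla.splu(P)                               # direct sub-solves, as in Section 4.2
M_op = spla.LinearOperator(Acal.shape, matvec=P_lu.solve)

# The tolerance keyword is `rtol` in recent SciPy (`tol` in older versions).
u, info = spla.gmres(Acal, b, M=M_op, restart=5, rtol=1e-6, maxiter=1500)
print(info, np.linalg.norm(b - Acal @ u) / np.linalg.norm(b))
```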

4.1. Choice of the Parameter Pair $(\alpha, \beta)$

As mentioned in Section 2, the main motivation for introducing the extra parameter $\beta$ in GSS is to strike a balance in controlling the two sub-matrices in (3). A proper choice of $\alpha$ and $\beta$ makes the GSS preconditioner more appealing. As shown in (7), the GSS preconditioner $P_{\mathrm{GSS}}$ approaches (half of) the saddle point matrix $\mathcal{A}$ as $\alpha$ and $\beta$ decrease, so the choice $\alpha = \beta = 0$ might seem optimal. However, we do not suggest using extremely small values of $\alpha$ and $\beta$, for reasons of numerical stability. Instead, we use experimentally optimal values of $\alpha$ and $\beta$ in the following experiments.

To this end, we first tabulate the CPU time of the GSS preconditioner with varying $\alpha$ and $\beta$ for $p = 16$ and $p = 32$ in Table 1 and Table 2, respectively. The parameter pair $(\alpha, \beta) = (0.01, 0.001)$ yields the shortest CPU time in both tables. To further verify this finding, we carry out more experiments in Figure 1 and Figure 2 by depicting the number of total iteration steps¹ and the CPU time of GSS with varying $\beta$ when $\alpha$ is chosen to be 0.01, 0.1 and 1, respectively. Some remarks are in order. Loosely speaking, for a fixed value of $\alpha$, the number of total iteration steps and the CPU time increase as $\beta$ grows, although there are some zigzags; see Figure 1 and Figure 2. Moreover, a similar pattern is observed for $\alpha$ when $\beta$ is fixed: for instance, when $\beta = 50$, the total iteration steps increase as $\alpha$ changes from 0.01 to 1, as can be seen by comparing subplots (a), (c) and (e) in Figure 1. This justifies our choice of $(\alpha, \beta) = (0.01, 0.001)$ as the experimentally optimal value.

Based on the above experiments, we use $(\alpha, \beta) = (0.01, 0.001)$ as the experimental optimal choice for $\alpha$ and $\beta$ in GSS. As for the optimal value of $\alpha$ in SS and RSS, we follow the choice $\alpha = 0.01$ used in [22] and regard it as the empirical optimal value.

Table 1. CPU time of the GSS preconditioner with different values of $\alpha$ and $\beta$ ($p = 16$).

4.2. Comparison with Other SS-Type Methods

In this subsection, we compare the numerical performance of GSS with that of SS and RSS in terms of CPU time and iteration steps. It should be noted that all sub-systems involved in computing the matrix-vector product $P^{-1} r$ are solved by direct methods.

Table 2. CPU time of the GSS preconditioner with different values of $\alpha$ and $\beta$ ($p = 32$).

Figure 1. Total iteration steps (left) and CPU time (right) of the GSS preconditioner for different values of $\alpha$ and $\beta$ ($p = 16$).

Figure 2. Total iteration steps (left) and CPU time (right) of the GSS preconditioner for different values of $\alpha$ and $\beta$ ($p = 32$).

It is known that the spectrum has a great effect on the convergence of the preconditioned linear system. To begin the discussion, we plot the spectral distributions of the original saddle point matrix $\mathcal{A}$, the GSS-preconditioned matrix $\mathcal{A} P_{\mathrm{GSS}}^{-1}$, the SS-preconditioned matrix $\mathcal{A} P_{\mathrm{SS}}^{-1}$ and the RSS-preconditioned matrix $\mathcal{A} P_{\mathrm{RSS}}^{-1}$ in Figure 3. As illustrated, the eigenvalues of the original saddle point matrix $\mathcal{A}$ are widely scattered, which hampers the unpreconditioned GMRES method. After employing the SS-type preconditioners, however, the eigenvalues of the preconditioned matrices become much more clustered. In particular, the eigenvalues of the GSS-preconditioned matrix are more clustered than those of SS and RSS; see Figure 3. It can thus be expected that GSS brings an improvement over SS and RSS in solving (1). This expectation is confirmed by Table 3, where the saddle point problem (1) is considered for four different sizes. In [22], it is stated that the iteration steps of both the SS- and RSS-preconditioned iteration methods remain constant even as the problem size $4p^2$ grows. Fortunately, this desirable property carries over to GSS. Furthermore, GSS outperforms SS and RSS in that it requires the shortest CPU time and the fewest iteration steps in all test problems, as shown in Table 3. In light of this, we conclude that GSS is a competitive shift-splitting variant for solving the structured saddle point problem (1).

Figure 3. Eigenvalue distributions of the unpreconditioned saddle point matrix and of three SS-type preconditioned matrices ($p = 32$).

Table 3. Numerical results for three preconditioned GMRES methods with different grid sizes.

5. Conclusion

The shift-splitting iteration method/preconditioner for solving the block three-by-three saddle point problem (1) uses a single parameter to control the convergence. Being aware of the different roles of the block diagonal matrices in the saddle point matrix, we introduce an additional parameter to monitor the $(3,3)$-block, which yields a generalized shift-splitting iteration method and a corresponding preconditioner. By examining the spectrum of the iteration matrix, we prove the unconditional convergence of the proposed method. We also apply the induced generalized shift-splitting preconditioner to speed up the convergence of GMRES. It is proved that the preconditioned matrix has a well-clustered spectrum, which is verified by the numerical examples. Future work includes finding the theoretically optimal choice of the parameters in the new scheme and developing other efficient shift-splitting preconditioners.

Acknowledgements

This work is supported by the National Natural Science Foundation (grant number: 11601323) and the Key Discipline Fund (2018) of College of Arts and Sciences in Shanghai Maritime University. The authors would like to thank the referees for their constructive suggestions.

NOTES

¹By “total iteration steps”, we mean the total number of matrix-vector products associated with $\mathcal{A} P_{\mathrm{GSS}}^{-1}$; for instance, if “iter”, as in Table 3, reads 10(3), then the number of total iteration steps is $(10 - 1) \times 5 + 3 = 48$. In general, if “iter” reads $i(j)$, the number of total iteration steps is $(i - 1) \times 5 + j$.
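In code, this counting rule reads:

```python
def total_iteration_steps(i, j, restart=5):
    # "iter" reported as i(j): i - 1 completed restart cycles of length
    # `restart`, plus j inner steps in the final cycle.
    return (i - 1) * restart + j

assert total_iteration_steps(10, 3) == 48
```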

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Benzi, M., Golub, G.H. and Liesen, J. (2005) Numerical Solution of Saddle Point Problems. Acta Numerica, 14, 1-137. https://doi.org/10.1017/S0962492904000212
[2] Arrow, K.J., Hurwicz, L. and Uzawa, H. (1958) Studies in Nonlinear Programming. Stanford University Press, Redwood City, CA.
[3] Bai, Z.-Z., Golub, G.H. and Ng, M.K. (2003) Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Definite Linear Systems. SIAM Journal on Matrix Analysis and Applications, 24, 603-626.
https://doi.org/10.1137/S0895479801395458
[4] Bai, Z.-Z. (2015) Motivations and Realizations of Krylov Subspace Methods for Large Sparse Linear Systems. Journal of Computational and Applied Mathematics, 283, 71-78. https://doi.org/10.1016/j.cam.2015.01.025
[5] Bramble, J.H., Pasciak, J.E. and Vassilev, A.T. (1997) Analysis of the Inexact Uzawa Algorithm for Saddle Point Problems. SIAM Journal on Numerical Analysis, 34, 1072-1092. https://doi.org/10.1137/S0036142994273343
[6] Cao, Z.-H. (2003) Fast Uzawa Algorithm for Generalized Saddle Point Problems. Applied Numerical Mathematics, 46, 157-171.
https://doi.org/10.1016/S0168-9274(03)00023-0
[7] Bai, Z.-Z. and Wang, Z.-Q. (2008) On Parameterized Inexact Uzawa Methods for Generalized Saddle Point Problems. Linear Algebra and Its Applications, 428, 2900- 2932. https://doi.org/10.1016/j.laa.2008.01.018
[8] Zhang, J. and Shang, J. (2010) A Class of Uzawa-SOR Methods for Saddle Point Problems. Applied Mathematics and Computation, 216, 2163-2168.
https://doi.org/10.1016/j.amc.2010.03.051
[9] Cao, Y. and Yi, S.-C. (2016) A Class of Uzawa-PSS Iteration Methods for Nonsingular and Singular Non-Hermitian Saddle Point Problems. Applied Mathematics and Computation, 275, 41-49. https://doi.org/10.1016/j.amc.2015.11.049
[10] Bai, Z.-Z. and Golub, G.H. (2007) Accelerated Hermitian and Skew-Hermitian Splitting Iteration Methods for Saddle-Point Problems. IMA Journal of Numerical Analysis, 27, 1-23. https://doi.org/10.1093/imanum/drl017
[11] Bai, Z.-Z., Benzi, M. and Chen, F. (2010) Modified HSS Iteration Methods for a Class of Complex Symmetric Linear Systems. Computing, 87, 93-111.
https://doi.org/10.1007/s00607-010-0077-0
[12] Cao, Y., Yao, L., Jiang, M. and Niu, Q. (2013) A Relaxed HSS Preconditioner for Saddle Point Problems from Meshfree Discretization. Journal of Computational Mathematics, 31, 398-421. https://doi.org/10.4208/jcm.1304-m4209
[13] Pan, J.-Y., Ng, M.K. and Bai, Z.-Z. (2006) New Preconditioners for Saddle Point Problems. Applied Mathematics and Computation, 172, 762-771.
https://doi.org/10.1016/j.amc.2004.11.016
[14] Benzi, M. and Guo, X.-P. (2011) A Dimensional Split Preconditioner for Stokes and Linearized Navier-Stokes Equations. Applied Numerical Mathematics, 61, 66-76.
https://doi.org/10.1016/j.apnum.2010.08.005
[15] Zhang, G.-F., Ren, Z.-R. and Zhou, Y.-Y. (2011) On HSS-Based Constraint Preconditioners for Generalized Saddle-Point Problems. Numerical Algorithms, 57, 273-287.
https://doi.org/10.1007/s11075-010-9428-3
[16] Wang, N.-N. and Li, J.-C. (2019) A Class of New Extended Shift-Splitting Preconditioners for Saddle Point Problems. Journal of Computational and Applied Mathematics, 357, 123-145. https://doi.org/10.1016/j.cam.2019.02.015
[17] Rozložník, M. (2018) Saddle-Point Problems and Their Iterative Solution. Birkhäuser. https://doi.org/10.1007/978-3-030-01431-5
[18] Bai, Z.-Z., Yin, J.-F. and Su, Y.-F. (2006) A Shift-Splitting Preconditioner for Non-Hermitian Positive Definite Matrices. Journal of Computational Mathematics, 24, 539-552.
[19] Cao, Y., Du, J. and Niu, Q. (2014) Shift-Splitting Preconditioners for Saddle Point Problems. Journal of Computational and Applied Mathematics, 272, 239-250.
https://doi.org/10.1016/j.cam.2014.05.017
[20] Cao, Y., Tao, H. and Jiang, M. (2014) Generalized Shift Splitting Preconditioners for Saddle Point Problems, in Chinese. Mathematica Numerica Sinica, 36, 16-26.
[21] Salkuyeh, D.K., Masoudi, M. and Hezari, D. (2015) On the Generalized Shift-Splitting Preconditioner for Saddle Point Problems. Applied Mathematics Letters, 48, 55-61.
https://doi.org/10.1016/j.aml.2015.02.026
[22] Cao, Y. (2019) Shift-Splitting Preconditioners for a Class of Block Three-by-Three Saddle Point Problems. Applied Mathematics Letters, 96, 40-46.
https://doi.org/10.1016/j.aml.2019.04.006
[23] Bai, Z.-Z. and Hadjidimos, A. (2014) Optimization of Extrapolated Cayley Transform with Non-Hermitian Positive Definite Matrix. Linear Algebra and Its Applications, 463, 322-339. https://doi.org/10.1016/j.laa.2014.08.021
[24] Zhang, K., Zhang, J.-L. and Gu, C.-Q. (2017) A New Relaxed PSS Preconditioner for Nonsymmetric Saddle Point Problems. Applied Mathematics and Computation, 308, 115-129. https://doi.org/10.1016/j.amc.2017.03.022
[25] Huang, N. and Ma, C.-F. (2019) Spectral Analysis of the Preconditioned System for the 3 × 3 Block Saddle Point Problem. Numerical Algorithms, 81, 421-444.
https://doi.org/10.1007/s11075-018-0555-6
