Accelerated RHSS Iteration Method for Stabilized Saddle-Point Problems

Abstract

For stabilized saddle-point problems, we apply the two-iteration-parameter idea to the regularized Hermitian and skew-Hermitian splitting (RHSS) method and establish the accelerated RHSS (ARHSS) iteration method. Theoretical analysis shows that the ARHSS method converges unconditionally to the unique solution of the stabilized saddle-point problem. Finally, we use a numerical example to confirm the effectiveness of the method.

Share and Cite:

Song, Z. and Zhang, P. (2022) Accelerated RHSS Iteration Method for Stabilized Saddle-Point Problems. Journal of Applied Mathematics and Physics, 10, 1019-1027. doi: 10.4236/jamp.2022.104069.

1. Introduction

Consider the stabilized saddle-point problem

$$A x \equiv \begin{pmatrix} B & E \\ -E^{*} & C \end{pmatrix} \begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} f \\ -g \end{pmatrix} \equiv b, \qquad (1)$$

where $B \in \mathbb{C}^{p \times p}$ is a Hermitian positive definite matrix, $C \in \mathbb{C}^{q \times q}$ is a Hermitian positive semidefinite matrix, $E \in \mathbb{C}^{p \times q}$ is a rectangular matrix, $E^{*} \in \mathbb{C}^{q \times p}$ is the complex conjugate transpose of $E$, and $f \in \mathbb{C}^{p}$ and $g \in \mathbb{C}^{q}$. In addition, the null spaces of the matrices $C$ and $E$ do not overlap, i.e., $\mathrm{null}(C) \cap \mathrm{null}(E) = \{0\}$; then, in accordance with [1], the stabilized saddle-point problem (1) admits a unique solution; see also [2] and the references therein. The stabilized saddle-point problem has a broad background in scientific computing and engineering applications: for example, it frequently appears in Navier-Stokes equations, finite element methods, and regularized and weighted least-squares problems. For more details, see [3] [4] [5] [6] [7].
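The solvability condition $\mathrm{null}(C) \cap \mathrm{null}(E) = \{0\}$ is easy to check numerically: a vector lies in both null spaces exactly when it lies in the null space of the stacked matrix $\left(\begin{smallmatrix} C \\ E \end{smallmatrix}\right)$, so the intersection is trivial if and only if the stacked matrix has full column rank $q$. A minimal Python sketch (NumPy assumed; the matrices below are illustrative, not from the paper):

```python
import numpy as np

def trivially_intersecting_nullspaces(C, E, tol=1e-12):
    """Return True if null(C) and null(E) intersect only in {0}.

    A vector z satisfies C z = 0 and E z = 0 exactly when
    [C; E] z = 0, so the intersection is trivial iff the
    stacked matrix has full column rank q.
    """
    stacked = np.vstack([C, E])
    q = C.shape[1]
    return np.linalg.matrix_rank(stacked, tol=tol) == q

# Illustrative matrices: C is singular, but E does not vanish on null(C).
C = np.diag([1.0, 0.0])        # null(C) = span{e_2}
E = np.array([[1.0, 1.0]])     # E e_2 != 0, so the condition holds
print(trivially_intersecting_nullspaces(C, E))
```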

When $C = 0$, the stabilized saddle-point problem (1) degenerates into the standard saddle-point problem. Benzi and Golub [8] proposed the Hermitian and skew-Hermitian splitting (HSS) iteration method to solve the standard saddle-point problem; this iteration method is a direct generalization of the HSS iteration method initially proposed in [9]. To improve the convergence rate of the HSS iteration method, Bai, Golub and Pan [10] proposed the preconditioned HSS (PHSS) iteration method, and Bai and Golub [11] then proposed the accelerated HSS (AHSS) iteration method, which computes the HSS iteration sequence with two different parameters. Bai and Benzi [12] proposed the regularized Hermitian and skew-Hermitian splitting (RHSS) iteration method for the standard saddle-point problem; its biggest highlight is that the condition number of the iterative systems can be improved by the regularizing matrix. In addition, Behzadi and Abdollahi [13] combined the AHSS and RHSS iteration methods to build the accelerated RHSS (ARHSS) iteration method for the standard saddle-point linear system, and experimental results show that the ARHSS iteration method outperforms both the HSS and the RHSS iteration methods.

Recently, based on the regularized Hermitian and skew-Hermitian splitting of the stabilized saddle-point matrix A in (1):

$$A = \begin{pmatrix} B & 0 \\ 0 & Q_1 + \omega C \end{pmatrix} + \begin{pmatrix} 0 & E \\ -E^{*} & -Q_1 + (1 - \omega) C \end{pmatrix} = H_{+}(\omega) + S_{-}(\omega) = \begin{pmatrix} 0 & E \\ -E^{*} & Q_1 + (1 + \omega) C \end{pmatrix} + \begin{pmatrix} B & 0 \\ 0 & -Q_1 - \omega C \end{pmatrix} = S_{+}(\omega) + H_{-}(\omega), \qquad (2)$$

where $Q_1$ is a Hermitian positive semidefinite matrix and $\omega$ is a non-negative constant, Bai [2] extended the RHSS iteration method from the standard saddle-point problem to the stabilized saddle-point problem (1) as follows:

$$\begin{cases} \left( \alpha I + H_{+}(\omega) \right) x^{(k+1/2)} = \left( \alpha I - S_{-}(\omega) \right) x^{(k)} + b, \\ \left( \alpha I + S_{+}(\omega) \right) x^{(k+1)} = \left( \alpha I - H_{-}(\omega) \right) x^{(k+1/2)} + b, \end{cases} \qquad (3)$$

where $I$ is the identity matrix and $\alpha$ is a positive constant. Bai proved that the RHSS iteration method converges unconditionally to the exact solution of the stabilized saddle-point problem (1). Numerical results on stabilized saddle-point problems in [2] show that the RHSS iteration method significantly outperforms the Hermitian and skew-Hermitian splitting iteration method in iteration counts and computing times.

In this paper, inspired by Behzadi and Abdollahi [13] and Bai [2], and by the advantages the accelerated RHSS iteration method shows over the HSS and RHSS iteration methods on the standard saddle-point problem, we extend the accelerated RHSS iteration method from the standard saddle-point problem to the stabilized saddle-point problem, and establish the ARHSS iteration method for the latter. We then prove the convergence of the ARHSS iteration method. Finally, experimental results show that the ARHSS iteration method is more effective than the HSS and RHSS iteration methods for the stabilized saddle-point problem (1).

The remainder of this work is organized as follows. In Section 2, we propose the ARHSS iteration method for the stabilized saddle-point problem (1). In Section 3, we give the convergence property of the ARHSS iteration method. In Section 4, a numerical experiment is given to show the effectiveness of the method. In Section 5, we end the paper with some brief conclusions.

2. The ARHSS Iteration Method

In this section, we derive the ARHSS iteration method for solving (1). Let $Q \in \mathbb{C}^{q \times q}$ be a Hermitian positive semidefinite matrix and $\omega$ a prescribed non-negative parameter. Then we can split the stabilized saddle-point matrix $A \in \mathbb{C}^{n \times n}$ in (1) and obtain the regularized Hermitian and skew-Hermitian (RHS) splitting:

$$A = \begin{pmatrix} B & 0 \\ 0 & Q + \omega C \end{pmatrix} + \begin{pmatrix} 0 & E \\ -E^{*} & -Q + (1 - \omega) C \end{pmatrix} = H_{+} + S_{-} = \begin{pmatrix} 0 & E \\ -E^{*} & Q + (1 + \omega) C \end{pmatrix} + \begin{pmatrix} B & 0 \\ 0 & -Q - \omega C \end{pmatrix} = S_{+} + H_{-}. \qquad (4)$$

By applying the regularized Hermitian and skew-Hermitian (RHS) splitting (4), we obtain the pair of fixed-point equations

$$\begin{cases} (\Lambda + H_{+})\, x = (\Lambda - S_{-})\, x + b, \\ (\Lambda + S_{+})\, x = (\Lambda - H_{-})\, x + b, \end{cases}$$

where

$$\Lambda = \begin{pmatrix} \alpha I & 0 \\ 0 & \beta I \end{pmatrix}$$

with α and β positive constants.

By iterating alternately between these two fixed-point systems as

$$(\Lambda + H_{+})\, x^{(k+1/2)} = (\Lambda - S_{-})\, x^{(k)} + b,$$

and

$$(\Lambda + S_{+})\, x^{(k+1)} = (\Lambda - H_{-})\, x^{(k+1/2)} + b,$$

or in their blockwise forms

$$\begin{pmatrix} \alpha I + B & 0 \\ 0 & \beta I + Q + \omega C \end{pmatrix} \begin{pmatrix} y^{(k+1/2)} \\ z^{(k+1/2)} \end{pmatrix} = \begin{pmatrix} \alpha I & -E \\ E^{*} & \beta I + Q - (1 - \omega) C \end{pmatrix} \begin{pmatrix} y^{(k)} \\ z^{(k)} \end{pmatrix} + \begin{pmatrix} f \\ -g \end{pmatrix}, \qquad (5)$$

and

$$\begin{pmatrix} \alpha I & E \\ -E^{*} & \beta I + Q + (1 + \omega) C \end{pmatrix} \begin{pmatrix} y^{(k+1)} \\ z^{(k+1)} \end{pmatrix} = \begin{pmatrix} \alpha I - B & 0 \\ 0 & \beta I + Q + \omega C \end{pmatrix} \begin{pmatrix} y^{(k+1/2)} \\ z^{(k+1/2)} \end{pmatrix} + \begin{pmatrix} f \\ -g \end{pmatrix}. \qquad (6)$$

So, we obtain the ARHSS iteration method for solving the stabilized saddle-point problem (1) as follows:

The ARHSS iteration method. Let $\alpha$ and $\beta$ be positive constants, $\omega$ a non-negative constant, and $Q \in \mathbb{C}^{q \times q}$ a Hermitian positive semidefinite matrix. Given an initial guess $x^{(0)} = (y^{(0)}, z^{(0)}) \in \mathbb{C}^{n}$, for $k = 0, 1, 2, \ldots$, until the iteration sequence $\{x^{(k)}\} = \{(y^{(k)}, z^{(k)})\} \subset \mathbb{C}^{n}$ converges, compute the next iterate $x^{(k+1)} = (y^{(k+1)}, z^{(k+1)}) \in \mathbb{C}^{n}$ according to the following procedure:

(i) solve for $y^{(k+1/2)} \in \mathbb{C}^{p}$ from the linear subsystem

$$(\alpha I + B)\, y^{(k+1/2)} = \alpha\, y^{(k)} - E z^{(k)} + f;$$

(ii) compute

$$f^{(k+1/2)} = (\alpha I - B)\, y^{(k+1/2)} + f$$

and

$$g^{(k+1/2)} = \left( \beta I + Q + (\omega - 1) C \right) z^{(k)} + E^{*} y^{(k)} - 2 g;$$

(iii) solve for $z^{(k+1)} \in \mathbb{C}^{q}$ from the linear subsystem

$$\left( \beta I + Q + (1 + \omega) C + \frac{1}{\alpha} E^{*} E \right) z^{(k+1)} = \frac{1}{\alpha} E^{*} f^{(k+1/2)} + g^{(k+1/2)};$$

(iv) compute

$$y^{(k+1)} = \frac{1}{\alpha} \left( -E z^{(k+1)} + f^{(k+1/2)} \right).$$
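Steps (i)-(iv) translate directly into code. Below is a sketch in Python/NumPy under the sign conventions used here ($b = (f, -g)$, skew block $-E^{*}$); the small random test problem is illustrative, not the paper's example:

```python
import numpy as np

def arhss(B, C, E, Q, f, g, alpha, beta, omega, maxit=5000, tol=1e-8):
    """ARHSS iteration for [[B, E], [-E*, C]] [y; z] = [f; -g],
    following steps (i)-(iv): each sweep solves one system with
    alpha*I + B and one with beta*I + Q + (1+omega)*C + E*E/alpha."""
    p, q = E.shape
    y, z = np.zeros(p), np.zeros(q)
    Ip, Iq = np.eye(p), np.eye(q)
    Es = E.conj().T                        # E^*
    M1 = alpha * Ip + B                    # coefficient matrix of step (i)
    M3 = beta * Iq + Q + (1 + omega) * C + Es @ E / alpha  # step (iii)
    bnorm = np.hypot(np.linalg.norm(f), np.linalg.norm(g))
    for k in range(maxit):
        # (i) half-step for y
        y_half = np.linalg.solve(M1, alpha * y - E @ z + f)
        # (ii) intermediate right-hand sides
        f_half = (alpha * Ip - B) @ y_half + f
        g_half = (beta * Iq + Q + (omega - 1) * C) @ z + Es @ y - 2 * g
        # (iii) full step for z, then (iv) full step for y
        z = np.linalg.solve(M3, Es @ f_half / alpha + g_half)
        y = (f_half - E @ z) / alpha
        # stop on the residual of the stabilized saddle-point system
        res = np.hypot(np.linalg.norm(B @ y + E @ z - f),
                       np.linalg.norm(-Es @ y + C @ z + g))
        if res <= tol * bnorm:
            break
    return y, z, k + 1

# illustrative small problem (not the paper's image-restoration example)
rng = np.random.default_rng(0)
p, q = 6, 4
R = rng.standard_normal((p, p))
B = R @ R.T / p + 2 * np.eye(p)   # Hermitian positive definite
E = rng.standard_normal((p, q))
C = 0.5 * np.eye(q)               # Hermitian positive semidefinite
Q = np.eye(q)                     # regularization matrix
f, g = rng.standard_normal(p), rng.standard_normal(q)
y, z, its = arhss(B, C, E, Q, f, g, alpha=3.0, beta=3.5, omega=1.0)
```

With $\alpha = \beta$ the scheme reduces to the RHSS iteration (3) with $\Lambda = \alpha I$.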

3. The Convergence Property of the ARHSS Iteration Method

In this section we prove the unconditional convergence of the ARHSS iteration method. The standard way to do so is to show that the spectral radius of the iteration matrix $L(\alpha, \beta, \omega)$ is less than one. From (5) and (6), we easily see that $L(\alpha, \beta, \omega)$ can be equivalently rewritten as

$$L(\alpha, \beta, \omega) = M(\alpha, \beta, \omega)^{-1} N(\alpha, \beta, \omega),$$

where

$$M(\alpha, \beta, \omega) = \frac{1}{2} \begin{pmatrix} \alpha I + B & \frac{1}{\alpha} (\alpha I + B) E \\ -E^{*} & \beta I + Q + (\omega + 1) C \end{pmatrix}$$

and

$$N(\alpha, \beta, \omega) = \frac{1}{2} \begin{pmatrix} \alpha I - B & -\frac{1}{\alpha} (\alpha I - B) E \\ E^{*} & \beta I + Q + (\omega - 1) C \end{pmatrix}.$$
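Both the block forms of $M$ and $N$ and the convergence claim can be sanity-checked numerically on a small instance: assemble $\Lambda$, $H_{\pm}$, $S_{\pm}$, form the iteration matrix $L = (\Lambda + S_{+})^{-1} (\Lambda - H_{-}) (\Lambda + H_{+})^{-1} (\Lambda - S_{-})$, and compare it with $M^{-1} N$. A Python sketch (NumPy assumed; the data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, alpha, beta, omega = 5, 3, 1.5, 2.0, 0.7
R = rng.standard_normal((p, p))
B = R @ R.T + p * np.eye(p)          # Hermitian positive definite
E = rng.standard_normal((p, q))
C = np.diag([1.0, 0.5, 0.0])         # Hermitian positive semidefinite
Q = np.eye(q)                        # regularization matrix

Z = np.zeros((p, q))
Hp = np.block([[B, Z], [Z.T, Q + omega * C]])
Hm = np.block([[B, Z], [Z.T, -Q - omega * C]])
Sm = np.block([[np.zeros((p, p)), E], [-E.T, -Q + (1 - omega) * C]])
Sp = np.block([[np.zeros((p, p)), E], [-E.T, Q + (1 + omega) * C]])
Lam = np.block([[alpha * np.eye(p), Z], [Z.T, beta * np.eye(q)]])

inv = np.linalg.inv
T = inv(Lam + Sp) @ (Lam - Hm) @ inv(Lam + Hp) @ (Lam - Sm)

# explicit block forms of M and N
M = 0.5 * np.block([[alpha*np.eye(p) + B, (alpha*np.eye(p) + B) @ E / alpha],
                    [-E.T, beta*np.eye(q) + Q + (omega + 1) * C]])
N = 0.5 * np.block([[alpha*np.eye(p) - B, -(alpha*np.eye(p) - B) @ E / alpha],
                    [E.T, beta*np.eye(q) + Q + (omega - 1) * C]])

rho = max(abs(np.linalg.eigvals(T)))
assert np.allclose(inv(M) @ N, T)    # same iteration matrix
assert rho < 1.0                     # convergent on this instance
```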

To prove the convergence of the ARHSS iteration method, we first introduce a lemma.

Lemma 3.1. ( [2], Theorem 3.1) For the stabilized saddle-point problem (1), the RHSS iteration method (3) converges unconditionally to its exact solution.

In the following theorem, we prove the unconditional convergence of the ARHSS iteration method.

Theorem 3.2. For the stabilized saddle-point problem (1), assume that $B \in \mathbb{C}^{p \times p}$ is Hermitian positive definite, $C \in \mathbb{C}^{q \times q}$ is Hermitian positive semidefinite and $E \in \mathbb{C}^{p \times q}$ satisfies $\mathrm{null}(C) \cap \mathrm{null}(E) = \{0\}$. Let $\alpha, \beta > 0$ be prescribed positive constants, $\omega$ a given non-negative constant and $Q \in \mathbb{C}^{q \times q}$ a given Hermitian positive semidefinite matrix. Then it holds that $\rho(L(\alpha, \beta, \omega)) < 1$, i.e., the ARHSS iteration method converges unconditionally to the exact solution of the stabilized saddle-point problem (1).

Proof. There are two cases to consider.

Case I: If $\beta \ge \alpha$, according to the ARHS splitting, we obtain

$$A = (\Lambda + H_{+}) - (\Lambda - S_{-}) = \left( \alpha I + \begin{pmatrix} B & 0 \\ 0 & \tilde{Q} + \omega C \end{pmatrix} \right) - \left( \alpha I - \begin{pmatrix} 0 & E \\ -E^{*} & -\tilde{Q} + (1 - \omega) C \end{pmatrix} \right) \qquad (7)$$

and

$$A = (\Lambda + S_{+}) - (\Lambda - H_{-}) = \left( \alpha I + \begin{pmatrix} 0 & E \\ -E^{*} & \tilde{Q} + (1 + \omega) C \end{pmatrix} \right) - \left( \alpha I - \begin{pmatrix} B & 0 \\ 0 & -\tilde{Q} - \omega C \end{pmatrix} \right) \qquad (8)$$

where $\tilde{Q} = (\beta - \alpha) I + Q$. Since $\tilde{Q}$ is a Hermitian positive semidefinite matrix, relations (7) and (8) are exactly an RHS splitting of $A$. By Lemma 3.1, the resulting iteration converges to the exact solution of the stabilized saddle-point problem (1).

Case II: If β < α , according to the ARHS splitting, we obtain

$$A = (\Lambda + H_{+}) - (\Lambda - S_{-}) = \left( \beta I + \begin{pmatrix} B & 0 \\ 0 & \tilde{Q} + \omega C \end{pmatrix} \right) - \left( \beta I - \begin{pmatrix} 0 & E \\ -E^{*} & -\tilde{Q} + (1 - \omega) C \end{pmatrix} \right) \qquad (9)$$

and

$$A = (\Lambda + S_{+}) - (\Lambda - H_{-}) = \left( \beta I + \begin{pmatrix} 0 & E \\ -E^{*} & \tilde{Q} + (1 + \omega) C \end{pmatrix} \right) - \left( \beta I - \begin{pmatrix} B & 0 \\ 0 & -\tilde{Q} - \omega C \end{pmatrix} \right) \qquad (10)$$

where $\tilde{Q} = (\alpha - \beta) I + Q$. Since $\tilde{Q}$ is a Hermitian positive semidefinite matrix, relations (9) and (10) are exactly an RHS splitting of $A$. By Lemma 3.1, the resulting iteration converges to the exact solution of the stabilized saddle-point problem (1). The desired result then follows from Cases I and II, which completes the proof.

4. Numerical Experiment

In this section, a numerical example is given to illustrate the effectiveness of the ARHSS iteration method for the stabilized saddle-point problem (1). We compare the number of iteration steps (denoted IT) and the computing time in seconds (denoted CPU) of the ARHSS iteration method with those of the HSS and RHSS iteration methods. All tests start from the initial vector $x^{(0)} = 0$, and all iterations are terminated once the current residual satisfies $\|b - A x^{(k)}\|_2 \le 10^{-6} \times \|b\|_2$ or the number of iteration steps exceeds 5000. In addition, all computations were implemented in MATLAB [version 9.1.0.441655 (R2016b)] in double precision on a personal computer with a 1.60 GHz central processing unit [Intel(R) Core(TM) i5-5250U] and 4.00 GB memory.

Example 4.1. ( [2], Example 4.3) Consider the nonlinear image restoration problem

$$\min_{y} \; \| \tilde{f} - s(K y) \|_2^2 + \beta \| y \|_2^2, \qquad (11)$$

where $\tilde{f}$ and $y \in \mathbb{R}^{p}$ represent the observed and the original images, respectively, $s: \mathbb{R} \to \mathbb{R}$ denotes a nonlinear point spread function and $K = ([K]_{ij}) \in \mathbb{R}^{p \times p}$ is a blurring matrix. When this regularized nonlinear least-squares problem (11) is solved by the Gauss-Newton iteration method, at each step, for the currently available approximant $y_c = ([y_c]_1, [y_c]_2, \ldots, [y_c]_p)$, we need to solve a linear system of the form

$$(\beta I + K^{*} D^{2} K)\, y = K^{*} D \left[ \tilde{f} - s(K y_c) + D K y_c \right]$$

to obtain the next approximant, where $D = \mathrm{diag}([D]_1, [D]_2, \ldots, [D]_p) \in \mathbb{R}^{p \times p}$ is a diagonal matrix with its diagonal entries given by

$$[D]_i = \left. \frac{d s(\xi)}{d \xi} \right|_{\xi = \sum_{j=1}^{p} [K]_{ij} [y_c]_j}, \quad i = 1, 2, \ldots, p.$$

This linear system can be equivalently reformulated into the stabilized saddle-point problem (1), in which

$$B = D^{-2}, \quad E = K, \quad C = \beta I$$

and

$$f = D^{-1} \left[ \tilde{f} - s(K y_c) \right] + K y_c, \quad g = 0.$$

In actual computations we set

$$\tilde{f} = \left[ \tfrac{254}{p} : \tfrac{254}{p} : 254 \right]$$

and

$$y_c = \left[ \left[ 0.5 + \tfrac{508}{p} : \tfrac{508}{p} : 254.5 \right], \left[ -254.5 : \tfrac{508}{p} : -0.5 - \tfrac{508}{p} \right] \right]$$

and take $\beta = 10^{-3}$,

$$s(\xi) = 30 \log(\xi), \quad [K]_{ij} = \frac{1}{\sqrt{2\pi}\, \mu} \exp\left( -\frac{|i - j|^2}{2 \mu^2} \right), \quad i, j = 1, 2, \ldots, p,$$

with $\mu = 2$. Here, the notation : is a standard MATLAB symbol used to indicate a row vector of the form

$$[x_f : \delta x : x_l] = [x_f,\; x_f + \delta x,\; \ldots,\; x_f + k \delta x,\; \ldots,\; x_l],$$

for which $x_f$ and $x_l$ are the first and the last elements, $\delta x$ is a prescribed increment such that $x_l - x_f$ is a multiple of it, and $k$ is a non-negative integer ranging from $0$ to $\frac{x_l - x_f}{\delta x}$.
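For readers reproducing this setup outside MATLAB, the colon notation maps onto an inclusive-endpoint arange. A Python sketch (the helper name `colon` is our own, not from the paper):

```python
import numpy as np

def colon(x_f, dx, x_l):
    """MATLAB-style [x_f : dx : x_l], endpoint included when
    x_l - x_f is a multiple of dx (as assumed in the text)."""
    n = int(round((x_l - x_f) / dx)) + 1   # k runs from 0 to (x_l - x_f)/dx
    return x_f + dx * np.arange(n)

p = 8   # illustrative problem size
f_tilde = colon(254 / p, 254 / p, 254)                 # observed image vector
yc = np.concatenate([colon(0.5 + 508 / p, 508 / p, 254.5),
                     colon(-254.5, 508 / p, -0.5 - 508 / p)])
print(f_tilde.size, yc.size)   # both equal p
```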

For this example we have $q = p$, so that the dimension of the stabilized saddle-point matrix $A$ is $n = 2p$. Note that the Toeplitz matrix $E$ is highly ill-conditioned with rapidly decaying singular values, so that it is almost rank-deficient, especially when the problem size $p$ becomes sufficiently large. Besides, we take the regularization matrix to be

$$Q = (\alpha \gamma - \omega) C + \gamma E^{*} E.$$

Substituting this $Q$ into (5) and (6), we get

$$g^{(k+1/2)} = \left( \beta I + (\alpha \gamma - 1) C + \gamma E^{*} E \right) z^{(k)} + E^{*} y^{(k)} - 2 g$$

and

$$\left( \beta I + (\alpha \gamma + 1) C + \left( \gamma + \frac{1}{\alpha} \right) E^{*} E \right) z^{(k+1)} = \frac{1}{\alpha} E^{*} f^{(k+1/2)} + g^{(k+1/2)}.$$

So, in the specific algorithm, the parameter $\omega$ does not need to be determined. In our experiments, the parameter $\gamma$ is taken as the optimal value from [ [2], Example 4.3], and the optimal values of the parameters $\alpha$ and $\beta$ are determined by experience and trial runs. For example: first, suppose the ranges of $\alpha$ and $\beta$ are $(0, 1)$, and divide each into 100 equal parts; second, fix $\alpha$ at a value in $(0, 1)$ and find the corresponding optimal $\beta$ by trial runs; third, in the same way, fix $\beta$ at a value in $(0, 1)$ and find the corresponding optimal $\alpha$; finally, choose the best pair $(\alpha, \beta)$ obtained through these experiments.
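The trial-run procedure can be mimicked by a coarse two-dimensional sweep: for a small test problem, evaluate the spectral radius of the ARHSS iteration matrix at each grid point $(\alpha, \beta)$ and keep the minimizer. A Python sketch (NumPy assumed; the $10 \times 10$ grid and the random matrices stand in for the 100-point sweep and the image-restoration data):

```python
import numpy as np

def iteration_radius(B, C, E, Q, alpha, beta, omega):
    """Spectral radius of L = (Lam+S+)^{-1}(Lam-H-)(Lam+H+)^{-1}(Lam-S-)."""
    p, q = E.shape
    Z = np.zeros((p, q))
    Hp = np.block([[B, Z], [Z.T, Q + omega * C]])
    Hm = np.block([[B, Z], [Z.T, -Q - omega * C]])
    Sm = np.block([[np.zeros((p, p)), E], [-E.T, -Q + (1 - omega) * C]])
    Sp = np.block([[np.zeros((p, p)), E], [-E.T, Q + (1 + omega) * C]])
    Lam = np.block([[alpha * np.eye(p), Z], [Z.T, beta * np.eye(q)]])
    inv = np.linalg.inv
    L = inv(Lam + Sp) @ (Lam - Hm) @ inv(Lam + Hp) @ (Lam - Sm)
    return max(abs(np.linalg.eigvals(L)))

rng = np.random.default_rng(2)
p, q, omega = 5, 3, 1.0
R = rng.standard_normal((p, p))
B = R @ R.T / p + np.eye(p)       # Hermitian positive definite
E = rng.standard_normal((p, q))
C, Q = 0.1 * np.eye(q), np.eye(q)

grid = np.linspace(0.1, 1.0, 10)  # coarse stand-in for the 100-point sweep
rho_best, a_best, b_best = min((iteration_radius(B, C, E, Q, a, b, omega), a, b)
                               for a in grid for b in grid)
print(rho_best < 1.0, round(a_best, 2), round(b_best, 2))
```

This brute-force search is affordable only because the test problem is tiny; on the image-restoration example one would time actual iteration runs instead, as the text describes.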

In Table 1, we list the best parameters $\alpha$, $\beta$ and the regularization parameter $\gamma$ for the three iteration methods HSS, RHSS and ARHSS with different mesh numbers $p$. With these optimal parameters, the experimental results of the three methods are listed in Table 2. The experimental results show that, compared with HSS and RHSS, the ARHSS iteration method takes the least CPU time and the fewest iteration steps to converge to the solution of (11).
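The ill-conditioning of the blurring matrix noted above is easy to observe numerically by assembling the Gaussian matrix $K$ and inspecting its singular values. A Python sketch (NumPy assumed; $p = 32$ is an illustrative size, much smaller than the mesh numbers in the tables):

```python
import numpy as np

p, mu = 32, 2.0
idx = np.arange(p)
# [K]_ij = exp(-|i-j|^2 / (2 mu^2)) / (sqrt(2 pi) mu): a symmetric Toeplitz matrix
K = np.exp(-np.subtract.outer(idx, idx) ** 2 / (2 * mu ** 2)) \
    / (np.sqrt(2 * np.pi) * mu)

s = np.linalg.svd(K, compute_uv=False)   # singular values, descending
print(s[0] / s[-1] > 1e3)                # rapid decay already at this modest size
```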

Table 1. Optimal parameter and regularization parameter.

Table 2. IT and CPU.

5. Conclusion

For the stabilized saddle-point problem, we extended the accelerated RHSS iteration method from the standard saddle-point problem to the stabilized one. The convergence property of the ARHSS iteration method is carefully studied, and the numerical results illustrate that ARHSS is more effective for the stabilized saddle-point problem. In this paper, the optimal iteration parameters can only be determined experimentally, so how to establish practical formulas for computing them is worth further study.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Wathen, A.J. and Silvester, D.J. (1993) Fast Iterative Solution of Stabilised Stokes Systems I: Using Simple Diagonal Preconditioners. SIAM Journal on Numerical Analysis, 30, 630-649.
https://doi.org/10.1137/0730031
[2] Bai, Z.-Z. (2019) Regularized HSS Iteration Methods for Stabilized Saddle-Point Problems. IMA Journal of Numerical Analysis, 39, 1888-1923.
https://doi.org/10.1093/imanum/dry046
[3] Silvester, D.J. and Kechkar, N. (1990) Stabilised Bilinear-Constant Velocity-Pressure Finite Elements for the Conjugate Gradient Solution of the Stokes Problems. Computer Methods in Applied Mechanics and Engineering, 79, 71-86.
https://doi.org/10.1016/0045-7825(90)90095-4
[4] Elman, H.C., Silvester, D.J. and Wathen, A.J. (2002) Performance and Analysis of Saddle Point Preconditioners for the Discrete Steady-State Navier-Stokes Equations. Numerische Mathematik, 90, 665-688.
https://doi.org/10.1007/s002110100300
[5] Benzi, M., Golub, G.H. and Liesen, J. (2005) Numerical Solution of Saddle Point Problems. Acta Numerica, 14, 1-137.
https://doi.org/10.1017/S0962492904000212
[6] Bai, Z.-Z. (2006) Structured Preconditioners for Nonsingular Matrices of Block Two-by-Two Structures. Mathematics of Computation, 75, 791-815.
https://doi.org/10.1090/S0025-5718-05-01801-6
[7] Wathen, A.J. (2015) Preconditioning. Acta Numerica, 24, 329-376.
https://doi.org/10.1017/S0962492915000021
[8] Benzi, M. and Golub, G.H. (2004) A Preconditioner for Generalized Saddle Point Problems. SIAM Journal on Matrix Analysis and Applications, 26, 20-41.
https://doi.org/10.1137/S0895479802417106
[9] Bai, Z.-Z., Golub, G.H. and Ng, M.K. (2003) Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Definite Linear Systems. SIAM Journal on Matrix Analysis and Applications, 24, 603-626.
https://doi.org/10.1137/S0895479801395458
[10] Bai, Z.-Z., Golub, G.H. and Pan, J.-Y. (2004) Preconditioned Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Semidefinite Linear Systems. Numerische Mathematik, 98, 1-32.
https://doi.org/10.1007/s00211-004-0521-1
[11] Bai, Z.-Z. and Golub, G.H. (2007) Accelerated Hermitian and Skew-Hermitian Splitting Iteration Methods for Saddle-Point Problems. IMA Journal of Numerical Analysis, 27, 1-23.
https://doi.org/10.1093/imanum/drl017
[12] Bai, Z.-Z. and Benzi, M. (2017) Regularized HSS Iteration Methods for Saddle-Point Linear Systems. BIT Numerical Mathematics, 57, 287-311.
https://doi.org/10.1007/s10543-016-0636-7
[13] Behzadi, R. and Abdollahi, F. (2018) Accelerated RHSS Iteration Methods for Saddle-Point Linear Systems. UPB Scientific Bulletin, Series A, 80, 153-162.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.