Greedy Randomized Gauss-Seidel Method with Oblique Direction

Abstract

For linear least squares problems whose coefficient matrix has highly correlated columns, we develop a greedy randomized Gauss-Seidel method with oblique direction. The corresponding convergence result is then established. Numerical examples demonstrate that the proposed method is superior to the greedy randomized Gauss-Seidel method and to the randomized Gauss-Seidel method with oblique direction.

Cite as:

Li, W. and Zhang, P. (2023) Greedy Randomized Gauss-Seidel Method with Oblique Direction. Journal of Applied Mathematics and Physics, 11, 1036-1048. doi: 10.4236/jamp.2023.114068.

1. Introduction

We focus on the linear least squares problem

$$\min_{\beta \in \mathbb{R}^n} \| y - X\beta \|_2^2, \qquad (1)$$

where $X \in \mathbb{R}^{m \times n}$ ($m \ge n$) is of full column rank, $y \in \mathbb{R}^m$, and $\|\cdot\|_2$ denotes the Euclidean norm. The least squares problem has a wide range of applications in many fields, such as signal processing and image restoration. As is well known, the coordinate descent method is an effective iterative method for (1). It applies the Gauss-Seidel method to the equivalent normal equation

$$X^T X \beta = X^T y, \qquad (2)$$

leading to the following formula

$$\beta_{k+1} = \beta_k + \frac{X_{j_k}^T (y - X\beta_k)}{\| X_{j_k} \|_2^2}\, e_{j_k}, \qquad j_k = (k \bmod n) + 1,$$

where $X_{j_k}$ denotes the $j_k$th column of $X$, $e_{j_k}$ is the $j_k$th unit coordinate vector, and the superscript $T$ denotes the transpose. The convergence of the Gauss-Seidel method depends strongly on the order in which the columns are selected.
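As a concrete illustration (not part of the original text; the function name and its arguments are our own choices), the following MATLAB sketch carries out these cyclic coordinate-descent updates on the normal Equation (2):

```matlab
% A minimal sketch of cyclic coordinate descent (Gauss-Seidel on X'*X*beta = X'*y).
% Inputs X (m-by-n, full column rank), y (m-by-1) and nsweeps are assumed.
function beta = cyclic_cd(X, y, nsweeps)
    [~, n] = size(X);
    beta = zeros(n, 1);
    r = y - X * beta;                  % residual y - X*beta
    colnorm2 = sum(X.^2, 1);           % squared column norms ||X_j||_2^2
    for sweep = 1:nsweeps
        for j = 1:n                    % j_k = (k mod n) + 1 cycles through the columns
            eta = (X(:, j)' * r) / colnorm2(j);
            beta(j) = beta(j) + eta;   % update only the j-th coordinate
            r = r - eta * X(:, j);     % keep the residual consistent with beta
        end
    end
end
```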

Inspired by the pioneering work of Strohmer and Vershynin [1] , Leventhal and Lewis [2] proposed the randomized Gauss-Seidel (RGS) method and proved that it has an expected linear convergence rate. Bai and Wu [3] pointed out an obvious flaw of the RGS method: its column-selection probability reduces to uniform column sampling if the coefficient matrix is scaled with a diagonal matrix. To tackle this problem, they proposed the greedy randomized coordinate descent (GRCD) method, also called the greedy randomized Gauss-Seidel (GRGS) method, which adopts an effective probability criterion for selecting the working column so as to capture the larger entries of the residual vector with respect to (2). They showed that the GRGS method is significantly superior to the RGS method in terms of both theoretical analysis and numerical experiments. In addition, Gauss-Seidel-type methods have received considerable attention [4] [5] [6] [7] . However, the convergence rate of the RGS method deteriorates significantly when the coefficient matrix columns are highly correlated. To improve the convergence rate, Wang et al. [8] proposed the randomized Gauss-Seidel method with oblique direction (RGSO), which combines two successively selected unit coordinate directions into the search direction

$$d_k = e_{j_{k+1}} - \frac{X_{j_k}^T X_{j_{k+1}}}{\| X_{j_k} \|_2^2}\, e_{j_k}. \qquad (3)$$

They showed that, in terms of both theory and experiments, the RGSO method outperforms the RGS method. For more discussions about oblique projection, we refer the reader to [9] [10] and the references therein.
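As a quick illustration of why (3) helps (the snippet below, including its example sizes and indices, is ours), the image $X d_k$ of the oblique direction is orthogonal to the previously used column $X_{j_k}$, so the new step carries no component along the old column; this can be checked numerically in MATLAB:

```matlab
% Illustrative check: X*d_k is orthogonal to the previously used column X_{j_k}.
X = rand(50, 10);                                        % example matrix (assumed sizes)
jk = 3; jk1 = 7;                                         % example indices j_k and j_{k+1}
dk = zeros(10, 1);
dk(jk1) = 1;
dk(jk)  = -(X(:, jk)' * X(:, jk1)) / norm(X(:, jk))^2;   % direction (3)
disp(X(:, jk)' * (X * dk))                               % approximately zero
```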

However, the RGSO method still suffers from the same flaw as the RGS method: its column-selection probability reduces to uniform column sampling if the coefficient matrix is scaled with a diagonal matrix. In addition, it is worth noting that the convergence rate of the GRGS method decreases significantly when the coefficient matrix columns are close to being linearly dependent. Motivated by these limitations, we present a greedy randomized Gauss-Seidel method with oblique direction (GRGSO) for solving (1), which combines the oblique direction with the GRGS method. In theory, it is proved that the iterates generated by the GRGSO method converge to the least squares solution $\beta_*$ when the coefficient matrix is of full column rank. Numerical results show that, compared with the RGSO and GRGS methods, the GRGSO method has a significant advantage in terms of iteration steps and computing time, especially when the coefficient matrix columns are highly correlated.

The organization of this paper is as follows. In Section 2, some notation and lemmas are introduced. In Section 3, we propose the GRGSO method for solving (1) and give its convergence analysis. Some examples are used to demonstrate the competitiveness of our proposed method in Section 4. Finally, we draw some brief conclusions in Section 5.

2. Notation and Preliminaries

At the beginning of this section, we introduce some notation. For a Hermitian positive definite matrix $B$ and a column vector $\beta$ of appropriate dimension, we write $\|\beta\|_B^2 = \langle B\beta, \beta \rangle = \|B^{1/2}\beta\|_2^2$, and $\beta(i)$ denotes the $i$th entry of $\beta$. For a given matrix $S \in \mathbb{R}^{m \times n}$, $\sigma_{\min}(S)$ and $\|S\|_F$ denote the smallest nonzero singular value and the Frobenius norm of $S$, respectively. Let $r_k = y - X\beta_k$ and $s_k = X^T r_k$; then $s_k(j) = X_j^T r_k$ denotes the $j$th entry of $s_k$. $\beta_*$ is the optimal solution of the corresponding problem. We denote by $\mathbb{E}_k$ the expected value conditional on the first $k$ iterations, that is,

$$\mathbb{E}_k[\,\cdot\,] = \mathbb{E}[\,\cdot \mid j_0, j_1, \ldots, j_{k-1}],$$

where $j_t$, $t = 0, 1, \ldots, k-1$, is the column index selected at the $t$th iteration.

In the following, we give a basic lemma.

Lemma 1 (see Bai and Wu [11] ) If the vector $u$ is in the column space of $A^T$, then it holds that

$$\| A u \|_2^2 \ge \sigma_{\min}^2(A)\, \| u \|_2^2.$$

3. GRGSO Method

In this section, we design the GRGSO method for solving (1) by combining the oblique direction with the GRGS method. The pseudo-code of the GRGSO method is listed in Table 1. The difference between the RGSO method and the GRGSO method lies in the column-selection strategy. The RGSO method uses a random selection strategy; specifically, in the numerical experiments it selects the $j_{k+1}$th column with probability $\|X_{j_{k+1}}\|_2^2 / \|X\|_F^2$, which is equivalent to uniform sampling if the Euclidean norms of all the columns of $X$ are the same. The GRGSO method, in contrast, aims to capture the larger entries of the residual vector at each iteration. Compared with the GRGS method, our proposed method uses the oblique projection, which is expected to give better convergence performance when the coefficient matrix columns are highly correlated.

Remark Let $t_k = \arg\max_{1 \le j \le n} \left\{ \frac{|s_k(j)|^2}{\|X_j\|_2^2} \right\}$; then $t_k \in V_k$. Therefore, for every iteration step $k$, the index set $V_k$ generated by the GRGSO method is nonempty.

Remark In the GRGSO method, it holds that

$$\begin{aligned}
s_{k+1} = X^T r_{k+1} = X^T (y - X\beta_{k+1})
&= X^T \big( y - X ( \beta_k + \eta_k(j_k) w_{j_k} ) \big) \\
&= X^T (y - X\beta_k) - \eta_k(j_k) X^T X w_{j_k}
 = s_k - \eta_k(j_k) X^T X w_{j_k}, \quad (k \ge 1)
\end{aligned} \qquad (4)$$

Table 1. GRGSO method.

and

$$\begin{aligned}
s_1 = X^T r_1 = X^T (y - X\beta_1)
&= X^T \left( y - X \left( \beta_0 + \frac{s_0(j_1)}{\|X_{j_1}\|_2^2}\, e_{j_1} \right) \right) \\
&= X^T (y - X\beta_0) - \frac{s_0(j_1)}{\|X_{j_1}\|_2^2}\, X^T X_{j_1}
 = s_0 - \frac{s_0(j_1)}{\|X_{j_1}\|_2^2}\, X^T X_{j_1}.
\end{aligned} \qquad (5)$$

Therefore, the GRGSO method can be executed more efficiently if $X^T X$ is computed in an economical manner at the beginning.
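Since the pseudo-code of Table 1 is not reproduced here, the following MATLAB sketch shows one plausible realization of the GRGSO iteration assembled from (3)-(5), Lemma 3 and the criterion appearing in (10)-(11); the greedy index set Vk, the weight delta, the sampling weights and the treatment of the first step are GRCD-style assumptions rather than the authors' exact pseudo-code:

```matlab
% A hedged sketch of the GRGSO iteration (assumptions noted in the comments).
function beta = grgso_sketch(X, y, maxit, tol)
    [~, n] = size(X);
    beta = zeros(n, 1);
    colnorm2 = sum(X.^2, 1)';            % ||X_j||_2^2 as a column vector
    fro2 = sum(colnorm2);                % ||X||_F^2
    r = y;                               % residual for the initial guess beta_0 = 0
    s = X' * r;                          % s_k = X^T r_k
    % first step (k = 0): a plain coordinate update as in (5); taking the greedy
    % maximizer here is a simplification of the randomized selection
    [~, j_prev] = max(s.^2 ./ colnorm2);
    eta = s(j_prev) / colnorm2(j_prev);
    beta(j_prev) = beta(j_prev) + eta;
    r = r - eta * X(:, j_prev);
    s = X' * r;
    for k = 1:maxit
        if norm(s) <= tol, break; end
        weights = s.^2 ./ colnorm2;
        % greedy index set V_k (GRCD-style criterion consistent with (10)-(11); assumed)
        delta = 0.5 * (max(weights) / (s' * s) + 1 / fro2);
        Vk = find(weights >= delta * (s' * s));
        % sample j_{k+1} from V_k with probability proportional to |s_k(j)|^2 (assumed)
        p = s(Vk).^2;  p = p / sum(p);
        j_new = Vk(find(cumsum(p) >= rand, 1, 'first'));
        % oblique step along w = e_{j_new} - (X_{j_prev}'*X_{j_new}/||X_{j_prev}||^2)*e_{j_prev}, cf. (3)
        g = (X(:, j_prev)' * X(:, j_new)) / colnorm2(j_prev);
        h = colnorm2(j_new) - g^2 * colnorm2(j_prev);    % h_{j_k} as in Lemma 3
        eta = (s(j_new) - g * s(j_prev)) / h;            % s(j_prev) = 0 in exact arithmetic (Lemma 2)
        beta(j_new)  = beta(j_new)  + eta;
        beta(j_prev) = beta(j_prev) - eta * g;
        r = r - eta * (X(:, j_new) - g * X(:, j_prev));
        s = X' * r;                      % could instead be updated cheaply via (4) with a precomputed X'*X
        j_prev = j_new;
    end
end
```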

Next, we give some lemmas which are useful to analyze the convergence of the GRGSO method.

Lemma 2 For the GRGSO method, we have

$$s_k(j_k) = 0, \quad (k > 0), \qquad (6)$$

$$s_k(j_{k-1}) = 0, \quad (k > 1). \qquad (7)$$

Proof. For k = 1 , one has

$$s_1(j_1) = X_{j_1}^T (y - X\beta_1) = X_{j_1}^T y - X_{j_1}^T X \left( \beta_0 + \frac{s_0(j_1)}{\|X_{j_1}\|_2^2}\, e_{j_1} \right) = 0. \qquad (8)$$

For k > 1 , we have

$$\begin{aligned}
s_k(j_k)
&= X_{j_k}^T (y - X\beta_k) = X_{j_k}^T y - X_{j_k}^T X \big( \beta_{k-1} + \eta_{k-1}(j_{k-1}) w_{j_{k-1}} \big) \\
&= X_{j_k}^T y - X_{j_k}^T X \beta_{k-1} - \eta_{k-1}(j_{k-1}) \big( X_{j_k}^T X w_{j_{k-1}} \big)
 = \tilde{s}_{k-1}(j_k) - \frac{\tilde{s}_{k-1}(j_k)}{h_{j_{k-1}}}\, h_{j_{k-1}} = 0,
\end{aligned}$$

which together with (8) proves (6).

By (6), it follows for k > 0 that

$$\begin{aligned}
s_{k+1}(j_k)
&= X_{j_k}^T (y - X\beta_{k+1}) = X_{j_k}^T y - X_{j_k}^T X \big( \beta_k + \eta_k(j_k) w_{j_k} \big)
 = s_k(j_k) - \eta_k(j_k) \big( X_{j_k}^T X w_{j_k} \big) \\
&= -\eta_k(j_k) \left( X_{j_k}^T X \left( e_{j_{k+1}} - \frac{X_{j_k}^T X_{j_{k+1}}}{\|X_{j_k}\|_2^2}\, e_{j_k} \right) \right) = 0,
\end{aligned}$$

which leads to (7).

Remark From (6) and (7), it is clear that in the $k$th iteration the GRGSO method does not select $j_k$ or $j_{k-1}$, that is, $j_{k+1} \ne j_k, j_{k-1}$. Thus, the direction $w_{j_k}$ is a genuine combination of two distinct unit coordinate directions. This is also an advantage of the GRGSO method over the RGSO method: the RGSO method selects $j_{k+1}$ randomly in the $k$th iteration and hence cannot avoid selecting $j_k$ or $j_{k-1}$, while the GRGSO method can.

Lemma 3 For $h_{j_k}$ in the GRGSO method, it satisfies

$$h_{j_k} = \| X w_{j_k} \|_2^2 = \| X_{j_{k+1}} \|_2^2 - \frac{| X_{j_k}^T X_{j_{k+1}} |^2}{\| X_{j_k} \|_2^2} \le \Delta \| X_{j_{k+1}} \|_2^2,$$

where $\Delta = \max_{i \ne j} \sin^2 \langle X_i, X_j \rangle$.

Proof. Since $j_k \ne j_{k+1}$, it holds that

$$\begin{aligned}
h_{j_k}
&= X_{j_{k+1}}^T X \left( e_{j_{k+1}} - \frac{X_{j_k}^T X_{j_{k+1}}}{\|X_{j_k}\|_2^2}\, e_{j_k} \right)
 = \| X_{j_{k+1}} \|_2^2 - \frac{| X_{j_k}^T X_{j_{k+1}} |^2}{\| X_{j_k} \|_2^2} \\
&= \| X_{j_{k+1}} \|_2^2 - \frac{\| X_{j_k} \|_2^2 \, \| X_{j_{k+1}} \|_2^2 \cos^2 \langle X_{j_k}, X_{j_{k+1}} \rangle}{\| X_{j_k} \|_2^2}
 = \sin^2 \langle X_{j_k}, X_{j_{k+1}} \rangle \, \| X_{j_{k+1}} \|_2^2
 \le \Delta \| X_{j_{k+1}} \|_2^2
\end{aligned}$$

and

$$\| X w_{j_k} \|_2^2 = \left( e_{j_{k+1}} - \frac{X_{j_k}^T X_{j_{k+1}}}{\|X_{j_k}\|_2^2}\, e_{j_k} \right)^T X^T X \left( e_{j_{k+1}} - \frac{X_{j_k}^T X_{j_{k+1}}}{\|X_{j_k}\|_2^2}\, e_{j_k} \right) = \| X_{j_{k+1}} \|_2^2 - \frac{| X_{j_k}^T X_{j_{k+1}} |^2}{\| X_{j_k} \|_2^2}.$$

Hence, the proof is complete.

Lemma 4 The iteration sequence $\{\beta_k\}_{k=0}^{\infty}$ generated by the GRGSO method satisfies

$$\| X(\beta_{k+1} - \beta_*) \|_2^2 = \| X(\beta_k - \beta_*) \|_2^2 - \| X(\beta_{k+1} - \beta_k) \|_2^2, \qquad k = 0, 1, 2, \ldots.$$

Proof. For k = 0 , we have

$$e_{j_1}^T X^T X (\beta_1 - \beta_*) = X_{j_1}^T X \left( \beta_0 - \beta_* + \frac{s_0(j_1)}{\|X_{j_1}\|_2^2}\, e_{j_1} \right) = X_{j_1}^T X \beta_0 - X_{j_1}^T X \beta_* + s_0(j_1) = 0.$$

This means that the vector $X^T X (\beta_1 - \beta_*)$ is perpendicular to the vector $e_{j_1}$. Since $\beta_1 - \beta_0$ is parallel to $e_{j_1}$, the vector $X^T X (\beta_1 - \beta_*)$ is perpendicular to $\beta_1 - \beta_0$.

For $k > 0$, it follows from Lemma 3 and Lemma 2 that

$$\begin{aligned}
w_{j_k}^T X^T X (\beta_{k+1} - \beta_*)
&= w_{j_k}^T X^T X \big( \beta_k - \beta_* + \eta_k(j_k) w_{j_k} \big)
 = w_{j_k}^T \big( -s_k + \eta_k(j_k) X^T X w_{j_k} \big) \\
&= -w_{j_k}^T s_k + \eta_k(j_k) \| X w_{j_k} \|_2^2
 = -\left( e_{j_{k+1}} - \frac{X_{j_k}^T X_{j_{k+1}}}{\|X_{j_k}\|_2^2}\, e_{j_k} \right)^T s_k + \tilde{s}_k(j_{k+1}) \\
&= -s_k(j_{k+1}) + \frac{X_{j_k}^T X_{j_{k+1}}}{\|X_{j_k}\|_2^2}\, s_k(j_k) + \tilde{s}_k(j_{k+1}) = 0,
\end{aligned}$$

which means that the vector $X^T X (\beta_{k+1} - \beta_*)$ is perpendicular to the vector $w_{j_k}$. Since $\beta_{k+1} - \beta_k$ is parallel to $w_{j_k}$, the vector $X^T X (\beta_{k+1} - \beta_*)$ is perpendicular to $\beta_{k+1} - \beta_k$. For all $k = 0, 1, \ldots$, it follows that

$$\langle X(\beta_{k+1} - \beta_*),\, X(\beta_{k+1} - \beta_k) \rangle = \langle X^T X(\beta_{k+1} - \beta_*),\, \beta_{k+1} - \beta_k \rangle = 0,$$

which together with the Pythagorean theorem leads to the desired result.

Next, the convergence theory of the GRGSO method is deduced.

Theorem 5 For the least squares problem (1), the iteration sequence $\{\beta_k\}_{k=0}^{\infty}$ generated by the GRGSO method from any initial guess $\beta_0$ satisfies

$$\mathbb{E}\, \| X(\beta_k - \beta_*) \|_2^2 \le \prod_{t=0}^{k-1} \zeta_t \, \| X(\beta_0 - \beta_*) \|_2^2, \quad \text{for } k = 1, 2, \ldots, \qquad (9)$$

where $\zeta_0 = 1 - \frac{\sigma_{\min}^2(X)}{\|X\|_F^2}$, $\zeta_1 = 1 - \frac{1}{2}\left( \frac{1}{\gamma_1} + \frac{1}{\|X\|_F^2} \right)\frac{\sigma_{\min}^2(X)}{\Delta}$, and $\zeta_t = \zeta := 1 - \frac{1}{2}\left( \frac{1}{\gamma_2} + \frac{1}{\|X\|_F^2} \right)\frac{\sigma_{\min}^2(X)}{\Delta}$ for $t > 1$. Here $\gamma_1 = \max_{1 \le i \le n} \sum_{\substack{j=1 \\ j \ne i}}^{n} \|X_j\|_2^2$, $\gamma_2 = \max_{\substack{1 \le i, j \le n \\ i \ne j}} \sum_{\substack{t=1 \\ t \ne i, j}}^{n} \|X_t\|_2^2$, and $\Delta$ is given as in Lemma 3.

Proof. By Lemma 2, it follows for k > 1 that

$$\begin{aligned}
\delta_k \|X\|_F^2
&= \frac{1}{2}\left( \frac{\|X\|_F^2}{\|s_k\|_2^2} \max_{1 \le j \le n}\left\{ \frac{|s_k(j)|^2}{\|X_j\|_2^2} \right\} + 1 \right)
 = \frac{1}{2}\left( \frac{\max_{1 \le j \le n}\left\{ \frac{|s_k(j)|^2}{\|X_j\|_2^2} \right\}}{\sum_{j=1}^{n} \frac{\|X_j\|_2^2}{\|X\|_F^2} \cdot \frac{|s_k(j)|^2}{\|X_j\|_2^2}} + 1 \right) \\
&= \frac{1}{2}\left( \frac{\max_{1 \le j \le n}\left\{ \frac{|s_k(j)|^2}{\|X_j\|_2^2} \right\}}{\sum_{\substack{j=1 \\ j \ne j_k, j_{k-1}}}^{n} \frac{\|X_j\|_2^2}{\|X\|_F^2} \cdot \frac{|s_k(j)|^2}{\|X_j\|_2^2}} + 1 \right)
 \ge \frac{1}{2}\left( \frac{\|X\|_F^2}{\sum_{\substack{j=1 \\ j \ne j_k, j_{k-1}}}^{n} \|X_j\|_2^2} + 1 \right)
 \ge \frac{1}{2}\left( \frac{\|X\|_F^2}{\gamma_2} + 1 \right).
\end{aligned} \qquad (10)$$

For k = 1 , it follows from (6) that

$$\begin{aligned}
\delta_1 \|X\|_F^2
&= \frac{1}{2}\left( \frac{\|X\|_F^2}{\|s_1\|_2^2} \max_{1 \le j \le n}\left\{ \frac{|s_1(j)|^2}{\|X_j\|_2^2} \right\} + 1 \right)
 = \frac{1}{2}\left( \frac{\max_{1 \le j \le n}\left\{ \frac{|s_1(j)|^2}{\|X_j\|_2^2} \right\}}{\sum_{j=1}^{n} \frac{\|X_j\|_2^2}{\|X\|_F^2} \cdot \frac{|s_1(j)|^2}{\|X_j\|_2^2}} + 1 \right) \\
&= \frac{1}{2}\left( \frac{\max_{1 \le j \le n}\left\{ \frac{|s_1(j)|^2}{\|X_j\|_2^2} \right\}}{\sum_{\substack{j=1 \\ j \ne j_1}}^{n} \frac{\|X_j\|_2^2}{\|X\|_F^2} \cdot \frac{|s_1(j)|^2}{\|X_j\|_2^2}} + 1 \right)
 \ge \frac{1}{2}\left( \frac{\|X\|_F^2}{\sum_{\substack{j=1 \\ j \ne j_1}}^{n} \|X_j\|_2^2} + 1 \right)
 \ge \frac{1}{2}\left( \frac{\|X\|_F^2}{\gamma_1} + 1 \right).
\end{aligned} \qquad (11)$$

By Lemma 4, Lemma 3 and Lemma 1, for $k \ge 1$, we have

$$\begin{aligned}
\mathbb{E}_k \| X(\beta_{k+1} - \beta_*) \|_2^2
&= \| X(\beta_k - \beta_*) \|_2^2 - \mathbb{E}_k \| X(\beta_{k+1} - \beta_k) \|_2^2 \\
&= \| X(\beta_k - \beta_*) \|_2^2 - \sum_{j_{k+1} \in V_k} \frac{|\tilde{s}_k(j_{k+1})|^2}{\sum_{j_{k+1} \in V_k} |\tilde{s}_k(j_{k+1})|^2} \, \| \eta_k(j_k) (X w_{j_k}) \|_2^2 \\
&= \| X(\beta_k - \beta_*) \|_2^2 - \sum_{j_{k+1} \in V_k} \frac{|\tilde{s}_k(j_{k+1})|^2}{\sum_{j_{k+1} \in V_k} |\tilde{s}_k(j_{k+1})|^2} \cdot \frac{|\tilde{s}_k(j_{k+1})|^2 \, \| X w_{j_k} \|_2^2}{|h_{j_k}|^2} \\
&= \| X(\beta_k - \beta_*) \|_2^2 - \sum_{j_{k+1} \in V_k} \frac{|\tilde{s}_k(j_{k+1})|^2}{\sum_{j_{k+1} \in V_k} |\tilde{s}_k(j_{k+1})|^2} \cdot \frac{|\tilde{s}_k(j_{k+1})|^2}{h_{j_k}} \\
&\le \| X(\beta_k - \beta_*) \|_2^2 - \sum_{j_{k+1} \in V_k} \frac{|\tilde{s}_k(j_{k+1})|^2}{\sum_{j_{k+1} \in V_k} |\tilde{s}_k(j_{k+1})|^2} \cdot \frac{|\tilde{s}_k(j_{k+1})|^2}{\Delta \| X_{j_{k+1}} \|_2^2} \\
&\le \| X(\beta_k - \beta_*) \|_2^2 - \frac{\delta_k}{\Delta} \| s_k \|_2^2
 = \| X(\beta_k - \beta_*) \|_2^2 - \frac{\delta_k}{\Delta} \| X^T X (\beta_k - \beta_*) \|_2^2 \\
&\le \left( 1 - \frac{\delta_k \, \sigma_{\min}^2(X)}{\Delta} \right) \| X(\beta_k - \beta_*) \|_2^2,
\end{aligned}$$

which together with (11) and (10) leads to

$$\mathbb{E}_1 \| X(\beta_2 - \beta_*) \|_2^2 \le \zeta_1 \| X(\beta_1 - \beta_*) \|_2^2 \qquad (12)$$

and

$$\mathbb{E}_k \| X(\beta_{k+1} - \beta_*) \|_2^2 \le \zeta \| X(\beta_k - \beta_*) \|_2^2, \quad k > 1, \qquad (13)$$

respectively. For k = 0 , we can similarly get

$$\mathbb{E}\, \| X(\beta_1 - \beta_*) \|_2^2 \le \zeta_0 \| X(\beta_0 - \beta_*) \|_2^2.$$

Taking the full expectation on both sides of (12) and (13) and using induction on the iteration index $k$, we obtain (9). This completes the proof.

Remark Let the upper bound on the convergence factor of the GRGS method [3] be denoted by

$$\rho_{GRCD} = 1 - \frac{1}{2}\left( \frac{1}{\gamma_1} + \frac{1}{\|X\|_F^2} \right) \sigma_{\min}^2(X).$$

Since $\Delta = \max_{i \ne j} \sin^2 \langle X_i, X_j \rangle \in (0, 1]$, $\gamma_2 < \gamma_1 < \|X\|_F^2$ and $0 < \sigma_{\min}^2(X) \le \|X\|_F^2$, we have

$$\zeta < \zeta_1 \le \rho_{GRCD} < \zeta_0 < 1.$$

This implies that the upper bound on the convergence factor of the GRGSO method is smaller, uniformly with respect to the iterative step k, than that of the GRCD method.
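To make this comparison concrete, the following MATLAB sketch (illustrative only; the example matrix is our choice) evaluates $\gamma_1$, $\gamma_2$, $\Delta$ and the factors $\zeta_0$, $\zeta_1$, $\zeta$ and $\rho_{GRCD}$ for a small random matrix:

```matlab
% Illustrative evaluation of the convergence-factor bounds in Theorem 5 and rho_GRCD.
X = rand(200, 20);                                  % example matrix (assumed)
n = size(X, 2);
colnorm2 = sum(X.^2, 1);
fro2 = sum(colnorm2);                               % ||X||_F^2
smin2 = min(svd(X))^2;                              % sigma_min^2(X)
gamma1 = max(fro2 - colnorm2);                      % max_i sum_{j ~= i} ||X_j||_2^2
G = fro2 - (colnorm2' + colnorm2);                  % entry (i,j) = fro2 - ||X_i||^2 - ||X_j||^2
gamma2 = max(G(~eye(n)));                           % max over pairs i ~= j
C = X ./ sqrt(colnorm2);                            % columns normalized to unit length
cos2 = (C' * C).^2;
Delta = max(1 - cos2(~eye(n)));                     % max_{i~=j} sin^2<X_i, X_j>
zeta0 = 1 - smin2 / fro2;
zeta1 = 1 - 0.5 * (1/gamma1 + 1/fro2) * smin2 / Delta;
zeta  = 1 - 0.5 * (1/gamma2 + 1/fro2) * smin2 / Delta;
rho_grcd = 1 - 0.5 * (1/gamma1 + 1/fro2) * smin2;
fprintf('zeta=%.4f  zeta1=%.4f  rho_GRCD=%.4f  zeta0=%.4f\n', zeta, zeta1, rho_grcd, zeta0)
```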

4. Numerical Experiments

Some examples are designed in this section to verify the effectiveness of the GRGSO method. Specifically, the GRGSO method is compared with the GRGS method [3] , the RGS method [2] and the RGSO method [8] . We report the results of these methods in terms of the number of iteration steps (denoted by "IT") and the running time in seconds (denoted by "CPU").

The coefficient matrix $X \in \mathbb{R}^{m \times n}$ ($m \ge n$) in the numerical experiments comes from two sources. The first is a random matrix whose entries are drawn from the interval $[c, 1]$ ($c \ge 0$) by using the MATLAB function rand; a value of $c$ close to 1 implies that the matrix columns are highly correlated. The second consists of sparse matrices from the literature [12] listed in Table 2. We use $\mathrm{cond}(X)$ to denote the condition number of $X$, and define the density as

$$\text{density} = \frac{\text{number of nonzero entries of an } m \times n \text{ matrix}}{m \times n}.$$

We randomly generate the true vector $\beta_*$ by the MATLAB function randn and construct the vector $y$ as $y = X\beta_*$ when the linear system is consistent; for inconsistent linear systems the right-hand side is set to $y = X\beta_* + noise$, where $noise \in \mathrm{null}(X^T)$. The zero vector is taken as the initial approximation in each iteration process. Since

$$\| X(\beta_k - \beta_*) \|_2^2 = \| X\beta_* + noise - r_k - X\beta_* \|_2^2 = \| noise - r_k \|_2^2,$$

which was also used in the work of Wang et al. [8] , we terminate the iteration process once

$$\mathrm{RSE} = \frac{\| noise - r_k \|_2}{\| y \|_2} < 10^{-6}.$$
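As an illustration of this setup (the particular sizes and the use of MATLAB's null to draw a vector from null($X^T$) are our choices), an inconsistent test problem and the stopping quantity can be built as follows:

```matlab
% Sketch of the test-problem construction and the RSE stopping quantity.
m = 1000; n = 100; c = 0.5;            % example dimensions and correlation parameter
X = c + (1 - c) * rand(m, n);          % entries drawn from [c, 1]
beta_star = randn(n, 1);               % true solution
Z = null(X');                          % orthonormal basis of null(X^T)
noise = Z * randn(size(Z, 2), 1);      % noise in null(X^T), so X'*noise = 0
y = X * beta_star + noise;             % inconsistent right-hand side
beta_k = zeros(n, 1);                  % zero initial approximation
r_k = y - X * beta_k;                  % current residual
RSE = norm(noise - r_k) / norm(y);     % terminate once RSE < 1e-6
```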

We set "-" in the numerical tables if the number of iteration steps exceeds 300,000. All results are averages over 20 repetitions. All experiments were implemented in MATLAB (R2021b) on a computer with a 2.30 GHz central processing unit (Intel(R) Core(TM) i7-10875H) and 16 GB of memory.

Table 2. Sparse matrix properties of realistic problems [12] .

For the randomly generated matrices, with c = 0 , the numerical results for consistent and inconsistent linear systems are listed in Table 3 and Table 4, respectively. It is easy to observe from Table 3 and Table 4 that the GRGSO method significantly outperforms the RGS, GRGS and RGSO methods in terms of both IT and CPU.

In the following, we compare these methods for solving (1) when the randomly generated matrix has different values of $c$. The numerical results for the consistent and inconsistent systems are listed in Table 5 and Table 6, respectively. From Table 5 and Table 6, it is easy to observe that the IT and CPU of the RGS and GRGS methods increase significantly as $c$ approaches 1. When $c$ reaches 0.8, the IT of the RGS method exceeds the maximal number of iteration steps, and when $c$ reaches 0.9, so does that of the GRGS method. For all tested values of $c$, the methods with the oblique direction outperform those without it. In addition, compared with the other methods, the GRGSO method performs best in terms of both IT and CPU.

For the full-rank sparse matrices from the literature [12] , the numerical results for the consistent and inconsistent linear systems are listed in Table 7 and Table 8, respectively. It can be seen that for the matrix abtaha1 the performance of the GRGSO method is similar to that of the GRGS method in terms of both IT and CPU, whereas for the other matrices the GRGSO method performs significantly better than the other methods in terms of both IT and CPU.

Table 3. The consistent system with $c = 0$: impact of different $m$ on IT and CPU.

Table 4. The inconsistent system with $c = 0$: impact of different $m$ on IT and CPU.

Table 5. The consistent system with $X \in \mathbb{R}^{1000 \times 100}$: impact of different $c$ on IT and CPU.

Table 6. The inconsistent system with $X \in \mathbb{R}^{1000 \times 100}$: impact of different $c$ on IT and CPU.

Table 7. The consistent system: IT and CPU time of test methods for different sparse matrices.

Table 8. The inconsistent system: IT and CPU time of test methods for different sparse matrices.

5. Conclusion

In this paper, we have constructed the GRGSO method for the linear least squares problem and established its convergence analysis. Numerical experiments show that the GRGSO method is superior to the RGS, GRGS and RGSO methods in terms of both IT and CPU, especially when the coefficient matrix columns are highly correlated. It is natural to generalize the GRGSO method by introducing a relaxation parameter in its probability criterion; however, the theoretically optimal choice of such a relaxation parameter is still unknown and is worthy of further study.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Strohmer, T. and Vershynin, R. (2009) A Randomized Kaczmarz Algorithm with Exponential Convergence. Journal of Fourier Analysis and Applications, 15, 262-278.
https://doi.org/10.1007/s00041-008-9030-4
[2] Leventhal, D. and Lewis, A.S. (2010) Randomized Methods for Linear Constraints: Convergence Rates and Conditioning. Mathematics of Operations Research, 35, 641-654.
https://doi.org/10.1287/moor.1100.0456
[3] Bai, Z.Z. and Wu, W.T. (2019) On Greedy Randomized Coordinate Descent Methods for Solving Large Linear Least-Squares Problems. Numerical Linear Algebra with Applications, 26, e2237.
https://doi.org/10.1002/nla.2237
[4] Du, K. (2019) Tight Upper Bounds for the Convergence of the Randomized Extended Kaczmarz and Gauss-Seidel Algorithms. Numerical Linear Algebra with Applications, 26, e2233.
https://doi.org/10.1002/nla.2233
[5] Liu, Y., Jiang, X.L. and Gu, C.Q. (2021) On Maximum Residual Block and Two-Step Gauss-Seidel Algorithms for Linear Least-Squares Problems. Calcolo, 58, Article No. 13.
https://doi.org/10.1007/s10092-021-00404-x
[6] Zhang, J.H. and Guo, J.H. (2020) On Relaxed Greedy Randomized Coordinate Descent Methods for Solving Large Linear Least-Squares Problems. Applied Numerical Mathematics, 157, 372-384.
https://doi.org/10.1016/j.apnum.2020.06.014
[7] Niu, Y.Q. and Zheng, B. (2021) A New Randomized Gauss-Seidel Method for Solving Linear Least-Squares Problems. Applied Mathematics Letters, 116, Article ID: 107057.
https://doi.org/10.1016/j.aml.2021.107057
[8] Wang, F., Li, W.G., Bao, W.D. and Lv, Z.L. (2021) Gauss-Seidel Method with Oblique Direction. Results in Applied Mathematics, 12, Article ID: 100180.
https://doi.org/10.1016/j.rinam.2021.100180
[9] Li, W.G., Wang, Q., Bao, W.D. and Xing, L.L. (2022) Kaczmarz Method with Oblique Projection. Results in Applied Mathematics, 16, Article ID: 100342.
https://doi.org/10.1016/j.rinam.2022.100342
[10] Wang, F., Li, W.G., Bao, W.D. and Liu, L. (2022) Greedy Randomized and Maximal Weighted Residual Kaczmarz Methods with Oblique Projection. Electronic Research Archive, 30, 1158-1186.
https://doi.org/10.3934/era.2022062
[11] Bai, Z.Z. and Wu, W.T. (2018) On Greedy Randomized Kaczmarz Method for Solving Large Sparse Linear Systems. SIAM Journal on Scientific Computing, 40, A592-A606.
https://doi.org/10.1137/17M1137747
[12] Davis, T.A. and Hu, Y. (2011) The University of Florida Sparse Matrix Collection. ACM Transactions on Mathematical Software, 38, 1-25.
https://doi.org/10.1145/2049662.2049663

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.