A Count Sketch Maximal Weighted Residual Kaczmarz Method with Oblique Projection for Highly Overdetermined Linear Systems

Abstract

Motivated by the count sketch maximal weighted residual Kaczmarz (CS-MWRK) method presented by Zhang and Li (Appl. Math. Comput., 410, 126486), we combine the count sketch technique with the maximal weighted residual Kaczmarz method with oblique projection (MWRKO) constructed by Wang, Li, Bao and Liu (arXiv: 2106.13606) to develop a new method for solving highly overdetermined linear systems. The convergence rate of the new method is analyzed. Numerical results demonstrate that our method performs better than the CS-MWRK and MWRKO methods in terms of computing time.


1. Introduction

We consider the following consistent linear system:

$Ax = b$, (1.1)

where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $x$ is the $n$-dimensional unknown vector. One of the most popular solvers for the consistent linear system (1.1) is the Kaczmarz method, which was first discovered by Stefan Kaczmarz. In 2009, Strohmer and Vershynin [1] proposed the randomized Kaczmarz method with an expected exponential rate of convergence, which has spurred extensive research on Kaczmarz-type algorithms; see [2] [3] [4]. Due to its simplicity and performance, the Kaczmarz method has many applications, ranging from image reconstruction [5] and distributed computing [6] to signal processing [7].

Since the classical Kaczmarz method cycles through all rows of the coefficient matrix A, the convergence rate depends strongly on the row index selection strategy. McCormick [8] proposed the maximal weighted residual Kaczmarz (MWRK) method, which at each iteration selects the residual component with the largest row-norm-weighted modulus. Inspired by the proof of the greedy randomized Kaczmarz (GRK) method [9], which enjoys a remarkable convergence guarantee, Du and Gao [10] gave a new theoretical estimate for the convergence rate of the MWRK method that depends only on quantities of the coefficient matrix. Another interesting direction in the study of Kaczmarz methods is to combine them with random sketching matrices. In the past decades, many random sketching matrices have been developed, such as the Gaussian random projection [11], the subsampled randomized Hadamard transform [12] and the count sketch [13] [14]. Zhang and Li [15] proposed a count sketch maximal weighted residual Kaczmarz (CS-MWRK) method to solve highly overdetermined linear systems. Its core idea is that the count sketch matrix reduces the computational cost while retaining most of the information of the original problem [12] [16]. Experiments in [15] show that it reduces the CPU time for solving highly overdetermined linear systems. For more sketched Kaczmarz-type methods, we refer the reader to [17] [18] [19] and the references therein.

Recently, Li, Wang, Bao and Liu [20] proposed a new Kaczmarz method, abbreviated KO, with a new descent direction based on the oblique projection introduced by Constantin Popa in [21] [22]. Using the row index selection rules of the MWRK and GRK methods, Wang, Li, Bao and Liu [23] gave two accelerated variants of the KO method: the maximal weighted residual Kaczmarz method with oblique projection (MWRKO) and the greedy randomized Kaczmarz method with oblique projection (GRKO). Inspired by the work of Zhang and Li [15], we combine the count sketch technique with the MWRKO method to develop a count sketch maximal weighted residual Kaczmarz method with oblique projection (CS-MWRKO) and derive its convergence rate. Numerical experiments demonstrate that the CS-MWRKO method requires less computing time for highly overdetermined linear systems than the CS-MWRK and MWRKO methods, especially for systems with a nearly linear correlation structure.

The organization of the paper is as follows. In Section 2, we propose the CS-MWRKO method and analyze its convergence. Section 3 contains experimental results demonstrating the efficiency of the presented method. We end this paper with some conclusions in Section 4.

We end this section with some notation. In this paper, $\langle x, y \rangle$ stands for the scalar product, and $\|x\|_2$ is the Euclidean norm of $x \in \mathbb{R}^n$. For a given matrix $G = (g_{ij}) \in \mathbb{R}^{m \times n}$, we use $g_i^T$, $G^T$, $G^{\dagger}$, $\mathcal{R}(G)$, $\mathcal{N}(G)$, $\|G\|_F$, $\sigma_i(G)$ and $\sigma_{\min}(G)$ to denote the $i$th row, the transpose, the Moore-Penrose pseudoinverse, the range space, the null space, the Frobenius norm, the $i$th singular value and the smallest nonzero singular value, respectively. We let $r^k = b - Ax^k$ denote the $k$th residual vector, and $r_{i_k}^k$ represents the $i_k$th entry of $r^k$. $\tilde{x}$ is any solution of the system (1.1).

2. The Count Sketch Maximal Weighted Residual Kaczmarz Method with Oblique Projection

In this section, we combine the MWRKO method with the count sketch technique of the CS-MWRK method to construct a new method for (1.1), abbreviated CS-MWRKO and listed in Algorithm 1. Throughout this section, $\tilde{A} = SA \in \mathbb{R}^{d \times n}$ and $\tilde{b} = Sb \in \mathbb{R}^{d}$ denote the sketched coefficient matrix and right-hand side, $\tilde{a}_i^T$ and $\tilde{b}_i$ are the $i$th row of $\tilde{A}$ and the $i$th entry of $\tilde{b}$, $\tilde{M}(i) = \|\tilde{a}_i\|_2^2$, and $\tilde{r}^k = \tilde{b} - \tilde{A}x^k$ is the $k$th sketched residual.
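Since CS-MWRKO amounts to running MWRKO on the sketched system $SAx = Sb$, the following minimal NumPy sketch may help fix ideas. It is an illustrative reading of the method, not the authors' implementation: the names `count_sketch_apply` and `cs_mwrko`, the stopping test and the degeneracy guards are our own choices; the selection rule follows the maximal weighted residual criterion and the oblique projection step follows the description of MWRKO in [23].

```python
import numpy as np

def count_sketch_apply(A, b, d, rng):
    """Apply a count sketch S in R^{d x m} to A and b in O(nnz(A)) time.

    Each row of A is multiplied by a random sign and added to a randomly
    hashed bucket row of S A; S itself is never formed explicitly.
    """
    m = A.shape[0]
    buckets = rng.integers(0, d, size=m)      # hash h: [m] -> [d]
    signs = rng.choice([-1.0, 1.0], size=m)   # random signs in {-1, +1}
    SA = np.zeros((d, A.shape[1]))
    Sb = np.zeros(d)
    np.add.at(SA, buckets, signs[:, None] * A)
    np.add.at(Sb, buckets, signs * b)
    return SA, Sb

def cs_mwrko(A, b, d, max_iter=100000, tol=1e-12, seed=0):
    """Illustrative sketch of CS-MWRKO (not the authors' code)."""
    rng = np.random.default_rng(seed)
    At, bt = count_sketch_apply(A, b, d, rng)  # A~ = SA, b~ = Sb
    M = np.sum(At ** 2, axis=1)                # M~(i) = ||a~_i||_2^2
    M = np.where(M > 0, M, np.inf)             # guard: skip empty bucket rows
    x = np.zeros(A.shape[1])
    i_prev = None
    for _ in range(max_iter):
        r = bt - At @ x                        # sketched residual r~
        if np.linalg.norm(r) <= tol * np.linalg.norm(bt):
            break
        i = int(np.argmax(r ** 2 / M))         # maximal weighted residual row
        if i_prev is None:
            w = At[i]                          # first step: plain Kaczmarz projection
        else:                                  # oblique step: strip from a~_i its
            w = At[i] - (At[i_prev] @ At[i] / M[i_prev]) * At[i_prev]
        nw = w @ w
        if nw <= 1e-30:                        # rows (nearly) parallel; a robust code
            break                              # would reselect here
        x = x + (r[i] / nw) * w                # zeroes residual entries i and i_prev
        i_prev = i
    return x
```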

Next, we introduce some lemmas used to analyze the convergence of our method.

Lemma 2.1. ( [16], Theorem 1) If $S \in \mathbb{R}^{d \times m}$ is a count sketch transform with $d = (n^2 + n)/(\delta \varepsilon^2)$, where $0 < \delta, \varepsilon < 1$, then we have that:

$(1 - \varepsilon)\|Ax\|_2^2 \le \|SAx\|_2^2 \le (1 + \varepsilon)\|Ax\|_2^2$

for all $x \in \mathbb{R}^n$, and:

$(1 - \varepsilon)\sigma_i(A) \le \sigma_i(SA) \le (1 + \varepsilon)\sigma_i(A)$

for all $1 \le i \le n$, hold with probability $1 - \delta$.
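For intuition, a count sketch transform has exactly one nonzero entry, equal to $\pm 1$, in each column, with the row index and sign of each column chosen uniformly at random. For example, one realization with $m = 6$ and $d = 3$ is:

$S = \begin{pmatrix} 0 & 1 & 0 & 0 & -1 & 0 \\ -1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}$,

so $SA$ simply adds randomly signed rows of $A$ into $d$ buckets and can be formed in $O(\mathrm{nnz}(A))$ time, which is the source of the computational savings.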

Lemma 2.2. ( [24], Lemma 1) For any vector $u \in \mathcal{R}(A^T)$, it holds that:

$\|Au\|_2^2 \ge \sigma_{\min}^2(A)\|u\|_2^2$.

Lemma 2.3. Let $S$ be given as in Lemma 2.1. Then $\mathcal{R}(A^T S^T)$ is equal to $\mathcal{R}(A^T)$ with probability $1 - \delta$.

Proof. It can be found in the proof of ( [15], Theorem 3); we omit it here.

Lemma 2.4. The iteration sequence $\{x^k\}_{k=0}^{\infty}$ generated by the CS-MWRKO method satisfies the following equation:

$\|x^{k+1} - \tilde{x}\|_2^2 = \|x^k - \tilde{x}\|_2^2 - \|x^{k+1} - x^k\|_2^2$, (2.1)

and the residual satisfies:

$\tilde{r}_{i_k}^k = \tilde{b}_{i_k} - \langle \tilde{a}_{i_k}, x^k \rangle = 0$, $k > 0$, (2.2)

$\tilde{r}_{i_{k-1}}^k = \tilde{b}_{i_{k-1}} - \langle \tilde{a}_{i_{k-1}}, x^k \rangle = 0$, $k > 1$, (2.3)

where $\tilde{x}$ is an arbitrary solution of the system (1.1). In particular, if $P_{\mathcal{N}(A)}(x^0) = P_{\mathcal{N}(A)}(\tilde{x})$, then $x^k - \tilde{x} \in \mathcal{R}(A^T)$.

Proof. Since the CS-MWRKO method is just the MWRKO method applied to the sketched system $SAx = Sb$, Equation (2.1) and Equations (2.2)-(2.3) follow from ( [23], Lemma 2) and ( [23], Lemma 1), respectively.
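To see where (2.2) and (2.3) come from, recall the oblique projection step of MWRKO as constructed in [23], written here on the sketched system: for $k \ge 1$,

$w^k = \tilde{a}_{i_{k+1}} - \dfrac{\langle \tilde{a}_{i_k}, \tilde{a}_{i_{k+1}} \rangle}{\|\tilde{a}_{i_k}\|_2^2}\,\tilde{a}_{i_k}, \qquad x^{k+1} = x^k + \dfrac{\tilde{b}_{i_{k+1}} - \langle \tilde{a}_{i_{k+1}}, x^k \rangle}{\|w^k\|_2^2}\,w^k.$

Since $\langle \tilde{a}_{i_k}, w^k \rangle = 0$, the update does not disturb equation $i_k$, and $\langle \tilde{a}_{i_{k+1}}, w^k \rangle = \|w^k\|_2^2$ makes equation $i_{k+1}$ exactly satisfied; together these give (2.2) and (2.3). Note also that $\|w^k\|_2^2 = \|\tilde{a}_{i_{k+1}}\|_2^2 \sin^2 \langle \tilde{a}_{i_k}, \tilde{a}_{i_{k+1}} \rangle$, which is the denominator appearing in the proof of Theorem 2.5 below.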

For the convergence property of the CS-MWRKO method, we establish the following theorem.

Theorem 2.5. Let $x^0 \in \mathbb{R}^n$ be an arbitrary initial approximation and let $\tilde{x}$ be a solution of (1.1) such that $P_{\mathcal{N}(A)}(\tilde{x}) = P_{\mathcal{N}(A)}(x^0)$. Let $S$ be given as in Lemma 2.1. Then the sequence $\{x^k\}_{k=0}^{\infty}$ generated by Algorithm CS-MWRKO obeys, with probability $1 - \delta$:

$\|x^1 - \tilde{x}\|_2^2 \le \left(1 - \dfrac{(1-\varepsilon)^2}{(1+\varepsilon)^2}\dfrac{\sigma_{\min}^2(A)}{\|A\|_F^2}\right)\|x^0 - \tilde{x}\|_2^2$,

and for $k = 1, 2, \ldots$:

$\|x^{k+1} - \tilde{x}\|_2^2 \le \prod_{q=1}^{k}\rho_q\,\|x^1 - \tilde{x}\|_2^2$,

where $\rho_1 = 1 - \dfrac{(1-\varepsilon)^2\sigma_{\min}^2(A)}{\Delta\gamma_1}$ and $\rho_k = 1 - \dfrac{(1-\varepsilon)^2\sigma_{\min}^2(A)}{\Delta\gamma_2}$ ($k > 1$), with $\Delta = \max_{j \ne k}\sin^2 \langle \tilde{a}_j, \tilde{a}_k \rangle$, $\gamma_1 = \max_{1 \le i_1 \le d}\sum_{i=1, i \ne i_1}^{d}\tilde{M}(i)$ and $\gamma_2 = \max_{1 \le i_k, i_{k-1} \le d}\sum_{i=1, i \ne i_k, i_{k-1}}^{d}\tilde{M}(i)$.

Proof. Based on Lemma 2.3, we can derive the convergence rate of the CS-MWRKO method following ( [23], Theorem 2) and ( [15], Theorem 3). For $k = 0$, by Equation (2.1) in Lemma 2.4, we have:

$\begin{aligned} \|x^1 - \tilde{x}\|_2^2 &= \|x^0 - \tilde{x}\|_2^2 - \|x^1 - x^0\|_2^2 = \|x^0 - \tilde{x}\|_2^2 - \frac{|\tilde{b}_{i_1} - \langle \tilde{a}_{i_1}, x^0 \rangle|^2}{\tilde{M}(i_1)} \\ &\le \|x^0 - \tilde{x}\|_2^2 - \frac{\|\tilde{b} - \tilde{A}x^0\|_2^2}{\sum_{i=1}^{d}\tilde{M}(i)} = \|x^0 - \tilde{x}\|_2^2 - \frac{\|\tilde{A}(\tilde{x} - x^0)\|_2^2}{\|\tilde{A}\|_F^2} \\ &\le \left(1 - \frac{\sigma_{\min}^2(SA)}{\|SA\|_F^2}\right)\|x^0 - \tilde{x}\|_2^2 \le \left(1 - \frac{(1-\varepsilon)^2}{(1+\varepsilon)^2}\frac{\sigma_{\min}^2(A)}{\|A\|_F^2}\right)\|x^0 - \tilde{x}\|_2^2, \end{aligned}$

with probability $1 - \delta$. Here the first inequality follows from the selection rule for the maximal weighted residual row, the second inequality comes from Lemma 2.2, and the last inequality holds with probability $1 - \delta$ by Lemma 2.1. For $k = 1$, it holds that:

$\begin{aligned} \|x^2 - \tilde{x}\|_2^2 &= \|x^1 - \tilde{x}\|_2^2 - \|x^2 - x^1\|_2^2 = \|x^1 - \tilde{x}\|_2^2 - \frac{|\tilde{b}_{i_2} - \langle \tilde{a}_{i_2}, x^1 \rangle|^2}{\|\tilde{a}_{i_2}\|_2^2 \sin^2 \langle \tilde{a}_{i_1}, \tilde{a}_{i_2} \rangle} \\ &\le \|x^1 - \tilde{x}\|_2^2 - \frac{|\tilde{b}_{i_2} - \langle \tilde{a}_{i_2}, x^1 \rangle|^2}{\Delta \tilde{M}(i_2)} \le \|x^1 - \tilde{x}\|_2^2 - \frac{\|\tilde{b} - \tilde{A}x^1\|_2^2}{\Delta \sum_{i=1, i \ne i_1}^{d}\tilde{M}(i)} \\ &= \|x^1 - \tilde{x}\|_2^2 - \frac{\|\tilde{A}(\tilde{x} - x^1)\|_2^2}{\Delta \sum_{i=1, i \ne i_1}^{d}\tilde{M}(i)} \le \left(1 - \frac{\sigma_{\min}^2(SA)}{\Delta \sum_{i=1, i \ne i_1}^{d}\tilde{M}(i)}\right)\|x^1 - \tilde{x}\|_2^2 \\ &\le \left(1 - \frac{(1-\varepsilon)^2\sigma_{\min}^2(A)}{\Delta \sum_{i=1, i \ne i_1}^{d}\tilde{M}(i)}\right)\|x^1 - \tilde{x}\|_2^2, \end{aligned}$ (2.4)

with probability $1 - \delta$. Here the first inequality uses the definition of $\Delta$, and the second uses the selection rule together with Equation (2.2) in Lemma 2.4, which allows the index $i_1$ to be excluded from the sum. The third inequality follows from Lemma 2.2, and the last inequality holds with probability $1 - \delta$ by Lemma 2.1. Along similar lines as in (2.4), we obtain:

$\begin{aligned} \|x^{k+1} - \tilde{x}\|_2^2 &= \|x^k - \tilde{x}\|_2^2 - \|x^{k+1} - x^k\|_2^2 = \|x^k - \tilde{x}\|_2^2 - \frac{|\tilde{b}_{i_{k+1}} - \langle \tilde{a}_{i_{k+1}}, x^k \rangle|^2}{\|\tilde{a}_{i_{k+1}}\|_2^2 \sin^2 \langle \tilde{a}_{i_k}, \tilde{a}_{i_{k+1}} \rangle} \\ &\le \|x^k - \tilde{x}\|_2^2 - \frac{|\tilde{b}_{i_{k+1}} - \langle \tilde{a}_{i_{k+1}}, x^k \rangle|^2}{\Delta \tilde{M}(i_{k+1})} \le \|x^k - \tilde{x}\|_2^2 - \frac{\|\tilde{b} - \tilde{A}x^k\|_2^2}{\Delta \sum_{i=1, i \ne i_k, i_{k-1}}^{d}\tilde{M}(i)} \\ &\le \|x^k - \tilde{x}\|_2^2 - \frac{\sigma_{\min}^2(SA)}{\Delta \sum_{i=1, i \ne i_k, i_{k-1}}^{d}\tilde{M}(i)}\|x^k - \tilde{x}\|_2^2 \le \left(1 - \frac{(1-\varepsilon)^2\sigma_{\min}^2(A)}{\Delta \sum_{i=1, i \ne i_k, i_{k-1}}^{d}\tilde{M}(i)}\right)\|x^k - \tilde{x}\|_2^2, \end{aligned}$

with probability $1 - \delta$, where now both Equations (2.2) and (2.3) are used to exclude the indices $i_k$ and $i_{k-1}$ from the sum. Thus, we complete the proof.

Remark 2.6. Set $\hat{\rho}_0 = 1 - \dfrac{(1-\varepsilon)^2}{(1+\varepsilon)^2}\dfrac{\sigma_{\min}^2(A)}{\|A\|_F^2}$ and $\hat{\rho}_k = 1 - \dfrac{(1-\varepsilon)^2\sigma_{\min}^2(A)}{\max_{1 \le j \le d}\sum_{i=1, i \ne j}^{d}\tilde{M}(i)}$; the convergence bound of the CS-MWRK method in [15] is:

$\|x^{k+1} - \tilde{x}\|_2^2 \le \prod_{q=1}^{k}\hat{\rho}_q\,\|x^1 - \tilde{x}\|_2^2$.

Since $\rho_0 = \hat{\rho}_0$, $\rho_1 \le \hat{\rho}_1$ and $\rho_k < \hat{\rho}_k$ ($k \ge 2$), the CS-MWRKO method converges faster than the CS-MWRK method. Based on ( [15], Remark 4), the convergence factor of CS-MWRKO is indeed larger than that of the MWRKO method, which is why the former requires more iterations than the latter in the numerical examples.

3. Numerical Examples and Results

Since the MWRKO method [23] is more effective than the GRK [9], GRKO [23] and MWRK [10] methods, in this section we give some examples to illustrate the effectiveness of the CS-MWRKO method compared with the MWRKO and CS-MWRK [15] methods in terms of the iteration numbers (denoted as “IT”) and computing time in seconds (denoted as “CPU time”) for (1.1). We also report the iteration speedup of the CS-MWRKO method against the MWRKO and CS-MWRK methods, defined by:

$\text{IT speedup1} = \dfrac{\text{IT of MWRKO}}{\text{IT of CS-MWRKO}}, \qquad \text{IT speedup2} = \dfrac{\text{IT of CS-MWRK}}{\text{IT of CS-MWRKO}},$

and the CPU time speedup of the CS-MWRKO method against the MWRKO and CS-MWRK methods defined by:

$\text{CPU speedup1} = \dfrac{\text{CPU of MWRKO}}{\text{CPU of CS-MWRKO}}, \qquad \text{CPU speedup2} = \dfrac{\text{CPU of CS-MWRK}}{\text{CPU of CS-MWRKO}}.$

For the coefficient matrix A, we use the following two choices: random matrices generated by the MATLAB function rand, and matrices selected from the University of Florida sparse matrix collection [25]. In the following experiments, the right-hand side vector is $b = Ax^*$, where the exact solution $x^* \in \mathbb{R}^n$ is a vector generated by the MATLAB function rand. We repeat each experiment 50 times; all experiments start from the initial vector $x^0 = 0$ and terminate once the relative solution error (RES), defined by:

$\text{RES} = \dfrac{\|x^k - x^*\|_2^2}{\|x^*\|_2^2},$

satisfies RES $< 0.5 \times 10^{-10}$ or the number of iteration steps exceeds 100,000. All experiments presented in this section are performed in MATLAB R2018b on a personal computer with a 2.00 GHz central processing unit (Intel(R) Core(TM) i5 CPU), 16.00 GB memory, and the Windows 10 operating system.
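For concreteness, the following is a hypothetical harness in the spirit of these experiments, reusing the cs_mwrko sketch from Section 2; the problem sizes and the choice of $d$ are illustrative, not the values used in the tables below.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, d = 50000, 150, 3000        # illustrative sizes with d << m
A = rng.random((m, n))            # entries uniform in [0, 1], like MATLAB's rand
x_star = rng.random(n)            # exact solution x*
b = A @ x_star                    # consistent right-hand side

x = cs_mwrko(A, b, d)             # sketch defined in Section 2
res = np.sum((x - x_star) ** 2) / np.sum(x_star ** 2)
print(f"RES = {res:.2e}")         # the experiments stop once RES < 0.5e-10
```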

Example One. In this example, we report the iteration numbers and CPU time of the CS-MWRKO, MWRKO and CS-MWRK methods for randomly generated matrices with entries in $[0, 1]$; the results are listed in Table 1. From this table, we see that CS-MWRKO performs better than the MWRKO and CS-MWRK methods in CPU time. The CPU speedup1 is at least 7.42 and at most 20.68, and the CPU speedup2 is at least 0.95 and at most 1.15 in our experiments. As for the iteration numbers, the CS-MWRKO method needs more iterations than the MWRKO method but fewer than the CS-MWRK method.

Table 1. Numerical results for the CS-MWRKO, MWRKO, CS-MWRK methods with matrices generated by rand in [0, 1].

Example Two. In this example, we construct random coefficient matrices with correlated rows, $A \in \mathbb{R}^{50000 \times 150}$ with entries in $[c, 1]$ for $c$ from 0.1 to 0.9, to test the validity of the CS-MWRKO method with different sizes $d$ of the count sketch matrix $S$. This type of matrix was also used in [26] and [27]. From Table 2, we note that the CPU speedup1 is at least 6.14 and at most 12.89, and the CPU speedup2 is at least 1.08 and at most 1.61. That is, the CS-MWRKO method outperforms the MWRKO and CS-MWRK methods in terms of computing time. For the iteration numbers,

Table 2. Numerical results for the CS-MWRKO, MWRKO, CS-MWRK methods with matrices generated by rand in [c, 1].

Table 3. Numerical results for the CS-MWRKO, MWRKO, CS-MWRK methods with sparse matrices.

we find that the iteration numbers of the count sketch MWRK-type methods (CS-MWRK and CS-MWRKO) increase as $c$ grows from 0.1 to 0.9 and decrease as $d$ increases.
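One simple way to generate such correlated-row matrices is to shift and scale uniform entries into $[c, 1]$; this is a sketch, and the exact generator used for Table 2 may differ.

```python
import numpy as np

def correlated_matrix(m, n, c, rng):
    # Entries uniform in [c, 1]; as c -> 1 all rows concentrate around the
    # all-ones direction, so the rows become increasingly correlated.
    return c + (1.0 - c) * rng.random((m, n))

A = correlated_matrix(50000, 150, 0.9, np.random.default_rng(2))
```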

Example Three. In this example, we test the CS-MWRKO, MWRKO and CS-MWRK methods with coefficient matrices from real-world data [25]. The two matrices are shar_te2-b1 with 34,320 nonzero elements and ch6-6-b1 with 900 nonzero elements. From Table 3, we see again that the CS-MWRKO method outperforms the CS-MWRK and MWRKO methods in CPU time. The minimum of the CPU speedup1 is 2.00 and the maximum reaches 13.48; the minimum of the CPU speedup2 is 1.11 and the maximum is 1.24. For the iteration numbers, we reach the same conclusion as in Example One.

4. Conclusion

In this paper, we construct the count sketch maximal weighted residual Kaczmarz method with oblique projection for highly overdetermined linear systems. Numerical examples validate that our method needs less computing time than the MWRKO and CS-MWRK methods, especially for systems (1.1) with a nearly linear correlation structure. There are many works on block versions of Kaczmarz-type methods [28] [29] [30] [31] [32]; we will consider combining the block technique with the oblique projection technique in future work. This topic is practically valuable and theoretically meaningful.

Acknowledgements

The authors are grateful to the anonymous referees and the Editor for their detailed and helpful comments, which led to a substantial improvement of the paper. We would also like to thank Prof. Hanyu Li and Dr. Yanjun Zhang for providing the MATLAB codes of [15].

Funding

Longyan Li is supported by the Research and Training Program for College Students (No. A2020-171).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Strohmer, T. and Vershynin, R. (2009) A Randomized Kaczmarz Algorithm with Exponential Convergence. Journal of Fourier Analysis and Applications, 15, Article No. 262.
https://doi.org/10.1007/s00041-008-9030-4
[2] Niu, Y.Q. and Zheng, B. (2020) A Greedy Block Kaczmarz Algorithm for Solving Large-Scale Linear Systems. Applied Mathematics Letters, 104, Article ID: 106294.
https://doi.org/10.1016/j.aml.2020.106294
[3] Nutini, J., Sepehry, B., Laradji, I., Schmidt, M., Koepke, H. and Virani, A. (2016) Convergence Rates for Greedy Kaczmarz Algorithms, and Faster Randomized Kaczmarz Rules Using the Orthogonality Graph. arXiv:1612.07838.
[4] Eldar, Y. and Needell, D. (2011) Acceleration of Randomized Kaczmarz Method via the Johnson-Lindenstrauss Lemma. Numerical Algorithms, 58, 163-177.
https://doi.org/10.1007/s11075-011-9451-z
[5] Eggermont, P.P.B., Herman, G.T. and Lent, A. (1981) Iterative Algorithms for Large Partitioned Linear Systems, with Applications to Image Reconstruction. Linear Algebra and Its Applications, 40, 37-67.
https://doi.org/10.1016/0024-3795(81)90139-7
[6] Elble, J.M., Sahinidis, N.V. and Vouzis, P. (2010) GPU Computing with Kaczmarz’s and Other Iterative Algorithms for Linear Systems. Parallel Computing, 36, 215-231.
https://doi.org/10.1016/j.parco.2009.12.003
[7] Lorenz, D.A., Wenger, S., Schöpfer, F. and Magnor, M. (2014) A Sparse Kaczmarz Solver and a Linearized Bregman Method for Online Compressed Sensing. 2014 IEEE International Conference on Image Processing, Paris, 27-30 October 2014, 1347-1351.
https://doi.org/10.1109/ICIP.2014.7025269
[8] McCormick, S.F. (1977) The Methods of Kaczmarz and Row Orthogonalization for Solving Linear Equations and Least Squares Problems in Hilbert Space. Indiana University Mathematics Journal, 26, 1137-1150.
https://doi.org/10.1512/iumj.1977.26.26090
[9] Bai, Z.Z. and Wu, W.T. (2018) On Greedy Randomized Kaczmarz Method for Solving Large Sparse Linear Systems. SIAM Journal on Scientific Computing, 40, A592-A605.
https://doi.org/10.1137/17M1137747
[10] Du, K. and Gao, H. (2019) A New Theoretical Estimate for the Convergence Rate of the Maximal Weighted Residual Kaczmarz Algorithm. Numerical Mathematics: Theory, Methods and Applications, 12, 627-639.
https://doi.org/10.4208/nmtma.OA-2018-0039
[11] Boutsidis, C., Drineas, P. and Magdon-Ismail, M. (2014) Near-Optimal Column-Based Matrix Reconstruction. SIAM Journal on Computing, 43, 687-717.
https://doi.org/10.1137/12086755X
[12] Woodruff, D.P. (2014) Sketching as a Tool for Numerical Linear Algebra. Foundations and Trends in Theoretical Computer Science, 10, 1-157.
[13] Charikar, M., Chen, K. and Farach-Colton, M. (2002) Finding Frequent Items in Data Streams. International Colloquium on Automata, Languages, and Programming 2002, Málaga, 8-13 July 2002, 693-703.
https://doi.org/10.1007/3-540-45465-9_59
[14] Thorup, M. and Zhang, Y. (2012) Tabulation-Based 5-Independent Hashing with Applications to Linear Probing and Second Moment Estimation. SIAM Journal on Computing, 41, 293-331.
https://doi.org/10.1137/100800774
[15] Zhang, Y.J. and Li, H.Y. (2021) A Count Sketch Maximal Weighted Residual Kaczmarz Method for Solving Highly Overdetermined Linear Systems. Applied Mathematics and Computation, 410, Article ID: 126486.
https://doi.org/10.1016/j.amc.2021.126486
[16] Clarkson, K.L. and Woodruff, D.P. (2017) Low-Rank Approximation and Regression in Input Sparsity Time. Journal of the ACM, 63, Article No. 54.
https://doi.org/10.1145/3019134
[17] Wu, N.C. and Xiang, H. (2021) Semiconvergence Analysis of the Randomized Row Iterative Method and Its Extended Variants. Numerical Linear Algebra with Applications, 28, Article No. e2334.
https://doi.org/10.1002/nla.2334
[18] Gower, R.M., Molitor, D., Moorman, J. and Needell, D. (2021) On Adaptive Sketch-and-Project for Solving Linear Systems. SIAM Journal on Matrix Analysis and Applications, 42, 954-989.
https://doi.org/10.1137/19M1285846
[19] Gower, R.M. and Richtárik, P. (2015) Randomized Iterative Methods for Linear Systems. SIAM Journal on Matrix Analysis and Applications, 36, 1660-1690.
https://doi.org/10.1137/15M1025487
[20] Li, W.G., Wang, Q.F., Bao, W.B. and Liu, L. (2021) On Kaczmarz Methods with Oblique Projection for Solving Large Overdetermined Linear Systems. arXiv: 2106.13368.
[21] Popa, C., Preclik, T., Köstler, H. and Rüde, U. (2012) On Kaczmarz’s Projection Iteration as a Direct Solver for Linear Least Squares Problems. Linear Algebra and its Applications, 436, 389-404.
https://doi.org/10.1016/j.laa.2011.02.017
[22] Popa, C. (2012) Projection Algorithms: Classical Results and Developments. Lap Lambert Academic Publishing, Saarbrücken.
[23] Wang, F., Li, W.G., Bao, W.B. and Liu, L. (2021) Greedy Randomized and Maximal Weighted Residual Kaczmarz Methods with Oblique Projection. arXiv: 2106.13606.
[24] Zhang, J.H. and Guo, J.H. (2020) On Relaxed Greedy Randomized Coordinate Descent Methods for Solving Large Linear Least-Squares Problems. Applied Numerical Mathematics, 157, 372-384.
https://doi.org/10.1016/j.apnum.2020.06.014
[25] Davis, T.A. and Hu, Y. (2011) The University of Florida Sparse Matrix Collection. ACM Transactions on Mathematical Software, 38, Article No. 1.
https://doi.org/10.1145/2049662.2049663
[26] Wu, W.T. (2021) On Two-Subspace Randomized Extended Kaczmarz Method for Solving Large Linear Least-Squares Problems. Numerical Algorithms, 89, 1-31.
https://doi.org/10.1007/s11075-021-01104-x
[27] Needell, D. and Ward, R. (2013) Two-Subspace Projection Method for Coherent Overdetermined Systems. Journal of Fourier Analysis and Applications, 19, 256-269.
https://doi.org/10.1007/s00041-012-9248-z
[28] Chen, J.Q. and Huang, Z.D. (2022) On a Fast Deterministic Block Kaczmarz Method for Solving Large-Scale Linear Systems. Numerical Algorithms, 89, 1007-1029.
https://doi.org/10.1007/s11075-021-01143-4
[29] Necoara, I. (2019) Faster Randomized Block Kaczmarz Algorithms. SIAM Journal on Matrix Analysis and Applications, 40, 1425-1452.
https://doi.org/10.1137/19M1251643
[30] Du, K., Si, W.T. and Sun, X.H. (2020) Randomized Extended Average Block Kaczmarz for Solving Least Squares. SIAM Journal on Scientific Computing, 42, A3541-A3559.
https://doi.org/10.1137/20M1312629
[31] Li, W., Yin, F., Liao, Y.M. and Huang, G.X. (2021) A Geometric Gaussian Kaczmarz Method for Large Scaled Consistent Linear Equations. Journal of Applied Mathematics and Physics, 9, 2954-2965.
https://doi.org/10.4236/jamp.2021.911189
[32] Liao, Y., Yin, F. and Huang, G.X. (2021) A Relaxed Greedy Block Kaczmarz Method for Solving Large Consistent Linear Systems. Journal of Applied Mathematics and Physics, 9, 3032-3044.
https://doi.org/10.4236/jamp.2021.912196
