A Geometric Gaussian Kaczmarz Method for Large Scaled Consistent Linear Equations

Li Wen^{1}, Feng Yin^{1,2*}, Yimou Liao^{1}, Guangxin Huang^{2,3}

^{1}College of Mathematics and Statistics, Sichuan University of Science and Engineering, Zigong, China.

^{2}Sichuan Province University Key Laboratory of Bridge Non-Destruction Detecting and Engineering Computing, Zigong, China.

^{3}College of Mathematics and Physics, Chengdu University of Technology, Chengdu, China.

**DOI:** 10.4236/jamp.2021.911189

This paper presents a geometric Gaussian Kaczmarz (GGK) method for solving large-scale consistent linear systems of equations. The GGK method improves the geometric probability randomized Kaczmarz method in [1] by introducing a new block set strategy and a new iteration process. The GGK method is proved to converge linearly. Several numerical examples show the efficiency and effectiveness of the GGK method.

Keywords

Gaussian Kaczmarz, Convergence Rate, Linear Systems, Consistent

Share and Cite:

Wen, L., Yin, F., Liao, Y. and Huang, G. (2021) A Geometric Gaussian Kaczmarz Method for Large Scaled Consistent Linear Equations. *Journal of Applied Mathematics and Physics*, **9**, 2954-2965. doi: 10.4236/jamp.2021.911189.

1. Introduction

We are concerned with the approximate solution of large-scale linear systems of equations of the form

$Ax=b,$ (1)

where $A\in {\mathbb{R}}^{m\times n}$ is a real matrix, $b\in {\mathbb{R}}^{m}$ is a real vector and $x\in {\mathbb{R}}^{n}$ is an unknown vector to be determined.

The Kaczmarz [2] method is a good candidate for solving large problems of the form (1) due to its simplicity and good performance, and it has been applied in numerous fields such as image reconstruction [3] [4] and digital signal processing [5]. The Kaczmarz method can be formulated as

${x}_{k+1}={x}_{k}+\frac{{b}^{\left(i\right)}-{A}^{\left(i\right)}{x}_{k}}{{\Vert {A}^{\left(i\right)}\Vert}_{2}^{2}}{\left({A}^{\left(i\right)}\right)}^{\text{T}}$ (2)

where $i=\left(k\,\mathrm{mod}\,m\right)+1$, *i.e.*, all *m* equations of the linear system (1) are swept through cyclically every *m* iterations.
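As an illustration, the cyclic sweep (2) can be sketched in Python/NumPy (a minimal translation; the function name and iteration budget are our choices, not from the paper):

```python
import numpy as np

def kaczmarz(A, b, num_iters=1000, x0=None):
    """Classical cyclic Kaczmarz sweep for a consistent system Ax = b, Eq. (2)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for k in range(num_iters):
        i = k % m          # cyclic row index, the 0-based analogue of i = (k mod m) + 1
        a_i = A[i]
        # project the current iterate onto the hyperplane a_i^T x = b_i
        x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x
```

Each step is an orthogonal projection onto one equation's hyperplane, so no matrix factorization is ever needed.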

Many extended Kaczmarz methods have been derived in recent years. Strohmer and Vershynin [6] proposed a randomized Kaczmarz (RK) method with an expected exponential rate of convergence, also known as "linear convergence". In 2018, Bai and Wu [7] proposed a greedy randomized Kaczmarz (GRK) method. GRK introduces an effective probability criterion to grasp the larger entries of the residual vector at each iteration and was proved to converge linearly in expectation. The index set is determined by

${u}_{k}=\left\{i|{\left|{b}^{\left(i\right)}-{A}^{\left(i\right)}{x}_{k}\right|}^{2}\ge {\epsilon}_{k}{\Vert b-A{x}_{k}\Vert}_{2}^{2}{\Vert {A}^{\left(i\right)}\Vert}_{2}^{2}\right\}\mathrm{,}$ (3)

where

${\epsilon}_{k}=\frac{1}{2}\left(\frac{1}{{\Vert b-A{x}_{k}\Vert}_{2}^{2}}\underset{1\le i\le m}{\mathrm{max}}\left\{\frac{{\left|{b}^{\left(i\right)}-{A}^{\left(i\right)}{x}_{k}\right|}^{2}}{{\Vert {A}^{\left(i\right)}\Vert}_{2}^{2}}\right\}+\frac{1}{{\Vert A\Vert}_{F}^{2}}\right)\mathrm{.}$ (4)
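The index set (3) with threshold (4) can be computed directly from the current residual; a minimal Python/NumPy sketch (the function name is ours):

```python
import numpy as np

def grk_index_set(A, b, x):
    """Index set u_k of the GRK method, following Eqs. (3)-(4)."""
    r = b - A @ x                        # residual vector
    row_norms2 = np.sum(A * A, axis=1)   # ||A^(i)||_2^2 for every row
    ratios = r**2 / row_norms2           # |b^(i) - A^(i) x|^2 / ||A^(i)||_2^2
    # Eq. (4); note ||A||_F^2 is the sum of the squared row norms
    eps = 0.5 * (ratios.max() / (r @ r) + 1.0 / row_norms2.sum())
    # Eq. (3): keep rows whose squared residual clears the threshold
    return np.where(r**2 >= eps * (r @ r) * row_norms2)[0]
```

The row attaining the maximum in (4) always satisfies (3), so the set is never empty.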

To reduce the influence of diagonal scalings of the coefficient matrix in (1) on the convergence rate, Yang [1] presented a geometric probability randomized Kaczmarz (GPRK) algorithm that determines a subset ${J}_{k}$ of $\left[m\right]=\left\{1,2,\cdots,m\right\}$ by a given rule such that ${\left({d}_{i}^{k}\right)}^{2}$ is larger than a prescribed threshold for every row index $i\in {J}_{k}$, where

${d}_{i}^{k}=\Vert \frac{{b}^{\left(i\right)}-{A}^{\left(i\right)}{x}_{k}}{{\Vert {A}^{\left(i\right)}\Vert}_{2}^{2}}{\left({A}^{\left(i\right)}\right)}^{\text{T}}\Vert \mathrm{.}$ (5)

Generally, the block Kaczmarz methods [8] [9] select some rows from the coefficient matrix *A* to construct a block matrix and compute the Moore-Penrose pseudoinverse of the block at each iteration. However, computing the Moore-Penrose pseudoinverse of a matrix is expensive, which severely increases the CPU time of the algorithm.

Gower and Richtárik proposed a Gaussian Kaczmarz (GK) method [10] whose iteration process is defined by

${x}_{k+1}={x}_{k}+\frac{{\zeta}_{k}^{\text{T}}\left(b-A{x}_{k}\right)}{{\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}}{A}^{\text{T}}{\zeta}_{k}\mathrm{,}$ (6)

where ${\zeta}_{k}$ is a Gaussian vector with mean $0\in {\mathbb{R}}^{m}$ and covariance matrix $I\in {\mathbb{R}}^{m\times m}$, *i.e.*, ${\zeta}_{k}\sim N\left(0,I\right)$ with *I* the identity matrix. The expected linear convergence rate was analyzed in the case that *A* has full column rank. The idea of the GK method is also used in [11] to avoid computing the pseudoinverses of submatrices. We adopt this iteration process in our method.
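One GK step (6) can be sketched as follows (a Python/NumPy illustration; the function name is ours). Each step projects the error orthogonally onto the direction ${A}^{\text{T}}{\zeta}_{k}$, so the error norm never increases:

```python
import numpy as np

def gk_step(A, b, x, rng):
    """One Gaussian Kaczmarz iteration, Eq. (6)."""
    zeta = rng.standard_normal(A.shape[0])   # zeta_k ~ N(0, I)
    g = A.T @ zeta                           # search direction A^T zeta_k
    return x + (zeta @ (b - A @ x)) / (g @ g) * g
```

Note that no pseudoinverse is required: each step costs two matrix-vector products.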

In this paper, we improve the GPRK method in [1] and present a geometric Gaussian Kaczmarz (GGK) method. The GGK introduces a new block strategy and uses the iteration process of the GK method in (6). The block set strategy of GGK is more efficient than that in [1] because it captures as many rows of the system matrix as possible to participate in the projection. This is illustrated in Section 3.

The rest of this paper is arranged as follows. A geometric Gaussian Kaczmarz algorithm is presented in Section 2. Its convergence is also proved. Section 3 shows several numerical examples for the proposed method and Section 4 draws some conclusions.

2. A Geometric Gaussian Kaczmarz Algorithm

This section describes a geometric Gaussian Kaczmarz (GGK) algorithm to compute the solution of (1). Algorithm 1 summarizes the GGK algorithm. Steps 2, 3 and 4 determine the block control sequence ${\left\{{\tau}_{k}\right\}}_{k\ge 0}$; this strategy is simpler than and different from that in [1], which selects a row with probability

$Pr\left(row={i}_{k}\right)=\frac{\underset{1\le i\le m}{\mathrm{max}}{\left({d}_{i}^{k}\right)}^{2}}{{\displaystyle {\sum}_{i=1}^{m}{\left({d}_{i}^{k}\right)}^{2}}}\mathrm{.}$

Steps 5 and 6 give the iteration process of the GGK method.
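Since Algorithm 1 appears only as a figure in the original, the following Python/NumPy sketch reconstructs it from the convergence proof below; the threshold rule for ${\tau}_{k}$ (keep row *i* when ${\left({d}_{i}^{k}\right)}^{2}\ge \eta \,\underset{j}{\mathrm{max}}{\left({d}_{j}^{k}\right)}^{2}$) is our reading of the analysis, and the parameter names are ours:

```python
import numpy as np

def ggk(A, b, eta=0.3, tol=1e-6, max_iters=10000):
    """Geometric Gaussian Kaczmarz (GGK) sketch, reconstructed from the proof:
    tau_k keeps rows whose (d_i^k)^2 is within a factor eta of the largest,
    and zeta_k stacks the corresponding residual entries (Theorem 1)."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms2 = np.sum(A * A, axis=1)      # ||A^(i)||_2^2
    b_norm = np.linalg.norm(b)
    for _ in range(max_iters):
        r = b - A @ x
        if np.linalg.norm(r) / b_norm < tol:
            break
        d2 = r**2 / row_norms2              # (d_i^k)^2, cf. Eq. (5)
        tau = d2 >= eta * d2.max()          # block set (assumed threshold rule)
        zeta = np.where(tau, r, 0.0)        # zeta_k = sum_{i in tau_k} r^(i) e_i
        g = A.T @ zeta
        x += (zeta @ r) / (g @ g) * g       # GK-style update, Eq. (6)
    return x
```

Here ${\zeta}_{k}^{\text{T}}\left(b-A{x}_{k}\right)={\Vert {\zeta}_{k}\Vert}_{2}^{2}$ by construction, matching the identity used in the proof.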

The following results show the convergence of Algorithm 1.

Theorem 1. Assume the linear system (1) is consistent. Then the iterative sequence ${\left\{{x}_{k}\right\}}_{k\ge 0}$ generated by Algorithm 1 converges to the least-norm solution ${x}^{\ast}={A}^{\dagger}b$ of the linear system (1). Moreover, the solution error satisfies

${\Vert {x}_{k+1}-{x}^{\mathrm{*}}\Vert}_{2}^{2}\le \left(1-\eta \frac{{\Vert {A}_{{\tau}_{k}}\Vert}_{F}^{2}}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\frac{{\lambda}_{\mathrm{min}}\left({A}^{\text{T}}A\right)}{{\Vert A\Vert}_{F}^{2}}\right){\Vert {x}_{k}-{x}^{\mathrm{*}}\Vert}_{2}^{2}\mathrm{.}$ (7)

*Proof.* According to Algorithm 1, we have

$\begin{array}{c}{x}_{k+1}-{x}^{*}={x}_{k}-{x}^{*}+\frac{{\zeta}_{k}^{\text{T}}\left(b-A{x}_{k}\right)}{{\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}}{A}^{\text{T}}{\zeta}_{k}\\ ={x}_{k}-{x}^{*}-\frac{{\left({A}^{\text{T}}{\zeta}_{k}\right)}^{\text{T}}\left({x}_{k}-{x}^{*}\right)}{{\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}}{A}^{\text{T}}{\zeta}_{k}\\ =\left(I-\frac{{A}^{\text{T}}{\zeta}_{k}{\left({A}^{\text{T}}{\zeta}_{k}\right)}^{\text{T}}}{{\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}}\right)\left({x}_{k}-{x}^{*}\right).\end{array}$

Algorithm 1. A geometric Gaussian Kaczmarz algorithm (GGK).

Denote the projector ${P}_{k}=\frac{{A}^{\text{T}}{\zeta}_{k}{\left({A}^{\text{T}}{\zeta}_{k}\right)}^{\text{T}}}{{\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}}$; then ${P}_{k}$ is an orthogonal projector because ${P}_{k}^{\text{T}}={P}_{k}$ and ${P}_{k}^{2}={P}_{k}$. Thus we have

$\begin{array}{c}{\Vert {x}_{k+1}-{x}^{*}\Vert}_{2}^{2}={\Vert \left(I-\frac{{A}^{\text{T}}{\zeta}_{k}{\left({A}^{\text{T}}{\zeta}_{k}\right)}^{\text{T}}}{{\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}}\right)\left({x}_{k}-{x}^{*}\right)\Vert}_{2}^{2}\\ ={\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-{\Vert \frac{{\zeta}_{k}^{\text{T}}\left(b-A{x}_{k}\right)}{{\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}}{A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}\\ ={\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-\frac{{\left|{\zeta}_{k}^{\text{T}}\left(b-A{x}_{k}\right)\right|}^{2}}{{\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}}\\ ={\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-\frac{{\Vert {\zeta}_{k}\Vert}_{2}^{4}}{{\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}}.\end{array}$

The last equality holds because

${\zeta}_{k}^{\text{T}}\left(b-A{x}_{k}\right)={\displaystyle \underset{i\in {\tau}_{k}}{\sum}}\left({b}^{\left(i\right)}-{A}^{\left(i\right)}{x}_{k}\right){e}_{i}^{\text{T}}\left(b-A{x}_{k}\right)={\displaystyle \underset{i\in {\tau}_{k}}{\sum}}{\left|{b}^{\left(i\right)}-{A}^{\left(i\right)}{x}_{k}\right|}^{2}={\Vert {\zeta}_{k}\Vert}_{2}^{2}\mathrm{.}$

Let ${E}_{k}\in {\mathbb{R}}^{m\times \left|{\tau}_{k}\right|}$ denote the matrix whose columns are, in order, the vectors ${e}_{i}\in {\mathbb{R}}^{m}$ with $i\in {\tau}_{k}$; then ${A}_{{\tau}_{k}}={E}_{k}^{\text{T}}A$. Denoting ${\stackrel{^}{\zeta}}_{k}={E}_{k}^{\text{T}}{\zeta}_{k}$, we have

${\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}={\zeta}_{k}^{\text{T}}A{A}^{\text{T}}{\zeta}_{k}={\stackrel{^}{\zeta}}_{k}^{\text{T}}{E}_{k}^{\text{T}}A{A}^{\text{T}}{E}_{k}{\stackrel{^}{\zeta}}_{k}={\stackrel{^}{\zeta}}_{k}^{\text{T}}{A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}{\stackrel{^}{\zeta}}_{k}={\Vert {A}_{{\tau}_{k}}^{\text{T}}{\stackrel{^}{\zeta}}_{k}\Vert}_{2}^{2}\mathrm{,}$

moreover,

${\Vert {A}^{\text{T}}{\zeta}_{k}\Vert}_{2}^{2}\le {\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right){\Vert {\stackrel{^}{\zeta}}_{k}\Vert}_{2}^{2}={\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right){\Vert {\zeta}_{k}\Vert}_{2}^{2}.$

It then holds that

$\begin{array}{c}{\Vert {x}_{k+1}-{x}^{*}\Vert}_{2}^{2}\le {\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-\frac{{\Vert {\zeta}_{k}\Vert}_{2}^{2}}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\\ ={\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-\frac{{\displaystyle \underset{i\in {\tau}_{k}}{\sum}}{\left|{b}^{\left(i\right)}-{A}^{\left(i\right)}{x}_{k}\right|}^{2}}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\\ ={\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-\frac{{\displaystyle \underset{i\in {\tau}_{k}}{\sum}}{\left({d}_{i}^{k}\right)}^{2}{\Vert {A}^{\left(i\right)}\Vert}_{2}^{2}}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\\ \le {\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-\frac{\eta \underset{i\in {\tau}_{k}}{\mathrm{max}}{\left({d}_{i}^{k}\right)}^{2}{\displaystyle \underset{i\in {\tau}_{k}}{\sum}}{\Vert {A}^{\left(i\right)}\Vert}_{2}^{2}}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\\ ={\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-\frac{\eta \underset{i\in {\tau}_{k}}{\mathrm{max}}{\left({d}_{i}^{k}\right)}^{2}{\Vert {A}_{{\tau}_{k}}\Vert}_{F}^{2}}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}.\end{array}$

For each $k\ge 0$, since

$\underset{i\in {\tau}_{k}}{\mathrm{max}}{\left({d}_{i}^{k}\right)}^{2}=\underset{i\in {\tau}_{k}}{\mathrm{max}}\frac{{\left|{b}^{\left(i\right)}-{A}^{\left(i\right)}{x}_{k}\right|}^{2}}{{\Vert {A}^{\left(i\right)}\Vert}_{2}^{2}}\ge {\displaystyle \underset{i=1}{\overset{m}{\sum}}}\frac{{\Vert {A}^{\left(i\right)}\Vert}_{2}^{2}}{{\Vert A\Vert}_{F}^{2}}\frac{{\left|{b}^{\left(i\right)}-{A}^{\left(i\right)}{x}_{k}\right|}^{2}}{{\Vert {A}^{\left(i\right)}\Vert}_{2}^{2}}=\frac{{\Vert b-A{x}_{k}\Vert}_{2}^{2}}{{\Vert A\Vert}_{F}^{2}},$

we have

$\begin{array}{c}{\Vert {x}_{k+1}-{x}^{*}\Vert}_{2}^{2}\le {\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-\frac{\eta \frac{{\Vert b-A{x}_{k}\Vert}_{2}^{2}}{{\Vert A\Vert}_{F}^{2}}{\Vert {A}_{{\tau}_{k}}\Vert}_{F}^{2}}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\\ \le {\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}-\eta \frac{{\lambda}_{\mathrm{min}}\left({A}^{\text{T}}A\right)}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\frac{{\Vert {A}_{{\tau}_{k}}\Vert}_{F}^{2}}{{\Vert A\Vert}_{F}^{2}}{\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}\\ =\left(1-\eta \frac{{\lambda}_{\mathrm{min}}\left({A}^{\text{T}}A\right)}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\frac{{\Vert {A}_{{\tau}_{k}}\Vert}_{F}^{2}}{{\Vert A\Vert}_{F}^{2}}\right){\Vert {x}_{k}-{x}^{*}\Vert}_{2}^{2}.\end{array}$

This completes the proof. □

We remark that

$0\le 1-\eta \frac{{\lambda}_{\mathrm{min}}\left({A}^{\text{T}}A\right)}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\frac{{\Vert {A}_{{\tau}_{k}}\Vert}_{F}^{2}}{{\Vert A\Vert}_{F}^{2}}\le \mathrm{1,}$

which means that Algorithm 1 is convergent. In fact,

$\frac{{\Vert {A}_{{\tau}_{k}}\Vert}_{F}^{2}}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\ge \mathrm{1,}$

and

$0<\frac{{\lambda}_{\mathrm{min}}\left({A}^{\text{T}}A\right)}{{\Vert A\Vert}_{F}^{2}}\le 1,$

it follows that

$1-\eta \frac{{\Vert {A}_{{\tau}_{k}}\Vert}_{F}^{2}}{{\lambda}_{\mathrm{max}}\left({A}_{{\tau}_{k}}{A}_{{\tau}_{k}}^{\text{T}}\right)}\frac{{\lambda}_{\mathrm{min}}\left({A}^{\text{T}}A\right)}{{\Vert A\Vert}_{F}^{2}}\le 1-\eta \frac{{\lambda}_{\mathrm{min}}\left({A}^{\text{T}}A\right)}{{\Vert A\Vert}_{F}^{2}}<1.$
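These two factors can be spot-checked numerically. The sketch below (the function name and the choice of row block are ours) verifies that the Frobenius/eigenvalue ratio is at least 1 and that the second factor lies in $(0,1]$ for a random full-column-rank matrix:

```python
import numpy as np

def remark_factors(A, tau):
    """Return the two factors appearing in the remark for a row block A_tau."""
    A_tau = A[tau]
    lam_max = np.linalg.eigvalsh(A_tau @ A_tau.T).max()
    lam_min = np.linalg.eigvalsh(A.T @ A).min()
    # ||A_tau||_F^2 = trace(A_tau A_tau^T) = sum of eigenvalues >= lam_max
    f1 = np.linalg.norm(A_tau, 'fro')**2 / lam_max
    # lam_min(A^T A) <= trace(A^T A) = ||A||_F^2, and > 0 for full column rank
    f2 = lam_min / np.linalg.norm(A, 'fro')**2
    return f1, f2
```

The first bound holds because the trace of a positive semidefinite matrix dominates its largest eigenvalue.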

3. Numerical Experiments

In this section, we apply Algorithm 1 to solve different types of consistent linear systems (1) and compare it with GPRK in [1]. All experiments are carried out using MATLAB (version R2019b) on a personal computer with a 2.0 GHz central processing unit (Intel(R) Core(TM) i7-8565U CPU), 8 GB memory, and a 64-bit Windows operating system.

The coefficient matrix $A\in {\mathbb{R}}^{m\times n}$ is either generated by the MATLAB function $randn\left(m,n\right)$ or taken from the University of Florida sparse matrix collection [12]. To ensure that each linear system (1) with a selected matrix *A* is consistent, we let $b\in {\mathbb{R}}^{m}$ in (1) be generated by $b=A{x}^{\ast}$, where the exact solution ${x}^{\ast}\in {\mathbb{R}}^{n}$ is produced randomly by the MATLAB function randn. The *efficiency* of each method is evaluated by the number of iterations (*IT*) and the CPU time (*CPU*), and the *speed-up of GGK against GPRK* (*SU*) is defined by

$SU=\frac{\text{CPU of GPRK}}{\text{CPU of GGK}}\mathrm{.}$

The *effectiveness* of both methods is measured by the *relative residual* (*RR*) defined by

$RR=\frac{{\Vert b-A{x}_{k}\Vert}_{2}}{{\Vert b\Vert}_{2}}\mathrm{.}$

We set the initial solution ${x}^{0}$ to be the zero vector in all experiments, and the iteration terminates once $RR<{10}^{-6}$. The numerical results of each method shown in this section are arithmetic averages over 50 repeated trials. We set $\eta =0.3$ in GGK for each example.
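The test-problem construction above can be sketched as follows (the helper name is ours, not from the paper); MATLAB's randn is mirrored here with NumPy's standard normal generator, and consistency of (1) holds by construction since $b=A{x}^{\ast}$:

```python
import numpy as np

def make_consistent_system(m, n, rng):
    """Consistent test problem as in Section 3: b = A x* with random A and x*."""
    A = rng.standard_normal((m, n))      # analogue of MATLAB randn(m, n)
    x_star = rng.standard_normal(n)      # exact solution
    return A, A @ x_star, x_star         # (A, b, x*) with Ax* = b exactly
```

The relative residual RR of any candidate solution is then `np.linalg.norm(b - A @ x) / np.linalg.norm(b)`.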

3.1. Experiments with Sparse Matrix

This subsection considers the linear systems (1) with sparse matrices. These matrices include some flat ones in Table 1 and the tall ones (their transposes) in Table 2 from the University of Florida sparse matrix collection [12].

Table 1. Comparison of IT, CPU and SU of the GGK method with those of the GPRK method for different flat sparse systems.

Table 2. Comparison of IT, CPU and SU of the GGK method with those of the GPRK method for different thin sparse systems.

Table 1 and Table 2 list the average number of iterations (IT), computing time (CPU), and the related speed-up (SU) of GGK against GPRK. Figure 1 and Figure 2 plot RR versus IT (left) and CPU (right) of the different methods for solving (1) with the matrices *bibd_16_8* and *crew1^{T}*, respectively. From Table 1 and Table 2 we see that the GGK method needs fewer iterations and less CPU time than the GPRK method in all cases, whether the system (1) is overdetermined or underdetermined. When the matrix is flat, the value of SU lies in the interval (2.70, 628.92), while for the thin matrices the value of SU ranges from 2.49 to 641.08. It is observed from Figure 1 and Figure 2 that the GGK method converges faster than the GPRK method.

3.2. Experiments with Dense Matrix

In this subsection, the test matrices are dense normally distributed random matrices including thin and flat matrices. The sizes of rows and columns of the selected matrices vary from 2000 to 30,000.

Table 3 lists the test results of *IT*, *CPU* and *SU* for flat dense matrices of different sizes. Table 4 displays *IT*, *CPU* and *SU* for thin dense matrices of different sizes. Figure 3 and Figure 4 plot RR versus IT (left) and CPU (right) of the GGK and GPRK methods for solving (1) with the matrices $A=randn\left(2200,30000\right)$ and $A=randn\left(30000,2000\right)$, respectively. From Table 3 and Table 4, we can

Figure 1. RR versus IT (left) and CPU (right) of the GGK method compared with the GPRK method for solving (1) with the matrix *bibd_*16_8.

Figure 2. RR versus IT (left) and CPU (right) of the GGK method compared with the GPRK method for solving (1) with the matrix *crew*1* ^{T}*.

Table 3. Comparison of IT, CPU and SU of the GGK method with that of the GPRK method for different flat dense systems.

Table 4. Comparison of IT, CPU and SU of the GGK method with that of the GPRK method for different thin dense systems.

see that in all cases GGK needs fewer iterations and less CPU time than GPRK does, and the speed-up of GGK against GPRK ranges from tens to hundreds of times, *i.e.*, it varies from 73.30 to 227.15 in the flat case and from 11.39 to 425.19 in the thin case. Similar results to Figure 1

Figure 3. RR versus IT (left) and CPU (right) of the GGK method compared with the GPRK method for solving (1) with the matrix $A=randn\left(\mathrm{2200,30000}\right)$.

Figure 4. RR versus IT (left) and CPU (right) of the GGK method compared with the GPRK method for solving (1) with the matrix $A=randn\left(\mathrm{30000,2000}\right)$.

and Figure 2 are observed in Figure 3 and Figure 4: the GGK method converges much faster than the GPRK method.

4. Conclusion

We have developed a geometric Gaussian Kaczmarz (GGK) algorithm for solving large-scale consistent linear systems and proved its convergence. Numerical experiments show that the GGK algorithm is more efficient and effective than the GPRK algorithm. In future work, we will focus on block Kaczmarz methods for solving ill-posed problems.

Acknowledgements

The authors thank the reviewers for providing some helpful comments. Research by G.H. was supported in part by the Application Fundamentals Foundation of STD of Sichuan (grant 2020YJ0366) and the Opening Project of Sichuan Province University Key Laboratory of Bridge Non-Destruction Detecting and Engineering Computing (grant 2020QZJ03), research by F.Y. was partially supported by NNSF (grant 11501392) and SUSE (grants 2019RC09, 2020RC25), and research by Y.M. Liao was partially supported by the Innovation Fund of Postgraduate of SUSE (grant y2021101).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] |
Yang, X. (2020) A Geometric Probability Randomized Kaczmarz Method for Large Scale Linear Systems. Applied Numerical Mathematics, 164, 139-160. https://doi.org/10.1016/j.apnum.2020.10.016 |

[2] | Kaczmarz, S. (1937) Angenäherte Auflösung von Systemen Linearer Gleichungen. Bulletin International de l’ Académie Polonaise des Sciences et des Lettres, 35, 355-357. |

[3] |
Ihrig, A. and Schmitz, G. (2018) Accelerating Nonlinear Speed of Sound Reconstructions Using a Randomized Block Kaczmarz Algorithm. 2018 IEEE International Ultrasonics Symposium (IUS), Kobe, Japan, 22-25 October 2018, 1-9. https://doi.org/10.1109/ULTSYM.2018.8580199 |

[4] |
Popa, C. and Zdunek, R. (2004) Kaczmarz Extended Algorithm for Tomographic Image Reconstruction from Limited-Data. Mathematics and Computers in Simulation, 65, 579-598. https://doi.org/10.1016/j.matcom.2004.01.021 |

[5] |
Lorenz, D.A., Wenger, S., Schöpfer, F. and Magnor, M. (2014) A Sparse Kaczmarz Solver and a Linearized Bregman Method for Online Compressed Sensing. 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27-30 October 2014, 1347-1351. https://doi.org/10.1109/ICIP.2014.7025269 |

[6] |
Strohmer, T. and Vershynin, R. (2009) A Randomized Kaczmarz Algorithm with Exponential Convergence. Journal of Fourier Analysis and Applications, 15, Article No. 262. https://doi.org/10.1007/s00041-008-9030-4 |

[7] |
Bai, Z.Z. and Wu, W.T. (2018) On Greedy Randomized Kaczmarz Method for Solving Large Sparse Linear Systems. SIAM Journal on Scientific Computing, 40, A592-A606. https://doi.org/10.1137/17M1137747 |

[8] |
Needell, D. and Tropp, J.A. (2014) Paved with Good Intentions: Analysis of a Randomized Block Kaczmarz Method. Linear Algebra and Its Applications, 441, 199-221. https://doi.org/10.1016/j.laa.2012.12.022 |

[9] |
Niu, Y.Q. and Zheng, B. (2020) A Greedy Block Kaczmarz Algorithm for Solving Large-Scale Linear Systems. Applied Mathematics Letters, 104, Article No. 106294. https://doi.org/10.1016/j.aml.2020.106294 |

[10] |
Gower, R.M. and Richtárik, P. (2015) Randomized Iterative Methods for Linear Systems. SIAM Journal on Matrix Analysis and Applications, 36, 1660-1690. https://doi.org/10.1137/15M1025487 |

[11] |
Chen, J.Q. and Huang, Z.D. (2021) On a Fast Deterministic Block Kaczmarz Method for Solving Large-Scale Linear Systems. Numerical Algorithms. https://doi.org/10.1007/s11075-021-01143-4 |

[12] |
Davis, T.A. and Hu, Y. (2011) The University of Florida Sparse Matrix Collection. ACM Transactions on Mathematical Software, 38, 1-25. https://doi.org/10.1145/2049662.2049663 |


Copyright © 2022 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.