1. Introduction
Many large scientific and industrial applications, such as chemistry, structural analysis and control, radar cross-section calculation in electromagnetism, wave scattering and wave propagation in acoustics, and multiple source locations in seismic and parametric studies in general, require the solution of a sequence of linear systems with several right-hand sides given simultaneously:
AX = B, (1.1)

where the matrix A is nonsingular and typically sparse, X and B have s columns, and the dimension N is large in many practical problems. Direct methods, although attractive for multiple right-hand sides, cannot be used because of the size of the matrix; iterative methods provide the only practical means of solving these problems. Block Krylov approaches are attractive not only because of their numerical features (a larger search space), but also from a computational viewpoint, as they enable the use of BLAS3-like implementations.
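The computational remark above can be illustrated with a small sketch (illustrative data, not from the paper): with s right-hand sides collected as the columns of a block, one BLAS3 matrix-matrix product replaces s separate BLAS2 matrix-vector products.

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 100, 4
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, s))

# s matrix-vector products, one column at a time (BLAS2 style)
W_blas2 = np.column_stack([A @ B[:, j] for j in range(s)])

# a single matrix-matrix product over the whole block (BLAS3 style)
W_blas3 = A @ B

# both compute the same block, but the second form exploits cache reuse
assert np.allclose(W_blas2, W_blas3)
```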
To overcome the disadvantage of using the residual error as a stopping criterion in block Krylov approaches, we proposed the BMinPert algorithm [1], based on Fischer's theorem [2] and on the condition under which the left and right equalities of the Neumann inequality [3] are satisfied, to treat nonsymmetric linear systems with multiple right-hand sides. This method makes it possible to analyze the behavior of block Krylov subspace methods on problems with sensitive dependence on the data, and it generalizes the MinPert method [4] to linear systems with multiple right-hand sides.
The focus of this paper is on preconditioning techniques for improving the performance and reliability of Krylov subspace methods. It is widely recognized that preconditioning is the most critical ingredient in the development of efficient solvers for challenging problems in scientific computation, and its importance is likely to increase further. Color image restoration is an important problem in image processing, and our algorithms apply, for instance, to multi-channel image restoration when the image degradation model is described by a linear system of equations with multiple right-hand sides contaminated by errors. We therefore discuss the application of the BMinPert algorithm and its preconditioned variants to color image restoration. The purpose of this paper is to develop preconditioning techniques for the BMinPert algorithm that handle ill-posed problems with sensitive dependence on the original data, and to apply them to color image processing.
The following is a brief synopsis of this paper. Section 2 presents the preconditioned BMinPert algorithm. Section 3 analyses its practical implementation. A posteriori backward errors for BGMRES [5] [6] [7] [8] are then given in Section 4. Applications to color image restoration are discussed in Section 5. The key differences between BMinPert and other methods such as BFGMRES-S(m, pf) [6], GsGMRES [9], Bl-BiCG-rQ [10] and BGMRES are illustrated with numerical experiments in Section 6. Finally, conclusions appear in Section 7.
Throughout the paper we make use of the Frobenius norm ||A||_F for matrices, which corresponds to the Euclidean norm ||a||_2 for a vector a. We designate the exact solution by X_*, while X̃ is used to denote an approximate solution. The notation (A, B) will be used for the pair of matrices associated with the generalized eigenvalue problem Ax = λBx. We use σ_i(A) to denote the ith singular value of a matrix A; furthermore, a_i denotes its ith column, while a_{ij} denotes the jth element of a_i. The matrix E_s denotes the first s columns of the identity matrix I. U_s denotes the set of unitary matrices of grade s. Finally, the superscript H denotes the complex conjugate (the conjugate transpose, when applied to vectors or matrices).
2. Preconditioned BMinPert Algorithm
Among block Krylov subspace methods, an essential component is the block Arnoldi procedure [11] [12] [13]. The columns of the matrix V_m = [V_1, V_2, ..., V_m] form an orthonormal basis of the block Krylov subspace K_m(A, R_0) = span{R_0, AR_0, ..., A^{m-1}R_0}. To this end, one employs the block Arnoldi process to compute the matrices V_k for k = 1, ..., m + 1. By construction the Arnoldi process yields a block upper Hessenberg matrix H̄_m which satisfies

AV_m = V_m H_m + V_{m+1} H_{m+1,m} E_m^H, (2.1)

AV_m = V_{m+1} H̄_m, (2.2)

and from which H_m is obtained by deleting the last s rows of H̄_m. Throughout the paper we require the blocks H_{k+1,k} for k = 1, ..., m to be nonsingular, where H_{k+1,k} is the (k+1, k) block of H̄_m.
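The process described above can be sketched as follows (a standard block Arnoldi formulation with block modified Gram-Schmidt; the paper's own listing is not reproduced here, so implementation details may differ):

```python
import numpy as np

def block_arnoldi(A, R0, m):
    """Build an orthonormal basis of the block Krylov subspace K_m(A, R0)
    and the block upper Hessenberg matrix Hbar with A V_m = V_{m+1} Hbar."""
    N, s = R0.shape
    V1, _ = np.linalg.qr(R0)          # orthonormal basis of the first block
    V = [V1]
    H = np.zeros(((m + 1) * s, m * s))
    for k in range(m):
        W = A @ V[k]
        for i in range(k + 1):        # block modified Gram-Schmidt sweep
            Hik = V[i].T @ W
            H[i*s:(i+1)*s, k*s:(k+1)*s] = Hik
            W = W - V[i] @ Hik
        Q, Rk = np.linalg.qr(W)       # next orthonormal block V_{k+1}
        H[(k+1)*s:(k+2)*s, k*s:(k+1)*s] = Rk
        V.append(Q)
    return np.hstack(V), H

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 40))
R0 = rng.standard_normal((40, 3))
Vfull, Hbar = block_arnoldi(A, R0, 4)   # Vfull collects V_1, ..., V_5
Vm = Vfull[:, :12]                      # first m*s = 12 columns
```

The final blocks satisfy the relation (2.2) exactly, which a quick check of A Vm against Vfull Hbar confirms.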
It is well known that the performance of Krylov subspace methods may be enhanced by the application of preconditioners. At present, much effort is directed at developing efficient preconditioning techniques that achieve good convergence rates for Krylov subspace methods; Saad [7] presented a few novel approaches well suited to parallel machines. Here, attention is focused on right preconditioning by a matrix M, which has the property of making the modified system

AM^{-1}Y = B, with X = M^{-1}Y,

easier to solve than the original problem stated in (1.1). Suppose that the initial solution estimate is X_0; we then wish to compute a solution X_m with residual R_m = B − AX_m such that

(A + ΔA)X_m = B + ΔB, (2.3)

which minimises ||[ΔA, ΔB]||_F. Let R_0 = B − AX_0, and let R_0 = V_1T be a rank-revealing QR decomposition of R_0 (we suppose throughout the paper that R_0 has full column rank), where V_1 has orthonormal columns and T is upper triangular. In this setting, the Arnoldi process computes an orthogonal basis for the preconditioned Krylov subspace K_m(AM^{-1}, R_0).
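The effect of right preconditioning can be illustrated on a small dense example (assumed data; the diagonal choice of M is purely illustrative). Note that the residual of the right-preconditioned system coincides with that of the original system, which is why the stopping test is unaffected:

```python
import numpy as np

rng = np.random.default_rng(2)
N, s = 50, 3
A = rng.standard_normal((N, N)) + N * np.eye(N)   # well-conditioned test matrix
B = rng.standard_normal((N, s))
M = np.diag(np.diag(A))                           # illustrative diagonal preconditioner

# Right preconditioning: solve (A M^{-1}) Y = B, then recover X = M^{-1} Y.
Minv = np.linalg.inv(M)
Y = np.linalg.solve(A @ Minv, B)
X = Minv @ Y

# The residual B - A X equals B - (A M^{-1}) Y: right preconditioning changes
# the operator the Krylov method sees, but not the residual being monitored.
```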
As was demonstrated in Proposition 3.1 in [1], one may parameterise all perturbations which satisfy (2.3) by a set of the form

(2.4)

It is easy to see that the BMinPert approximate solution may be regarded as minimizing the perturbations of the data A and B within a single algorithm. Observe that P is the only free parameter in (2.4); thus a natural objective is to find the element of smallest Frobenius norm in this set. Following the development in Section 3 in [1], one may demonstrate that the smallest element (in the Frobenius norm) is attained for the quantities given in the following theorem.
Theorem 2.1. Suppose that m steps of the block Arnoldi process have been taken. Let (λ_i, u_i), i = 1, ..., s, be the smallest eigenpairs of the associated generalized eigenvalue problem, with the eigenvalues ordered increasingly. If the matrix built from these eigenvectors is nonsingular, then the minimizing solution to (1.1) is given by

(2.5)

Furthermore,

(2.6)

The matrices appearing in (2.5) and (2.6) are defined such that

(2.7)
An outline of the proposed method is given as Algorithm 2.1.
Algorithm 2.1. PBMinPert.
Clearly, practical difficulties arise as the number of iterations k increases: the number of matrices requiring storage grows like k, and the number of multiplications grows correspondingly faster. To remedy this difficulty, we can restart the algorithm every m steps, where m is a fixed integer parameter. This restarted version of PBMinPert, denoted PBMinPert(m), is omitted here.
In practice, appropriate selection of m is essential: a small value of m leads to an inaccurate solution, while a large value may consume computational resources without improving the accuracy of the approximate solution. Key to any iterative process are computable a posteriori error expressions, which are used to gauge the algorithm's progress as m increases.
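The restart logic can be sketched as follows (schematic only: the inner m-step cycle below is a simple stand-in iteration, not the actual PBMinPert cycle, and the tolerance is an assumed value):

```python
import numpy as np

def restarted_solve(A, B, X0, m, cycles, inner_step, tol=1e-10):
    """Restart skeleton: every m inner steps the iteration restarts from the
    current approximation, so storage stays bounded by the cycle length m.
    (Schematic; inner_step stands in for one step of the actual method.)"""
    X = X0
    for _ in range(cycles):
        for _ in range(m):                      # one cycle of m inner steps
            X = inner_step(A, B, X)
        if np.linalg.norm(B - A @ X) < tol:     # a posteriori stopping test
            break
    return X

rng = np.random.default_rng(3)
N, s = 30, 2
A = rng.standard_normal((N, N)) + N * np.eye(N)   # diagonally dominant test matrix
B = rng.standard_normal((N, s))
D = np.diag(np.diag(A))
jacobi = lambda A_, B_, X_: X_ + np.linalg.solve(D, B_ - A_ @ X_)  # stand-in step
X = restarted_solve(A, B, np.zeros((N, s)), m=5, cycles=50, inner_step=jacobi)
```

The residual test between cycles is exactly the kind of computable a posteriori expression the paragraph above refers to.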
3. Practical Implementation
It is important to observe that to evaluate the minimum in (2.6), one computes the s smallest eigenvalues, and the associated eigenvectors, of the generalised eigenvalue problem (2.8). In a practical implementation, one would therefore seek to compute the s smallest eigenvalues of the matrix pair without evaluating all of the eigenvalues. Whether the PBMinPert solution is unique is determined by the multiplicity of the smallest eigenvalue. The following lemma is needed in the derivation of the main result of this section.

Lemma 3.1. Suppose that m steps of the block Arnoldi process have been taken. Then the second matrix of the pair in (2.8) is nonsingular for any choice of the initial solution estimate X_0.
Proof. The proof is similar to that of Lemma 3.1 in [1] and we omit it here.
A consequence of Lemma 3.1 is that (2.8) is a well-posed generalized eigenvalue problem. By Lemma 3.1, (2.8) reduces to the singular value problem

(3.1)

in which the right and left singular vectors and the associated singular value appear, and where the matrix on the left-hand side stems from a Cholesky factorisation. The cost of forming the left-hand side of (3.1) may be controlled by saving the vectors generated during the Arnoldi process into a matrix; such measures enable one to update the matrix entries efficiently. Finally, the GMBACK solution is concisely expressed in terms of the singular vectors of (3.1). A key feature of GMBACK is that it permits flexible preconditioning; namely, the preconditioner may vary at each step of the Arnoldi process without requiring additional storage.
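The flexible mechanism can be sketched in the single right-hand side case (an FGMRES-style formulation, not the paper's own listing; the varying diagonal preconditioner M_k and all data below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
N, m = 40, 10
A = rng.standard_normal((N, N)) + N * np.eye(N)
b = rng.standard_normal(N)

V = np.zeros((N, m + 1))       # orthonormal Arnoldi basis
Z = np.zeros((N, m))           # preconditioned directions Z_k = M_k^{-1} V_k
H = np.zeros((m + 1, m))
beta = np.linalg.norm(b)
V[:, 0] = b / beta
for k in range(m):
    Mk = np.diag(np.diag(A)) * (1.0 + 0.1 * k)   # hypothetical varying preconditioner
    Z[:, k] = np.linalg.solve(Mk, V[:, k])
    w = A @ Z[:, k]
    for i in range(k + 1):                        # modified Gram-Schmidt
        H[i, k] = V[:, i] @ w
        w = w - H[i, k] * V[:, i]
    H[k + 1, k] = np.linalg.norm(w)
    V[:, k + 1] = w / H[k + 1, k]

# FGMRES-style least-squares step: x = Z y minimising ||beta e_1 - Hbar y||
e1 = np.zeros(m + 1)
e1[0] = beta
y, *_ = np.linalg.lstsq(H, e1, rcond=None)
x = Z @ y
```

By construction A Z = V H holds exactly, so the small least-squares problem controls the true residual norm.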
Then one can obtain the flexible BMinPert(m) algorithm as Algorithm 3.1.
Algorithm 3.1. Flexible BMinPert(m).
Finally, observe that the preconditioned basis has linearly independent columns since M is nonsingular; the corresponding Gram matrix is therefore positive definite, and thus the Cholesky factors and the left-hand side of (3.2) are well defined provided the relevant matrix is nonsingular. A singular matrix at this point indicates that the exact solution has been reached, which signals finite termination.

Of course, one can also use other methods to solve the generalized eigenvalue problem.
4. A Posteriori Backward Error for BGMRES
In a practical implementation, the quality of the approximate solution may be gauged by periodically evaluating the a posteriori backward error norm of Theorem 2.1. Observe that the most expensive part of this evaluation consists of the forming and normalising steps. Under an additional condition, however, the minimum backward error norm simplifies to an expression that is easier to compute.

Next, we consider backward error propagation for the BGMRES linear system solver. It is shown that the BGMRES solution satisfies a perturbed set of linear equations of the form (A + ΔA)X̃ = B + ΔB; a parameterisation of all such perturbations is then given, and a computable expression for the smallest perturbation is derived.
Theorem 4.1. Suppose that m steps of the block Arnoldi process have been taken and that the block GMRES solution X_m is given. Then the set of all perturbations ΔA, ΔB such that (A + ΔA)X_m = B + ΔB may be parameterised analogously to (2.4), in which the matrix denoting the last s columns of the identity matrix appears. The smallest perturbation (in the Frobenius norm) then satisfies an expression in which R_m = B − AX_m is the block GMRES residual error.

Proof. The proof is essentially the same as that of Theorem 2.2 in [1], with the block GMRES residual R_m used in place of the BMinPert residual.
Finally, if we consider backward error propagation for the other linear system solvers indicated above, we find that the conclusions are very similar to the result shown in Theorem 4.1, with the corresponding approximate solution used in place of the block GMRES solution.
5. Applications to Color Image Processing
It is well known that each pixel of a color image is a combination of three basic colors, red, green and blue, in fixed proportion. A quaternion q = q_0 + q_1 i + q_2 j + q_3 k consists of one real and three imaginary parts. Exploiting this structure, Pei first gave a quaternion model for color images in 1996 [14]. In this model, the red, green and blue values of each pixel are represented as a single pure imaginary quaternion-valued pixel, and an M × N color image is then represented as a pure imaginary quaternion image F(x, y) = R(x, y)i + G(x, y)j + B(x, y)k, where R(x, y), G(x, y) and B(x, y) are respectively the red, green and blue components of the pixel at position (x, y) in the image F. Thus, the color image is an M × N pure imaginary quaternion matrix over the quaternion field. Image restoration is the process of removing or minimizing degradations in an observed image. A linear discrete model of image restoration is the matrix-vector equation g = Kf + n, where g is the observed image, f is the true or ideal image, n is additive noise, and K is a matrix that represents the blurring phenomena. Image restoration methods attempt to construct an approximation to f given g, K, and in some cases statistical information about the noise.
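The degradation model g = Kf + n can be illustrated on a 1-D signal (all data below are assumptions; a small Tikhonov term stands in for the stabilisation that a practical restoration would use, and a 3-point moving average stands in for a motion blur matrix):

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix = 64
f = np.zeros(n_pix)
f[20:40] = 1.0                                   # ideal piecewise-constant signal

# K: 3-point moving-average blur (a stand-in for a motion blur matrix)
K = np.zeros((n_pix, n_pix))
for i in range(n_pix):
    for j in (i - 1, i, i + 1):
        K[i, j % n_pix] = 1.0 / 3.0

g = K @ f + 1e-3 * rng.standard_normal(n_pix)    # observed = blurred + noise

# Tikhonov-regularised restoration: minimise ||K f - g||^2 + lam ||f||^2
lam = 1e-4
f_rest = np.linalg.solve(K.T @ K + lam * np.eye(n_pix), K.T @ g)
```

The regularisation parameter trades fidelity to g against noise amplification through the near-singular directions of K.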
In color image processing, one would often like to segment an image into several parts and analyze them independently. As noted above, a color image can be represented by a pure imaginary quaternion matrix. In the following example, we use F to denote the image matrix of "Onion" (198 × 135) as shown in Figure 1. Each column vector of F can be regarded as a quaternion signal, so we obtain data samples of 198 quaternion signals.
By applying the filter fspecial('motion', len, theta), we blur the signal f to obtain a blurred signal g; we then add noise to disturb the quaternion signal f, or disturb the image f so that g = Kf + n. We compute pure imaginary quaternion solutions of

(5.1)

and thereby obtain the restored quaternion signals. In these cases the matrices K can be real, so system (5.1) can be treated as a nonsymmetric linear system with multiple right-hand sides. We first blur these quaternion signals and then restore them by the BMinPert and PBMinPert algorithms.
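The reduction to a system with multiple right-hand sides can be sketched as follows (assumed toy data: the three columns of F play the roles of the red, green and blue channel signals, and a generic real matrix K stands in for the blur):

```python
import numpy as np

rng = np.random.default_rng(6)
n_pix = 32
K = np.eye(n_pix) + 0.05 * rng.standard_normal((n_pix, n_pix))  # real blur-like matrix
F_true = rng.random((n_pix, 3))     # columns: red, green and blue channel signals
G = K @ F_true                      # blurred multi-channel observation

# One real nonsymmetric system K F = G with s = 3 right-hand sides
# restores all three channels at once.
F = np.linalg.solve(K, G)
```

Because K is real and shared across channels, block methods can amortise the Krylov basis over all three right-hand sides.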
By the BMinPert and PBMinPert algorithms, we compute the solutions of (5.1) and obtain the restored quaternion signals. The restored images are then compared, as shown in Figure 1: PBMinPert reaches the required accuracy within an average of 9 iterations, while BMinPert takes an average of 45 iterations. For "Coffee" (1920 × 1080), as shown in Figure 2, PBMinPert and BMinPert reach the required accuracy within an average of 11 and 136 iterations, respectively. We conclude that the larger the dimension of the matrix, the greater the advantage of the PBMinPert algorithm.
6. Numerical Example
All experiments are performed in Matlab R2015a with unit roundoff 2.2204e−16. Throughout, attention is focused on the evolution of the true relative residual. The suite of numerical experiments solves several sensitive linear systems with a strongly nonnormal A. The right-hand side and the starting approximate solution in Example 6.1 have been selected to highlight the advantages of the minimum perturbation approach.
In the following figures, PBMinPert denotes BMinPert with a preconditioner, and similarly PBFGMRES denotes preconditioned BFGMRES. Moreover, we take a preconditioner M built from the splitting of A into L, U and D, where L is the strict lower part of A, U is the strict upper part of A, and D is the diagonal of A, following the choices used in the numerical examples of [6].
Figure 3. Example 6.1: norm of the residual for PBFGMRES, BFGMRES, PBMinPert, BMinPert, GsGMRES and Bl-BiCG-rQ, for two parameter settings, panels (a) and (b).
Following the numerical examples in [6], we also use an inner iteration number for the PBFGMRES and BFGMRES algorithms, whereas the PBMinPert and BMinPert algorithms are used without restart.
Example 6.1. Consider the linear system with the coefficient matrix A described above, where B and the initial solution X_0 are random matrices. The condition number of A is about 8.3776e+04.
From the two plots in Figure 3 we see that PBMinPert and BMinPert converge to the requested tolerance after 26 iterations, except when the number of right-hand sides s is 20, whereas PBFGMRES and BFGMRES stagnate with respect to the relative forward error. As s increases, the convergence curves of PBMinPert and BMinPert draw closer and closer together.
7. Conclusion
This paper presents the preconditioned BMinPert algorithm and analyses its practical implementation. A posteriori backward errors for BGMRES are then given, and applications to color image restoration are discussed. The key differences between BMinPert and other methods such as BFGMRES-S(m, pf), GsGMRES, Bl-BiCG-rQ, BGMRES and BArnoldi are illustrated with numerical experiments, which demonstrate the advantages of BMinPert in the presence of sensitive data and ill-conditioned problems.
Acknowledgements
The authors would like to thank the anonymous reviewers for their insightful comments and suggestions that greatly improved the quality of this paper.
Funding
This work was supported by the National Natural Science Foundation of China, Grant/Award Number: 11571004 and 11701456; Natural Science Foundation of Qinghai Province, Grant/Award Number: 2017-ZJ-908, 2018-ZJ-717, and 2019XJZ10.