Preconditioned BMinPert Algorithms for Matrix Equation AX = B and Their Applications in Color Image Restoration

Abstract

The purpose of this paper is to present the preconditioned BMinPert algorithm and to analyse its practical implementation. A posteriori backward error for BGMRES is then given. Furthermore, we discuss their applications in color image restoration. The key differences between BMinPert and other methods such as BFGMRES-S(m, pf), GsGMRES and BGMRES are illustrated with numerical experiments, which demonstrate the advantages of BMinPert for ill-conditioned problems with sensitive data.

Share and Cite:

Guo, C. and Yang, Z. (2023) Preconditioned Bminpert Algorithms for Matrix Equation AX = B and Their Applications in Color Image Restoration. Open Journal of Applied Sciences, 13, 461-471. doi: 10.4236/ojapps.2023.133037.

1. Introduction

Many large scientific and industrial applications, such as chemistry, structural analysis and control, radar cross section calculation in electromagnetism, wave scattering and wave propagation in acoustics, and multiple source locations in seismic and parametric studies in general, require the solution of a sequence of linear systems with several right-hand sides given simultaneously:

$AX = B \quad \text{for } A \in \mathbb{R}^{N \times N} \text{ and } B, X \in \mathbb{R}^{N \times s}, \qquad (1.1)$

where the matrix A is nonsingular and typically sparse, and N is large in many practical problems. Direct methods, although attractive for multiple right-hand sides, cannot be used because of the size of the matrix; iterative methods provide the only practical means of solving such problems. Block Krylov approaches are attractive not only because of this numerical feature (a larger search space), but also from a computational viewpoint, as they enable the use of BLAS3-like implementations.

To overcome the disadvantage of using the residual error as a stopping criterion in block Krylov approaches, we [1] proposed the BMinPert algorithm, based on Fischer's theorem [2] and on the conditions under which the left and right equalities of the von Neumann inequality [3] are attained, to treat nonsymmetric linear systems with multiple right-hand sides. This method makes it possible to analyze the behavior of block Krylov subspace methods on problems with sensitive dependence on the data, and it generalizes the MinPert method [4] to linear systems with multiple right-hand sides.

The focus of this paper is on preconditioning techniques for improving the performance and reliability of Krylov subspace methods. It is widely recognized that preconditioning is the most critical ingredient in the development of efficient solvers for challenging problems in scientific computation, and its importance is destined to increase even further. Color image restoration is an important problem in image processing, and our algorithms can be applied, for instance, to multi-channel image restoration when the image degradation model is described by a linear system of equations with multiple right-hand sides contaminated by errors. The applications of the BMinPert algorithm and its preconditioning techniques to color image restoration are therefore discussed. The purpose of this paper is to present a preconditioning technique for the BMinPert algorithm to deal with ill-posed problems that are sensitive to the original data, and to apply it to color image processing.

The following is a brief synopsis of this paper. Section 2 presents the preconditioned BMinPert algorithm. Section 3 analyses its practical implementation. A posteriori backward error for BGMRES [5] [6] [7] [8] is given in Section 4. Applications to color image restoration are discussed in Section 5. The key differences between BMinPert and other methods such as BFGMRES-S(m, pf) [6], GsGMRES [9], Bl-BiCG-rQ [10] and BGMRES are illustrated with numerical experiments in Section 6. Finally, the conclusions appear in Section 7.

Throughout the paper we make use of the Frobenius norm for matrices, $\|A\|_F = \mathrm{tr}(A^TA)^{1/2}$, which corresponds to the Euclidean norm $\|a\|_2$ for a vector a. We designate the exact solution by $X^* = A^{-1}B$, while $X_m$ is used to denote an approximate solution. The notation $(A, B)$ will be used for a pair of matrices associated with the generalized eigenvalue problem $Ax = \lambda Bx$. We use $\sigma_i(A)$ to denote the ith singular value of the matrix A; furthermore, $a_i$ denotes its ith column, while $a_{i,j}$ denotes the jth element of $a_i$. The matrix $E_1$ denotes the first s columns of the identity matrix $I_{(m+1)s}$. $\mathcal{U}_s$ denotes the set of unitary matrices of order s. Finally, a superscript $*$ denotes the complex conjugate (conjugate transpose, when applied to vectors or matrices).

2. Preconditioned BMinPert Algorithm

Among block Krylov subspace methods, an essential component is the block Arnoldi procedure [11] [12] [13]. The columns of the matrix $V_m = (V_1, \ldots, V_m)$ form an orthonormal basis of the block Krylov subspace $\mathcal{K}_m = \mathrm{span}\{V_1, AV_1, \ldots, A^{m-1}V_1\}$. To this end, one employs the block Arnoldi process to compute a matrix $V_m = (V_1, V_2, \ldots, V_m)$, $V_i \in \mathbb{R}^{N \times s}$ for $i = 1, \ldots, m$. By construction, the Arnoldi process yields a block upper Hessenberg matrix $\tilde{H}_m \in \mathbb{R}^{(m+1)s \times ms}$ which satisfies

$AV_m = V_{m+1}\tilde{H}_m \qquad (2.1)$

$\phantom{AV_m} = V_mH_m + V_{m+1}H_{m+1,m}E_m^T \qquad (2.2)$

and from which $H_m$ is obtained by deleting the last s rows. We require throughout the paper that $H_{j+1,j} \in \mathbb{R}^{s \times s}$ be nonsingular for $j = 1, 2, \ldots, m$, where $H_{j+1,j}$ is the $(j+1, j)$ block of $\tilde{H}_m$.
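As a concrete illustration, the block Arnoldi recurrence can be sketched in a few lines of NumPy (an illustrative translation, not the paper's code): the inner loop orthogonalises $AV_j$ against the previous blocks, and a QR factorisation supplies the next block together with the subdiagonal block of $\tilde{H}_m$.

```python
import numpy as np

def block_arnoldi(A, V1, m):
    """Block Arnoldi: returns orthonormal V = (V_1, ..., V_{m+1}) of size
    N x (m+1)s and block upper Hessenberg Ht of size (m+1)s x ms with
    A V_m = V_{m+1} Ht, cf. (2.1)."""
    N, s = V1.shape
    V = np.zeros((N, (m + 1) * s)); V[:, :s] = V1
    Ht = np.zeros(((m + 1) * s, m * s))
    for j in range(m):
        W = A @ V[:, j*s:(j+1)*s]
        for i in range(j + 1):                 # orthogonalise against V_1..V_{j+1}
            Hij = V[:, i*s:(i+1)*s].T @ W
            Ht[i*s:(i+1)*s, j*s:(j+1)*s] = Hij
            W -= V[:, i*s:(i+1)*s] @ Hij
        Q, R = np.linalg.qr(W)                 # next block and subdiagonal block
        V[:, (j+1)*s:(j+2)*s] = Q
        Ht[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = R
    return V, Ht

rng = np.random.default_rng(0)
N, s, m = 20, 2, 3
A = rng.standard_normal((N, N))
V1, _ = np.linalg.qr(rng.standard_normal((N, s)))
V, Ht = block_arnoldi(A, V1, m)
assert np.allclose(A @ V[:, :m*s], V @ Ht)     # relation (2.1)
assert np.allclose(V.T @ V, np.eye((m + 1) * s))
```

The QR step assumes $H_{j+1,j}$ nonsingular, as required above; a rank-deficient block would signal deflation or finite termination.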

It is well known that the performance of Krylov subspace methods may be enhanced by the application of preconditioners. Presently, much effort is directed at developing efficient preconditioning techniques which achieve good convergence rates for Krylov subspace methods; Saad [7], for example, presented a few novel approaches well suited to parallel machines. Here, attention is focused on right preconditioning by a matrix M, which has the property of making the modified system

$AM^{-1}(MX) = B$

easier to solve than the original problem stated in (1.1). Suppose that the initial solution estimate is $X_0$; then we wish to compute a solution $X_m = X_0 + M^{-1}V_mY_m$ with $Y_m \in \mathbb{R}^{ms \times s}$ such that

$(A - \Delta A)(X_0 + M^{-1}V_mY_m) = B + \Delta B \qquad (2.3)$

which minimises $\|(\Delta A, \Delta B)\|_F$. Let $R_0 = B - AX_0$ and let $R_0 = V_1B_1$ be a rank-revealing QR decomposition of $R_0$ (we suppose throughout the paper that $R_0$ has full column rank), where $V_1 \in \mathbb{R}^{N \times s}$ and $B_1 \in \mathbb{R}^{s \times s}$. In this setting, the Arnoldi process computes an orthogonal basis for the preconditioned Krylov subspace $\mathcal{K}_m(AM^{-1}, V_1) = \mathrm{span}\{V_1, AM^{-1}V_1, \ldots, (AM^{-1})^{m-1}V_1\}$. As demonstrated in Proposition 3.1 in [1], one may parameterise all perturbations which satisfy (2.3) by the set $\Omega_{X_m}$, where

$\Omega_{X_m} = \{V_{m+1}(\tilde{H}_mY_m - E_1B_1)W^+ + P(I - WW^+) : P \in \mathbb{R}^{N \times (N+s)}\} \qquad (2.4)$

in which $W = (X_m^T, I)^T$ and $X_m = X_0 + M^{-1}V_mY_m$. The BMinPert approximate solution may thus be viewed as minimizing perturbations of the data given in A and B within a single algorithm. Observe that P is the only free parameter in (2.4); a natural objective is therefore to find the element of smallest Frobenius norm in $\Omega_{X_m}$. Following the development in Section 3 of [1], one may demonstrate that the smallest element of $\Omega_{X_m}$ (in the Frobenius norm) is

$\Delta_{A,B,m} = V_{m+1}(\tilde{H}_mY_m - E_1B_1)W^+ = V_{m+1}(\tilde{H}_mY_m - E_1B_1)(I + X_m^TX_m)^{-1}(X_m^T \;\; I),$

where $Y_m$ and $\|\Delta_{A,B,m}\|_F$ are given in the following theorem.

Theorem 2.1. Suppose that m steps of the block Arnoldi process have been taken. Let $\{\lambda_i, z_i\}_{i=1,\ldots,(m+1)s}$ be the eigenpairs of $(L_m^TL_m, G_m^TG_m)$ with $\lambda_i \geq \lambda_{i+1}$ for $i = 1, \ldots, (m+1)s - 1$. Suppose $Z = (Z_1^T \; Z_2^T)^T = (z_{ms+1}, \ldots, z_{(m+1)s})$; if $Z_1 \in \mathbb{R}^{s \times s}$ is nonsingular, then the minimizing solution to (1.1) is given by $X_m = X_0 + M^{-1}V_mY_m$, where

$Y_m = Z_2Z_1^{-1}. \qquad (2.5)$

Furthermore

$\|\Delta_{A,B,m}\|_F^2 = \sum_{i=ms+1}^{(m+1)s} \lambda_i. \qquad (2.6)$

The matrices $L_m \in \mathbb{R}^{(m+1)s \times (m+1)s}$ and $G_m \in \mathbb{R}^{(N+s) \times (m+1)s}$ are defined by

$L_m = (E_1B_1, \tilde{H}_m) \quad \text{and} \quad G_m = \begin{pmatrix} X_0 & M^{-1}V_m \\ I_s & 0 \end{pmatrix}. \qquad (2.7)$
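Theorem 2.1 can be exercised numerically on a small random problem. The sketch below (NumPy/SciPy, with $M = I$ and $X_0 = 0$ for simplicity) fixes the sign convention $L_m = (-E_1B_1, \tilde{H}_m)$, an assumption on our part, under which $Y_m = Z_2Z_1^{-1}$ is consistent with the residual identity $AX_m - B = V_{m+1}(\tilde{H}_mY_m - E_1B_1)$; it then checks (2.6) and the perturbed system (2.3).

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
N, s, m = 30, 2, 3
A = rng.standard_normal((N, N)) + N * np.eye(N)    # generic, safely nonsingular A
B = rng.standard_normal((N, s))
X0 = np.zeros((N, s))
M_inv = np.eye(N)                                  # unpreconditioned case, M = I

# Block Arnoldi on K_m(A M^{-1}, V1), with R0 = V1 B1 from a QR of the residual
V1, B1 = np.linalg.qr(B - A @ X0)
V = np.zeros((N, (m + 1) * s)); V[:, :s] = V1
Ht = np.zeros(((m + 1) * s, m * s))
for j in range(m):
    W = A @ (M_inv @ V[:, j*s:(j+1)*s])
    for i in range(j + 1):
        Hij = V[:, i*s:(i+1)*s].T @ W
        Ht[i*s:(i+1)*s, j*s:(j+1)*s] = Hij
        W -= V[:, i*s:(i+1)*s] @ Hij
    Q, R = np.linalg.qr(W)
    V[:, (j+1)*s:(j+2)*s] = Q
    Ht[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = R

# L_m and G_m as in (2.7); the sign of the E1 B1 block is our assumed convention
E1B1 = np.zeros(((m + 1) * s, s)); E1B1[:s, :] = B1
L = np.hstack([-E1B1, Ht])
G = np.vstack([np.hstack([X0, M_inv @ V[:, :m*s]]),
               np.hstack([np.eye(s), np.zeros((s, m * s))])])

# Generalized eigenproblem (L^T L) z = lambda (G^T G) z, eigenvalues ascending;
# eigh normalises the eigenvectors so that Z^T (G^T G) Z = I
lam, Zall = eigh(L.T @ L, G.T @ G)
Z = Zall[:, :s]                                    # the s smallest eigenpairs
Y = Z[s:, :] @ np.linalg.inv(Z[:s, :])             # Y_m = Z2 Z1^{-1}, cf. (2.5)
Xm = X0 + M_inv @ V[:, :m*s] @ Y

# Check (2.6): the minimal perturbation norm equals the eigenvalue sum,
Wmat = np.vstack([Xm, np.eye(s)])                  # W = (Xm^T, I)^T
Delta = V @ (Ht @ Y - E1B1) @ np.linalg.pinv(Wmat)
assert np.isclose(np.linalg.norm(Delta, 'fro')**2, lam[:s].sum())
# and (2.3): (A - DeltaA) Xm = B + DeltaB with (DeltaA, DeltaB) = Delta
assert np.allclose((A - Delta[:, :N]) @ Xm, B + Delta[:, N:])
```

Since `eigh` returns eigenvectors orthonormal in the $G_m^TG_m$ inner product, the s smallest eigenvalues it reports are exactly the $\lambda_{ms+1}, \ldots, \lambda_{(m+1)s}$ of the theorem's descending ordering.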

An outline of the proposed method is given as Algorithm 2.1 below.

Algorithm 2.1. PBMinPert.

Clearly, practical difficulties arise as the number of iterations k increases: the number of matrices requiring storage grows like k, and the number of multiplications like $\frac{1}{2}k^2N$. To remedy this difficulty, we can restart the algorithm every m steps, where m is some fixed integer parameter. This restarted version of PBMinPert, denoted PBMinPert(m), is omitted here.

In practice, appropriately selecting m is essential: a small value of m leads to an inaccurate solution, while a large value may consume computational resources without improving the accuracy of the approximate solution. Key to any iterative process are computable a posteriori error expressions, which are used to gauge the algorithm's progress as m increases.

3. Practical Implementation

It is important to observe that to evaluate the minimum in (2.6), one computes the s smallest eigenvalues and the associated eigenvectors of the generalised eigenvalue problem (2.8). In a practical implementation, one would therefore seek to compute the s smallest eigenvalues of the matrix pair $(L_m^TL_m, G_m^TG_m)$ without evaluating all the eigenvalues. Whether the PBMinPert solution is unique is determined by the multiplicity of $\lambda_i$ for $i = ms+1, \ldots, (m+1)s$. The following lemma is needed in the derivation of the main result of this section.

Lemma 3.1. Suppose that m steps of the block Arnoldi process have been taken. Then the matrix $G_m^TG_m$ is nonsingular for any choice of the initial solution estimate $X_0$.

Proof. The proof is similar to that of Lemma 3.1 in [1] and we omit it here.

A consequence of Lemma 3.1 is that (2.8) is a well-posed generalized eigenvalue problem. By Lemma 3.1, (2.8) reduces to the following singular value problem

$\begin{pmatrix} \left[X_0^T\left(I_N - M^{-1}V_m\left(V_m^TM^{-T}M^{-1}V_m\right)^{-1}V_m^TM^{-T}\right)X_0 + I_s\right]^{1/2} & 0 \\ C\left(V_m^TM^{-T}M^{-1}V_m\right)^{-1}V_m^TM^{-T}X_0 & C \end{pmatrix} L_m^{-1}z_i = \xi_iw_i, \qquad (3.1)$

where $z_i$ and $w_i$ are the right and left singular vectors, respectively, $\xi_i$ is the associated singular value, and the matrix $C \in \mathbb{R}^{ms \times ms}$ stems from the Cholesky factorisation $C^TC = V_m^TM^{-T}M^{-1}V_m$. The cost of forming the left-hand side of (3.1) may be controlled by saving the blocks $\{M^{-1}V_j\}_{j=1}^{m}$ into a matrix $Z_m$. In the course of the Arnoldi process, such measures enable one to efficiently update the matrix entries. Finally, the solution is concisely expressed as $X_m = X_0 + Z_mY_m$. A key feature of this approach is that it permits flexible preconditioning; namely, the preconditioner may vary at each step of the Arnoldi process without requiring additional storage.
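The block matrix on the left of (3.1) is a triangular factor S of $G_m^TG_m$, i.e. $S^TS = G_m^TG_m$, which is what reduces the generalized eigenproblem to a singular value problem. This can be checked numerically; the sketch below uses a random stand-in $Z_m$ for $M^{-1}V_m$ and realises the matrix square root as a Cholesky factor (both assumptions for the purpose of illustration).

```python
import numpy as np

rng = np.random.default_rng(5)
N, s, m = 20, 2, 3
X0 = rng.standard_normal((N, s))
Zm = rng.standard_normal((N, m * s))           # stand-in for M^{-1} V_m

# G_m as in (2.7): G = [X0, Zm; I_s, 0]
G = np.vstack([np.hstack([X0, Zm]),
               np.hstack([np.eye(s), np.zeros((s, m * s))])])

K = Zm.T @ Zm                                  # plays V_m^T M^{-T} M^{-1} V_m
C = np.linalg.cholesky(K).T                    # upper factor with C^T C = K
S11sq = X0.T @ (np.eye(N) - Zm @ np.linalg.solve(K, Zm.T)) @ X0 + np.eye(s)
S11 = np.linalg.cholesky(S11sq).T              # the [...]^{1/2} block of (3.1)
S21 = np.linalg.solve(C.T, Zm.T @ X0)          # equals C K^{-1} Zm^T X0
S = np.vstack([np.hstack([S11, np.zeros((s, m * s))]),
               np.hstack([S21, C])])

assert np.allclose(S.T @ S, G.T @ G)           # S^T S reproduces G_m^T G_m
```

Note that $C(V_m^TM^{-T}M^{-1}V_m)^{-1}V_m^TM^{-T}X_0 = C^{-T}V_m^TM^{-T}X_0$, which is why a single triangular solve suffices for the (2,1) block.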

Then one can obtain the flexible BMinPert(m) algorithm as Algorithm 3.1.

Algorithm 3.1. Flexible BMinPert(m).

Finally, observe that $Z_m$ has linearly independent columns since M is nonsingular; $Z_m^TZ_m$ is therefore positive definite, and thus the Cholesky factors and the left-hand side of (3.2) are well defined provided that $\tilde{H}_m$ has full rank. A rank-deficient $\tilde{H}_m$ indicates that $H_{m+1,m} = 0$, which signals finite termination.

Of course, one can also use other methods to solve the generalized eigenvalue problem.

4. A Posteriori Backward Error for BGMRES

In a practical implementation, the quality of $X_m^A$ may be gauged by periodically evaluating the a posteriori backward error norm in Theorem 2.1. Observe that the most expensive part of evaluating $\|\Delta_{A,B,m}^A\|_F$ consists of forming and normalising $X_m^A$. However, if $X_0 = 0$, the minimum backward error norm simplifies to $\|\Delta_{A,B,m}^A\|_F = \|H_{m+1,m}E_m^TY_m^A(I + (Y_m^A)^TY_m^A)^{-1}((M^{-1}V_mY_m^A)^T \;\; I)\|_F$, which is easier to compute.

Next, we consider the backward error propagation for the BGMRES linear system solver. It is shown that $X_m^{GM}$ satisfies a perturbed set of linear equations of the form $(A - \Delta A)X_m^{GM} = B + \Delta B$; a parameterisation of all such perturbations is then given, and a computable expression for $\|\Delta_{A,B,m}^{GM}\|_F$ is derived.

Theorem 4.1. Suppose that m steps of the block Arnoldi process have been taken and that the block GMRES solution is given by $X_m^{GM} = X_0 + M^{-1}V_mY_m^{GM}$. Then the set of all perturbations $\Omega_{X_m^{GM}}$ such that $(A - \Delta A)X_m^{GM} = B + \Delta B$ may be parameterised by

$\Omega_{X_m^{GM}} = \{V_{m+1}(\tilde{H}_mY_m^{GM} - E_1B_1)W^+ + P(I - WW^+) : P \in \mathbb{R}^{N \times (N+s)}\},$

in which $W = ((X_m^{GM})^T, I)^T$ and $E_m$ denotes the last s columns of the identity matrix $I_{ms}$. The smallest perturbation (in the Frobenius norm) then satisfies

$\|\Delta_{A,B,m}^{GM}\|_F = \|(\tilde{H}_mY_m^{GM} - E_1B_1)W^+\|_F = \|R_m^{GM}W^+\|_F = \|(\tilde{H}_mY_m^{GM} - E_1B_1)(I + (X_m^{GM})^TX_m^{GM})^{-1}((X_m^{GM})^T \;\; I)\|_F,$

where

$\Delta_{A,B,m}^{GM} = V_{m+1}(\tilde{H}_mY_m^{GM} - E_1B_1)W^+ = V_{m+1}(\tilde{H}_mY_m^{GM} - E_1B_1)(I + (X_m^{GM})^TX_m^{GM})^{-1}((X_m^{GM})^T \;\; I).$

Here $R_m^{GM} = B - AX_m^{GM}$ is the block GMRES residual.

Proof. The proof is essentially the same as that of Theorem 2.2 in [1], except that $X_m^{GM} = X_0 + M^{-1}V_mY_m^{GM}$, where $Y_m^{GM} = \arg\min_{Y_m \in \mathbb{R}^{ms \times s}} \|\tilde{H}_mY_m - E_1B_1\|_F$.
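The identity $\|\Delta_{A,B,m}^{GM}\|_F = \|R_m^{GM}W^+\|_F$ of Theorem 4.1 is cheap to verify on a small example. The sketch below (NumPy, with $M = I$ and $X_0 = 0$ as simplifying assumptions) computes the block GMRES correction by a least-squares solve and compares the two expressions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, s, m = 30, 2, 3
A = rng.standard_normal((N, N)) + N * np.eye(N)
B = rng.standard_normal((N, s))
X0 = np.zeros((N, s))

# Block Arnoldi (M = I for simplicity)
V1, B1 = np.linalg.qr(B - A @ X0)
V = np.zeros((N, (m + 1) * s)); V[:, :s] = V1
Ht = np.zeros(((m + 1) * s, m * s))
for j in range(m):
    W = A @ V[:, j*s:(j+1)*s]
    for i in range(j + 1):
        Hij = V[:, i*s:(i+1)*s].T @ W
        Ht[i*s:(i+1)*s, j*s:(j+1)*s] = Hij
        W -= V[:, i*s:(i+1)*s] @ Hij
    Q, R = np.linalg.qr(W)
    V[:, (j+1)*s:(j+2)*s] = Q
    Ht[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = R

# Block GMRES correction: Y minimises ||Ht Y - E1 B1||_F
E1B1 = np.zeros(((m + 1) * s, s)); E1B1[:s, :] = B1
Y, *_ = np.linalg.lstsq(Ht, E1B1, rcond=None)
Xm = X0 + V[:, :m*s] @ Y

# Backward error two ways, as in Theorem 4.1
Wmat = np.vstack([Xm, np.eye(s)])            # W = ((Xm)^T, I)^T
Wp = np.linalg.pinv(Wmat)
Rm = B - A @ Xm                              # block GMRES residual
lhs = np.linalg.norm((Ht @ Y - E1B1) @ Wp, 'fro')
rhs = np.linalg.norm(Rm @ Wp, 'fro')
assert np.isclose(lhs, rhs)
```

The agreement follows because $R_m^{GM} = -V_{m+1}(\tilde{H}_mY_m^{GM} - E_1B_1)$ and $V_{m+1}$ has orthonormal columns, so only the small matrix $\tilde{H}_mY_m^{GM} - E_1B_1$ needs to be formed in practice.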

Finally, if we consider the backward error propagation for the other linear system solvers indicated above, we find that the conclusions are very similar to the result shown in Theorem 4.1, with the corresponding approximate solution $X_m$ used in place of $X_m^{GM}$.

5. Applications to Color Image Processing

It is well known that each pixel of a color image is constructed from three basic colors, namely red, green and blue, in fixed proportion. A quaternion q consists of one real and three imaginary parts. Exploiting this structure, Pei first gave a quaternion model for color images in 1996 [14]. In this model, the red, green and blue values of each pixel of a color image are represented as a single pure imaginary quaternion-valued pixel, and an $n \times m$ color image is then represented as a pure imaginary quaternion image $q(x,y) = r(x,y)i + g(x,y)j + b(x,y)k$, where $r(x,y), g(x,y), b(x,y) \in \{0, 1, 2, \ldots, 255\}$ are, respectively, the red, green and blue components of the pixel at position $(x,y)$ in the image. Thus, a color image is an $n \times m$ pure imaginary quaternion matrix over the quaternion field. Image restoration is the process of removing or minimizing degradations in an observed image. A linear discrete model of image restoration is the matrix-vector equation $g = Kf + n$, where g is the observed image, f is the true or ideal image, n is additive noise, and K is a matrix that represents the blurring phenomena. Image restoration methods attempt to construct an approximation to f given g, K, and, in some cases, statistical information about the noise.
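Applying the degradation model channel by channel with a single real K turns restoration into a linear system with multiple right-hand sides. The toy sketch below (NumPy; the banded blurring matrix, sizes and the noise-free setting are illustrative assumptions, not the paper's setup) blurs the three channels of one image column and restores them with a direct solve standing in for the BMinPert iteration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64                                    # length of one image column (illustrative)

# True red, green, blue channels stacked as the columns of one RHS matrix
F = rng.uniform(0, 255, size=(n, 3))      # columns play the roles of f_r, f_g, f_b

# Illustrative real blurring matrix: an averaging 'motion blur' band plus an
# identity shift that keeps K safely nonsingular (our stand-in, not the paper's K)
length = 5
K = sum(np.eye(n, k=-k) for k in range(length)) / length + np.eye(n)

G = K @ F                                 # degraded channels: g = K f, noise-free

# One nonsymmetric solve with three right-hand sides restores all channels
F_restored = np.linalg.solve(K, G)
assert np.allclose(F_restored, F)         # exact restoration in the noise-free case
```

With noise added to G, a direct solve amplifies the error for ill-conditioned K, which is exactly the regime where the minimum perturbation approach of the previous sections is intended to help.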

In the color image processing area, one would often like to segment an image into several parts and analyze them independently. As noted above, a color image can be represented by a pure imaginary quaternion matrix. In the following example, we use $F = F_ri + F_gj + F_bk$ to denote the image matrix of "Onion" (198 × 135) shown in Figure 1. Let $f = f_ri + f_gj + f_bk$ be a column vector of F, which can be regarded as a quaternion signal; in this way we obtain the data samples of 198 quaternion signals.

By adopting the filter fspecial('motion', len, theta), we blur the signal $f_r$ to obtain a blurred signal $f_d$. Then we take $K = f_d\,\mathrm{pinv}(f_r) + \mathrm{diag}(\mathrm{rand}(198, 1))$ to disturb the quaternion signal f; alternatively, using $K = \mathrm{rand}(m, n)$, we disturb the image f so that $g = (g_r, g_g, g_b) = K(f_r, f_g, f_b)$. We compute pure imaginary quaternion solutions of

$Kf = g, \qquad (5.1)$

and then obtain the restored quaternion signals. In these cases the matrices K can be real, so system (5.1) can be treated as a nonsymmetric linear system with multiple right-hand sides. We aim to blur these quaternion signals first and then restore them by the BMinPert and PBMinPert algorithms.

Figure 1. Comparison of four images.

Using the BMinPert and PBMinPert algorithms, we compute the solutions of $Kf = g$ and then obtain the restored quaternion signals. The restored images are shown for comparison in Figure 1: PBMinPert reaches the required accuracy within an average of 9 iterations, while BMinPert takes an average of 45 iterations. For "Coffee" (1920 × 1080), shown in Figure 2, PBMinPert and BMinPert reach the required accuracy within an average of 11 and 136 iterations, respectively. We conclude that the larger the dimension of the matrix, the greater the advantage shown by the PBMinPert algorithm.

6. Numerical Example

All the experiments are performed using Matlab R2015a with unit roundoff 2.2204e−16. Throughout, attention is focused on the evolution of the true relative residual. The suite of numerical experiments presented solves several sensitive linear systems with strongly nonnormal A. The right-hand side and the starting approximate solution in Example 6.1 have been selected to highlight the advantages offered by a minimum perturbation approach.

In the following figures, PBMinPert denotes BMinPert with a preconditioner, and likewise PBFGMRES denotes preconditioned BFGMRES. Moreover, we take the preconditioners $M_i^{-1}$ for $i = 1, 2$ with $M_i^{-1} = M^{-1}$, where $M = (D - E)D^{-1}(D - F)$ and $A = D - E - F$, in which $-E$ is the strictly lower triangular part of A, $-F$ is the strictly upper triangular part of A, and D is the diagonal of A. Following the numerical examples in [6],

Figure 2. Comparison of four images.


Figure 3. Example. Norm of residual for PBFGMRES, BFGMRES, PBMinPert, BMinPert, GsGMRES and Bl-BiCG-rQ: (a) $N = 1000$, $s = 20$, $m = 5$; (b) $N = 1000$, $s = 40$, $m = 5$.

we also use the inner iteration number m = 5 for the PBFGMRES and BFGMRES algorithms, whereas we use the PBMinPert and BMinPert algorithms with no restart.
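The preconditioner $M = (D - E)D^{-1}(D - F)$ (a symmetric Gauss-Seidel/SSOR-type splitting) never needs to be formed explicitly: applying $M^{-1}$ amounts to one forward and one backward triangular solve with a diagonal scaling in between. A small NumPy/SciPy sketch (dense matrices for clarity; the function name is ours):

```python
import numpy as np
from scipy.linalg import solve_triangular

def apply_Minv(A, R):
    """Apply M^{-1} to R for M = (D - E) D^{-1} (D - F), where D = diag(A) and
    -E, -F are the strictly lower/upper triangular parts of A, so that
    D - E = tril(A) and D - F = triu(A). Two triangular solves, no M formed."""
    d = np.diag(A)
    Y = solve_triangular(np.tril(A), R, lower=True)              # (D - E)^{-1} R
    return solve_triangular(np.triu(A), d[:, None] * Y, lower=False)

rng = np.random.default_rng(4)
N, s = 50, 3
A = rng.standard_normal((N, N)) + N * np.eye(N)                  # nonzero diagonal
R = rng.standard_normal((N, s))

# Sanity check against forming M explicitly
M = np.tril(A) @ np.diag(1.0 / np.diag(A)) @ np.triu(A)
assert np.allclose(apply_Minv(A, R), np.linalg.solve(M, R))
```

For sparse A, the same two triangular sweeps apply column-block by column-block to the s right-hand sides, which is what makes this choice of M cheap inside the block Arnoldi loop.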

Example 6.1. Consider the linear system with coefficient matrix A = gallery('gcdmat', N), where B and the initial solution $X_0$ are random matrices. The condition number of gallery('gcdmat', 1000) is about 8.3776e+04.

From the two plots in Figure 3 we see that PBMinPert and BMinPert converge to the requested tolerance after 26 iterations except when the number of right-hand sides s is 20, whereas PBFGMRES and BFGMRES stagnate with large relative forward errors. As s increases, the convergence curves of PBMinPert and BMinPert draw closer and closer together.

7. Conclusion

This paper has presented the preconditioned BMinPert algorithm and analysed its practical implementation. A posteriori backward error for BGMRES was then given. Furthermore, we discussed applications to color image restoration. The key differences between BMinPert and other methods such as BFGMRES-S(m, pf), GsGMRES, Bl-BiCG-rQ, BGMRES and BArnoldi were illustrated with numerical experiments, which demonstrate the advantages of BMinPert for ill-conditioned problems with sensitive data.

Acknowledgements

The authors would like to thank the anonymous reviewers for their insightful comments and suggestions that greatly improved the quality of this paper.

Funding

This work was supported by the National Natural Science Foundation of China, Grant/Award Number: 11571004 and 11701456; Natural Science Foundation of Qinghai Province, Grant/Award Number: 2017-ZJ-908, 2018-ZJ-717, and 2019XJZ10.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Yang, Z.S. and Zheng, B. (2019) Block Minimum Perturbation Algorithm Based on Block Arnoldi Process for Nonsymmetric Linear Systems with Multiple Right-Hand Sides. Applied Mathematics and Computation, 347, 741-766.
https://doi.org/10.1016/j.amc.2018.11.024
[2] Stewart, G.W. and Sun, J.G. (1990) Matrix Perturbation Theory. Academic Press, New York.
[3] Mirsky, L. (1973) A Trace Inequality of John von Neumann. Monatshefte für Mathematik, 79, 303-306.
https://doi.org/10.1007/BF01647331
[4] Kasenally, E.M. and Simoncini, V. (1997) Analysis of a Minimum Perturbation Algorithm for Nonsymmetric Linear Systems. SIAM Journal on Numerical Analysis, 34, 48-66.
https://doi.org/10.1137/S0036142994266844
[5] Agullo, E., Giraud, L. and Jing, Y.-F. (2014) Block GMRES Method with Inexact Breakdowns and Deflated Restarting. SIAM Journal on Matrix Analysis and Applications, 35, 1625-1651.
https://doi.org/10.1137/140961912
[6] Calandra, H., Gratton, S., Lago, R., Vasseur, X. and Carvalho, L.M. (2013) A Modified Block Flexible GMRES Method with Deflation at Each Iteration for the Solution of Non-Hermitian Linear Systems with Multiple Right-Hand Sides. SIAM Journal on Scientific Computing, 35, S345-S367.
https://doi.org/10.1137/120883037
[7] Saad, Y. (1993) A Flexible Inner-Outer Preconditioned GMRES Algorithm. SIAM Journal on Scientific Computing, 14, 461-469.
https://doi.org/10.1137/0914028
[8] Saad, Y. and Schultz, M.H. (1986) GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems. SIAM Journal on Scientific and Statistical Computing, 7, 856-869.
https://doi.org/10.1137/0907058
[9] Zong, Y.D. and Wang, L. (2014) Global Simpler GMRES for Nonsymmetric Systems with Multiple Right-Hand Sides. Applied Mathematics and Computation, 248, 371-379.
https://doi.org/10.1016/j.amc.2014.09.099
[10] Rashedi, S., Ebadi, G., Birk, S. and Frommer, A. (2016) On Short Recurrence Krylov Type Methods for Linear Systems with Many Right-Hand Sides. Journal of Computational and Applied Mathematics, 300, 18-29.
https://doi.org/10.1016/j.cam.2015.11.040
[11] Golub, G.H. and Van Loan, C.F. (1990) Matrix Computations. 2nd Edition, The Johns Hopkins University Press, Baltimore.
[12] Simoncini, V. and Gallopoulos, E. (1996) Convergence Properties of Block GMRES and Matrix Polynomial. Linear Algebra and Its Applications, 247, 97-119.
https://doi.org/10.1016/0024-3795(95)00093-3
[13] Zhong, H.-X., Wu, G. and Chen, G.-L. (2015) A Flexible and Adaptive Simpler Block GMRES with Deflated Restarting for Linear Systems with Multiple Right-Hand Sides. Journal of Computational and Applied Mathematics, 282, 139-156.
https://doi.org/10.1016/j.cam.2014.12.040
[14] Pei, S.-C. and Cheng, C.-M. (1997) A Novel Block Truncation Coding of Color Images by Using Quaternion Moment Preserving Principle. IEEE Transactions on Communications, 45, 583-595.
https://doi.org/10.1109/26.592558

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.