Two-Level Block Decompositions for Solving Helmholtz Equation via Chebyshev Pseudo Spectral Method

Abstract

In this paper, we consider solving the Helmholtz equation in the Cartesian domain $[-1,1] \times [-1,1]$, subject to homogeneous Dirichlet boundary conditions, discretized with the Chebyshev pseudo-spectral method. The main purpose of this paper is to present the formulation of a two-level decomposition scheme for decoupling the linear system obtained from the discretization into independent subsystems. The scheme takes advantage of the homogeneity of the physical problem along one direction to reduce a 2D problem to several 1D problems via a block diagonalization approach, and of the reflexivity along the second direction to decompose each of the 1D problems into two independent subproblems using a reflexive decomposition, effectively doubling the number of subproblems. Based on the special structure of the coefficient matrix of the linear system derived from the discretization and a reflexivity property of the second-order Chebyshev differentiation matrix, we show that the decomposed submatrices exhibit a similar property, enabling the system to be decomposed using reflexive decompositions. Explicit forms of the decomposed submatrices are derived. The decomposition not only yields a more efficient algorithm but also introduces coarse-grain parallelism. Furthermore, it preserves all eigenvalues of the original matrix.

Share and Cite:

Chen, H. (2018) Two-Level Block Decompositions for Solving Helmholtz Equation via Chebyshev Pseudo Spectral Method. Journal of Modern Physics, 9, 1713-1723. doi: 10.4236/jmp.2018.99107.

1. Introduction

In this paper, we consider the numerical solution of the Helmholtz equation $\nabla^2 u + \omega^2 u = f(x,y)$ in the Cartesian domain $\Omega = [-1,1] \times [-1,1]$ with homogeneous Dirichlet boundary conditions, where $\nabla^2$ is the Laplacian operator, $\omega$ is a real parameter representing the wavenumber, $u$ is the amplitude associated with wave propagation, and $f(x,y)$ is a forcing function. Without loss of generality we assume $u = 0$ on the boundary, i.e.,

$$\nabla^2 u + \omega^2 u = f(x,y), \quad (x,y) \in \Omega, \qquad u = 0 \ \text{on} \ \partial\Omega. \qquad (1)$$

We present a highly parallel numerical algorithm for solving the Helmholtz equation discretized by the Chebyshev pseudo-spectral scheme. Chebyshev pseudo-spectral methods have long been used to numerically solve partial differential equations [1] [2] [3]. This discretization yields a linear system whose coefficient matrix has a special block structure that allows a block diagonalization, together with a proper permutation of the decoupled matrix, to split the original matrix into several independent submatrices. By exploiting a reflexivity property inherent in the Chebyshev differentiation matrix, each of these submatrices can be further decomposed into two independent submatrices whenever the boundary conditions at both ends of the decoupled 1D subproblems are the same. This second level of decomposition effectively doubles the number of independent submatrices, yielding a higher degree of coarse-grain parallelism.

This paper is organized as follows. In Section 2, we present the general structure of the linear system derived from the Chebyshev collocation method, discretized independently in the x and y directions. In Section 3, we address the two-level block decomposition. This decomposition was proposed by the author in [4] for solving Poisson equations with identical grids in both directions. In this paper, we reformulate the decomposition to allow the number of grid points in the x direction to differ from that in the y direction and derive the explicit forms of the decomposed submatrices, making applications of this approach more flexible. The computational procedure and a numerical example to demonstrate its usefulness are presented in Sections 4 and 5, respectively.

2. Linear System from the Chebyshev Collocation Method

In this section, we briefly describe the Chebyshev collocation method for solving the Helmholtz equation in two dimensions. We assume that the problem is discretized with a tensor product grid $(x_i, y_j)$, $0 \le i \le N_x$, $0 \le j \le N_y$, where $x_i$ and $y_j$ are the Chebyshev points in the x and y directions, respectively. Let $D_x$, indexed from 0 to $N_x$, be the $(N_x+1) \times (N_x+1)$ Chebyshev spectral differentiation matrix associated with the x direction. The entries of $D_x$ are given as [1] [2] [3]

$$(D_x)_{00} = \frac{2N_x^2 + 1}{6}, \qquad (D_x)_{N_x N_x} = -\frac{2N_x^2 + 1}{6},$$

$$(D_x)_{jj} = \frac{-x_j}{2(1 - x_j^2)}, \quad j = 1, 2, \ldots, N_x - 1,$$

$$(D_x)_{ij} = \frac{c_i}{c_j} \frac{(-1)^{i+j}}{x_i - x_j}, \quad i \ne j, \quad i, j = 0, 1, \ldots, N_x,$$

where $c_i = 2$ for $i = 0$ or $i = N_x$ and $c_i = 1$ otherwise. The Chebyshev spectral differentiation matrix $D_y$ associated with the y direction can be obtained in a similar way.
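For concreteness, this construction can be coded in a few lines. The following is a minimal sketch in Python/NumPy, our own illustration rather than the paper's Octave code; the function name cheb and the vectorized form follow the well-known cheb.m routine of [3]:

    import numpy as np

    def cheb(N):
        # Chebyshev points x_j = cos(j*pi/N) and the (N+1) x (N+1)
        # differentiation matrix D, following the cheb.m routine of [3].
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        dX = x[:, None] - x[None, :]                     # x_i - x_j
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
        D -= np.diag(D.sum(axis=1))                      # diagonal = negative row sums
        return D, x

The diagonal entries are obtained as negative row sums, which reproduces the closed-form expressions above while avoiding cancellation errors.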

Now, let $A_x$ be the stripped matrix of $D_x^2$, obtained by removing the first and last rows and columns of $D_x^2$, and let $A_y$ be the stripped matrix of $D_y^2$. We have

$$(A_x)_{ij} = (D_x^2)_{ij}, \quad i, j = 1, 2, \ldots, N_x - 1, \qquad (A_y)_{ij} = (D_y^2)_{ij}, \quad i, j = 1, 2, \ldots, N_y - 1. \qquad (2)$$

With the grid just described, the Chebyshev collocation method yields, after removing from the system the boundary nodes whose values are already known, the following linear system

$$Ku = f, \qquad K = I_p \otimes A_y + A_x \otimes I_q + \omega^2 (I_p \otimes I_q) \qquad (3)$$

where $p = N_x - 1$, $q = N_y - 1$, and $I_k$ represents the identity matrix of dimension $k$.
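Assembling $K$ then amounts to a pair of Kronecker products. A short Python/NumPy sketch, reusing the hypothetical cheb above (helmholtz_matrix is our own illustrative name):

    import numpy as np

    def helmholtz_matrix(Nx, Ny, omega):
        # Build A_x, A_y by stripping D^2, then K as in Equation (3).
        Dx, _ = cheb(Nx)
        Dy, _ = cheb(Ny)
        Ax = (Dx @ Dx)[1:Nx, 1:Nx]   # drop first/last rows and columns
        Ay = (Dy @ Dy)[1:Ny, 1:Ny]
        p, q = Nx - 1, Ny - 1
        K = (np.kron(np.eye(p), Ay) + np.kron(Ax, np.eye(q))
             + omega ** 2 * np.eye(p * q))
        return K, Ax, Ay

The ordering convention implied by Equation (3) is lexicographic with the y index varying fastest: component $(i-1)q + j$ of $u$ holds the unknown at the internal node $(x_i, y_j)$.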

Many solution schemes can be employed to solve Equation (3), including Gaussian elimination, Gauss-Seidel iteration, ADI iteration, and matrix diagonalization [5] [6] [7] [8]. In this paper, a block version of the matrix diagonalization method is employed first to decompose the matrix $K$ into a block diagonal matrix consisting of $q$ diagonal blocks. Each diagonal block is then further decomposed into two smaller diagonal sub-blocks using reflexive decompositions, yielding a total of $2q$ diagonal blocks. Note that $q$ is the number of internal grid points along the y direction. This two-step block decomposition scheme allows the linear system of Equation (3) to be decoupled into $2q$ linear subsystems which can then be solved in parallel with coarse-grain parallelism using multiprocessors or networked computers.

3. Two-Level Block Decompositions

3.1. Level-1 Decomposition: Block Diagonalization

The diagonalization scheme has been used elsewhere [7] [8] [9]. In this section, however, we address the decomposition of the matrix $K$ into $q$ diagonal blocks using a different representation. To begin with, let $Q_y$ be a matrix such that the similarity transformation $Q_y^{-1} A_y Q_y$ diagonalizes $A_y$ into $\Lambda_y$:

$$Q_y^{-1} A_y Q_y = \Lambda_y.$$

A natural choice of $Q_y$ is to form its columns from the eigenvectors of $A_y$, so that $\Lambda_y$ is a diagonal matrix consisting of the eigenvalues of $A_y$. Now split the coefficient matrix $K$ in Equation (3) into three parts $K_1$, $K_2$, and $K_3$:

$$K_1 = I_p \otimes A_y, \qquad K_2 = A_x \otimes I_q, \qquad K_3 = \omega^2 (I_p \otimes I_q),$$

and let $Q = I_p \otimes Q_y$. We have the transformed matrix

$$\tilde{K} = Q^{-1} K Q = \tilde{K}_1 + \tilde{K}_2 + \tilde{K}_3 \qquad (4)$$

with

$$\tilde{K}_1 = Q^{-1} K_1 Q, \qquad \tilde{K}_2 = Q^{-1} K_2 Q, \qquad \tilde{K}_3 = Q^{-1} K_3 Q, \qquad \text{where } Q^{-1} = I_p \otimes Q_y^{-1}.$$

We now show that the transformed matrix $\tilde{K}$ can be permuted to yield a block diagonal matrix. First, we observe that both $\tilde{K}_1$ and $\tilde{K}_3$ are diagonal matrices and $\tilde{K}_2$ is a diagonal block matrix, since

$$\tilde{K}_1 = Q^{-1} K_1 Q = (I_p \otimes Q_y^{-1})(I_p \otimes A_y)(I_p \otimes Q_y) = I_p \otimes (Q_y^{-1} A_y Q_y) = I_p \otimes \Lambda_y,$$

$$\tilde{K}_3 = Q^{-1} K_3 Q = (I_p \otimes Q_y^{-1}) \, \omega^2 (I_p \otimes I_q) \, (I_p \otimes Q_y) = \omega^2 (I_p \otimes I_q),$$

and

$$\tilde{K}_2 = Q^{-1} K_2 Q = (I_p \otimes Q_y^{-1})(A_x \otimes I_q)(I_p \otimes Q_y) = A_x \otimes I_q.$$

Accordingly,

$$\tilde{K} = Q^{-1} K Q = I_p \otimes \Lambda_y + A_x \otimes I_q + \omega^2 (I_p \otimes I_q). \qquad (5)$$

Let $a_{ij}$ be the $(i,j)$th element of $A_x$, $i, j = 1, 2, \ldots, p$. In its matrix form, $\tilde{K}$ can be expressed as

$$\tilde{K} = \begin{bmatrix} (a_{11} + \omega^2) I + \Lambda_y & a_{12} I & \cdots & a_{1p} I \\ a_{21} I & (a_{22} + \omega^2) I + \Lambda_y & \cdots & a_{2p} I \\ \vdots & \vdots & \ddots & \vdots \\ a_{p1} I & a_{p2} I & \cdots & (a_{pp} + \omega^2) I + \Lambda_y \end{bmatrix}$$

where $I$ is of dimension $q$. Apparently, $\tilde{K}$ is also a diagonal block matrix, because both $I$ and $\Lambda_y$ are diagonal matrices. The diagonal block matrix $\tilde{K}$ can be rearranged to yield a block diagonal matrix $\hat{K}$:

$$\hat{K} = \begin{bmatrix} \hat{K}_1 & 0 & \cdots & 0 \\ 0 & \hat{K}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \hat{K}_q \end{bmatrix}$$

via appropriate permutations of rows and columns, as shown below. Let $\tilde{K}_{ij}$ be the $(i,j)$th block of $\tilde{K}$ and let $P$ be the permutation matrix for which the transformation $P^T \tilde{K} P$ rearranges the entries of $\tilde{K}$ in such a way that

$$\hat{K}_l(i, j) = \tilde{K}_{i,j}(l), \quad i, j = 1, 2, \ldots, p,$$

where $\hat{K}_l(i, j)$ is the $(i,j)$th element of $\hat{K}_l$ and $\tilde{K}_{i,j}(l)$ is the $l$th diagonal element of $\tilde{K}_{i,j}$. Then we have

$$P^T \tilde{K} P = \hat{K} = \hat{K}_1 \oplus \hat{K}_2 \oplus \cdots \oplus \hat{K}_q, \qquad (6)$$

where

$$\hat{K}_l = A_x + \left( \omega^2 + (\lambda_l)_y \right) I_p, \quad l = 1, 2, \ldots, q,$$

with $(\lambda_l)_y$ being the $l$th eigenvalue of $A_y$. Accordingly,

$$\hat{K} = P^T \tilde{K} P = P^T Q^{-1} K Q P,$$

since $\tilde{K} = Q^{-1} K Q$. In other words, the matrix $K$ has been transformed into a block diagonal matrix consisting of $q$ blocks. This decomposition takes advantage of the homogeneity of the problem, with the boundary conditions of Equation (1), along the y direction to decouple the 2D problem into $q$ independent 1D problems. A similar decomposition can be performed using the homogeneity of the problem along the x direction.
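To make the Level-1 construction concrete, here is a minimal Python/NumPy sketch (level1_blocks and check_level1 are our own illustrative names, not part of the paper). Note that the permutation $P^T$ simply regroups the double index $(i, l)$ as $(l, i)$, which in code is a reshape and transpose of a length-$pq$ vector:

    import numpy as np

    def level1_blocks(Ax, Ay, omega):
        # Explicit diagonal blocks of Equation (6):
        # K_hat_l = A_x + (omega^2 + (lambda_l)_y) I_p.
        lam, Qy = np.linalg.eig(Ay)
        p = Ax.shape[0]
        return [Ax + (omega ** 2 + l) * np.eye(p) for l in lam], lam, Qy

    def check_level1(K, Ax, Ay, omega):
        # Verify that P^T Q^{-1} K Q P is block diagonal with the blocks above.
        p, q = Ax.shape[0], Ay.shape[0]
        blocks, lam, Qy = level1_blocks(Ax, Ay, omega)
        Q = np.kron(np.eye(p), Qy)
        Kt = np.linalg.solve(Q, K @ Q)     # K_tilde = Q^{-1} K Q
        # P^T regroups index (i, l) -> (l, i); build it by reshaping identity rows.
        PT = (np.eye(p * q).reshape(p, q, p * q)
                .transpose(1, 0, 2).reshape(p * q, p * q))
        Kh = PT @ Kt @ PT.T                # K_hat = P^T K_tilde P
        Kbd = np.zeros_like(Kh)
        for l in range(q):
            Kbd[l * p:(l + 1) * p, l * p:(l + 1) * p] = blocks[l]
        return np.allclose(Kh, Kbd)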

3.2. Level-2 Decomposition: Reflexive Decomposition

As seen in the previous section, the matrix $K$ can be transformed into a block diagonal matrix, implying that the original problem can be decomposed into independent subproblems which can then be solved in parallel using multiprocessors. Although each diagonal block $\hat{K}_l$ in Equation (6) could be further reduced to a diagonal matrix using a single transformation matrix $Q_x$ consisting of the eigenvectors of $A_x$, this diagonalization yields little advantage in general, especially in a multiprocessing environment, unless the eigenpairs are readily available, which is not the case for the matrix $A_x$.

In this section, we employ the reflexivity property of the diagonal blocks $\hat{K}_l$ to further decompose each of them into two independent sub-blocks using another decomposition technique, called the reflexive decomposition [10].

Let $J_n$ be the cross identity matrix of dimension $n$, i.e., the matrix with ones on the anti-diagonal and zeros elsewhere. It is trivial to see that $J_n J_n = I_n$. Now, we note that the Chebyshev differentiation matrix $D_x$ is anti-reflexive and $D_x^2$ is reflexive, with respect to $J_{N_x+1}$. In other words,

$$D_x = -J_{N_x+1} D_x J_{N_x+1} \qquad \text{and} \qquad D_x^2 = J_{N_x+1} D_x^2 J_{N_x+1}.$$

Accordingly, the $p \times p$ matrix $A_x$, which is the stripped matrix of $D_x^2$ (Equation (2)), is reflexive with respect to $J_p$. In fact, it is a centrosymmetric matrix [11] [12]. Recall that $\hat{K}_l = A_x + (\omega^2 + (\lambda_l)_y) I_p$. Therefore, each $\hat{K}_l$ satisfies the reflexivity property

$$\hat{K}_l = J_p \hat{K}_l J_p, \quad l = 1, 2, \ldots, q,$$

indicating that each $\hat{K}_l$ can be decomposed via reflexive decompositions into two diagonal sub-blocks of equal or almost equal size, depending on whether $p$ is even or odd. The decompositions can be carried out through orthogonal transformations, as shown in [10]. Here we just present the results.
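These reflexivity properties are easy to verify numerically. A quick sanity check, assuming the hypothetical cheb sketch from Section 2:

    import numpy as np

    Nx = 5
    Dx, _ = cheb(Nx)
    J = np.fliplr(np.eye(Nx + 1))                   # cross identity J_{Nx+1}
    print(np.allclose(J @ Dx @ J, -Dx))             # D_x is anti-reflexive
    print(np.allclose(J @ Dx @ Dx @ J, Dx @ Dx))    # D_x^2 is reflexive
    Ax = (Dx @ Dx)[1:Nx, 1:Nx]
    Jp = np.fliplr(np.eye(Nx - 1))
    print(np.allclose(Jp @ Ax @ Jp, Ax))            # A_x is centrosymmetric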

First consider the case when $p$ is even. Assume $p = 2k$ for some $k > 0$, and let $A_x$ be evenly partitioned into $2 \times 2$ sub-blocks:

$$A_x = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}.$$

Taking advantage of the reflexivity property of $\hat{K}_l$ by applying the orthogonal transformation $X_A^T \hat{K}_l X_A$ to $\hat{K}_l$, where

$$X_A = \frac{1}{\sqrt{2}} \begin{bmatrix} I & J_k \\ J_k & -I \end{bmatrix},$$

one can easily decompose $\hat{K}_l$ into two decoupled diagonal sub-blocks of equal size:

$$D_l = X_A^T \hat{K}_l X_A = X_A^T \left( A_x + (\omega^2 + (\lambda_l)_y) I \right) X_A = X_A^T A_x X_A + (\omega^2 + (\lambda_l)_y) I = \begin{bmatrix} D_{l1} & 0 \\ 0 & D_{l2} \end{bmatrix}, \qquad (7)$$

where $D_{l1} = A_{11} + A_{12} J_k + (\omega^2 + (\lambda_l)_y) I_k$ and $D_{l2} = A_{22} - A_{21} J_k + (\omega^2 + (\lambda_l)_y) I_k$.

In the case when $p$ is odd, say $p = 2k + 1$, $k > 0$, we consistently partition $A_x$ and $J$ as

$$A_x = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & a_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}, \qquad J = \begin{bmatrix} 0 & 0 & J_k \\ 0 & 1 & 0 \\ J_k & 0 & 0 \end{bmatrix}.$$

By taking the transformation matrix $X_A$ to be

$$X_A = \frac{1}{\sqrt{2}} \begin{bmatrix} I & 0 & J_k \\ 0 & \sqrt{2} & 0 \\ J_k & 0 & -I \end{bmatrix},$$

the matrix $\hat{K}_l$ can be decoupled via the orthogonal transformation $X_A^T \hat{K}_l X_A$ as

$$D_l = X_A^T \hat{K}_l X_A = X_A^T \left( A_x + (\omega^2 + (\lambda_l)_y) I \right) X_A = X_A^T A_x X_A + (\omega^2 + (\lambda_l)_y) I = \begin{bmatrix} D_{l1} & 0 \\ 0 & D_{l2} \end{bmatrix},$$

where

$$D_{l1} = \begin{bmatrix} A_{11} + A_{13} J_k & \sqrt{2} \, A_{12} \\ \sqrt{2} \, A_{21} & a_{22} \end{bmatrix} + (\omega^2 + (\lambda_l)_y) I_{k+1}$$

and

$$D_{l2} = (A_{33} - A_{31} J_k) + (\omega^2 + (\lambda_l)_y) I_k.$$

In either case, let $X = I_q \otimes X_A = X_A \oplus X_A \oplus \cdots \oplus X_A$. The orthogonal transformation $X^T \hat{K} X$ yields

$$X^T \hat{K} X = X_A^T \hat{K}_1 X_A \oplus X_A^T \hat{K}_2 X_A \oplus \cdots \oplus X_A^T \hat{K}_q X_A = D_1 \oplus D_2 \oplus \cdots \oplus D_q = (D_{11} \oplus D_{12}) \oplus (D_{21} \oplus D_{22}) \oplus \cdots \oplus (D_{q1} \oplus D_{q2}),$$

which obviously consists of 2q independent diagonal blocks since each D l has two independent diagonal blocks.
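The explicit formulas of this section translate directly into code. The following Python/NumPy sketch (reflexive_split is our own name) covers both parities of $p$; because $\hat{K}_l = A_x + (\omega^2 + (\lambda_l)_y) I_p$, applying the formulas to the shifted block carries the shift along automatically:

    import numpy as np

    def reflexive_split(Khat):
        # Split a reflexive (centrosymmetric) block K_hat_l into the two
        # decoupled sub-blocks D_l1, D_l2 of Section 3.2.
        p = Khat.shape[0]
        k = p // 2
        Jk = np.fliplr(np.eye(k))
        if p % 2 == 0:                              # p = 2k
            A11, A12 = Khat[:k, :k], Khat[:k, k:]
            A21, A22 = Khat[k:, :k], Khat[k:, k:]
            D1 = A11 + A12 @ Jk
            D2 = A22 - A21 @ Jk
        else:                                       # p = 2k + 1
            r = np.sqrt(2.0)
            A11, A13 = Khat[:k, :k], Khat[:k, k + 1:]
            A12 = Khat[:k, k:k + 1]                 # k x 1 column
            A21 = Khat[k:k + 1, :k]                 # 1 x k row
            a22 = Khat[k:k + 1, k:k + 1]            # 1 x 1 block
            A31, A33 = Khat[k + 1:, :k], Khat[k + 1:, k + 1:]
            D1 = np.block([[A11 + A13 @ Jk, r * A12],
                           [r * A21, a22]])
            D2 = A33 - A31 @ Jk
        return D1, D2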

4. Computational Procedure

The numerical solution of the Helmholtz Equation (1) discretized by the Chebyshev pseudo-spectral method yields the linear system

$$Ku = f, \qquad K = I_p \otimes A_y + A_x \otimes I_q + \omega^2 (I_p \otimes I_q),$$

where $I_p$ is the identity matrix of dimension $p$, $A_y$ is a $q \times q$ matrix, $A_x$ a $p \times p$ matrix, and $K$ a $pq \times pq$ matrix. Here $p$ ($q$) is the number of internal grid points in the x (y) direction. The two-level decomposition scheme described in this paper yields the following transformed linear system consisting of $2q$ independent subsystems, each of dimension $p/2$ when $p$ is even:

$$Dv = g, \qquad D = (D_{11} \oplus D_{12}) \oplus (D_{21} \oplus D_{22}) \oplus \cdots \oplus (D_{q1} \oplus D_{q2}).$$

The sizes of the sub-blocks in $D$ differ by at most 1 when $p$ is an odd number. Computationally, this decomposition is very attractive. Not only can the decoupled subsystems be solved much more efficiently, they can also be solved in parallel with coarse-grain parallelism using a computer system with multiple processors. This two-level decomposition consists of the following three stages.

1) Transform $Ku = f$ to $\tilde{K}\tilde{u} = \tilde{f}$, where $\tilde{K} = Q^{-1} K Q$, $\tilde{u} = Q^{-1} u$, and $\tilde{f} = Q^{-1} f$.

This is a typical similarity transformation, achieved by premultiplying both sides of $Ku = f$ by $Q^{-1}$ and inserting $QQ^{-1}$, which is simply an identity matrix, between $K$ and $u$. The matrix $Q$ has been defined in Section 3.1 and $\tilde{K}$ can be found in Equation (5).

2) Transform $\tilde{K}\tilde{u} = \tilde{f}$ to $\hat{K}\hat{u} = \hat{f}$, where $\hat{K} = P^T \tilde{K} P$, $\hat{u} = P^T \tilde{u}$, and $\hat{f} = P^T \tilde{f}$.

The main purpose of this transformation is to reorder the unknowns of the linear system so that the decoupled coefficient matrix $\tilde{K}$ is converted to a block diagonal matrix $\hat{K}$ for the sake of computational convenience. The matrix $P$, which is just a permutation matrix, has been explained in Section 3.1 and $\hat{K}$ can be found in Equation (6).

3) Transform $\hat{K}\hat{u} = \hat{f}$ to $Dv = g$, where $D = X^T \hat{K} X$, $v = X^T \hat{u}$, and $g = X^T \hat{f}$.

This transformation is analogous to that in stage 1 except that $X$ here is an orthogonal matrix; therefore, we replace $X^{-1}$ by $X^T$. The matrices $X$ and $D$ can be found in Section 3.2.

The advantage of this approach is that the decomposed form of $D$ is explicitly known and, therefore, the actual decompositions from $K$ to $\tilde{K}$, from $\tilde{K}$ to $\hat{K}$, and from $\hat{K}$ to $D$ are not necessary. Furthermore, the diagonal blocks of $D$ can be obtained without any matrix-matrix multiplications. The computational procedure of this two-level decomposition boils down to the following three typical steps.

1) Decomposition step. In this step, we compute $D$ and $g$. Since each diagonal block of $D$,

$$D = (D_{11} \oplus D_{12}) \oplus (D_{21} \oplus D_{22}) \oplus \cdots \oplus (D_{q1} \oplus D_{q2}),$$

is independent and explicitly known, $D$ can be obtained in parallel using $2q$ processors without any difficulty. The main task here is thus to obtain $g$. This can be achieved by first transforming $f$ to $\tilde{f}$, which involves solving the linear system $Q\tilde{f} = f$. It is essential to note that $Q = I_p \otimes Q_y$ is a block diagonal matrix and, therefore, $\tilde{f}$ can be obtained by solving $p$ independent linear subsystems, each of size $q$, instead of solving the linear system as a whole, which is of size $pq$. This step can be performed in parallel with large granularity. The matrix $Q_y$, consisting of the eigenvectors of $A_y$, is in general not known explicitly and thus has to be computed from $A_y$. Fortunately, $A_y$ is only of dimension $q$. Once $\tilde{f}$ has been computed, $\hat{f}$ and $g$ can be obtained easily, because $\hat{f}$ is simply a permuted version of $\tilde{f}$ and $g = X^T \hat{f}$ involves no matrix-vector multiplications due to the special form of $X$.

2) Solution step. In this step, we solve the linear system $Dv = g$ for $v$. Note that $D$ consists of $2q$ diagonal blocks $D_{l1}$ and $D_{l2}$, $l = 1, 2, \ldots, q$. By consistently partitioning the vectors $v$ and $g$ into $2q$ parts, based on the sizes of the blocks, the linear system $Dv = g$ can be expressed as

$$D_{l1} v_{2l-1} = g_{2l-1}, \qquad D_{l2} v_{2l} = g_{2l},$$

for $l = 1, 2, \ldots, q$. Since all of the subsystems are independent, the linear system $Dv = g$ can be solved in parallel using $2q$ processors, one for each subsystem.

3) Retrieval step. Once $v$ has been obtained, the solution $u$ to the original system can be retrieved from $v$ as follows. We first compute $\hat{u}$ from $v$ using $\hat{u} = Xv$, which again involves no matrix-vector multiplication. We then obtain $\tilde{u}$ by simply permuting $\hat{u}$: $\tilde{u} = P\hat{u}$. The final solution $u$ can now be computed by the matrix multiplication $u = Q\tilde{u}$. Note that $X = I_q \otimes X_A$ and $Q = I_p \otimes Q_y$, implying that $\hat{u}$ and $u$ can be obtained in parallel with coarse-grain parallelism.
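Putting the three steps together, a minimal serial reference implementation might look as follows (a Python/NumPy sketch under the assumptions stated in the comments; solve_helmholtz and build_XA are our own names, reflexive_split is the sketch from Section 3.2, and the loop over l is the part that would be distributed across 2q processors):

    import numpy as np

    def build_XA(p):
        # Orthogonal transform X_A of Section 3.2, for either parity of p.
        k = p // 2
        Ik, Jk = np.eye(k), np.fliplr(np.eye(k))
        if p % 2 == 0:
            XA = np.block([[Ik, Jk], [Jk, -Ik]])
        else:
            z = np.zeros((k, 1))
            XA = np.block([[Ik, z, Jk],
                           [z.T, np.array([[np.sqrt(2.0)]]), z.T],
                           [Jk, z, -Ik]])
        return XA / np.sqrt(2.0)

    def solve_helmholtz(Ax, Ay, omega, f):
        # f is ordered lexicographically with the y index varying fastest;
        # we assume A_y has real eigenvalues, as in the Chebyshev case here.
        p, q = Ax.shape[0], Ay.shape[0]
        lam, Qy = np.linalg.eig(Ay)
        XA = build_XA(p)
        m = (p + 1) // 2                     # size of the first sub-block
        # Stage 1: solve Q_y f_tilde_i = f_i, i = 1..p (p independent solves).
        Ft = np.linalg.solve(Qy, f.reshape(p, q).T).T
        # Stage 2: permute -- row l of Fh is the right-hand side for K_hat_l.
        Fh = Ft.T
        U = np.empty_like(Fh)
        for l in range(q):                   # independent; run in parallel
            Khat = Ax + (omega ** 2 + lam[l]) * np.eye(p)
            D1, D2 = reflexive_split(Khat)
            g = XA.T @ Fh[l]                 # g = X_A^T f_hat_l
            v = np.hstack([np.linalg.solve(D1, g[:m]),
                           np.linalg.solve(D2, g[m:])])
            U[l] = XA @ v                    # retrieval: u_hat_l = X_A v
        # Undo the permutation, then u = (I_p kron Q_y) u_tilde.
        return (Qy @ U).T.reshape(p * q)

As a quick check, with K, Ax, Ay from the helmholtz_matrix sketch of Section 2, np.allclose(K @ solve_helmholtz(Ax, Ay, omega, f), f) should hold for any right-hand side f, up to rounding.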

To end this section, it deserves mentioning that this two-level decomposition is a similarity transformation and, therefore, all eigenvalues of the original matrix $K$ are preserved in the matrix $D$. Obtaining eigenvalues from $D$ is far more efficient than from $K$.

5. Numerical Example

To demonstrate the usefulness of the two-level decomposition scheme, we present a numerical example derived from the Helmholtz equation with $\omega = 3$ over the square domain $[-1,1] \times [-1,1]$, subject to homogeneous Dirichlet boundary conditions, on a grid with $N_x = 5$ and $N_y = 4$. The numerical results presented in this section were obtained using GNU Octave, software that offers many powerful high-level programming features for scientific computation, similar to Matlab. It suffices to show the final decomposed submatrices to illustrate the advantage of this approach. Excluding the grid points on the boundary and using the lexicographic ordering to number the internal nodes [3], the matrix $K$, of dimension 12, is

$$K = I_4 \otimes A_y + A_x \otimes I_3 + \omega^2 (I_4 \otimes I_3),$$

where

$$A_x = \begin{bmatrix} -31.53 & 12.68 & -3.69 & 2.21 \\ 7.32 & -10.07 & 5.79 & -1.91 \\ -1.91 & 5.79 & -10.07 & 7.32 \\ 2.21 & -3.69 & 12.68 & -31.53 \end{bmatrix} \qquad \text{and} \qquad A_y = \begin{bmatrix} -14.0 & 6.0 & -2.0 \\ 4.0 & -6.0 & 4.0 \\ -2.0 & 6.0 & -14.0 \end{bmatrix}.$$

For simplicity, we have rounded the numbers after the last digit shown. First, we observe that both $A_x$ and $A_y$ are reflexive; in fact, they are centrosymmetric matrices. Let the columns of $Q_y$ be the eigenvectors of $A_y$, of dimension 3, and let $P^T$ be the following permutation matrix of dimension 12:

$$P^T(k, l) = \begin{cases} 1 & \text{for } k = i + (j-1) \cdot 4, \ l = j + (i-1) \cdot 3, \ 1 \le i \le 4, \ 1 \le j \le 3, \\ 0 & \text{otherwise}. \end{cases}$$

From the Level-1 decomposition presented in Section 3,

$$P^T \tilde{K} P = \hat{K} = P^T Q^{-1} K Q P = \hat{K}_1 \oplus \hat{K}_2 \oplus \hat{K}_3, \qquad Q = I_4 \otimes Q_y,$$

we easily obtain the decoupled diagonal blocks $\hat{K}_l$, without the need of any matrix multiplications, once the eigenvalues $(\lambda_l)_y$ of $A_y$ are available:

$$\hat{K}_l = A_x + (\omega^2 + (\lambda_l)_y) I_4 = A_x + \omega^2 I_4 + (\lambda_l)_y I_4 = \begin{bmatrix} -22.53 + (\lambda_l)_y & 12.68 & -3.69 & 2.21 \\ 7.32 & -1.07 + (\lambda_l)_y & 5.79 & -1.91 \\ -1.91 & 5.79 & -1.07 + (\lambda_l)_y & 7.32 \\ 2.21 & -3.69 & 12.68 & -22.53 + (\lambda_l)_y \end{bmatrix}, \quad 1 \le l \le 3,$$

where $\{ (\lambda_1)_y, (\lambda_2)_y, (\lambda_3)_y \} = \{ -19.54, -12.00, -2.46 \}$.

The numerical values of $\hat{K}_l$ clearly indicate that each $\hat{K}_l$ is reflexive with respect to $J_4$. Now, applying the Level-2 decomposition to each $\hat{K}_l$ by taking

$$X_A = \frac{1}{\sqrt{2}} \begin{bmatrix} I & J_2 \\ J_2 & -I \end{bmatrix}$$

yields

$$D_1 = X_A^T \hat{K}_1 X_A = \begin{bmatrix} -39.87 & 8.99 & & \\ 5.41 & -14.82 & & \\ & & -26.40 & 9.22 \\ & & 16.38 & -44.29 \end{bmatrix},$$

$$D_2 = X_A^T \hat{K}_2 X_A = \begin{bmatrix} -32.32 & 8.99 & & \\ 5.41 & -7.28 & & \\ & & -18.86 & 9.22 \\ & & 16.38 & -36.74 \end{bmatrix},$$

$$D_3 = X_A^T \hat{K}_3 X_A = \begin{bmatrix} -22.78 & 8.99 & & \\ 5.41 & 2.27 & & \\ & & -9.31 & 9.22 \\ & & 16.38 & -27.20 \end{bmatrix},$$

where the unshown entries in the matrices are 0. Apparently, each $\hat{K}_l$ has been further decomposed into two independent diagonal blocks, yielding a total of six independent diagonal blocks from the original matrix $K$. Note that the decomposed matrices $D_l$, $1 \le l \le 3$, can be obtained directly from $A_x$, $\omega$, and $(\lambda_l)_y$ using Equation (7), without the need of computing $\hat{K}_l$. Note also that the eigenvalues of $K$, of dimension 12, can now be easily obtained from the six diagonal blocks, each of dimension only 2, a clear indication of the advantage of this approach.
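The example, and the eigenvalue-preservation claim, can be reproduced with the earlier sketches (again our own illustration, assuming helmholtz_matrix and reflexive_split as defined above):

    import numpy as np

    K, Ax, Ay = helmholtz_matrix(5, 4, 3.0)       # omega = 3, Nx = 5, Ny = 4
    lam, _ = np.linalg.eig(Ay)                    # approx. -19.54, -12.00, -2.46
    blocks = []
    for l in range(3):
        Khat = Ax + (9.0 + lam[l]) * np.eye(4)
        blocks.extend(reflexive_split(Khat))      # six 2 x 2 diagonal blocks
    ev_blocks = np.sort(np.hstack([np.linalg.eigvals(B) for B in blocks]))
    ev_K = np.sort(np.linalg.eigvals(K))
    print(np.allclose(ev_blocks, ev_K))           # all eigenvalues preserved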

6. Conclusions

In this paper, we have presented a two-level block decomposition scheme for the numerical solution of the Helmholtz equation $\nabla^2 u + \omega^2 u = f(x,y)$ in the Cartesian domain $\Omega = [-1,1] \times [-1,1]$, subject to homogeneous Dirichlet boundary conditions, discretized with the Chebyshev pseudo-spectral method. For a computational grid with $N_x + 1$ grid points in the x direction and $N_y + 1$ in the y direction, our first-level decomposition yields $N_y - 1$ independent 1D subproblems, excluding the known values at boundary points, using the eigenpairs of the 1D second-order Chebyshev differentiation matrix along y. The decoupled coefficient matrices are then rearranged to form a block diagonal matrix through a proper permutation. The second level of decomposition further decouples each of the subproblems into two smaller and independent subproblems using reflexive decompositions, yielding a total of $2(N_y - 1)$ independent subproblems.

The general form of the final decoupled submatrices has also been explicitly derived, eliminating the need to perform matrix-matrix multiplications during the decomposition step. A numerical example is presented to illustrate the applicability and to demonstrate the advantage of this two-level decomposition scheme. This scheme leads naturally to a highly parallel numerical algorithm for solving the problem under consideration, since all subproblems can be solved in parallel using multiprocessors.

To close our conclusion, it deserves mentioning that this block decomposition can be readily employed for finding the eigenvalues of the problem as well. This is due to the fact that all eigenvalues of the original matrix are preserved in the decomposed submatrices, because the first-level decomposition is a similarity transformation and the second-level decomposition is an orthogonal transformation. The preservation of all eigenvalues in the submatrices makes this block decomposition scheme even more attractive, because finding eigenvalues from the decomposed submatrices is obviously much more efficient than from the original matrix, not to mention the computational benefit that can be achieved from the large-grain parallelism induced by the decomposition.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.


References

[1] Martinez, J.d.J. and Esperanca, P.d.T.T. (2007) Journal of the Brazilian Society of Mechanical Sciences and Engineering, XXIX, 317-328.
https://doi.org/10.1590/S1678-58782007000300013
[2] Peyret, R. (2002) Spectral Methods for Incompressible Viscous Flow. Applied Mathematical Sciences, Vol. 148, Springer-Verlag, New York, 448 p.
[3] Trefethen, L.N. (2000) Spectral Methods in MATLAB. SIAM, Philadelphia, 165 p.
[4] Chen, H.-C. (2015) Neural, Parallel, and Scientific Computations, 23, 267-275.
[5] Ehrenstein, U. and Peyret, R. (1989) International Journal for Numerical Methods in Fluids, 9, 427-452.
https://doi.org/10.1002/fld.1650090405
[6] Golub, G.H. and Van Loan, C.F. (1983) Matrix Computations. The Johns Hopkins University Press, Maryland, 476 p.
[7] Haidvogel, D.B. and Zang, T. (1979) Journal of Computational Physics, 30, 167-180.
https://doi.org/10.1016/0021-9991(79)90097-4
[8] Julien, K. and Watson, M. (2009) Journal of Computational Physics, 228, 1480-1503.
https://doi.org/10.1016/j.jcp.2008.10.043
[9] Lynch, R.E., Rice, J.R. and Thomas, D.H. (1964) Numerische Mathematik, 6, 185-199.
https://doi.org/10.1007/BF01386067
[10] Chen, H.-C. and Sameh, A. (1989) SIAM Journal on Matrix Analysis and Applications, 10, 39-64.
https://doi.org/10.1137/0610004
[11] Andrew, A.L. (1973) Technometrics, 15, 405-407.
https://doi.org/10.1080/00401706.1973.10489052
[12] Cantoni, A. and Butler, P. (1976) Linear Algebra and Its Applications, 13, 275-288.
https://doi.org/10.1016/0024-3795(76)90101-4
