Block Decompositions and Applications of Generalized Reflexive Matrices

Abstract

Generalized reflexive matrices are a special class of matrices $A \in \mathbb{C}^{n \times m}$ that satisfy the relation $A = PAQ$, where $P$ and $Q$ are some generalized reflection matrices. The nontrivial cases ($P \neq \pm I$ or $Q \neq \pm I$) of this class of matrices occur very often in many scientific and engineering applications. They are also a generalization of centrosymmetric matrices and reflexive matrices. The main purpose of this paper is to present block decomposition schemes for generalized reflexive matrices of various types and to obtain their decomposed explicit block-diagonal structures. The decompositions make use of unitary equivalence transformations and, therefore, preserve the singular values of the matrices. They lead to more efficient sequential computations and at the same time induce large-grain parallelism as a by-product, making them computationally attractive for large-scale applications. A numerical example is employed to show the usefulness of the developed explicit decompositions for decoupling linear least-squares problems whose coefficient matrices are of this class into smaller and independent subproblems.

Share and Cite:

Chen, H. (2018) Block Decompositions and Applications of Generalized Reflexive Matrices. Advances in Linear Algebra & Matrix Theory, 8, 122-133. doi: 10.4236/alamt.2018.83011.

1. Introduction

In [1] we introduced two special classes of rectangular matrices A and B that have the relations

\[ A = PAQ \quad\text{and}\quad B = -PBQ, \qquad A, B \in \mathbb{C}^{n \times m}, \]

where P and Q are two generalized reflection matrices of dimensions n and m, respectively. A matrix X is said to be a generalized reflection matrix if $X = X^* = X^{-1}$, i.e., if X is unitary and Hermitian. The matrices A (respectively B) are referred to as generalized reflexive (respectively antireflexive) matrices. They are a generalization of centrosymmetric (anti-centrosymmetric) matrices, whose special properties have been studied extensively [2] - [11], and a generalization of reflexive (antireflexive) matrices U (V), exploited in [1] [12] [13], that have the relations

\[ U = PUP \quad\text{and}\quad V = -PVP, \qquad U, V \in \mathbb{C}^{n \times n}, \]

where P is some reflection (symmetric signed permutation) matrix.

Like U, the generalized reflexive matrices A arise naturally and frequently from physical problems with some sort of reflexive symmetry. Although the generalized antireflexive matrices B also possess many interesting properties, in this paper we focus only on generalized reflexive matrices. Our main objective is thus to present a generalized simultaneous diagonalization theorem and various decomposition schemes for the matrices A, so that linear least-squares problems (or linear systems) whose coefficient matrices are of this class can be solved more efficiently. The decomposition schemes can be applied to a great number of scientific and engineering problems.

The organization of this paper is as follows. In §2, we present a generalization of the classical simultaneous diagonalization of two diagonalizable commuting square matrices. Our generalization, referred to as the generalized simultaneous diagonalization, simultaneously diagonalizes a rectangular matrix H and two square matrices F and G that satisfy the relation $FH = HG$, assuming F and G are diagonalizable. Based on this simultaneous diagonalization, we develop explicit and semi-explicit decomposed forms in §3 for some important types of generalized reflexive matrices. An application of the decompositions to linear least-squares problems of this class is also given to show the usefulness of the decompositions. More numerical examples are provided in §4 to demonstrate the frequent occurrence of generalized reflexive matrices in many scientific and engineering disciplines.

Throughout this paper, we use the superscripts T, *, and −1 to denote the transpose, conjugate transpose, and inverse of matrices (vectors), respectively. The symbol $\oplus$ stands for the direct sum of matrices as usual. Unless otherwise noted, we use $I_k$ to denote the identity matrix of dimension k. All matrix-matrix multiplications and additions are assumed to be conformable when their dimensions are not explicitly mentioned.

2. Generalized Simultaneous Diagonalization

Before developing the (semi-)explicit block-diagonal structures for some important types of generalized reflexive matrices, we first present the following theoretically simple yet computationally useful observation regarding a simultaneous diagonalization process. Although diagonalization usually refers to square matrices, in this paper we use the same term for rectangular matrices. In other words, a rectangular matrix $A = (a_{ij}) \in \mathbb{C}^{n \times m}$ is also said to be diagonal if $a_{ij} = 0$ for $i \neq j$. Block-diagonal rectangular matrices are defined in an analogous way.

Theorem 2.1. (Generalized Simultaneous Diagonalization) Let $F \in \mathbb{C}^{n \times n}$ and $G \in \mathbb{C}^{m \times m}$ be diagonalizable, and let $A \in \mathbb{C}^{n \times m}$. If $FA = AG$, then there exist nonsingular matrices $S_f$ and $S_g$ such that

\[ S_f^{-1} F S_f, \quad S_f^{-1} A S_g \quad\text{and}\quad S_g^{-1} G S_g \]

are all diagonal matrices.

Proof. The proof given below basically employs the same technique used in [14] [15] for the simultaneous diagonalization of two square matrices that commute. Let $X_f$ and $X_g$ be the matrices that diagonalize F and G, respectively:

\[ X_f^{-1} F X_f = \Lambda_f \quad\text{and}\quad X_g^{-1} G X_g = \Lambda_g \tag{1} \]

where the diagonal elements of $\Lambda_f$ (respectively $\Lambda_g$) are the eigenvalues of F (respectively G). Suppose that the matrix F has k distinct eigenvalues $\lambda_1, \ldots, \lambda_k$ with multiplicities $p_1, \ldots, p_k$, respectively, where $p_1 + \cdots + p_k = n$; and the matrix G has l distinct eigenvalues $\mu_1, \ldots, \mu_l$ with multiplicities $q_1, \ldots, q_l$, respectively, where $q_1 + \cdots + q_l = m$. Assume further that among the k distinct eigenvalues of F, s of them are also eigenvalues of G, $0 \le s \le \min\{k, l\}$. If $s = 0$, then all $\lambda_i$ and $\mu_j$ are distinct, implying that A is a null matrix, as can be seen later. Therefore, we exclude this trivial case. Without loss of generality, we can assume that

\[ \Lambda_f = \mathrm{bdiag}(\lambda_1 I_{p_1}, \ldots, \lambda_s I_{p_s}, \ldots, \lambda_k I_{p_k}), \]

\[ \Lambda_g = \mathrm{bdiag}(\mu_1 I_{q_1}, \ldots, \mu_s I_{q_s}, \ldots, \mu_l I_{q_l}) \tag{2} \]

where $\mathrm{bdiag}(\cdot)$ denotes a block-diagonal matrix and $\lambda_1 = \mu_1, \ldots, \lambda_s = \mu_s$. Note that $\lambda_{s+1}, \ldots, \lambda_k$ and $\mu_{s+1}, \ldots, \mu_l$ are all distinct. Now, partition the matrix $X_f^{-1} A X_g$, denoted by B, according to the block forms of $\Lambda_f$ and $\Lambda_g$ as $B = (B_{ij})$ so that $B_{ij}$ is a $p_i \times q_j$ submatrix, $i = 1, \ldots, k$ and $j = 1, \ldots, l$. If $FA = AG$, we have $\Lambda_f B = B \Lambda_g$, which implies that

\[ \lambda_i B_{ij} = B_{ij} \mu_j \quad\text{or}\quad (\lambda_i - \mu_j) B_{ij} = 0. \tag{3} \]

Since $\lambda_i = \mu_j$ only if $i = j$ and $1 \le i \le s$, we know that B is a block-diagonal matrix; more precisely, $B_{ij} = 0$ if $i \neq j$ or if $i = j > s$. (This can be considered a block-equivalence decomposition for rectangular matrices.) It is well known that for any matrix B in $\mathbb{C}^{n \times m}$ there exist unitary matrices $U \in \mathbb{C}^{n \times n}$ and $V \in \mathbb{C}^{m \times m}$ such that the singular value decomposition $U^* B V$ is diagonal with nonnegative elements [16]. Now, let $U_i$ and $V_i$ be the matrices that diagonalize $B_{ii}$, $i = 1, \ldots, s$, and take

\[ U = U_1 \oplus \cdots \oplus U_s \oplus I_{p_{s+1}} \oplus \cdots \oplus I_{p_k}, \]

\[ V = V_1 \oplus \cdots \oplus V_s \oplus I_{q_{s+1}} \oplus \cdots \oplus I_{q_l}. \tag{4} \]

Let $\Sigma_a = U^{-1} X_f^{-1} A X_g V$. We see that $\Sigma_a = U^{-1} B V$ is diagonal. Taking $S_f = X_f U$ and $S_g = X_g V$, it is clear that

\[ S_f^{-1} F S_f = \Lambda_f, \quad S_f^{-1} A S_g = \Sigma_a \quad\text{and}\quad S_g^{-1} G S_g = \Lambda_g. \tag{5} \]

Therefore, they are all diagonal matrices.

Remark 1: Note that the converse of this theorem is not true in general. It is simple to construct counterexamples from diagonal matrices; for instance, $F = (1)$, $A = (1)$ and $G = (2)$ are all already diagonal, yet $FA \neq AG$.

Remark 2: If the diagonalizable matrix F is the same as G, and A is diagonalizable (A is a square matrix in this case), then by taking $U_i$ to be the matrices such that $U_i^{-1} B_{ii} U_i$ are diagonal and replacing $S_g$ with $S_f$, this theorem, together with its converse (which now holds), reduces to the classical simultaneous diagonalization theorem for commuting square matrices as given in ([15], p. 50).

Note also that this theorem differs from the simultaneous diagonalization theorems presented in [14] [16], where the simultaneous diagonalization applies to rectangular matrices of the same size.

Corollary 2.2. Let $F \in \mathbb{C}^{n \times n}$ and $G \in \mathbb{C}^{m \times m}$ be Hermitian, and let $A \in \mathbb{C}^{n \times m}$. If $FA = AG$, then there exist unitary matrices $S_f$ and $S_g$ such that

\[ S_f^* F S_f, \quad S_f^* A S_g \quad\text{and}\quad S_g^* G S_g \]

are all diagonal matrices.

Proof. Since Hermitian matrices are diagonalizable by unitary matrices, the proof is trivial.

The usefulness of Theorem 2.1 or Corollary 2.2 lies in the fact that if we know the eigenpairs of the matrices F and G, then the matrix A can be block-diagonalized into independent submatrices by the eigenvectors (with some proper ordering) of F and G, so that a single large problem can be handled via smaller and independent subproblems, yielding computational efficiency and large-grain parallelism at the same time. The question then boils down to whether those eigenpairs can be obtained easily, which of course depends on F and G. Fortunately, for generalized reflexive matrices that come from physical problems, the eigenpairs of P and Q are explicitly known in most cases, as can be seen from the example presented in Section 4. In the next section, we present several generalized reflexive decompositions that lead to either explicit or semi-explicit block-decomposed forms, which are computationally attractive.
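To make the two steps of the proof concrete, the following is a minimal NumPy sketch of Corollary 2.2 on small hypothetical data (the sizes, eigenvalues, and seed are illustrative choices, not taken from the paper): F and G are real symmetric with the single shared eigenvalue 1, A is constructed so that $FA = AG$ holds, and the proof's recipe, eigenvector ordering followed by SVDs of the diagonal blocks of B, produces $S_f$ and $S_g$.

```python
import numpy as np

rng = np.random.default_rng(0)

# F has eigenvalues (1, 1, 2), G has eigenvalues (1, 3); the shared
# eigenvalue 1 is listed first in both, as assumed in the proof.
Uf, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Ug, _ = np.linalg.qr(rng.standard_normal((2, 2)))
F = Uf @ np.diag([1.0, 1.0, 2.0]) @ Uf.T
G = Ug @ np.diag([1.0, 3.0]) @ Ug.T

# Any A supported on the shared eigenspaces satisfies F A = A G.
Pf = Uf[:, :2] @ Uf[:, :2].T            # projector onto F's eigenvalue-1 eigenspace
Pg = Ug[:, :1] @ Ug[:, :1].T            # projector onto G's eigenvalue-1 eigenspace
A = Pf @ rng.standard_normal((3, 2)) @ Pg
assert np.allclose(F @ A, A @ G)

# Step 1 of the proof: B = Xf^{-1} A Xg is block diagonal, cf. (3).
B = Uf.T @ A @ Ug
assert np.allclose(B[2, :], 0) and np.allclose(B[:, 1], 0)

# Step 2: an SVD of the diagonal block B11 finishes the job, cf. (4)-(5).
U1, sigma, V1t = np.linalg.svd(B[:2, :1])
U = np.block([[U1, np.zeros((2, 1))], [np.zeros((1, 2)), np.eye(1)]])
V = np.block([[V1t.T, np.zeros((1, 1))], [np.zeros((1, 1)), np.eye(1)]])
Sf, Sg = Uf @ U, Ug @ V

def is_diagonal(M, tol=1e-12):
    mask = np.eye(*M.shape, dtype=bool)  # True exactly on the (i, i) positions
    return np.all(np.abs(M[~mask]) < tol)

assert is_diagonal(Sf.T @ F @ Sf)
assert is_diagonal(Sf.T @ A @ Sg)       # the rectangular matrix is diagonalized too
assert is_diagonal(Sg.T @ G @ Sg)
```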

3. Decompositions for Generalized Reflexive Matrices

We now turn to generalized reflexive matrices A, which are not necessarily square. The decomposition schemes presented below for A are special applications of the general results developed in the previous section. Our main purpose is to obtain explicit forms of the block structure for some frequently encountered cases of A. Let $A = PAQ$ be generalized reflexive. Recall that P and Q are two generalized reflection matrices, i.e., unitary Hermitian matrices. Therefore, they have at most two distinct eigenvalues, 1 and −1. Furthermore, the relation $A = PAQ$ can be expressed as $PA = AQ$ since $P = P^* = P^{-1}$. From Corollary 2.2, we know that A can be block-diagonalized into two independent submatrices. This information alone, however, is not enough from the computational point of view; we still need to know the eigenpairs of P and Q in order to obtain the explicit decomposed form of A. In the following, we derive several explicit or semi-explicit decomposed forms for some important types of generalized reflexive matrices, starting with the simplest one.

Theorem 3.1. Let $P \in \mathbb{C}^{n \times n}$ and $Q \in \mathbb{C}^{m \times m}$, n and m even, be two matrices of the following forms:

\[ P = \begin{bmatrix} 0 & P_1^* \\ P_1 & 0 \end{bmatrix} \quad\text{and}\quad Q = \begin{bmatrix} 0 & Q_1^* \\ Q_1 & 0 \end{bmatrix} \tag{6} \]

where $P_1$ and $Q_1$ are unitary. Let $A \in \mathbb{C}^{n \times m}$ be partitioned as $(A_{ij})$, $i, j = 1, 2$, with each $A_{ij} \in \mathbb{C}^{p \times q}$, $p = n/2$ and $q = m/2$. If $A = PAQ$, then there exist two unitary matrices X and Y such that

\[ X^* A Y = (A_{11} + A_{12} Q_1) \oplus (A_{22} - A_{21} Q_1^*) = (A_{11} + P_1^* A_{21}) \oplus (A_{22} - P_1 A_{12}). \tag{7} \]

Proof. Clearly, both P and Q are generalized reflection matrices. Therefore, A is a generalized reflexive matrix. Take X and Y to be the unitary matrices

\[ X = \frac{1}{\sqrt{2}} \begin{bmatrix} I & -P_1^* \\ P_1 & I \end{bmatrix} \quad\text{and}\quad Y = \frac{1}{\sqrt{2}} \begin{bmatrix} I & -Q_1^* \\ Q_1 & I \end{bmatrix}. \tag{8} \]

Then

\[ X^* A Y = \frac{1}{2} \begin{bmatrix} I & P_1^* \\ -P_1 & I \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} I & -Q_1^* \\ Q_1 & I \end{bmatrix} = \frac{1}{2} \begin{bmatrix} (A_{11} + A_{12} Q_1) + (P_1^* A_{21} + P_1^* A_{22} Q_1) & (A_{12} - P_1^* A_{21} Q_1^*) + (P_1^* A_{22} - A_{11} Q_1^*) \\ (A_{21} - P_1 A_{12} Q_1) + (A_{22} Q_1 - P_1 A_{11}) & (A_{22} - A_{21} Q_1^*) + (P_1 A_{11} Q_1^* - P_1 A_{12}) \end{bmatrix} = \begin{bmatrix} A_{11} + A_{12} Q_1 & 0 \\ 0 & A_{22} - A_{21} Q_1^* \end{bmatrix} = \begin{bmatrix} A_{11} + P_1^* A_{21} & 0 \\ 0 & A_{22} - P_1 A_{12} \end{bmatrix} \tag{9} \]

where we have used the unitarity of $P_1$ and $Q_1$ and the relations $A_{11} = P_1^* A_{22} Q_1$ and $A_{21} = P_1 A_{12} Q_1$, which result from the assumption $A = PAQ$. Note that $X^* P X = I_p \oplus (-I_p)$ and $Y^* Q Y = I_q \oplus (-I_q)$, which also explains, via Corollary 2.2, why this decomposition is possible.
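As a quick sanity check of Theorem 3.1, the following NumPy sketch (random unitary blocks and arbitrary sizes; purely illustrative) manufactures a generalized reflexive A by averaging M and PMQ, applies the X and Y of (8), and verifies the block-diagonal form (7).

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 2

# Random unitary blocks P1, Q1; P and Q as in (6) are then generalized reflections.
P1, _ = np.linalg.qr(rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p)))
Q1, _ = np.linalg.qr(rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q)))
Zp, Zq = np.zeros((p, p)), np.zeros((q, q))
P = np.block([[Zp, P1.conj().T], [P1, Zp]])
Q = np.block([[Zq, Q1.conj().T], [Q1, Zq]])

# Averaging M with P M Q enforces A = P A Q, since P and Q square to I.
M = rng.standard_normal((2 * p, 2 * q)) + 1j * rng.standard_normal((2 * p, 2 * q))
A = (M + P @ M @ Q) / 2
assert np.allclose(A, P @ A @ Q)

# X and Y from (8); X* A Y must match the two diagonal blocks of (7).
X = np.block([[np.eye(p), -P1.conj().T], [P1, np.eye(p)]]) / np.sqrt(2)
Y = np.block([[np.eye(q), -Q1.conj().T], [Q1, np.eye(q)]]) / np.sqrt(2)
T = X.conj().T @ A @ Y

A11, A12, A21, A22 = A[:p, :q], A[:p, q:], A[p:, :q], A[p:, q:]
assert np.allclose(T[:p, q:], 0) and np.allclose(T[p:, :q], 0)
assert np.allclose(T[:p, :q], A11 + A12 @ Q1)
assert np.allclose(T[p:, q:], A22 - A21 @ Q1.conj().T)
```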

Theorem 3.2. Let $P \in \mathbb{C}^{n \times n}$ and $Q \in \mathbb{C}^{m \times m}$, $n = 2p + r$ and $m = 2q + s$, be the following two generalized reflection matrices:

\[ P = \begin{bmatrix} 0 & 0 & P_1^* \\ 0 & \alpha I_r & 0 \\ P_1 & 0 & 0 \end{bmatrix} \quad\text{and}\quad Q = \begin{bmatrix} 0 & 0 & Q_1^* \\ 0 & \beta I_s & 0 \\ Q_1 & 0 & 0 \end{bmatrix} \tag{10} \]

where $P_1$ and $Q_1$ are unitary matrices of dimensions p and q, respectively, and $\alpha = \pm 1$, $\beta = \pm 1$. Let $A \in \mathbb{C}^{n \times m}$ be partitioned as $(A_{ij})$, $i, j = 1, 2, 3$, with $A_{11} \in \mathbb{C}^{p \times q}$, $A_{22} \in \mathbb{C}^{r \times s}$, and $A_{33} \in \mathbb{C}^{p \times q}$. If $A = PAQ$, then there exist two unitary matrices X and Y such that

\[ X^* A Y = \begin{bmatrix} A_{11} + A_{13} Q_1 & \sqrt{2} A_{12} \\ \sqrt{2} A_{21} & A_{22} \end{bmatrix} \oplus (A_{33} - A_{31} Q_1^*) \quad\text{if } \alpha = \beta = 1, \]

\[ X^* A Y = (A_{11} + A_{13} Q_1) \oplus \begin{bmatrix} A_{22} & \sqrt{2} A_{23} \\ \sqrt{2} A_{32} & A_{33} - A_{31} Q_1^* \end{bmatrix} \quad\text{if } \alpha = \beta = -1, \]

\[ X^* A Y = \begin{bmatrix} A_{11} + A_{13} Q_1 \\ \sqrt{2} A_{21} \end{bmatrix} \oplus \begin{bmatrix} \sqrt{2} A_{32} & A_{33} - A_{31} Q_1^* \end{bmatrix} \quad\text{if } \alpha = 1, \beta = -1, \]

and

\[ X^* A Y = \begin{bmatrix} A_{11} + A_{13} Q_1 & \sqrt{2} A_{12} \end{bmatrix} \oplus \begin{bmatrix} \sqrt{2} A_{23} \\ A_{33} - A_{31} Q_1^* \end{bmatrix} \quad\text{if } \alpha = -1, \beta = 1. \]

Proof. Take X and Y to be the following two unitary matrices:

\[ X = \frac{1}{\sqrt{2}} \begin{bmatrix} I & 0 & -P_1^* \\ 0 & \sqrt{2} I_r & 0 \\ P_1 & 0 & I \end{bmatrix} \quad\text{and}\quad Y = \frac{1}{\sqrt{2}} \begin{bmatrix} I & 0 & -Q_1^* \\ 0 & \sqrt{2} I_s & 0 \\ Q_1 & 0 & I \end{bmatrix}. \tag{11} \]

Then the unitary transformation $X^* A Y$ yields

\[ X^* A Y = \frac{1}{2} \begin{bmatrix} I & 0 & P_1^* \\ 0 & \sqrt{2} I_r & 0 \\ -P_1 & 0 & I \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \begin{bmatrix} I & 0 & -Q_1^* \\ 0 & \sqrt{2} I_s & 0 \\ Q_1 & 0 & I \end{bmatrix} = \frac{1}{2} \begin{bmatrix} (A_{11} + A_{13} Q_1) + (P_1^* A_{31} + P_1^* A_{33} Q_1) & \sqrt{2} (A_{12} + P_1^* A_{32}) & (A_{13} - P_1^* A_{31} Q_1^*) + (P_1^* A_{33} - A_{11} Q_1^*) \\ \sqrt{2} (A_{21} + A_{23} Q_1) & 2 A_{22} & \sqrt{2} (A_{23} - A_{21} Q_1^*) \\ (A_{31} - P_1 A_{13} Q_1) + (A_{33} Q_1 - P_1 A_{11}) & \sqrt{2} (A_{32} - P_1 A_{12}) & (A_{33} - A_{31} Q_1^*) + (P_1 A_{11} Q_1^* - P_1 A_{13}) \end{bmatrix} \tag{12} \]

If $A = PAQ$, we immediately have the following relations among the submatrices $A_{ij}$:

\[ A_{11} = P_1^* A_{33} Q_1, \quad A_{13} = P_1^* A_{31} Q_1^*, \]

\[ A_{12} = \beta P_1^* A_{32}, \quad A_{21} = \alpha A_{23} Q_1 \quad\text{and}\quad A_{22} = \alpha\beta A_{22}. \]

Employing these relations and the unitarity of $P_1$ and $Q_1$ in (12), we obtain a much simplified form of the transformation $X^* A Y$; namely,

\[ X^* A Y = \frac{1}{2} \begin{bmatrix} 2 (A_{11} + A_{13} Q_1) & \sqrt{2} (1 + \beta) A_{12} & 0 \\ \sqrt{2} (1 + \alpha) A_{21} & (1 + \alpha\beta) A_{22} & \sqrt{2} (1 - \alpha) A_{23} \\ 0 & \sqrt{2} (1 - \beta) A_{32} & 2 (A_{33} - A_{31} Q_1^*) \end{bmatrix}. \tag{13} \]

Accordingly, we have the results we want:

\[ X^* A Y = \begin{bmatrix} A_{11} + A_{13} Q_1 & \sqrt{2} A_{12} & 0 \\ \sqrt{2} A_{21} & A_{22} & 0 \\ 0 & 0 & A_{33} - A_{31} Q_1^* \end{bmatrix} \quad\text{for } \alpha = \beta = 1, \]

\[ X^* A Y = \begin{bmatrix} A_{11} + A_{13} Q_1 & 0 & 0 \\ 0 & A_{22} & \sqrt{2} A_{23} \\ 0 & \sqrt{2} A_{32} & A_{33} - A_{31} Q_1^* \end{bmatrix} \quad\text{for } \alpha = \beta = -1, \]

\[ X^* A Y = \begin{bmatrix} A_{11} + A_{13} Q_1 & 0 & 0 \\ \sqrt{2} A_{21} & 0 & 0 \\ 0 & \sqrt{2} A_{32} & A_{33} - A_{31} Q_1^* \end{bmatrix} \quad\text{for } \alpha = 1, \beta = -1, \]

and

\[ X^* A Y = \begin{bmatrix} A_{11} + A_{13} Q_1 & \sqrt{2} A_{12} & 0 \\ 0 & 0 & \sqrt{2} A_{23} \\ 0 & 0 & A_{33} - A_{31} Q_1^* \end{bmatrix} \quad\text{for } \alpha = -1, \beta = 1. \]

Note that in (13), $A_{13} Q_1$ can be replaced by $P_1^* A_{31}$ and $A_{31} Q_1^*$ by $P_1 A_{13}$, since $A_{31} = P_1 A_{13} Q_1$. Computationally, one should use whichever expressions are cheaper to compute. Note also that X and Y do not depend on $\alpha$ and $\beta$, and

\[ X^* P X = I_p \oplus \alpha I_r \oplus (-I_p) \quad\text{and}\quad Y^* Q Y = I_q \oplus \beta I_s \oplus (-I_q). \]
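The same kind of check works for Theorem 3.2. The sketch below (arbitrary sizes, real orthogonal blocks, and the fixed sign pattern $\alpha = 1$, $\beta = -1$; all illustrative choices) builds P and Q as in (10) and X and Y as in (11), then confirms the third decomposed form of the theorem.

```python
import numpy as np

rng = np.random.default_rng(2)
p, r, q, s = 2, 1, 2, 1
alpha, beta = 1.0, -1.0

P1, _ = np.linalg.qr(rng.standard_normal((p, p)))
Q1, _ = np.linalg.qr(rng.standard_normal((q, q)))

def reflection(B1, a, k):
    """The generalized reflection matrix of (10), built from a real orthogonal B1."""
    m = B1.shape[0]
    R = np.zeros((2 * m + k, 2 * m + k))
    R[:m, -m:] = B1.T                      # B1* = B1^T for real B1
    R[m:m + k, m:m + k] = a * np.eye(k)    # central block a * I
    R[-m:, :m] = B1
    return R

def transform(B1, k):
    """The unitary X (or Y) of (11)."""
    m = B1.shape[0]
    T = np.zeros((2 * m + k, 2 * m + k))
    T[:m, :m] = np.eye(m)
    T[:m, -m:] = -B1.T
    T[m:m + k, m:m + k] = np.sqrt(2) * np.eye(k)
    T[-m:, :m] = B1
    T[-m:, -m:] = np.eye(m)
    return T / np.sqrt(2)

P, Q = reflection(P1, alpha, r), reflection(Q1, beta, s)
M = rng.standard_normal((2 * p + r, 2 * q + s))
A = (M + P @ M @ Q) / 2                    # enforce A = P A Q
assert np.allclose(A, P @ A @ Q)

X, Y = transform(P1, r), transform(Q1, s)
T = X.T @ A @ Y

# For alpha = 1, beta = -1 the result is a (p+r)-by-q block plus a p-by-(s+q) block.
A11, A13 = A[:p, :q], A[:p, q + s:]
A21, A31 = A[p:p + r, :q], A[p + r:, :q]
A32, A33 = A[p + r:, q:q + s], A[p + r:, q + s:]
assert np.allclose(T[:p + r, q:], 0) and np.allclose(T[p + r:, :q], 0)
assert np.allclose(T[:p, :q], A11 + A13 @ Q1)
assert np.allclose(T[p:p + r, :q], np.sqrt(2) * A21)
assert np.allclose(T[p + r:, q:q + s], np.sqrt(2) * A32)
assert np.allclose(T[p + r:, q + s:], A33 - A31 @ Q1.T)
```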

Remark 3: In Theorems 3.1 and 3.2, if the unitarity requirement on P, Q, X, and Y is lifted, a slightly more general case can be obtained simply by using inverses (assuming $P_1^{-1}$ and $Q_1^{-1}$ exist) in place of the conjugate transposes $P_1^*$, $Q_1^*$, $X^*$, and $Y^*$. With this replacement, all the results in the proofs remain intact. The matrices A in this case, however, are not necessarily generalized reflexive since P and Q may not be generalized reflection matrices.

Remark 4: Obviously, Theorem 3.2 reduces to Theorem 3.1 if $I_r$ and $I_s$ in (10) are absent, i.e., $r = s = 0$. If $I_r$ is present and $I_s$ is absent, then by partitioning A as $(A_{ij})$, $i = 1, 2, 3$ and $j = 1, 2$, according to the block forms of P and Q, we have

\[ X^* A Y = \begin{bmatrix} A_{11} + A_{12} Q_1 & 0 \\ \frac{1}{\sqrt{2}} (1 + \alpha) A_{21} & \frac{1}{\sqrt{2}} (1 - \alpha) A_{22} \\ 0 & A_{32} - A_{31} Q_1^* \end{bmatrix} \tag{14} \]

which is decoupled into two independent sub-blocks for either value of $\alpha = \pm 1$. Analogous to (13), $A_{12} Q_1$ and $A_{31} Q_1^*$ can be expressed as $P_1^* A_{31}$ and $P_1 A_{12}$, respectively, since in this case $A_{31} = P_1 A_{12} Q_1$. Instead, if $I_r$ is absent and $I_s$ remains, and the matrix A is partitioned in accordance with P and Q as

\[ A = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{bmatrix}, \]

then we have

\[ X^* A Y = \begin{bmatrix} A_{11} + A_{13} Q_1 & \frac{1}{\sqrt{2}} (1 + \beta) A_{12} & 0 \\ 0 & \frac{1}{\sqrt{2}} (1 - \beta) A_{22} & A_{23} - A_{21} Q_1^* \end{bmatrix} \tag{15} \]

where $A_{13} Q_1 = P_1^* A_{21}$ and $A_{21} Q_1^* = P_1 A_{13}$ because $A_{21} = P_1 A_{13} Q_1$. This transformation again decouples the matrix A into two independent sub-blocks for either value of $\beta = \pm 1$.

4. Applications

As seen from the transformations presented in the previous section, the decomposed forms of matrices A of this class are very simple to compute. This is especially true when P and Q are reflection (symmetric signed permutation) matrices, which arise frequently in a very wide range of real-world applications, because any reflection matrix can be symmetrically permuted to yield one of the forms of (6) and (10), with $P_1$ and $Q_1$ being some signed permutation matrices and the central blocks, when present, of the form $\pm I$. Furthermore, the decompositions preserve all singular values because they make use of unitary equivalence transformations, which can be applied to both square and rectangular matrices. Therefore, they are useful not only for linear systems but also for linear least-squares problems and singular value problems. The only requirement is that the matrix A possess the generalized reflexivity property. When P is the same as Q, the decompositions become similarity transformations and, accordingly, preserve all eigenvalues. It is exactly this simplicity and this preservation of singular values or eigenvalues that make these decompositions computationally attractive. To demonstrate the usefulness of these decompositions in attacking applications of this type, we present in this section an application of the decompositions to one of the numerical examples described in [1], where the same problem is solved using only basic generalized reflexive properties, without resorting to matrix decompositions.

Numerical example. Consider the following overdetermined linear system:

\[ \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 1 \\ 0 & -1 & 0 & 1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & -1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 50 \\ 152 \\ -78 \\ -33 \\ 30 \\ 123 \\ -2 \end{bmatrix}. \tag{16} \]

Let A be the coefficient matrix of the overdetermined system. It is simple to observe that A is a generalized reflexive matrix: $A = PAQ$, where

\[ P = \begin{bmatrix} 0 & 0 & I_3 \\ 0 & -1 & 0 \\ I_3 & 0 & 0 \end{bmatrix} \quad\text{and}\quad Q = \begin{bmatrix} 0 & I_2 \\ I_2 & 0 \end{bmatrix} \tag{17} \]

are two reflection matrices. It is worth mentioning that the coefficient matrix A is the edge-node incidence matrix of a level network with reflexive symmetry.

Whether this overdetermined linear system is to be solved via its normal equation or using a QR decomposition instead, we can decompose the original problem into two independent subproblems first, using the decomposition techniques presented in the previous section. Let

\[ X = \frac{1}{\sqrt{2}} \begin{bmatrix} I_3 & 0 & -I_3 \\ 0 & \sqrt{2} & 0 \\ I_3 & 0 & I_3 \end{bmatrix} \quad\text{and}\quad Y = \frac{1}{\sqrt{2}} \begin{bmatrix} I_2 & -I_2 \\ I_2 & I_2 \end{bmatrix}. \tag{18} \]

The overdetermined system $Ax = b$ is then transformed to $\tilde{A}\tilde{x} = \tilde{b}$ with

\[ \tilde{A} = X^T A Y, \quad \tilde{x} = (\sqrt{2}\, Y^T) x \quad\text{and}\quad \tilde{b} = (\sqrt{2}\, X^T) b, \]

where the factor $\sqrt{2}$ is intentionally inserted to avoid unnecessary multiplications by $1/\sqrt{2}$ in forming $\tilde{b}$ from b. Now, let $Ax = b$ be partitioned, according to the block forms of X and Y, as

\[ \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ A_{31} & A_{32} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}. \tag{19} \]

The transformation $X^T A Y$ can easily be obtained without actually performing expensive matrix-matrix multiplications. We simply use the explicit form of (14), substituting $I_2$ for $Q_1$ and −1 for $\alpha$, yielding

\[ \tilde{A} = X^T A Y = \tilde{A}_1 \oplus \tilde{A}_2 \]

where

\[ \tilde{A}_1 = A_{11} + A_{12} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \\ -1 & 1 \end{bmatrix} \quad\text{and}\quad \tilde{A}_2 = \begin{bmatrix} \sqrt{2} A_{22} \\ A_{32} - A_{31} \end{bmatrix} = \begin{bmatrix} 0 & \sqrt{2} \\ 1 & -1 \\ 0 & 1 \\ -1 & -1 \end{bmatrix}. \tag{20} \]

It is simple to obtain b ˜ without resorting to a dense matrix-vector multiplication.

\[ \tilde{b} = \begin{bmatrix} b_1 + b_3 \\ \sqrt{2}\, b_2 \\ b_3 - b_1 \end{bmatrix} = \begin{bmatrix} 80 & 275 & -80 & \sqrt{2}(-33) & -20 & -29 & 76 \end{bmatrix}^T. \]

This transformation then decouples the original system $Ax = b$ into

\[ \tilde{A}_1 \tilde{x}_1 = \tilde{b}_1 \quad\text{with}\quad \tilde{b}_1 = \begin{bmatrix} 80 & 275 & -80 \end{bmatrix}^T \tag{21} \]

and

\[ \tilde{A}_2 \tilde{x}_2 = \tilde{b}_2 \quad\text{with}\quad \tilde{b}_2 = \begin{bmatrix} \sqrt{2}(-33) & -20 & -29 & 76 \end{bmatrix}^T. \tag{22} \]

The normal equations of (21) and (22) are simply

\[ \begin{bmatrix} 2 & -2 \\ -2 & 3 \end{bmatrix} \tilde{x}_1 = \begin{bmatrix} 160 \\ 115 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} 2 & 0 \\ 0 & 5 \end{bmatrix} \tilde{x}_2 = \begin{bmatrix} -96 \\ -151 \end{bmatrix}, \]

respectively, whose solutions are $\tilde{x}_1 = \begin{bmatrix} 355 & 275 \end{bmatrix}^T$ and $\tilde{x}_2 = \begin{bmatrix} -48 & -30.2 \end{bmatrix}^T$. The final solution x can now be retrieved from $\tilde{x}_1$ and $\tilde{x}_2$ with ease:

\[ x = \frac{1}{2} \begin{bmatrix} \tilde{x}_1 - \tilde{x}_2 \\ \tilde{x}_1 + \tilde{x}_2 \end{bmatrix} = \begin{bmatrix} 201.5 & 152.6 & 153.5 & 122.4 \end{bmatrix}^T, \]

whose correctness can be verified from the normal equation of the original system.
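For completeness, the whole example is easy to reproduce in a few lines of NumPy; this is a direct transcription of the steps above, with np.linalg.lstsq standing in for whichever least-squares solver one prefers.

```python
import numpy as np

A = np.array([[ 1, -1,  0,  0],
              [ 0,  1,  0,  0],
              [-1,  0,  0,  1],
              [ 0, -1,  0,  1],
              [ 0,  0,  1, -1],
              [ 0,  0,  0,  1],
              [ 0,  1, -1,  0]], dtype=float)
b = np.array([50, 152, -78, -33, 30, 123, -2], dtype=float)

I3, I2, Z31 = np.eye(3), np.eye(2), np.zeros((3, 1))
P = np.block([[np.zeros((3, 3)), Z31, I3],
              [Z31.T, -np.eye(1), Z31.T],
              [I3, Z31, np.zeros((3, 3))]])
Q = np.block([[np.zeros((2, 2)), I2], [I2, np.zeros((2, 2))]])
assert np.allclose(A, P @ A @ Q)          # A is generalized reflexive, cf. (17)

X = np.block([[I3, Z31, -I3],
              [Z31.T, np.sqrt(2) * np.eye(1), Z31.T],
              [I3, Z31, I3]]) / np.sqrt(2)
Y = np.block([[I2, -I2], [I2, I2]]) / np.sqrt(2)

At = X.T @ A @ Y                          # block diag(A~1, A~2), cf. (20)
bt = np.sqrt(2) * (X.T @ b)
assert np.allclose(At[:3, 2:], 0) and np.allclose(At[3:, :2], 0)

x1 = np.linalg.lstsq(At[:3, :2], bt[:3], rcond=None)[0]   # subproblem (21)
x2 = np.linalg.lstsq(At[3:, 2:], bt[3:], rcond=None)[0]   # subproblem (22)
x = Y @ np.concatenate([x1, x2]) / np.sqrt(2)             # retrieve x
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
print(x)                                  # [201.5 152.6 153.5 122.4]
```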

At this point, it is clear that the main reason why transformations of this type are so cheap to obtain is not only that explicit forms are available but also that no arithmetic multiplications or divisions are involved in forming the decoupled subsystems, except for the central block row of A and b and the central block column of A, if any, such as $A_{22}$ and $b_2$ in this example. The dimensions of these blocks are usually very small for large-scale problems with reflexive symmetry because they involve only the nodes/edges on the line or plane of symmetry. Therefore, this extra work is easily offset by the tremendous savings resulting from solving two smaller subproblems whose sizes are only about half that of the original problem. It is worth mentioning that solving sequentially two independent decomposed subproblems, each of half the size of a single problem, is about four times faster than solving the undecomposed one. This is exactly where the computational efficiency comes from. The large-grain parallelism induced by these decompositions is an additional advantage when the subproblems are solved on a multiprocessor or on multiple networked computers.
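The factor of four can be seen from a rough operation count. If, for illustration, one assumes the standard leading-order cost of about $mn^2$ flops for a dense m-by-n least-squares solve (by normal equations or QR), then halving both dimensions gives

\[ 2 \cdot \frac{m}{2} \left( \frac{n}{2} \right)^2 = \frac{m n^2}{4} \]

for the two half-size subproblems solved sequentially, i.e., about one quarter of the $mn^2$ cost of the undecomposed problem.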

We close this section by emphasizing the fact that a great number of scientific and engineering applications require solutions to linear least-squares problems, singular value problems, linear systems, or eigenvalue problems whose coefficient matrices are either generalized reflexive with respect to nontrivial reflection matrices P and Q, or reflexive with respect to a nontrivial P (or Q). Instead of giving more numerical examples, we simply mention that the node-edge (or edge-node) incidence matrix of any finite network or graph that possesses reflexive symmetry, or that can be redrawn to display reflexive symmetry, is generalized reflexive. Refer to [1] for more numerical examples.

5. Conclusions

Generalized reflexive matrices, a newly exploited special class of matrices $A \in \mathbb{C}^{n \times m}$ that satisfy the relation $A = PAQ$ with P and Q being some generalized reflection matrices, are a generalization of centrosymmetric matrices and reflexive matrices. Although it is not trivial to recognize this structure purely from the entries of a given matrix, this class of matrices indeed arises very often in physical problems in many areas of science and engineering, especially those with reflexive symmetry. Three such nontrivial numerical examples, each from a distinct real-world application area, can be found in [1].

A major part of this paper has been devoted to the exploration of computationally attractive decompositions that take advantage of the special relation possessed by this class of matrices. The decompositions are based on a generalized simultaneous diagonalization theorem presented in this paper and are derived using the eigenvectors of P and Q via unitary equivalence transformations. When the eigenpairs of P and Q are explicitly known, which is usually the case for generalized reflexive matrices that arise from physical problems with reflexive symmetry, the decompositions yield simple and explicit forms of the decomposed submatrices of A. One of the generalized reflexive matrices presented in this paper has also been employed as an example to show the usefulness of the derived explicit decompositions for decoupling linear least-squares problems whose coefficient matrices are of this class into smaller and independent subproblems. These decompositions, though theoretically simple, can lead to much more efficient computation for large-scale applications. They also induce large-grain parallelism as a by-product. Furthermore, they preserve either the singular values or the eigenvalues of the matrices and are, therefore, immediately applicable not only to linear least-squares problems and linear systems but also to singular value problems and eigenvalue problems.

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] Chen, H.-C. (1998) Generalized Reflexive Matrices: Special Properties and Applications. SIAM Journal on Matrix Analysis and Applications, 19, 140-153.
https://doi.org/10.1137/S0895479895288759
[2] Zehfuss, G. (1862) Zwei Sätze über Determinanten. Zeitschrift für Angewandte Mathematik und Physik, VII, 436-439.
[3] Aitken, A.C. (1949) Determinants and Matrices. 6th Edition, Wiley-Interscience, New York.
[4] Good, I.J. (1970) The Inverse of a Centrosymmetric Matrix. Technometrics, 12, 925-928.
https://doi.org/10.1080/00401706.1970.10488743
[5] Andrew, A.L. (1973) Solution of Equations Involving Centrosymmetric Matrices. Technometrics, 15, 405-407. https://doi.org/10.1080/00401706.1973.10489052
[6] Andrew, A.L. (1973) Eigenvectors of Certain Matrices. Linear Algebra and Its Applications, 7, 151-162. https://doi.org/10.1016/0024-3795(73)90049-9
[7] Pye, W.C., Boullion, T.L. and Atchison, T.A. (1973) The Pseudoinverse of a Centrosymmetric Matrix. Linear Algebra and Its Applications, 6, 201-204.
https://doi.org/10.1016/0024-3795(73)90020-7
[8] Cantoni, A. and Butler, P. (1976) Eigenvalues and Eigenvectors of Symmetric Centrosymmetric Matrices. Linear Algebra and Its Applications, 13, 275-288.
https://doi.org/10.1016/0024-3795(76)90101-4
[9] Weaver, J.R. (1985) Centrosymmetric (Cross-Symmetric) Matrices, Their Basic Properties, Eigenvalues, and Eigenvectors. The American Mathematical Monthly, 92, 711-717.
https://doi.org/10.1080/00029890.1985.11971719
[10] Weaver, J.R. (1988) Real Eigenvalues of Nonnegative Matrices Which Commute with a Symmetric Matrix Involution. Linear Algebra and Its Applications, 110, 243-253.
https://doi.org/10.1016/0024-3795(83)90138-6
[11] Tao, D. and Yasuda, M. (2002) A Spectral Characterization of Generalized Real Symmetric Centrosymmetric and Generalized Real Symmetric Skew-Centrosymmetric Matrices. SIAM Journal on Matrix Analysis and Applications, 23, 885-895.
https://doi.org/10.1137/S0895479801386730
[12] Chen, H-C. and Sameh, A. (1989) A Matrix Decomposition Method for Orthotropic Elasticity Problems. SIAM Journal on Matrix Analysis and Applications, 10, 39-64.
https://doi.org/10.1137/0610004
[13] Chen, H.-C. and Sameh, A. (1989) A Domain Decomposition Method for 3D Elasticity Problems. In: Brebbia, C.A. and Peters, A., Eds., Applications of Supercomputers in Engineering: Fluid Flow and Stress Analysis Applications, Computational Mechanics Publications, Southampton University, Southampton, England, 171-188.
[14] Gibson, P.M. (1974) Simultaneous Diagonalization of Rectangular Complex Matrices. Linear Algebra and Its Applications, 9, 45-53. https://doi.org/10.1016/0024-3795(74)90025-1
[15] Horn, R.A. and Johnson, C.R. (1985) Matrix Analysis. Cambridge University Press, New York.
https://doi.org/10.1017/CBO9780511810817
[16] Eckart, C. and Young, G. (1939) A Principal Axis Transformation for Non-Hermitian Matrices. Bulletin of the American Mathematical Society, 45, 118-121.
https://doi.org/10.1090/S0002-9904-1939-06910-3
