Least Squares Symmetrizable Solutions for a Class of Matrix Equations
1. Introduction
The matrix equations (AX = B, XC = D), where A, B, C, D are usually obtained from experiments, have a long history [1], and many authors have considered them. For example, Mitra [2,3] and Chu [4] discussed their unconstrained solutions using the generalized inverse of a matrix and the singular value decomposition (SVD), respectively. In recent years, many authors have considered constrained solutions, and a series of meaningful results have been achieved [1,5-10]. The methods in these papers rely mainly on the generalized inverse of a matrix and special properties of finite dimensional vector spaces [1,5,6], on decompositions of a matrix or a matrix pair [7,8], and on special properties of the constraint matrices [9,10]. However, the least squares symmetrizable solutions of these matrix equations have not been considered. The purpose of this paper is to discuss the least squares symmetrizable solutions by means of matrix row stacking, the Kronecker product and special relations between two linear subspaces that are topologically isomorphic, because no explicit structure of symmetrizable matrices is available and the methods applied in [1-10] cannot solve the problem considered here. The motivation for introducing symmetrizable matrices is to obtain "symmetric" matrices from nonsymmetric ones [11], owing to the nice properties and wide applications of symmetric matrices. For example, Sun [12] introduced positive definite symmetrizable matrices to study an efficient algorithm for solving nonsymmetric second-order elliptic discrete systems.
Throughout this paper we use the following notation. Let $R^{m\times n}$ be the set of all $m\times n$ real matrices and denote $R^{n}=R^{n\times 1}$; $OR^{n\times n}$, $SR^{n\times n}$ and $ASR^{n\times n}$ are the sets of all $n\times n$ orthogonal, symmetric and skew-symmetric matrices, respectively. $R(A)$, $A^{T}$ and $A^{+}$ represent the range, the transpose and the Moore-Penrose generalized inverse of $A$, respectively. $I_{n}$ denotes the identity matrix of order $n$. For $A\in R^{m\times n}$ and $B\in R^{p\times q}$, $A\otimes B$ denotes the Kronecker product of $A$ and $B$; for $A,B\in R^{m\times n}$, $\langle A,B\rangle=\operatorname{tr}(B^{T}A)$ denotes the inner product of $A$ and $B$. The induced matrix norm is called the Frobenius norm, i.e. $\|A\|=\sqrt{\langle A,A\rangle}$; then $R^{m\times n}$ is a Hilbert inner product space.
Definition 1. A real $n\times n$ matrix $A$ is called a symmetrizable (skew-symmetrizable) matrix if $A$ is similar to a symmetric (skew-symmetric) matrix. The set of symmetrizable (skew-symmetrizable) matrices is denoted by $SYR^{n\times n}$ ($SSYR^{n\times n}$).

From Definition 1, it is easy to prove that $A\in SYR^{n\times n}$ ($A\in SSYR^{n\times n}$) if and only if there exist a nonsingular matrix $W$ and a symmetric (skew-symmetric) matrix $\tilde{A}\in SR^{n\times n}$ ($\tilde{A}\in ASR^{n\times n}$) such that

$$A=W^{-1}\tilde{A}W.$$
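As a minimal numerical illustration of Definition 1 (a sketch with assumed random test data, not taken from the paper), the following code builds $A=W^{-1}\tilde{A}W$ from a symmetric $\tilde{A}$ and a nonsingular $W$ and checks that $A$, although not symmetric itself, is similar to a symmetric matrix.

```python
import numpy as np

# Take any symmetric A_tilde and any nonsingular W;
# then A = W^{-1} A_tilde W is symmetrizable (similar to A_tilde).
rng = np.random.default_rng(0)
n = 4
A_tilde = rng.standard_normal((n, n))
A_tilde = (A_tilde + A_tilde.T) / 2                 # symmetric matrix
W = rng.standard_normal((n, n)) + n * np.eye(n)     # nonsingular with high probability
A = np.linalg.solve(W, A_tilde @ W)                 # A = W^{-1} A_tilde W

print(np.allclose(A, A.T))                          # False in general: A is not symmetric
print(np.allclose(W @ A @ np.linalg.inv(W), A_tilde))  # True: A is similar to a symmetric matrix
```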
We now introduce the following two special classes of subspaces in $R^{n\times n}$:

$$SYR_{W}^{n\times n}=\{A\in R^{n\times n}\mid A=W^{-1}\tilde{A}W,\ \tilde{A}\in SR^{n\times n}\},$$

$$SSYR_{W}^{n\times n}=\{A\in R^{n\times n}\mid A=W^{-1}\tilde{A}W,\ \tilde{A}\in ASR^{n\times n}\}.$$

It is easy to see that if $W$ is a given nonsingular matrix, then $SYR_{W}^{n\times n}$ and $SSYR_{W}^{n\times n}$ are two closed linear subspaces of $R^{n\times n}$. In this paper, we suppose that $W$ is a given nonsingular matrix and seek solutions in $SYR_{W}^{n\times n}$. We will consider the following problems.
Problem I. Given $A,B\in R^{m\times n}$ and $C,D\in R^{n\times p}$, find $X\in SYR_{W}^{n\times n}$ such that

$$\|AX-B\|^{2}+\|XC-D\|^{2}=\min.$$
Problem II. Given $X^{*}\in R^{n\times n}$, find $\hat{X}\in S_{E}$ such that

$$\|\hat{X}-X^{*}\|=\min_{X\in S_{E}}\|X-X^{*}\|,$$

where $S_{E}$ is the solution set of Problem I.
If C = 0 and D = 0 in Problem I, then Problem I becomes Problem I of [13]. Peng [13] studied the least squares symmetrizable solutions of the matrix equation AX = B using the singular value decomposition, but the method applied in [13] cannot solve Problem I of this paper. Here we first transform the matrix equations (AX = B, XC = D) into a system of linear equations by matrix row stacking and the Kronecker product. Then we obtain an orthonormal basis of $\operatorname{vec}(SYR_{W}^{n\times n})$ by exploiting special relations between two linear subspaces that are topologically isomorphic. Based on these results, we obtain the general expression of the solutions of Problem I.
This paper is organized as follows. In Section 2, we first discuss the matrix row stacking method, the Kronecker product and the relations between $SR^{n\times n}$ and $SYR_{W}^{n\times n}$ and their images under row stacking. Then we obtain the general solutions of Problem I. In Section 3, we derive the solution of Problem II using the invariance of the Frobenius norm under orthogonal transformations. Finally, we give an algorithm and a numerical experiment for obtaining the optimal approximation solution.
2. The Solution Set of Problem I
We first discuss the matrix row stacking method, the Kronecker product and the relations between two linear subspaces that are topologically isomorphic.
For any $A\in R^{m\times n}$, let $\operatorname{vec}(A)$ denote the ordered stack of the rows of $A$ from top to bottom, beginning with the first row, i.e.

$$\operatorname{vec}(A)=(a_{1},a_{2},\ldots,a_{m})^{T}\in R^{mn}, \qquad (2.1)$$

where $a_{i}$ denotes the $i$th row of $A$. For any vector $x\in R^{mn}$, let $\operatorname{mat}(x)$ denote the following matrix containing all the entries of the vector $x$:

$$\operatorname{mat}(x)=\begin{pmatrix} x(1:n)\\ x(n+1:2n)\\ \vdots\\ x((m-1)n+1:mn)\end{pmatrix}\in R^{m\times n}, \qquad (2.2)$$

where $x(i:j)$ denotes the entries from $i$ to $j$ of the vector $x$. From (2.1), we can derive the following two linear subspaces of $R^{n^{2}}$:

$$\operatorname{vec}(SR^{n\times n})=\{\operatorname{vec}(\tilde{A})\mid \tilde{A}\in SR^{n\times n}\}, \qquad (2.3)$$

$$\operatorname{vec}(SYR_{W}^{n\times n})=\{\operatorname{vec}(A)\mid A\in SYR_{W}^{n\times n}\}.$$
Lemma 1. [14] If $A\in R^{m\times n}$, $X\in R^{n\times p}$ and $B\in R^{p\times q}$, then

$$\operatorname{vec}(AXB)=(A\otimes B^{T})\operatorname{vec}(X). \qquad (2.4)$$
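As a quick numerical check (a sketch with assumed random test data), the following code implements the row-stacking operators of (2.1) and (2.2) with NumPy, where the C-order `reshape` performs exactly the row stacking, and verifies the identity (2.4).

```python
import numpy as np

def vec(A):
    """Row-stacking operator of (2.1): stack the rows of A into one long vector."""
    return A.reshape(-1)          # C order = rows stacked from top to bottom

def mat(x, m, n):
    """Inverse operator of (2.2): rebuild the m-by-n matrix from a row-stacked vector."""
    return x.reshape(m, n)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # m x n
X = rng.standard_normal((4, 5))   # n x p
B = rng.standard_normal((5, 2))   # p x q

# Lemma 1 / (2.4): vec(A X B) = (A kron B^T) vec(X) for the row-stacking vec.
lhs = vec(A @ X @ B)
rhs = np.kron(A, B.T) @ vec(X)
print(np.allclose(lhs, rhs))      # expected: True
```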
For any $\tilde{A}\in SR^{n\times n}$, define the mapping $f$ by $f(\tilde{A})=W^{-1}\tilde{A}W$, i.e.

$$A=f(\tilde{A})=W^{-1}\tilde{A}W. \qquad (2.5)$$

It is not difficult to prove that the mapping $f$ is a topological isomorphism from $SR^{n\times n}$ onto $SYR_{W}^{n\times n}$. According to (2.1) and (2.5), it is easy to derive the following mapping $g$ from the linear subspace $\operatorname{vec}(SR^{n\times n})$ onto $\operatorname{vec}(SYR_{W}^{n\times n})$:

$$g(\operatorname{vec}(\tilde{A}))=\operatorname{vec}(W^{-1}\tilde{A}W),$$

i.e., by Lemma 1,

$$\operatorname{vec}(A)=(W^{-1}\otimes W^{T})\operatorname{vec}(\tilde{A}). \qquad (2.6)$$

It is also easy to prove that the mapping $g$ is a topological isomorphism from $\operatorname{vec}(SR^{n\times n})$ onto $\operatorname{vec}(SYR_{W}^{n\times n})$. It is clear that the dimension of $SR^{n\times n}$ is $n(n+1)/2$. This implies that the dimensions of $SYR_{W}^{n\times n}$ and $\operatorname{vec}(SYR_{W}^{n\times n})$ are also $n(n+1)/2$. In this paper, let $q=n(n+1)/2$.
Lemma 2. If $\eta_{1},\eta_{2},\ldots,\eta_{q}$ is an orthonormal basis of $\operatorname{vec}(SYR_{W}^{n\times n})$, and let $H=(\eta_{1},\eta_{2},\ldots,\eta_{q})\in R^{n^{2}\times q}$, then the following relations hold:

$$H^{T}H=I_{q}, \qquad \operatorname{vec}(X)=HH^{T}\operatorname{vec}(X)\quad\text{for all } X\in SYR_{W}^{n\times n}. \qquad (2.7)$$
From the definition of an orthonormal basis, it is easy to prove Lemma 2, so the proof is omitted.
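For instance, a minimal sketch assuming the special case $W=I_{n}$ (so that $SYR_{W}^{n\times n}=SR^{n\times n}$): an orthonormal basis $H$ of $\operatorname{vec}(SR^{2\times 2})$ can be written down directly, and both relations of (2.7) can be checked numerically.

```python
import numpy as np

# Orthonormal basis of vec(SR^{2x2}) in row-stacking order (a, b, b, c); q = n(n+1)/2 = 3.
s = 1 / np.sqrt(2)
H = np.array([[1, 0, 0],
              [0, s, 0],
              [0, s, 0],
              [0, 0, 1]])

X = np.array([[2.0, -1.0],
              [-1.0, 5.0]])            # symmetric, hence in SYR_W with W = I
x = X.reshape(-1)                      # vec(X) by row stacking

print(np.allclose(H.T @ H, np.eye(3)))    # first relation of (2.7)
print(np.allclose(H @ H.T @ x, x))        # second relation of (2.7)
```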
For any matrix $X\in SYR_{W}^{n\times n}$, let $y\in R^{q}$ denote the vector of coordinates of $\operatorname{vec}(X)$ with respect to the basis $\eta_{1},\ldots,\eta_{q}$, so that $\operatorname{vec}(X)=Hy$. Then, combining (2.4) and (2.7), we have

$$\operatorname{vec}(AX)=(A\otimes I_{n})Hy, \qquad \operatorname{vec}(XC)=(I_{n}\otimes C^{T})Hy. \qquad (2.8)$$

Moreover, for any $y\in R^{q}$, the following conclusion holds:

$$\operatorname{mat}(Hy)\in SYR_{W}^{n\times n}. \qquad (2.9)$$
The orthonormal basis $\eta_{1},\ldots,\eta_{q}$ of $\operatorname{vec}(SYR_{W}^{n\times n})$ can be obtained by the following calculation procedure.
Calculation procedure
Step 1. Input a basis $\tilde{A}_{1},\tilde{A}_{2},\ldots,\tilde{A}_{q}$ of $SR^{n\times n}$.

Step 2. According to (2.1), compute $\operatorname{vec}(\tilde{A}_{1}),\ldots,\operatorname{vec}(\tilde{A}_{q})$ and obtain a basis of $\operatorname{vec}(SR^{n\times n})$.

Step 3. Input a nonsingular matrix $W$, compute $(W^{-1}\otimes W^{T})\operatorname{vec}(\tilde{A}_{i})$, $i=1,\ldots,q$, and obtain a basis of $\operatorname{vec}(SYR_{W}^{n\times n})$.

Step 4. Compute the QR decomposition of the matrix $\bigl((W^{-1}\otimes W^{T})\operatorname{vec}(\tilde{A}_{1}),\ldots,(W^{-1}\otimes W^{T})\operatorname{vec}(\tilde{A}_{q})\bigr)$ and obtain an orthonormal basis $\eta_{1},\ldots,\eta_{q}$ of $\operatorname{vec}(SYR_{W}^{n\times n})$.
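A compact NumPy sketch of this calculation procedure is given below (an illustrative implementation; the helper names `symmetric_basis` and `orthonormal_basis_H` are chosen here for exposition, and Step 1 uses the standard basis of $SR^{n\times n}$ consisting of $E_{ii}$ and $E_{ij}+E_{ji}$).

```python
import numpy as np

def symmetric_basis(n):
    """Step 1: the standard basis of SR^{n x n} (E_ii and E_ij + E_ji, i <= j)."""
    basis = []
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            E[j, i] = 1.0
            basis.append(E)
    return basis                           # q = n(n+1)/2 matrices

def orthonormal_basis_H(W):
    """Steps 2-4: columns of H form an orthonormal basis of vec(SYR_W^{n x n})."""
    n = W.shape[0]
    K = np.kron(np.linalg.inv(W), W.T)                       # matrix of the mapping (2.6)
    cols = [K @ S.reshape(-1) for S in symmetric_basis(n)]    # Steps 2-3: vec and map
    M = np.column_stack(cols)                                 # n^2 x q, full column rank
    Q, _ = np.linalg.qr(M)                                    # Step 4: QR decomposition
    return Q                                                  # H = (eta_1, ..., eta_q)

# Example use: W must be nonsingular.
W = np.array([[2.0, 1.0], [0.0, 1.0]])
H = orthonormal_basis_H(W)
print(np.allclose(H.T @ H, np.eye(H.shape[1])))               # True: orthonormal columns
```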
Lemma 3. If $A_{0}\in R^{s\times q}$ and $B_{0}\in R^{s}$, then the general solutions of the least squares problem

$$\|A_{0}y-B_{0}\|=\min_{y\in R^{q}}$$

are

$$y=A_{0}^{+}B_{0}+(I_{q}-A_{0}^{+}A_{0})z, \qquad \forall z\in R^{q}.$$
This lemma is easily proved using the singular value decomposition, so the proof is omitted.
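For completeness, a brief sketch of the standard SVD argument (outlined here under the usual conventions, with $\Sigma$ the $r\times r$ block of nonzero singular values):

$$
\begin{aligned}
A_{0}&=U\begin{pmatrix}\Sigma & 0\\ 0 & 0\end{pmatrix}V^{T},\qquad
V^{T}y=\begin{pmatrix}y_{1}\\ y_{2}\end{pmatrix},\qquad
U^{T}B_{0}=\begin{pmatrix}b_{1}\\ b_{2}\end{pmatrix},\\
\|A_{0}y-B_{0}\|^{2}&=\|\Sigma y_{1}-b_{1}\|^{2}+\|b_{2}\|^{2}
\ \Longrightarrow\ y_{1}=\Sigma^{-1}b_{1},\ y_{2}\ \text{arbitrary},\\
y&=A_{0}^{+}B_{0}+(I_{q}-A_{0}^{+}A_{0})z,\qquad z\in R^{q}.
\end{aligned}
$$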
Theorem 1. Given $A,B\in R^{m\times n}$ and $C,D\in R^{n\times p}$, let

$$A_{0}=\begin{pmatrix}(A\otimes I_{n})H\\ (I_{n}\otimes C^{T})H\end{pmatrix},\qquad
B_{0}=\begin{pmatrix}\operatorname{vec}(B)\\ \operatorname{vec}(D)\end{pmatrix}. \qquad (2.11)$$

Then the solution set of Problem I is

$$S_{E}=\bigl\{X\in SYR_{W}^{n\times n}\ \bigm|\ \operatorname{vec}(X)=H\bigl(A_{0}^{+}B_{0}+(I_{q}-A_{0}^{+}A_{0})z\bigr),\ z\in R^{q}\bigr\}. \qquad (2.12)$$

Proof. From Lemma 1 and (2.8), we have

$$\|AX-B\|^{2}+\|XC-D\|^{2}
=\|\operatorname{vec}(AX)-\operatorname{vec}(B)\|^{2}+\|\operatorname{vec}(XC)-\operatorname{vec}(D)\|^{2}
=\|A_{0}y-B_{0}\|^{2},$$

where $\operatorname{vec}(X)=Hy$. This implies that finding $X\in SYR_{W}^{n\times n}$ such that $\|AX-B\|^{2}+\|XC-D\|^{2}=\min$ is equivalent to finding $y\in R^{q}$ such that

$$\|A_{0}y-B_{0}\|=\min, \qquad (2.13)$$

where $A_{0}$, $B_{0}$ are denoted by (2.11). From Lemma 3, the general solutions of (2.13) are

$$y=A_{0}^{+}B_{0}+(I_{q}-A_{0}^{+}A_{0})z,\qquad \forall z\in R^{q}. \qquad (2.14)$$

Combining (2.13) and (2.14) gives (2.12). □
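The construction (2.11)–(2.14) translates directly into NumPy. The sketch below is illustrative; it assumes $H$ has been computed by the calculation procedure above (e.g., by the `orthonormal_basis_H` sketch) and returns the particular solution of (2.12) with $z=0$.

```python
import numpy as np

def problem_I_particular_solution(A, B, C, D, H):
    """Return mat(H A0^+ B0): the solution (2.12) of Problem I with z = 0."""
    n = A.shape[1]
    A0 = np.vstack([np.kron(A, np.eye(n)) @ H,            # (A kron I_n) H
                    np.kron(np.eye(n), C.T) @ H])         # (I_n kron C^T) H
    B0 = np.concatenate([B.reshape(-1), D.reshape(-1)])    # (vec(B); vec(D)), see (2.11)
    y = np.linalg.pinv(A0) @ B0                             # A0^+ B0, i.e. (2.14) with z = 0
    return (H @ y).reshape(n, n)                            # X = mat(H y) in SYR_W^{n x n}
```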
3. The Solution of Problem II
Let $S_{E}$ be the solution set of Problem I. From (2.12), it is easy to see that $S_{E}$ is a nonempty closed convex set. Hence, for any given $X^{*}\in R^{n\times n}$, there exists a unique optimal approximation for Problem II.
Theorem 2. Given $A,B\in R^{m\times n}$, $C,D\in R^{n\times p}$ and $X^{*}\in R^{n\times n}$, Problem II has a unique solution $\hat{X}\in S_{E}$. Moreover, $\hat{X}$ can be expressed as

$$\hat{X}=\operatorname{mat}(H\hat{y}),\qquad
\hat{y}=A_{0}^{+}B_{0}+(I_{q}-A_{0}^{+}A_{0})H^{T}\operatorname{vec}(X^{*}), \qquad (3.1)$$

where $A_{0}$, $B_{0}$ are denoted by (2.11).

Proof. Choose any $X\in S_{E}$, so that $\operatorname{vec}(X)=H\bigl(A_{0}^{+}B_{0}+(I_{q}-A_{0}^{+}A_{0})z\bigr)$ for some $z\in R^{q}$. Combining the invariance of the Frobenius norm under orthogonal transformations with (2.12) and (2.14), and using $H^{T}H=I_{q}$, we have

$$\|X-X^{*}\|^{2}
=\|\operatorname{vec}(X)-\operatorname{vec}(X^{*})\|^{2}
=\|A_{0}^{+}B_{0}+(I_{q}-A_{0}^{+}A_{0})z-H^{T}\operatorname{vec}(X^{*})\|^{2}
+\|(I_{n^{2}}-HH^{T})\operatorname{vec}(X^{*})\|^{2}.$$

Let

$$P_{1}=A_{0}^{+}A_{0},\qquad P_{2}=I_{q}-A_{0}^{+}A_{0};$$

it is clear that $P_{1}$, $P_{2}$ are orthogonal projection matrices satisfying $P_{1}P_{2}=0$. Hence, we have

$$\|X-X^{*}\|^{2}
=\|P_{1}\bigl(A_{0}^{+}B_{0}-H^{T}\operatorname{vec}(X^{*})\bigr)\|^{2}
+\|P_{2}\bigl(z-H^{T}\operatorname{vec}(X^{*})\bigr)\|^{2}
+\|(I_{n^{2}}-HH^{T})\operatorname{vec}(X^{*})\|^{2}.$$

It is easy to see that only the middle term depends on $z$. This implies that

$$\min_{X\in S_{E}}\|X-X^{*}\| \iff \min_{z\in R^{q}}\|P_{2}z-P_{2}H^{T}\operatorname{vec}(X^{*})\|. \qquad (3.2)$$

A solution of (3.2) is

$$z=H^{T}\operatorname{vec}(X^{*}). \qquad (3.3)$$

Substituting (3.3) into (2.12) gives (3.1). □
From Theorem 2, we can design the following algorithm to obtain the optimal approximate solution.
Algorithm
1) Input $A,B\in R^{m\times n}$, $C,D\in R^{n\times p}$, a nonsingular matrix $W$ and $X^{*}\in R^{n\times n}$.

2) Input a basis $\tilde{A}_{1},\ldots,\tilde{A}_{q}$ of $SR^{n\times n}$.

3) According to the calculation procedure of Section 2, compute $(W^{-1}\otimes W^{T})\operatorname{vec}(\tilde{A}_{i})$, $i=1,\ldots,q$, and obtain an orthonormal basis $\eta_{1},\ldots,\eta_{q}$ of $\operatorname{vec}(SYR_{W}^{n\times n})$.

4) Let $H=(\eta_{1},\ldots,\eta_{q})$ and compute $A_{0}$, $B_{0}$ from (2.11).

5) Compute $\hat{y}$ from the second equation of (3.1).

6) According to the first equation of (3.1), calculate $\hat{X}=\operatorname{mat}(H\hat{y})$.
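Putting the pieces together, a self-contained NumPy sketch of steps 1)–6) might look as follows; the function name and the random test data at the end are illustrative assumptions, not the numerical example of the paper.

```python
import numpy as np

def solve_problem_II(A, B, C, D, W, X_star):
    """Least squares symmetrizable solution of (AX = B, XC = D) closest to X_star.

    Follows steps 1)-6): build an orthonormal basis H of vec(SYR_W^{n x n}),
    form A0, B0 as in (2.11), and evaluate (3.1).
    """
    n = W.shape[0]

    # Steps 2)-3): map the standard basis of SR^{n x n} through (2.6).
    K = np.kron(np.linalg.inv(W), W.T)
    cols = []
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = E[j, i] = 1.0
            cols.append(K @ E.reshape(-1))
    H, _ = np.linalg.qr(np.column_stack(cols))      # orthonormal basis of vec(SYR_W)

    # Step 4): A0, B0 from (2.11).
    A0 = np.vstack([np.kron(A, np.eye(n)) @ H,
                    np.kron(np.eye(n), C.T) @ H])
    B0 = np.concatenate([B.reshape(-1), D.reshape(-1)])

    # Steps 5)-6): evaluate (3.1).
    q = H.shape[1]
    A0_pinv = np.linalg.pinv(A0)
    y_hat = A0_pinv @ B0 + (np.eye(q) - A0_pinv @ A0) @ (H.T @ X_star.reshape(-1))
    return (H @ y_hat).reshape(n, n)

# Hypothetical usage with random data (not the numerical example of the paper):
rng = np.random.default_rng(0)
m, n, p = 4, 3, 2
W = rng.standard_normal((n, n)) + n * np.eye(n)
A, B = rng.standard_normal((m, n)), rng.standard_normal((m, n))
C, D = rng.standard_normal((n, p)), rng.standard_normal((n, p))
X_hat = solve_problem_II(A, B, C, D, W, rng.standard_normal((n, n)))
print(X_hat)
```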
Example

1) Input $A,B\in R^{m\times n}$, $C,D\in R^{n\times p}$, the nonsingular matrix $W$ and $X^{*}\in R^{n\times n}$.

2) Input a basis $\tilde{A}_{1},\ldots,\tilde{A}_{q}$ of $SR^{n\times n}$.

3) According to the calculation procedure, obtain an orthonormal basis $\eta_{1},\ldots,\eta_{q}$ of $\operatorname{vec}(SYR_{W}^{n\times n})$.

4) Using the software MATLAB, we obtain the unique solution $\hat{X}$ of Problem II.
4. Conclusion
In this paper, we first derive the least squares symmetrizable solutions of the matrix equations (AX = B, XC = D) using matrix row stacking and the theory of topological isomorphism, i.e. Theorem 1. Then we give the unique optimal approximation solution, i.e. Theorem 2. Based on Theorems 1 and 2, we design an algorithm to find the optimal approximation solution. Compared with [1-10], this paper makes two main contributions. First, we apply the theory of topological isomorphism to obtain the least squares symmetrizable solutions of the matrix equations (AX = B, XC = D), and thereby provide a method for solving matrix equations whose constraint matrices have no explicit structure. Second, we present a stable calculation procedure for obtaining an orthonormal basis of $\operatorname{vec}(SYR_{W}^{n\times n})$, which solves the key computational problem of the algorithm.
5. Acknowledgements
The authors are very grateful to the referee for the valuable comments and helpful suggestions.
This research was supported by the National Natural Science Foundation of China (31170532).