1. Introduction
This paper continues our first two published papers [1] [2]; it focuses on projections in polynomial spaces and constructs an operator, expressed in terms of the Kronecker Product, that allows a projection from the subspace
onto the subspace
where
. This work is also motivated by the calculations performed in [3]. Below, we first present a motivating example from that book and then go on to develop a more general theory.
2. Projections in a Polynomial Space: A Motivating Example [3]
Let
be the vector space of polynomials of degree at most n over some arbitrary closed interval
. We will choose
and define
with its standard ordered basis
, that is
Traditionally, we can define the projection of a function in the following way.
Let
and
be the projection of
onto
.
Then we can define the function
as follows
(2.1)
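Under the usual L² inner product ⟨f, g⟩ = ∫_a^b f(x)g(x) dx, the single-direction version of this projection is proj_g f = (⟨f, g⟩/⟨g, g⟩) g. A minimal sketch in exact rational arithmetic, representing a polynomial as its coefficient list (p[i] is the coefficient of x^i) and assuming the illustrative interval [0, 1]:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (p[i] = coeff of x**i)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    return r

def inner(p, q, a=0, b=1):
    """L2 inner product <p, q> = integral_a^b p(x) q(x) dx, computed exactly."""
    prod = poly_mul(p, q)
    return sum(Fraction(c) * (Fraction(b)**(k + 1) - Fraction(a)**(k + 1)) / (k + 1)
               for k, c in enumerate(prod))

def project(f, g, a=0, b=1):
    """Projection of f along g: (<f, g> / <g, g>) * g."""
    c = inner(f, g, a, b) / inner(g, g, a, b)
    return [c * Fraction(k) for k in g]

# Project f(x) = x along g(x) = 1 over [0, 1]: the mean value 1/2.
print(project([0, 1], [1]))  # [Fraction(1, 2)]
```

Exact integration is possible here because products of polynomials integrate in closed form, so no numerical quadrature is needed.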
Let us consider an example.
Example 2.1 (Motivating Example [3] ). Let
such that
and
, we calculate the function
(2.2)
Suppose
then we have
(2.3)
We know that
, clearly
.
However we note that the basis
is not orthonormal; hence, we now apply the Gram-Schmidt procedure.
Let
,
,
.
(2.4)
(2.5)
therefore, we need to calculate the latter
(2.6)
(2.7)
(2.8)
Therefore, we have
(2.9)
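The Gram-Schmidt computation above can be reproduced mechanically. A sketch assuming the standard basis {1, x, x²} and, since the interval chosen in the text is not recoverable here, the illustrative interval [0, 1], on which the procedure yields the shifted Legendre family up to scaling:

```python
from fractions import Fraction

def inner(p, q, a=0, b=1):
    # exact L2 inner product of coefficient-list polynomials over [a, b]
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            prod[i + j] += Fraction(ci) * Fraction(cj)
    return sum(c * (Fraction(b)**(k + 1) - Fraction(a)**(k + 1)) / (k + 1)
               for k, c in enumerate(prod))

def sub_scaled(p, q, s):
    # p - s*q, padding coefficient lists to equal length
    n = max(len(p), len(q))
    p = list(p) + [Fraction(0)] * (n - len(p))
    q = list(q) + [Fraction(0)] * (n - len(q))
    return [Fraction(pi) - s * Fraction(qi) for pi, qi in zip(p, q)]

def gram_schmidt(basis):
    # orthogonalise (without normalising), as in the text
    ortho = []
    for v in basis:
        w = [Fraction(c) for c in v]
        for e in ortho:
            w = sub_scaled(w, e, inner(w, e) / inner(e, e))
        ortho.append(w)
    return ortho

e0, e1, e2 = gram_schmidt([[1], [0, 1], [0, 0, 1]])
print(e1)  # x - 1/2
print(e2)  # x**2 - x + 1/6
```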
It is clear that the ordered basis
defined as
is an orthogonal basis of
. This means that
can be expressed as a linear combination using the orthogonal basis
.
Let
. We wish to project
in the direction of
.
1) We project
(2.10)
(2.11)
2) We project
(2.12)
(2.13)
3) We now project
(2.14)
(2.15)
(2.16)
Hence, it should be true that
Hence, the coefficient vector is
.
This concludes our motivating example. We now want to find an operator which achieves the same result.
We can now consider a different way of getting to the result using an optimization technique in the following way.
We know that projecting the function
along
must be in
. Hence, we are looking for an optimized solution (in the least square sense) of the form
.
To derive the constant
we can use variations of the ideal function by a parameter
as follows
where
is the optimum choice for
in the least square sense.
Let
be the least-squares (LS) error integral
This integral represents the squared error. We want to find the value
which minimizes
with respect to
. That is, we want to calculate
and set it equal to 0 to derive the optimal coefficients.
(2.17)
(2.18)
(2.19)
(2.20)
(2.21)
(2.22)
Setting and solving
We get
We can now algebraically solve for
We therefore conclude that the projection of
along
is given by
which is clearly in the span of
.
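The variational argument above expands the squared-error integral as E(c) = ⟨f, f⟩ − 2c⟨f, g⟩ + c²⟨g, g⟩, so setting dE/dc = 0 gives the optimum c* = ⟨f, g⟩/⟨g, g⟩. A sketch verifying this for the illustrative pair f = x², g = x on an assumed interval [0, 1] (these choices are mine, not the text's):

```python
from fractions import Fraction

def inner(p, q, a=0, b=1):
    # exact L2 inner product of coefficient-list polynomials over [a, b]
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            prod[i + j] += Fraction(ci) * Fraction(cj)
    return sum(c * (Fraction(b)**(k + 1) - Fraction(a)**(k + 1)) / (k + 1)
               for k, c in enumerate(prod))

def E(f, g, c):
    # squared-error integral for the ansatz c*g: <f - c*g, f - c*g>
    return inner(f, f) - 2 * c * inner(f, g) + c * c * inner(g, g)

f, g = [0, 0, 1], [0, 1]               # f = x**2, g = x
c_star = inner(f, g) / inner(g, g)     # stationary point of E
print(c_star)                          # 3/4 on [0, 1]

# E is strictly larger at any perturbed coefficient
d = Fraction(1, 10)
print(E(f, g, c_star) < E(f, g, c_star + d) and E(f, g, c_star) < E(f, g, c_star - d))
```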
Thinking of
as an operator, we postulate that
is idempotent. By this we mean that we project in the direction of some polynomial, and repeating the projection one more time leaves the result unchanged. That is
(2.23)
where
.
Applying this to our example, we should find that this operation is idempotent. We already know that
By setting this to 0 we find
.
All we need to do now is project this function again, hence we compute
. Hence, we get
(2.24)
(2.25)
(2.26)
(2.27)
(2.28)
(2.29)
(2.30)
Setting our result to 0, we get the following result
Hence, we conclude that
.
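The idempotence just concluded can also be checked mechanically: projecting the projection a second time returns the same polynomial, since proj_g(c·g) = c·g. A sketch with exact rational arithmetic, assuming the illustrative interval [0, 1] and the pair f = x², g = x (my own choices, not the text's example):

```python
from fractions import Fraction

def inner(p, q, a=0, b=1):
    # exact L2 inner product of coefficient-list polynomials over [a, b]
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            prod[i + j] += Fraction(ci) * Fraction(cj)
    return sum(c * (Fraction(b)**(k + 1) - Fraction(a)**(k + 1)) / (k + 1)
               for k, c in enumerate(prod))

def project(f, g, a=0, b=1):
    # projection of f along g: (<f, g> / <g, g>) * g
    c = inner(f, g, a, b) / inner(g, g, a, b)
    return [c * Fraction(k) for k in g]

f, g = [0, 0, 1], [0, 1]      # f = x**2, g = x
once = project(f, g)
twice = project(once, g)
print(once == twice)          # True: the operator is idempotent
```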
3. The General Theory
In this section, we develop a more general theory of projection operators over polynomial rings of arbitrary degree. The main idea is to investigate the properties of the operator defined as follows
(3.1)
where
is the best function which represents the projection of
onto
and
. We can, of course, see that
.
Example 3.1. Suppose
and
we wish to project
onto
i.e.
. Hence, we need to solve
We can proceed in the following way
(3.2)
Differentiating (3.1) we get
(3.3)
therefore,
(3.4)
Setting (3.4) to zero we get a square system of the form
(3.5)
(3.6)
Equations (3.5) and (3.6) can be written in matrix form as follows
(3.7)
The above system has a unique non-trivial solution since the matrix determinant is non-zero.
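The square system here is the Gram-matrix (normal-equations) system G c = b with G_ij = ⟨g_i, g_j⟩ and b_i = ⟨f, g_i⟩. Since the polynomials of Example 3.1 are not recoverable from this text, a sketch for an illustrative instance: projecting f = x² onto span{1, x} over an assumed interval [0, 1], solved exactly by Cramer's rule.

```python
from fractions import Fraction

def inner(p, q, a=0, b=1):
    # exact L2 inner product of coefficient-list polynomials over [a, b]
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            prod[i + j] += Fraction(ci) * Fraction(cj)
    return sum(c * (Fraction(b)**(k + 1) - Fraction(a)**(k + 1)) / (k + 1)
               for k, c in enumerate(prod))

g1, g2, f = [1], [0, 1], [0, 0, 1]       # basis 1, x; target f = x**2

# normal equations G c = b with G_ij = <g_i, g_j>, b_i = <f, g_i>
G = [[inner(g1, g1), inner(g1, g2)],
     [inner(g2, g1), inner(g2, g2)]]
b = [inner(f, g1), inner(f, g2)]

det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
c1 = (b[0] * G[1][1] - G[0][1] * b[1]) / det    # Cramer's rule
c2 = (G[0][0] * b[1] - b[0] * G[1][0]) / det

print(c1, c2)   # -1/6 1: the best linear fit to x**2 on [0, 1] is x - 1/6
```

The non-zero determinant is exactly the solvability condition stated in the text.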
Theorem 1. Let
be some projection from
and
,
This means that the operator is idempotent.
Proof. We first show that
(3.8)
(3.9)
(3.10)
(3.11)
(3.12)
Suppose that
then we get
(3.13)
(3.14)
(3.15)
Evaluating the limits, we get the following result
(3.16)
Setting Equation (3.16) = 0 we get
We now show that this reduces to a square system as follows
therefore,
Writing this in matrix form we have
This will always have a non-trivial solution provided that
. Hence, the optimum vector can be computed, and the projection polynomial can be written as
To show that the operator is idempotent, we assume that
is not optimal and there exists a polynomial
which represents a better projection. Hence, we apply the operator again noting that
.
(3.17)
(3.18)
(3.19)
(3.20)
(3.21)
(3.22)
(3.23)
(3.24)
Setting Equation (3.24) = 0 gives us
We see that this leads to the conclusion that
. Hence, our optimum vector is unique and the operator is idempotent.
Lemma 2. In the polynomial ring
, let
, the zero polynomial projected over
over some interval
is the optimum function
. This implies the vector
,
.
Proof.
where
over
. Therefore, we compute
(3.25)
(3.26)
(3.27)
(3.28)
(3.29)
(3.30)
(3.31)
Given that
implies that
.
Lemma 3. Let
be some polynomial of degree
then
Proof. We start with some polynomial
and some
then
over some interval
. We shall write
for short.
(3.32)
(3.33)
(3.34)
(3.35)
(3.36)
(3.37)
(3.38)
Setting the above equation to 0 as before we get
This leads to the same linear system except that
hence we get
. This implies that
, where
.
Theorem 4. For some fixed
where
. Let
be distinct polynomials in
such that
. Then we have
Proof.
(3.39)
(3.40)
(3.41)
(3.42)
(3.43)
Let
such that
. Hence, we have
(3.44)
(3.45)
(3.46)
(3.47)
We have that
and the optimum vector
where
.
By the above lemmas and theorems, we clearly have the following results
• Commutativity: the operation is clearly commutative
.
• Associativity: it should also be clear that the sum is associative, since the sum of functions in
is associative.
• Identity: as demonstrated before, we have shown that choosing
implies that
therefore we can conclude that
.
• Inverse: we have also shown that choosing
implies we get
therefore
.
Hence, we are now in a position to talk about a group structure for these projectors on polynomial rings.
Question 2: What about projections onto orthogonal subspaces?
To answer this question we will think of
as a vector space with standard basis as before taken to be
. We know that for any
then
since
and
then
. Next, we define the following map
Clearly,
is a bijection. Now given some element of
, we wish to construct its orthogonal subspace i.e. given some
, we construct the subspace
which we define as follows
Working with the integral, we get the following result
(3.48)
(3.49)
(3.50)
(3.51)
(3.52)
(3.53)
Hence, we seek to solve the equation
To make the notation a bit lighter, we set
, so we solve the more concise equation
Hence, we derive the required vector coefficients
given some j, as shown in Table 1. Therefore, given the basis
, we can assign to each element in
a set M such that M is defined as follows
The polynomials in
are all orthogonal to the basis elements
. Polynomial functions in
can be mapped via
, from the subspaces in the table above, into the coefficients of
.
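Although the explicit entries of Table 1 are not reproduced here, membership in these orthogonal subspaces is just the condition ⟨h, x^j⟩ = 0 for the relevant basis elements, which can be verified mechanically. A sketch assuming the interval [0, 1] and the sample polynomial h(x) = 6x² − 6x + 1 (both illustrative choices; this h is a scaled shifted Legendre polynomial, orthogonal to both 1 and x on [0, 1]):

```python
from fractions import Fraction

def inner(p, q, a=0, b=1):
    # exact L2 inner product of coefficient-list polynomials over [a, b]
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            prod[i + j] += Fraction(ci) * Fraction(cj)
    return sum(c * (Fraction(b)**(k + 1) - Fraction(a)**(k + 1)) / (k + 1)
               for k, c in enumerate(prod))

h = [1, -6, 6]                # h(x) = 6x**2 - 6x + 1
print(inner(h, [1]))          # 0: orthogonal to the basis element 1
print(inner(h, [0, 1]))       # 0: orthogonal to the basis element x as well
```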
Theorem 5. Each set
forms a
-free module [4] in
over
.
Proof. We know that each h in
for some
has the form
such that
. Let
then we have
This implies that
. It is easy to verify the other properties, hence we can conclude that
is abelian. Defining
such that
implies that
• Then it is true that
since
•
since
•
since
•
The generating set for
is the set
is linearly independent and therefore
is a free module whose base ring is a field; hence
is a vector space.
Table 1. j degree equations and their solution subspaces in
.
We let
be the set
Let
, we wish to project f into each
such that we minimize the error squared for each k. We will then take the infimum of these errors. Hence, we can define a vector rejection
where
Suppose we have some f such that
then we seek
(3.54)
(3.55)
(3.56)
(3.57)
Setting
implies that we have
(3.58)
(3.59)
(3.60)
(3.61)
(3.62)
(3.63)
(3.64)
(3.65)
We now apply the j-dimensional gradient operator
. Calculating
leads to a
homogeneous system of the form
It is clear that putting together
and the linear equation
leads to a
linear system. Given the linear system generated by
, it is feasible to reduce this to a square system by combining any one pair of the
equations in
, thereby reducing the overall system to a square system of
The system is a square system i.e. a
system, hence a solution exists provided the coefficient determinant is non-zero.
4. A Different Approach
In this section, we will show that performing projections over polynomial spaces
can also be done via an injective map in
. We can start by defining the following mapping
such that
Lemma 6. The mapping
is bijective.
Proof.
is bijective since
Also, given any vector
in
such that
. Hence,
is bijective.
Theorem 7 (First Theorem).
Given
and
where
w.r.t. the basis
. The projection of
onto
relative to basis
is given by
where we define
where
.
Proof. Let
such that
and
. Therefore we have
First, let’s use the standard basis on
to be
. This implies that
1)
2)
3)
4)
5)
6)
7)
8)
9)
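The metric-tensor values enumerated above follow one closed form under the natural reading G_ij = ⟨x^i, x^j⟩ = ∫_a^b x^(i+j) dx for the standard basis; on the illustrative interval [0, 1] (an assumption here), G is the Hilbert matrix with entries 1/(i+j+1). A short generator that also exhibits the symmetry G_ij = G_ji used in the recurrence below:

```python
from fractions import Fraction

def metric(n, a=0, b=1):
    # G[i][j] = integral_a^b x**(i+j) dx for the standard basis {1, x, ..., x**n}
    return [[(Fraction(b)**(i + j + 1) - Fraction(a)**(i + j + 1)) / (i + j + 1)
             for j in range(n + 1)] for i in range(n + 1)]

G = metric(2)
print(G[1][2])   # 1/4: <x, x**2> on [0, 1]
# symmetry G_ij = G_ji, the property used in the recurrence
print(all(G[i][j] == G[j][i] for i in range(3) for j in range(3)))  # True
```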
We can define the following recurrence relation for the values of the metric tensor
where we have used the fact that
for
i.e. the symmetric property of the metric tensor. Hence, we can write
(4.1)
(4.2)
(4.3)
(4.4)
(4.5)
(4.6)
Now, we can compute the integral version of this projection as follows
(4.7)
(4.8)
(4.9)
(4.10)
Using the bijective mapping
we get the desired result by comparison.
Theorem 8 (Second Theorem).
The first theorem can be written with the Kronecker Product as follows
where
is the Kronecker Product,
due to symmetry.
where
.
Proof. Let
such that
and
. Therefore we have
Then we have
We know that G is an
matrix given
Therefore we have
(4.11)
(4.12)
(4.13)
therefore, we have
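The exact statement of the theorem is not fully recoverable from this text, but the role of the Kronecker Product can be illustrated under one natural reading: with coefficient vectors f, g and metric tensor G_ij = ∫ x^(i+j) dx (interval [0, 1] assumed), the bilinear forms g^T G f and g^T G g can each be written as a Kronecker product dotted with vec(G), so the projection coefficient needs only vector operations. A sketch (the symbols f, g, G here are illustrative):

```python
from fractions import Fraction

def kron(u, v):
    # Kronecker product of two coefficient vectors, row-major flattening
    return [Fraction(ui) * Fraction(vj) for ui in u for vj in v]

def metric(n, a=0, b=1):
    # metric tensor G[i][j] = integral_a^b x**(i+j) dx on the standard basis
    return [[(Fraction(b)**(i + j + 1) - Fraction(a)**(i + j + 1)) / (i + j + 1)
             for j in range(n + 1)] for i in range(n + 1)]

f, g = [0, 0, 1], [0, 1, 0]              # f = x**2, g = x, both in P_2
G = metric(2)
vecG = [G[i][j] for i in range(3) for j in range(3)]   # row-major vec(G)

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
num = dot(kron(g, f), vecG)              # g^T G f = (g (x) f) . vec(G)
den = dot(kron(g, g), vecG)              # g^T G g = (g (x) g) . vec(G)
c = num / den
print(c)   # 3/4: agrees with <x**2, x> / <x, x> computed by integration
```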
Lemma 9. The matrix
in Theorem 8 is normalised
Proof.
Suppose that we have two polynomials
of order
such that
for some
. Then, we propose that
and further such that projectors can be constructed by normalising the coefficient vector
, which I will denote by
, with
with
being the hypersphere in
.
where
are the squares of the normalised coefficient vector.
Proof. Let
such that
.
and
. Using the mapping
. We find that
therefore
. We can calculate its Kronecker product.
Therefore, we have
Performing the calculation we get
(4.14)
(4.15)
(4.16)
This implies that
.
It is clear that the matrix
is normalised. Hence, we may conclude that
is also normalised. It is also clear that if
are such that
,
gives
which implies that
are anti-podal points on the hypersphere in
. We, therefore, can see that
Theorem 10. Given two polynomials
such that
then, given some
, the projection of f onto
is given by the following expression
Proof. Let
with
such that
and
. We know that
. We also define
. We, therefore, have
Therefore, we have
(4.17)
(4.18)
I will denote this vector
, therefore we have
.
The vector
can be written as
For clearer notation, we write
and
. Therefore, by the distributive law of the Kronecker Product, we get the following matrix.
which in matrix form gives us
(4.19)
(4.20)
Clearly,
is of order
,
is of order
, this implies that
is of order
and
is of order
.
Normalising
gives and multiplying by the metric tensor, we get
It can be verified that
This is equal to
5. The Group
To formulate the group structure, we focus our attention on the subspaces of
where we can define the subspaces as follows
It is clear that
. Then, we also know that
Then projectors on
can be constructed as
which represents the set of all projectors in the subspace
.
Theorem 11. The set
is a group under the mapping
such that
where the set
is defined as
.
Proof. We write
and
.
Given some
, it is clear
. We also know that
and
, hence projecting in the direction of
is the projection of
. From the above section we have
(5.1)
(5.2)
In the same way that
is commutative in
, we can see that
hence it is also commutative. In a similar way, we can argue associativity. It is also clear that for
the zero polynomial is in each
. The zero polynomial will give the identity element since
. The inverse element will simply be
. Hence, we see that
.
We conclude that the
is a group.
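The exact group operation of Theorem 11 is not fully recoverable from this text; under the natural reading in which each projector in the subspace is identified with its direction polynomial and two projectors combine by adding their directions, the axioms verified above can be checked mechanically (all names below are my own):

```python
from fractions import Fraction

def add(p, q):
    # polynomial addition on coefficient lists, padding to equal length
    n = max(len(p), len(q))
    p = list(p) + [0] * (n - len(p))
    q = list(q) + [0] * (n - len(q))
    return [Fraction(a) + Fraction(b) for a, b in zip(p, q)]

def neg(p):
    # direction of the inverse projector P_{-g}
    return [-Fraction(a) for a in p]

zero = [0, 0, 0]                 # direction of the identity projector P_0
g, h = [0, 2, 1], [0, -1, 3]     # directions of two projectors (illustrative)

print(add(g, h) == add(h, g))                     # True: commutative
print(add(g, zero) == [Fraction(c) for c in g])   # True: identity element
print(add(g, neg(g)) == [Fraction(0)] * 3)        # True: inverse element
```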
Given that for each
we can say that
and
. This tells us that
whenever
. Then, from group theory, we know that the union of two subgroups is a group if one is a subset of the other. Hence, we have the following result
is also a group. Indeed, we have
6. Conclusion
In conclusion, given the results above, we find that projections in polynomial spaces are, with the right construct, very similar to traditional projections in Euclidean spaces. This operation can be achieved via an integral operator or a Kronecker Product. We have also noticed that, very similarly, we use hyper-spheres in
to construct such operators. A paper previously published in ALAMAT discusses the differential geometry aspect of projections and the manifold structure; such a link can also be established for polynomial spaces.
Acknowledgements
Dedicated to both my grandmother and grandfather who left us too early. I know Papi would be happy to see this paper. You are very much missed.
I also would like to dedicate this paper to my girlfriend, Miss Yang Xiaoying, and would like to thank her for the love, support and happiness she brings to my life.
Notation
The notation system is as follows:
1)
: The space of polynomials of degree at most n over the real numbers.
2)
: The standard basis in
.
3)
: arbitrary elements of
.
4)
: Operator on
on interval
with parameter
.
5)
: Degree of the polynomial
in
.
6)
: Mappings between
and
.
7)
: Metric Tensor on
.
8)
: arbitrary elements of
.
9)
: The Kronecker Product of the vector
.
10)
: The subspace of polynomials of degree k.
11)
: The set of projectors onto
.