Onto Orthogonal Projections in the Space of Polynomials $P_n[x]$

Abstract

In this article, I consider projection groups on function spaces, more specifically the space of polynomials $P_n[x]$. I show that a construct very similar to the usual projection operators allows us to project onto subspaces of $P_n[x]$, where the function $h \in P_n[x]$ represents the closest function to $f \in P_n[x]$ in the least-squares sense. I also demonstrate that we can generalise projections by constructing operators in $\mathbb{R}^{n+1}$ using the metric tensor on $P_n[x]$. This allows one to project a polynomial function onto another by mapping it to its coefficient vector in $\mathbb{R}^{n+1}$. This can also be achieved with the Kronecker product, as detailed in this paper.


1. Introduction

This paper is a continuation of the first two papers [1] [2], which focus on projections in polynomial spaces; it constructs an operator, expressed in terms of the Kronecker product, that allows for a projection from the subspace $\mathbb{P}_k[x]$ onto the subspace $\mathbb{P}_j[x]$ where $j \leq k$. It is also motivated by the calculations performed in [3]. Below, we first start with a motivating example from that book [3] and then go on to develop a more general theory.

2. Projections in a Polynomial Space $P_n[x]$: A Motivating Example [3]

Let $P_n[x]$ be the vector space of polynomials of degree at most $n$ over some arbitrary closed interval $[a,b]$. We will choose $K = \mathbb{R}$ and equip $P_n[x]$ with its standard ordered basis $B(x)$, that is

$$B(x) := \{1, x, x^2, x^3, \ldots, x^n\}$$

Traditionally, we can define the projection of a function in the following way.

Let $f(x), g(x) \in P_n[x]$ and let $h(x)$ be the projection of $f(x)$ onto $g(x)$.

Then we can define the function $h(x)$ as follows:

$$h(x) := \frac{\int_a^b f(x)\,g(x)\,dx}{\int_a^b [g(x)]^2\,dx}\; g(x) \tag{2.1}$$

Let us consider an example.

Example 2.1 (Motivating Example [3]). Let $f(x), g(x) \in P_2[x]$ with $f(x) = x^2$ and $g(x) = x$. We calculate the function $h(x)$:

$$h(x) = \frac{\int_a^b x^3\,dx}{\int_a^b x^2\,dx}\; x = \frac{\tfrac{1}{4}[x^4]_a^b}{\tfrac{1}{3}[x^3]_a^b}\; x \tag{2.2}$$

Suppose $[a,b] = [0,1]$; then we have

$$\frac{\tfrac{1}{4}[x^4]_0^1}{\tfrac{1}{3}[x^3]_0^1}\; x = \frac{3}{4}x = h(x) \tag{2.3}$$

We know that $B(x) = \{1, x, x^2\}$, and clearly $h(x) \in \mathrm{Span}\{x\}$.

However, we note that the basis $B(x)$ is not orthonormal, hence we now use the Gram-Schmidt procedure.

Let $u_1(x) = 1$, $u_2(x) = x - \mathrm{proj}_1(x)$, $u_3(x) = x^2 - \mathrm{proj}_1(x^2) - \mathrm{proj}_{u_2}(x^2)$.

$$u_2(x) = x - \frac{\int_0^1 x\,dx}{\int_0^1 dx} = x - \frac{1}{2} \tag{2.4}$$

$$u_3 = x^2 - \frac{1}{3} - \mathrm{proj}_{u_2}(x^2) \tag{2.5}$$

therefore, we need to calculate the latter

$$\mathrm{proj}_{u_2}(x^2) = \frac{\int_0^1 x^2\left(x - \tfrac{1}{2}\right)dx}{\int_0^1 \left(x - \tfrac{1}{2}\right)^2 dx}\left(x - \frac{1}{2}\right) \tag{2.6}$$

$$= \frac{\tfrac{1}{12}}{\tfrac{1}{12}}\left(x - \frac{1}{2}\right) \tag{2.7}$$

$$= x - \frac{1}{2} \tag{2.8}$$

Therefore, we have

$$u_3(x) = x^2 - \frac{1}{3} - \left(x - \frac{1}{2}\right) = x^2 - \frac{1}{3} - x + \frac{1}{2} = x^2 - x + \frac{1}{6} \tag{2.9}$$

It follows that the ordered basis $B^*$ defined as

$$B^* = \left\{1,\; x - \frac{1}{2},\; x^2 - x + \frac{1}{6}\right\}$$

is an orthogonal basis of $P_2[x]$. This means that $f(x)$ can be expressed as a linear combination using the orthogonal basis $B^*$.

Let $e_1(x) = 1$, $e_2(x) = x - \frac{1}{2}$, $e_3(x) = x^2 - x + \frac{1}{6}$. We wish to project $f(x)$ in the direction of $e_i$, $i = 1, 2, 3$.

1) We project $\mathrm{proj}_{(1)}(x^2)$:

$$\mathrm{proj}_{(1)}(x^2) = \frac{\int_0^1 x^2\,dx}{\int_0^1 dx}\cdot 1 \tag{2.10}$$

$$= \frac{1}{3} \tag{2.11}$$

2) We project $\mathrm{proj}_{e_2(x)}(x^2)$:

$$\mathrm{proj}_{e_2(x)}(x^2) = \frac{\int_0^1 x^2\left(x - \tfrac{1}{2}\right)dx}{\int_0^1\left(x - \tfrac{1}{2}\right)^2 dx}\left(x - \frac{1}{2}\right) \tag{2.12}$$

$$= x - \frac{1}{2} \tag{2.13}$$

3) We now project $\mathrm{proj}_{e_3(x)}(x^2)$:

$$\mathrm{proj}_{e_3(x)}(x^2) = \frac{\int_0^1 x^2\left(x^2 - x + \tfrac{1}{6}\right)dx}{\int_0^1\left(x^2 - x + \tfrac{1}{6}\right)^2 dx}\left(x^2 - x + \frac{1}{6}\right) \tag{2.14}$$

$$= \frac{\tfrac{1}{180}}{\tfrac{1}{180}}\left(x^2 - x + \frac{1}{6}\right) \tag{2.15}$$

$$= x^2 - x + \frac{1}{6} \tag{2.16}$$

Hence, it should be true that

$$x^2 = \frac{1}{3} + \left(x - \frac{1}{2}\right) + \left(x^2 - x + \frac{1}{6}\right)$$

Hence, the coefficient vector is $\left(\frac{1}{3}, 1, 1\right)$.
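As a quick numerical check of the computation above, the following Python sketch recomputes the expansion coefficients of $x^2$ in the orthogonal basis $B^*$ on $[0,1]$ and verifies the orthogonality of $B^*$ (the helper name ip is my own, not part of the paper).

```python
import numpy as np
from scipy.integrate import quad

# Inner product on P_n[x] over [0, 1]
def ip(p, q):
    return quad(lambda x: p(x) * q(x), 0.0, 1.0)[0]

# Orthogonal basis B* obtained from the Gram-Schmidt procedure above
e1 = lambda x: 1.0
e2 = lambda x: x - 0.5
e3 = lambda x: x**2 - x + 1.0 / 6.0

f = lambda x: x**2  # function to project

# Coefficient of f along each basis element: <f, e_i> / <e_i, e_i>
coeffs = [ip(f, e) / ip(e, e) for e in (e1, e2, e3)]
print(coeffs)  # approximately [0.3333, 1.0, 1.0], i.e. (1/3, 1, 1)

# Check orthogonality of B*
print(ip(e1, e2), ip(e1, e3), ip(e2, e3))  # all approximately 0
```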

This concludes our motivating example. We now want to find an operator which achieves the same result.

We can now arrive at the same result using an optimization technique, in the following way.

We know that the projection of the function $f(x) = x^2$ along $g(x) = x$ must lie in $\mathrm{Span}\{x\}$. Hence, we are looking for an optimized solution (in the least-squares sense) of the form $\tilde{y}(x) = \alpha x$.

To derive the constant $\alpha$ we can use variations of the ideal function by a parameter $\varepsilon$ as follows:

$$\tilde{y}(x) + \varepsilon\alpha x, \qquad \tilde{y}(x) = \alpha^* x$$

where $\alpha^*$ is the optimum choice for $\alpha$ in the least-squares sense.

Let $I(x,\varepsilon)$ be the least-squares error integral

$$I(x,\varepsilon) = \int_0^1\left(x^2 - (\tilde{y}(x) + \varepsilon\alpha x)\right)^2 dx = \int_0^1\left(x^2 - \alpha^* x - \varepsilon\alpha x\right)^2 dx$$

This integral represents the squared error. We want to find the value $\alpha^*$ which minimizes $I(x,\varepsilon)$ with respect to $\varepsilon$. That is, we want to calculate

$$\left.\frac{dI}{d\varepsilon}\right|_{\varepsilon=0} = \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\int_0^1\left(x^2 - \alpha^* x - \varepsilon\alpha x\right)^2 dx$$

and set it equal to 0 to derive the optimal coefficients.

$$\left.\frac{dI}{d\varepsilon}\right|_{\varepsilon=0} = \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\int_0^1\left(x^2 - \alpha^* x - \varepsilon\alpha x\right)^2 dx \tag{2.17}$$

$$= \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\int_0^1\left(x^2 - x(\alpha^* + \varepsilon\alpha)\right)^2 dx \tag{2.18}$$

$$= \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\int_0^1 x^4 - 2x^3(\alpha^* + \varepsilon\alpha) + x^2(\alpha^* + \varepsilon\alpha)^2\, dx \tag{2.19}$$

$$= \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left[\frac{x^5}{5} - \frac{x^4}{2}(\alpha^* + \varepsilon\alpha) + \frac{x^3}{3}(\alpha^* + \varepsilon\alpha)^2\right]_0^1 \tag{2.20}$$

$$= \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left[\frac{1}{5} - \frac{1}{2}(\alpha^* + \varepsilon\alpha) + \frac{1}{3}(\alpha^* + \varepsilon\alpha)^2\right] \tag{2.21}$$

$$= -\frac{\alpha}{2} + \frac{2\alpha^*\alpha}{3} \tag{2.22}$$

Setting and solving

$$\left.\frac{dI}{d\varepsilon}\right|_{\varepsilon=0} = 0$$

we get

$$-\frac{\alpha}{2} + \frac{2\alpha^*\alpha}{3} = 0 \;\Rightarrow\; \frac{2\alpha^*\alpha}{3} = \frac{\alpha}{2}$$

We can now algebraically solve for $\alpha^*$:

$$\alpha^* = \frac{3}{4}$$

We therefore conclude that the projection of $f(x)$ along $g(x)$ is given by

$$\tilde{y}(x) = \frac{3}{4}x$$

which is clearly in the span of g ( x ) .
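The variational calculation above can be reproduced symbolically; the sketch below (using sympy, with symbol names chosen purely for illustration) differentiates the least-squares error integral with respect to $\varepsilon$ at $\varepsilon = 0$ and solves for $\alpha^*$.

```python
import sympy as sp

x, eps, a_star, a = sp.symbols('x epsilon alpha_star alpha', real=True)

# Least-squares error integral I(x, eps) on [0, 1]
I = sp.integrate((x**2 - a_star*x - eps*a*x)**2, (x, 0, 1))

# dI/d(eps) evaluated at eps = 0
dI = sp.diff(I, eps).subs(eps, 0)
print(sp.simplify(dI))  # equals -alpha/2 + 2*alpha*alpha_star/3, cf. Eq. (2.22)

# Setting the derivative to zero and solving for alpha_star
print(sp.solve(sp.Eq(dI, 0), a_star))  # [3/4]
```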

Thinking of $I(x,\varepsilon)$ as an operator, we postulate that it is idempotent, i.e. $I^2 = I$. What do we mean by that? It means that we project in the direction of some polynomial, and repeating the projection one more time leaves the result invariant. That is,

$$\left.\frac{dI^2(x,\varepsilon)}{d\varepsilon}\right|_{\varepsilon=0} = \left.\frac{d}{d\varepsilon}\left(\left.\frac{dI(x,\varepsilon)}{d\varepsilon}\right|_{\varepsilon=0}\right)\right|_{\varepsilon=0} = \left.\frac{dI(x,\varepsilon)}{d\varepsilon}\right|_{\varepsilon=0} \tag{2.23}$$

where $I(x,\varepsilon) = \int_a^b\left(x^2 - \alpha^* x - \varepsilon\alpha x\right)^2 dx$.

Applying this to our example, we should find that this operation is idempotent. We already know that

$$\left.\frac{dI(x,\varepsilon)}{d\varepsilon}\right|_{\varepsilon=0} = -\frac{\alpha}{2} + \frac{2\alpha^*\alpha}{3}$$

By setting this to 0 we find $\tilde{y}(x) = \frac{3}{4}x \in \mathrm{Span}\{x\}$.

All we need to do now is project this function again, so we compute $\left.\frac{dI(\tilde{y}(x),\varepsilon)}{d\varepsilon}\right|_{\varepsilon=0}$. We get

$$\left.\frac{dI(\tilde{y}(x),\varepsilon)}{d\varepsilon}\right|_{\varepsilon=0} = \frac{d}{d\varepsilon}\int_0^1\left(\frac{3x}{4} - \alpha^* x - \varepsilon\alpha x\right)^2 dx \tag{2.24}$$

$$= \int_0^1\frac{d}{d\varepsilon}\left(\frac{3x}{4} - \alpha^* x - \varepsilon\alpha x\right)^2 dx \tag{2.25}$$

$$= \int_0^1 -2\left(\frac{3x}{4} - \alpha^* x - \varepsilon\alpha x\right)\alpha x\, dx \tag{2.26}$$

$$= -2\int_0^1\frac{3\alpha x^2}{4} - \alpha^*\alpha x^2\, dx \tag{2.27}$$

$$= -2\left[\left.\frac{3\alpha x^3}{12}\right|_0^1 - \left.\frac{\alpha^*\alpha x^3}{3}\right|_0^1\right] \tag{2.28}$$

$$= -2\left[\frac{3\alpha}{12} - \frac{\alpha^*\alpha}{3}\right] \tag{2.29}$$

$$= -\frac{\alpha}{2} + \frac{2\alpha^*\alpha}{3} \tag{2.30}$$

Setting our result to 0, we get

$$\left.\frac{dI(\tilde{y}(x),\varepsilon)}{d\varepsilon}\right|_{\varepsilon=0} = 0 \;\Rightarrow\; -\frac{\alpha}{2} + \frac{2\alpha^*\alpha}{3} = 0 \;\Rightarrow\; \alpha^* = \frac{3}{4}$$

Hence, we conclude that $\tilde{\tilde{y}} = \tilde{y} = \frac{3}{4}x \in \mathrm{Span}\{x\}$.

3. The General Theory

In this section, we try to develop a more general theory of projection operators over polynomial rings of arbitrary degree. The main idea is to investigate the properties of the operator defined as follows:

$$I(f(x)\,\mathrm{proj}\,g(x),\varepsilon) \equiv \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\int_a^b\left(f(x) - \tilde{y}(x) - \varepsilon g(x,\boldsymbol{\alpha})\right)^2 dx, \quad [a,b]\subset\mathbb{R} \tag{3.1}$$

where $\tilde{y}(x)$ is the best function representing the projection of $f(x)$ onto $g(x)$, and $\boldsymbol{\alpha}, \boldsymbol{\alpha}^* \in \mathbb{R}^{\deg(g(x))+1}$. We can, of course, see that $\tilde{y}(x) \in \mathrm{Span}\{g(x)\}$.

Example 3.1. Suppose $f(x) = ax^2 + bx + c \in \mathbb{P}_2[x]$ and $g(x) = mx + d \in \mathbb{P}_1[x]$; we wish to project $f(x)$ onto $g(x)$, i.e. $f(x)\,\mathrm{proj}\,g(x)$. Hence, we need to solve

$$\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\int_0^1\left(ax^2 + bx + c - (\alpha_1^* x + \alpha_0^* + \varepsilon(\alpha_1 x + \alpha_0))\right)^2 dx$$

We can proceed in the following way

$$\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\int_0^1\left(ax^2 + bx + c - (\alpha_1^* x + \alpha_0^* + \varepsilon(\alpha_1 x + \alpha_0))\right)^2 dx = \int_0^1\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(ax^2 + bx + c - (\alpha_1^* x + \alpha_0^* + \varepsilon(\alpha_1 x + \alpha_0))\right)^2 dx \tag{3.2}$$

Differentiating the integrand we get

$$\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(ax^2 + bx + c - (\alpha_1^* x + \alpha_0^* + \varepsilon(\alpha_1 x + \alpha_0))\right)^2 = -2\left(ax^2 + bx + c - \alpha_1^* x - \alpha_0^*\right)(\alpha_1 x + \alpha_0) \tag{3.3}$$

therefore,

$$-2\int_0^1\left(ax^2 + bx + c - \alpha_1^* x - \alpha_0^*\right)(\alpha_1 x + \alpha_0)\,dx = -2\left[\frac{a\alpha_1}{4} + \frac{b\alpha_1}{3} + \frac{c\alpha_1}{2} - \frac{\alpha_1^*\alpha_1}{3} - \frac{\alpha_1\alpha_0^*}{2} + \frac{b\alpha_0}{2} + \frac{a\alpha_0}{3} + c\alpha_0 - \frac{\alpha_1^*\alpha_0}{2} - \alpha_0^*\alpha_0\right] \tag{3.4}$$

Setting (3.4) to zero, and collecting the coefficients of $\alpha_1$ and $\alpha_0$ separately, we get a square system of the form

$$\frac{\alpha_1^*}{3} + \frac{\alpha_0^*}{2} = \frac{a}{4} + \frac{b}{3} + \frac{c}{2} \tag{3.5}$$

$$\frac{\alpha_1^*}{2} + \alpha_0^* = \frac{a}{3} + \frac{b}{2} + c \tag{3.6}$$

Equations (3.5) and (3.6) can be written in matrix form as follows:

$$\begin{bmatrix}\frac{1}{3} & \frac{1}{2}\\[2pt] \frac{1}{2} & 1\end{bmatrix}\begin{bmatrix}\alpha_1^*\\ \alpha_0^*\end{bmatrix} = \begin{bmatrix}\frac{a}{4} + \frac{b}{3} + \frac{c}{2}\\[2pt] \frac{a}{3} + \frac{b}{2} + c\end{bmatrix} \tag{3.7}$$

The above system has a unique non-trivial solution since the matrix determinant is non-zero.
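As a sanity check of Example 3.1, the following sketch (with illustrative values for $a$, $b$, $c$) solves the $2\times 2$ system (3.7) with numpy and compares the result with a discretized least-squares fit of $ax^2 + bx + c$ by a line on $[0,1]$, which approximates the continuous projection onto $\mathrm{Span}\{1, x\}$.

```python
import numpy as np

a, b, c = 2.0, -1.0, 0.5   # illustrative coefficients of f(x) = a x^2 + b x + c

# Square system (3.7)
A = np.array([[1/3, 1/2],
              [1/2, 1.0]])
rhs = np.array([a/4 + b/3 + c/2,
                a/3 + b/2 + c])
alpha1, alpha0 = np.linalg.solve(A, rhs)
print(alpha1, alpha0)

# Cross-check: dense discrete least-squares fit of f by alpha1*x + alpha0 on [0, 1]
xs = np.linspace(0, 1, 200001)
f = a*xs**2 + b*xs + c
design = np.vstack([xs, np.ones_like(xs)]).T
coeffs, *_ = np.linalg.lstsq(design, f, rcond=None)
print(coeffs)   # approximately [alpha1, alpha0]
```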

Theorem 1. Let $I(f(x)\,\mathrm{proj}\,g(x),\varepsilon)$ be some projection with $f(x) \in \mathbb{P}_n[x]$ and $g(x) \in \mathbb{P}_j[x]$, $0 \leq j < n$. Then

$$I^2(f(x)\,\mathrm{proj}\,g(x),\varepsilon) = I\left(I(f(x)\,\mathrm{proj}\,g(x),\varepsilon),\varepsilon\right) = I(f(x)\,\mathrm{proj}\,g(x))$$

This means that the operator is idempotent.

Proof. We first show that

$$\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(\int_a^b\left(f(x) - \tilde{y}(x) - \varepsilon g(x,\boldsymbol{\alpha})\right)^2\right)dx = \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(f(x) - \tilde{y}(x) - \varepsilon g(x,\boldsymbol{\alpha})\right)^2 dx \tag{3.8}$$

$$= -2\int_a^b\left(f(x) - \tilde{y}(x)\right)g(x,\boldsymbol{\alpha})\,dx \tag{3.9}$$

$$= -2\int_a^b f(x)g(x,\boldsymbol{\alpha}) - \tilde{y}(x)g(x,\boldsymbol{\alpha})\,dx \tag{3.10}$$

$$= -2\int_a^b f(x)g(x,\boldsymbol{\alpha})\,dx + 2\int_a^b\left(\sum_{i=0}^j\alpha_i^* x^i\right)g(x,\boldsymbol{\alpha})\,dx \tag{3.11}$$

$$= -2\int_a^b f(x)g(x,\boldsymbol{\alpha})\,dx + 2\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\int_a^b x^{i+k}\,dx\right) \tag{3.12}$$

Suppose that $f(x) = \sum_{i=0}^n\beta_i x^i$; then we get

$$-2\int_a^b f(x)g(x,\boldsymbol{\alpha})\,dx + 2\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\int_a^b x^{i+k}dx\right) = -2\int_a^b\left(\sum_{i=0}^n\beta_i x^i\right)g(x,\boldsymbol{\alpha})\,dx + 2\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\int_a^b x^{i+k}dx\right) \tag{3.13}$$

$$= -2\sum_{k=0}^j\sum_{i=0}^n\left(\beta_i\alpha_k\int_a^b x^{i+k}dx\right) + 2\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\int_a^b x^{i+k}dx\right) \tag{3.14}$$

$$= -2\sum_{k=0}^j\sum_{i=0}^n\left(\beta_i\alpha_k\left[\frac{x^{i+k+1}}{i+k+1}\right]_a^b\right) + 2\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\left[\frac{x^{i+k+1}}{i+k+1}\right]_a^b\right) \tag{3.15}$$

Evaluating the limits, we get the following result:

$$-2\sum_{k=0}^j\sum_{i=0}^n\left(\beta_i\alpha_k\left[\frac{b^{i+k+1}}{i+k+1} - \frac{a^{i+k+1}}{i+k+1}\right]\right) + 2\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\left[\frac{b^{i+k+1}}{i+k+1} - \frac{a^{i+k+1}}{i+k+1}\right]\right) \tag{3.16}$$

Setting Equation (3.16) equal to 0 we get

$$\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\left[\frac{b^{i+k+1}}{i+k+1} - \frac{a^{i+k+1}}{i+k+1}\right]\right) = \sum_{k=0}^j\sum_{i=0}^n\left(\beta_i\alpha_k\left[\frac{b^{i+k+1}}{i+k+1} - \frac{a^{i+k+1}}{i+k+1}\right]\right)$$

We now show that, since this must hold for every choice of the $\alpha_k$, it reduces to a square system with $j+1$ rows:

$$\sum_{i=0}^j\alpha_i^*\,\gamma(k,i) = \sum_{i=0}^n\beta_i\,\gamma(k,i), \qquad k = 0, \ldots, j$$

where $\gamma(k,i) := \dfrac{b^{i+k+1} - a^{i+k+1}}{i+k+1}$.

therefore,

$$\begin{bmatrix}\gamma(0,0) & \gamma(0,1) & \cdots & \gamma(0,j)\\ \gamma(1,0) & \gamma(1,1) & \cdots & \gamma(1,j)\\ \vdots & \vdots & \ddots & \vdots\\ \gamma(j,0) & \gamma(j,1) & \cdots & \gamma(j,j)\end{bmatrix}\begin{bmatrix}\alpha_0^*\\ \alpha_1^*\\ \vdots\\ \alpha_j^*\end{bmatrix} = \begin{bmatrix}\sum_i\beta_i\,\gamma(0,i)\\ \sum_i\beta_i\,\gamma(1,i)\\ \vdots\\ \sum_i\beta_i\,\gamma(j,i)\end{bmatrix}$$

Writing this in matrix form we have $\Gamma\boldsymbol{\alpha}^* = K$. This will always have a non-trivial solution provided that $\det(\Gamma) \neq 0$. Hence, the optimum vector can be computed, and the projection polynomial can be written as

$$\tilde{y}(x) = \sum_{i=0}^j\alpha_i^* x^i$$

To show that the operator is idempotent, we assume that $\tilde{y}$ is not optimal and that there exists a polynomial $\tilde{y}'(x) = \sum_{i=0}^j{\alpha'}_i^* x^i$ which represents a better projection. Hence, we apply the operator again, noting that $\deg(\tilde{y}(x)) = \deg(\tilde{y}'(x)) = \deg(g(x,\boldsymbol{\alpha}))$.

$$\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(\int_a^b\left(\tilde{y}(x) - \tilde{y}'(x) - \varepsilon g(x,\boldsymbol{\alpha})\right)^2\right)dx = \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(\tilde{y}(x) - \tilde{y}'(x) - \varepsilon g(x,\boldsymbol{\alpha})\right)^2 dx \tag{3.17}$$

$$= \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(\sum_{i=0}^j\alpha_i^* x^i - \sum_{i=0}^j{\alpha'}_i^* x^i - \varepsilon\sum_{i=0}^j\alpha_i x^i\right)^2 dx \tag{3.18}$$

$$= \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(\sum_{i=0}^j\left(\alpha_i^* - {\alpha'}_i^* - \varepsilon\alpha_i\right)x^i\right)^2 dx \tag{3.19}$$

$$= -2\int_a^b\left(\sum_{i=0}^j\left(\alpha_i^* - {\alpha'}_i^*\right)x^i\right)\left(\sum_{i=0}^j\alpha_i x^i\right)dx \tag{3.20}$$

$$= -2\int_a^b\sum_{i=1}^{(j+1)^2}\left(\alpha_i^* - {\alpha'}_i^*\right)\alpha_i x^{i-1}\,dx \tag{3.21}$$

$$= -2\sum_{i=1}^{(j+1)^2}\int_a^b\left(\alpha_i^* - {\alpha'}_i^*\right)\alpha_i x^{i-1}\,dx \tag{3.22}$$

$$= -2\sum_{i=1}^{(j+1)^2}\left(\alpha_i^* - {\alpha'}_i^*\right)\alpha_i\left.\frac{x^i}{i}\right|_a^b \tag{3.23}$$

$$= -2\sum_{i=1}^{(j+1)^2}\left(\alpha_i^* - {\alpha'}_i^*\right)\alpha_i\left[\frac{b^i}{i} - \frac{a^i}{i}\right] \tag{3.24}$$

Setting Equation (3.24) equal to 0 gives us

$$\sum_{i=1}^{(j+1)^2}\left(\alpha_i^* - {\alpha'}_i^*\right)\alpha_i\left[\frac{b^i}{i} - \frac{a^i}{i}\right] = 0$$

We see that this leads to the conclusion that $\alpha_i^* - {\alpha'}_i^* = 0 \Rightarrow \alpha_i^* = {\alpha'}_i^*$. Hence, our optimum vector is unique and the operator is idempotent.
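A minimal numerical sketch of the square system $\Gamma\boldsymbol{\alpha}^* = K$ constructed in the proof above (function and variable names are my own): given the coefficients $\beta$ of $f$, a target degree $j$ and an interval $[a,b]$, it assembles $\gamma(k,i) = \frac{b^{i+k+1} - a^{i+k+1}}{i+k+1}$ and solves for the optimum vector $\boldsymbol{\alpha}^*$.

```python
import numpy as np

def gamma(k, i, a, b):
    """gamma(k, i) = (b^(i+k+1) - a^(i+k+1)) / (i+k+1)."""
    return (b**(i + k + 1) - a**(i + k + 1)) / (i + k + 1)

def project_poly(beta, j, a=0.0, b=1.0):
    """Least-squares projection of f = sum_i beta[i] x^i onto P_j[x] over [a, b].

    Returns the optimum coefficient vector alpha* of the projection polynomial."""
    n = len(beta) - 1
    Gamma = np.array([[gamma(k, i, a, b) for i in range(j + 1)]
                      for k in range(j + 1)])
    K = np.array([sum(beta[i] * gamma(k, i, a, b) for i in range(n + 1))
                  for k in range(j + 1)])
    return np.linalg.solve(Gamma, K)

# Example: project f(x) = x^2 onto P_1[x] over [0, 1]; expect y~(x) = x - 1/6
print(project_poly([0.0, 0.0, 1.0], j=1))   # approximately [-1/6, 1]
```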

Lemma 2. In the polynomial ring $\mathbb{P}_n[x]$, let $f(x) = 0$; the zero polynomial projected over $g(x) \in \mathbb{P}_n[x]$ over some interval $[a,b]$ gives the optimum function $\tilde{y}(x) = 0$. This implies the vector $\boldsymbol{\alpha}^* = \mathbf{0} \in \mathbb{R}^j$, $0 \leq j \leq n$.

Proof. Consider $f(x)\,\mathrm{proj}\,g(x)$ with $f(x) = 0$ and $g(x) \in \mathbb{P}_n[x]$ over $[a,b]$. Therefore, we compute

$$I(0\,\mathrm{proj}\,g(x)) = \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(0 - (\tilde{y}(x) + \varepsilon g(x,\boldsymbol{\alpha}))\right)^2 dx \tag{3.25}$$

$$= \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(\tilde{y}(x) + \varepsilon g(x,\boldsymbol{\alpha})\right)^2 dx \tag{3.26}$$

$$= \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(\sum_{i=0}^j\alpha_i^* x^i + \varepsilon\sum_{i=0}^j\alpha_i x^i\right)^2 dx \tag{3.27}$$

$$= 2\int_a^b\left(\sum_{i=0}^j\alpha_i^* x^i\right)\left(\sum_{i=0}^j\alpha_i x^i\right)dx \tag{3.28}$$

$$= 2\int_a^b\sum_{i=1}^{(j+1)^2}\alpha_i^*\alpha_i x^{i-1}\,dx \tag{3.29}$$

$$= 2\sum_{i=1}^{(j+1)^2}\int_a^b\alpha_i^*\alpha_i x^{i-1}\,dx \tag{3.30}$$

$$= 2\sum_{i=1}^{(j+1)^2}\alpha_i^*\alpha_i\left[\frac{b^i - a^i}{i}\right] \tag{3.31}$$

Setting this to zero for arbitrary $\alpha_i$, and given that $b \neq a$, implies that $\alpha_i^* = 0$ for $i = 1, \ldots, j$.

Lemma 3. Let $f(x) \in \mathbb{P}_n[x]$ be some polynomial of degree $\deg(f(x)) \leq n$; then

$$I(-f(x)\,\mathrm{proj}\,g(x)) = -I(f(x)\,\mathrm{proj}\,g(x)), \quad \forall g(x) \in \mathbb{P}_n[x],\; \deg(g(x)) \leq \deg(f(x)) \leq n$$

Proof. We start with some polynomial $f(x) \in \mathbb{P}_n[x]$ and some $g(x)$, and consider $I(-f(x)\,\mathrm{proj}\,g(x))$ over some interval $[a,b]$. We shall write $I(-f,g)$ for short.

$$I(-f,g) = \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(-f(x) - (\tilde{y}(x) + \varepsilon g(x,\boldsymbol{\alpha}))\right)^2 dx \tag{3.32}$$

$$= \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(-\sum_{i=0}^n\beta_i x^i - \left(\sum_{i=0}^j\alpha_i^* x^i + \varepsilon\sum_{i=0}^j\alpha_i x^i\right)\right)^2 dx \tag{3.33}$$

$$= -2\int_a^b\left(-\sum_{i=0}^n\beta_i x^i - \sum_{i=0}^j\alpha_i^* x^i\right)\left(\sum_{i=0}^j\alpha_i x^i\right)dx \tag{3.34}$$

$$= 2\int_a^b\left(\sum_{i=0}^n\beta_i x^i + \sum_{i=0}^j\alpha_i^* x^i\right)\left(\sum_{i=0}^j\alpha_i x^i\right)dx \tag{3.35}$$

$$= 2\sum_{k=0}^j\sum_{i=0}^n\left(\beta_i\alpha_k\int_a^b x^{i+k}dx\right) + 2\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\int_a^b x^{i+k}dx\right) \tag{3.36}$$

$$= 2\sum_{k=0}^j\sum_{i=0}^n\left(\beta_i\alpha_k\left[\frac{x^{i+k+1}}{i+k+1}\right]_a^b\right) + 2\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\left[\frac{x^{i+k+1}}{i+k+1}\right]_a^b\right) \tag{3.37}$$

$$= 2\sum_{k=0}^j\sum_{i=0}^n\left(\beta_i\alpha_k\left[\frac{b^{i+k+1}}{i+k+1} - \frac{a^{i+k+1}}{i+k+1}\right]\right) + 2\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\left[\frac{b^{i+k+1}}{i+k+1} - \frac{a^{i+k+1}}{i+k+1}\right]\right) \tag{3.38}$$

Setting the above equation to 0 as before, we get

$$\sum_{k,i=0}^j\left(\alpha_i^*\alpha_k\,\gamma(k,i)\right) = -\sum_{k=0}^j\sum_{i=0}^n\left(\beta_i\alpha_k\,\gamma(k,i)\right)$$

This leads to the same linear system as before, except that $\boldsymbol{\alpha}'^* = -\boldsymbol{\alpha}^*$, hence we get $-\tilde{y}$. This implies that $I(-f,g) = -I(f,g)$, where $\boldsymbol{\alpha}^*, \boldsymbol{\alpha}'^* \in \mathbb{R}^j$.

Theorem 4. For some fixed $g \in \mathbb{P}_n[x]$ where $\deg(g) \leq n$, let $f, f'$ be distinct polynomials in $\mathbb{P}_n[x]$ such that $\deg(f), \deg(f') \leq n$. Then we have

$$I(f,g) + I(f',g) = I(f + f', g)$$

Proof.

$$I(f,g) + I(f',g) = \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(f(x) - (\tilde{y}(x) + \varepsilon g(x,\boldsymbol{\alpha}))\right)^2 dx + \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(f'(x) - (\tilde{y}'(x) + \varepsilon g(x,\boldsymbol{\alpha}))\right)^2 dx \tag{3.39}$$

$$= \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left[\left(f(x) - (\tilde{y}(x) + \varepsilon g(x,\boldsymbol{\alpha}))\right)^2 + \left(f'(x) - (\tilde{y}'(x) + \varepsilon g(x,\boldsymbol{\alpha}))\right)^2\right]dx \tag{3.40}$$

$$= -2\int_a^b\left(f(x) - \tilde{y}(x)\right)g + \left(f'(x) - \tilde{y}'(x)\right)g\,dx \tag{3.41}$$

$$= -2\int_a^b\left(f(x) - \tilde{y}(x) + f'(x) - \tilde{y}'(x)\right)g\,dx \tag{3.42}$$

$$= -2\int_a^b\left[\left(f(x) + f'(x)\right) - \left(\tilde{y}(x) + \tilde{y}'(x)\right)\right]g\,dx \tag{3.43}$$

Let $\tilde{\psi} = \tilde{y}(x) + \tilde{y}'(x)$, such that $\deg(\tilde{\psi}) = \deg(\tilde{y}(x)) = \deg(\tilde{y}'(x))$. Hence, we have

$$-2\int_a^b\left[\left(f(x) + f'(x)\right) - \left(\tilde{y}(x) + \tilde{y}'(x)\right)\right]g\,dx = -2\int_a^b\left[\left(f(x) + f'(x)\right) - \tilde{\psi}\right]g\,dx \tag{3.44}$$

$$= \int_a^b\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\left(\left(f(x) + f'(x)\right) - (\tilde{\psi} + \varepsilon g(x,\boldsymbol{\alpha}))\right)^2 dx \tag{3.45}$$

$$= \left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\int_a^b\left(\left(f(x) + f'(x)\right) - (\tilde{\psi} + \varepsilon g(x,\boldsymbol{\alpha}))\right)^2 dx \tag{3.46}$$

$$= I(f + f', g) \tag{3.47}$$

We have that $\tilde{\psi} \in \mathbb{P}_n[x]$ with optimum vector $(\psi_1^*, \ldots, \psi_m^*) \in \mathbb{R}^m$, where $m = \deg(\tilde{\psi})$.

By the above lemmas and theorems, we clearly have the following results:

• Commutativity: clearly the operation is commutative, $I(f + f', g) = I(f' + f, g)$.

• Associativity: it should also be clear that the sum is associative, since the sum of functions in $\mathbb{P}_n[x]$ is associative.

• Identity: as demonstrated before, choosing $f' = 0$ implies that $\tilde{y}'(x) = 0$; therefore we can conclude that $I(f + f', g) = I(f + 0, g) = I(f, g)$.

• Inverse: we have also shown that choosing $f' = -f$ implies $I(f', g) = I(-f, g) = -I(f, g)$; therefore $I(f, g) + I(f', g) = I(f, g) - I(f, g) = I(f - f, g) = I(0, g)$.

Hence, we are now in a position to talk about a group structure for these projectors on polynomial rings.
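Theorem 4 and the bullet points above amount to saying that the projection is additive in $f$. A quick numerical sketch (reusing the project_poly helper from the previous listing; the example polynomials are arbitrary) checks that the projection of $f + f'$ equals the sum of the projections.

```python
import numpy as np

# f(x) = 1 + 2x - x^2 + 0.5x^3,  f'(x) = -3 + x + 4x^2
f1 = np.array([1.0, 2.0, -1.0, 0.5])
f2 = np.array([-3.0, 1.0, 4.0, 0.0])

p1 = project_poly(f1, j=1)           # projection of f onto P_1[x]
p2 = project_poly(f2, j=1)           # projection of f' onto P_1[x]
p_sum = project_poly(f1 + f2, j=1)   # projection of f + f'

print(np.allclose(p1 + p2, p_sum))   # True: I(f, g) + I(f', g) = I(f + f', g)
```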

Question 2: What about projections onto orthogonal subspaces?

To answer this question we will think of $\mathbb{P}_n[x]$ as a vector space with the standard basis, taken as before to be $B = \{1, x, \ldots, x^n\}$. We know that for any $f, f' \in \mathbb{P}_n[x]$ we have $f + f' \in \mathbb{P}_n[x]$, since $\deg(f + f') \leq \max\{\deg(f), \deg(f')\}$, and for any $c \in \mathbb{R}$ we have $cf \in \mathbb{P}_n[x]$. Next, we define the following map:

$$\varphi: \mathbb{P}_n[x] \to \mathbb{R}^{n+1};\qquad \varphi(f) = \varphi\left(\sum_{k=0}^n\alpha_k x^k\right) = (\alpha_0, \ldots, \alpha_n) \in \mathbb{R}^{n+1}$$

Clearly, $\varphi$ is a bijection. Now, given some element of $B$, we wish to construct its orthogonal subspace; i.e. given some $x^k \in B$, $0 \leq k \leq n$, we construct the subspace $(x^k)^\perp$ which we define as follows:

$$(x^k)^\perp := \left\{g \in \mathbb{P}_n[x] : \int_a^b x^k g\,dx = 0\right\}$$

Working with the integral, we get the following result:

$$\int_a^b x^k g\,dx = \int_a^b x^k\left(\sum_{q=0}^j\beta_q x^q\right)dx, \quad 1 \leq j \leq n \tag{3.48}$$

$$= \int_a^b\left(\sum_{q=0}^j\beta_q x^{k+q}\right)dx \tag{3.49}$$

$$= \sum_{q=0}^j\beta_q\int_a^b x^{k+q}dx \tag{3.50}$$

$$= \sum_{q=0}^j\beta_q\left[\frac{x^{k+q+1}}{k+q+1}\right]_a^b \tag{3.51}$$

$$= \sum_{q=0}^j\beta_q\left[\frac{b^{k+q+1}}{k+q+1} - \frac{a^{k+q+1}}{k+q+1}\right] \tag{3.52}$$

$$= \sum_{q=0}^j\beta_q\left[\frac{b^{k+q+1} - a^{k+q+1}}{k+q+1}\right] \tag{3.53}$$

Hence, we seek to solve the equation

$$\sum_{q=0}^j\beta_q\left[\frac{b^{k+q+1} - a^{k+q+1}}{k+q+1}\right] = 0, \quad 1 \leq j \leq n$$

To make the notation a bit lighter, we set $\gamma(k,q) = \frac{b^{k+q+1} - a^{k+q+1}}{k+q+1}$, so we solve the more concise equation

$$\sum_{q=0}^j\beta_q\,\gamma(k,q) = 0, \quad 0 \leq j \leq n$$

Hence, we derive the required coefficient vectors $\boldsymbol{\beta} = (\beta_0, \ldots, \beta_j)$ for a given $j$, as shown in Table 1. Therefore, given the basis $B = \{1, x, x^2, \ldots, x^k, \ldots, x^n\}$, we can assign to each element in $B$ a set $M_k$ defined as follows:

$$M_k := \left\{h \in \mathbb{P}_n[x] : 0 \leq \deg(h) \leq n;\; 1 \leq j \leq n;\; \int_a^b x^k h\,dx = 0\right\}$$

The polynomials in $M_k$ are all orthogonal to the basis element $x^k \in B$. The coefficient subspaces in the table above map, via $\varphi^{-1}$, onto the polynomial functions $h \in M_k$.
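The coefficient vectors $\boldsymbol{\beta}$ satisfying $\sum_q\beta_q\,\gamma(k,q) = 0$, i.e. the polynomials orthogonal to $x^k$, can be computed as a null space; the sketch below (using scipy's null_space, with the interval and degrees chosen purely for illustration) does this on $[0,1]$.

```python
import numpy as np
from scipy.linalg import null_space

def gamma(k, q, a=0.0, b=1.0):
    """gamma(k, q) = (b^(k+q+1) - a^(k+q+1)) / (k+q+1)."""
    return (b**(k + q + 1) - a**(k + q + 1)) / (k + q + 1)

def orthogonal_coefficients(k, j, a=0.0, b=1.0):
    """Basis of coefficient vectors beta in R^(j+1) with
    integral_a^b x^k * (sum_q beta_q x^q) dx = 0."""
    row = np.array([[gamma(k, q, a, b) for q in range(j + 1)]])
    return null_space(row)   # columns span the solution subspace

# Polynomials of degree <= 2 orthogonal to x^1 on [0, 1]
basis = orthogonal_coefficients(k=1, j=2)
print(basis.shape)   # (3, 2): a 2-dimensional subspace of R^3

# Verify orthogonality for one basis vector
beta = basis[:, 0]
check = sum(beta[q] * gamma(1, q) for q in range(3))
print(abs(check) < 1e-12)   # True
```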

Theorem 5. Each set $M_k$ forms a free $\mathbb{R}$-module [4] in $\mathbb{P}_n[x]$.

Proof. We know that each $h$ in $M_k$, for some $0 \leq k \leq n$, has the form $h = \sum_{q=0}^j\beta_q x^q$ such that $\int_a^b x^k h\,dx = 0$. Let $h, h' \in M_k$; then we have

$$\int_a^b x^k(h + h')\,dx = \int_a^b x^k h + x^k h'\,dx = \int_a^b x^k h\,dx + \int_a^b x^k h'\,dx = 0$$

This implies that $h + h' \in M_k$ for all $h, h'$. It is easy to verify the other properties, hence we can conclude that $(M_k, +)$ is abelian. Defining $\mathbb{R}\times M_k \to M_k$ such that $(r, h) \mapsto rh$ implies that

$$\int_a^b r\,x^k h\,dx = r\int_a^b x^k h\,dx = 0 \quad \forall h \in M_k$$

• It is true that $r(h + h') \in M_k$ since

$$r\int_a^b x^k(h + h')\,dx = r\int_a^b x^k h\,dx + r\int_a^b x^k h'\,dx = 0$$

• $(r + s)h = rh + sh \in M_k$ since

$$(r + s)\int_a^b x^k h\,dx = r\int_a^b x^k h\,dx + s\int_a^b x^k h\,dx = 0$$

• $(rs)h = r(sh) \in M_k$ since $s\int_a^b x^k h\,dx = 0$

• $1\cdot h = h \in M_k$

The generating set for $M_k$ is linearly independent, and therefore $M_k$ is a free module; since its base ring $\mathbb{R}$ is a field, $M_k$ is in fact a vector space.

Table 1. Degree-$j$ equations and their solution subspaces in $\mathbb{R}^{n+1}$.

We let $M$ be the set

$$M := \bigcup_{k=0}^n M_k$$

Let $f \in P_n[x]$; we wish to project $f$ into each $M_k$, $k = 1, \ldots, n$, such that we minimize the squared error for each $k$. We will then take the infimum of these errors. Hence, we can define a vector rejection

$$\mathrm{Rej}(f) := \{\tilde{y} \in M : S_e(f, \tilde{y})\ \text{is minimum}\}$$

where

$$S_e(f, \tilde{y}) := \inf\left\{S_{e_k}(f, h) : S_{e_k}(f, h) = \langle f - h, f - h\rangle,\; h \in M_k,\; k = 1, \ldots, n\right\}$$

Suppose we have some $f$ such that

$$f = \sum_{p=0}^s\alpha_p x^p \in \mathbb{P}_n[x]$$

then we seek

$$\int_a^b\left(\sum_{p=0}^s\alpha_p x^p\right)_k\left(\sum_{q=0}^j\beta_q x^q\right)_k dx = 0 \quad \forall j, k = 0, \ldots, n \quad \text{s.t.}\; \int_a^b[f - h]^2 dx\ \text{is minimal for each } M_k$$

$$\int_a^b\left(\sum_{p=0}^s\alpha_p x^p\right)_k\left(\sum_{q=0}^j\beta_q x^q\right)_k dx = \int_a^b\left(\sum_{p=0}^s\sum_{q=0}^j\alpha_p\beta_q x^{p+q}\right)_k dx \tag{3.54}$$

$$= \sum_{p=0}^s\sum_{q=0}^j\alpha_p\beta_q\int_a^b x^{p+q}dx \tag{3.55}$$

$$= \sum_{p=0}^s\sum_{q=0}^j\alpha_p\beta_q\left.\frac{x^{p+q+1}}{p+q+1}\right|_a^b \tag{3.56}$$

$$= \sum_{p=0}^s\sum_{q=0}^j\alpha_p\beta_q\,\gamma(p,q) \tag{3.57}$$

Setting $\sum_{p=0}^s\sum_{q=0}^j\alpha_p\beta_q\,\gamma(p,q) = 0$ gives a linear equation, which we denote $\phi(\beta_0,\ldots,\beta_j) = 0$, in the unknowns $(\beta_0,\ldots,\beta_j) \in \mathbb{R}^{j+1}$, $0 \leq j \leq n$.

$$\int_a^b\left[\left(\sum_{p=0}^s\alpha_p x^p\right)_k - \left(\sum_{q=0}^j\beta_q x^q\right)_k\right]^2 dx = \int_a^b\left(\sum_{p=0}^s\alpha_p x^p\right)^2 dx - 2\int_a^b\left(\sum_{p=0}^s\alpha_p x^p\right)\left(\sum_{q=0}^j\beta_q x^q\right)dx \tag{3.58}$$

$$\quad + \int_a^b\left(\sum_{q=0}^j\beta_q x^q\right)^2 dx \tag{3.59}$$

$$= \int_a^b\sum_{p,d=0}^s\alpha_p\alpha_d x^{p+d}dx - 2\int_a^b\sum_{p=0}^s\sum_{q=0}^j\alpha_p\beta_q x^{p+q}dx \tag{3.60}$$

$$\quad + \int_a^b\sum_{m,q=0}^j\beta_m\beta_q x^{m+q}dx \tag{3.61}$$

$$= \sum_{p,d=0}^s\alpha_p\alpha_d\int_a^b x^{p+d}dx - 2\sum_{p=0}^s\sum_{q=0}^j\alpha_p\beta_q\int_a^b x^{p+q}dx \tag{3.62}$$

$$\quad + \sum_{m,q=0}^j\beta_m\beta_q\int_a^b x^{m+q}dx \tag{3.63}$$

$$= \sum_{p,d=0}^s\alpha_p\alpha_d\,\gamma(p,d) - 2\sum_{p=0}^s\sum_{q=0}^j\alpha_p\beta_q\,\gamma(p,q) + \sum_{m,q=0}^j\beta_m\beta_q\,\gamma(m,q) \tag{3.64}$$

$$= \psi(\beta_q, \beta_m) \tag{3.65}$$

Using the $(j+1)$-dimensional gradient operator $\nabla = \left(\partial_{\beta_0}, \partial_{\beta_1}, \ldots, \partial_{\beta_j}\right)^T$, setting $\nabla\psi(\beta_q,\beta_m) = 0$ leads to a homogeneous system of $j+1$ equations of the form

$$\nabla\psi(\beta_q,\beta_m) = \begin{bmatrix}\sum_{m=0}^j\beta_m\,\gamma(m,0) - \sum_{p=0}^s\alpha_p\,\gamma(p,0)\\ \sum_{m=0}^j\beta_m\,\gamma(m,1) - \sum_{p=0}^s\alpha_p\,\gamma(p,1)\\ \vdots\\ \sum_{m=0}^j\beta_m\,\gamma(m,j) - \sum_{p=0}^s\alpha_p\,\gamma(p,j)\end{bmatrix}_{[j+1,1]} = \begin{bmatrix}0\\ 0\\ \vdots\\ 0\end{bmatrix}$$

It is clear that putting together $\nabla\psi(\beta_q,\beta_m) = 0$ and the linear equation $\phi(\beta_0,\ldots,\beta_j) = 0$ leads to a $(j+2)\times(j+1)$ linear system. Given the linear system generated by $\nabla\psi(\beta_q,\beta_m)$, it is feasible to reduce this to a square system by combining any one pair of the $j+1$ equations in $\nabla\psi(\beta_q,\beta_m)$, thereby reducing the overall system to a square $(j+1)\times(j+1)$ system:

$$\begin{bmatrix}\sum_{p=0}^s\alpha_p\,\gamma(p,0) & \sum_{p=0}^s\alpha_p\,\gamma(p,1) & \cdots & \sum_{p=0}^s\alpha_p\,\gamma(p,j)\\ \gamma_{0,1} & \gamma_{1,1} & \cdots & \gamma_{j,1}\\ \gamma_{0,2} & \gamma_{1,2} & \cdots & \gamma_{j,2}\\ \vdots & \vdots & \ddots & \vdots\\ \gamma_{0,j} & \gamma_{1,j} & \cdots & \gamma_{j,j}\end{bmatrix}\begin{bmatrix}\beta_0\\ \beta_1\\ \vdots\\ \beta_j\end{bmatrix} = \begin{bmatrix}0\\ \sum_{p=0}^s\alpha_p\,\gamma(p,1)\\ \vdots\\ \sum_{p=0}^s\alpha_p\,\gamma(p,j)\end{bmatrix}$$

The system is a square $(j+1)\times(j+1)$ system; hence a solution exists provided the coefficient determinant is non-zero.

4. A Different Approach

In this section, we will show that performing projections over the polynomial space $\mathbb{P}_n[x]$ can also be done via an injective map into $\mathbb{R}^{n+1}$. We start by defining the following mapping

$$\xi: \mathbb{P}_n[\mathbb{R}] \to \mathbb{R}^{n+1}$$

such that

$$\xi(p_n(x)) = \xi\left(\sum_{i=0}^n a_i x^i\right) = (a_0, a_1, \ldots, a_n)^T \in \mathbb{R}^{n+1}$$

Lemma 6. The mapping $\xi$ is bijective.

Proof. $\xi$ is injective since

$$\xi\left(\sum_{i=0}^n a_i x^i\right) = \xi\left(\sum_{i=0}^n a_i' x^i\right) \Rightarrow (a_0, a_1, \ldots, a_n)^T = (a_0', a_1', \ldots, a_n')^T$$

Also, given any vector $(a_0, \ldots, a_n)$ in $\mathbb{R}^{n+1}$, there exists $p_n(x) \in \mathbb{P}_n[\mathbb{R}]$ such that $p_n(x) = \sum_{i=0}^n a_i x^i$. Hence, $\xi$ is bijective.

Theorem 7 (First Theorem). Given $[p_n(x)]_B$ and $[p_k(x)]_B$ where $k < n$, w.r.t. the basis $B$, the projection of $[p_n(x)]_B$ onto $[p_k(x)]_B$ relative to the basis $B$ is given by

$$\xi^{-1}\left(\frac{g_{ij}\,\xi^i(p_n(x))\,\xi^j(p_k(x))}{g_{ij}\,\xi^i(p_k(x))\,\xi^j(p_k(x))}\,\xi(p_k(x))\right)$$

where we define

$$g_{ij} \equiv \int_0^1 f_i(x)f_j(x)\,dx;\quad i,j = 0, \ldots, n$$

where $f_i(x), f_j(x) \in B = \{1, x, \ldots, x^n\}$.

Proof. Let $p_n(x), p_k(x) \in \mathbb{P}_n[\mathbb{R}]$ such that $p_n(x) = \sum_{r=0}^n\alpha_r x^r$ and $p_k(x) = \sum_{h=0}^k\beta_h x^h$. Therefore we have

$$\xi\left(\sum_{r=0}^n\alpha_r x^r\right) = (\alpha_0, \ldots, \alpha_n)^T \in \mathbb{R}^{n+1}$$

$$\xi\left(\sum_{h=0}^k\beta_h x^h\right) = (\beta_0, \ldots, \beta_k, \beta_{k+1} = 0, \ldots, \beta_n = 0)^T \in \mathbb{R}^{n+1}, \quad k < n$$

First, let us take the standard basis on $\mathbb{P}_n[\mathbb{R}]$ to be $B := \{1, x, \ldots, x^n\}$. This implies that

1) $g_{00} = \int_0^1 1\cdot 1\,dx = 1$
2) $g_{01} = \int_0^1 1\cdot x\,dx = \frac{1}{2} = g_{10}$
3) $g_{02} = \int_0^1 1\cdot x^2\,dx = \frac{1}{3} = g_{20}$
4) $g_{11} = \int_0^1 x\cdot x\,dx = \frac{1}{3}$
5) $g_{12} = \int_0^1 x\cdot x^2\,dx = \frac{1}{4} = g_{21}$
6) $g_{22} = \int_0^1 x^2\cdot x^2\,dx = \frac{1}{5}$

and so on.

We can therefore give a general formula for the entries of the metric tensor:

$$g_{ij} = \frac{1}{i+j+1}, \qquad i,j = 0, \ldots, n$$

where we have used the fact that $g_{ij} = g_{ji}$ for $i \neq j$, i.e. the symmetric property of the metric tensor. Hence, we can write

$$\frac{g_{ij}\,\xi^i(p_n(x))\,\xi^j(p_k(x))}{g_{ij}\,\xi^i(p_k(x))\,\xi^j(p_k(x))}\,\xi(p_k(x)) = \frac{g_{00}\xi_n^0\xi_k^0 + g_{01}\xi_n^0\xi_k^1 + \cdots + g_{21}\xi_n^2\xi_k^1 + \cdots + g_{nn}\xi_n^n\xi_k^n}{g_{00}\xi_k^0\xi_k^0 + g_{01}\xi_k^0\xi_k^1 + \cdots + g_{10}\xi_k^1\xi_k^0 + \cdots + g_{nn}\xi_k^n\xi_k^n}\,\xi(p_k(x)) \tag{4.1}$$

$$= \frac{g_{00}\alpha_0\beta_0 + g_{01}\alpha_0\beta_1 + \cdots + g_{10}\alpha_1\beta_0 + \cdots + g_{nn}\alpha_n\beta_n}{g_{00}\beta_0\beta_0 + g_{01}\beta_0\beta_1 + \cdots + g_{10}\beta_1\beta_0 + \cdots + g_{nn}\beta_n\beta_n}\,\xi(p_k(x)) \tag{4.2}$$

$$= \frac{g_{00}\alpha_0\beta_0 + g_{01}\alpha_0\beta_1 + \cdots + g_{10}\alpha_1\beta_0 + \cdots + g_{nk}\alpha_n\beta_k}{g_{00}\beta_0\beta_0 + g_{01}\beta_0\beta_1 + \cdots + g_{10}\beta_1\beta_0 + \cdots + g_{kk}\beta_k\beta_k}\,\xi(p_k(x)) \tag{4.3}$$

$$= \frac{\alpha_0\beta_0 + \tfrac{1}{2}\alpha_0\beta_1 + \cdots + \tfrac{1}{2}\alpha_1\beta_0 + \cdots + \tfrac{1}{n+k+1}\alpha_n\beta_k}{\beta_0\beta_0 + \tfrac{1}{2}\beta_0\beta_1 + \cdots + \tfrac{1}{2}\beta_1\beta_0 + \cdots + \tfrac{1}{2k+1}\beta_k\beta_k}\,\xi(p_k(x)) \tag{4.4}$$

$$= \frac{\alpha_0\beta_0 + \tfrac{1}{2}\alpha_0\beta_1 + \cdots + \tfrac{1}{2}\alpha_1\beta_0 + \cdots + \tfrac{1}{n+k+1}\alpha_n\beta_k}{\beta_0\beta_0 + \tfrac{1}{2}\beta_0\beta_1 + \cdots + \tfrac{1}{2}\beta_1\beta_0 + \cdots + \tfrac{1}{2k+1}\beta_k\beta_k}\begin{Bmatrix}\beta_0\\ \beta_1\\ \vdots\\ \beta_k\\ 0\\ \vdots\\ 0\end{Bmatrix} \tag{4.5}$$

$$= \begin{Bmatrix}\dfrac{\sum_{i=0}^n\sum_{j=0}^k\frac{\alpha_i\beta_j\beta_0}{i+j+1}}{\sum_{i=0}^k\sum_{j=0}^k\frac{\beta_i\beta_j}{i+j+1}}\\[2ex] \dfrac{\sum_{i=0}^n\sum_{j=0}^k\frac{\alpha_i\beta_j\beta_1}{i+j+1}}{\sum_{i=0}^k\sum_{j=0}^k\frac{\beta_i\beta_j}{i+j+1}}\\ \vdots\\ \dfrac{\sum_{i=0}^n\sum_{j=0}^k\frac{\alpha_i\beta_j\beta_k}{i+j+1}}{\sum_{i=0}^k\sum_{j=0}^k\frac{\beta_i\beta_j}{i+j+1}}\\ 0\\ \vdots\\ 0\end{Bmatrix} \tag{4.6}$$

Now, we can compute the integral version of this projection as follows:

$$\frac{\int_0^1\left(\sum_{r=0}^n\alpha_r x^r\right)\left(\sum_{h=0}^k\beta_h x^h\right)dx}{\int_0^1\left(\sum_{h=0}^k\beta_h x^h\right)^2 dx}\left(\sum_{h=0}^k\beta_h x^h\right)$$

$$\frac{\int_0^1\left(\sum_{r=0}^n\alpha_r x^r\right)\left(\sum_{h=0}^k\beta_h x^h\right)dx}{\int_0^1\left(\sum_{h=0}^k\beta_h x^h\right)^2 dx}\left(\sum_{h=0}^k\beta_h x^h\right) = \frac{\int_0^1\sum_{r=0}^n\sum_{h=0}^k\alpha_r\beta_h x^{r+h}dx}{\int_0^1\sum_{h,s=0}^k\beta_h\beta_s x^{h+s}dx}\left(\sum_{h=0}^k\beta_h x^h\right) \tag{4.7}$$

$$= \frac{\sum_{r=0}^n\sum_{h=0}^k\alpha_r\beta_h\int_0^1 x^{r+h}dx}{\sum_{h,s=0}^k\beta_h\beta_s\int_0^1 x^{h+s}dx}\left(\sum_{h=0}^k\beta_h x^h\right) \tag{4.8}$$

$$= \frac{\sum_{r=0}^n\sum_{h=0}^k\alpha_r\beta_h\frac{1}{r+h+1}}{\sum_{h,s=0}^k\beta_h\beta_s\frac{1}{h+s+1}}\left(\sum_{h=0}^k\beta_h x^h\right) \tag{4.9}$$

$$= \frac{\sum_{r=0}^n\sum_{h=0}^k\frac{\alpha_r\beta_h}{r+h+1}}{\sum_{h,s=0}^k\frac{\beta_h\beta_s}{h+s+1}}\left(\sum_{h=0}^k\beta_h x^h\right) \tag{4.10}$$

Using the bijective mapping $\xi: \mathbb{P}_n[\mathbb{R}] \to \mathbb{R}^{n+1}$, we get the desired result by comparison:

$$\xi\left(\frac{\sum_{r=0}^n\sum_{h=0}^k\frac{\alpha_r\beta_h}{r+h+1}}{\sum_{h,s=0}^k\frac{\beta_h\beta_s}{h+s+1}}\left(\sum_{h=0}^k\beta_h x^h\right)\right) = \left(\frac{\sum_{r=0}^n\sum_{h=0}^k\frac{\alpha_r\beta_h}{r+h+1}}{\sum_{h,s=0}^k\frac{\beta_h\beta_s}{h+s+1}}\right)\begin{bmatrix}\beta_0\\ \beta_1\\ \vdots\\ \beta_k\\ 0\\ \vdots\\ 0\end{bmatrix} = \begin{Bmatrix}\dfrac{\sum_{i=0}^n\sum_{j=0}^k\frac{\alpha_i\beta_j\beta_0}{i+j+1}}{\sum_{i=0}^k\sum_{j=0}^k\frac{\beta_i\beta_j}{i+j+1}}\\[2ex] \dfrac{\sum_{i=0}^n\sum_{j=0}^k\frac{\alpha_i\beta_j\beta_1}{i+j+1}}{\sum_{i=0}^k\sum_{j=0}^k\frac{\beta_i\beta_j}{i+j+1}}\\ \vdots\\ \dfrac{\sum_{i=0}^n\sum_{j=0}^k\frac{\alpha_i\beta_j\beta_k}{i+j+1}}{\sum_{i=0}^k\sum_{j=0}^k\frac{\beta_i\beta_j}{i+j+1}}\\ 0\\ \vdots\\ 0\end{Bmatrix}$$
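A minimal numerical sketch of the coefficient-space projection of Theorem 7, under the assumption of the standard basis on $[0,1]$ (so that $g_{ij} = 1/(i+j+1)$, a Hilbert-type matrix); the helper names are my own, not part of the paper. It carries out the projection entirely in $\mathbb{R}^{n+1}$ and returns the coefficient vector of the projection polynomial.

```python
import numpy as np

def metric_tensor(n):
    """g_ij = integral_0^1 x^i x^j dx = 1/(i+j+1) for the standard basis."""
    i, j = np.indices((n + 1, n + 1))
    return 1.0 / (i + j + 1)

def project_coefficients(alpha, beta):
    """Projection of p_n (coefficients alpha) onto p_k (coefficients beta,
    zero-padded to length n+1), computed as in Theorem 7."""
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    G = metric_tensor(len(alpha) - 1)
    num = alpha @ G @ beta        # g_ij xi^i(p_n) xi^j(p_k)
    den = beta @ G @ beta         # g_ij xi^i(p_k) xi^j(p_k)
    return (num / den) * beta     # coefficient vector of the projection

# Motivating example: p_2(x) = x^2 projected onto p_1(x) = x
print(project_coefficients([0, 0, 1], [0, 1, 0]))   # [0, 0.75, 0], i.e. (3/4) x
```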

Theorem 8 (Second Theorem). The first theorem can be written with the Kronecker product as follows:

$$\xi^{-1}\left(\frac{(\bar{\xi}_k\otimes\bar{\xi}_k)\,G}{g_{ij}\,\xi_k^i\,\xi_k^j}\,\bar{\xi}_n\right)$$

where $\bar{\xi}_k\otimes\bar{\xi}_k$ is the Kronecker product and $G = (g_{ij})$, with $g_{ij} = g_{ji}$ due to symmetry:

$$G \equiv \begin{Bmatrix}\int_0^1 f_0(x)f_0(x)\,dx & \int_0^1 f_0(x)f_1(x)\,dx & \cdots & \int_0^1 f_0(x)f_n(x)\,dx\\ \int_0^1 f_1(x)f_0(x)\,dx & \int_0^1 f_1(x)f_1(x)\,dx & \cdots & \int_0^1 f_1(x)f_n(x)\,dx\\ \vdots & \vdots & \ddots & \vdots\\ \int_0^1 f_n(x)f_0(x)\,dx & \int_0^1 f_n(x)f_1(x)\,dx & \cdots & \int_0^1 f_n(x)f_n(x)\,dx\end{Bmatrix}$$

where $f_i(x), f_j(x) \in B$.

Proof. Let $p_n(x), p_k(x) \in \mathbb{P}_n[\mathbb{R}]$ such that $p_n(x) = \sum_{r=0}^n\alpha_r x^r$ and $p_k(x) = \sum_{h=0}^k\beta_h x^h$. Therefore we have

$$\xi\left(\sum_{r=0}^n\alpha_r x^r\right) = (\alpha_0, \ldots, \alpha_n)^T \in \mathbb{R}^{n+1}$$

$$\xi\left(\sum_{h=0}^k\beta_h x^h\right) = (\beta_0, \ldots, \beta_k, \beta_{k+1} = 0, \ldots, \beta_n = 0)^T \in \mathbb{R}^{n+1}, \quad k < n$$

Then we have

$$[\bar{\xi}_k\otimes\bar{\xi}_k]_{n\times n} = \begin{bmatrix}\beta_0\beta_0 & \beta_0\beta_1 & \cdots & \beta_0\beta_k & 0 & \cdots & 0\\ \beta_1\beta_0 & \beta_1\beta_1 & \cdots & \beta_1\beta_k & 0 & \cdots & 0\\ \vdots & & & \vdots & & & \vdots\\ \beta_k\beta_0 & \beta_k\beta_1 & \cdots & \beta_k\beta_k & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0\\ \vdots & & & \vdots & & & \vdots\\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0\end{bmatrix} = \begin{pmatrix}\bar{\xi}_k\otimes\bar{\xi}_k & M_{k+1,n-k} = 0\\ M_{n-k,k+1} = 0 & M_{n-k,n-k} = 0\end{pmatrix}$$

$$[\bar{\xi}_k\otimes\bar{\xi}_k]_{n\times n}^T = \begin{pmatrix}\bar{\xi}_k\otimes\bar{\xi}_k & M_{k,n-k} = 0\\ M_{n-k,k} = 0 & M_{n-k,n-k} = 0\end{pmatrix} = [\bar{\xi}_k\otimes\bar{\xi}_k]_{n\times n}$$

We know that $G$ is an $n\times n$ matrix given by

$$G = \begin{bmatrix}g_{00} & g_{01} & g_{02} & \cdots & g_{0n}\\ g_{10} & g_{11} & g_{12} & \cdots & g_{1n}\\ \vdots & & & \ddots & \vdots\\ g_{n0} & g_{n1} & g_{n2} & \cdots & g_{nn}\end{bmatrix} = \begin{bmatrix}1 & 1/2 & 1/3 & \cdots & 1/(n+1)\\ 1/2 & 1/3 & 1/4 & \cdots & 1/(n+2)\\ \vdots & & & \ddots & \vdots\\ 1/(n+1) & 1/(n+2) & 1/(n+3) & \cdots & 1/(2n+1)\end{bmatrix}$$

Therefore we have

$$[\bar{\xi}_k\otimes\bar{\xi}_k]_{n\times n}\,G = \begin{bmatrix}\beta_0\beta_0 & \beta_0\beta_1 & \cdots & \beta_0\beta_k & 0 & \cdots & 0\\ \beta_1\beta_0 & \beta_1\beta_1 & \cdots & \beta_1\beta_k & 0 & \cdots & 0\\ \vdots & & & \vdots & & & \vdots\\ \beta_k\beta_0 & \beta_k\beta_1 & \cdots & \beta_k\beta_k & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0\\ \vdots & & & \vdots & & & \vdots\\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0\end{bmatrix}_{n\times n}\times\begin{bmatrix}1 & 1/2 & \cdots & 1/(n+1)\\ 1/2 & 1/3 & \cdots & 1/(n+2)\\ \vdots & & \ddots & \vdots\\ 1/(n+1) & 1/(n+2) & \cdots & 1/(2n+1)\end{bmatrix}_{n\times n} \tag{4.11}$$

$$= \begin{bmatrix}\sum_{r=0}^k\beta_0\beta_r g_{r0} & \sum_{r=0}^k\beta_0\beta_r g_{r1} & \cdots & \sum_{r=0}^k\beta_0\beta_r g_{rn}\\ \sum_{r=0}^k\beta_1\beta_r g_{r0} & \sum_{r=0}^k\beta_1\beta_r g_{r1} & \cdots & \sum_{r=0}^k\beta_1\beta_r g_{rn}\\ \vdots & \vdots & \ddots & \vdots\\ \sum_{r=0}^k\beta_k\beta_r g_{r0} & \sum_{r=0}^k\beta_k\beta_r g_{r1} & \cdots & \sum_{r=0}^k\beta_k\beta_r g_{rn}\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & 0\end{bmatrix}_{n\times n} \tag{4.12}$$

$$= \begin{bmatrix}\sum_{r=0}^k\frac{\beta_0\beta_r}{r+1} & \sum_{r=0}^k\frac{\beta_0\beta_r}{r+2} & \cdots & \sum_{r=0}^k\frac{\beta_0\beta_r}{n+r+1}\\ \sum_{r=0}^k\frac{\beta_1\beta_r}{r+1} & \sum_{r=0}^k\frac{\beta_1\beta_r}{r+2} & \cdots & \sum_{r=0}^k\frac{\beta_1\beta_r}{n+r+1}\\ \vdots & \vdots & \ddots & \vdots\\ \sum_{r=0}^k\frac{\beta_k\beta_r}{r+1} & \sum_{r=0}^k\frac{\beta_k\beta_r}{r+2} & \cdots & \sum_{r=0}^k\frac{\beta_k\beta_r}{n+r+1}\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & 0\end{bmatrix}_{n\times n} \tag{4.13}$$

therefore, we have

$$\frac{1}{g_{ij}\,\xi^i\xi^j}\begin{bmatrix}\sum_{r=0}^k\frac{\beta_0\beta_r}{r+1} & \sum_{r=0}^k\frac{\beta_0\beta_r}{r+2} & \cdots & \sum_{r=0}^k\frac{\beta_0\beta_r}{n+r+1}\\ \sum_{r=0}^k\frac{\beta_1\beta_r}{r+1} & \sum_{r=0}^k\frac{\beta_1\beta_r}{r+2} & \cdots & \sum_{r=0}^k\frac{\beta_1\beta_r}{n+r+1}\\ \vdots & \vdots & \ddots & \vdots\\ \sum_{r=0}^k\frac{\beta_k\beta_r}{r+1} & \sum_{r=0}^k\frac{\beta_k\beta_r}{r+2} & \cdots & \sum_{r=0}^k\frac{\beta_k\beta_r}{n+r+1}\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & 0\end{bmatrix}\begin{bmatrix}\alpha_0\\ \alpha_1\\ \vdots\\ \alpha_k\\ \vdots\\ \alpha_n\end{bmatrix} = \begin{Bmatrix}\dfrac{\sum_{i=0}^n\sum_{j=0}^k\frac{\alpha_i\beta_0\beta_j}{i+j+1}}{\sum_{i=0}^k\sum_{j=0}^k\frac{\beta_i\beta_j}{i+j+1}}\\[2ex] \dfrac{\sum_{i=0}^n\sum_{j=0}^k\frac{\alpha_i\beta_1\beta_j}{i+j+1}}{\sum_{i=0}^k\sum_{j=0}^k\frac{\beta_i\beta_j}{i+j+1}}\\ \vdots\\ \dfrac{\sum_{i=0}^n\sum_{j=0}^k\frac{\alpha_i\beta_k\beta_j}{i+j+1}}{\sum_{i=0}^k\sum_{j=0}^k\frac{\beta_i\beta_j}{i+j+1}}\\ 0\\ \vdots\\ 0\end{Bmatrix}$$
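The Kronecker-product formulation of Theorem 8 can be sketched as follows (again assuming the standard basis on $[0,1]$ and reusing metric_tensor from the previous listing). The product of the zero-padded coefficient vector with itself is represented here as an outer-product matrix, matching the matrix form used in the proof, then divided by $g_{ij}\xi_k^i\xi_k^j$ and applied to $\xi(p_n)$.

```python
import numpy as np

def kron_projection(alpha, beta):
    """Projection of p_n onto p_k via the product matrix of Theorem 8:
    ((xi_k (x) xi_k) G / (g_ij xi_k^i xi_k^j)) applied to xi(p_n)."""
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    G = metric_tensor(len(alpha) - 1)
    B = np.outer(beta, beta)          # matrix form of xi_k (x) xi_k used in the proof
    P = (B @ G) / (beta @ G @ beta)   # normalised projector matrix
    return P @ alpha                  # coefficient vector of the projection

# Same example as before: x^2 projected onto x gives (3/4) x
print(kron_projection([0, 0, 1], [0, 1, 0]))   # [0, 0.75, 0]
```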

Lemma 9. The matrix

$$\frac{(\bar{\xi}_k\otimes\bar{\xi}_k)\,G}{g_{ij}\,\xi_k^i\,\xi_k^j}$$

in Theorem 8 is normalised.

Suppose that we have two polynomials $g, g'$ of order $k \leq n$ such that $g' = \gamma g$ for some $\gamma \in \mathbb{R}\setminus\{0\}$. Then, we propose that

$$\frac{(\bar{\xi}_k\otimes\bar{\xi}_k)\,G}{g_{ij}\,\xi_k^i\,\xi_k^j} = \frac{(\bar{\xi}'_k\otimes\bar{\xi}'_k)\,G}{g_{ij}\,\xi'^i_k\,\xi'^j_k}$$

and, further, that such projectors can be constructed by normalising the coefficient vector $\bar{\xi}_k$, which I will denote by $\hat{\xi}_k$, with $\hat{\xi}_k\oplus_{s=k+1}^n\{0\} \in S^k\oplus_{s=k+1}^n\{0\}$, $S^k$ being the hypersphere in $\mathbb{R}^{k+1}$:

$$S^k\oplus_{s=k+1}^n\{0\} = \left\{\hat{\xi}_k\oplus_{s=k+1}^n\{0\} \in \mathbb{R}^{n+1}\;\middle|\; \sum_{r=0}^k\hat{\beta}_r^2 = 1\right\}$$

where the $\hat{\beta}_r^2$ are the squares of the components of the normalised coefficient vector.

Proof. Let $f, g \in \mathbb{P}_n[\mathbb{R}]$ such that $\deg(g) \leq \deg(f)$, with $f(x) = \sum_{k=0}^n\alpha_k x^k$ and $g(x) = \sum_{r=0}^k\beta_r x^r$. Using the mapping $\xi$, we find that $\xi\left(\sum_{r=0}^k\beta_r x^r\right) \equiv (\beta_0, \beta_1, \ldots, \beta_k)$, and therefore $\bar{\xi}_k\oplus_{s=k+1}^n\{0\} = (\beta_0, \beta_1, \ldots, \beta_k, 0, \ldots, 0) \in \mathbb{R}^{n+1}$. We can calculate its Kronecker product.

$$\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)\otimes\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right) = \begin{bmatrix}\beta_0\beta_0 & \beta_0\beta_1 & \cdots & \beta_0\beta_k & 0 & \cdots & 0\\ \beta_1\beta_0 & \beta_1\beta_1 & \cdots & \beta_1\beta_k & 0 & \cdots & 0\\ \vdots & & & \vdots & & & \vdots\\ \beta_k\beta_0 & \beta_k\beta_1 & \cdots & \beta_k\beta_k & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0\\ \vdots & & & \vdots & & & \vdots\\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0\end{bmatrix}$$

Therefore, we have

$$\left\|\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)\otimes\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)\right\|_F^2 = \sum_{r=0}^n\sum_{p=0}^n|\beta_r\beta_p|^2 = \left\|\bar{\xi}_k\otimes\bar{\xi}_k\right\|^2$$

Performing the calculation we get

$$\left\|\bar{\xi}_k\otimes\bar{\xi}_k\right\|^2 = \sum_{r=0}^n\sum_{p=0}^n|\beta_r\beta_p|^2 \tag{4.14}$$

$$= \sum_{r=0}^k\beta_r^4 + 2\sum_{r=0}^k\sum_{s=r+1}^k|\beta_r\beta_s|^2 \tag{4.15}$$

$$= \left(\sum_{r=0}^k\beta_r\beta_r\right)^2 \tag{4.16}$$

This implies that $\left\|\bar{\xi}_k\otimes\bar{\xi}_k\right\| = \sum_{r=0}^k\beta_r\beta_r = \mathrm{Tr}\left(\bar{\xi}_k\otimes\bar{\xi}_k\right) = \mathrm{Tr}\left(\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)\otimes\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)\right)$.

It is clear that the matrix

$$\frac{\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)\otimes\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)}{\mathrm{Tr}\left(\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)\otimes\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)\right)} = \frac{\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)\otimes\left(\bar{\xi}_k\oplus_{s=k+1}^n\{0\}\right)}{\delta_{ij}\,\beta^i\beta^j}$$

is normalised. Hence, we may conclude that

$$\frac{(\bar{\xi}_k\otimes\bar{\xi}_k)\,G}{g_{ij}\,\xi_k^i\,\xi_k^j}$$

is also normalised. It is also clear that if $g, g'$ are such that $g' = \gamma g$, $\gamma \in \mathbb{R}\setminus\{0\}$, normalisation gives $\pm\hat{\xi}_k\oplus_{s=k+1}^n\{0\}$, which implies that $\pm\hat{\xi}_k$ are antipodal points on the hypersphere in $\mathbb{R}^{k+1}$. We can therefore see that

$$\frac{(\bar{\xi}_k\otimes\bar{\xi}_k)\,G}{g_{ij}\,\xi_k^i\,\xi_k^j} = \frac{(\bar{\xi}'_k\otimes\bar{\xi}'_k)\,G}{g_{ij}\,\xi'^i_k\,\xi'^j_k}$$
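The scale invariance claimed in Lemma 9 — that $g$ and $g' = \gamma g$ produce the same normalised projector — can be checked numerically with the helpers above ($\gamma$ and the coefficient vector chosen arbitrarily).

```python
import numpy as np

beta = np.array([1.0, -2.0, 0.5, 0.0])      # xi(g), zero-padded
gamma_scale = -3.7                           # arbitrary nonzero scalar
beta_scaled = gamma_scale * beta             # xi(g') with g' = gamma * g

G = metric_tensor(len(beta) - 1)

def normalised_projector(b, G):
    return (np.outer(b, b) @ G) / (b @ G @ b)

P1 = normalised_projector(beta, G)
P2 = normalised_projector(beta_scaled, G)
print(np.allclose(P1, P2))   # True: the gamma factors cancel in the quotient
```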

Theorem 10. Given two polynomials $g, g' \in \mathbb{P}_n[\mathbb{R}]$ such that $\deg(g) = k$ and $\deg(g') = k' \leq n$, then, given some $f \in \mathbb{P}_n[\mathbb{R}]$, the projection of $f$ onto $g + g'$ is given by the following expression:

$$\xi^{-1}\left(\left(\frac{\bar{\xi}_k\otimes\bar{\xi}_k}{g_{ij}\,\xi_k^i\,\xi_k^j} + \frac{\bar{\xi}_k\otimes\bar{\xi}_{k'}}{g_{ij}\,\xi_k^i\,\xi_{k'}^j} + \frac{\bar{\xi}_{k'}\otimes\bar{\xi}_k}{g_{ij}\,\xi_{k'}^i\,\xi_k^j} + \frac{\bar{\xi}_{k'}\otimes\bar{\xi}_{k'}}{g_{ij}\,\xi_{k'}^i\,\xi_{k'}^j}\right)G\right)$$

Proof. Let $g, g' \in \mathbb{P}_n[\mathbb{R}]$ with $\deg(g) = k$ and $\deg(g') = k' \leq n$, such that $g = \sum_{r=0}^k\beta_r x^r$ and $g' = \sum_{r=0}^{k'}\beta'_r x^r$. We know that $\deg(g + g') = \max\{k, k'\}$. We also define $f = \sum_{q=0}^n\alpha_q x^q$. We therefore have

$$g + g' = \sum_{r=0}^k\beta_r x^r + \sum_{r=0}^{k'}\beta'_r x^r = \sum_{r=0}^{\min\{k,k'\}}(\beta_r + \beta'_r)x^r + \sum_{s=\min\{k,k'\}+1}^{\max\{k,k'\}}\beta_s x^s$$

Therefore, we have

$$\xi(g + g') = \xi\left(\sum_{r=0}^{\min\{k,k'\}}(\beta_r + \beta'_r)x^r + \sum_{s=\min\{k,k'\}+1}^{\max\{k,k'\}}\beta_s x^s\right) \tag{4.17}$$

$$= \left((\beta_0 + \beta'_0), \ldots, (\beta_{\min\{k,k'\}} + \beta'_{\min\{k,k'\}}), \beta_{\min\{k,k'\}+1}, \ldots, \beta_{\max\{k,k'\}}\right)^T \in \mathbb{R}^{\max\{k,k'\}+1} \tag{4.18}$$

I will denote this vector by $\bar{\xi}_{k,k'}$, and its zero-padded version by $\bar{\xi}_{k,k'}\oplus_{s=\max\{k,k'\}+1}^n\{0\}$.

We then need the Kronecker product

$$\left(\bar{\xi}_{k,k'}\oplus_{s=\max\{k,k'\}+1}^n\{0\}\right)\otimes\left(\bar{\xi}_{k,k'}\oplus_{s=\max\{k,k'\}+1}^n\{0\}\right)$$

The vector $\bar{\xi}_{k,k'}\oplus_{s=\max\{k,k'\}+1}^n\{0\}$ can be written as

$$\bar{\xi}_{k,k'}\oplus_{s=\max\{k,k'\}+1}^n\{0\} = \xi(g) + \xi(g') = \bar{\xi}_k(g)\oplus_{s=k+1}^n\{0\} + \bar{\xi}_{k'}(g')\oplus_{s=k'+1}^n\{0\}$$

For clearer notation, we write $A = \bar{\xi}_k(g)\oplus_{s=k+1}^n\{0\}$ and $B = \bar{\xi}_{k'}(g')\oplus_{s=k'+1}^n\{0\}$. Therefore, by the distributive law of the Kronecker product, we get the following matrix:

$$(A + B)\otimes(A + B) = A\otimes A + A\otimes B + B\otimes A + B\otimes B$$

which in matrix form gives us

( A + B ) ( A + B ) = ( ξ ¯ k ξ ¯ k M k + 1 , n k = 0 M n k , k + 1 = 0 M n k , n k = 0 ) + ( ξ ¯ k ξ ¯ k M k , n k = 0 M n k , k = 0 M n k , n k = 0 ) (4.19)

+ ( ξ ¯ k ξ ¯ k M n k , k = 0 M k , n k = 0 M n k , n k = 0 ) + ( ξ ¯ k ξ ¯ k M k , n k = 0 M n k , k = 0 M n k , n k = 0 ) (4.20)

Clearly, $\bar{\xi}_k\otimes\bar{\xi}_k$ is of order $(k+1, k+1)$ and $\bar{\xi}_{k'}\otimes\bar{\xi}_{k'}$ is of order $(k'+1, k'+1)$; this implies that $\bar{\xi}_k\otimes\bar{\xi}_{k'}$ is of order $(k+1, k'+1)$ and $\bar{\xi}_{k'}\otimes\bar{\xi}_k$ is of order $(k'+1, k+1)$.

Normalising $A\otimes A + A\otimes B + B\otimes A + B\otimes B$ and multiplying by the metric tensor, we get

$$\left(\frac{A\otimes A}{g_{ij}\,\xi^i\xi^j} + \frac{A\otimes B}{g_{ij}\,\xi^i\xi^j} + \frac{B\otimes A}{g_{ij}\,\xi^i\xi^j} + \frac{B\otimes B}{g_{ij}\,\xi^i\xi^j}\right)G$$

It can be verified that

$$\left(\frac{A\otimes A}{g_{ij}\,\xi^i\xi^j} + \frac{A\otimes B}{g_{ij}\,\xi^i\xi^j} + \frac{B\otimes A}{g_{ij}\,\xi^i\xi^j} + \frac{B\otimes B}{g_{ij}\,\xi^i\xi^j}\right)G = \left(\frac{A\otimes A}{\mathrm{Tr}((A\otimes A)G)} + \frac{A\otimes B}{\mathrm{Tr}((A\otimes B)G)} + \frac{B\otimes A}{\mathrm{Tr}((B\otimes A)G)} + \frac{B\otimes B}{\mathrm{Tr}((B\otimes B)G)}\right)G$$

This is equal to

$$\frac{1}{g_{ij}\left(\sum_{m=1}^2\sum_{m'=1}^2\xi_m^i\,\xi_{m'}^j\right)}\left(A\otimes A + B\otimes A + A\otimes B + B\otimes B\right)G$$
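The distributive expansion $(A + B)\otimes(A + B) = A\otimes A + A\otimes B + B\otimes A + B\otimes B$ used above can be checked directly; the sketch below uses numpy's kron on two zero-padded coefficient vectors (the values are chosen purely for illustration).

```python
import numpy as np

A = np.array([1.0, 2.0, 0.0, 0.0])   # xi(g),  deg(g)  = 1, padded into R^4
B = np.array([0.5, -1.0, 3.0, 0.0])  # xi(g'), deg(g') = 2, padded into R^4

lhs = np.kron(A + B, A + B)
rhs = np.kron(A, A) + np.kron(A, B) + np.kron(B, A) + np.kron(B, B)
print(np.allclose(lhs, rhs))   # True: the Kronecker product distributes over sums
```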

5. The Group

To formulate the group structure, we focus our attention on the subspaces of $\mathbb{P}_n[\mathbb{R}]$, which we define as follows:

$$\Omega_k := \left\{g \in \mathbb{P}_n[\mathbb{R}] : \deg(g) = k\right\}$$

It is clear that $\Omega_k \subset \mathbb{P}_n[\mathbb{R}]$. Then, we also know that

$$\bar{\xi}(\Omega_k)\oplus_{s=k+1}^n\{0\} = (\beta_0, \beta_1, \ldots, \beta_k, 0, \ldots, 0) \in \mathbb{R}^{n+1}$$

Then projectors on $\Omega_k$ can be collected into $G_{\Omega_k}$, which represents the set of all projectors onto the subspace $\Omega_k$.

Theorem 11. The set $G_{\Omega_k}$ is a group under the mapping

$$\psi: G_{\Omega_k}\times G_{\Omega_k} \to G_{\Omega_k}$$

such that

$$\phi := \begin{cases}\phi_{\bar{\xi}(g)} = \phi(\bar{\xi}(g)) = \left(\bar{\xi}(g)\oplus_{s=k+1}^n\{0\}\right)\otimes\left(\bar{\xi}(g)\oplus_{s=k+1}^n\{0\}\right)\\[1ex] \phi_{\bar{\xi}(g')} = \phi(\bar{\xi}(g')) = \left(\bar{\xi}(g')\oplus_{s=k+1}^n\{0\}\right)\otimes\left(\bar{\xi}(g')\oplus_{s=k+1}^n\{0\}\right)\end{cases}$$

$$\psi(P_g, P_{g'}) = P_g + P_{gg'} + P_{g'g} + P_{g'} = P_{g+g'}$$

where the set $G_{\Omega_k}$ is defined as $G_{\Omega_k} := \left\{P : P = \phi_{\bar{\xi}(g)},\; g \in \Omega_k\right\}$.

Proof. We write $A = \bar{\xi}(g)\oplus_{s=k+1}^n\{0\}$ and $A' = \bar{\xi}(g')\oplus_{s=k+1}^n\{0\}$.

Given some $g, g' \in \Omega_k$, it is clear that $g + g' \in \Omega_k$. We also know that $\xi(g) = A$ and $\xi(g') = A'$; hence projecting in the direction of $g + g'$ corresponds to the projection built from $A + A'$. From the section above we have

$$(A + A')\otimes(A + A') = \begin{pmatrix}\bar{\xi}_k\otimes\bar{\xi}_k & M_{k+1,n-k} = 0\\ M_{n-k,k+1} = 0 & M_{n-k,n-k} = 0\end{pmatrix} + \begin{pmatrix}\bar{\xi}_k\otimes\bar{\xi}'_k & M_{k+1,n-k} = 0\\ M_{n-k,k+1} = 0 & M_{n-k,n-k} = 0\end{pmatrix} \tag{5.1}$$

$$\quad + \begin{pmatrix}\bar{\xi}'_k\otimes\bar{\xi}_k & M_{k+1,n-k} = 0\\ M_{n-k,k+1} = 0 & M_{n-k,n-k} = 0\end{pmatrix} + \begin{pmatrix}\bar{\xi}'_k\otimes\bar{\xi}'_k & M_{k+1,n-k} = 0\\ M_{n-k,k+1} = 0 & M_{n-k,n-k} = 0\end{pmatrix} \tag{5.2}$$

In the same way that $g + g' = g' + g$ is commutative in $\Omega_k$, we can see that $(A + A')\otimes(A + A') = (A' + A)\otimes(A' + A)$, hence the operation is also commutative. In a similar way, we can argue associativity. It is also clear that for $k = 0, \ldots, n$ the zero polynomial is in each $\Omega_k$. The zero polynomial gives the identity element, since $(A + \bar{0})\otimes(A + \bar{0}) = A\otimes A$. The inverse element is simply $A' = -\bar{\xi}_k(g)\oplus_{s=k+1}^n\{0\} = \xi(-g) = -\xi(g)$. Hence, we see that $(A + (-A))\otimes(A + (-A)) = P_0$.

We conclude that $G_{\Omega_k}$ is a group.

Given that for each $k = 0, \ldots, n$ we can say that $\Omega_{k'} \subseteq \Omega_k$ for $k' < k$ and $\Omega_k \subseteq \mathbb{P}_n[\mathbb{R}]$, this tells us that $G_{\Omega_{k'}} < G_{\Omega_k}$ whenever $k' < k$. Then, from group theory, we know that the union of two subgroups is a group if one is a subset of the other. Hence, we have the following result:

$$G = \bigcup_{k\in I}G_{\Omega_k}$$

is also a group. Indeed, we have

$$G_{\Omega_0}\subseteq G_{\Omega_1}\subseteq G_{\Omega_2}\subseteq\cdots\subseteq G_{\Omega_n}$$

6. Conclusion

In conclusion, given the results above, we find that, with the right construct, projections in polynomial spaces are very similar to traditional projections in Euclidean spaces. This operation can be achieved via an integral operator or a Kronecker product. We have also noticed that, very similarly, we are using hyperspheres in $\mathbb{R}^{k+1}$ to construct such operators. A paper previously published in ALAMT, discussing the differential-geometry aspect of projections and the manifold structure, suggests that such a link can also be established for polynomial spaces.

Acknowledgements

Dedicated to both my grandmother and grandfather who left us too early. I know Papi would be happy to see this paper. You are very much missed.

I also would like to dedicate this paper to my girlfriend, Miss Yang Xiaoying, and would like to thank her for the love, support and happiness she brings to my life.

Notation

The notation system is as follows:

1) $P_n[x]$, $\mathbb{P}_n[\mathbb{R}]$: The space of polynomials of degree at most $n$ over the real numbers.

2) $B(x)$: The standard basis in $P_n[x]$.

3) $f(x), g(x), h(x)$: arbitrary elements of $\mathbb{P}_n[\mathbb{R}]$.

4) $I(x,\varepsilon)$: Operator on $\mathbb{P}_n[\mathbb{R}]$ on the interval $[a,b]$ with parameter $\varepsilon$.

5) $\deg(f)$: Degree of the polynomial $f(x)$ in $P_n[x]$.

6) $\varphi, \xi$: Mappings between $P_n[x]$ and $\mathbb{R}^{n+1}$.

7) $g_{ij}$: Metric tensor on $P_n[x]$.

8) $\bar{\xi}\otimes\bar{\xi}$: The Kronecker product of the vector $\bar{\xi}$ with itself.

9) $\Omega_k$: The subspace of polynomials of degree $k$.

10) $G_{\Omega_k}$: The set of projectors onto $\Omega_k$.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Niglio, J.-F. (2018) The Projective Group as a Projective Manifold. Advances in Linear Algebra & Matrix Theory, 8, 134-142.
https://doi.org/10.4236/alamt.2018.84012
[2] Niglio, J.-F. (2019) A Follow-Up on Projection Theory: Theorems and Group Action. Advances in Linear Algebra & Matrix Theory, 9, 1-19.
https://doi.org/10.4236/alamt.2019.91001
[3] Hartig, D. Orthogonal Projections in Function Spaces.
http://wk.ixueshu.com/file/062e60b8cfa8b283318947a18e7f9386.html
[4] Liesen, J. and Mehrmann, V. (2015) Linear Algebra. Springer.
