Legendre Polynomial Kernel: Application in SVM

Abstract

In machine learning, the Support Vector Machine (SVM) is a classification method. For non-linearly separable data, kernel functions are a basic ingredient of the SVM technique. In this paper, we briefly recall some useful results on the decomposition of RKHSs. Based on orthogonal polynomial theory and the Mercer theorem, we construct the high power Legendre polynomial kernel on the cube $[-1,1]^d$. After presenting the theoretical background of SVM, we evaluate the performance of this kernel on some illustrative examples in comparison with the Rbf, linear and polynomial kernels.

1. Introduction

The reproducing kernel approach has many applications in probability theory, statistics and, more recently, machine learning (see [1] [2] [3]). It has been applied in many fields of the life sciences such as pattern recognition, biology, medical diagnosis, chemistry and bioinformatics.

It has long been known that data sets can be modeled by a family of points $X \subset \mathbb{R}^d$, with the similarity between points given by an inner product on $\mathbb{R}^d$. The classification problem consists of separating these points into classes with respect to given properties. In the simplest situation, the points are linearly separable in the sense that there exists a hyperplane $H_S$ separating the two classes.

Support vector machines (SVMs) have become a very powerful tool in machine learning, in particular for classification and regression problems ([1] [4] [5]). A classification problem may involve only two classes of data, in which case we speak of binary classification, or more than two classes, which is called a multi-class classification problem. Here we focus on binary classification and on the application of kernel functions in SVM classification.

In the simplest situations, the linear SVM technique finds the optimal hyperplane separating the points into two classes. Unfortunately, in concrete examples these points are often not linearly separable. One can then express the similarity of points in terms of a positive definite kernel function $k$ on $X$. This leads to a technique called non-linear SVM, which combines linear SVM with kernel tools.

Mathematically, this non-linear SVM approach is related to the Kolmogorov representation $(\mathcal{H}, \Phi)$, where $\mathcal{H}$ represents the feature space and $\Phi$ is the feature map (see [1]). Since separation expresses a degree of similarity between points in the same class, Vapnik used the kernel approach to translate the problem from the initial space $\mathbb{R}^d$ to the feature space. The transfer between the initial space and the feature space is made via the feature map, and similarities are expressed by the inner product in the feature space, which is given by a kernel $k$. The crucial idea of kernel methods is that non-linearly separable points can be transformed into linearly separable points in the feature space while preserving similarities, which is expressed by

$$\langle \Phi(x_i), \Phi(x_j) \rangle = k(x_i, x_j).$$

Furthermore, the solution of the classification problem using non-linear SVM is given by the decision function, which depends only on the kernel and the support vectors ($x_i \in S$). Precisely, it takes the following form:

$$F(x) = \mathrm{sgn}\Big( b + \sum_{x_i \in S} \alpha_i\, k(x_i, x) \Big).$$

Since the choice of the kernel function is not canonical, the first problem of this approach is how to choose a suitable kernel function for given data points. The second problem is that the feature space is in general infinite dimensional, which causes technical difficulties when designing a computational algorithm for the solution. This problem is also related to the feature map $\Phi$, which is in general unknown. In our case, we use the Mercer decomposition theorem in order to obtain a polynomial-type kernel with a good separation property. This means that the infinite-dimensional feature space can be approximated by a finite-dimensional one, which reduces the dimensionality of the transformed data in the feature space.

The paper is organized as follows. In Section 2, we recall some known results on Reproducing Kernel Hilbert Spaces (RKHS in what follows), in particular the Mercer decomposition theorem 2.2 and the high power kernel theorem 2.3. Section 3 is devoted to the main result, in which we introduce the one-dimensional Legendre polynomial kernel $K_n$ and give its canonical decomposition in terms of Legendre orthogonal polynomials (see Theorem 3.1). Next, we deduce the high power Legendre polynomial kernel $\mathbf{K}_n = K_n^{\otimes d}$ defined on the cube $[-1,1]^d$, for which the feature space $\mathbf{H}_n = H_n^{\otimes d}$ is the tensor product Hilbert space of the RKHS associated with $K_n$. In Section 4, we recall the theoretical foundation of linear and non-linear SVMs. In Section 5, we give some illustrative examples in order to evaluate the performance of the Legendre polynomial kernel in comparison with some kernels predefined in Python.

2. Preliminaries

In this section, we begin by describing the RKHS and its associated kernel. Then we give the decomposition of a given positive definite kernel on a measure space $X$ (see Theorem 2.2).

Definition 1. (Positive definite kernel).

Let $X$ be a nonempty set. A symmetric function $k : X \times X \to \mathbb{R}$ is called a positive definite kernel if

$$\sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j\, k(x_i, x_j) \ge 0, \quad \forall n \ge 1,\ \forall a_1, \dots, a_n \in \mathbb{R},\ \forall x_1, \dots, x_n \in X, \tag{2.1}$$

and, for mutually distinct $x_i$, equality holds only when all the $a_i$ are zero.

Clearly, every inner product is a positive definite kernel. Moreover, if $H$ is any Hilbert space, $X$ a nonempty set and $\phi : X \to H$, then the function $h(x,y) := \langle \phi(x), \phi(y) \rangle$ is positive definite.

Now let $H$ be a Hilbert space of functions mapping from some non-empty set $X$ to $\mathbb{R}$, i.e., $H$ is considered as a subset of $\mathbb{R}^X$. We write the inner product on $H$ as

$$\langle f, g \rangle, \quad f, g \in H,$$

and the associated norm will be denoted by

$$\|f\| = \langle f, f \rangle^{1/2}, \quad f \in H.$$

We may alternatively write the function $f$ as $f(\cdot)$ to indicate that it takes an argument in $X$.

Definition 2. (Reproducing kernel Hilbert space).

Let $(H, \langle \cdot, \cdot \rangle)$ be a Hilbert space of $\mathbb{R}$-valued functions defined on a non-empty set $X$. We say that $H$ is a Reproducing Kernel Hilbert Space if there exists a kernel function $k : X \times X \to \mathbb{R}$ such that the following properties are satisfied:

1) $\forall x \in X$, $k_x := k(\cdot, x) \in H$,

2) $\forall x \in X$, $\forall f \in H$, $f(x) = \langle k_x, f \rangle$ (the reproducing property).

Remark 1. From the reproducing property, we note that

$$k(x,y) = k_y(x) = \langle k_x, k_y \rangle = \langle k(\cdot, x), k(\cdot, y) \rangle, \quad x, y \in X. \tag{2.2}$$

Thus $k$ is necessarily positive definite. Note also that the reproducing kernel $k$ associated with $H$ is unique.

Theorem 2.1. (Moore-Aronszajn [6])

Let $k : X \times X \to \mathbb{R}$ be a positive definite kernel. There is a unique RKHS $H \subset \mathbb{R}^X$ with reproducing kernel $k$. Moreover, if the space

$$H_0 = \mathrm{span}\,\{ k_x \}_{x \in X} \tag{2.3}$$

is endowed with the inner product

$$\langle f, g \rangle_{H_0} = \sum_{i=1}^{n} \sum_{j=1}^{m} \alpha_i \beta_j\, k(x_i, y_j), \tag{2.4}$$

where $f = \sum_{i=1}^{n} \alpha_i k_{x_i}$ and $g = \sum_{j=1}^{m} \beta_j k_{y_j}$, then $H$ is the completion of $H_0$ w.r.t. the inner product (2.4).

Now let us consider a compact metric space $X$ and a finite Borel measure $\nu$ on it, together with a positive definite continuous kernel on $X$. Let $S_k$ be the linear map

$$S_k : L^2(X; \nu) \to C(X),$$

$$(S_k f)(x) = \int_X k(x,y)\, f(y)\, d\nu(y),$$

and let $T_k = I_k \circ S_k$, where $I_k : C(X) \to L^2(X; \nu)$ is the canonical injection. Then we have the following result.

Theorem 2.2. (Mercer’s theorem [6])

1) $T_k$ is a compact, self-adjoint, positive definite operator from $L^2(X;\nu)$ to itself. It is called the integral operator corresponding to the kernel $k$.

2) There exists an orthonormal system $\{\tilde{e}_j\}_{j \in J}$ of eigenvectors of $T_k$ with eigenvalues $\{\lambda_j\}_{j \in J}$, where $J$ is an at most countable index set corresponding to the strictly positive eigenvalues of $T_k$, such that

$$T_k(f) = \sum_{j \in J} \lambda_j \langle \tilde{e}_j, f \rangle\, \tilde{e}_j, \quad f \in L^2(X;\nu). \tag{2.5}$$

3) For all $x, y \in X$,

$$k(x,y) = \sum_{j \in J} \lambda_j\, e_j(x)\, e_j(y), \tag{2.6}$$

where $e_j = \lambda_j^{-1} S_k \tilde{e}_j \in C(X)$, and where the convergence is uniform on $X \times X$ and absolute for each pair $(x,y) \in X \times X$.

4) The Hilbert space

$$H = \Big\{ f = \sum_{j \in J} a_j e_j : \Big( \frac{a_j}{\sqrt{\lambda_j}} \Big)_{j \in J} \in \ell^2(J) \Big\}$$

endowed with the inner product

$$\Big\langle \sum_{j \in J} a_j e_j, \sum_{j \in J} b_j e_j \Big\rangle_H = \sum_{j \in J} \frac{a_j b_j}{\lambda_j} \tag{2.7}$$

is the RKHS associated with k.

Theorem 2.3. (Power of kernel [4] [7])

Let $k$ be a kernel on $X$. Then

$$K^{\otimes d}(x,y) := \prod_{i=1}^{d} k(x_i, y_i), \quad x = (x_1, \dots, x_d),\ y = (y_1, \dots, y_d) \in X^d,$$

is a kernel on $X^d$. In addition, there is an isometric isomorphism between $H_{K^{\otimes d}}$ and the Hilbert space tensor product $H_k^{\otimes d}$.

Example 1. Here we list some of the most commonly used kernels in SVM (a short Python sketch of these functions follows the list).

1) The linear kernel: $k_{\mathrm{lin}}(x,y) = \langle x, y \rangle + c$, $c \ge 0$.

2) The polynomial kernel: $k_{\mathrm{poly}}(x,y) = (a \langle x, y \rangle + b)^m$, $a > 0$, $b \ge 0$, $m \ge 1$.

3) The exponential kernel: $k_{\mathrm{exp}}(x,y) = e^{\sigma \langle x, y \rangle}$, $\sigma > 0$.

4) The radial basis function (Rbf) kernel: $k_{\mathrm{rbf}}(x,y) = e^{-\gamma \|x - y\|^2}$, $\gamma > 0$. It is also called the Gaussian kernel.

5) The sigmoid kernel: $k_{\mathrm{sigm}}(x,y) = \tanh(\langle x, y \rangle + c)$, $c \ge 0$.
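
For concreteness, the following NumPy sketch evaluates these five kernels on a pair of vectors; the function names and default parameter values are ours and purely illustrative.

```python
import numpy as np

# Illustrative implementations of the kernels listed above (names are ours).
def k_lin(x, y, c=0.0):
    return np.dot(x, y) + c

def k_poly(x, y, a=1.0, b=0.0, m=2):
    return (a * np.dot(x, y) + b) ** m

def k_exp(x, y, sigma=1.0):
    return np.exp(sigma * np.dot(x, y))

def k_rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.linalg.norm(np.asarray(x) - np.asarray(y)) ** 2)

def k_sigm(x, y, c=0.0):
    return np.tanh(np.dot(x, y) + c)
```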

3. Legendre Polynomial Kernel and High Power Legendre Polynomial Kernel

Let us consider the family of Legendre polynomials $\{L_n\}_{n \ge 0}$ defined by the following recurrence relation (see [8]):

$$L_0(x) = 1, \quad L_1(x) = x, \quad L_{n+1}(x) = x L_n(x) - w_n L_{n-1}(x), \quad n \ge 1, \tag{3.1}$$

where

$$w_n = \frac{n^2}{(2n-1)(2n+1)}, \quad n \ge 1. \tag{3.2}$$

It is well-known [8] that these polynomials are orthogonal w.r.t. the inner product

$$\langle f, g \rangle := \int_{-1}^{1} f(x) g(x)\, dx.$$

Moreover, we have

$$\|L_n\|^2 = w_1 \cdots w_n. \tag{3.3}$$

In order to apply the Mercer theorem in SVM, we consider the Legendre polynomial kernel defined on $[-1,1]^2$ by

$$K_n(x,y) = \sum_{k=0}^{n} L_k(x) L_k(y). \tag{3.4}$$
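
Before stating the main theorem, here is a minimal NumPy sketch of the recurrence (3.1)-(3.2) and of the kernel (3.4); the function names are ours and the code is only illustrative.

```python
import numpy as np

def legendre_sequence(x, n):
    """Evaluate the Legendre polynomials L_0, ..., L_n of (3.1)-(3.2) at the points x."""
    x = np.asarray(x, dtype=float)
    L = [np.ones_like(x), x.copy()]            # L_0 = 1, L_1 = x
    for k in range(1, n):
        w = k**2 / ((2 * k - 1) * (2 * k + 1)) # w_k of (3.2)
        L.append(x * L[k] - w * L[k - 1])      # three-term recurrence (3.1)
    return np.stack(L[:n + 1])                 # shape (n + 1, len(x))

def legendre_kernel_1d(x, y, n=20):
    """K_n(x, y) = sum_{k=0}^{n} L_k(x) L_k(y) of (3.4), for x, y in [-1, 1]."""
    Lx = legendre_sequence(np.atleast_1d(x), n)
    Ly = legendre_sequence(np.atleast_1d(y), n)
    return Lx.T @ Ly                           # Gram matrix of shape (len(x), len(y))
```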

Theorem 3.1. Let

$$H_n = \Big\{ f = \sum_{k=0}^{n} \alpha_k L_k,\ \alpha_k \in \mathbb{R} \Big\} \tag{3.5}$$

be the linear space generated by the family $\{L_k\}_{0 \le k \le n}$. Then

1) $H_n$ is a Hilbert space endowed with the inner product

$$\Big\langle \sum_{i=0}^{n} \alpha_i L_i, \sum_{j=0}^{n} \beta_j L_j \Big\rangle = \sum_{j=0}^{n} \alpha_j \beta_j. \tag{3.6}$$

2) The sequence $\{L_k\}_{0 \le k \le n}$ is an orthonormal basis of $H_n$.

3) $H_n$ is an RKHS with reproducing kernel $K_n$.

4) The feature map associated with $K_n$ is

$$\phi_n : [-1,1] \to H_n, \quad t \mapsto \phi_n(t) = \sum_{j=0}^{n} L_j(t)\, L_j.$$

Proof. First: Clearly, (3.6) defines an inner product on $H_n$, for which $H_n$ becomes a Hilbert space since it is finite dimensional.

Second: From the definition of the inner product (3.6), $\{L_k\}_{0 \le k \le n}$ is clearly an orthonormal system of $H_n$ whose cardinality coincides with the dimension of $H_n$. Thus it is an orthonormal basis of $H_n$.

Third: Let us consider the operator $S_n$ defined on $H_L := L^2([-1,1])$ by

$$S_n : f \mapsto \int_{-1}^{1} K_n(\cdot, y)\, f(y)\, dy \in C([-1,1]).$$

Set $T_n = I_n \circ S_n$, where $I_n$ is the canonical injection of $C([-1,1])$ into $H_L$.

1) $K_n$ is positive definite. In fact,

$$\sum_{1 \le i,j \le m} a_i a_j K_n(x_i, x_j) = \sum_{1 \le i,j \le m} a_i a_j \sum_{k=0}^{n} L_k(x_i) L_k(x_j) = \sum_{k=0}^{n} \sum_{1 \le i,j \le m} a_i a_j L_k(x_i) L_k(x_j) = \sum_{k=0}^{n} \Big( \sum_{i=1}^{m} a_i L_k(x_i) \Big)^2 \ge 0.$$

2) $T_n$ is symmetric since $K_n$ is symmetric, so by Mercer's theorem 2.2, $T_n$ is compact, self-adjoint and positive definite.

3) For all $j = 0, 1, \dots, n$, we have

$$T_n(L_j)(x) = \int_{-1}^{1} K_n(x,y) L_j(y)\, dy = \int_{-1}^{1} \Big( \sum_{k=0}^{n} L_k(x) L_k(y) \Big) L_j(y)\, dy = \sum_{k=0}^{n} L_k(x) \int_{-1}^{1} L_k(y) L_j(y)\, dy = \sum_{k=0}^{n} L_k(x)\, \|L_j\|^2 \delta_{k,j} = \|L_j\|^2 L_j(x).$$

So the eigenvectors of $T_n$ are exactly the polynomials $L_j$ and the corresponding eigenvalues are $\lambda_j = \|L_j\|^2$. Thus $\{L_j,\ 0 \le j \le n\}$ is an orthonormal basis of $H_n$ formed by eigenvectors of $T_n$.

From Mercer's theorem 2.2, the RKHS associated with $K_n$ is given by

$$H_{K_n} = \Big\{ f = \sum_{j=0}^{n} \alpha_j L_j,\ \Big( \frac{\alpha_j}{\sqrt{\lambda_j}} \Big)_{0 \le j \le n} \in \ell^2 \Big\}.$$

Since $(\alpha_j / \sqrt{\lambda_j})_{0 \le j \le n} \in \ell^2$ for every finite sequence $(\alpha_j)$, the space $H_{K_n}$ coincides with $H_n$ given by (3.5). Thus $H_n$ is the RKHS associated with the reproducing kernel $K_n$.

The reproducing property is immediate from Mercer's theorem, but it can also be checked by hand using (3.6):

$$\langle K_n(x, \cdot), f(\cdot) \rangle = \Big\langle \sum_{k=0}^{n} L_k(x) L_k(\cdot), \sum_{j=0}^{n} \alpha_j L_j(\cdot) \Big\rangle = \sum_{k=0}^{n} \alpha_k L_k(x) = f(x).$$

Fourth: The feature map is given as follows. Let $\phi_n$ be defined by

$$\phi_n : [-1,1] \to H_n, \quad t \mapsto \phi_n(t) = \sum_{j=0}^{n} L_j(t)\, L_j(\cdot).$$

So from (3.6), we get

$$\langle \phi_n(s), \phi_n(t) \rangle = \Big\langle \sum_{i=0}^{n} L_i(s) L_i, \sum_{j=0}^{n} L_j(t) L_j \Big\rangle = \sum_{j=0}^{n} L_j(s) L_j(t) = K_n(s,t).$$

Now let us consider the set $X = [-1,1]^d$. Define the high power Legendre polynomial kernel $\mathbf{K}_n : X \times X \to \mathbb{R}$ as follows:

$$\mathbf{K}_n(x,y) = \prod_{i=1}^{d} K_n(x_i, y_i) = \prod_{i=1}^{d} \sum_{k=0}^{n} L_k(x_i) L_k(y_i), \quad x = (x_1, \dots, x_d),\ y = (y_1, \dots, y_d). \tag{3.7}$$

Let us introduce $\mathbf{H}_n = H_n^{\otimes d} = \underbrace{H_n \otimes \cdots \otimes H_n}_{d \text{ times}}$, the $d$-fold tensor product of $H_n$.

It is known that $\mathbf{H}_n$ is a real Hilbert space endowed with the inner product

$$\langle f \,|\, g \rangle_{\mathbf{H}_n} = \prod_{i=1}^{d} \langle f_i, g_i \rangle, \quad f = f_1 \otimes \cdots \otimes f_d,\ g = g_1 \otimes \cdots \otimes g_d \in \mathbf{H}_n, \quad f_i, g_i \in H_n. \tag{3.8}$$

For $(x, y) \in X \times X$, we have

$$\mathbf{K}_n(x,y) = \prod_{i=1}^{d} K_n(x_i, y_i) = \prod_{i=1}^{d} \langle \phi_n(x_i), \phi_n(y_i) \rangle = \Big\langle \bigotimes_{i=1}^{d} \phi_n(x_i) \,\Big|\, \bigotimes_{i=1}^{d} \phi_n(y_i) \Big\rangle_{\mathbf{H}_n}. \tag{3.9}$$

Define now the function $\Phi_n : X \to \mathbf{H}_n$ given by

$$\Phi_n(x) = \bigotimes_{i=1}^{d} \phi_n(x_i), \quad x = (x_1, \dots, x_d). \tag{3.10}$$

By substituting (3.10) in (3.9), we get

$$\mathbf{K}_n(x,y) = \langle \Phi_n(x) \,|\, \Phi_n(y) \rangle_{\mathbf{H}_n}. \tag{3.11}$$

Hence $\mathbf{K}_n$ is a kernel, with feature map $\Phi_n$ and feature space $\mathbf{H}_n$.
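
Continuing the previous sketch, the high power kernel (3.7) is simply a coordinate-wise product of one-dimensional Gram matrices; again, the function name is ours and the code is only illustrative.

```python
def legendre_kernel(X, Y, n=20):
    """High power Legendre kernel (3.7) on [-1, 1]^d: entry (i, j) equals K_n(X[i], Y[j])."""
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    gram = np.ones((X.shape[0], Y.shape[0]))
    for j in range(X.shape[1]):                # product over the d coordinates
        gram *= legendre_kernel_1d(X[:, j], Y[:, j], n)
    return gram
```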

Now, applying Theorem 2.3 to the kernel $K_n$, we deduce the following result.

Corollary 3.1.1. The space $\mathbf{H}_n$ is an RKHS with reproducing kernel $\mathbf{K}_n$.

4. Application of Kernels in Classification Problem

Binary classification means that we are given data points $X = (x_i)_{1 \le i \le L} \subset \mathbb{R}^d$ belonging to two classes, one denoted Class(1) and the other Class(−1). To each point $x_i$ we associate a label $y_i = \pm 1$ indicating to which class $x_i$ belongs. The points $(x_i)_{1 \le i \le L}$ are called training points. The idea of SVM is to find a hyperplane $H_S$ separating the two classes and to construct a decision function $f$ from which a new point $x$ can be classified ($x$ belongs to Class(1) or $x$ belongs to Class(−1)).

4.1. Support Vector Machine: Linearly Separable Classification

When the training samples $X = (x_i)_{1 \le i \le L} \subset \mathbb{R}^d$ are linearly separable (i.e., there exists a hyperplane separating the two classes), we speak of linear classification. The idea of SVM is the following.

Let us consider the set of training points $\{(x_i, y_i)\}_{1 \le i \le L}$, where $x_i = (x_i^{(1)}, \dots, x_i^{(d)}) \in \mathbb{R}^d$, and the corresponding labels $y_i \in \{-1, 1\}$. The first step is to find a hyperplane $H_S$ in the space $\mathbb{R}^d$ separating the two classes. The second step is to construct the decision function $f$.

Support Vectors are the training examples closest to the separating hyperplane $H_S$, and the aim of SVM is to orient this hyperplane so that it is as far as possible from the closest members of both classes. For example, in a 2-dimensional space this means that we can draw a line on a plot of the data separating the two classes. In a higher-dimensional space ($d > 2$), the line is replaced by an affine hyperplane $H_S$, i.e., $\dim H_S = d - 1$ (see Figure 1).

Step 1.

Recall that any affine hyperplane is described by the equation

$$w \cdot x + b = 0,$$

where "$\cdot$" is the dot product in $\mathbb{R}^d$ and $w$ is normal to the hyperplane. So we have to determine the appropriate values of $w$ and $b$ for the hyperplane $H_S$. If we now consider only the points that lie closest to the separating hyperplane, i.e., the Support Vectors (shown circled in the diagram), then the two planes $H_1$ and $H_{-1}$ that these points lie on can be described by:

Figure 1. Separating hyperplane and marginal distance.

$$x_i \cdot w + b = 1 \quad \text{for } H_1, \tag{4.1}$$

$$x_i \cdot w + b = -1 \quad \text{for } H_{-1}. \tag{4.2}$$

Referring to Figure 1, we define $\delta_1$ as the distance from $H_1$ to the hyperplane $H_S$ and $\delta_2$ as the distance from $H_{-1}$ to it. Since the hyperplane is equidistant from $H_1$ and $H_{-1}$, we have $\delta = \delta_1 = \delta_2$. The quantity $\delta$ is known as the SVM's margin. In order to orient the hyperplane to be as far from the Support Vectors as possible, we need to maximize this margin. It is known from [1] that this margin is equal to

$$\delta = \frac{1}{\|w\|}.$$

Now the problem is to find $w$ and $b$ such that the marginal distance $\delta$ is maximal and, for all $i = 1, \dots, L$,

$$w \cdot x_i + b \ge 1 \ \text{ for } y_i = +1 \quad \text{and} \quad w \cdot x_i + b \le -1 \ \text{ for } y_i = -1. \tag{4.3}$$

This is equivalent to the optimization problem under constraint

$$\min \Big( \frac{1}{2} \|w\|^2 \Big) \quad \text{s.t.} \quad y_i (w \cdot x_i + b) - 1 \ge 0, \quad i = 1, \dots, L, \tag{4.4}$$

which is in fact a quadratic programming (QP) optimization problem. Using the Lagrange multipliers $\lambda = (\lambda_1, \dots, \lambda_L)$, where $\lambda_i \ge 0$ for $i = 1, \dots, L$, the problem (4.4) is equivalent to

$$\max_{\lambda} \Big[ \sum_{i=1}^{L} \lambda_i - \frac{1}{2} \lambda H \lambda^T \Big] \quad \text{s.t.} \quad \lambda_i \ge 0, \quad \sum_{i=1}^{L} \lambda_i y_i = 0, \tag{4.5}$$

where

$$H = (H_{i,j})_{1 \le i,j \le L} = (y_i y_j\, x_i \cdot x_j)_{1 \le i,j \le L}.$$

This is a convex quadratic optimization problem, and we run a QP-solver which returns $\lambda$. Then we deduce $w$ and $b$, which are given by [1]

$$w = \sum_{i=1}^{L} \lambda_i y_i x_i; \qquad b = \frac{1}{N_S} \sum_{s \in S} \Big( y_s - \sum_{k \in S} \lambda_k y_k\, x_k \cdot x_s \Big), \tag{4.6}$$

where $S$ is the set of support vectors $x_s$ (i.e., the vectors whose indices $i$ satisfy $\lambda_i > 0$) and $N_S$ is the number of support vectors.

Step 2.

The second step is to create the decision function f which determines to which class a new point belongs. From [1], the decision function is given by

$$f(x) = \mathrm{sgn}(w \cdot x + b) \in \{\pm 1\}. \tag{4.7}$$

Thus for a new data point $x$, $f(x) = 1$ means that $x$ belongs to the first class and $f(x) = -1$ means that $x$ belongs to the second class.

In practice, in order to use an SVM to solve a linearly separable binary classification problem, we need to do the following (a code sketch is given after the list):

1) Create the matrix $H = (H_{i,j}) \in M_L(\mathbb{R})$, where $H_{i,j} = y_i y_j\, x_i \cdot x_j$.

2) Find $\lambda$ so that

$$\sum_{i=1}^{L} \lambda_i - \frac{1}{2} \lambda H \lambda^T$$

is maximized, subject to the constraints $\lambda_i \ge 0$ for all $i$ and $\sum_{i=1}^{L} \lambda_i y_i = 0$.

This can be done using a QP solver.

3) Calculate $w = \sum_{i=1}^{L} \lambda_i y_i x_i$.

4) Determine the set of Support Vectors $S$ by finding the indices such that $\lambda_i > 0$.

5) Calculate $b = \frac{1}{N_S} \sum_{s \in S} \big( y_s - \sum_{k \in S} \lambda_k y_k\, x_k \cdot x_s \big)$.

6) Each new point $x$ is classified by evaluating $f(x) = \mathrm{sgn}(w \cdot x + b)$.
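
As an illustration of steps 1)-6), here is a minimal sketch of the dual problem (4.5) solved with the cvxopt QP solver (assumed to be installed); the function and variable names are ours, and the tolerance used to detect support vectors is an arbitrary choice.

```python
import numpy as np
from cvxopt import matrix, solvers              # assumes the cvxopt package is available

def linear_svm_hard_margin(X, y):
    """Hard-margin SVM via the dual (4.5); X has shape (L, d), y has entries +/-1."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    L = X.shape[0]
    H = (y[:, None] * X) @ (y[:, None] * X).T   # H_ij = y_i y_j x_i . x_j (step 1)
    # cvxopt solves: min 1/2 l^T P l + q^T l  s.t.  G l <= h,  A l = b
    P, q = matrix(H), matrix(-np.ones(L))
    G, h = matrix(-np.eye(L)), matrix(np.zeros(L))      # encodes lambda_i >= 0
    A, b = matrix(y.reshape(1, -1)), matrix(0.0)
    lam = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])   # step 2
    S = lam > 1e-6                                      # support vector indices (step 4)
    w = ((lam * y)[:, None] * X).sum(axis=0)            # step 3
    b_svm = np.mean(y[S] - X[S] @ w)                    # step 5, equivalent to (4.6)
    return w, b_svm, S
```

A new point x is then classified as in step 6) by evaluating np.sign(w @ x + b_svm).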

4.2. SVM for Data That Is Not Fully Linearly Separable

In order to extend the SVM methodology to handle data that is not fully linearly separable, we relax the constraints (4.3) slightly to allow for misclassified points. This is done by introducing positive slack variables $\xi_i \ge 0$, $i = 1, \dots, L$:

$$w \cdot x_i + b \ge 1 - \xi_i \quad \text{for } y_i = +1,$$

$$w \cdot x_i + b \le -1 + \xi_i \quad \text{for } y_i = -1,$$

which can be combined into:

$$y_i (w \cdot x_i + b) - 1 + \xi_i \ge 0, \quad i = 1, \dots, L. \tag{4.8}$$

In this soft margin SVM, data points on the incorrect side of the margin boundary incur a penalty that increases with their distance from it. As we are trying to reduce the number of misclassifications, a sensible way to adapt the previous objective function (4.4) is to solve

$$\min \Big( \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{L} \xi_i \Big) \quad \text{s.t.} \quad y_i (w \cdot x_i + b) - 1 + \xi_i \ge 0, \quad i = 1, \dots, L, \tag{4.9}$$

where the parameter $C$ controls the trade-off between the slack variable penalty and the size of the margin.

Similarly to the previous case, the Lagrange method leads to the following convex quadratic optimization problem:

$$\max_{\lambda} \Big[ \sum_{i=1}^{L} \lambda_i - \frac{1}{2} \lambda H \lambda^T \Big] \quad \text{s.t.} \quad 0 \le \lambda_i \le C, \quad \sum_{i=1}^{L} \lambda_i y_i = 0. \tag{4.10}$$

We run a QP-solver which returns $\lambda$. The values of $w$ and $b$ are calculated in the same way as in (4.6), though in this instance the set of Support Vectors used to calculate $b$ is determined by finding the indices $i$ for which $0 < \lambda_i < C$.

In practice, we need to do the following (only the box constraints change with respect to the hard-margin sketch above; see the note after the list):

1) Create the matrix $H$, where $H_{i,j} = y_i y_j\, x_i \cdot x_j$.

2) Select a suitable value for the parameter $C$, which determines how significantly misclassifications should be treated.

3) Find $\lambda$ such that

$$\sum_{i=1}^{L} \lambda_i - \frac{1}{2} \lambda H \lambda^T \quad \text{is maximized, subject to} \quad 0 \le \lambda_i \le C \ \ \forall i \quad \text{and} \quad \sum_{i=1}^{L} \lambda_i y_i = 0.$$

This is done using a QP-solver.

4) Calculate $w = \sum_{i=1}^{L} \lambda_i y_i x_i$.

5) Determine the set of Support Vectors $S$ by finding the indices such that $0 < \lambda_i < C$.

6) Calculate $b = \frac{1}{N_S} \sum_{s \in S} \big( y_s - \sum_{k \in S} \lambda_k y_k\, x_k \cdot x_s \big)$.

7) Each new point $x$ is classified by evaluating

$$f(x) = \mathrm{sgn}(w \cdot x + b) = \mathrm{sgn}\Big[ \Big( \sum_{i=1}^{L} \lambda_i y_i\, x_i \cdot x \Big) + b \Big].$$
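
Relative to the hard-margin sketch above, the soft margin only changes the box constraints on $\lambda$; a sketch of the modified cvxopt blocks, with an illustrative value of $C$:

```python
# Soft margin: replace lambda_i >= 0 by 0 <= lambda_i <= C, as in (4.10).
C = 1.0                                         # illustrative value of the penalty parameter
G = matrix(np.vstack([-np.eye(L), np.eye(L)]))  # stacks -lambda_i <= 0 and lambda_i <= C
h = matrix(np.hstack([np.zeros(L), C * np.ones(L)]))
# After solving, the support vectors used for b are those with 0 < lambda_i < C.
```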

4.3. Non-Linear SVM

When the data points are not linearly separable, i.e., there is no hyperplane separating the data into two classes, we have to transform the data in order to obtain linearly separable points. This is based on kernel functions. It is worth noting that in the linearly separable case, the decision function requires only the dot products of the data points $x_i$ with each other and of the input vector $x$ with each $x_i$. In fact, when applying the SVM technique to linearly separable data we started by creating the matrix $H$ and the scalar $b$ from the dot products of our input variables:

$$H_{i,j} = y_i y_j\, x_i \cdot x_j; \qquad b = \frac{1}{N_S} \sum_{s \in S} \Big( y_s - \sum_{k \in S} \lambda_k y_k\, x_k \cdot x_s \Big).$$

This is an important observation for the kernel trick. The dot product is replaced by a kernel, which is also a positive definite function. The idea is based on the choice of a kernel function $k$, and the trick is to map the data into a high-dimensional feature space $\mathcal{H}$ via a transformation $\phi$ related to $k$, in such a way that the transformed data are linearly separable. The map $\phi : X \to \mathcal{H}$ is called the feature map. When the separating hyperplane is mapped back into the original space, it describes a surface.

Similarly to the previous section, we apply the same separation procedure at the level of the feature space $\mathcal{H}$. This leads to the following steps.

1) Choose a kernel function k.

2) Create the matrix $H$, where $H_{i,j} = y_i y_j\, k(x_i, x_j)$.

3) Choose how significantly misclassifications should be treated, by selecting a suitable value for the parameter C.

4) Determine $\lambda$ so that

$$\sum_{i=1}^{L} \lambda_i - \frac{1}{2} \lambda H \lambda^T$$

is maximal under the constraints $0 \le \lambda_i \le C$ for all $i$ and $\sum_{i=1}^{L} \lambda_i y_i = 0$.

This can be done using a QP-solver.

5) Determine the set of Support Vectors $S$ by finding the indices such that $0 < \lambda_i < C$.

6) Calculate $b = \frac{1}{N_S} \sum_{s \in S} \big( y_s - \sum_{i \in S} \lambda_i y_i\, k(x_i, x_s) \big)$.

7) Each new point x is classified by evaluating

$$f(x) = \mathrm{sgn}\big( \langle w, \phi(x) \rangle + b \big) = \mathrm{sgn}\Big[ \Big( \sum_{i=1}^{L} \lambda_i y_i\, k(x_i, x) \Big) + b \Big].$$

Note that in general the feature map $\phi$ is unknown, so $w$ is also unknown. But we do not need it: we only need the values of the kernel at the training points (i.e., $k(x_i, x_j)$ for all $i, j$) to compute $b$, and the values $k(x_i, x)$ to evaluate the decision function $f$ at the input vector $x$.
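
In practice one rarely codes the QP by hand. As a hedged sketch, scikit-learn's SVC accepts a callable kernel that returns the Gram matrix, so the high power Legendre kernel from the earlier sketch can be plugged in directly (legendre_kernel and the train/test variable names are ours):

```python
from sklearn.svm import SVC

# X_train and X_test are assumed to be already scaled into [-1, 1]^d (see Section 5.2).
clf = SVC(kernel=lambda A, B: legendre_kernel(A, B, n=20), C=0.01)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
```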

5. Numerical Simulations

The dataset of female patients of Pima Indian heritage, at least twenty-one years old, was taken from the UCI machine learning repository. This dataset is originally owned by the National Institute of Diabetes and Digestive and Kidney Diseases. It contains a total of 768 instances classified into two classes, diabetic and non-diabetic, with eight different risk factors: Pregnancies, Glucose, Blood Pressure, Skin Thickness, Insulin, BMI, Diabetes Pedigree Function and Age. To diagnose diabetes in the Pima Indian population, the performance of all kernels is evaluated on metrics such as accuracy, precision, recall score, F1 score and execution time.

Before giving the experimental results, we recall some characteristics of the confusion matrix (Table 1) and the related evaluation metrics.

5.1. Terminology and Derivations from a Confusion Matrix

Table 1. Confusion Matrix.

where:

- (TP) is the number of True Positive cases (real positive cases and detected positive),

- (TN) is the number of True Negative cases (real negative cases and detected negative),

- (FP) is the number of False Positive cases (real negative cases and detected positive),

- (FN) is the number of False Negative cases (real positive cases and detected negative),

- (PP) is the number of Predicted Positive cases PP = TP + FP ,

- (PN) is the number of Predicted Negative cases PN = FN + TN ,

- (RP) is the number of Real Positive cases RP = TP + FN ,

- (RN) is the number of Real Negative RN = FP + TN ,

- The Total Population is given by

$$T_P = \mathrm{PP} + \mathrm{PN} = \mathrm{RP} + \mathrm{RN}.$$

The following metrics are used in order to compare the performance of the Legendre polynomial kernel with the linear, Rbf and polynomial kernels (a code sketch for computing them is given after the list).

- Precision (or positive predictive value, PPV) measures the classifier's ability to provide correct positive predictions of diabetes:

$$\mathrm{PPV} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}.$$

- Recall score (or true positive rate or sensitivity) is used in our work to find the proportion of actual positive cases of diabetes correctly identified by the classifier used.

$$\mathrm{TPR} = \frac{\mathrm{TP}}{\mathrm{RP}} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}.$$

- Precision and recall score together provide the F1 score, which therefore takes both into account. It is the harmonic mean of precision and sensitivity:

$$F_1 = \frac{2\, \mathrm{PPV} \times \mathrm{TPR}}{\mathrm{PPV} + \mathrm{TPR}}.$$

- Accuracy (ACC) is the ratio of true positives and true negatives to all positive and negative observations. In other words, accuracy tells us how often we can expect our machine learning model to correctly predict an outcome out of the total number of predictions it makes:

$$\mathrm{ACC} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{RP} + \mathrm{RN}} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}}.$$
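
Given predictions y_pred on a held-out test set with labels y_test, these four metrics can be computed with scikit-learn; this is a sketch whose variable names follow the earlier code.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             f1_score, precision_score, recall_score)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()  # entries of Table 1
print("PPV =", precision_score(y_test, y_pred))            # TP / (TP + FP)
print("TPR =", recall_score(y_test, y_pred))                # TP / (TP + FN)
print("F1  =", f1_score(y_test, y_pred))
print("ACC =", accuracy_score(y_test, y_pred))
```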

5.2. Numerical Results

Since the Legendre polynomial kernel is defined on $[-1,1]^d$, a scaling procedure is needed in order to map the data into $[-1,1]$. For this, we suggest the transformation

$$z_i^{(j)} = \frac{2 x_i^{(j)} - (M_j + m_j)}{M_j - m_j}, \quad j = 1, \dots, d,\ i = 1, \dots, L, \quad \text{where } M_j = \max_{1 \le i \le L} x_i^{(j)}, \quad m_j = \min_{1 \le i \le L} x_i^{(j)}.$$
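
A sketch of this column-wise scaling in NumPy (the function name is ours):

```python
def scale_to_cube(X):
    """Map each feature x^(j) to (2 x^(j) - (M_j + m_j)) / (M_j - m_j), i.e. into [-1, 1]."""
    X = np.asarray(X, dtype=float)
    M, m = X.max(axis=0), X.min(axis=0)
    return (2 * X - (M + m)) / (M - m)
```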

The confusion matrices corresponding to each of the kernels mentioned above are given in Figures 2-5, where the test size is 0.05 and the penalty coefficient is $C = 0.01$.

Figure 2. Polynomial Legendre kernel.

Figure 3. Linear kernel.

Figure 4. Rbf kernel.

Figure 5. Polynomial kernel.

In order to demonstrate the separation properties of the Legendre polynomial kernel defined in Section 3, our numerical simulations were carried out on the diabetes detection problem described above. In this example we apply SVM with different kernels to the Pima Indian diabetes dataset in order to compare the performance of the Legendre polynomial kernel with the linear, Rbf and polynomial kernels with respect to accuracy, precision, recall score, F1 score and execution time (see Table 2).

Table 2. Numerical results.

The programs were implemented in Python 3.7 on a Windows 7 computer with 3 GB of memory. The test size is equal to 0.2 and the penalty is taken to be $C = 0.001$.

5.3. Conclusion

From the comparative table above, it is clear that the Legendre polynomial kernel has good separation properties, with good precision and accuracy compared with the classical kernels predefined in Python. Essentially, we have shown that an infinite-dimensional RKHS is not necessary in order to separate the whole dataset by a hyperplane in the feature space. In fact, we can obtain a good classification with the moderate degree $N = 20$, and the separation improves as $N$ increases. The only disadvantage is the time required to fit the model with the Legendre polynomial kernel. The idea used here can be generalized to an arbitrary sequence of orthogonal polynomials in order to obtain new kernel functions. The resulting separation performance depends on the polynomial sequence and should be tested on further datasets.

Acknowledgements

The authors gratefully acknowledge Qassim University, represented by the Deanship of Scientific Research, for the support for this research.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Cortes, C. and Vapnik, V. (1995) Support-Vector Networks. Machine Learning, 20, 273-297.
https://doi.org/10.1007/BF00994018
[2] Chatterjee, R. and Yu, T. (2017) Generalized Coherent States, Reproducing Kernels and Quantum Support Vector Machines. arXiv:1612.03713v2.
[3] Abbassa, N., Abdessamad, A. and Bahri, S.M. (2019) Regularized Jacobi Wavelets Kernel for Support Vector Machines. Statistics, Optimization & Information Computing, 7, 669-685.
https://doi.org/10.19139/soic-2310-5070-634
[4] Jakkula, V. (2010) Tutorial on Support Vector Machine (SVM). School of EECS, Washington State University, Pullman.
[5] Hastie, T., Tibshirani, R. and Friedman, J. (2009) The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, New York.
https://doi.org/10.1007/978-0-387-84858-7
[6] Aronszajn, N. (1950) Theory of Reproducing Kernels. Transactions of the American Mathematical Society, 68, 337-404.
https://doi.org/10.1090/S0002-9947-1950-0051437-7
[7] De Vito, E., Umanità, V. and Villa, S. (2011) An Extension of Mercer Theorem to Vector-Valued Measurable Kernels. arXiv:1110.4017v1.
[8] Rebei, H., Riahi, A. and Chammam, W. (2022) Hankel Determinant of Linear Combination of the Shifted Catalan Numbers. Multilinear Algebra.
