Hyperspectral Image Classification Based on Hierarchical SVM Algorithm for Improving Overall Accuracy
1. Introduction
In order to discriminate between similar species, hyperspectral images (HSI), which contain a large number of spectral bands, are used. The large number of spectral bands in hyperspectral remote sensing images challenges classification algorithms from two perspectives. First, because the spectral bands are narrow and close to one another, information redundancy is significant. Second, the high information volume leads to confusion and degrades the performance of the classification algorithm.
HSI classification is a significant challenge in remote sensing applications. Generally, HSI classification algorithms fall into three categories: supervised, unsupervised, and semi-supervised. Due to the high dimension of the feature space of hyperspectral images, supervised algorithms encounter the Hughes phenomenon. Two approaches have been proposed to solve this problem. First, semi-supervised algorithms [1] avoid the Hughes phenomenon by predicting initial labels for the test pixels. Feature space reduction [2], which includes two different methods, feature extraction [3] and feature selection [4], is the second approach for reducing computational complexity and increasing prediction accuracy. In [5], a Genetic Algorithm (GA) based wrapper method is presented for classification of hyperspectral images using the Support Vector Machine (SVM), a state-of-the-art classifier that has found success in a variety of areas.
A large number of algorithms have been proposed for HSI classification in the last decades. Among these methods, SVMs are the most compatible with the HSI classification optimization problem [6] [7] [8] [9]. In [10], the SVM method is introduced to classify the spectral data directly with a polynomial kernel. In order to improve the classification performance, different kinds of SVM-based algorithms [11] - [19] have been proposed. Semi-supervised learning based on labeled and unlabeled samples, and kernel combination for integrating both spectral and spatial information, are two ways to deal with the Hughes phenomenon and the limitations of the linear SVM algorithm.
The SVM algorithm is particularly attractive in remote sensing (RS) applications. Its main properties can be summarized as follows:
・ The SVM algorithm is designed based on the structural risk minimization principle, which results in high classification accuracy and very good generalization capability. This property is significant in the HSI classification problem, with its high dimensional feature space and few training samples.
・ The data is mapped into a high dimensional feature space by the kernel function in order to solve non-linearly separable classification problems. Thus, the data can be separated with a simple linear function (see the sketch after this list).
・ The optimization problem in the learning process of the classifier is convex and is solved by linearly constrained quadratic programming (QP) characterized by a unique solution. Thus, the system cannot fall into suboptimal solutions associated with local minima.
・ A dual formulation of the convex optimization problem can be derived, in which only the non-zero Lagrange multipliers are needed to define the separating hyper-plane. This is related to the sparseness property of the solution.
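As a minimal illustration of these properties, the following Python sketch (assuming scikit-learn and synthetic data in place of an actual hyperspectral cube) trains an SVM with a Gaussian (RBF) kernel; the fitted model retains only the support vectors, reflecting the sparseness of the dual solution.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

# Illustrative data: X holds one spectral feature vector per pixel, y a class label.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))       # 300 pixels, 50 spectral features
y = rng.integers(0, 3, size=300)     # 3 hypothetical classes

# Scaling keeps the Gaussian kernel from being dominated by any single band.
X = StandardScaler().fit_transform(X)

# The RBF kernel implicitly maps the data into a high dimensional space where
# a linear separator is sought; the underlying QP is convex with a unique solution.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, y)

# Only the pixels with non-zero Lagrange multipliers (support vectors)
# define the separating hyper-plane.
print("support vectors per class:", clf.n_support_)
```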
Considering the high information volume of hyperspectral images, a hierarchical algorithm based on SVM is proposed that processes the data in two stages without processing the redundant information. In the proposed algorithm, clusters containing several similar classes are introduced. The similar classes are determined based on the Euclidean distance between the class centers. Because there are fewer clusters than classes and the clusters are less similar to one another, only a limited number of features is required to assign pixels to clusters. Features are selected based on a correlation criterion between cluster labels and features. The number of clusters and the features used at each stage are determined in the preprocessing block. Then, the SVM algorithm is applied at each stage and the predicted labels are produced. In the following, the proposed classifier and the preprocessing block are first explained. Then, the data set used in the evaluation process is presented. Finally, experimental results are shown to evaluate the hierarchical SVM method.
The proposed method is presented in Section 2. Section 3 presents the simulation results and discusses the parameters that affect classification accuracy. Finally, Section 4 concludes the paper.
2. Material and Method
In this paper, the proposed algorithm is based on the SVM algorithm, which is a supervised machine learning method. In general, the stages of supervised learning are as follows (a brief sketch is given after the list):
1) Prepare the image: the preprocessing block is responsible for preparing the data for the image classification algorithm.
2) Select the algorithm: the algorithm is selected based on factors such as learning speed, memory requirements, prediction accuracy on new data, and the transparency of the relationship between output and input.
3) Fit the model (training)
4) Apply the model to test data (prediction)
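As a hedged sketch of these four stages (using scikit-learn and synthetic data rather than the actual preprocessed HSI pixels), the workflow could look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# 1) Prepare the data (synthetic stand-in for the preprocessed HSI pixels).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))
y = rng.integers(0, 4, size=500)

# 2) Select the algorithm: an SVM with a Gaussian kernel, as used in this paper.
model = SVC(kernel="rbf", gamma="scale")

# 3) Fit the model on the training portion of the data (e.g. 20%).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.2, random_state=0)
model.fit(X_train, y_train)

# 4) Apply the model to the test data (prediction).
y_pred = model.predict(X_test)
print("overall accuracy:", accuracy_score(y_test, y_pred))
```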
In this paper, in order to cope with the effect of the high information volume of the images on classification accuracy, the preprocessing block is designed according to the data structure. The proposed hierarchical classifier is shown in Figure 1. In this design, the data set is analyzed by the preprocessing block so that the required number of classes and features is determined at each stage. Classification accuracy depends on the number of classes, training samples, and features. Assuming a constant number of samples and features, classification accuracy decreases as the number of classes increases. On the other hand, reducing the number of training samples degrades the classification performance. The large number of features in HSI and the correlation among them increase data redundancy and computational complexity, and therefore confuse the classification algorithms, even though high-resolution hyperspectral images are intended to enhance the discrimination of highly similar classes. The proposed algorithm reduces computational complexity by combining similar classes and choosing a limited number of features at the first level. In the next step, the classes within every cluster are separated.
2.1. Preprocessing Block
The preprocessing block determines the new clusters and the features that the classifiers require at both levels. The mean of the pixels in each class is taken as the class center. Each new cluster contains classes whose centers are at minimum Euclidean distance from one another. In this block, the feature selection method is of the filter type. The feature ranking is determined by the correlation criterion between features and labels according to Equation (1):
$R(i) = \dfrac{\operatorname{cov}(X_i, Y)}{\sqrt{\operatorname{var}(X_i)\,\operatorname{var}(Y)}}$ (1)
where X_i and Y are the i-th feature vector and the label vector, respectively. The correlation criterion reveals the linear dependence between features and labels. For class aggregation, the mean of each class is used to represent that class. This block is shown in Figure 2.
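A minimal sketch of this preprocessing block is given below, assuming NumPy/SciPy and illustrative names (X for the pixel-by-feature matrix, y for the class labels). Merging only the single closest pair of class centers and using the absolute Pearson correlation of Equation (1) are this sketch's reading of the description above, not code taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def class_centers(X, y):
    """Mean spectral vector of each class, used as the class center."""
    labels = np.unique(y)
    return labels, np.array([X[y == c].mean(axis=0) for c in labels])

def merge_closest_classes(X, y):
    """Group the two classes whose centers are closest in Euclidean distance
    into one cluster; the remaining classes keep their own cluster."""
    labels, centers = class_centers(X, y)
    d = squareform(pdist(centers))                  # pairwise Euclidean distances
    np.fill_diagonal(d, np.inf)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    cluster_of = {c: k for k, c in enumerate(labels)}
    cluster_of[labels[j]] = cluster_of[labels[i]]   # merge the closest pair
    return cluster_of, np.array([cluster_of[c] for c in y])

def rank_features_by_correlation(X, y):
    """Filter-type ranking: |Pearson correlation| between each feature X_i
    and the label vector Y, as in Equation (1); best features first."""
    scores = np.array([abs(np.corrcoef(X[:, i], y)[0, 1])
                       for i in range(X.shape[1])])
    return np.argsort(scores)[::-1]

# Example usage with synthetic data (400 pixels, 100 features, 16 classes):
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 100))
y = rng.integers(0, 16, size=400)
cluster_of, cluster_labels = merge_closest_classes(X, y)
top_features = rank_features_by_correlation(X, cluster_labels)[:50]
```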
3. Discussion and Simulation Results
As noted earlier, the classification accuracy of HSI remote sensing images depends on the number of classes, features, and training samples, as well as on the kernel function. Overall classification accuracy decreases as the number of classes grows. According to the simulation, the overall classification accuracy reaches 73% when the SVM is applied to the IPS image with 16 classes and 100 features, whereas it approaches 86% when the number of classes and features is reduced to 7 and 50, respectively. Different kernel functions change the classification accuracy by about 20%. Considering the tradeoff between accuracy and complexity, the Gaussian kernel function is an acceptable choice. The limited amount of training data is the most important factor reducing classification accuracy. In order to determine the range of this effect, the simulation was performed assuming 50 features of the IPS data. As shown in Figure 3, when the training data set is increased from 10% to 75%, the overall accuracy increases by as much as 10%.
Figure 1. The proposed hierarchical classifier block.
Figure 3. Classification accuracy of the SVM algorithm versus the size of the training data set.
Another factor affecting HSI classification accuracy is the number of features. It might seem that increasing the number of features should increase the classifier accuracy, but since the amount of training data is limited, the classification accuracy does not improve with an increasing number of features in practice. The SVM algorithm was applied to IPS with different numbers of features. As shown in Figure 4, classification accuracy is not a monotonically increasing function of the number of features. Increasing the number of features beyond 170 not only fails to improve the classification accuracy, but actually decreases it while increasing computational complexity and information redundancy.
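The experiments behind Figure 3 and Figure 4 can be sketched as follows (a hypothetical loop with scikit-learn and synthetic data standing in for the IPS bands): the same Gaussian-kernel SVM is retrained while the training fraction or the number of retained features is varied, and the overall accuracy is recorded.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 200))     # stand-in for IPS pixels and bands
y = rng.integers(0, 16, size=1000)   # 16 hypothetical classes

def overall_accuracy(n_features, train_fraction):
    """Accuracy of a Gaussian-kernel SVM using the first n_features bands
    (in the paper these would be the top-ranked features)."""
    Xs = X[:, :n_features]
    X_tr, X_te, y_tr, y_te = train_test_split(
        Xs, y, train_size=train_fraction, random_state=0, stratify=y)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# Sweep the training fraction (cf. Figure 3) and the feature count (cf. Figure 4).
for frac in (0.10, 0.25, 0.50, 0.75):
    print("train fraction", frac, "accuracy", overall_accuracy(50, frac))
for n in (20, 50, 100, 170, 200):
    print("features", n, "accuracy", overall_accuracy(n, 0.20))
```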
3.1. Data Set
First, we evaluate the performance of the proposed method on the Indian Pines data set. This data set consists of 145 × 145 pixels and 224 spectral bands in the wavelength range 0.4 - 2.5 μm, and contains 16 ground-truth classes corresponding to different plants. Figure 5 illustrates the original image and the ground-truth classes.
Figure 4. Classification accuracy of the SVM algorithm versus the number of features.
3.2. Results
In order to evaluate the proposed hierarchical algorithm, its performance was compared with that of the SVM algorithm on the Indian Pines data. The steps of the proposed algorithm are given in Table 1.
The SVM simulation uses a Gaussian kernel function, 100 features, and a 20% training data set, while the proposed hierarchical algorithm is applied with a Gaussian kernel function, a 20% training set, and 50 features at level 1 and 30 features at level 2. Figure 6 and Figure 7 illustrate the classification results of the SVM algorithm and of the proposed hierarchical algorithm, respectively.
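Putting the pieces together, a minimal two-level sketch of the proposed scheme could look as follows. The helper names cluster_of (a mapping from class label to cluster index) and rank_features (e.g. the correlation-based ranking sketched in Section 2) are hypothetical, and the defaults of 50 features at level 1 and 30 at level 2 follow the setting above; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def train_hierarchical_svm(X_tr, y_tr, cluster_of, rank_features,
                           n_feat1=50, n_feat2=30):
    """Level 1: one SVM separates the clusters of similar classes.
    Level 2: one SVM per cluster separates the classes inside it."""
    c_tr = np.array([cluster_of[c] for c in y_tr])
    feats1 = rank_features(X_tr, c_tr)[:n_feat1]
    level1 = SVC(kernel="rbf", gamma="scale").fit(X_tr[:, feats1], c_tr)

    level2 = {}
    for k in np.unique(c_tr):
        mask = c_tr == k
        classes = np.unique(y_tr[mask])
        if len(classes) == 1:
            level2[k] = (None, classes[0])      # cluster holds a single class
            continue
        feats2 = rank_features(X_tr[mask], y_tr[mask])[:n_feat2]
        clf = SVC(kernel="rbf", gamma="scale").fit(X_tr[mask][:, feats2],
                                                   y_tr[mask])
        level2[k] = (feats2, clf)
    return level1, feats1, level2

def predict_hierarchical_svm(X_te, level1, feats1, level2):
    """Assign each test pixel to a cluster, then to a class within it."""
    clusters = level1.predict(X_te[:, feats1])
    y_pred = np.empty(len(X_te), dtype=int)
    for k, (feats2, clf_or_label) in level2.items():
        mask = clusters == k
        if not mask.any():
            continue
        if feats2 is None:
            y_pred[mask] = clf_or_label          # single-class cluster
        else:
            y_pred[mask] = clf_or_label.predict(X_te[mask][:, feats2])
    return y_pred
```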
Table 2 details the classification accuracy of the two classifiers, the standard SVM and the proposed method, on the Indian Pines data.
4. Conclusion
Because of the correlation between classes and features in hyperspectral images, not all of the features are needed to discriminate the classes. In the proposed algorithm, classification is accomplished in two levels, so that computational complexity is reduced and overall accuracy is increased. Feature selection is based on a filter method whose decision criterion is the correlation between classes and features. Thus, the proposed hierarchical SVM algorithm achieves an acceptable accuracy with a smaller number of features. The simulation results also show an increase of about 7% in the accuracy of the proposed method compared to the standard SVM algorithm.
Figure 5. (a) Original image; (b) Ground-truth classes of the Indian Pines data set.
Table 1. The proposed algorithm steps.
Figure 6. Classification map with the SVM algorithm.
Figure 7. Classification map with the proposed hierarchical SVM algorithm.
Table 2. Classification accuracy on IPS for the hierarchical SVM and SVM.