Face Reconstruction Using Hybrid PCA, LDP and SVM Machine Learning

Abstract

This paper aims to develop a platform that extracts face features quickly, using multiple algorithms, for looking up people in a large database. We present an enhanced technique for human face recognition that takes an image-based approach (the process of using two-dimensional images to create three-dimensional models) towards artificial intelligence, extracting features from face images using Principal Component Analysis, Local Directional Pattern and SVM machine learning. Until now, studies of face recognition have relied on the fusion of PCA (Principal Component Analysis) and LBP (Local Binary Pattern) for feature extraction, where PCA and LBP extract global features of the whole image and features of the mouth area separately. Results show that this method is susceptible to random noise and achieved a recognition rate of 89.64% [1]. Recent studies have also examined the fusion of PCA and LDP (Local Directional Pattern) for feature extraction [2]. First, PCA is adopted to extract global features of facial images; then the LDP operator extracts local texture features of the eye and mouth areas, computed by comparing the relative edge response values of a pixel in different directions. This fusion achieved a recognition rate of 91.61%. The results show that the PCA and LDP method is more effective than the fusion of PCA and LBP: it is more robust to noise and improves the facial recognition rate. However, both methods still suffer from changes in illumination, pose changes, random noise, and aging. In this paper, we propose using a set of trained images to make the facial recognition process faster and more accurate.

Share and Cite:

Shbib, R. , Sabbah, H. , Trabulsi, H. and Al-Timen, N. (2019) Face Reconstruction Using Hybrid PCA, LDP and SVM Machine Learning. Journal of Computer and Communications, 7, 1-19. doi: 10.4236/jcc.2019.711001.

1. Introduction

With the advancement of technology, face recognition is being used in many applications, such as biometric face detection for mobile devices, laptops and computer devices as well as in investigations in the police force. There have been new developments in the field of face detection to better enhance face recognition.

The 21st century is often regarded as an era of technology. Technology today plays a very important role in our lives and is seen as a basis for the growth of an economy; an economy that is poor in technology can hardly grow in today's scenario, because technology makes our work much easier and less time-consuming. The impact of technology can be felt in every possible field; one such field is education.

There is an increasing trend for higher education institutions to be expected to monitor student attendance, on the assumption that better attendance leads to higher retention rates, higher marks, and a more satisfying educational experience. Empirical evidence has shown a significant correlation between students' attendance and their academic performance, and it has been claimed that students with poor attendance records generally show poor retention. Therefore, faculty must maintain proper attendance records.

A manual attendance record system is not efficient and requires more time to arrange records and to calculate the average attendance of each student. Hence there is a need for a system that solves the problems of student record arrangement and average attendance calculation. Facial recognition provides one way to make the student attendance system automatic.

Attendance maintenance is an important task in all institutions to check the performance of students, and every institute has its own way of doing so: some use the old paper- or file-based approach, while some have adopted automatic attendance methods based on biometric techniques. Face recognition is one such biometric technique. It is considered to be one of the most successful applications of image analysis and processing, which is the main reason behind the great attention it has received over the past several years.

The facial recognition process can be divided into two main stages: processing before detection, where face detection and alignment take place (localization and normalization), after which recognition occurs through feature extraction and matching. This system uses the face recognition approach for the automatic attendance of students in the classroom without student intervention. Attendance is recorded using a camera that captures images of students, detects the faces in those images, compares the detected faces with the database and marks the attendance.

In time, most devices will require face recognition for better security access on their platforms and to help identify people based on their facial features. Advances in technology will change several aspects of how humans interact with machines and vice versa. Seven emotions (neutral, anger, disgust, fear, happiness, sadness and surprise) have been used in automatic facial recognition applications, which have three main stages: face detection, feature extraction and classification. Face detection detects the face and identifies its main components; feature extraction then extracts the values of the facial features; and the final classification stage classifies the selected features [3]. The goal is to enhance the recognition process by adding features that make it faster and more precise.

In the automatic facial recognition process, poses are captured and saved in the database to be extracted later for training the face recognition system. In this case, a new face will not be recognized easily at first, but there will be several trained images to which it can be related.

Another recognition process is based on tracking specific points of a face. For example, a face can be selected from a video sequence and a trajectory of its movement created; then, using Gabor filters and Local Binary Patterns (LBP), the facial features are extracted and analyzed, yielding vectors of new features. A Support Vector Machine (SVM) classifier then classifies the spontaneous facial data in order to detect the new features. The recognition rate obtained with this SVM method reached up to 85% [4].

Existing methods, even though proven not to be 100% accurate, involve Principal Component Analysis (PCA) [5]; PCA uses an orthogonal transformation to convert a set of observations of correlated variables into a set of linearly uncorrelated variables. In Local Binary Pattern (LBP), histograms are extracted and combined into a single vector to form an efficient representation of the face and are used to measure similarities between images [6]. These methods are high in accuracy, but the problem is the time needed to identify an image and extract it for recognition. In addition, both are proven to be prone to noise.

The aim of this paper is to resolve the time problem by applying machine learning; this helps the face recognition process follow patterns learned from trained images in a database. When the face recognition process begins, a new image is presented and compared with the existing ones; once the image is matched, it is extracted from the database and displayed to the user. In this context, a new problem may occur where multiple people share similar features, so more than one facial extraction is possible for more than one person. For this problem, a more precise algorithm is used to discriminate similar faces by comparing the relative edge response values of each pixel in different directions using the Local Directional Pattern (LDP).

To reach this aim, we propose using Principal Component Analysis together with the Local Directional Pattern, which improves texture description and stability under the random noise problems of the Local Binary Pattern; in the final phase we use machine learning, which extracts the images faster than the previous methods by applying training algorithms to them. The process is as follows: extract global features, reduce dimensions, create eigenfaces, correlate variables and finally reconstruct the original image from the training sets.

2. Background to Work

2.1. Face Detection

Face detection (Figure 1) is the first and essential step of face recognition. It is a technology capable of identifying or verifying a person from a digital image or a video frame from a video source. Face recognition is a personal identification system that uses the personal characteristics of a person to determine their identity. Facial recognition is mostly used for security purposes, though there is increasing interest in other areas of use [7] [8].

2.2. Feature Extraction

Feature extraction is a very important step in face recognition. The recognition rate of the system depends on the data extracted from the face image. If the features belong to different classes and the distance between these classes is large, then these features are important for recognizing the images [9]. For example, as presented in Figure 2, the extracted features belong to multiple classes such as the eyes, nose and mouth, according to the distance between each feature.

Figure 1. Face detection1.

Figure 2. Feature extraction.

2.3. Classification Using Machine Learning

Machine learning is a data analysis approach that builds automated analytical models. It is related to artificial intelligence and based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. It grew out of pattern recognition and the theory that computers can learn without being explicitly programmed to perform specific tasks [10].

3. Related Works

3.1. Introduction

Several factors can cause an image of the face to change; these can be divided into intrinsic and extrinsic factors [11] [12]. Intrinsic factors concern the natural look of the human face, for example age, facial hair and glasses; these are defined as intrapersonal factors. Other factors, such as gender and ethnicity, are defined as interpersonal factors. Extrinsic factors, on the other hand, concern appearance changes due to interaction with light, for example illumination, pose, resolution and focus [13].

Machine learning can be used to recognize faces. We use this method because of the simplicity of training a system to capture difficult face patterns; it has proven more efficient in face extraction than principal component analysis [14].

3.2. Hybrid PCA and LBP for Facial Expression Feature Extraction

3.2.1. PCA

Principal component analysis is a linear mathematical method used to convert a set of observations of related variables into a set of unrelated variables. The objective of PCA is to find vectors that give better results when dealing with changes of the face images in image space. PCA transforms the original image space into an orthogonal eigenspace with reduced dimensions; an eigenspace is the set of eigenvectors associated with the eigenvalues [15].

• Eigenface Calculation

Step 1: Load the vectors of all the training images saved in the ASCII file and store them in a matrix.

$T = \{ T_1, T_2, T_3, \ldots, T_N \}$ (1)

The above equation indicates a set $T$ comprising all the training images, where $N$ is the number of training images stored in the database.

Step 2: Find the mean of the loaded training images using the equation given below.

$\Psi = \frac{1}{N} \sum_{m=1}^{N} \Gamma_m$ (2)

• $\Psi$ = mean of the training set

• $\Gamma_m$ = vector of the $m$-th training image

• $N$ = number of training images

Step 3: Subtract the calculated mean from each training image vector to obtain the mean-subtracted vector.

$\Phi_m = \Gamma_m - \Psi$ (3)

• $\Phi_m$ = mean-subtracted vector of the $m$-th training image

• $\Gamma_m$ = vector of the $m$-th training image

• $\Psi$ = mean of the training set

Here, $\Phi_m$ indicates the difference between the vector of the $m$-th training image and the mean image.

Step 4: Calculate the covariance matrix of the mean-subtracted training image vectors.

$C = \frac{1}{N} \sum_{m=1}^{N} \Phi_m \Phi_m^{T}$ (4)

Here, $C$ is the covariance matrix of the mean-subtracted image vectors.

Step 5: Calculate eigenvectors and eigenvalues. The eigenvectors and corresponding eigenvalues are computed from the covariance matrix $C$.

$C = A A^{T}, \quad A = [\Phi_1, \Phi_2, \Phi_3, \ldots, \Phi_N]$ (5)

Step 6: Obtaining the appropriate eigenfaces.

In this step, the eigenvectors with the highest eigenvalues are chosen as the best-fitting eigenfaces, because they describe more of the characteristic features of the face. Eigenfaces with low eigenvalues can be omitted, since they match only a small part of the characteristic features of the faces [16].
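To make Steps 1-6 concrete, here is a minimal NumPy sketch, assuming the training images arrive as one flattened row each; the function name and the use of the small N × N surrogate matrix (a standard shortcut when N is far smaller than the pixel count) are our own illustrative choices, not code from the paper.

```python
import numpy as np

def compute_eigenfaces(images, num_components=20):
    """Eigenface calculation following Steps 1-6 (illustrative sketch).

    images: (N, H*W) array, one flattened training image per row.
    Returns the mean face Psi and the top eigenfaces, one per row.
    """
    T = images.astype(np.float64)            # Step 1: training matrix
    psi = T.mean(axis=0)                     # Step 2: mean of the training set
    Phi = T - psi                            # Step 3: mean-subtracted vectors
    # Steps 4-5: eigen-decompose the small N x N surrogate of the covariance
    # matrix; it shares its nonzero eigenvalues with C = (1/N) * A A^T.
    L = (Phi @ Phi.T) / len(T)
    eigvals, eigvecs = np.linalg.eigh(L)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # Step 6: keep the largest ones
    top = eigvecs[:, order[:num_components]]
    eigenfaces = top.T @ Phi                 # map back to image space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return psi, eigenfaces
```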

3.2.2. LBP

Local Binary Pattern is a method for texture classification; it works by labeling the pixels of an image based on a thresholding of the neighboring pixels and encoding the result as a binary number. It is a simple approach to texture analysis used in many applications. The main property of this method is its robustness to monotonic grey-scale changes caused by illumination variations [16].
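As a concrete illustration (our own sketch, not code from the cited work), the basic LBP code of a single 3 × 3 neighborhood can be computed as follows; the clockwise bit ordering is one common convention.

```python
import numpy as np

def lbp_code(patch):
    """Basic LBP code for the centre pixel of a 3x3 grey-level patch.

    Each of the 8 neighbours contributes one bit: 1 if its value is >= the
    centre value, 0 otherwise, read clockwise from the top-left corner.
    """
    patch = np.asarray(patch)
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbors]
    return sum(b << i for i, b in enumerate(bits))   # value in 0..255
```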

3.2.3. PCA and LBP

Recent studies using this method have reported an accuracy rate of 89.64% with an SVM classifier. First, geometry normalization, energy normalization and eight-eye segmentation were used to improve facial image quality before processing. Then PCA and LBP were used to extract global features of the whole face and local features of the mouth area separately [1].

3.3. Fusion of PCA and LDP for Facial Expression Feature Extraction

As discussed in the previous method, PCA and LBP were used to extract global features and local features of the mouth area [1]. However, that approach was affected by random noise and changes of non-monotone illumination. This method instead proposes a fusion of PCA and LDP (Local Directional Pattern) for feature extraction, which is more resistant to noise and to changes of non-monotone illumination, with an increased recognition rate.

Local Directional Pattern (LDP) is an improvement on the LBP method; it is a local feature descriptor for recognizing human faces. LDP assigns an eight-bit binary code to each pixel, obtained by computing the edge response values in all eight directions at each pixel position and generating a code according to their magnitude strengths [17]. Each face is divided into small regions from which the LDP codes are extracted and combined into a single feature that represents the facial image efficiently [18]. The edge response values of a pixel are calculated using the Kirsch masks, which find the maximum edge strength in the eight compass directions shown in Figure 3; the eight edge response values are then obtained (Figure 4).
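The sketch below shows one way to compute LDP codes with the Kirsch masks; the mask-generation helper and the choice of marking the k = 3 strongest responses follow the usual LDP formulation, but the code itself is our illustration and assumes SciPy.

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_masks():
    """The eight 3x3 Kirsch compass masks; each 45-degree rotation is a
    circular shift of the outer ring of the base (north) mask."""
    base = [5, 5, 5, -3, -3, -3, -3, -3]          # outer ring, clockwise
    ring = [(0, 0), (0, 1), (0, 2), (1, 2),
            (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for k in range(8):
        m = np.zeros((3, 3))
        for (r, c), v in zip(ring, np.roll(base, k)):
            m[r, c] = v
        masks.append(m)
    return masks

def ldp_codes(gray, k=3):
    """LDP code image: set a bit for each of the k strongest of the 8
    directional edge responses at every pixel (ties may set extra bits)."""
    responses = np.stack([np.abs(convolve(np.asarray(gray, float), m))
                          for m in kirsch_masks()])
    kth = np.sort(responses, axis=0)[-k]          # k-th largest per pixel
    bits = (responses >= kth).astype(np.uint8)
    weights = (1 << np.arange(8)).reshape(8, 1, 1)
    return (bits * weights).sum(axis=0).astype(np.uint8)
```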

Face detection is a technology used to identify human faces in digital images. First, the human eye is detected by testing all the regions of the grey-level image. Then the algorithm generates possible face regions from features such as the iris, eyebrows and mouth corners. The Viola-Jones algorithm is used for face detection because of its high processing speed and high detection rate. It has four stages: Haar feature selection, integral image creation, AdaBoost training and cascading classifiers.

A Haar feature calculates the difference between the sums of the pixels within the white and black regions of interest, $f_i = \mathrm{Sum}(r_i, \text{white}) - \mathrm{Sum}(r_i, \text{black})$. The selected features match common face properties, such as the eye region being darker than the upper cheeks (Figure 5), or the nose bridge region being brighter than the eyes (Figure 6).

Figure 3. Kirsch masks.

Figure 4. Returned values.

Figure 5. Eye region.

Figure 6. Nose region.

After locating the eye and mouth areas as shown in Figure 5 and Figure 6, and performing a position check, we can confirm the position of the face [19]. The centers of these regions are then passed to the classification stage, which checks the face against the dataset. The integral image defines how the image is represented: it evaluates rectangular features in constant time, with each rectangular area adjacent to at least one other rectangle. The AdaBoost training algorithm is a learning algorithm used to select the best features for training the SVM classifiers; it constructs a strong classifier from weak classifiers. Finally, the trained classifier is applied to a region of an image and detects the object in question. Training requires two sample sets, positive and negative, where the negative set contains non-object images.
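For reference, this is roughly how a pre-trained Viola-Jones cascade can be run with OpenCV; the image file name is a placeholder and this is only a sketch of the technique, not the paper's own code.

```python
import cv2

# OpenCV ships pre-trained Haar cascades with its data package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("student.jpg")                  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # the detector works on grey levels

# detectMultiScale slides the trained cascade over the integral image at
# multiple scales and returns an (x, y, w, h) box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```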

Feature extraction occurs after the face has been detected by the Viola-Jones algorithm. The face obtained from that algorithm is then used for feature extraction. Selecting the features that are unique in every face is very important, because they will be used to store discriminative information in feature vectors called histograms of oriented gradients (HOG). HOG features can be extracted in these steps: calculate the gradient of the image, calculate the histogram of gradients, normalize the histograms and finally form the HOG feature vector.

To calculate the HOG, the image is broken into a grid of cells of 8 × 8 pixels each. For each cell, the histogram has 9 channels, each associated with a range of directions from 0 to 180 degrees [20]. The pixels in the corresponding cells vote for a channel according to the direction and magnitude of the gradient (Figure 7).

Figure 7. HOG of the eye.
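A bare-bones version of the per-cell histogram computation (our sketch; it omits the block normalization step that the full HOG pipeline adds) might look like this:

```python
import numpy as np

def hog_cell_histograms(gray, cell=8, bins=9):
    """Per-cell gradient histograms: 9 orientation bins over 0-180 degrees."""
    g = np.asarray(gray, dtype=np.float64)
    gx = np.zeros_like(g); gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]             # horizontal gradient
    gy[1:-1, :] = g[2:, :] - g[:-2, :]             # vertical gradient
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation

    h, w = g.shape
    hist = np.zeros((h // cell, w // cell, bins))
    bin_width = 180.0 / bins
    for i in range(h // cell):
        for j in range(w // cell):
            a = angle[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            m = magnitude[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = np.minimum((a // bin_width).astype(int), bins - 1)
            np.add.at(hist[i, j], idx.ravel(), m.ravel())  # magnitude-weighted votes
    return hist
```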

4. Proposed Work

In order to implement this method with a high recognition rate and good performance speed, we propose the use of machine learning. This method uses trained images, following a set of algorithms to extract face images and compare them with the existing ones. By using trained images alongside the two feature detection and extraction methods, we can predict how facial features change over time based on the eigenvalues calculated during feature extraction. This allows us to overcome the aging and non-monotone illumination variation problems in face images. Using machine learning, each image is preprocessed with a set of algorithms, saved in the database, and then used for comparison with the proposed input image; this reduces processing time and provides more accurate face recognition results. The method aims to provide a supervised self-learning algorithm with effective face recognition and face extraction results. The technique is a hybrid of PCA, LDP and machine learning, in which we detect the face, extract the features, and use machine learning to train the images and classify them into categories. The proposed work is shown below.

• The first step is locating the faces in an image; we start by converting the image to grayscale to simplify the extraction, since colors are not needed.

• After converting the image to grayscale, every pixel in the image is examined one at a time [21]. For every pixel, the surrounding pixels are considered as shown in Figure 9; the purpose is to see how dark the current pixel is compared to its neighbors. An arrow is then drawn in the direction in which the image becomes darker, as shown in Figure 10. After repeating this step for every pixel in the face, all pixels are replaced by arrows, known as gradients, which show the flow from light to dark across the image (Figure 11).

The reason the pixels are replaced by gradients is that, if we analyzed the pixel values directly, very light and very dark images of the same person would have completely different values, whereas using only the direction of change gives the same representation for dark and light images.

Figure 8. Flowchart of the proposed work.

Figure 9. Surrounding pixels.

Figure 10. Darkness direction.

Figure 11. (a) Original image; (b) Light flow.

However, it is better to capture the general flow of light rather than the direction at every pixel, so that we obtain the basic pattern of the image. This is accomplished by splitting the image into squares of 16 × 16 pixels each. The gradient directions in each square are counted for all major directions, and the square is then replaced with the arrow direction that is strongest. This provides a simple representation of the image that retains the basic structure of the face (Figure 12).

The image is converted into a HOG representation to capture its main features regardless of brightness. To find all the faces in an image, we look for the part of the image that most resembles the HOG patterns extracted from previously trained faces.

In the case of pose changes, that is, when the face is rotated in different directions, it may not look the same to a computer. For this issue, an algorithm called face landmark estimation can be applied. It works as follows: it locates 68 specific points, called landmarks, that are found on all faces (Figure 13), then uses machine-learning algorithms trained to find these landmarks.

After locating the eyes and mouth, the image is scaled and rotated in order to center the eyes and mouth (Figure 14).
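One common way to implement landmark estimation and alignment is with dlib's 68-point shape predictor; the sketch below assumes the dlib package and its separately downloaded model file, since the paper does not name its tooling.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Conventional file name for dlib's pre-trained 68-landmark model,
# downloaded separately (an assumption, not part of the paper).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

gray = cv2.cvtColor(cv2.imread("student.jpg"), cv2.COLOR_BGR2GRAY)
for rect in detector(gray):
    shape = predictor(gray, rect)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    # Landmarks 36-41 and 42-47 outline the eyes; their centres give the
    # angle and scale needed to rotate the face so the eyes are level.
    print(points[36], points[45])
```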

• SVM Machine Learning

After the features are extracted, we extract 128 measurements from each face, so that an unknown face can be measured in the same way and matched to the known face with the closest measurements [22]. In this training process, first an image of a known person is introduced, then another image of the same person is added, and finally an image of a different person is added for comparison. The algorithm checks the measurements of the first and second images and confirms that they are similar, while confirming that the measurements of the second and third images are different, as shown in Figure 15.

This process requires a lot of data and computing power; it takes approximately 24 hours of training to get accurate results. However, once the system is trained, it can generate measurements for any face, even new ones.

The final step is finding the person in the database of existing persons whose measurements are closest to the unknown face that was introduced. This can be accomplished with a simple classification algorithm known as the SVM classifier. The classifier must be trained so that it can take the measurements from the new image and identify the closest match, as sketched below.
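With scikit-learn, such a classifier could be trained on the 128 measurements per face as follows; the file names, the linear kernel and the 0.6 probability cut-off are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: one 128-dimensional measurement vector per
# face image, labelled with the person's name.
X_train = np.load("embeddings.npy")   # shape (n_images, 128), assumed file
y_train = np.load("labels.npy")       # shape (n_images,), assumed file

clf = SVC(kernel="linear", probability=True)
clf.fit(X_train, y_train)

def identify(measurements, cutoff=0.6):
    """Return the closest known identity, or 'unknown' below the cut-off."""
    probs = clf.predict_proba([measurements])[0]
    best = int(np.argmax(probs))
    return clf.classes_[best] if probs[best] >= cutoff else "unknown"
```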

Figure 12. HOG representation.

Figure 13. 68 landmarks.

Figure 14. Scale and rotate.

Figure 15. Compare 128 measurements.

The flowchart presented above (Figure 8) shows the steps used to perform our hybrid face recognition method. First, the image is captured by the camera or loaded into the system manually; the system then tries to locate the face outline and perform extraction of the global features. Global feature extraction includes detecting the eye and mouth areas; if the extraction is not successful, the system reloads the image and attempts the extraction again.

If the eye and mouth areas are detected, the system applies the PCA algorithm to reduce the face dimensions and then calculates the eigenvalues and eigenvectors using the equations given earlier. Once the calculations are complete, an eigenface is created and the system applies LDP for texture extraction. LDP computes the edge response values in 8 directions, the face is divided into small regions, and the LDP codes are extracted and combined into a single feature.

After the face features are combined, the system checks whether the image is being trained or used for comparison. If the image is new and being trained, it is added to the database for future use; if it is being used for comparison, the system compares it with the existing trained images to see whether it is recognized.

The comparison is performed using a pattern recognition method known as supervised learning. As mentioned in part two, the SVM uses the equation $f_i = \mathrm{Sum}(r_i, \text{white}) - \mathrm{Sum}(r_i, \text{black})$ to perform an accuracy check on the training set. If there is a match, the system computes the difference between the two images; if the difference is less than the threshold, the face is recognized, otherwise it is not.
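In terms of the 128 measurements, the recognition decision described above reduces to a distance test; the Euclidean metric and the 0.6 threshold below are our assumptions, not values from the paper.

```python
import numpy as np

def is_match(known, candidate, threshold=0.6):
    """Accept the face when the difference between the two measurement
    vectors falls below the threshold (illustrative metric and value)."""
    difference = np.linalg.norm(np.asarray(known) - np.asarray(candidate))
    return difference < threshold
```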

We defined two parameters to measure the success of our proposed algorithm. The first is the Error Rate (ER), defined as the number of false detections in the image divided by the total number of detections (face and non-face). The second is the Face Detection Success Rate (FDSR), defined as the number of faces detected divided by the total number of faces.

We will use these equations to calculate the success rate of our proposed method, and we will observe the following (Table 1):

$\text{Error Rate (ER)} = \dfrac{\text{Number of false detections}}{\text{Total number of detections}} \times 100\%$ (6)

$\text{Face Detection Success Rate (FDSR)} = \dfrac{\text{Number of faces detected}}{\text{Total number of faces}} \times 100\%$ (7)

$\mathrm{ER} = \dfrac{1}{13} \times 100\% = 7.69\%$

$\mathrm{FDSR} = \dfrac{12}{13} \times 100\% = 92.31\%$
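As a quick check, both rates follow directly from the reported counts in equations (6) and (7):

```python
# Worked check of equations (6) and (7) for the reported 13-image test.
false_detections, total_detections = 1, 13
faces_detected, total_faces = 12, 13

error_rate = false_detections / total_detections * 100    # 7.69 %
success_rate = faces_detected / total_faces * 100         # 92.31 %
print(f"ER = {error_rate:.2f}%, FDSR = {success_rate:.2f}%")
```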

Table 1. Rate of the proposed methods.

Our proposed method shows an accuracy rate of 92.31% on a set of trained faces. After running the "test your model" function in our program, we obtained 12 accurate detections out of a total of 13 images. We assume the remaining face was not detected because of the difficulty of detecting the eye region: the person is wearing sunglasses that cover his eyes, so when the system came across this image the eye features could not be extracted and, as a result, the face was not recognized.

5. Experimental Results

In planning the design of the evaluation, we considered using the SVM machine learning algorithm along with a set of trained images, and used new images to compare results. We performed this experiment using two different modes in the same program, one to train the images and one to search for an image, to help speed up the process. Images were prepared using a set of cropping tools that reduce their size for faster results. In the end, the program uses the trained images to compare against the new input image to see whether it exists in any of the directories.

After implementing the hybrid algorithm composed of PCA, LDP and SVM machine learning in Python and running it in the Spyder program, which is embedded in the Anaconda environment, we obtained very accurate results in a short time. First, we trained a directory of a specific person into a test directory; then we retrieved this person's image by performing the test option, which identifies this person in the test directory among a set of other images.

Training and Results

The first step in implementing this system is to train a person directory into a test directory. The system accesses this test directory and performs the face recognition process to identify the person we are looking for. The training process is performed as shown in Figure 16 below.

Figure 16. Training phase.

In this step, we press "open person dir" to choose the directory of the person we want to train. Then we press "open another dir" to choose the test directory where we want the person's images to be trained.

The results are computed in the program and are not visible to the user. After clicking "Train model", the images in the person directory are detected by the HOG algorithm and added to the "testimage" directory. This process is repeated for all the images in the person's directory. After the process is complete, all the trained images are included in the test directory and we can use it to perform a face recognition search (Figure 17).

Here we perform a test to see whether the system captures the images of the person we are looking for. We first enter the name of the person in the input box under "Enter person name", then choose the directory the system should search by pressing "open test dir". First, let us test the system on the person directory itself to see whether it retrieves all of that person's images. The results are shown in Figure 18.

Figure 17. Testing phase.

Figure 18. Testing results.

As we can see from the obtained results, all the images related to "tom" are retrieved from the "tom" directory. We can conclude that the system recognized all of tom's images in the "tom" folder.

In our next test, we try to retrieve "tom" from a test directory that we previously trained. We click "open test dir" and choose the folder named "testimage" to perform the test (Figure 19).

As we can see from the results, upon entering the person's name "tom", the system identified only the images of "tom" among the other images in the "testimage" folder. The results obtained are shown in the accompanying screen captures.

The experiments show that the system has a high accuracy rate in face recognition: the images obtained are those of the person we were looking for. When compared with the other images in the "testimage" directory, only the person himself is recognized, while the others are labeled "unknown" because they were not recognized by the system.

This experiment shows that the proposed system works more efficiently than other systems with different algorithms, thanks to its performance speed and high accuracy rate, in addition to its simplicity on the user's side while running the face recognition program. Obtaining the results took no more than 10 seconds for all the images to be recognized from a directory of 13 images. Finally, the system used only 29% of the memory space, which indicates that it works at a very high speed compared to others.

Systems of this kind identify and recognize faces and are judged on the basis of accuracy and computational time; some have disadvantages in terms of detection rate, accuracy or timing. The greatest optimal detection percentage can be obtained through our algorithm. The success of the implementation depends on the pre-processing stage applied to the images, because of illumination, and on feature extraction.

Figure 19. Testing results.

6. Conclusions and Future Work

In this paper, we presented the problem: the need for an accurate and fast face recognition technique using the fusion of principal component analysis and the local directional pattern. We then discussed the current techniques, which use only machine learning, and highlighted their drawbacks and limitations. Following that, we proposed a new technique that combines these three methods while adding features to address the problems of previous methods, and provided a program for our evaluation along with the results obtained.

Several contributions have been made:

• Our first contribution was a major reduction in the face recognition time for images that were not trained by the system, replacing them with trained images created by SVM machine learning: a set of images is trained, stored in a locally trained directory, and used for comparison with the input image, reducing the processing time.

• The other contribution was handling the image variations, including pose changes, aging and illumination changes. We used PCA to handle illumination variations, by drawing an arrow to determine the direction in which the image becomes darker, and we used LDP to address the aging problem, by dividing the face into small regions and using the Kirsch masks to find the edges and form a feature.

• Attendance management is significant for all organizations, such as educational institutions. It can manage and control the success of an organization by keeping track of people within it, such as students, to maximize their performance. The proposed system offers a process for monitoring student attendance; it aims to help the teacher in classrooms or laboratories manage and record students' presence electronically and directly, without the need for paper lists, saving time and effort. The system can analyze the data and display statistics about students' absences, print reports about absence percentages, and issue students' warnings for a specified period.

While working on our proposed method and presenting new techniques to solve the problem we are addressing, we came across some issues that we would like to tackle in the future. Here are some of these issues.

Since our system is based on image training, all images are trained and included in the image directories. As a result, when retrieving an image of a certain individual, the system must go through all these images to find the intended one. This process takes time and therefore causes a delay in retrieving the image of the person we are looking for. In the future, we plan to create a solution, different from the conventional ones, that avoids this sort of delay caused by image load while maintaining the accuracy of the images retrieved.

• Memory Space

After training all the images, we will have many directories of person images and test images, and each image has a certain size; large images will cause a memory overload. If we can reduce the image size upon training, it will reduce the directory memory footprint, which will reduce the overall memory of all directories and make the image retrieval process faster.

• Image Resolution

Some of the images included in a person's directory might have a low resolution, which makes them hard to recognize when compared with a given input picture. In the future, we can include a feature that automatically enhances the image before performing the recognition process.

NOTES

1Face detection—An overview and comparison of different solutions.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Yuan, L.U.O., Wu, C.-M. and Zhang, Y. (2013) Facial Expression Feature Extraction Using Hybrid PCA and LBP. The Journal of China Universities of Posts and Telecommunications, 20, 120-124.
[2] Yuan, L., Zhang, T. and Zhang, Y. (2016) A Novel Fusion Method of PCA and LDP for Facial Expression Feature Extraction. Optik-International Journal for Light and Electron Optics, 127, 718-721.
https://doi.org/10.1016/j.ijleo.2015.10.147
[3] Asad, U., Kashyap, N. and Narayan Singh, S. (2017) Recent Advancements in Facial Expression Recognition Systems: A Survey. 2017 International Conference on Computing, Communication and Automation, Greater Noida, India, 5-6 May 2017, 1203-1208.
https://doi.org/10.1109/CCAA.2017.8229981
[4] Piatkowska, E. and Martyna, J. (2012) Spontaneous Facial Expression Recognition: Automatic Aggression Detection. Institute of Applied Computer Science Jagiellonian University, Jagiellonian University, Cracow, Poland.
[5] Zhang, Z.H. and Castelló, A. (2017) Principal Components Analysis in Clinical Studies. Annals of Translational Medicine, 5, 351.
https://doi.org/10.21037/atm.2017.07.12
[6] Rahim, A., Tanzillah Wahid, N.H. and Azam, S. (2013) Face Recognition Using Local Binary Patterns (LBP). Global Journal of Computer Science and Technology Graphics & Vision, 13, 1-9.
[7] Vineetha Sai, M., Varalakshmi, G., Bala Kumar, G. and Prasad, J. (2013-2017) Face Recognition System with Face Detection. Jawaharlal Nehru Technological University Kakinada, Kakinada.
[8] Techopedia (2018) What Is Facial Recognition?
https://www.techopedia.com/definition/32071/facial-recognitionn
[9] Abiyev, R.H. (2014) Facial Feature Extraction Techniques for Face Recognition. Journal of Computer Science, 10, 2360-2365.
https://doi.org/10.3844/jcssp.2014.2360.2365
[10] Bishop, C.M. (2006) Pattern Recognition and Machine Learning. Springer, Amsterdam.
[11] Riddhi, P. and Yagnik, S.B. (2013) A Literature Survey on Face Recognition Techniques. International Journal of Computer Trends and Technology, 5, 189-194.
[12] Yang, M.-H., Kriegman, D.J. and Ahuja, N. (2002) Detecting Faces in Images: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 34-58.
https://doi.org/10.1109/34.982883
[13] Dandpat, S.K. and Meher, S. (2013) Performance Improvement for Face Recognition Using PCA and Two-Dimensional PCA. 2013 International Conference on Computer Communication and Informatics, Coimbatore, India, 4-6 January 2013, 1-5.
https://doi.org/10.1109/ICCCI.2013.6466291
[14] Thuseethan, S. and Kuhanesan, S. (2016) Eigenface Based Recognition of Emotion Variant Faces.
https://doi.org/10.2139/ssrn.2752808
[15] Pietikäinen, M. (2010) Local Binary Patterns. Department of Electrical and Information Engineering, University of Oulu, Finland.
[16] Jabid, T., Kabir, M.H. and Chae, O. (2010) Local Directional Pattern (LDP) for face recognition. 2010 Digest of Technical Papers International Conference on Consumer Electronics, Las Vegas, NV, 9-13 January 2010, 329-330.
https://doi.org/10.1109/ICCE.2010.5418801
[17] Shah, P.M. (2012) Face Detection from Images Using Support Vector Machine.
[18] Arun, A. and Barrina, P.N. (2004) Face Recognition Using Machine Learning.
[19] Anissa, B., et al. (2011) Face Detection and Recognition Using Back Propagation Neural Network and Fourier Gabor Filters. Signal & Image Processing, 2, 15-21.
https://doi.org/10.5121/sipij.2011.2302
[20] Russell, S.J. and Norvig, P. (2010) Artificial Intelligence: A Modern Approach.
[21] Mehryar, M., Afshin, R. and Ameet, T. (2012) Foundations of Machine Learning.
[22] Ethem, A. (2010) Introduction to Machine Learning. MIT Press, Cambridge, MA.
