Research on Key Technologies of Hand Function Rehabilitation Training Evaluation System Based on Leap Motion

Abstract

This paper proposes an immersive training system that allows patients with hand dysfunction to perform rehabilitation training independently. The system uses the Leap Motion binocular vision sensor to collect human hand information and applies an improved PCA (Principal Component Analysis) to fuse the real-time data collected by the sensor, so that more hand information is captured with fewer principal components and the stability and accuracy of the data are improved. An improved combination of SVM (Support Vector Machine) and KNN (K-Nearest Neighbor) is then proposed for gesture recognition and classification, enabling patients to perform rehabilitation training more effectively. Finally, the rehabilitation effect is evaluated with the idea of AHP (Analytic Hierarchy Process), and the evaluation results serve as a necessary reference for doctors in follow-up treatment. Experimental results show that the system achieves the expected results and has good application prospects.

Cite as:

Xiao, Z., Zhao, Y., Li, N., Zhou, S. and Xu, H. (2021) Research on Key Technologies of Hand Function Rehabilitation Training Evaluation System Based on Leap Motion. Journal of Computer and Communications, 9, 19-35. doi: 10.4236/jcc.2021.91003.

1. Introduction

According to the 2019 report on stroke prevention and treatment in China, the number of stroke patients over the age of 40 has reached 12.42 million, and the number of patients continues to grow at a rate of 12% per year, placing a heavy burden on patients’ families and on society. Worldwide, there are about 17 million new stroke patients each year (roughly the population of Beijing), about 6.5 million deaths from stroke each year, and about 26 million stroke survivors (roughly the population of a European country), and roughly one in six people will be affected by stroke. Stroke is an acute cerebrovascular disease with high morbidity, high mortality and high disability, and it is the leading cause of disability in Chinese adults. Finger weakness or impaired movement after stroke hinders a patient’s normal rehabilitation progress, and stroke rehabilitation has become a major problem for stroke patients [1]. Compared with large and expensive mechanical equipment, using modern human-computer interaction technology to support patient rehabilitation is simpler and affordable for most patients. Against this background, this paper studies the key technologies of a hand function rehabilitation evaluation system based on a new somatosensory device.

In recent years, with the advance of computing technology worldwide, human-computer interaction based on virtual reality technology has become a research focus, and in-depth study of the new somatosensory device Leap Motion has attracted many researchers. In 2019, Z. W. Zhu [2] used Kinect and introduced the Bhattacharyya distance into a Bayesian perceptual Hidden Markov Model to develop a depth-image-based gesture recognition system and verified its superiority; overall, however, Kinect gesture recognition is far less accurate and reliable than Leap Motion. In 2019, P. Sun [3] and others used a combination of principal component analysis and support vector machine to classify and recognize static gesture images. The results show that the algorithm has a certain application value, but the system only performs simple identification and classification of gesture images, the accuracy needs to be improved, and no specific application scenario is mentioned. In 2017, C. X. Tang [4] used an effective combination of multiple sensors to reduce the drop in gesture recognition rate caused by occlusion and other factors, thereby improving the recognition rate; however, there is still no convincing experimental proof of the joint effect of multiple Leap Motion devices. In 2016, Z. H. Liu [5] of Donghua University and others used Leap Motion and a PC to build a low-cost upper limb rehabilitation and evaluation system for stroke. Patients completed training tasks under the guidance of virtual games and achieved a certain degree of rehabilitation, but the rehabilitation evaluation only used the examples from Leap Motion’s official website and did not report detailed evaluation scores for real-time rehabilitation with real patients, so it is highly subjective. In 2015, J. T. Hu [6] improved the static and dynamic gesture recognition algorithms of Leap Motion and applied them to some simple daily activities; however, the types of gestures that can be recognized are too simple, and the accuracy is still slightly insufficient.

To sum up, there is currently no system that provides efficient rehabilitation training for patients with hand dysfunction while recording the training results and feeding them back to the doctor in real time. This paper therefore studies the key technologies of a hand function rehabilitation training system based on Leap Motion. The real-time training data collected by Leap Motion are fused with the optimized PCA and effectively identified and classified with the optimized SVM. This not only alleviates the low recognition rate of Leap Motion used alone, but also overcomes the loss of gesture information caused by using a single algorithm, and improves the gesture recognition rate. The effect of rehabilitation training is then evaluated with the idea of AHP, so that doctors can follow the patient’s rehabilitation information at any time and the patient’s hand function training can be made more effective.

2. Design and Implementation of Hand Rehabilitation Training System

Leap Motion is a new type of somatosensory device [7] based on the principle of infrared binocular vision; it uses infrared LEDs and cameras to recognize and track human hand movements in a way different from other motion control technologies. The two built-in cameras capture information in an inverted-pyramid-shaped volume roughly 25 - 600 mm above the device, as shown in Figure 1. Leap Motion uses triangulation to locate the hand in three dimensions. The basic unit of its collection is the frame, with an average capture accuracy of 0.7 mm, and it records and tracks hand movement data at a rate of up to 200 frames per second. Each frame of data contains the position information of the key parts of the hand, including palm movement speed, palm normal vector, finger orientation and so on. This accuracy is much higher than that of Microsoft’s Kinect, and the acquisition efficiency is also higher.

Leap Motion transmits the captured static gesture positions, vector information, and dynamic gesture movement information to the computer through the USB interface for subsequent processing, gesture extraction and recognition [8]. The specific gesture recognition process is shown in Figure 2.

Figure 1. Leap Motion mapping range map.

Figure 2. Leap Motion gesture recognition flowchart.

Among these, the most important steps in Leap Motion gesture recognition are gesture segmentation, gesture analysis and tracking, and gesture recognition. Gesture segmentation separates the required gestures from the surrounding environment so that they can be recognized better; gesture analysis and tracking obtains the feature information and motion characteristics of the gestures, thereby ensuring the robustness of subsequent algorithms; gesture recognition accurately classifies the various types of gestures and is the most critical step in making the recognized gesture types useful in gesture-based applications.

The system is a hand function rehabilitation training system based on Leap Motion. Patients with hand dysfunction can perform self-rehabilitation training through the PC-side system software, and the attending doctor can view the evaluations obtained from patient training through the rehabilitation management system for real-time post-rehabilitation tracking. First, a Leap Motion collects the patient’s gesture tracking data in real time; the data processing center on the PC reads and optimizes the gesture data and matches it against the relevant models in the database along the optimal path, so that the gesture data is recognized more accurately and more quickly; the recognized gestures then drive the hand model in the Unity3D scene to perform a series of rehabilitation training tasks. The overall implementation of the gesture rehabilitation training system is shown in Figure 3.

The hand function rehabilitation training system is divided into several modules. The main part is the data processing center on the PC, followed by the training game, gesture input, and system evaluation modules. The specific function of each module is shown in Figure 4.

Training game module: In order to encourage patients to actively perform hand function rehabilitation training, the authors adopt a 3D virtual game developed in the Unity 3D environment. The hand model is made with 3ds Max, which makes it more realistic, so that patients can feel the fun of rehabilitation training; the game interface is shown in Figure 5. Under the guidance of medical staff, patients can choose the difficulty of the game according to their condition; for different difficulty levels, the required degree of finger bending and the trigger effects to be achieved in the game are also different.

Figure 3. Overall system framework.

Figure 4. Internal structure of hand function rehabilitation system.

Figure 5. System game interface.

Such immersive rehabilitation training brings physical and mental enjoyment to patients.

Gesture entry module: After the patient selects the difficulty of the game, a demonstration of the standard movements is shown, including the angle the finger needs to reach and how long the movement must be held, so the patient knows what to do in advance. After the action is entered and processed by the improved AM-PCA algorithm and the SVM-NN optimization, the model in the game is driven and the human-computer interaction experience begins.

System evaluation module: The patient’s specific gesture feature values (finger bending angle, action duration, angular velocity, and spatial position) are recorded in the database, and the final score is calculated by a weighted average and compared with previous results to obtain the score for this session, which can guide follow-up treatment. One month is a course of treatment, with re-evaluation every 3 - 5 days, and the evaluation results of the month are visualized. By assessing the fluctuation of the results, the patient can also continue for multiple courses and compare the fluctuations between the evaluation results of each course, which is very helpful for subsequent rehabilitation training.

3. Improvement of Data Fusion Method Based on Leap Motion

According to our experiments, the recognition capability of the Leap Motion sensor is limited. Although human hand motion information can be obtained, missing key information and mismatched key parts are inevitable due to mutual occlusion between fingers, as shown in Figure 6.

In order to improve the stability and accuracy of the posture data, this paper uses an improved version of the traditional principal component analysis data fusion technique to fuse the real-time data collected by the sensor. Based on the characteristics of Leap Motion, reference [4] adopted a multi-sensor data fusion algorithm based on traditional principal component analysis (PCA). Although it obtained reasonable data, the traditional principal component analysis method standardizes the original data, and while this eliminates the influence of dimensions and orders of magnitude, it also eliminates information about the differences in the degree of variation of each indicator. The original data contains two types of necessary information: one is the difference in the degree of variation of each indicator itself, and the other is the information about the interactions between indicators, so the standardized data cannot reflect all the information contained in the original data. Here, the mean value method (AM) [9] is added on top of PCA to make the original data dimensionless, abbreviated AM-PCA. Compared with traditional PCA, this fusion algorithm can capture more of the original information with fewer principal components, greatly reducing unnecessary workload and improving efficiency.

Figure 6. Missing information capture by Leap Motion due to finger occlusion.

1) The system sets the original gesture data collected by the Leap Motion sensor as $X = (x_1, x_2, \ldots, x_n)$ and treats it as a whole; the detection value of the sensor is $x_i$, and the averaged (mean-normalized) result is:

$Q_{ij} = \dfrac{x_{ij}}{\bar{x}_j}$ (1)

And the average value of each indicator is:

$\bar{x}_j = \dfrac{1}{n}\sum_{i=1}^{n} x_{ij}, \quad (j = 1, 2, \ldots, m)$ (2)

2) Calculating the covariance matrix of the m indicators after averaging

$U = (u_{ij})_{m \times m}$, where $u_{ij} = \dfrac{S_{ij}}{\bar{x}_i \times \bar{x}_j}$ and $S_{ij}$ is the covariance between the i-th and j-th original indicators.

3) Calculating the eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m \geq 0$ of the covariance matrix U of the m indicators after averaging, where $\lambda_i$ represents the variance of the i-th principal component $y_i$ and its weight at the corresponding position, together with the corresponding normalized orthogonal eigenvectors $(\alpha_1, \alpha_2, \ldots, \alpha_m)$, where $\alpha_i = (\alpha_{i1}, \alpha_{i2}, \ldots, \alpha_{im})$, $(i = 1, 2, \ldots, m)$.

4) Calculating the variance contribution rate of each principal component:

$\alpha_i = \lambda_i \Big/ \sum_{j=1}^{m} \lambda_j, \quad (i = 1, 2, \ldots, k)$ (3)

and cumulative variance contribution rate:

$\alpha(k) = \sum_{i=1}^{k} \lambda_i \Big/ \sum_{i=1}^{m} \lambda_i$ (4)

The variance contribution rate $\alpha_k$ of $y_k$ represents the proportion of $\operatorname{var}(y_k) = \lambda_k$ in the total variance $\sum_{i=1}^{m}\operatorname{var}(x_i) = \sum_{i=1}^{m}\operatorname{var}(y_i) = \sum_{i=1}^{m}\lambda_i$ of the original indicators; that is, the amount of information about the original m indicators extracted by the k-th principal component is proportional to its variance contribution rate.

5) Determine the k principal components according to the principle that the cumulative contribution rate is not lower than a certain threshold (85%), and then use the variance contribution rate of each selected principal component as a reference to obtain a comprehensive evaluation.
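To make the procedure above concrete, the following is a minimal NumPy sketch of the AM-PCA fusion step, assuming the raw Leap Motion readings are arranged as an n × m matrix (rows are time samples, columns are the six feature indicators Ap, Ai, At, Am, Ar, Al); the variable names and the 85% threshold are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def am_pca(X, threshold=0.85):
    """AM-PCA sketch: mean-normalize each indicator (divide by its column
    mean instead of z-scoring), then keep the principal components whose
    cumulative variance contribution rate reaches the threshold."""
    X = np.asarray(X, dtype=float)            # n samples x m indicators
    col_mean = X.mean(axis=0)                 # x_bar_j, Eq. (2)
    Q = X / col_mean                          # Q_ij = x_ij / x_bar_j, Eq. (1)

    U = np.cov(Q, rowvar=False)               # covariance of mean-normalized data
    eigvals, eigvecs = np.linalg.eigh(U)      # symmetric matrix -> real spectrum
    order = np.argsort(eigvals)[::-1]         # sort lambda_1 >= ... >= lambda_m
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    contrib = eigvals / eigvals.sum()         # Eq. (3)
    cum_contrib = np.cumsum(contrib)          # Eq. (4)
    k = int(np.searchsorted(cum_contrib, threshold) + 1)

    scores = Q @ eigvecs[:, :k]               # projections on the k components
    return scores, contrib[:k], cum_contrib[k - 1]

# Example with random stand-in data (30 samples of 6 indicators):
# scores, weights, reached = am_pca(np.random.rand(30, 6))
```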

In the experiment, Ap, Ai, At, Am, Ar and Al are selected to represent the normal vector perpendicular to the palm, the direction vector of the index finger, the direction vector of the thumb, the direction vector of the middle finger, the direction vector of the ring finger, and the direction vector of the little finger. There are 30 sets of data from 0 to 150 ms, and the time interval of each set of data is 5 ms. The original data is shown in Table 1.

From the raw data in Table 1, the eigenvalues $\lambda_i$, the variance contribution rates $\alpha_k$ and the cumulative contribution rates $\alpha(k)$ of the covariance matrix of the data, before and after optimization, are computed with Python. The specific results are shown in Table 2.

From Table 2 combined with Figure 7, the results can be analyzed more intuitively. In the histogram, each position n on the abscissa (n = 1, 2, 3, 4, 5, 6) shows the cumulative variance contribution rate of the first n components.

Table 1. Raw data.

Table 2. Comparison of results before and after raw data optimization.

Figure 7. Histogram of cumulative variance contribution ratio before and after optimization.

Ap is the first principal component. After AM-PCA optimization, its variance contribution rate is about 15 percentage points higher than before optimization (from 40.43% to 55.06%). With the conventional requirement that the cumulative contribution rate be no less than 85%, four principal components before optimization still only reach a cumulative variance contribution rate of 79.24%, whereas after optimization only two principal components are needed, namely the palm normal vector and the position feature information of the index finger, with a cumulative variance contribution rate of 90.51%. It can be seen that the AM-PCA optimization method can represent more of the original data information with fewer principal components, thereby reducing the amount of work, improving efficiency and lowering time complexity. Figure 7, corresponding to Table 2, is the histogram of cumulative variance contribution rates before and after AM-PCA optimization, and Figure 8 is a line chart comparing time complexity before and after AM-PCA optimization.

4. Gesture Classification and Recognition Algorithm and Optimization Based on Leap Motion

The nearest neighbor method (NN for short) is one of the most important non-parametric methods in pattern recognition. A notable feature of NN is that all sample points of each category are regarded as “representative points”. 1NN uses all training samples as representative points, so the distance between the sample to be identified, x, and all training samples must be calculated during classification, and the classification result is the category of the training sample nearest to x. KNN is a generalization of 1NN: the k nearest neighbors of x are selected during classification, and x is assigned to the category to which the majority of these k neighbors belong [10].

The support vector machine is a widely used algorithm in machine learning, derived from the structural risk minimization principle of statistical learning theory and VC dimension theory. The main idea is to map the original space to a high-dimensional feature space through a non-linear transformation, find the optimal hyperplane in the new feature space that maximizes the classification margin, and construct an optimization problem whose solution gives the optimal classification decision function. The inner product in the high-dimensional feature space is computed implicitly in the original low-dimensional space by means of a kernel function, which makes this type of optimization problem tractable [11].

Figure 8. Time complexity comparison line chart.

KNN and SVM have complementary characteristics: KNN is based on traditional statistical theory and needs a relatively large training set, while SVM is based on statistical learning theory; the samples near the decision boundary are essentially the support vectors, and SVM can be viewed as a nearest-neighbor classifier in which each class is represented by only a few representative points, so it can obtain a globally optimal solution even with a small training set. Based on these characteristics, a method combining KNN and SVM is proposed here for gesture classification and recognition with Leap Motion. In the classification process, the SVM-NN combination algorithm computes the distance between each sample and the optimal SVM hyperplane: when this distance is greater than a given threshold, the sample is far from the decision boundary and is classified with SVM; otherwise the test sample is classified with KNN.

The algorithm needs to match the gesture categories collected by Leap Motion with the characteristics of action models set in advance, according to the unique three-dimensional spatial coordinates of each gesture category. The collected information mainly includes the Hand, Finger, Tool, Vector, Gesture and other types of data obtained from the frame data. The eigenvalues required here are the thumb direction vector thumb_Direction, the index finger direction vector index_Direction, the middle finger direction vector middle_Direction, the ring finger direction vector ring_Direction, the pinky direction vector pinky_Direction, and the palm normal vector palm_Direction, as shown in Figure 9. The vector data information of the hand is shown in Table 3.
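As an illustration only, the sketch below shows how such a per-frame feature vector could be assembled with the Leap Motion V2 Python SDK (the Leap module); the exact class and property names depend on the SDK version, so treat this as an assumption-laden sketch rather than the authors' code.

```python
import Leap  # Leap Motion V2 Python SDK (assumed available on the system)

FINGER_ORDER = [Leap.Finger.TYPE_THUMB, Leap.Finger.TYPE_INDEX,
                Leap.Finger.TYPE_MIDDLE, Leap.Finger.TYPE_RING,
                Leap.Finger.TYPE_PINKY]

def frame_features(frame):
    """Return [palm_Direction, thumb_Direction, ..., pinky_Direction]
    for the first tracked hand, or None if no hand is visible."""
    if frame.hands.is_empty:
        return None
    hand = frame.hands[0]
    features = [hand.palm_normal.to_float_array()]        # Ap
    for finger_type in FINGER_ORDER:                       # At, Ai, Am, Ar, Al
        finger = hand.fingers.finger_type(finger_type)[0]
        features.append(finger.direction.to_float_array())
    return features

# controller = Leap.Controller()
# feats = frame_features(controller.frame())
```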

Since Leap Motion generates up to 200 frames of data per second, the rate is too fast, and most frames are repetitive and unstable. In order to make the collected data more stable and easier for the algorithm to process, every 4 frames of data are averaged [12], using the following formula:

$\bar{A}_k = \dfrac{\sum_{a=b}^{b+3} A_{ka}}{4}, \quad (b = 0, 1, \ldots, 59;\ k = p, t, i, m, r, l)$ (5)

Figure 9. Vector illustration of human hands.

Table 3. Vector data information of the hand.

$A_{ka}$ denotes the hand feature vector in frame a, and b denotes the frame index. Because Leap Motion can only access 60 frames of data in real time, b takes values 0 - 59; p, t, i, m, r and l respectively denote the palm normal vector and the direction vectors of the thumb, index, middle, ring and pinky fingertips.
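The smoothing in Equation (5) could be implemented roughly as below; the 60-frame buffer and the array shapes are assumptions made for illustration.

```python
import numpy as np

def smooth_frames(features, window=4):
    """Average every `window` consecutive frames of feature vectors.

    features: array of shape (n_frames, 6, 3) holding, per frame, the palm
    normal and the five fingertip direction vectors (p, t, i, m, r, l).
    Returns an array of shape (n_frames - window + 1, 6, 3)."""
    features = np.asarray(features, dtype=float)
    out = [features[b:b + window].mean(axis=0)        # Eq. (5) for each b
           for b in range(features.shape[0] - window + 1)]
    return np.stack(out)

# Example: 60 buffered frames -> 57 smoothed feature vectors
# smoothed = smooth_frames(np.random.rand(60, 6, 3))
```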

First, the SVM algorithm is used to obtain the support vectors, their coefficients, and the constant b. Let the test set be T, the support vector set be Ts, the number of nearest neighbors be k, and the classification threshold ε be set to about 1; when ε is 0, SVM-NN reduces to the plain SVM algorithm. The specific execution flow of the algorithm is shown in Figure 10.

In Figure 10, when the sample x to be tested belongs to the test set, the first step is to run the SVM algorithm to obtain the functional margin g(x) with respect to the hyperplane equation, that is, the difference between the distances from x to the representative support vectors of the two classes. In the second step, the absolute value of g(x) is compared with the set threshold to decide whether to classify with SVM or with KNN. In the third step, the step function f(x) determines the sign of the functional margin and hence the classification result.

The algorithm takes the smoothed frame data as input for training, iterates this process over the features of each input frame, and finally determines the gesture category; a simplified sketch of the SVM-NN decision rule is given below.
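The following is a minimal scikit-learn sketch of such an SVM-NN decision rule, not the authors' implementation: the margin measure (the difference between the top two one-vs-rest decision scores) and the default threshold are illustrative choices for the multi-class case.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

class SVMNN:
    """Classify with SVM when the sample is far from the decision boundary,
    otherwise fall back to KNN (a sketch of the SVM-NN combination)."""

    def __init__(self, epsilon=1.0, k=5):
        self.epsilon = epsilon
        self.svm = SVC(kernel="rbf", decision_function_shape="ovr")
        self.knn = KNeighborsClassifier(n_neighbors=k)

    def fit(self, X, y):
        self.svm.fit(X, y)
        self.knn.fit(X, y)
        return self

    def predict(self, X):
        X = np.asarray(X)
        scores = self.svm.decision_function(X)   # per-class scores (multi-class ovr)
        top2 = np.sort(scores, axis=1)[:, -2:]
        margin = top2[:, 1] - top2[:, 0]         # analogue of |g(x)| in Figure 10
        svm_pred = self.svm.predict(X)
        knn_pred = self.knn.predict(X)
        return np.where(margin > self.epsilon, svm_pred, knn_pred)

# clf = SVMNN(epsilon=1.0, k=5).fit(train_features, train_labels)
# labels = clf.predict(test_features)
```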

In this paper, through the study of the hand function rehabilitation system, a standardized assessment method for rehabilitation movements is developed to evaluate the rehabilitation effect of patients in real time.

According to Table 2 in Section 3, when the human hand is undergoing rehabilitation training, the palm feature vector Ap and the index finger direction feature vector Ai account for a large proportion of the original data information, with variance contribution rates of 55.06% and 35.45% respectively, so the index finger's feature vector information is used below as an example to reflect the accuracy and stability of the patient's hand function rehabilitation. In the first step, the included angle θ (range of motion) between the second and third phalanges of the right index finger is set as the evaluation characteristic value.

Figure 10. Algorithm flowchart.

In the second step, the evaluation characteristic value is divided, according to the index finger's movement from perpendicular to the horizontal plane, into 4 states, i.e. 3 stages, as shown in Figure 11: (a) θ = 0; (b) 0 < θ < 45˚; (c) 45˚ < θ < 90˚; (d) θ > 90˚. In the third step, the average angular velocity ωa and its standard deviation Sω, the maximum angle ωmax, and the elapsed time ti and its standard deviation St of each stage are recorded.
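A simple way to turn a measured angle sequence into these stages and summary indicators is sketched below; the function names and the sampling interval (0.02 s, i.e. the 200 fps stream after 4-frame averaging) are illustrative assumptions.

```python
import numpy as np

def stage_of(theta_deg):
    """Map the index-finger joint angle θ (degrees) to its movement stage:
    0 for θ = 0, then stages 1-3 for (0, 45), (45, 90) and beyond 90."""
    if theta_deg <= 0:
        return 0
    if theta_deg < 45:
        return 1
    if theta_deg < 90:
        return 2
    return 3

def stage_indicators(theta_deg, dt=0.02):
    """Average angular velocity, its standard deviation, maximum angle and
    elapsed time for one recorded movement (theta sampled every dt seconds)."""
    theta = np.asarray(theta_deg, dtype=float)
    omega = np.diff(theta) / dt                  # instantaneous angular velocity
    return {"omega_avg": omega.mean(), "omega_std": omega.std(),
            "theta_max": theta.max(), "elapsed_time": dt * (len(theta) - 1)}
```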

After determining the evaluation indicators, this paper uses the idea of AHP [13] to analyze the relative weights of the indicators and obtain the final evaluation score of the gesture rehabilitation action, which helps the attending doctor track the patient's later rehabilitation process in real time. In this paper, the average angular velocity ωa and its standard deviation Sω, the maximum angle ωmax, the elapsed time ti, and its standard deviation St at each stage of the index finger movement are used as evaluation indicators for the gesture rehabilitation action and form the judgment matrix B in the AHP. The main steps of the method are as follows, with a small computational sketch after the steps:

1) The relationships between the indicators at each level are determined by the “1-9 Scale Method”, and the evaluation index system is formulated. The specific scoring scheme for each evaluation index is shown in Table 4.

Figure 11. Classification of the eigenvalues.

Table 4. Evaluation indicators.

2) Using the “1-9 Scale Method”, make pairwise comparisons of the indicators at each level to obtain the judgment matrix $B = \{b_{ij}\}$.

3) Multiply the elements of each row of the judgment matrix B to obtain the row product $C_i$, as shown in the following formula:

$C_i = \prod_{j=1}^{n} b_{ij}, \quad i = 1, 2, 3, \ldots, n$ (6)

Take the n-th root of each $C_i$:

$d_i = \sqrt[n]{C_i}, \quad i = 1, 2, 3, \ldots, n$ (7)

Normalize the vector $[d_1\ d_2\ d_3 \cdots d_n]^{\mathrm{T}}$, as shown in the following formula:

$D_i = \dfrac{d_i}{\sum_{i=1}^{n} d_i}$ (8)

$D_i$ is the weight value of the corresponding index.
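As a sketch of steps 2) and 3) above (the geometric-mean, or root, method for deriving weights from a judgment matrix), the code below uses an illustrative 3 × 3 judgment matrix on the 1-9 scale; the actual matrix B comes from the pairwise comparisons behind Table 4 and is not reproduced here.

```python
import numpy as np

def ahp_weights(B):
    """Derive index weights from an AHP judgment matrix B by the
    geometric-mean (root) method, i.e. Equations (6)-(8)."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    C = B.prod(axis=1)              # Eq. (6): product of each row
    d = C ** (1.0 / n)              # Eq. (7): n-th root of each product
    return d / d.sum()              # Eq. (8): normalized weights D_i

# Illustrative 3 x 3 judgment matrix on the 1-9 scale (not from Table 4):
B_demo = [[1, 3, 5],
          [1/3, 1, 2],
          [1/5, 1/2, 1]]
# print(ahp_weights(B_demo))        # -> roughly [0.65, 0.23, 0.12]
```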

According to the above steps, real-time feedback on the stability and accuracy of the patient's rehabilitation effect is provided. During the preset index finger movement, the time limit for the three stages is set to 5 seconds; if the first stage is not completed within this limit, the attempt is judged as failing. The average angular velocity, maximum angular velocity, and maximum angle of each stage reflect the accuracy of the rehabilitation effect, while the time, the standard deviation of the time, and the standard deviation of the average angular velocity of each stage reflect its stability. According to the three rehabilitation stages set by the system, three threshold points of 60 points (passing), 80 points (excellent), and 100 points (healthy) are set for the doctors' reference, and the six evaluation indicators included in the evaluation system are scored. Let the scores of the evaluation indicators be $F = \{F_{11}, F_{12}, F_{13}, F_{14}, F_{21}, \ldots, F_{35}, F_{36}\}$. The scores corresponding to each evaluation index are shown in Table 5.

The difficulty level of each rehabilitation stage is determined by the activity characteristics of the human hand. The stage difficulty ratio set here is 1:2:3. In summary, the comprehensive evaluation of the index finger during the rehabilitation process is:

$F_i = \dfrac{1}{6}(F_{11} + F_{12} + F_{13} + F_{14}) + \dfrac{2}{6}(F_{21} + F_{22} + F_{23} + F_{24}) + \dfrac{3}{6}(F_{31} + F_{32} + F_{33} + F_{34})$ (9)

$F_i$ is the rehabilitation evaluation index of the index finger alone; the overall rehabilitation evaluation index is obtained by summing, in the same way, the evaluation scores of the remaining feature vectors.
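A direct reading of Equation (9), with the 1:2:3 stage weights, is sketched below; the per-stage sub-score lists are placeholders, not measured data.

```python
def finger_score(stage_scores, stage_weights=(1, 2, 3)):
    """Weighted stage score per Eq. (9): stage_scores is a list of three
    lists of sub-scores, one list per rehabilitation stage."""
    total_weight = sum(stage_weights)          # 1 + 2 + 3 = 6
    return sum(w / total_weight * sum(scores)
               for w, scores in zip(stage_weights, stage_scores))

# Placeholder sub-scores for the three stages of the index finger:
# F_index = finger_score([[80, 75, 90, 85], [70, 65, 80, 75], [60, 55, 70, 65]])
```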

5. Experiments and Analysis

The experimental environment consists of a computer with an Intel(R) Core(TM) i7-4790 processor, 8 GB of memory, an NVIDIA GeForce GT 720 graphics card and 64-bit Windows 10 Education Edition; a Leap Motion with Leap_Motion_SDK_Windows_2.3.1; and the Unity 2017.1.0f3 (64-bit) development platform.

This experiment uses Leap Motion to classify and recognize gestures accurately. Three gestures are used as examples; 100 sets of gesture data are collected for each gesture and divided into a training group and a test group at a 1:1 ratio. The selected gestures are illustrated in Figure 12. The three gestures in the figure correspond to the three cube-flipping gestures in the rehabilitation system: “grab”, “rotate”, and “release”. The algorithm uses the gesture eigenvalue variables mentioned above: the thumb direction vector At, the index finger direction vector Ai, the middle finger direction vector Am, the ring finger direction vector Ar, the little finger direction vector Al, and the palm normal vector Ap. The three algorithms KNN, SVM, and SVM-NN are tested separately, and the corresponding recognition accuracy and average accuracy are obtained.

Table 5. Evaluation indicator scores.

From the data in Tables 6-8, it can be concluded that the average recognition rate of KNN used alone is low, only 80%; SVM is a significant improvement over KNN, with an average recognition rate of 90.67%; and the SVM-NN algorithm proposed in this paper reaches an average recognition rate of 98%, with some gestures even reaching 100% recognition accuracy, which verifies that the proposed algorithm is indeed more effective and practical than KNN and SVM used alone.

In addition, the AM-PCA algorithm proposed in this paper solves the problem of missing key information caused by mutual occlusion between fingers and also alleviates the mismatching of hand parts, as shown in Figure 13.

Figure 12. Gesture legend.

Figure 13. Comparison chart after algorithm optimization.

Table 6. KNN algorithm’s test results.

Table 7. SVM algorithm’s test results.

Table 8. SVM-NN algorithm’s test results.

The left picture shows the effect of data collected without the optimization algorithm proposed in this paper: information is missing and one arm does not match the other. The right picture shows the real-time effect of data collected with the optimization algorithm proposed in this paper: the missing information has been repaired to a large extent, and the correspondence between the arms is noticeably better than before. This also verifies the accuracy and effectiveness of the optimization algorithm in this paper.

6. Conclusions

In practice, by optimizing the traditional algorithms PCA, KNN, and SVM and using Leap Motion as the platform, good real-time results are obtained, which greatly helps patients with hand function rehabilitation. In the hand function rehabilitation training system proposed in this paper, medical staff can refer to the patient's difficulty scores in the rehabilitation training system to give an evaluation; this achieves better human-computer interaction, is very useful for patients' subsequent treatment, and lays a solid foundation for combining medical development with human-computer interaction.

Although this paper has completed the key technology research of the Leap-Motion-based hand function rehabilitation training evaluation system, the virtual game scene designed for hand rehabilitation still lacks realism and aesthetics, and there is room for optimization. The PCA and SVM optimization algorithms proposed in this paper also leave some room for improvement in accuracy and real-time performance. Finally, the idea of AHP is used as the evaluation method of the rehabilitation evaluation system, which involves certain subjective factors; more experiments will be needed to demonstrate it and to promote the development of human-computer interaction in the future.

Acknowledgements

This work was supported in part by the Jilin Science and Technology Development Plan Project under Grant 20200404221YY.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Navya, S.G., Sreeja, V., Marka, S. and Rama, K. (2019) Patient Outcomes and Usage Pattern of Drugs in Ischemic Stroke—A Case Series. European Journal of Pharmaceutical and Medical Research, 6, 581-583.
[2] Zhu, Z.W. and Zhu, L. (2019) Research on Gesture Recognition Technology Based on BS-HMM and Bhattacharyya Distance. Computer Applications and Software, 36, 163-166, 253.
[3] Sun, P. (2019) Gesture Recognition Based on PCA + SVM Algorithm. China New Communications, 21, 117.
[4] Tang, C.X. and Wang, Z.H. (2017) Manipulator Gesture Control System Based on Multiple Leap Motion Sensors. Tianjin Polytechnic University, Tianjin.
[5] Liu, Z.H. and Mo, W.P. (2016) Leap Motion-Based Active Motion System for Stroke Rehabilitation of Upper Limbs. Donghua University, Shanghai.
[6] Hu, J.T. and Fan, C.X. (2015) Research and System Implementation of Gesture Behavior Analysis Based on Leap Motion. Beijing University of Posts and Telecommunications, Beijing.
[7] Weichert, F., Bachmann, D., Rudak, B. and Fisseler, D. (2013) Analysis of Accuracy and Robustness of the Leap Motion Controller. Sensors, 13, 6380-6393.
https://doi.org/10.3390/s130506380
[8] Zhou, K.D., Xie, J. and Luo, J.X. (2017) Research on Gesture Extraction and Recognition Technology Based on Leap Motion Fingertip Position. Microcomputer and Application, 36, 48-51.
[9] Ye, S.F. (2001) Improvement of Comprehensive Evaluation on Principal Component Analysis. Mathematical Statistics and Management, 20, 52-55, 61.
[10] Li, C.X. (2006) Application of SVM-KNN Combination Improved Algorithm in Patent Text Classification. South China University of Technology, Guangzhou.
[11] Ning, Y.N. (2017) Application Research of Leap Motion-Based Gesture Recognition in Virtual Sand Painting. North University of China, Taiyuan.
[12] Tian, Y.D. and Zhang, Y. (2018) Research and Application of Leap Motion-Based Gesture Recognition in Virtual Sand Painting. North University of China, Taiyuan.
[13] Xu, W.H. (2017) Rehabilitation Training and Evaluation System Based on Virtual Driving Experience. Guangdong University of Technology, Guangzhou.
