Learning Based Falling Detection Using Multiple Doppler Sensors

Abstract

Automated falling detection is an important task in an ageing society. Such systems are expected to cause little interference with daily life. Doppler sensors have come to the fore as useful devices for detecting human activity without wearable sensors. The conventional Doppler sensor based falling detection mechanism uses the features of only one sensor. This paper presents falling detection using multiple Doppler sensors. The data from the sensors are combined or selected to detect the falling event. The combination method, using three sensors, achieves 95.5% falling detection accuracy. Moreover, this method compensates for the drawback of a single Doppler sensor, which has difficulty detecting movement orthogonal to its irradiation direction.

Share and Cite:

S. Tomii and T. Ohtsuki, "Learning Based Falling Detection Using Multiple Doppler Sensors," Advances in Internet of Things, Vol. 3 No. 2A, 2013, pp. 33-43. doi: 10.4236/ait.2013.32A005.

1. Introduction

The elderly population has been growing in recent years thanks to advances in medicine. A healthy, safe, and secure life is particularly important for the elderly. However, we face the problem of an increasing old-age dependency ratio, defined as the ratio of the population aged 65 years or over to the population aged 20 - 64, and expressed as the number of dependents per 100 persons of working age (20 - 64). According to United Nations estimates, this ratio is projected to reach 30% in about 30 countries by 2020 [1]. In Japan in particular, it is expected to reach 52%. There is thus an urgent need for automated health care systems that detect accidents involving the elderly.

Falling detection is one of the most important tasks for preventing serious accidents among the elderly. Yu [2] and Hijaz et al. [3] classified falling detection systems into three groups: wearable device approaches, ambient sensor approaches, and camera approaches. Wearable devices are easy to set up and operate, and can be attached to the chest, waist, armpit, or back [4]. Their shortcomings are that they are easily broken and intrusive. Furthermore, the older we become, the more forgetful we become; no matter how sophisticated the algorithm implemented on a wearable device is, it is useless if the user forgets to wear it. On the other hand, ambient sensors such as pressure and acoustic sensors can also be used. These sensors are cheap, non-intrusive, and not prone to privacy issues. However, pressure sensors cannot discern whether the detected pressure comes from the user's weight, and acoustic sensors show a high false alarm rate in noisy environments [5]. Cameras enable remote visual verification, and multiple persons can be monitored with a single setup. However, cameras are prohibited in private spaces such as the bath and restroom, and even in the living room many people do not want to be monitored by cameras.

A Doppler sensor is an inexpensive, palm-sized device capable of detecting moving targets such as humans. Using this sensor, we can construct passive, non-intrusive, and noise-tolerant systems. Activity recognition using Doppler sensors has been actively studied recently. Kim et al. proposed the classification of seven different activities based on the micro-Doppler signature characterized by periodic and active arm and leg motion [7]. The subjects act toward the sensor, and an accuracy above 90% is achieved using a support vector machine (SVM). Tivive et al. [8] classified three types of motion, free arm-motion, partial arm-motion, and no arm-motion, and described human activity status based on the arm motion. Liu et al. [9] demonstrated automatic falling detection using two sensors positioned 1.8 m and 3.7 m away from the point of falling, with the data of each sensor processed independently. The subjects perform forward, backward, left-side, and right-side falls, and the activity directions include between the two sensors, toward a sensor, and away from a sensor.

A Doppler sensor is sensitive to objects moving along its irradiation direction but less sensitive to movements orthogonal to it. For the practical use of Doppler sensors, we propose falling detection using multiple Doppler sensors to alleviate this moving-direction dependency. By using sensors with different irradiation directions, each sensor complements the less sensitive directions of the others. Sensor data are processed by a feature combination or a feature selection method. In the combination method, the features of multiple sensors are simply concatenated. In the selection method, one sensor is selected based on the power spectral density of a particular bandwidth that characterizes the falling activity. After either method, the features are classified using an SVM or k-nearest neighbors (kNN). We evaluate both methods in terms of the number of features, the number of sensors, and the type of classifier. We also discuss the accuracy for each activity direction and the viability of these methods for practical use.

The remainder of this paper is organized as follows. In Section 2, we introduce the basic Doppler sensor system and how the target velocity can be determined from the Doppler shift. In Section 3, we explain the flow of the proposed falling detection algorithm using multiple Doppler sensors. In Section 4, the sensor setup of the proposed method and the types of tested activities are explained, and our methods are evaluated by comparison with the one-sensor method. We discuss the accuracy of falling detection for each activity direction and the viability of the proposed feature combination and selection methods for practical use. In Section 5, we draw conclusions.

2. Doppler Sensor

In this section, we give basic information about the Doppler sensor. A Doppler sensor transmits a continuous wave and receives the reflected wave, whose frequency is shifted by the moving object. The Doppler shift is defined as

f_d = 2 v f_c / (c − v)    (1)

where v is the target velocity, c is the speed of light, and f_c is the carrier frequency. In Equation (1), since v ≪ c, the target velocity can be represented as

v ≈ c f_d / (2 f_c)    (2)

f_c and c are given values, so only by observing the Doppler shift f_d can we determine the target velocity v.
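As a minimal numerical illustration of Equation (2) (the 24 GHz carrier matches the module used in our experiments; the Doppler shift value is an arbitrary example), the velocity can be computed in MATLAB as follows:

    % Estimate target velocity from an observed Doppler shift (Equation (2))
    c  = 3.0e8;    % speed of light [m/s]
    fc = 24e9;     % carrier frequency [Hz]
    fd = 160;      % example Doppler shift [Hz], within the 120 - 240 Hz band used later
    v  = c * fd / (2 * fc);   % target velocity [m/s]; 1 m/s for this example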

3. Falling Detection Algorithm Using Multiple Doppler Sensors

In this section, we present the proposed falling detection algorithm using multiple Doppler sensors. Figure 1 depicts the algorithm. Our approach involves four phases: 1) decision of the extraction time range, 2) feature extraction, 3) feature combination/selection, and 4) training and classification.

3.1. Decision of Extraction Time Range

This process decides the timing for extracting 4-second features from the voltage data of the sensors. First, we compute a spectrogram using the short-time Fourier transform (STFT). It has been reported that features in the 25 - 50 Hz band are suitable for distinguishing falling from non-falling when the carrier frequency is 5 GHz [9]. As shown in Equation (2), the Doppler shift is proportional to the carrier frequency for the same target velocity. Our experiment uses a 24 GHz carrier frequency, so the band should be scaled by a factor of 4.8, i.e., to 120 - 240 Hz. For each time bin, which is determined by the number of discrete Fourier transform (DFT) points and the window overlap, we calculate the power spectral density (PSD) of the 120 - 240 Hz band. The time tmax at which this PSD becomes maximum within the 12-second experiment duration indicates when a remarkable event happens, where a remarkable event is an activity involving a sudden, quick movement of the whole body. We then take the 4-second segment of voltage data centered at tmax and extract features from it. Figures 2 and 3 show the STFT spectrograms and the PSD of the 120 - 240 Hz band for the tested activities; the subjects act at about 7 seconds.
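A minimal MATLAB sketch of this step is given below, assuming one sensor's voltage data in a vector x sampled at 1024 Hz with 256-point windows; the 50% overlap and the variable names are illustrative choices rather than the exact settings of our implementation.

    % Decide the extraction time range from one sensor's voltage data x
    fs   = 1024;                                       % sampling frequency [Hz]
    nfft = 256;                                        % DFT points per window
    [S, F, T] = spectrogram(x, hann(nfft), nfft/2, nfft, fs);   % STFT
    band    = (F >= 120) & (F <= 240);                 % band that characterizes falling
    psdBand = sum(abs(S(band, :)).^2, 1);              % PSD of 120 - 240 Hz per time bin
    [~, idx] = max(psdBand);
    tmax = T(idx);                                     % time of the remarkable event
    seg  = x(max(1, round((tmax - 2) * fs)) : min(numel(x), round((tmax + 2) * fs)));   % 4 s segment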

Figure 1. The proposed falling detection algorithm.

(a) Standing - Falling

(b) Walking - Falling

(c) Standing up - Falling

Figure 2. Spectrogram (left) and PSD of 120 Hz - 240 Hz (right) of Falling.

3.2. Feature Extraction

Using the 4-second voltage data centered at tmax, we compute cepstral coefficients. Mel-frequency cepstral coefficients (MFCC) are applied in [9]. The mel-frequency scale emphasizes the lower frequencies (0 - 1000 Hz) and compresses the higher frequencies, and MFCC is typically applied to the analysis of voice up to about 16 kHz. For sensing falling motion, we found empirically that frequencies up to 500 Hz are enough to observe human activities at a 24 GHz carrier frequency. To compute MFCC, the 0 - 1000 Hz band is divided into linearly spaced blocks called filter banks. Since the sampling frequency is 1024 Hz, there is almost no compression of higher frequencies; strictly speaking, therefore, cepstral coefficient analysis is applied instead of MFCC. To calculate the cepstral coefficients, we use the Auditory Toolbox [10]. The method is as follows.

1) Divide the amplitude spectrogram into 13 linearly spaced filter banks.

2) Compute the fast Fourier transform (FFT) of the amplitude spectrum of each filter bank.

3) Compute the discrete cosine transform (DCT) of the data obtained above. The result is called the cepstrum.

4) Use the C1 - C6 coefficients, where C0 is the direct-current component. C7 - C12 come from the latter half of the 0 - 1024 Hz band, which is not needed to observe human activity.

Cepstral coefficient features are computed for each set of 256 DFT points, which is called a window. The window update frequency is defined as the frame rate; as the frame rate becomes higher, the number of features increases.
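For illustration only, a simplified per-window computation along these lines (using log filter-bank energies followed by a DCT, the usual cepstral pipeline; the actual computation in this paper uses the Auditory Toolbox [10] and differs in detail, and the variable w is hypothetical) might look like:

    % Cepstral coefficients for one 256-point window w (column vector, 1024 Hz sampling)
    nfft = 256;
    spec = abs(fft(w .* hann(nfft), nfft));           % amplitude spectrum
    spec = spec(1:nfft/2);                            % keep the one-sided spectrum
    nBanks = 13;
    edges  = round(linspace(1, nfft/2, nBanks + 1));  % linearly spaced filter banks
    E = zeros(nBanks, 1);
    for k = 1:nBanks
        E(k) = sum(spec(edges(k):edges(k+1)));        % energy in each filter bank
    end
    cep = dct(log(E + eps));                          % cepstrum via DCT of log energies
    features = cep(2:7);                              % keep C1 - C6 (cep(1) is C0)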

3.3. Feature Combination/Selection

In our proposal, at most three sensors are used. We employ two methods to construct features from multiple Doppler sensors: a combination method and a selection method. In the combination method, the cepstral coefficients of the sensors are simply concatenated. Figure 4(a) shows an example of the feature structure using two sensors, where "label" represents the type of activity. In the selection method, the PSD of the 120 - 240 Hz band at tmax is compared among the sensors before computing the cepstral coefficients.

(a) Walking

(b) Standing – Lying down

(c) Picking up

(d) Sitting on a chair

Figure 3. Spectrogram (left) and PSD of 120 Hz - 240 Hz (right) of non-falling activities.

The sensor with the largest PSD of the 120 - 240 Hz band at tmax is selected for feature extraction; the selected sensor is assumed to capture the human motion better than the other sensors.
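A minimal sketch of the two methods, assuming the per-sensor band PSD at tmax and the per-sensor cepstral features have already been computed into the illustrative variables psdAtTmax and cepFeatures, is:

    % psdAtTmax:   1 x S vector, PSD of 120 - 240 Hz at tmax for each of S sensors
    % cepFeatures: S x D matrix, cepstral feature vector (length D) of each sensor

    % Combination method: simply concatenate the features of all sensors
    combinedFeature = reshape(cepFeatures', 1, []);    % 1 x (S*D) row vector

    % Selection method: keep only the sensor with the largest band PSD at tmax
    [~, best] = max(psdAtTmax);
    selectedFeature = cepFeatures(best, :);             % 1 x D row vector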

3.4. Training and Classification

To train and classify the features, we use SVM and kNN. For SVM classification in MATLAB, LIBSVM [11] is available. SVM uses a kernel function that decides the boundaries between groups; linear, polynomial, radial basis function (RBF), and sigmoid kernels are available in LIBSVM. We use the RBF kernel: the linear kernel is a special case of the RBF kernel [12], the sigmoid kernel behaves like the RBF kernel for certain parameters [13], and the polynomial kernel has numerical difficulties [14], so the RBF kernel is the most suitable in general. The kernel has several parameters, which should be tuned. When classifying with kNN, the Euclidean distance between features is used.

We use four subjects (A, B, C, D), men in their 20s and 30s, for training and testing as shown in Table 1, and apply cross-validation. This generalizes the results of SVM and kNN. In addition, the features are normalized to prevent features with larger values from having a stronger effect on the results than the others.
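As a sketch of this step, assuming the combined or selected features and labels are collected in the illustrative matrices trainX/trainY and testX/testY, training and classification with the LIBSVM MATLAB interface and a kNN classifier might look like:

    % Normalize each feature dimension to [0, 1] using the training data range
    mn = min(trainX, [], 1);  rg = max(trainX, [], 1) - mn;  rg(rg == 0) = 1;
    trainXn = bsxfun(@rdivide, bsxfun(@minus, trainX, mn), rg);
    testXn  = bsxfun(@rdivide, bsxfun(@minus, testX,  mn), rg);

    % SVM with RBF kernel (-t 2); C and gamma are example values to be tuned
    model = svmtrain(trainY, trainXn, '-t 2 -c 1 -g 0.125');
    predSvm = svmpredict(testY, testXn, model);

    % kNN with Euclidean distance (k = 3 as an example)
    knnModel = fitcknn(trainXn, trainY, 'NumNeighbors', 3, 'Distance', 'euclidean');
    predKnn  = predict(knnModel, testXn);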

4. Performance Evaluation

Figure 5 shows the configuration of the multiple Doppler sensor system, which consists of client units, a base unit, and a PC. Each client unit receives the reflected microwave at its Doppler module, and its CPU (MSP430F2618, Texas Instruments) outputs the data to the base unit. The base unit and each client unit are connected by LAN cable. The collected data of each Doppler sensor are sent to the PC through a USB port and processed in MATLAB.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Department of Economic and Social Affairs, “Population Division: World Population Prospects: The 2010 Revision,” United Nations, Department of Economic and Social Affairs, 2011.
[2] X. Yu, "Approaches and Principles of Fall Detection for Elderly and Patient," Proceedings of the 10th International Conference of the IEEE HealthCom, Singapore, 7-9 July 2008, pp. 42-47.
[3] F. Hijaz, N. Afzal, T. Ahmad and O. Hasan, "Survey of Fall Detection and Daily Activity Monitoring Techniques," Proceedings of International Conference on Information and Emerging Technologies, ICIET, Pakistan, 14-16 June 2010, pp. 1-6.
[4] N. Noury, A. Fleury, P. Rumeau, A. Bourke, G. Laighin, V. Rialle and J. Lundy, “Fall Detection-Principles and Methods,” Proceedings of the 29th Annual International Conference of the IEEE EMBS, Paris, 22-26 August 2007, pp. 1663-1666.
[5] J. Perry, S. Kellog, S. Vaidya, J. H. Youn, H. Ali and H. Sharif, "Survey and Evaluation of Real-Time Fall Detection Approaches," Proceedings of the 6th International Symposium of HONET, Egypt, 28-30 December 2009, pp. 158-164.
[6] S. Ram, C. Christianson, Y. Kim and H. Ling, “Simulation and Analysis of Human Micro-Dopplers in through-Wall Environments,” IEEE Transactions on Geoscience and Remote Sensing, Vol. 48, No. 4, 2010, pp. 2015-2023. doi:10.1109/TGRS.2009.2037219
[7] Y. Kim and H. Ling, "Human Activity Classification Based on Micro-Doppler Signatures Using a Support Vector Machine," IEEE Transactions on Geoscience and Remote Sensing, Vol. 47, No. 5, 2009, pp. 1328-1337. doi:10.1109/TGRS.2009.2012849
[8] F. Tivive, A. Bouzerdoum and M. Amin, “Automatic Human Motion Classification from Doppler Spectrograms,” Proceedings of the 2nd International Workshop of CIP, Elba Island, 14-16 June 2010, pp. 237-242.
[9] L. Liu, M. Popescu, M. Skubic, M. Rantz, T. Yardibi and P. Cuddihy, “Automatic Fall Detection Based on Doppler Radar Motion Signature,” Proceedings of the 5th International Conference of Pervasive Health, Dublin, 23-26 May 2011, pp. 222-225.
[10] M. Slaney, “Auditory Toolbox Version 2”. https://engineering.purdue.edu/~malcolm/interval/1998-010/
[11] C. Chang and C. Lin, “LIBSVM: A Library for Support Vector Machines.” http://www.csie.ntu.edu.tw/~cjlin/libsvm/
[12] S. S. Keerthi and C.-J. Lin, "Asymptotic Behaviors of Support Vector Machines with Gaussian Kernel," Neural Computation, Vol. 15, No. 7, 2003, pp. 1667-1689.
[13] C. J. Burges, “A Tutorial on Support Vector Machines for Pattern Recognition,” Data Mining and Knowledge Discovery, Vol. 2, No. 2, 1998, pp. 121-167. doi:10.1023/A:1009715923555
[14] V. N. Vapnik, “The Nature of Statistical Learning Theory,” 2nd Edition, Springer, New York, 1999.
[15] H. Zhang, A. Berg, M. Maire and J. Malik, "SVM-KNN: Discriminative Nearest Neighbor Classification for Visual Category Recognition," Proceedings of the IEEE Conference of CVPR, New York, 17-22 June 2006, pp. 2126-2136.
