Hand Gesture Recognition Approach for ASL Language Using Hand Extraction Algorithm

Abstract

Sign language is a means of communication used by deaf people. It is a three-dimensional language that relies on visual gestures and moving hand signs to represent letters and words. Gesture recognition has long been a challenging subject, drawing interest at both the academic and practical levels. The core objective of this system is to produce a method that can identify specific human hand gestures and use them either to convey thoughts and feelings or to control devices. Such a system can serve as an effective substitute for speech, enhancing the individual's ability to express themselves and interact in society. In this paper, we discuss the steps used to capture, recognize, and analyze hand gestures, transforming them into both written words and audible speech. Each step is an independent algorithm with its own variables and conditions.
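To make the pipeline concrete, the sketch below shows one plausible form of its first stage, hand extraction by skin-color segmentation, written in Python with OpenCV and NumPy. The HSV thresholds, the helper names, and the classify_letter stub are assumptions added purely for illustration; they are not the algorithms specified in this paper.

    # Illustrative gesture-to-text pipeline sketch (hypothetical; not the
    # paper's actual implementation). Requires: opencv-python, numpy.
    import cv2
    import numpy as np

    def extract_hand(frame_bgr):
        """Segment skin-colored regions as a rough hand mask.

        The HSV thresholds below are common heuristic values, not taken
        from the paper; real systems calibrate them per user and lighting.
        """
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 40, 60], dtype=np.uint8)
        upper = np.array([20, 255, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        # Remove small noise, then keep the largest blob as the hand.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(hand)
        return frame_bgr[y:y + h, x:x + w]

    def classify_letter(hand_roi):
        """Placeholder for the recognition stage (e.g. template matching,
        SIFT features, or a neural network in the wider literature)."""
        raise NotImplementedError

    if __name__ == "__main__":
        cap = cv2.VideoCapture(0)  # default webcam
        ok, frame = cap.read()
        cap.release()
        if ok:
            roi = extract_hand(frame)
            print("hand found" if roi is not None else "no hand detected")

In a full system, the segmentation thresholds would be calibrated per user and lighting condition, and the recognition stub would be replaced by the paper's matching step before the result is rendered as written text and synthesized speech.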

Share and Cite:

Akoum, A. and Mawla, N. (2015) Hand Gesture Recognition Approach for ASL Language Using Hand Extraction Algorithm. Journal of Software Engineering and Applications, 8, 419-430. doi: 10.4236/jsea.2015.88041.

Conflicts of Interest

The authors declare no conflicts of interest.

