Application of Earth Mover’s Distance Algorithm for Gesture Recognition of American Sign Language Hand Gesture
DOI: https://doi.org/10.70922/7txkfp49
Keywords: American Sign Language, Dynamic gesture, Earth Mover's Distance
Abstract
This study utilizes computer vision to interpret static and dynamic human gestures for American Sign Language, offering another channel of communication between people who do and do not understand the language. The authors propose the application of the Earth Mover's Distance, which is defined as the distance between two feature descriptors given by the minimal amount of work needed to transform one into the other. It is used to recognize static and dynamic American Sign Language gestures from users with differing hand shapes and orientations.
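The abstract's definition of the Earth Mover's Distance, the minimal work needed to transform one feature descriptor into the other, can be illustrated with a minimal sketch. For two 1-D histograms with equally spaced bins, the EMD reduces to the cumulative difference between the distributions; this is an illustrative assumption for exposition, not the paper's actual descriptor or implementation.

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms with
    unit-spaced bins, computed as the total mass carried across
    bin boundaries (sum of absolute CDF differences)."""
    if len(p) != len(q):
        raise ValueError("histograms must have the same number of bins")
    # Normalize both histograms to unit total mass.
    p = [x / sum(p) for x in p]
    q = [x / sum(q) for x in q]
    work = 0.0   # total transport work
    carry = 0.0  # surplus mass carried to the next bin
    for pi, qi in zip(p, q):
        carry += pi - qi
        work += abs(carry)
    return work

# All mass moved two bins to the right costs 2 units of work:
print(emd_1d([1, 0, 0], [0, 0, 1]))  # → 2.0
```

Identical histograms give a distance of zero, and the distance grows with how far mass must travel, which is what makes EMD tolerant of small shifts in hand shape and orientation.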
License
Copyright (c) 2026 PUP Journal of Science & Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.




