Bridging Communication Gap Among People with Hearing Impairment: An Application of Image Processing and Artificial Neural Network
International Journal of Information and Communication Sciences
Volume 3, Issue 1, March 2018, Pages: 11-18
Received: Feb. 21, 2018; Accepted: Mar. 8, 2018; Published: Mar. 30, 2018
Ogunsanwo Gbenga Oyewole, Computer Science Department, Babcock University, Ilisan Remo, Nigeria
Goga Nicholas, Computer Science Department, Babcock University, Ilisan Remo, Nigeria
Awodele Oludele, Computer Science Department, Babcock University, Ilisan Remo, Nigeria
Okolie Samuel, Computer Science Department, Babcock University, Ilisan Remo, Nigeria
Before the present study, no sign language recognition system had been developed for any Nigerian indigenous sign language, in particular Yoruba. This research therefore introduces a Yoruba Sign Language Recognition System (YSLRS) based on image processing and an Artificial Neural Network (ANN). The proposed system was implemented and tested on 600 images gathered from 60 different signers. The images were acquired with a vision-based method: each signer stood in front of a laptop camera and signed the numbers one to ten with their fingers on three separate occasions, and the resulting images were stored in a folder. The image dataset was then pre-processed through de-noising, segmentation and feature extraction, after which pattern recognition was performed with a feed-forward backpropagation ANN. The study found that the median filter, with a higher PSNR of 47.7 and a lower MSE of 1.11, performed better than the Gaussian filter. The efficiency of the developed system was assessed using the mean squared error: the best validation performance occurred at 25 epochs with an MSE of 0.004052, implying that the ANN adequately recognized the patterns of the Yoruba signs. Error histograms were also examined, and the training, testing and validation error bars clustered close to zero error. In addition, a Receiver Operating Characteristic (ROC) analysis of the ANN's matching of the Yoruba sign features showed that the network performed efficiently, with a high true-positive rate and a minimal false-positive rate. The YSLRS developed in this study should help reduce the victimization suffered by hearing-impaired individuals by bridging the communication gap for Nigerians with hearing impairment.
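The abstract's de-noising comparison (median vs. Gaussian filtering, judged by PSNR and MSE) can be sketched as follows. This is a minimal pure-NumPy illustration, not the authors' code: the synthetic gradient image, the 5% salt-and-pepper noise model, and the 3×3 kernel sizes are all illustrative assumptions, so the printed figures will not match the paper's 47.7 dB / 1.11 values.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    m = mse(ref, img)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def median_filter3(img):
    """3x3 median filter built from edge-padded sliding windows."""
    p = np.pad(img, 1, mode="edge")
    windows = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(3) for j in range(3)], axis=-1)
    return np.median(windows, axis=-1)

def gaussian_filter3(img):
    """3x3 Gaussian smoothing with a fixed normalized kernel."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64) / 16.0
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0, 255, 64), (64, 1))   # synthetic test image
    noisy = clean.copy()
    mask = rng.random(clean.shape) < 0.05               # 5% salt-and-pepper noise
    noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    med, gau = median_filter3(noisy), gaussian_filter3(noisy)
    print(f"median:   PSNR={psnr(clean, med):.1f} dB  MSE={mse(clean, med):.2f}")
    print(f"gaussian: PSNR={psnr(clean, gau):.1f} dB  MSE={mse(clean, gau):.2f}")
```

On impulse (salt-and-pepper) noise the median filter discards outliers outright while the Gaussian kernel only spreads them, which is consistent with the median filter's better PSNR/MSE reported in the study.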
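The recognizer itself, a feed-forward network trained by backpropagation on an MSE loss, can be sketched in the same spirit. Again this is a hedged pure-NumPy illustration under stated assumptions, not the system described in the paper: the toy "feature vectors" (10 noisy class prototypes standing in for extracted sign features), the 16-feature input, the 24-unit hidden layer, and the learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def one_hot(labels, n_classes):
    """Encode integer labels as one-hot target rows."""
    out = np.zeros((labels.size, n_classes))
    out[np.arange(labels.size), labels] = 1.0
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FeedForwardNet:
    """One-hidden-layer feed-forward net trained by backpropagation on MSE."""

    def __init__(self, n_in, n_hidden, n_out, lr=1.0):
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.w1 + self.b1)
        self.y = sigmoid(self.h @ self.w2 + self.b2)
        return self.y

    def backward(self, x, t):
        # Gradient of the MSE loss through the sigmoid output and hidden layers.
        d_out = (self.y - t) * self.y * (1.0 - self.y)
        d_hid = (d_out @ self.w2.T) * self.h * (1.0 - self.h)
        self.w2 -= self.lr * self.h.T @ d_out / len(x)
        self.b2 -= self.lr * d_out.mean(axis=0)
        self.w1 -= self.lr * x.T @ d_hid / len(x)
        self.b1 -= self.lr * d_hid.mean(axis=0)

# Toy stand-in for extracted sign features: 10 class prototypes plus noise.
n_classes, n_features = 10, 16
prototypes = rng.random((n_classes, n_features))
labels = np.repeat(np.arange(n_classes), 20)
X = prototypes[labels] + rng.normal(0.0, 0.05, (labels.size, n_features))
T = one_hot(labels, n_classes)

net = FeedForwardNet(n_features, 24, n_classes)
for epoch in range(4000):
    net.forward(X)
    net.backward(X, T)

final_mse = float(np.mean((net.forward(X) - T) ** 2))
acc = float(np.mean(net.forward(X).argmax(axis=1) == labels))
print(f"training MSE={final_mse:.4f}  accuracy={acc:.2f}")
```

The training MSE driven toward zero mirrors the paper's use of mean squared error as the convergence criterion; a real system would of course train on the extracted image features and report validation rather than training error.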
Image Processing, Machine Learning Techniques, Recognition System, Sign Language, Yoruba
To cite this article
Ogunsanwo Gbenga Oyewole, Goga Nicholas, Awodele Oludele, Okolie Samuel. Bridging Communication Gap Among People with Hearing Impairment: An Application of Image Processing and Artificial Neural Network. International Journal of Information and Communication Sciences. Vol. 3, No. 1, 2018, pp. 11-18.
Copyright © 2018 Authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.