A Model for sign language recognition for Kenyan sign language
Date
2023
Authors
Wanjala, G. K.
Publisher
Strathmore University
Abstract
Computer vision helps make technology accessible to underserved communities, such as the disabled community. This study demonstrates how artificial intelligence, through computer vision, can help bridge the communication gap between people with hearing impairments and the general population. The purpose of this paper is to present an artificial intelligence solution that caters to this target group and aids communication. Artificial intelligence has come a long way toward solving the problem of translating sign language notations into a readable form that can be easily understood. This aligns with the collective duty to ensure that deaf people can participate in society on an equal basis with others, free from discrimination, including in speech and communication. Such interpretation is greatly needed: it speeds up communication through translation, fosters understanding between deaf and hearing people, and reduces the cost of training individuals in sign language at dedicated training centers. To this end, the research collected and analyzed photos and videos in a quasi-experiment consisting of target photos of Kenyan Sign Language notations. The model was trained on 9,100 Kenyan Sign Language (KSL) notations of varied gestures, spanning health and wellness as well as common day-to-day notations such as greetings and expressing feelings. Transfer learning through a TensorFlow object detection model, the OpenCV framework for image processing, and Python were used to build the sign language translation model in this research. The trained machine learning model organizes the input photos and videos, analyzes them, and produces text that maps to the corresponding sign language notation. Individual users can use the model to translate Kenyan Sign Language notations into readable English text. The model achieved 85% accuracy at 20,000 training steps over 40 epochs, which gave a good balance between training duration and accuracy on the given dataset. One notable finding was that notations involving movement of the hands and other body parts to express gestures were harder to detect and translate because of the motion involved; more training data on such notations is needed to train the model to detect them.
Keywords: artificial intelligence, computer vision, machine learning, sign language, disability
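The abstract describes a pipeline in which a transfer-learned TensorFlow object detection model, with OpenCV handling image capture and processing, maps detected KSL notations to English text. The following minimal Python sketch illustrates one way such a pipeline could look; the model path, label map, and confidence threshold are illustrative assumptions, not the author's actual artifacts.

# Minimal inference sketch, assuming a TensorFlow Object Detection API
# SavedModel fine-tuned on KSL notations. MODEL_DIR and LABELS are
# hypothetical placeholders, not the thesis's actual artifacts.
import cv2
import numpy as np
import tensorflow as tf

MODEL_DIR = "ksl_model/saved_model"                 # assumed export location
LABELS = {1: "hello", 2: "thank you", 3: "pain"}    # illustrative subset

detect_fn = tf.saved_model.load(MODEL_DIR)

cap = cv2.VideoCapture(0)                           # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The Object Detection API expects a uint8 batch of RGB images.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    out = detect_fn(batch)
    # Detections are sorted by score, so index 0 is the top candidate.
    score = float(out["detection_scores"][0][0].numpy())
    cls = int(out["detection_classes"][0][0].numpy())
    if score > 0.5 and cls in LABELS:
        # Overlay the English text mapped to the detected KSL notation.
        cv2.putText(frame, f"{LABELS[cls]} ({score:.2f})", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("KSL translation", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

As the abstract notes, a single-frame detector like this handles static notations best; gestures that unfold over time would need more training data or a temporal model.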
Description
Full-text thesis
Citation
Wanjala, G. K. (2023). A Model for sign language recognition for Kenyan sign language [Strathmore University]. http://hdl.handle.net/11071/13530