TY - JOUR
T1 - Azerbaijani sign language recognition using machine learning approach
AU - Aliyev, Samir
AU - Almisreb, Ali Abd
AU - Turaev, Sherzod
N1 - Funding Information:
The authors would like to thank the United Arab Emirates University for funding this work under Start-Up grant G00003321.
Publisher Copyright:
© Published under licence by IOP Publishing Ltd.
PY - 2022/5/17
Y1 - 2022/5/17
N2 - Sign language recognition is an essential and active research area aimed at improving the integration of speech- and hearing-impaired people into society. The main idea is to detect the hand gestures of impaired people and convert them into understandable formats, such as text, by leveraging advanced approaches. In this paper, we present our contribution to the improvement of Azerbaijani Sign Language (AzSL) recognition, focusing on real-time recognition of the static signs of the AzSL alphabet. The applied method is object classification and recognition using pre-trained lightweight Convolutional Neural Network models. First, a dataset of nearly 1000 images was collected, and the objects of interest in the images were labeled with bounding boxes. The TensorFlow Object Detection API with Python was employed to build, train, evaluate and deploy the model, using the pre-trained MobileNet v2 model. In a trial experiment with four sign classes (A, B, C, E) and 5000 training steps, a training loss of 15.2% and an evaluation mean average precision (mAP) of 83% were obtained. In the subsequent deployment experiments with all 24 static signs of AzSL, runs of 49700 and 27700 steps (180 and 100 epochs, respectively) yielded training losses of 6.4% and 18.2% and mAP values of 66.5% and 71.6%, respectively.
AB - Sign language recognition is an essential and active research area aimed at improving the integration of speech- and hearing-impaired people into society. The main idea is to detect the hand gestures of impaired people and convert them into understandable formats, such as text, by leveraging advanced approaches. In this paper, we present our contribution to the improvement of Azerbaijani Sign Language (AzSL) recognition, focusing on real-time recognition of the static signs of the AzSL alphabet. The applied method is object classification and recognition using pre-trained lightweight Convolutional Neural Network models. First, a dataset of nearly 1000 images was collected, and the objects of interest in the images were labeled with bounding boxes. The TensorFlow Object Detection API with Python was employed to build, train, evaluate and deploy the model, using the pre-trained MobileNet v2 model. In a trial experiment with four sign classes (A, B, C, E) and 5000 training steps, a training loss of 15.2% and an evaluation mean average precision (mAP) of 83% were obtained. In the subsequent deployment experiments with all 24 static signs of AzSL, runs of 49700 and 27700 steps (180 and 100 epochs, respectively) yielded training losses of 6.4% and 18.2% and mAP values of 66.5% and 71.6%, respectively.
KW - Azerbaijan Sign Language
KW - Convolutional Neural Networks
KW - MobileNet
KW - TensorFlow
UR - http://www.scopus.com/inward/record.url?scp=85131209555&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131209555&partnerID=8YFLogxK
U2 - 10.1088/1742-6596/2251/1/012007
DO - 10.1088/1742-6596/2251/1/012007
M3 - Conference article
AN - SCOPUS:85131209555
SN - 1742-6588
VL - 2251
JO - Journal of Physics: Conference Series
JF - Journal of Physics: Conference Series
IS - 1
M1 - 012007
T2 - 2nd International Conference on Robotics and Artificial Intelligence, RoAI 2021
Y2 - 29 November 2021 through 30 November 2021
ER -