TY - GEN
T1 - An Image-based Human Physical Activities Recognition in an Indoor Environment
AU - Ullah, Farman
AU - Iqbal, Asif
AU - Khan, Ajmal
AU - Khan, Rida Gul
AU - Malik, Laraib
AU - Kwak, Kyung Sup
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/10/21
Y1 - 2020/10/21
N2 - In this paper, we propose real-time image-based recognition of human activities from a series of images, considering different human actions performed in an indoor environment. The proposed image-based human activity recognition (IHAR) system can be utilized for assisting the life of disabled persons, surveillance and human tracking, human-computer interaction, and efficient resource utilization. The proposed IHAR system consists of closed-circuit television (CCTV) camera-based image acquisition, filtering-based image enhancement, principal component analysis (PCA)-based feature extraction, and various machine learning algorithms for recognition accuracy comparison. We collected a dataset of 35,530 images covering 10 different activities, such as walking, sitting down, and standing up. The dataset is divided into (90%, 10%), (80%, 20%), and (70%, 30%) training and testing splits, respectively, and three classifiers are evaluated: K-nearest neighbors (KNN), Random Forest (RF), and Decision Tree (DT). The experimental results show accuracies of 95%, 97%, and 90% for KNN, RF, and DT, respectively.
AB - In this paper, we propose real-time image-based recognition of human activities from a series of images, considering different human actions performed in an indoor environment. The proposed image-based human activity recognition (IHAR) system can be utilized for assisting the life of disabled persons, surveillance and human tracking, human-computer interaction, and efficient resource utilization. The proposed IHAR system consists of closed-circuit television (CCTV) camera-based image acquisition, filtering-based image enhancement, principal component analysis (PCA)-based feature extraction, and various machine learning algorithms for recognition accuracy comparison. We collected a dataset of 35,530 images covering 10 different activities, such as walking, sitting down, and standing up. The dataset is divided into (90%, 10%), (80%, 20%), and (70%, 30%) training and testing splits, respectively, and three classifiers are evaluated: K-nearest neighbors (KNN), Random Forest (RF), and Decision Tree (DT). The experimental results show accuracies of 95%, 97%, and 90% for KNN, RF, and DT, respectively.
KW - CCTV
KW - Decision Tree
KW - Human Activity Recognition
KW - Principal Component Analysis
KW - Random Forest
UR - http://www.scopus.com/inward/record.url?scp=85098997026&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098997026&partnerID=8YFLogxK
U2 - 10.1109/ICTC49870.2020.9289314
DO - 10.1109/ICTC49870.2020.9289314
M3 - Conference contribution
AN - SCOPUS:85098997026
T3 - International Conference on ICT Convergence
SP - 588
EP - 593
BT - ICTC 2020 - 11th International Conference on ICT Convergence
PB - IEEE Computer Society
T2 - 11th International Conference on Information and Communication Technology Convergence, ICTC 2020
Y2 - 21 October 2020 through 23 October 2020
ER -