Attention induced multi-head convolutional neural network for human activity recognition

Zanobya N. Khan, Jamil Ahmad

Research output: Contribution to journal › Article › peer-review

157 Citations (Scopus)

Abstract

Deep neural networks, including convolutional neural networks (CNNs), have been widely adopted for human activity recognition (HAR) in recent years. They have attained significant performance improvements over traditional techniques owing to their strong feature representation capabilities. Among the challenges faced by the HAR community are the scarcity of labeled training samples and the higher computational cost and system resource requirements of deep learning architectures compared with shallow learning algorithms. To address these challenges, we propose an attention-based multi-head model for HAR. The framework contains three lightweight convolutional heads, each designed with a one-dimensional CNN to extract features from sensory data. The lightweight multi-head model is induced with attention to strengthen the representation ability of the CNN, allowing it to automatically select salient features and suppress unimportant ones. We conducted ablation studies and experiments on two publicly available benchmark datasets, WISDM and UCI HAR, to evaluate the model. The experimental results demonstrate the effectiveness of the proposed framework in activity recognition: it achieves better accuracy while ensuring computational efficiency.
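The abstract describes the architecture only at a high level. The PyTorch sketch below illustrates one plausible reading of it: three lightweight one-dimensional convolutional heads with different receptive fields, each followed by a squeeze-and-excitation (SE) attention module (per the keywords) that recalibrates channel responses before the head outputs are fused for classification. All hyperparameters here (kernel sizes, channel widths, SE reduction ratio, window length) are illustrative assumptions, not the paper's reported settings.

# Minimal sketch of an attention-induced multi-head 1-D CNN for HAR.
# Assumed details (not given in the abstract): kernel sizes, channel
# widths, SE reduction ratio, and the placement of the SE module.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise attention over 1-D features."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                # x: (batch, channels, time)
        w = x.mean(dim=-1)               # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1)     # excitation: per-channel weights
        return x * w                     # recalibrate: emphasize salient channels


def conv_head(in_ch: int, out_ch: int, kernel: int) -> nn.Sequential:
    """One lightweight 1-D CNN head with SE attention (assumed layout)."""
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(inplace=True),
        SEBlock(out_ch),
        nn.AdaptiveAvgPool1d(1),         # collapse the time axis
    )


class MultiHeadHAR(nn.Module):
    def __init__(self, in_ch: int = 3, out_ch: int = 32, n_classes: int = 6):
        super().__init__()
        # Three heads with different receptive fields (kernel sizes assumed).
        self.heads = nn.ModuleList(
            [conv_head(in_ch, out_ch, k) for k in (3, 7, 11)]
        )
        self.classifier = nn.Linear(3 * out_ch, n_classes)

    def forward(self, x):                # x: (batch, channels, time)
        feats = [h(x).flatten(1) for h in self.heads]
        return self.classifier(torch.cat(feats, dim=1))


# Example: a batch of 3-axis inertial windows of 128 samples
# (the window length used by UCI HAR; WISDM uses a similar setup).
model = MultiHeadHAR()
logits = model(torch.randn(8, 3, 128))   # -> shape (8, 6)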

Original language: English
Article number: 107671
Journal: Applied Soft Computing
Volume: 110
Publication status: Published - Oct 2021
Externally published: Yes

Keywords

  • Attention mechanism
  • Convolutional neural network
  • Human activity recognition
  • Inertial sensors
  • Squeeze-and-excitation module

ASJC Scopus subject areas

  • Software
