Abstract
Recurrent neural networks (RNNs) and long short-term memory (LSTM) networks have achieved great success in processing sequential multimedia data and have yielded state-of-the-art results in speech recognition, digital signal processing, video processing, and text data analysis. In this paper, we propose a novel action recognition method that processes video data using a convolutional neural network (CNN) and a deep bidirectional LSTM (DB-LSTM) network. First, deep features are extracted from every sixth frame of the videos, which helps reduce redundancy and complexity. Next, the sequential information among frame features is learned using the DB-LSTM network, where multiple layers are stacked together in both the forward pass and the backward pass of the DB-LSTM to increase its depth. The proposed method is capable of learning long-term sequences and can process lengthy videos by analyzing features over a certain time interval. Experimental results show significant improvements in action recognition using the proposed method on three benchmark data sets, UCF-101, YouTube 11 Actions, and HMDB51, compared with state-of-the-art action recognition methods.
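The pipeline the abstract describes (subsample every sixth frame, extract per-frame CNN features, then stack bidirectional LSTM layers) can be sketched as follows. This is a hypothetical minimal NumPy illustration, not the authors' implementation: the weights are random and untrained, the 512-dim frame features stand in for the output of a pretrained CNN, and the layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update; gate order: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def lstm_layer(seq, H, rng):
    """Run an LSTM over a (T, D) feature sequence; return (T, H) hidden states."""
    D = seq.shape[1]
    W = rng.standard_normal((4 * H, D)) * 0.1   # random untrained weights
    U = rng.standard_normal((4 * H, H)) * 0.1
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    outs = []
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
        outs.append(h)
    return np.stack(outs)

def bidirectional_layer(seq, H, rng):
    """Forward pass plus backward pass over the reversed sequence, concatenated."""
    fwd = lstm_layer(seq, H, rng)
    bwd = lstm_layer(seq[::-1], H, rng)[::-1]
    return np.concatenate([fwd, bwd], axis=1)

# Toy "video": 60 frames of 512-dim features (stand-in for pretrained CNN output).
features = rng.standard_normal((60, 512))
seq = features[::6]                 # keep every sixth frame -> 10 time steps
x = seq
for _ in range(2):                  # two stacked bidirectional layers ("deep" BLSTM)
    x = bidirectional_layer(x, 128, rng)
print(x.shape)                      # (10, 256): 10 sampled frames, 128 fwd + 128 bwd
```

In practice the final per-step (or pooled) representation would feed a softmax classifier over the action classes; the frame subsampling keeps the sequence short enough for the LSTM to cover a long video span.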
| Original language | English |
|---|---|
| Pages (from-to) | 1155-1166 |
| Number of pages | 12 |
| Journal | IEEE Access |
| Volume | 6 |
| DOIs | |
| Publication status | Published - Nov 27 2017 |
| Externally published | Yes |
Keywords
- Action recognition
- convolutional neural network
- deep bidirectional long short-term memory
- deep learning
- recurrent neural network
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering