Speech Emotion Recognition by Late Fusion for Bidirectional Reservoir Computing with Random Projection

Hemin Ibrahim, Chu Kiong Loo, Fady Alnajjar

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)


Speech Emotion Recognition (SER) attracts many researchers because it is a key component of Human-Computer Interaction (HCI). The main focus of this work is to design a model for recognizing emotion from speech, a task that poses many challenges. Because emotion in speech is sparse and evolves over time, we adopt a multivariate time series representation of the input features. We also adopt the Echo State Network (ESN), a reservoir computing approach and a special case of the Recurrent Neural Network (RNN), to avoid model complexity: its reservoir is untrained and sparse while mapping the features into a higher-dimensional space. In addition, we apply dimensionality reduction with Sparse Random Projection (SRP), which offers significant computational savings. Late fusion of bidirectional inputs is applied to capture additional information from the input data independently in each direction. Speaker-independent and/or speaker-dependent experiments were performed on four common speech emotion datasets: Emo-DB, SAVEE, RAVDESS, and the FAU Aibo Emotion Corpus. The results show that the proposed model outperforms the state-of-the-art at a lower computational cost.
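The pipeline the abstract describes can be sketched in a few lines: an untrained reservoir maps a multivariate time series into a high-dimensional state, SRP reduces that state, and the forward and time-reversed passes are fused. This is a minimal illustrative sketch, not the authors' implementation: all hyperparameters (reservoir size, spectral radius, projection dimension, density) are assumptions, and the concatenation here is a simple feature-level stand-in for the paper's late fusion step.

```python
import numpy as np

def reservoir_states(X, n_res=100, spectral_radius=0.9, leak=1.0, seed=0):
    """Run an untrained ESN reservoir over a (T, d) series; return the
    final state. Parameters are illustrative, not the paper's settings."""
    rng = np.random.default_rng(seed)
    T, d = X.shape
    W_in = rng.uniform(-0.5, 0.5, (n_res, d))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    # rescale recurrent weights toward the echo state property
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    h = np.zeros(n_res)
    for x in X:
        h = (1 - leak) * h + leak * np.tanh(W_in @ x + W @ h)
    return h

def sparse_random_projection(h, k, density=0.1, seed=1):
    """Achlioptas-style sparse random projection from len(h) down to k dims."""
    rng = np.random.default_rng(seed)
    s = 1.0 / density
    R = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                   size=(h.shape[0], k),
                   p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return h @ R / np.sqrt(k)

# Bidirectional pass: run the reservoir on the sequence and on its
# time-reversed copy, project each state, then fuse the two views.
rng = np.random.default_rng(42)
X = rng.standard_normal((50, 13))      # e.g. 50 frames of 13 speech features
h_fwd = reservoir_states(X)
h_bwd = reservoir_states(X[::-1])
z = np.concatenate([sparse_random_projection(h_fwd, 20),
                    sparse_random_projection(h_bwd, 20)])
print(z.shape)   # (40,) — fused feature vector for a downstream classifier
```

The untrained reservoir is what keeps the model cheap: only the classifier that consumes `z` would need training.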

Original language: English
Pages (from-to): 122855-122871
Number of pages: 17
Journal: IEEE Access
Publication status: Published - 2021


Keywords

  • Speech emotion recognition
  • random projection
  • recurrent neural network
  • reservoir computing
  • time series classification

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering


