MemoCMT: multimodal emotion recognition using cross-modal transformer-based feature fusion

Mustaqeem Khan, Phuong Nam Tran, Nhat Truong Pham, Abdulmotaleb El Saddik, Alice Othmani

Research output: Contribution to journal › Article › peer-review


Abstract

Speech emotion recognition has seen a surge in transformer models, which excel at understanding the overall message by analyzing long-term patterns in speech. However, these models come at a computational cost. In contrast, convolutional neural networks are faster but struggle to capture such long-range relationships. Our proposed system, MemoCMT, tackles this challenge using a novel “cross-modal transformer” (CMT), which can effectively analyze local and global speech features together with the corresponding text. To boost efficiency, MemoCMT leverages recent advancements in pre-trained models: HuBERT extracts meaningful features from the audio, while BERT analyzes the text. The core innovation lies in how the CMT component utilizes and integrates these audio and text features. After this integration, different fusion techniques are applied before final emotion classification. Experiments show that MemoCMT achieves impressive performance, with the CMT using min aggregation achieving the highest unweighted accuracy (UW-Acc) of 81.33% and 91.93% and weighted accuracy (W-Acc) of 81.85% and 91.84% on the benchmark IEMOCAP and ESD corpora, respectively. These results demonstrate the system's generalization capacity and robustness for real-world industrial applications. Moreover, the implementation details of MemoCMT are publicly available at https://github.com/tpnam0901/MemoCMT/ for reproducibility purposes.
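The pipeline the abstract describes — pre-trained encoders per modality, cross-modal attention between them, then element-wise min aggregation before a classifier — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: random arrays stand in for HuBERT/BERT features, the attention is a single untrained head, and all dimensions and names (`cross_attention`, `d`, the 4-class head) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, d):
    # Single-head cross-attention: queries come from one modality,
    # keys/values from the other. Weights are random stand-ins for
    # trained projections (illustrative only).
    Wq = rng.standard_normal((q_feats.shape[-1], d)) / np.sqrt(d)
    Wk = rng.standard_normal((kv_feats.shape[-1], d)) / np.sqrt(d)
    Wv = rng.standard_normal((kv_feats.shape[-1], d)) / np.sqrt(d)
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))
    return attn @ V

# Stand-ins for HuBERT (audio) and BERT (text) outputs; both encoders
# produce 768-dimensional sequence features in their base variants.
audio = rng.standard_normal((50, 768))   # 50 audio frames
text = rng.standard_normal((12, 768))    # 12 text tokens
d = 64

# Each modality attends to the other (the "cross-modal" part).
text_ctx = cross_attention(text, audio, d)    # (12, d)
audio_ctx = cross_attention(audio, text, d)   # (50, d)

# Pool over time, then fuse with an element-wise minimum — one plausible
# reading of the "min aggregation" fusion named in the abstract.
fused = np.minimum(text_ctx.mean(axis=0), audio_ctx.mean(axis=0))  # (d,)

# Linear head over 4 emotion classes (IEMOCAP is commonly evaluated on
# angry/happy/sad/neutral).
W_cls = rng.standard_normal((d, 4))
logits = fused @ W_cls
print(logits.shape)  # (4,)
```

In a real system the random projections would be trained end-to-end and the pooled fusion would feed a softmax classifier; the sketch only shows how min aggregation combines the two attended modality streams into a single vector.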

Original language: English
Article number: 5473
Journal: Scientific Reports
Volume: 15
Issue number: 1
DOIs
Publication status: Published - Dec 2025

Keywords

  • Cross-modal transformer
  • Deep learning
  • Feature fusion
  • Multimodal emotion recognition
  • Speech emotion recognition

ASJC Scopus subject areas

  • General

