Feature fusion based on joint sparse representations and wavelets for multiview classification

Younes Akbari, Omar Elharrouss, Somaya Al-Maadeed

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Feature-level fusion has attracted much interest. In general, a dataset can be represented through different views, features, or modalities, and various fusion methods improve the classification rate by sharing local information among the views. However, almost all existing methods use the views without considering their common aspects. In this paper, the wavelet transform is used to extract the high- and low-frequency components of the views as common aspects to improve the classification rate. The decomposed parts are then fused by joint sparse representation, within which a number of scenarios can be considered. The presented approach is tested on three datasets, and the results show performance competitive with the state of the art.
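The pipeline described in the abstract has two stages: each view's feature vector is first decomposed by a wavelet transform into low-frequency (approximation) and high-frequency (detail) parts, and the corresponding parts are then fused across views. A minimal sketch of the first stage, using a single-level Haar transform implemented with NumPy (the paper does not specify the wavelet family, and the view data here are hypothetical):

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: split a 1-D feature vector into
    low-frequency (approximation) and high-frequency (detail) parts."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:           # pad to an even length by repeating the last sample
        x = np.append(x, x[-1])
    low = (x[0::2] + x[1::2]) / np.sqrt(2)   # pairwise averages (low frequencies)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)  # pairwise differences (high frequencies)
    return low, high

def decompose_views(views):
    """Decompose every view and group the low- and high-frequency
    coefficients across views -- the 'common aspects' to be fused."""
    lows, highs = zip(*(haar_dwt(v) for v in views))
    return list(lows), list(highs)

# Two hypothetical views of the same sample
view_a = np.array([1.0, 2.0, 3.0, 4.0])
view_b = np.array([4.0, 3.0, 2.0, 1.0])
lows, highs = decompose_views([view_a, view_b])
```

The grouped low-frequency coefficients (and, separately, the high-frequency ones) would then be passed to the joint sparse representation step, which seeks a shared sparsity pattern across the views; that solver is beyond the scope of this sketch.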

Original language: English
Pages (from-to): 645-653
Number of pages: 9
Journal: Pattern Analysis and Applications
Volume: 26
Issue number: 2
DOIs
Publication status: Published - May 2023
Externally published: Yes

Keywords

  • Feature extraction
  • Fusion method
  • Wavelet transform

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
