Abstract
Cross-modal feature matching has gained much attention in recent years and has many practical applications, such as text-to-image retrieval. The most difficult problem in cross-modal matching is how to eliminate the heterogeneity between modalities. Existing methods (e.g., CCA and PLS) try to learn a common latent subspace in which the heterogeneity between the two modalities is minimized so that cross-matching becomes possible. However, most of these methods require fully paired samples and have difficulty dealing with unpaired data. In addition, exploiting class label information has been found to be an effective way to reduce the semantic gap between low-level image features and high-level document descriptions. Considering this, we propose a novel and effective supervised algorithm that can also handle unpaired data. In the proposed formulation, the basis matrices of the different modalities are jointly learned from the training samples. Moreover, a local group-based prior is introduced into the formulation to make better use of popular block-based features (e.g., HOG and GIST). Extensive experiments are conducted on four public databases: Pascal VOC2007, LabelMe, Wikipedia, and NUS-WIDE. We also evaluate the proposed algorithm with unpaired data. Compared with existing state-of-the-art algorithms, the proposed algorithm is more robust and achieves the best performance, outperforming the second-best algorithm by about 5% on both the Pascal VOC2007 and NUS-WIDE databases.
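As context for the subspace-learning baselines the abstract mentions (CCA and PLS), the sketch below shows how two modalities can be projected into a shared latent subspace and matched there. It is a minimal illustration using scikit-learn's CCA on synthetic data; the feature dimensions, toy features, and cosine-similarity ranking are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal CCA baseline sketch: project image and text features into a common
# latent subspace, then rank documents for a query image by cosine similarity.
# Dimensions and random data are placeholders, not the paper's setup.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs = 200                                      # paired training samples
img_feats = rng.standard_normal((n_pairs, 512))    # e.g., block-based GIST/HOG features
txt_feats = rng.standard_normal((n_pairs, 100))    # e.g., document topic features

# Learn projections of both modalities into a 10-dimensional common subspace.
cca = CCA(n_components=10)
cca.fit(img_feats, txt_feats)
img_latent, txt_latent = cca.transform(img_feats, txt_feats)

def cosine_rank(query, gallery):
    """Return gallery indices sorted by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ q))

# Cross-modal retrieval: rank text documents for the first query image.
ranking = cosine_rank(img_latent[0], txt_latent)
print(ranking[:5])
```

Note that this baseline requires fully paired samples to fit the projections, which is exactly the limitation the proposed supervised, unpaired-capable formulation addresses.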
| Original language | English |
|---|---|
| Article number | 7006724 |
| Pages (from-to) | 370-381 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 17 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Mar 1 2015 |
| Externally published | Yes |
Keywords
- Cross-modal matching
- Documents and images
- Multimedia
- Retrieval
ASJC Scopus subject areas
- Signal Processing
- Media Technology
- Computer Science Applications
- Electrical and Electronic Engineering