TY - GEN
T1 - Color models and weighted covariance estimation for person re-identification
AU - Yang, Yang
AU - Liao, Shengcai
AU - Lei, Zhen
AU - Yi, Dong
AU - Li, Stan Z.
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/12/4
Y1 - 2014/12/4
N2 - Person re-identification across disjoint camera views is a challenging problem due to illumination changes, partial occlusions, and differences in object scale. To address this problem, a variety of image representations have been proposed. In this paper, the illumination invariance and distinctiveness of several color models, including the proposed color model, are first evaluated. Because color distributions are robust to changes in image scale and to partial occlusions, distributions computed in the different color models are calculated and fused in the feature extraction stage. Since each color model is robust to a different type of illumination change, fusing them allows the models to compensate for one another and leads to better performance. In the feature matching stage, a weighted KISSME is presented to learn a better distance metric than the original KISSME, and experimental results demonstrate its feasibility and effectiveness. Image pairs are then matched using the learned distance metric. Experiments on two public benchmark datasets (VIPeR and PRID 450S) show that the proposed algorithm outperforms state-of-the-art methods.
AB - Person re-identification across disjoint camera views is a challenging problem due to illumination changes, partial occlusions, and differences in object scale. To address this problem, a variety of image representations have been proposed. In this paper, the illumination invariance and distinctiveness of several color models, including the proposed color model, are first evaluated. Because color distributions are robust to changes in image scale and to partial occlusions, distributions computed in the different color models are calculated and fused in the feature extraction stage. Since each color model is robust to a different type of illumination change, fusing them allows the models to compensate for one another and leads to better performance. In the feature matching stage, a weighted KISSME is presented to learn a better distance metric than the original KISSME, and experimental results demonstrate its feasibility and effectiveness. Image pairs are then matched using the learned distance metric. Experiments on two public benchmark datasets (VIPeR and PRID 450S) show that the proposed algorithm outperforms state-of-the-art methods.
KW - Color models
KW - Illumination invariance
KW - Metric learning
KW - Person re-identification
UR - http://www.scopus.com/inward/record.url?scp=84919913879&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84919913879&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2014.328
DO - 10.1109/ICPR.2014.328
M3 - Conference contribution
AN - SCOPUS:84919913879
T3 - Proceedings - International Conference on Pattern Recognition
SP - 1874
EP - 1879
BT - Proceedings - International Conference on Pattern Recognition
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd International Conference on Pattern Recognition, ICPR 2014
Y2 - 24 August 2014 through 28 August 2014
ER -