TY - GEN
T1 - Clustering and dynamic sampling based unsupervised domain adaptation for person re-identification
AU - Wu, Jinlin
AU - Liao, Shengcai
AU - Lei, Zhen
AU - Wang, Xiaobo
AU - Yang, Yang
AU - Li, Stan Z.
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - Person Re-Identification (Re-ID) has witnessed great improvements due to advances in deep convolutional neural networks (CNNs). Despite this, existing methods mainly suffer from poor generalization to unseen scenes because of the differing characteristics of different domains. To address this issue, a Clustering and Dynamic Sampling (CDS) method is proposed in this paper, which transfers useful knowledge from an existing labeled source domain to an unlabeled target one. Specifically, to improve the discriminability of the CNN model on the source domain, we use commonly shared pedestrian attributes (e.g., gender, hat, and clothing color) to enrich the information and employ a margin-based softmax loss (e.g., A-Softmax) to train the model. For the unlabeled target domain, we iteratively cluster the samples into several centers and dynamically select informative samples from each center to fine-tune the source-domain model. Extensive experiments on the DukeMTMC-reID and Market-1501 datasets show that the proposed method greatly improves on the state of the art in unsupervised domain adaptation.
AB - Person Re-Identification (Re-ID) has witnessed great improvements due to advances in deep convolutional neural networks (CNNs). Despite this, existing methods mainly suffer from poor generalization to unseen scenes because of the differing characteristics of different domains. To address this issue, a Clustering and Dynamic Sampling (CDS) method is proposed in this paper, which transfers useful knowledge from an existing labeled source domain to an unlabeled target one. Specifically, to improve the discriminability of the CNN model on the source domain, we use commonly shared pedestrian attributes (e.g., gender, hat, and clothing color) to enrich the information and employ a margin-based softmax loss (e.g., A-Softmax) to train the model. For the unlabeled target domain, we iteratively cluster the samples into several centers and dynamically select informative samples from each center to fine-tune the source-domain model. Extensive experiments on the DukeMTMC-reID and Market-1501 datasets show that the proposed method greatly improves on the state of the art in unsupervised domain adaptation.
KW - A-Softmax
KW - Clustering
KW - Dynamic sampling
KW - Pedestrian attributes
UR - http://www.scopus.com/inward/record.url?scp=85071049978&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071049978&partnerID=8YFLogxK
U2 - 10.1109/ICME.2019.00157
DO - 10.1109/ICME.2019.00157
M3 - Conference contribution
AN - SCOPUS:85071049978
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
SP - 886
EP - 891
BT - Proceedings - 2019 IEEE International Conference on Multimedia and Expo, ICME 2019
PB - IEEE Computer Society
T2 - 2019 IEEE International Conference on Multimedia and Expo, ICME 2019
Y2 - 8 July 2019 through 12 July 2019
ER -