discrete and static dilation rate. Extensive experiments illustrate that our
approach achieves a consistent improvement on four challenging
benchmarks. In particular, our approach outperforms the
state-of-the-art methods on all benchmark datasets.
9. Camera On-Boarding for Person Re-Identification Using Hypothesis Transfer
Learning
Abstract: Most of the existing approaches for person re-identification
consider a static setting where the number of cameras in the network is fixed. An
interesting direction, which has received little attention, is to explore the dynamic
nature of a camera network, where one tries to adapt the existing re-identification
models after on-boarding new cameras, with little additional effort. There have
been a few recent methods proposed in person re-identification that attempt to
address this problem by assuming that the labeled data in the existing network
remains available when new cameras are added. This is a strong assumption,
since privacy concerns may preclude access to those data.
Rather, based on the fact that it is easy to store the learned re-identification
models, which mitigates any data privacy concern, we develop an efficient model
adaptation approach using hypothesis transfer learning that aims to transfer the
knowledge using only source models and limited labeled data, but without using
any source camera data from the existing network. Our approach minimizes the
effect of negative transfer by finding an optimal weighted combination of multiple
source models for transferring the knowledge. Extensive experiments on four
challenging benchmark datasets with varying numbers of cameras
demonstrate the efficacy of our proposed approach over state-of-the-art methods.
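The weighted combination of source models described above can be sketched as follows. This is a simplified illustration, not the paper's actual optimization: it assumes each source model produces a similarity score for a target pair, and it fits convex combination weights by minimizing a squared error on the small labeled target set via softmax-parameterized gradient descent. The function name and loss choice are illustrative assumptions.

```python
import numpy as np

def fit_source_weights(source_preds, target_labels, steps=200, lr=0.5):
    """Learn a convex combination of source-model predictions on a small
    labeled target set (a hypothesis-transfer-style sketch; the paper's
    actual objective differs -- this uses a plain least-squares loss).

    source_preds:  (K, N) array, predictions of K source models on N
                   labeled target pairs
    target_labels: (N,) array of ground-truth match labels (0/1)
    """
    K = source_preds.shape[0]
    logits = np.zeros(K)  # softmax-parameterized weights, start uniform
    for _ in range(steps):
        w = np.exp(logits) / np.exp(logits).sum()   # weights on the simplex
        combined = w @ source_preds                  # weighted prediction
        err = combined - target_labels
        # gradient of the mean squared error w.r.t. the weights ...
        grad_w = 2.0 * (source_preds @ err) / len(target_labels)
        # ... pushed through the softmax Jacobian to the logits
        jac = np.diag(w) - np.outer(w, w)
        logits -= lr * (jac @ grad_w)
    return np.exp(logits) / np.exp(logits).sum()
```

Because the weights always lie on the simplex, an uninformative or anti-correlated source model is driven toward weight zero, which is one simple way to limit negative transfer.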
10. Hierarchical Clustering With Hard-Batch Triplet Loss for Person Re-Identification
Abstract: For clustering-guided fully unsupervised person re-identification
(re-ID) methods, the quality of the pseudo labels generated by clustering directly
decides the model performance. In order to improve the quality of pseudo
labels in existing methods, we propose the HCT method which combines
hierarchical clustering with hard-batch triplet loss. The key idea of HCT is to
make full use of the similarity among samples in the target dataset through
hierarchical clustering and to reduce the influence of hard examples through
hard-batch triplet loss, so as to generate high-quality pseudo labels and improve
model performance. Specifically, (1) we use hierarchical clustering to
generate pseudo labels, (2) we use PK sampling in each iteration to generate
a new dataset for training, (3) we conduct training with hard-batch triplet loss
and evaluate model performance in each iteration. We evaluate our model on
Market-1501 and DukeMTMC-reID. Results show that HCT achieves 56.4%
mAP on Market-1501 and 50.7% mAP on DukeMTMC-reID, which substantially
surpasses the state of the art in fully unsupervised re-ID and even outperforms
most unsupervised domain adaptation (UDA) methods that use a labeled
source dataset. Code will be released soon on
https://github.com/zengkaiwei/HCT.