Person re-identification (person re-id) aims to match observations of pedestrians across different cameras. It is a challenging task in real-world surveillance systems and has drawn extensive attention from the community. Most existing methods are based on supervised learning, which requires a large amount of labeled data. In this paper, we develop a robust unsupervised learning approach for person re-id. We propose an improved Bag-of-Words (iBoW) model to describe and match pedestrians under different camera views. The proposed descriptor does not require any re-id labels and is robust against pedestrian variations. Experiments show that the proposed iBoW descriptor outperforms other unsupervised methods. Combined with efficient metric learning algorithms, it achieves competitive accuracy compared to existing state-of-the-art methods on person re-identification benchmarks, including VIPeR, PRID450S, and Market1501.
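The abstract does not detail the iBoW construction itself; as a rough illustration of the generic Bag-of-Words pipeline it improves upon, the sketch below quantizes densely sampled image patches against a k-means codebook and pools the assignments into a normalized histogram. The function names, patch size, stride, and codebook size are all illustrative assumptions, not the authors' method.

```python
# Minimal generic Bag-of-Words image descriptor (illustrative, NOT the
# authors' iBoW): dense patches -> k-means visual words -> normalized
# histogram. Patch/codebook parameters are arbitrary choices.
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(img, size=8, stride=4):
    """Densely sample square patches and flatten each into a vector."""
    h, w = img.shape[:2]
    patches = [img[y:y + size, x:x + size].reshape(-1)
               for y in range(0, h - size + 1, stride)
               for x in range(0, w - size + 1, stride)]
    return np.asarray(patches, dtype=np.float32)

def train_codebook(train_images, n_words=256):
    """Learn a visual vocabulary by clustering patches from training images."""
    feats = np.vstack([extract_patches(im) for im in train_images])
    return KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(feats)

def bow_descriptor(img, codebook):
    """Quantize patches to visual words; return an L2-normalized histogram."""
    words = codebook.predict(extract_patches(img))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-12)
```

Matching then reduces to comparing histograms across camera views, e.g. by cosine similarity, optionally after a learned metric as the abstract suggests.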
Person re-identification (Person ReID) refers to using computer vision techniques to recognize a specific pedestrian, observed in the video of one camera, when he or she reappears at a different time and location in another camera, or to retrieve a specific pedestrian from an image or video database. Person re-identification research is driven by strong practical demand, has potential applications in public security, new retail, and human-computer interaction, and holds significant theoretical research value in machine learning and computer vision. Pedestrian images exhibit complex variations in pose, viewpoint, illumination, and imaging quality, as well as a certain degree of occlusion, so person re-identification faces substantial technical challenges. In recent years, academia and industry have invested enormous effort and resources in this problem and made notable progress: the mean average precision (mAP) on multiple datasets has improved considerably, and some systems have begun to be deployed in practice. Nevertheless, current person re-identification research still focuses mainly on clothing-appearance features and lacks an explicit multi-view observation and description of pedestrian appearance, which is inconsistent with the mechanism of human observation. This paper aims to break the existing task setting of person re-identification and form a comprehensive observational description of pedestrians. To advance person re-identification research, building on our earlier re-identification work, we propose the concept of portrait state computing (ReID2.0). Portrait state computing observes and describes the static attributes and apparent-motion states of portraits from multiple views through four states: image state (像态), form state (形态), expression state (神态), and intention state (意态). We construct a new benchmark dataset, Portrait250K, which contains 250,000 portraits with 8 kinds of manually annotated labels corresponding to 8 subtasks, and we propose a new evaluation metric. The proposed portrait state computing forms a comprehensive observational description of pedestrians from multi-view appearance information, providing a reference for further research on person re-identification 2.0 and human-like intelligent agents.
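Since the abstract cites mean average precision (mAP) as the headline benchmark metric, a minimal sketch of how mAP is typically computed for retrieval-style re-identification evaluation may help; the function names and toy inputs below are illustrative assumptions.

```python
# Minimal mAP computation for retrieval evaluation (illustrative): for
# each query, the gallery is ranked by similarity, average precision is
# computed over the ranked ground-truth match flags, then averaged.
import numpy as np

def average_precision(ranked_match_flags):
    """AP for one query given a boolean array over the ranked gallery."""
    flags = np.asarray(ranked_match_flags, dtype=bool)
    if not flags.any():
        return 0.0
    hits = np.cumsum(flags)                             # running correct count
    prec_at_hit = hits[flags] / (np.flatnonzero(flags) + 1)  # precision@k at each hit
    return float(prec_at_hit.mean())

def mean_average_precision(all_ranked_flags):
    """mAP over a list of per-query ranked match indicators."""
    return float(np.mean([average_precision(f) for f in all_ranked_flags]))

# Toy example: two queries; True marks a correct gallery match at that rank.
print(mean_average_precision([[True, False, True],
                              [False, True, False]]))  # -> ~0.6667
```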