Paper Title
Adversarial Multi-scale Feature Learning for Person Re-identification
Paper Authors
Paper Abstract
Person Re-identification (Person ReID) is an important topic in intelligent surveillance and computer vision. It aims to accurately measure visual similarities between person images in order to determine whether two images correspond to the same person. The key to accurately measuring visual similarity is learning discriminative features, which not only capture clues from different spatial scales but also support joint inference across multiple scales, with the ability to determine the reliability and ID-relevance of each clue. To achieve these goals, we propose to improve Person ReID performance from two perspectives: \textbf{1).} Multi-scale feature learning (MSFL), which consists of Cross-scale information propagation (CSIP) and Multi-scale feature fusion (MSFF), to dynamically fuse features across different scales. \textbf{2).} Multi-scale gradient regularizer (MSGR), to emphasize ID-related factors and suppress irrelevant ones in an adversarial manner. Combining MSFL and MSGR, our method achieves state-of-the-art performance on four commonly used Person ReID datasets with negligible test-time computation overhead.
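The abstract does not detail how CSIP, MSFF, or MSGR are implemented. As a rough illustration of the two ideas it names, the PyTorch sketch below pairs an input-dependent fusion of per-scale features with a gradient-reversal layer, a common way to realize adversarial regularization. All module names, shapes, and hyper-parameters (e.g. `DynamicScaleFusion`, `GradReverse`, `dim=256`) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: names and shapes are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal, a standard trick for adversarial feature learning."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient sign so the feature extractor is pushed to
        # suppress factors an auxiliary (ID-irrelevant) head could exploit.
        return -ctx.lambd * grad_output, None


class DynamicScaleFusion(nn.Module):
    """Fuse per-scale feature vectors with learned, input-dependent weights."""

    def __init__(self, dim, num_scales):
        super().__init__()
        self.gate = nn.Linear(dim * num_scales, num_scales)

    def forward(self, feats):
        # feats: list of (batch, dim) tensors, one per spatial scale.
        stacked = torch.stack(feats, dim=1)                              # (batch, num_scales, dim)
        weights = F.softmax(self.gate(torch.cat(feats, dim=1)), dim=1)   # (batch, num_scales)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)              # (batch, dim)


# Usage: fuse three per-scale embeddings of a person image into one descriptor,
# then pass it through gradient reversal before an auxiliary head.
fusion = DynamicScaleFusion(dim=256, num_scales=3)
feats = [torch.randn(8, 256) for _ in range(3)]
fused = fusion(feats)                          # (8, 256)
adv_input = GradReverse.apply(fused, 1.0)      # gradients to the backbone are negated
```

The softmax gate makes the relative contribution of each scale depend on the input image, which is one simple way to read "dynamically fuse features across different scales"; the actual MSFF/MSGR designs in the paper may differ.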