Paper Title
Image-Specific Information Suppression and Implicit Local Alignment for Text-based Person Search
Paper Authors
Paper Abstract
Text-based person search (TBPS) is a challenging task that aims to retrieve pedestrian images with the same identity from an image gallery given a query text. In recent years, TBPS has made remarkable progress, and state-of-the-art methods achieve superior performance by learning local fine-grained correspondence between images and texts. However, most existing methods rely on explicitly generated local parts to model fine-grained correspondence between modalities, which is unreliable due to the lack of contextual information or the potential introduction of noise. Moreover, existing methods seldom consider the information inequality problem between modalities caused by image-specific information. To address these limitations, we propose an efficient joint Multi-level Alignment Network (MANet) for TBPS, which learns aligned image/text feature representations at multiple levels and enables fast and effective person search. Specifically, we first design an image-specific information suppression module, which suppresses image background and environmental factors via relation-guided localization and channel attention filtration, respectively. This module effectively alleviates the information inequality problem and aligns the amount of information carried by images and texts. Second, we propose an implicit local alignment module that adaptively aggregates all pixel/word features of an image/text into a set of modality-shared semantic topic centers, implicitly learning local fine-grained correspondence between modalities without additional supervision or cross-modal interaction. In addition, a global alignment is introduced as a complement to the local perspective. The cooperation of the global and local alignment modules enables better semantic alignment between modalities. Extensive experiments on multiple datasets demonstrate the effectiveness and superiority of our MANet.
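To make the "channel attention filtration" idea concrete, here is a minimal PyTorch sketch in the spirit of squeeze-and-excitation gating: per-channel gates let the network down-weight channels carrying image-specific environmental information. This is an illustrative approximation under assumed names and hyperparameters, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttentionFilter(nn.Module):
    """Hypothetical channel gating for suppressing environment-specific
    channels in an image feature map (a sketch, not the paper's code)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) image feature map
        squeezed = x.mean(dim=(2, 3))            # global average pool -> (B, C)
        weights = self.gate(squeezed)            # per-channel gates in (0, 1)
        return x * weights.unsqueeze(-1).unsqueeze(-1)
```

Gates near zero effectively filter a channel out, which is one plausible way to realize the suppression of image-specific information described above.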
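The implicit local alignment module can be sketched similarly: a set of K learnable, modality-shared centers softly attends over pixel or word tokens, yielding K local features per modality that are compared center-by-center, with no cross-modal interaction and no part-level supervision. The following is a hedged sketch under assumed shapes and names, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitLocalAlignment(nn.Module):
    """Hypothetical aggregation of pixel/word features onto K
    modality-shared semantic topic centers (illustrative only)."""

    def __init__(self, dim: int = 512, num_centers: int = 8):
        super().__init__()
        # Shared across modalities, learned end-to-end.
        self.centers = nn.Parameter(torch.randn(num_centers, dim) * 0.02)

    def aggregate(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) pixel or word features from one modality
        logits = tokens @ self.centers.t()        # (B, N, K) token-center affinity
        assign = F.softmax(logits, dim=1)         # soft assignment over tokens
        local = assign.transpose(1, 2) @ tokens   # (B, K, D) one feature per center
        return F.normalize(local, dim=-1)

    def forward(self, img_tokens: torch.Tensor, txt_tokens: torch.Tensor) -> torch.Tensor:
        img_local = self.aggregate(img_tokens)    # (B, K, D)
        txt_local = self.aggregate(txt_tokens)    # (B, K, D)
        # Center-wise cosine similarity, averaged over the K centers.
        return (img_local * txt_local).sum(-1).mean(-1)   # (B,)
```

Because each modality attends to the shared centers independently, gallery features can be precomputed offline and matching reduces to a dot product, which is consistent with the fast retrieval the abstract claims.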