Paper Title

ReDFeat: Recoupling Detection and Description for Multimodal Feature Learning

Paper Authors

Yuxin Deng, Jiayi Ma

Paper Abstract

Deep-learning-based local feature extraction algorithms that combine detection and description have made significant progress in visible image matching. However, end-to-end training of such frameworks is notoriously unstable due to the lack of strong supervision of detection and the inappropriate coupling between detection and description. The problem is magnified in cross-modal scenarios, in which most methods rely heavily on pre-training. In this paper, we recouple independent constraints on detection and description in multimodal feature learning with a mutual weighting strategy, in which the detection probabilities of robust features are forced to peak and repeat, while features with high detection scores are emphasized during optimization. Unlike previous works, these weights are detached from backpropagation, so the detection probabilities of indistinct features are not directly suppressed and training is more stable. Moreover, we propose the Super Detector, a detector that possesses a large receptive field and is equipped with learnable non-maximum suppression layers, to fulfill the harsh terms of detection. Finally, we build a benchmark containing cross-modal visible, infrared, near-infrared, and synthetic aperture radar image pairs for evaluating the performance of features in feature matching and image registration tasks. Extensive experiments demonstrate that features trained with the recoupled detection and description, named ReDFeat, surpass previous state-of-the-art methods on the benchmark, while the model can readily be trained from scratch.
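The mutual weighting idea in the abstract can be sketched as follows. This is a minimal illustration under assumed loss forms, not the authors' implementation: the function name, the exponential descriptor weight, and the toy loss terms are hypothetical. In a PyTorch training loop the weights would be `.detach()`ed tensors; here NumPy arrays stand in for those detached (constant) weights, so the key property is preserved: weights scale each term's contribution but receive no gradient, meaning indistinct features are de-emphasized rather than actively suppressed.

```python
import numpy as np

def mutual_weighted_losses(det_prob, desc_dist):
    """Hypothetical sketch of a mutual weighting strategy.

    det_prob:  (N,) detection probabilities for N candidate features
    desc_dist: (N,) descriptor matching distances (lower = more robust)
    """
    # Detached weights: treated as constants during optimization.
    # The description loss is weighted by detection confidence, and
    # the detection loss is weighted by descriptor robustness.
    w_det = det_prob.copy()          # emphasize confidently detected features
    w_desc = np.exp(-desc_dist)      # emphasize features with robust descriptors

    desc_loss = np.sum(w_det * desc_dist) / np.sum(w_det)
    det_loss = np.sum(w_desc * (1.0 - det_prob)) / np.sum(w_desc)
    return det_loss, desc_loss

# Toy example: three candidate features.
det_prob = np.array([0.9, 0.2, 0.7])
desc_dist = np.array([0.1, 0.8, 0.3])
det_loss, desc_loss = mutual_weighted_losses(det_prob, desc_dist)
```

In a framework with automatic differentiation, gradients would flow through `desc_dist` in `desc_loss` and through `det_prob` in `det_loss`, but never through the weights themselves, which is what keeps training stable in the paper's account.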
