Paper Title


RaP-Net: A Region-wise and Point-wise Weighting Network to Extract Robust Features for Indoor Localization

Authors

Dongjiang Li, Jinyu Miao, Xuesong Shi, Yuxin Tian, Qiwei Long, Tianyu Cai, Ping Guo, Hongfei Yu, Wei Yang, Haosong Yue, Qi Wei, Fei Qiao

Abstract


Feature extraction plays an important role in visual localization. Unreliable features on dynamic objects or in repetitive regions can greatly interfere with feature matching and challenge indoor localization. To address this problem, we propose a novel network, RaP-Net, that simultaneously predicts region-wise invariability and point-wise reliability, and then extracts features by considering both. We also introduce a new dataset, named OpenLORIS-Location, to train the proposed network. The dataset contains 1,553 images from 93 indoor locations. It includes various appearance changes between images of the same location, which help the model learn invariability in typical indoor scenes. Experimental results show that RaP-Net trained with the OpenLORIS-Location dataset achieves excellent performance on the feature matching task and significantly outperforms state-of-the-art feature algorithms in indoor localization. The RaP-Net code and dataset are available at https://github.com/ivipsourcecode/RaP-Net.
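The abstract describes extracting features by combining a region-wise invariability map with point-wise reliability scores. The exact combination used by RaP-Net is not given here; the sketch below is a minimal, illustrative assumption in NumPy, where the two per-pixel maps are simply multiplied and the highest-scoring points are kept, so that otherwise reliable keypoints inside a down-weighted region (e.g. on a dynamic object) are suppressed. The function name and the multiplicative form are hypothetical, not from the paper.

```python
import numpy as np

def select_weighted_keypoints(point_scores, region_weights, top_k=4):
    """Illustrative joint weighting (assumed form, not RaP-Net's actual one).

    point_scores:   (H, W) per-pixel keypoint reliability map.
    region_weights: (H, W) per-pixel region invariability map, e.g.
                    upsampled from a coarse region-wise prediction.
    Returns the (row, col) coordinates of the top_k weighted points.
    """
    combined = point_scores * region_weights      # joint score per pixel
    flat = np.argsort(combined, axis=None)[::-1][:top_k]  # best first
    return np.stack(np.unravel_index(flat, combined.shape), axis=1)

# Toy example: a highly reliable point (0.9) falls in a low-invariability
# region (weight 0.1, e.g. a dynamic object) and is therefore rejected.
scores = np.array([[0.9, 0.1],
                   [0.8, 0.7]])
weights = np.array([[0.1, 1.0],
                    [1.0, 1.0]])
pts = select_weighted_keypoints(scores, weights, top_k=2)
```

Here `pts` keeps the two points in the fully weighted region, demonstrating how region-wise weighting can veto point-wise confidence.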
