Title
Distinctive 3D local deep descriptors
Authors
Abstract
We present a simple yet effective method for learning distinctive 3D local deep descriptors (DIPs) that can be used to register point clouds without requiring an initial alignment. Point cloud patches are extracted, canonicalised with respect to their estimated local reference frame and encoded into rotation-invariant compact descriptors by a PointNet-based deep neural network. DIPs can effectively generalise across different sensor modalities because they are learnt end-to-end from locally and randomly sampled points. Because DIPs encode only local geometric information, they are robust to clutter, occlusions and missing regions. We evaluate and compare DIPs against alternative hand-crafted and deep descriptors on several indoor and outdoor datasets consisting of point clouds reconstructed using different sensors. Results show that DIPs (i) achieve results comparable to the state of the art on RGB-D indoor scenes (3DMatch dataset), (ii) outperform the state of the art by a large margin on laser-scanner outdoor scenes (ETH dataset), and (iii) generalise to indoor scenes reconstructed with the Visual-SLAM system of Android ARCore. Source code: https://github.com/fabiopoiesi/dip.
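The abstract describes extracting local patches and canonicalising them with respect to an estimated local reference frame (LRF) before encoding, which is what makes the descriptors rotation-invariant. A minimal NumPy sketch of that idea follows; it uses a plain PCA-based LRF (eigenvectors of the patch covariance) as an illustrative stand-in, not the specific LRF estimator or the PointNet encoder of the paper, and all function names here are hypothetical:

```python
import numpy as np

def extract_patch(cloud, center, radius):
    # Keep the points within `radius` of the patch center.
    d = np.linalg.norm(cloud - center, axis=1)
    return cloud[d < radius]

def canonicalise(patch):
    # Illustrative LRF: eigenvectors of the patch covariance give three
    # orthogonal axes; expressing the centred patch in those axes removes
    # the dependence on the cloud's original orientation.
    centred = patch - patch.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centred.T @ centred)
    return centred @ eigvecs  # patch expressed in its own LRF

# Toy check: a patch and its rotated copy canonicalise to the same shape,
# up to per-axis sign flips (eigenvectors are defined up to sign).
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))
patch = extract_patch(cloud, cloud[0], 1.0)

q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(q) < 0:
    q[:, 0] *= -1  # ensure a proper rotation, not a reflection
rot_patch = extract_patch(cloud @ q.T, cloud[0] @ q.T, 1.0)

a = np.abs(canonicalise(patch))      # abs() absorbs eigenvector sign flips
b = np.abs(canonicalise(rot_patch))
print(np.allclose(a, b, atol=1e-5))
```

A learned network applied after this canonicalisation step only ever sees orientation-normalised input, which is why descriptors computed from two differently oriented scans of the same surface can still match.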