Paper Title
Self-Supervised Few-Shot Learning on Point Clouds
Paper Authors
Paper Abstract
The increased availability of massive point clouds, coupled with their utility in a wide variety of applications such as robotics, shape synthesis, and self-driving cars, has attracted increased attention from both industry and academia. Recently, deep neural networks operating on labeled point clouds have shown promising results on supervised learning tasks like classification and segmentation. However, supervised learning leads to the cumbersome task of annotating point clouds. To combat this problem, we propose two novel self-supervised pre-training tasks that encode a hierarchical partitioning of the point clouds using a cover-tree, where point cloud subsets lie within balls of varying radii at each level of the cover-tree. Furthermore, our self-supervised learning network is restricted to pre-training on the support set (comprising scarce training examples) used to train the downstream network in a few-shot learning (FSL) setting. Finally, the fully trained self-supervised network's point embeddings serve as input to the downstream task's network. We present a comprehensive empirical evaluation of our method on both downstream classification and segmentation tasks and show that supervised methods pre-trained with our self-supervised learning method significantly improve the accuracy of state-of-the-art methods. Our method also outperforms previous unsupervised methods on the downstream classification task.
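
To make the cover-tree partitioning concrete, the following is a minimal Python sketch (not the paper's implementation) of the hierarchical ball decomposition the abstract describes: at each level the point cloud is greedily covered by balls whose radius halves from the previous level, so every subset at a deeper level lies inside a smaller ball, mirroring a cover tree's 2^i radius schedule. All function and variable names (partition_into_balls, build_hierarchy) are illustrative assumptions, not identifiers from the paper.

import numpy as np

def partition_into_balls(points, radius):
    # Greedily cover `points` (an N x 3 array) with balls of the given
    # radius; returns a list of (center, member points) subsets.
    remaining = points.copy()
    subsets = []
    while len(remaining) > 0:
        center = remaining[0]  # pick any uncovered point as a ball center
        dists = np.linalg.norm(remaining - center, axis=1)
        inside = dists <= radius  # points covered by this ball
        subsets.append((center, remaining[inside]))
        remaining = remaining[~inside]  # recurse on the uncovered points
    return subsets

def build_hierarchy(points, levels=3):
    # Level-wise partition: the covering radius halves at each deeper
    # level, so subsets shrink as in the cover-tree construction.
    radius = np.max(np.linalg.norm(points - points.mean(axis=0), axis=1))
    hierarchy = []
    current = [(points.mean(axis=0), points)]
    for _ in range(levels):
        radius /= 2.0
        next_level = []
        for _, subset in current:
            next_level.extend(partition_into_balls(subset, radius))
        hierarchy.append(next_level)
        current = next_level
    return hierarchy

if __name__ == "__main__":
    cloud = np.random.rand(1024, 3)  # toy point cloud
    for i, level in enumerate(build_hierarchy(cloud, levels=3)):
        print(f"level {i}: {len(level)} balls")

In the paper's setting, the ball membership produced by such a decomposition supplies the labels for the self-supervised pretext tasks; the sketch above only illustrates the partitioning itself, under the stated simplifying assumptions.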