Paper Title


ONeRF: Unsupervised 3D Object Segmentation from Multiple Views

Authors

Shengnan Liang, Yichen Liu, Shangzhe Wu, Yu-Wing Tai, Chi-Keung Tang

Abstract


We present ONeRF, a method that automatically segments and reconstructs object instances in 3D from multi-view RGB images without any additional manual annotations. The segmented 3D objects are represented using separate Neural Radiance Fields (NeRFs) which allow for various 3D scene editing and novel view rendering. At the core of our method is an unsupervised approach using the iterative Expectation-Maximization algorithm, which effectively aggregates 2D visual features and the corresponding 3D cues from multiple views for joint 3D object segmentation and reconstruction. Unlike existing approaches that can only handle simple objects, our method produces segmented full 3D NeRFs of individual objects with complex shapes, topologies and appearance. The segmented ONeRFs enable a range of 3D scene editing, such as object transformation, insertion and deletion.
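The abstract's core idea, alternating between soft object assignments (E-step) and model re-estimation (M-step), can be illustrated with a toy EM-style clustering loop over per-pixel feature vectors. The sketch below is a hypothetical NumPy illustration of that alternation, not the actual ONeRF pipeline: the paper jointly aggregates 2D visual features with 3D cues across views and fits a separate NeRF per object, whereas this example only soft-clusters generic feature vectors. All function and variable names here are illustrative.

```python
import numpy as np

def em_feature_clustering(features, k=2, iters=20):
    """Toy EM-style loop (illustrative, not ONeRF itself):
    E-step: soft-assign each feature vector to k clusters;
    M-step: re-estimate cluster centers from the soft assignments."""
    n, _ = features.shape
    # Deterministic init: spread initial centers across the data rows
    centers = features[np.linspace(0, n - 1, k).astype(int)].copy()
    for _ in range(iters):
        # E-step: responsibilities from squared distances to each center
        d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        resp = np.exp(-d2)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means become the new centers
        centers = (resp.T @ features) / resp.sum(axis=0)[:, None]
    return resp, centers

# Toy usage: two well-separated synthetic "feature" clusters
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0.0, 0.1, (50, 3)),
                        rng.normal(5.0, 0.1, (50, 3))])
resp, centers = em_feature_clustering(feats, k=2)
labels = resp.argmax(axis=1)  # hard segmentation from soft responsibilities
```

In ONeRF the analogous alternation operates on multi-view evidence, so each "cluster" corresponds to an object instance whose geometry and appearance are then represented by its own NeRF.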
