Paper Title

Scene-level Tracking and Reconstruction without Object Priors

Paper Authors

Haonan Chang, Abdeslam Boularias

Paper Abstract

We present the first real-time system capable of tracking and reconstructing, individually, every visible object in a given scene, without any form of prior on the rigidity of the objects, the existence of texture, or object category. In contrast with previous methods such as Co-Fusion and MaskFusion, which first segment the scene into individual objects and then process each object independently, the proposed method dynamically segments the non-rigid scene as part of the tracking and reconstruction process. When new measurements indicate a topology change, the reconstructed models are updated in real time to reflect that change. The proposed system provides the live geometry and deformation of all visible objects in a novel scene in real time, which allows it to be integrated seamlessly into numerous existing robotics applications that rely on object models for grasping and manipulation. The capabilities of the proposed system are demonstrated in challenging scenes containing multiple rigid and non-rigid objects.
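
The abstract implies a fixed per-frame ordering: joint non-rigid tracking, segmentation performed inside the tracking/reconstruction loop rather than as preprocessing, topology-change handling, and fusion. As a reading aid only, below is a minimal hypothetical sketch of that loop; every function name (track_scene, segment_scene, topology_changed, split_or_merge, fuse) is an invented placeholder and does not come from the paper.

```python
# A minimal, hypothetical sketch of the per-frame loop the abstract
# describes. Every function and name below is an invented placeholder,
# not the paper's actual API or implementation.

def process_frame(model, depth_frame):
    """One iteration: track, segment, handle topology changes, fuse."""
    # Track the whole scene non-rigidly, with no priors on rigidity,
    # texture existence, or object category.
    deformation = track_scene(model, depth_frame)

    # Segment dynamically as part of tracking/reconstruction, rather
    # than segmenting first and processing each object independently
    # (the contrast the abstract draws with Co-Fusion and MaskFusion).
    segments = segment_scene(model, deformation)

    # When new measurements indicate a topology change, update the
    # reconstructed models in real time to reflect it.
    if topology_changed(segments, depth_frame):
        model = split_or_merge(model, segments)

    # Fuse the measurement so the live geometry and deformation of all
    # visible objects stay available to downstream grasping pipelines.
    return fuse(model, depth_frame, deformation)


# Trivial stubs so the sketch executes; a real system replaces these.
def track_scene(model, frame): return {}
def segment_scene(model, deformation): return []
def topology_changed(segments, frame): return False
def split_or_merge(model, segments): return model
def fuse(model, frame, deformation): return model
```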
