Paper Title
UCLID-Net: Single View Reconstruction in Object Space
Paper Authors
Paper Abstract
Most state-of-the-art deep geometric learning single-view reconstruction approaches rely on encoder-decoder architectures that output either shape parametrizations or implicit representations. However, these representations rarely preserve the Euclidean structure of the 3D space objects exist in. In this paper, we show that building a geometry-preserving 3-dimensional latent space helps the network concurrently learn global shape regularities and local reasoning in the object coordinate space and, as a result, boosts performance. We demonstrate, both on ShapeNet synthetic images, which are often used for benchmarking purposes, and on real-world images, that our approach outperforms state-of-the-art ones. Furthermore, the single-view pipeline naturally extends to multi-view reconstruction, which we also show.
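To illustrate the contrast the abstract draws, the following is a minimal, hypothetical sketch (not the paper's actual UCLID-Net architecture): a conventional encoder collapses the image into a flat latent vector, while a geometry-preserving alternative keeps a 3D feature grid on which 3D convolutions can reason in object-coordinate space. All module names, layer sizes, and the naive depth-replication lift are assumptions for illustration only.

```python
# Hypothetical sketch: flat latent vector vs. geometry-preserving 3D feature grid.
# The lifting below simply repeats 2D features along a depth axis; the paper's
# method builds its latent space with proper camera geometry instead.
import torch
import torch.nn as nn


class FlatLatentEncoder(nn.Module):
    """Typical encoder-decoder baseline: image -> single latent vector (no 3D structure)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, img):                          # img: (B, 3, H, W)
        return self.fc(self.conv(img).flatten(1))    # (B, latent_dim)


class GridLatentEncoder(nn.Module):
    """Geometry-preserving alternative: image -> (B, C, D, H', W') feature grid,
    processed by 3D convolutions that operate in object-coordinate space."""
    def __init__(self, feat_dim=32, depth=16):
        super().__init__()
        self.depth = depth
        self.conv2d = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.conv3d = nn.Sequential(
            nn.Conv3d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_dim, 1, 3, padding=1),      # e.g. per-voxel occupancy logit
        )

    def forward(self, img):                            # img: (B, 3, H, W)
        f2d = self.conv2d(img)                         # (B, C, H/4, W/4)
        # Naive lift: copy 2D features along a depth axis to form a 3D grid.
        f3d = f2d.unsqueeze(2).expand(-1, -1, self.depth, -1, -1)
        return self.conv3d(f3d)                        # (B, 1, D, H/4, W/4)


if __name__ == "__main__":
    img = torch.randn(2, 3, 64, 64)
    print(FlatLatentEncoder()(img).shape)   # torch.Size([2, 256])
    print(GridLatentEncoder()(img).shape)   # torch.Size([2, 1, 16, 16, 16])
```

The flat latent vector discards where in 3D space each feature came from, whereas the grid keeps an explicit spatial layout, which is the kind of Euclidean structure the abstract argues helps combine global shape regularities with local reasoning.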