Paper Title
Depth Map Estimation of Dynamic Scenes Using Prior Depth Information
Paper Authors
Paper Abstract
Depth information is useful for many applications. Active depth sensors are appealing because they obtain dense and accurate depth maps. However, due to issues ranging from power constraints to multi-sensor interference, these sensors cannot always be used continuously. To overcome this limitation, we propose an algorithm that estimates depth maps using concurrently collected images and a previously measured depth map for dynamic scenes, where both the camera and objects in the scene may be moving independently. To estimate depth in these scenarios, our algorithm models the dynamic scene motion using independent and rigid motions. It then uses the previous depth map to efficiently estimate these rigid motions and obtain a new depth map. Our goal is to balance the acquisition of depth between the active depth sensor and computation without incurring a large computational cost. Thus, we leverage the prior depth information to avoid computationally expensive operations like the dense optical flow estimation or segmentation used in similar approaches. Our approach can obtain dense depth maps at up to real-time rates (30 FPS) on a standard laptop computer, which is orders of magnitude faster than similar approaches. When evaluated on RGB-D datasets of various dynamic scenes, our approach estimates depth maps with a mean relative error of 2.5% while reducing active depth sensor usage by over 90%.
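The core reuse step described in the abstract, obtaining a new depth map by reprojecting a previously measured one under a rigid motion, can be illustrated with a short sketch. The snippet below is a minimal illustration under simplifying assumptions, not the paper's implementation: warp_depth is a hypothetical helper, a single rigid motion (R, t) is assumed already known and applied to the whole scene (whereas the paper estimates such motions per independently moving object), and the camera intrinsics K are given.

```python
import numpy as np

def warp_depth(depth_prev, K, R, t):
    """Hypothetical sketch: reproject a prior depth map under a known
    rigid motion (R, t) to synthesize the depth map of the new view."""
    h, w = depth_prev.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinate grid
    z = depth_prev.ravel()
    valid = z > 0                                    # use measured pixels only
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])[:, valid]
    # Back-project to 3D and apply the rigid motion: X' = R (z K^-1 p) + t
    pts = R @ (np.linalg.inv(K) @ pix * z[valid]) + t[:, None]
    proj = K @ pts                                   # project into the new view
    zn = proj[2]
    un = np.round(proj[0] / zn).astype(int)
    vn = np.round(proj[1] / zn).astype(int)
    inside = (zn > 0) & (un >= 0) & (un < w) & (vn >= 0) & (vn < h)
    depth_new = np.full((h, w), np.inf)
    # Z-buffer: where several points land on one pixel, keep the nearest.
    np.minimum.at(depth_new, (vn[inside], un[inside]), zn[inside])
    depth_new[np.isinf(depth_new)] = 0.0             # disocclusion leaves holes
    return depth_new

# Example with hypothetical values: a plane 2 m away, rigid motion of
# 0.1 m along x (roughly a small lateral camera shift).
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
depth0 = np.full((480, 640), 2.0)
depth1 = warp_depth(depth0, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
```

Disoccluded pixels come out as holes (zeros here); these are precisely the regions where the prior depth map carries no information and, in a full pipeline, would need to be filled in or re-measured by the active sensor.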