Paper Title
Learning Geocentric Object Pose in Oblique Monocular Images
Authors
Abstract
An object's geocentric pose, defined as the height above ground and orientation with respect to gravity, is a powerful representation of real-world structure for object detection, segmentation, and localization tasks using RGBD images. For close-range vision tasks, height and orientation have been derived directly from stereo-computed depth and more recently from monocular depth predicted by deep networks. For long-range vision tasks such as Earth observation, depth cannot be reliably estimated with monocular images. Inspired by recent work in monocular height above ground prediction and optical flow prediction from static images, we develop an encoding of geocentric pose to address this challenge and train a deep network to compute the representation densely, supervised by publicly available airborne lidar. We exploit these attributes to rectify oblique images and remove observed object parallax to dramatically improve the accuracy of localization and to enable accurate alignment of multiple images taken from very different oblique viewpoints. We demonstrate the value of our approach by extending two large-scale public datasets for semantic segmentation in oblique satellite images. All of our data and code are publicly available.
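To make the rectification idea concrete: if a network predicts, per pixel, the height above ground and a shared parallax direction, then the apparent displacement of each pixel from its ground footprint can be modeled and undone. Below is a minimal, hedged sketch of that idea. The linear flow model (`flow = scale * height` along direction `theta`), the function name `rectify`, and the nearest-neighbor resampling are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rectify(image, height, theta, scale):
    """Remove object parallax from an oblique image (illustrative sketch).

    Assumes each pixel's parallax flow is proportional to its height
    above ground, along a single image-wide direction theta:
        flow = scale * height * (cos(theta), sin(theta))
    The rectified pixel at (y, x) is pulled from the displaced source
    location (y + flow_y, x + flow_x) in the input image.
    """
    h, w = height.shape
    flow_x = scale * height * np.cos(theta)
    flow_y = scale * height * np.sin(theta)
    ys, xs = np.mgrid[0:h, 0:w]
    # Nearest-neighbor pull: clip source coordinates to the image bounds.
    src_x = np.clip(np.round(xs + flow_x).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow_y).astype(int), 0, h - 1)
    return image[src_y, src_x]
```

For example, a rooftop pixel with predicted height 2 (in flow units) and `theta = 0` is shifted two columns back toward its building footprint, while ground-level pixels (height 0) are left in place; this is what allows images taken from very different oblique viewpoints to be aligned.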