Paper Title

Adversarial Transfer of Pose Estimation Regression

Paper Authors

Boris Chidlovskii, Assem Sadek

Abstract

We address the problem of camera pose estimation in visual localization. Current regression-based methods for pose estimation are trained and evaluated scene-wise. They depend on the coordinate frame of the training dataset and show low generalization across scenes and datasets. We identify dataset shift as an important barrier to generalization and consider transfer learning as an alternative way towards better reuse of pose estimation models. We revise domain adaptation techniques for classification and extend them to camera pose estimation, which is a multi-regression task. We develop a deep adaptation network for learning scene-invariant image representations and use adversarial learning to generate such representations for model transfer. We enrich the network with self-supervised learning and use adaptability theory to validate the existence of scene-invariant representations of images in two given scenes. We evaluate our network on two public datasets, Cambridge Landmarks and 7-Scenes, demonstrate its superiority over several baselines, and compare it to state-of-the-art methods.
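The adversarial component described in the abstract is commonly realized with a gradient-reversal layer: a shared feature extractor feeds both a pose regressor and a scene discriminator, and reversed gradients from the discriminator push the features toward scene invariance. The sketch below illustrates that general setup only; all class names, layer sizes, and the 7-DoF pose parameterization (3-D translation plus quaternion) are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient trains the features to fool the scene discriminator.
        return -ctx.lambd * grad_output, None


class AdversarialPoseNet(nn.Module):
    """Shared features -> pose regression head + adversarial scene discriminator.
    Dimensions are placeholders (a real model would use a CNN backbone)."""

    def __init__(self, in_dim=128, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # 7-D pose output: 3-D translation + 4-D quaternion rotation.
        self.pose_head = nn.Linear(feat_dim, 7)
        # Binary classifier: which scene (source vs. target) an image comes from.
        self.domain_head = nn.Linear(feat_dim, 2)

    def forward(self, x, lambd=1.0):
        f = self.features(x)
        pose = self.pose_head(f)
        domain = self.domain_head(GradReverse.apply(f, lambd))
        return pose, domain


net = AdversarialPoseNet()
images = torch.randn(4, 128)          # stand-in for image features
pose, domain_logits = net(images)
```

Minimizing the pose loss on the source scene while *maximizing* the discriminator's confusion (via the reversed gradients) is what drives the features toward a scene-invariant representation usable in both scenes.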
