Paper Title

Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous Driving

Authors

Hsu-kuang Chiu, Jie Li, Rares Ambrus, Jeannette Bohg

Abstract

Multi-object tracking is an important ability for an autonomous vehicle to safely navigate a traffic scene. Current state-of-the-art follows the tracking-by-detection paradigm where existing tracks are associated with detected objects through some distance metric. The key challenges to increase tracking accuracy lie in data association and track life cycle management. We propose a probabilistic, multi-modal, multi-object tracking system consisting of different trainable modules to provide robust and data-driven tracking results. First, we learn how to fuse features from 2D images and 3D LiDAR point clouds to capture the appearance and geometric information of an object. Second, we propose to learn a metric that combines the Mahalanobis and feature distances when comparing a track and a new detection in data association. And third, we propose to learn when to initialize a track from an unmatched object detection. Through extensive quantitative and qualitative results, we show that when using the same object detectors our method outperforms state-of-the-art approaches on the NuScenes and KITTI datasets.
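The second contribution above combines the Mahalanobis distance with a learned feature distance when scoring a track against a new detection. The sketch below illustrates that idea in Python under simplifying assumptions: the function names, the 128-dimensional feature vectors, and the fixed weight `alpha` are hypothetical stand-ins for demonstration, whereas the paper learns the combination from data.

```python
import numpy as np

def mahalanobis_distance(detection, track_mean, track_cov):
    """Mahalanobis distance between a detection vector and a track's
    predicted state distribution (mean and covariance)."""
    diff = detection - track_mean
    return float(np.sqrt(diff @ np.linalg.inv(track_cov) @ diff))

def combined_distance(detection, det_feature, track_mean, track_cov,
                      track_feature, alpha=0.5):
    """Illustrative affinity: a weighted sum of the geometric
    (Mahalanobis) distance and a cosine-style feature distance.
    `alpha` is a hypothetical fixed weight; in the paper the
    combination is learned rather than hand-tuned."""
    d_geom = mahalanobis_distance(detection, track_mean, track_cov)
    d_feat = 1.0 - np.dot(det_feature, track_feature) / (
        np.linalg.norm(det_feature) * np.linalg.norm(track_feature) + 1e-8)
    return alpha * d_geom + (1.0 - alpha) * d_feat

# Toy usage: one detected 3D box centroid vs. one predicted track state.
det = np.array([10.2, 4.1, 0.9])
track_mu = np.array([10.0, 4.0, 1.0])
track_cov = np.eye(3) * 0.5
det_feat = np.random.randn(128)
trk_feat = np.random.randn(128)
print(combined_distance(det, det_feat, track_mu, track_cov, trk_feat))
```

In an actual tracker of this kind, `track_mean` and `track_cov` would come from a motion-model prediction (e.g., a Kalman filter), and the feature vectors from the fused 2D image / 3D LiDAR backbone described in the abstract.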
