Paper Title

WHENet: Real-time Fine-Grained Estimation for Wide Range Head Pose

Paper Authors

Zhou, Yijun, Gregson, James

Paper Abstract

We present an end-to-end head-pose estimation network designed to predict Euler angles through the full range of head yaws from a single RGB image. Existing methods perform well for frontal views, but few target head pose from all viewpoints, which has applications in autonomous driving and retail. Our network builds on multi-loss approaches, with changes to the loss functions and training strategies adapted to wide-range estimation. Additionally, we extract ground-truth labelings of anterior views from a current panoptic dataset for the first time. The resulting Wide Headpose Estimation Network (WHENet) is the first fine-grained modern method applicable to the full range of head yaws (hence "wide"), and it also meets or beats state-of-the-art methods for frontal head pose estimation. Our network is compact and efficient for mobile devices and applications.
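The "multi-loss approach" the abstract builds on combines a classification loss over discretized angle bins with a regression loss on the expected angle recovered from the bin distribution. The sketch below is a minimal NumPy illustration of that idea for the yaw component only; the 3-degree bin width, the full −180°..180° range (120 bins), and the `alpha` weight are assumptions chosen for illustration, not values taken from the paper.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_loss_yaw(logits, gt_deg, bin_width=3.0, yaw_min=-180.0, alpha=0.5):
    """Combined classification + regression loss for a single yaw prediction.

    logits : raw network scores, one per angle bin (assumed: 120 bins of
             3 degrees covering the full -180..180 degree yaw range).
    gt_deg : ground-truth yaw in degrees.
    alpha  : weight of the regression term (hypothetical value).
    """
    n_bins = len(logits)
    probs = softmax(logits)

    # Classification term: cross-entropy against the bin holding the ground truth.
    gt_bin = int((gt_deg - yaw_min) // bin_width)
    gt_bin = min(max(gt_bin, 0), n_bins - 1)
    ce = -np.log(probs[gt_bin] + 1e-12)

    # Regression term: MSE between the ground truth and the expected angle,
    # computed from the soft bin distribution and the bin centers.
    centers = yaw_min + bin_width * (np.arange(n_bins) + 0.5)
    pred_deg = float(probs @ centers)
    mse = (pred_deg - gt_deg) ** 2

    return ce + alpha * mse
```

The cross-entropy term gives a coarse but stable training signal, while the expected-value regression term recovers fine-grained angles between bin centers; extending yaw bins to cover the full 360 degrees is what distinguishes wide-range estimation from frontal-only methods.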
