Paper Title
DeepSSM: Deep State-Space Model for 3D Human Motion Prediction
Paper Authors
Paper Abstract
Predicting future human motion plays a significant role in human-machine interactions for various real-life applications. A unified formulation and multi-order modeling are two critical perspectives for analyzing and representing human motion. In contrast to prior works, we improve the multi-order modeling ability of human motion systems for more accurate predictions by building a deep state-space model (DeepSSM). DeepSSM combines the advantages of state-space theory and deep networks. Specifically, we formulate the human motion system as the state-space model of a dynamic system and model it with state-space theory, offering a unified formulation for diverse human motion systems. Moreover, a novel deep network is designed to parameterize this system, jointly modeling the state-state transition and state-observation transition processes. In this way, the state of the system is updated by the multi-order information of a time-varying human motion sequence, and multiple future poses are recursively predicted via the state-observation transition. To further improve the modeling ability of the system, a novel loss, WT-MPJPE (Weighted Temporal Mean Per Joint Position Error), is introduced to optimize the model. The proposed loss encourages more accurate predictions by assigning larger weights to the early time steps. Experiments on two benchmark datasets (i.e., Human3.6M and 3DPW) confirm that our method achieves state-of-the-art performance, improving accuracy by at least 2.2 mm per joint. The code will be available at: \url{https://github.com/lily2lab/DeepSSM.git}.
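The abstract only states that WT-MPJPE weights the per-frame joint position error, with larger weights on early time steps. A minimal sketch of such a loss is shown below; the exponentially decaying weighting scheme, the `decay` parameter, and the tensor shapes are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def wt_mpjpe(pred, target, decay=0.9):
    """Sketch of a Weighted Temporal MPJPE loss.

    pred, target: arrays of shape (T, J, 3) -- T predicted future
    frames, J joints, 3D coordinates. The exponentially decaying
    weights (an assumption) make errors in early frames cost more,
    matching the stated intent of emphasizing early time steps.
    """
    T = pred.shape[0]
    # Per-frame MPJPE: Euclidean distance per joint, averaged over joints.
    per_frame = np.linalg.norm(pred - target, axis=-1).mean(axis=-1)  # (T,)
    # Hypothetical normalized weights, largest at t = 0.
    weights = decay ** np.arange(T)
    weights = weights / weights.sum()
    return float((weights * per_frame).sum())
```

With `decay < 1`, an error placed in the first predicted frame yields a larger loss than the same error placed in a later frame, which is the behavior the abstract attributes to WT-MPJPE.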