Paper Title

Dynamic Future Net: Diversified Human Motion Generation

Paper Authors

Wenheng Chen, He Wang, Yi Yuan, Tianjia Shao, Kun Zhou

Paper Abstract

Human motion modelling is crucial in many areas such as computer graphics, vision and virtual reality. Acquiring high-quality skeletal motions is difficult due to the need for specialized equipment and laborious manual post-processing, which necessitates maximizing the use of existing data to synthesize new data. However, this is challenging due to the intrinsic stochasticity of human motion dynamics, which manifests in both the short and long term. In the short term, there is strong randomness within a couple of frames, e.g. one frame followed by multiple possible frames leading to different motion styles; while in the long term, there are non-deterministic action transitions. In this paper, we present Dynamic Future Net, a new deep learning model in which we explicitly focus on the aforementioned motion stochasticity by constructing a generative model with non-trivial modelling capacity for temporal stochasticity. Given limited amounts of data, our model can generate a large number of high-quality motions of arbitrary duration, with visually convincing variations in both space and time. We evaluate our model on a wide range of motions and compare it with state-of-the-art methods. Both qualitative and quantitative results show the superiority of our method in robustness, versatility and quality.
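The abstract's core idea, that one frame can be followed by multiple plausible next frames, is typically realized by injecting a fresh latent variable at every step of an autoregressive model. The following is a minimal toy sketch of that pattern (not the paper's actual architecture): all dimensions, weights, and function names are illustrative placeholders, with random untrained weights standing in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not taken from the paper).
POSE_DIM = 6    # flattened skeletal pose vector
HIDDEN_DIM = 8  # recurrent state
LATENT_DIM = 4  # per-frame latent capturing short-term randomness

# Random fixed weights stand in for trained parameters.
W_h = rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM + POSE_DIM + LATENT_DIM)) * 0.1
W_out = rng.normal(size=(POSE_DIM, HIDDEN_DIM)) * 0.1

def step(h, pose):
    """One autoregressive step: sample a latent, update the state, emit the next pose."""
    z = rng.normal(size=LATENT_DIM)      # per-frame stochasticity: each sample
    inp = np.concatenate([h, pose, z])   # branches the motion differently
    h_new = np.tanh(W_h @ inp)           # recurrent state update
    next_pose = W_out @ h_new            # deterministic pose decoder
    return h_new, next_pose

def generate(seed_pose, n_frames):
    """Roll the model forward from a seed pose for an arbitrary duration."""
    h = np.zeros(HIDDEN_DIM)
    pose = np.asarray(seed_pose, dtype=float)
    frames = [pose]
    for _ in range(n_frames):
        h, pose = step(h, pose)
        frames.append(pose)
    return np.stack(frames)

# Two rollouts from the same seed pose diverge because each
# draws different per-frame latents: diversified generation.
motion_a = generate(np.ones(POSE_DIM), n_frames=20)
motion_b = generate(np.ones(POSE_DIM), n_frames=20)
print(motion_a.shape)  # (21, 6)
print(np.allclose(motion_a, motion_b))
```

Because the duration is just the number of loop iterations and the variation comes from the sampled latents, this toy setup mirrors the two properties the abstract claims: arbitrary-length sequences and diverse outputs from the same starting frame.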
