Paper Title
Future Video Synthesis with Object Motion Prediction
Paper Authors
Paper Abstract
We present an approach to predict future video frames given a sequence of continuous video frames in the past. Instead of synthesizing images directly, our approach is designed to understand the complex scene dynamics by decoupling the background scene and moving objects. The appearance of the scene components in the future is predicted by non-rigid deformation of the background and affine transformation of moving objects. The anticipated appearances are combined to create a reasonable video in the future. With this procedure, our method exhibits far fewer tearing or distortion artifacts compared to other approaches. Experimental results on the Cityscapes and KITTI datasets show that our model outperforms the state-of-the-art in terms of visual quality and accuracy.
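The compositing step the abstract describes (warp each moving object by its predicted affine transform, then paste it over the predicted background) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the paper predicts the transforms and the non-rigid background deformation with learned networks, whereas here `A` is simply given, and `warp_affine`/`composite` are hypothetical helper names.

```python
import numpy as np

def warp_affine(img, A, out_shape):
    # Warp a 2D image by a 2x3 affine matrix A (output coords = A @ input coords),
    # using inverse mapping with nearest-neighbor sampling.
    H, W = out_shape
    M = np.vstack([A, [0.0, 0.0, 1.0]])       # lift to 3x3 homogeneous form
    Minv = np.linalg.inv(M)
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    src = Minv @ coords                        # source pixel for each output pixel
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros((H, W), dtype=img.dtype)
    out.ravel()[valid] = img[sy[valid], sx[valid]]
    return out

def composite(background, obj_layer, obj_mask, A):
    # Move the object layer by the predicted affine transform A,
    # then composite it over the (already predicted) background.
    warped = warp_affine(obj_layer, A, background.shape[:2])
    mask = warp_affine(obj_mask.astype(np.uint8), A, background.shape[:2]).astype(bool)
    out = background.copy()
    out[mask] = warped[mask]
    return out

# Toy example: one bright object pixel translated by (dx=3, dy=2).
bg = np.ones((8, 8))
obj = np.zeros((8, 8)); obj[1, 1] = 5.0
A = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, 2.0]])               # pure translation as the affine motion
frame = composite(bg, obj, obj > 0, A)        # object pixel lands at (y=3, x=4)
```

Warping each object as a single rigid/affine layer rather than predicting per-pixel flow is what reduces the tearing artifacts mentioned above: the object cannot be stretched apart pixel by pixel.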