Paper Title

Beyond a Video Frame Interpolator: A Space Decoupled Learning Approach to Continuous Image Transition

Authors

Tao Yang, Peiran Ren, Xuansong Xie, Xiansheng Hua, Lei Zhang

Abstract

Video frame interpolation (VFI) aims to improve the temporal resolution of a video sequence. Most existing deep-learning-based VFI methods adopt off-the-shelf optical flow algorithms to estimate the bidirectional flows and interpolate the missing frames accordingly. Though they have achieved great success, these methods require much human experience to tune the bidirectional flows and often generate unpleasant results when the estimated flows are not accurate. In this work, we rethink the VFI problem and formulate it as a continuous image transition (CIT) task, whose key issue is to transition an image from one space to another continuously. More specifically, we learn to implicitly decouple the images into a translatable flow space and a non-translatable feature space. The former depicts the translatable states between the given images, while the latter aims to reconstruct the intermediate features that cannot be directly translated. In this way, we can easily perform image interpolation in the flow space and intermediate image synthesis in the feature space, obtaining a CIT model. The proposed space decoupled learning (SDL) approach is simple to implement, and it provides an effective framework for a variety of CIT problems beyond VFI, such as style transfer and image morphing. Our extensive experiments on a variety of CIT tasks demonstrate the superiority of SDL to existing methods. The source code and models can be found at \url{https://github.com/yangxy/SDL}.
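The decoupling idea described in the abstract can be illustrated with a short sketch: encode both input images, split the features into a translatable flow-space part and a non-translatable feature-space part, interpolate only the flow-space part by the transition ratio t, and let a decoder synthesize the intermediate image. The module below is a minimal PyTorch sketch of that idea, assuming a simple convolutional encoder/decoder; the layer sizes, the channel split, and the names `SDLSketch` and `flow_ratio` are illustrative assumptions rather than the authors' actual SDL architecture (see the linked repository for the real implementation).

```python
# Minimal, illustrative sketch of the space decoupled learning (SDL) idea.
# All layer sizes, the channel split, and the blending rule are assumptions
# for illustration; they do not reproduce the authors' model.
import torch
import torch.nn as nn


class SDLSketch(nn.Module):
    def __init__(self, channels: int = 64, flow_ratio: float = 0.5):
        super().__init__()
        # Shared encoder mapping a single RGB image to a feature volume.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Channels are split into a translatable flow-space part and a
        # non-translatable feature-space part.
        self.flow_channels = int(channels * flow_ratio)
        self.feat_channels = channels - self.flow_channels
        # Decoder reconstructs the intermediate image from the interpolated
        # flow-space features plus both feature-space parts.
        self.decoder = nn.Sequential(
            nn.Conv2d(self.flow_channels + 2 * self.feat_channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, img0: torch.Tensor, img1: torch.Tensor, t: float) -> torch.Tensor:
        """Synthesize an intermediate image at transition ratio t in [0, 1]."""
        f0 = self.encoder(img0)
        f1 = self.encoder(img1)
        p0, q0 = torch.split(f0, [self.flow_channels, self.feat_channels], dim=1)
        p1, q1 = torch.split(f1, [self.flow_channels, self.feat_channels], dim=1)
        # Interpolation is performed only in the translatable flow space.
        p_t = (1.0 - t) * p0 + t * p1
        # The non-translatable feature-space parts are handed to the decoder,
        # which learns to reconstruct the intermediate content from them.
        return self.decoder(torch.cat([p_t, q0, q1], dim=1))


if __name__ == "__main__":
    img0 = torch.rand(1, 3, 128, 128)
    img1 = torch.rand(1, 3, 128, 128)
    mid = SDLSketch()(img0, img1, t=0.5)  # intermediate state halfway between the inputs
    print(mid.shape)  # torch.Size([1, 3, 128, 128])
```

Because the transition ratio t is a continuous input rather than a fixed label, the same model can in principle serve other CIT tasks mentioned in the abstract (style transfer, image morphing) by reinterpreting what the two endpoint images represent.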
