Paper Title
TIPS: Text-Induced Pose Synthesis
Paper Authors
Paper Abstract
In computer vision, human pose synthesis and transfer deal with the probabilistic generation of an image of a person in a previously unseen pose from an already available observation of that person. Though researchers have recently proposed several methods to achieve this task, most of these techniques derive the target pose directly from the desired target image on a specific dataset, making the underlying process challenging to apply in real-world scenarios, since generating the target image is the actual aim. In this paper, we first present the shortcomings of current pose transfer algorithms and then propose a novel text-based pose transfer technique to address those issues. We divide the problem into three independent stages: (a) text to pose representation, (b) pose refinement, and (c) pose rendering. To the best of our knowledge, this is one of the first attempts to develop a text-based pose transfer framework, and we also introduce a new dataset, DF-PASS, created by adding descriptive pose annotations to the images of the DeepFashion dataset. The proposed method generates promising results with strong qualitative and quantitative scores in our experiments.
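The three-stage division described in the abstract can be sketched as a simple function composition. Everything below is an illustrative assumption, not the authors' implementation: the function names, the choice of a keypoint list as the pose representation, and the 18-keypoint count are all hypothetical placeholders standing in for the paper's learned stages.

```python
# Hedged sketch of the three-stage text-based pose transfer pipeline.
# All names and data shapes are illustrative assumptions, not the
# authors' actual models, which are learned networks.

def text_to_pose(description):
    """Stage (a): map a textual pose description to a coarse pose
    representation (here: a fixed list of 2D keypoints as a stand-in
    for a learned text-conditioned generator)."""
    return [(0.5, 0.1 + 0.05 * i) for i in range(18)]

def refine_pose(keypoints):
    """Stage (b): refine the coarse pose (here: simply clamp each
    keypoint to the [0, 1] image frame)."""
    return [(min(max(x, 0.0), 1.0), min(max(y, 0.0), 1.0))
            for x, y in keypoints]

def render_pose(source_image, keypoints):
    """Stage (c): render the person from the source observation in the
    refined target pose (here: just bundle the inputs)."""
    return {"source": source_image, "pose": keypoints}

def text_induced_pose_transfer(source_image, description):
    """Compose the three independent stages end to end."""
    coarse = text_to_pose(description)
    refined = refine_pose(coarse)
    return render_pose(source_image, refined)

result = text_induced_pose_transfer("person.jpg", "arms raised above the head")
print(len(result["pose"]))  # 18 keypoints
```

The point of the decomposition is that each stage can be trained and evaluated independently, and the target pose comes from text rather than from the target image itself.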