Paper title
BézierSketch: A generative model for scalable vector sketches
Paper authors
Paper abstract
The study of neural generative models of human sketches is a fascinating contemporary modeling problem due to the links between sketch image generation and the human drawing process. The landmark SketchRNN provided a breakthrough by sequentially generating sketches as a sequence of waypoints. However, this leads to low-resolution image generation and failure to model long sketches. In this paper we present BézierSketch, a novel generative model for fully vector sketches that are automatically scalable and high-resolution. To this end, we first introduce a novel inverse graphics approach to stroke embedding that trains an encoder to embed each stroke to its best-fit Bézier curve. This enables us to treat sketches as short sequences of parameterized strokes and thus train a recurrent sketch generator with greater capacity for longer sketches, while producing scalable high-resolution results. We report qualitative and quantitative results on the Quick, Draw! benchmark.
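To make the core idea concrete, the snippet below is a minimal sketch (not the paper's actual encoder, which is learned) of the classical operation the embedding approximates: fitting a cubic Bézier curve to a stroke's waypoints by linear least squares under a chord-length parameterization. The function names and the choice of a cubic degree are illustrative assumptions.

```python
import numpy as np
from math import comb


def bernstein_matrix(t, degree=3):
    # Rows: Bernstein basis values B_{i,degree}(t) for each parameter value t.
    return np.stack(
        [comb(degree, i) * (1 - t) ** (degree - i) * t ** i
         for i in range(degree + 1)],
        axis=1,
    )


def fit_bezier(points, degree=3):
    """Least-squares fit of a Bézier curve to a stroke's waypoints.

    Uses chord-length parameterization to assign a parameter t in [0, 1]
    to each waypoint, then solves for the control points linearly.
    """
    points = np.asarray(points, dtype=float)
    chord = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(chord)])
    t /= t[-1]
    B = bernstein_matrix(t, degree)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl  # (degree + 1, 2) control points


# Usage: fit a short synthetic stroke and reconstruct it from the curve.
ts = np.linspace(0.0, 1.0, 20)
stroke = np.stack([ts, ts * (1.0 - ts)], axis=1)
control_points = fit_bezier(stroke)
```

Representing each stroke by a handful of control points instead of dozens of waypoints is what lets the sequence model operate over much shorter sequences while the rendered output stays resolution-independent.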