Paper Title
CLIPascene: Scene Sketching with Different Types and Levels of Abstraction
Paper Authors
Paper Abstract
In this paper, we present a method for converting a given scene image into a sketch using different types and multiple levels of abstraction. We distinguish between two types of abstraction. The first considers the fidelity of the sketch, varying its representation from a more precise portrayal of the input to a looser depiction. The second is defined by the visual simplicity of the sketch, moving from a detailed depiction to a sparse sketch. Using an explicit disentanglement into two abstraction axes -- and multiple levels for each one -- provides users with additional control over selecting the desired sketch based on their personal goals and preferences. To form a sketch at a given level of fidelity and simplification, we train two MLP networks. The first network learns the desired placement of strokes, while the second network learns to gradually remove strokes from the sketch without harming its recognizability and semantics. Our approach can generate sketches of complex scenes, including those with complex backgrounds (e.g., natural and urban settings) and subjects (e.g., animals and people), while depicting gradual abstractions of the input scene in terms of fidelity and simplicity.
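The two-network design can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: all layer sizes, stroke counts, and the threshold are hypothetical, the weights are random rather than optimized (the paper trains the networks against semantic losses), and strokes are reduced to flat control-point vectors. It only shows the division of labor: one MLP adjusts stroke positions (fidelity axis), a second scores each stroke for removal (simplicity axis).

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    """Build a tiny MLP as a list of (W, b) layers (random init, illustrative)."""
    return [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    """Forward pass with tanh on hidden layers, linear output."""
    for k, (W, b) in enumerate(layers):
        x = x @ W + b
        if k < len(layers) - 1:
            x = np.tanh(x)
    return x

# Hypothetical sketch: 64 strokes, each with 4 (x, y) control points.
n_strokes, n_points = 64, 4
strokes = rng.uniform(size=(n_strokes, n_points * 2))

# Network 1: predicts a displacement for each stroke's control points
# (controls how closely strokes follow the input image).
loc_net = mlp([n_points * 2, 32, n_points * 2])
refined = strokes + 0.01 * forward(loc_net, strokes)

# Network 2: predicts a per-stroke keep-probability
# (controls how many strokes survive simplification).
keep_net = mlp([n_points * 2, 32, 1])
keep_prob = 1 / (1 + np.exp(-forward(keep_net, refined)))  # sigmoid

# Simplify: drop strokes whose keep-probability falls below a threshold.
threshold = 0.5  # hypothetical cutoff; varying it sweeps the simplicity axis
simplified = refined[keep_prob[:, 0] > threshold]
print(refined.shape, simplified.shape)
```

Raising or lowering `threshold` here stands in for moving along the simplicity axis: a higher cutoff yields a sparser sketch from the same refined strokes.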