Paper Title
When and How Can Deep Generative Models be Inverted?
Paper Authors
Paper Abstract
Deep generative models (e.g. GANs and VAEs) have been developed quite extensively in recent years. Lately, there has been an increased interest in the inversion of such a model, i.e. given a (possibly corrupted) signal, we wish to recover the latent vector that generated it. Building upon sparse representation theory, we define conditions that are applicable to any inversion algorithm (gradient descent, deep encoder, etc.), under which such generative models are invertible with a unique solution. Importantly, the proposed analysis is applicable to any trained model, and does not depend on Gaussian i.i.d. weights. Furthermore, we introduce two layer-wise inversion pursuit algorithms for trained generative networks of arbitrary depth, and accompany these with recovery guarantees. Finally, we validate our theoretical results numerically and show that our method outperforms gradient descent when inverting such generators, both for clean and corrupted signals.
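To make the inversion task concrete, the following is a minimal sketch of the gradient-descent baseline mentioned in the abstract: given an observed signal, search for the latent vector whose generator output best matches it. This is an illustrative PyTorch example, not the authors' code; the names `G`, `x`, and `latent_dim` are assumptions about a trained generator module, the observed signal, and the latent dimension.

```python
import torch

def invert_by_gradient_descent(G, x, latent_dim, steps=1000, lr=1e-2):
    """Recover a latent vector z such that G(z) approximates the signal x.

    G          -- a trained generator (torch.nn.Module) mapping latents to signals
    x          -- the (possibly corrupted) observed signal, shaped like G's output
    latent_dim -- dimension of the latent space
    """
    z = torch.randn(1, latent_dim, requires_grad=True)   # random initialization
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x)      # reconstruction error ||G(z) - x||^2
        loss.backward()
        opt.step()
    return z.detach()
```

Because this objective is non-convex in z, such a procedure offers no general recovery guarantee; the paper's layer-wise inversion pursuit algorithms are proposed precisely to address this gap.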