Paper Title
Variational Sparse Coding with Learned Thresholding
Paper Authors
Paper Abstract
Sparse coding strategies have been lauded for their parsimonious representations of data that leverage low-dimensional structure. However, inference of these codes typically relies on an optimization procedure that scales poorly to high-dimensional problems. For example, sparse inference in the representations learned in the high-dimensional intermediary layers of deep neural networks (DNNs) requires an iterative minimization at each training step. As such, fast methods in variational inference have recently been proposed to infer sparse codes by learning a distribution over the codes with a DNN. In this work, we propose a new approach to variational sparse coding that allows us to learn sparse distributions by thresholding samples, avoiding the use of problematic relaxations. We first evaluate and analyze our method by training a linear generator, showing that it achieves superior performance, statistical efficiency, and gradient estimation compared to other sparse distributions. We then compare our approach to a standard variational autoencoder with a DNN generator on the Fashion MNIST and CelebA datasets.
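The core mechanism described in the abstract, drawing a continuous sample and then thresholding it so the code contains exact zeros, can be illustrated with a minimal sketch. The sketch below assumes a Gaussian base distribution and a soft-threshold (shrinkage) nonlinearity with a learned non-negative threshold; the tensor shapes, the soft_threshold helper, and the softplus parameterization of lam are illustrative assumptions, not the paper's exact formulation.

import torch

def soft_threshold(x, lam):
    # Shrinkage operator: returns exact zeros wherever |x| <= lam,
    # which is what makes the sampled codes genuinely sparse
    # (no continuous relaxation of sparsity is needed).
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)

# Illustrative encoder outputs for a batch of 8 codes of dimension 64:
# per-dimension mean, log-variance, and a learned threshold (assumed here;
# in practice these would come from an encoder DNN and trained parameters).
mu = torch.randn(8, 64)
log_var = torch.randn(8, 64)
lam = torch.nn.functional.softplus(torch.zeros(64))  # keeps threshold positive

# Standard reparameterized Gaussian sample, then thresholded: the result has
# a point mass at zero while gradients still flow through the nonzero entries.
eps = torch.randn_like(mu)
z_dense = mu + torch.exp(0.5 * log_var) * eps
z_sparse = soft_threshold(z_dense, lam)

print("fraction of exact zeros:", (z_sparse == 0).float().mean().item())

Because the thresholding is applied to a reparameterized sample, the gradient estimator stays low-variance for the surviving coordinates, which is one plausible reading of why the abstract contrasts this approach with relaxation-based alternatives.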