Paper Title

Neural Decomposition: Functional ANOVA with Variational Autoencoders

Paper Authors

Kaspar Märtens, Christopher Yau

Paper Abstract

Variational Autoencoders (VAEs) have become a popular approach for dimensionality reduction. However, despite their ability to identify latent low-dimensional structures embedded within high-dimensional data, these latent representations are typically hard to interpret on their own. Due to the black-box nature of VAEs, their utility for healthcare and genomics applications has been limited. In this paper, we focus on characterising the sources of variation in Conditional VAEs. Our goal is to provide a feature-level variance decomposition, i.e. to decompose variation in the data by separating out the marginal additive effects of latent variables z and fixed inputs c from their non-linear interactions. We propose to achieve this through what we call Neural Decomposition - an adaptation of the well-known concept of functional ANOVA variance decomposition from classical statistics to deep learning models. We show how identifiability can be achieved by training models subject to constraints on the marginal properties of the decoder networks. We demonstrate the utility of our Neural Decomposition on a series of synthetic examples as well as high-dimensional genomics data.
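The decomposition described in the abstract can be pictured as splitting the decoder into a global offset, marginal networks for the latent variables z and the fixed inputs c, and an interaction network, with identifiability coming from constraining each component's marginals (e.g. requiring them to integrate to zero over their inputs). The following PyTorch sketch only illustrates that structure under assumed layer sizes and a penalty-based surrogate for the marginal constraints; it is not the authors' implementation, and names such as DecomposedDecoder and zero_mean_penalty are hypothetical.

    # Minimal sketch of a functional-ANOVA-style decoder (illustrative only).
    import torch
    import torch.nn as nn

    class DecomposedDecoder(nn.Module):
        def __init__(self, z_dim, c_dim, out_dim, hidden=64):
            super().__init__()
            def mlp(in_dim):
                return nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, out_dim))
            self.f0 = nn.Parameter(torch.zeros(out_dim))   # global offset
            self.fz = mlp(z_dim)                           # marginal effect of z
            self.fc = mlp(c_dim)                           # marginal effect of c
            self.fzc = mlp(z_dim + c_dim)                  # non-linear z-c interaction

        def forward(self, z, c):
            # Additive combination of the components; identifiability additionally
            # requires constraints on the marginal behaviour of each component.
            return (self.f0 + self.fz(z) + self.fc(c)
                    + self.fzc(torch.cat([z, c], dim=-1)))

    def zero_mean_penalty(decoder, z_grid, c_grid):
        # Assumed surrogate for the marginal constraints: penalise the empirical
        # means of the z- and c-components over reference grids of inputs.
        pen = decoder.fz(z_grid).mean(0).pow(2).sum()
        pen = pen + decoder.fc(c_grid).mean(0).pow(2).sum()
        return pen

    # Example usage with random inputs (shapes are assumptions):
    dec = DecomposedDecoder(z_dim=2, c_dim=1, out_dim=10)
    x_mean = dec(torch.randn(8, 2), torch.randn(8, 1))
    penalty = zero_mean_penalty(dec, torch.randn(256, 2), torch.randn(256, 1))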
