Paper Title
You say Normalizing Flows I see Bayesian Networks
Paper Authors
Paper Abstract
Normalizing flows have emerged as an important family of deep neural networks for modelling complex probability distributions. In this note, we revisit their coupling and autoregressive transformation layers as probabilistic graphical models and show that they reduce to Bayesian networks with a pre-defined topology and a learnable density at each node. From this new perspective, we provide three results. First, we show that stacking multiple transformations in a normalizing flow relaxes independence assumptions and entangles the model distribution. Second, we show that a fundamental leap of capacity emerges when the depth of affine flows exceeds 3 transformation layers. Third, we prove the non-universality of the affine normalizing flow, regardless of its depth.
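The abstract refers to coupling transformation layers, whose defining property is an element-wise affine map with a closed-form inverse and a triangular Jacobian. Below is a minimal sketch of a single affine coupling layer, included only to illustrate that mechanism; it is not the paper's code, and the conditioner `scale_and_shift` is a hypothetical placeholder for a learned network.

```python
# Minimal affine coupling layer sketch (illustrative, not the paper's implementation).
import numpy as np

def scale_and_shift(x_cond):
    # Hypothetical conditioner: in practice a learned network mapping the
    # conditioning half of the input to per-dimension log-scale and shift.
    s = np.tanh(x_cond)          # log-scale, bounded for numerical stability
    t = x_cond ** 2              # shift
    return s, t

def affine_coupling_forward(x):
    # Split the input in two; the first half conditions the affine map
    # applied element-wise to the second half.
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = scale_and_shift(x1)
    y2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=-1)     # log |det Jacobian| of the coupling map
    return np.concatenate([x1, y2], axis=-1), log_det

def affine_coupling_inverse(y):
    # The inverse is available in closed form, which is what makes coupling
    # layers convenient building blocks for normalizing flows.
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = scale_and_shift(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

if __name__ == "__main__":
    x = np.random.randn(4, 6)
    y, log_det = affine_coupling_forward(x)
    assert np.allclose(affine_coupling_inverse(y), x)  # exact invertibility
```

Because the transformed half depends only on the untouched half, each such layer encodes conditional-independence assumptions; stacking several layers (with the halves alternating) is what, in the paper's Bayesian-network view, progressively relaxes those assumptions.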