Paper Title
Representational dissimilarity metric spaces for stochastic neural networks
Paper Authors
Paper Abstract
Quantifying similarity between neural representations -- e.g. hidden layer activation vectors -- is a perennial problem in deep learning and neuroscience research. Existing methods compare deterministic responses (e.g. artificial networks that lack stochastic layers) or averaged responses (e.g., trial-averaged firing rates in biological data). However, these measures of _deterministic_ representational similarity ignore the scale and geometric structure of noise, both of which play important roles in neural computation. To rectify this, we generalize previously proposed shape metrics (Williams et al. 2021) to quantify differences in _stochastic_ representations. These new distances satisfy the triangle inequality, and thus can be used as a rigorous basis for many supervised and unsupervised analyses. Leveraging this novel framework, we find that the stochastic geometries of neurobiological representations of oriented visual gratings and naturalistic scenes respectively resemble untrained and trained deep network representations. Further, we are able to more accurately predict certain network attributes (e.g. training hyperparameters) from a network's position in stochastic (versus deterministic) shape space.
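To make the idea concrete, below is a minimal sketch (not the authors' implementation) of comparing two _stochastic_ representations. It assumes each representation is summarized by a Gaussian (mean and covariance) per stimulus condition, and it uses the closed-form 2-Wasserstein distance between Gaussians as the per-condition dissimilarity; the paper's actual shape metrics additionally minimize over orthogonal alignments of the response spaces, which is omitted here for brevity. The function names `gaussian_w2` and `stochastic_dissimilarity` are hypothetical.

```python
# A minimal sketch of a "stochastic" dissimilarity between two neural
# representations, each modeled as one Gaussian per stimulus condition.
# Assumptions: Gaussian noise model, no alignment over rotations.
import numpy as np
from scipy.linalg import sqrtm


def gaussian_w2(mu1, cov1, mu2, cov2):
    """Closed-form 2-Wasserstein distance between two Gaussians.

    W2^2 = ||mu1 - mu2||^2
           + Tr(cov1 + cov2 - 2 (cov2^{1/2} cov1 cov2^{1/2})^{1/2})
    """
    sqrt_cov2 = sqrtm(cov2)
    cross = sqrtm(sqrt_cov2 @ cov1 @ sqrt_cov2)
    # sqrtm can return tiny imaginary parts from numerical error.
    bures = np.trace(cov1 + cov2 - 2.0 * np.real(cross))
    return np.sqrt(max(np.sum((mu1 - mu2) ** 2) + bures, 0.0))


def stochastic_dissimilarity(means_x, covs_x, means_y, covs_y):
    """Root-mean-square of per-condition W2 distances between two
    stochastic representations (one Gaussian per condition)."""
    d2 = [gaussian_w2(mx, cx, my, cy) ** 2
          for mx, cx, my, cy in zip(means_x, covs_x, means_y, covs_y)]
    return np.sqrt(np.mean(d2))


# Toy usage: 5 stimulus conditions, 3-dimensional responses.
rng = np.random.default_rng(0)
C, N = 5, 3
means_x = rng.normal(size=(C, N))
means_y = rng.normal(size=(C, N))
covs_x = np.stack([np.eye(N) for _ in range(C)])        # isotropic noise
covs_y = np.stack([0.5 * np.eye(N) for _ in range(C)])  # smaller noise scale
print(stochastic_dissimilarity(means_x, covs_x, means_y, covs_y))
```

Because the 2-Wasserstein term depends on both the means and the covariances, a dissimilarity of this form is sensitive to the scale and geometry of noise, which purely deterministic (trial-averaged) comparisons discard.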