Paper Title
coVariance Neural Networks
Paper Authors
Abstract
Graph neural networks (GNNs) are an effective framework for exploiting inter-relationships within graph-structured data for learning. Principal component analysis (PCA) involves projecting data onto the eigenspace of the covariance matrix and bears similarities to the graph convolutional filters in GNNs. Motivated by this observation, we study a GNN architecture, called the coVariance neural network (VNN), that operates on sample covariance matrices as graphs. We theoretically establish the stability of VNNs to perturbations in the covariance matrix, thus implying an advantage over standard PCA-based data analysis approaches, which are prone to instability due to principal components associated with close eigenvalues. Our experiments on real-world datasets validate our theoretical results and show that VNN performance is indeed more stable than that of PCA-based statistical approaches. Moreover, our experiments on multi-resolution datasets demonstrate that VNNs are amenable to transferability of performance across covariance matrices of different dimensions, a feature that is infeasible for PCA-based approaches.
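The analogy the abstract draws, between PCA projection onto the covariance eigenspace and graph convolutional filtering with the covariance matrix as the graph, can be illustrated with a minimal NumPy sketch. The data dimensions and filter coefficients `h` below are illustrative assumptions, not values from the paper; the filter form (a polynomial in the sample covariance matrix applied to a signal) follows the standard graph-filter construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples of an m-dimensional signal (sizes are illustrative).
n, m = 200, 5
X = rng.normal(size=(n, m))

# Sample covariance matrix, used as the "graph" (shift operator) in a VNN.
C = np.cov(X, rowvar=False)

# PCA view: project a signal onto the eigenspace of C.
eigvals, V = np.linalg.eigh(C)
x = X[0]
pca_coords = V.T @ x  # coordinates of x in the principal-component basis

# coVariance filter: a polynomial in C applied to the signal, analogous
# to a graph convolutional filter. The taps h_k are hypothetical.
h = [0.5, 0.3, 0.2]
z = sum(h_k * np.linalg.matrix_power(C, k) @ x for k, h_k in enumerate(h))

# In the eigenbasis the filter simply scales each PCA coordinate by the
# scalar polynomial sum_k h_k * lambda^k, which makes the analogy explicit.
z_spectral = V @ (np.polyval(h[::-1], eigvals) * pca_coords)
assert np.allclose(z, z_spectral)
```

The equivalence checked by the final assertion is the sense in which the filter "operates on the covariance matrix as a graph": unlike hard PCA truncation, the smooth polynomial response does not flip discontinuously when two eigenvalues are close, which is the intuition behind the stability advantage claimed above.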