Title
Graph Neural Diffusion Networks for Semi-supervised Learning
Authors
Abstract
The Graph Convolutional Network (GCN) is a pioneering model for graph-based semi-supervised learning. However, GCN does not perform well on sparsely-labeled graphs. Its two-layer version cannot effectively propagate the label information to the whole graph structure (i.e., the under-smoothing problem), while its deep version over-smooths and is hard to train (i.e., the over-smoothing problem). To solve these two issues, we propose a new graph neural network called GND-Nets (for Graph Neural Diffusion Networks) that exploits the local and global neighborhood information of a vertex in a single layer. Using a shallow network mitigates the over-smoothing problem, while exploiting both local and global neighborhood information mitigates the under-smoothing problem. The local and global neighborhood information of a vertex is captured by a new graph diffusion method called neural diffusions, which integrate neural networks into conventional linear and nonlinear graph diffusions. The adoption of neural networks makes neural diffusions adaptable to different datasets. Extensive experiments on various sparsely-labeled graphs verify the effectiveness and efficiency of GND-Nets compared to state-of-the-art approaches.
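The abstract does not give the paper's exact formulation, but the core idea of a linear graph diffusion — aggregating multi-hop neighborhoods as a weighted series of propagation-matrix powers, with weights that GND-Nets would learn via a neural network — can be sketched as follows. All function names, the fixed mixing weights `theta`, and the toy graph are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetrically normalized adjacency with self-loops:
    # P = D^{-1/2} (A + I) D^{-1/2}, as commonly used in GCN-style models.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def linear_diffusion(A, X, theta):
    # Weighted sum over hop counts: Z = sum_k theta_k * P^k X.
    # Small k terms carry local neighborhood information, larger k terms
    # carry global information; in a neural diffusion the weights would be
    # produced by a learned network rather than fixed as here.
    P = normalized_adjacency(A)
    Z = np.zeros_like(X, dtype=float)
    PkX = X.astype(float)          # P^0 X = X
    for t in theta:
        Z += t * PkX               # accumulate the current hop
        PkX = P @ PkX              # advance to the next power of P
    return Z

# Toy 4-vertex path graph with 2-dimensional vertex features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)[:, :2]
theta = np.array([0.5, 0.3, 0.2])  # illustrative mixing weights over hops 0..2
Z = linear_diffusion(A, X, theta)
```

In this sketch a single diffusion pass already mixes hop-0 through hop-2 information in one layer, which is the mechanism the abstract credits for avoiding both under- and over-smoothing.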