Paper Title
Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs
Paper Authors
Abstract
Graph neural networks (GNNs) have received much attention recently because of their excellent performance on graph-based tasks. However, existing research on GNNs focuses on designing more effective models, with little consideration of the quality of the input data. In this paper, we propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models, for better performance on semi-supervised node classification. As graph data consist of both topology and node labels, we improve input data quality from both perspectives. For topology, we observe that higher classification accuracy can be achieved when the ratio of inter-class edges (edges connecting nodes from different classes) is low, and propose topology update to remove inter-class edges and add intra-class edges. For node labels, we propose training node augmentation, which enlarges the training set using the labels predicted by existing GNN models. SEG is a general framework that can be easily combined with existing GNN models. Experimental results validate that SEG consistently improves the performance of well-known GNN models such as GCN, GAT, and SGC across different datasets.
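The two components described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the confidence threshold, and the use of per-node predicted class probabilities from a trained GNN are all assumptions made for illustration.

```python
# Hypothetical sketch of SEG's two components, based only on the abstract:
# topology update (drop likely inter-class edges, add likely intra-class
# edges) and training node augmentation (add confidently predicted nodes
# to the training set). `probs` maps each node id to a list of predicted
# class probabilities produced by an existing GNN model.

def topology_update(edges, probs, threshold=0.9):
    """Remove likely inter-class edges and add likely intra-class edges."""
    pred = {v: max(range(len(p)), key=lambda c: p[c]) for v, p in probs.items()}
    conf = {v: max(p) for v, p in probs.items()}
    kept = []
    for u, v in edges:
        # Drop an edge when both endpoints are confidently predicted
        # to belong to different classes (a likely inter-class edge).
        if conf[u] >= threshold and conf[v] >= threshold and pred[u] != pred[v]:
            continue
        kept.append((u, v))
    # Add edges between confident nodes predicted to share a class
    # (likely intra-class edges), skipping edges already present.
    confident = sorted(v for v in probs if conf[v] >= threshold)
    for i, u in enumerate(confident):
        for v in confident[i + 1:]:
            if pred[u] == pred[v] and (u, v) not in kept and (v, u) not in kept:
                kept.append((u, v))
    return kept

def augment_training_set(train_labels, probs, threshold=0.9):
    """Enlarge the training set with confidently predicted pseudo-labels."""
    augmented = dict(train_labels)
    for v, p in probs.items():
        if v not in augmented and max(p) >= threshold:
            augmented[v] = max(range(len(p)), key=lambda c: p[c])
    return augmented
```

In this reading, the GNN is first trained on the original data, its predictions are used to clean the topology and augment the labels, and a model is then retrained on the improved input; the abstract does not specify the confidence rule, so a simple probability threshold stands in for it here.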