Paper Title
Bi-GCN: Binary Graph Convolutional Network
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have achieved tremendous success in graph representation learning. Unfortunately, current GNNs usually rely on loading the entire attributed graph into the network for processing. This implicit assumption may not hold under limited memory resources, especially when the attributed graph is large. In this paper, we are the first to propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and the input node features. Moreover, the original matrix multiplications are replaced with binary operations for acceleration. According to our theoretical analysis, Bi-GCN can reduce memory consumption by an average of ~30x for both the network parameters and the input data, and accelerate inference by an average of ~47x, on the citation networks. We also design a new gradient-approximation-based back-propagation method to train Bi-GCN effectively. Extensive experiments demonstrate that Bi-GCN achieves performance comparable to the full-precision baselines. Moreover, our binarization approach can easily be applied to other GNNs, as verified in our experiments.
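To make the abstract's two key ideas concrete, here is a minimal NumPy sketch of (a) binarizing a weight matrix and a feature matrix to {-1, +1} with mean-absolute-value scaling factors, so a float matrix product is approximated by a product of sign matrices, and (b) a straight-through-style gradient that passes gradients only where inputs are within a clipping range. This is a generic XNOR-Net-style illustration under assumptions; the paper's exact binarization scheme and gradient approximation may differ, and the function names (`binarize`, `ste_grad`) are hypothetical.

```python
import numpy as np

def binarize(M, axis):
    """Return a sign matrix in {-1, +1} and a mean-|.| scaling factor along `axis`."""
    Mb = np.where(M >= 0, 1.0, -1.0)
    alpha = np.mean(np.abs(M), axis=axis, keepdims=True)
    return Mb, alpha

def ste_grad(upstream, M, clip=1.0):
    """Straight-through estimator: let gradients pass only where |M| <= clip."""
    return upstream * (np.abs(M) <= clip)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # toy node features
W = rng.normal(size=(8, 3))   # toy layer weights

Xb, beta = binarize(X, axis=1)   # per-row feature scale, shape (4, 1)
Wb, alpha = binarize(W, axis=0)  # per-column weight scale, shape (1, 3)

# Binary "matmul": only +/-1 products, rescaled afterwards.
approx = (Xb @ Wb) * beta * alpha
exact = X @ W
print(np.abs(approx - exact).mean())  # approximation error, not zero
```

In a real binary implementation, the ±1 entries would be packed into bit vectors and the `Xb @ Wb` product replaced by XNOR and popcount instructions, which is where the claimed inference speedup comes from; `ste_grad` stands in for the gradient approximation needed because the sign function has zero gradient almost everywhere.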