Paper Title

FedGCN: Convergence-Communication Tradeoffs in Federated Training of Graph Convolutional Networks

Paper Authors

Yuhang Yao, Weizhao Jin, Srivatsan Ravi, Carlee Joe-Wong

Paper Abstract

Methods for training models on graphs distributed across multiple clients have recently grown in popularity, due to the size of these graphs as well as regulations on keeping data where it is generated. However, cross-client edges, whose endpoints are stored on different clients, naturally exist in such partitioned graphs. Distributed methods for training a model on a single graph therefore incur either significant communication overhead between clients or a loss of information available to training. We introduce the Federated Graph Convolutional Network (FedGCN) algorithm, which uses federated learning to train GCN models for semi-supervised node classification with fast convergence and little communication. Compared to prior methods that require extra communication among clients at each training round, FedGCN clients only communicate with the central server in a single pre-training step, greatly reducing communication costs and allowing the use of homomorphic encryption to further enhance privacy. We theoretically analyze the tradeoff between FedGCN's convergence rate and communication cost under different data distributions. Experimental results show that FedGCN achieves better model accuracy with 51.7% faster convergence on average and at least 100X less communication compared to prior work.
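
To make the one-shot pre-training communication concrete, below is a minimal sketch of how such an exchange could work for a single-hop GCN aggregation. All names here (Client, local_neighbor_sums, server_accumulate) are hypothetical, and the setup is plain Python; this illustrates the idea described in the abstract, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of FedGCN's one-shot pre-training communication
# for the one-hop (single GCN layer) case. Names are illustrative
# assumptions, not the paper's code.

class Client:
    def __init__(self, node_ids, features, adjacency):
        self.node_ids = set(node_ids)   # nodes held by this client
        self.features = features        # {node_id: np.ndarray feature vector}
        self.adjacency = adjacency      # {node_id: list of neighbor ids}

    def local_neighbor_sums(self, requested_nodes):
        """Partial sums of locally held neighbor features for each
        requested node; sent to the server once, before training
        (optionally under homomorphic encryption)."""
        sums = {}
        for v in requested_nodes:
            local_nbrs = [u for u in self.adjacency.get(v, ()) if u in self.node_ids]
            if local_nbrs:
                sums[v] = np.sum([self.features[u] for u in local_nbrs], axis=0)
        return sums


def server_accumulate(partial_sums_per_client):
    """Server adds the clients' partial sums, yielding the exact
    neighbor-feature aggregation each node needs for GCN training,
    without any client revealing an individual node's raw features."""
    totals = {}
    for partial in partial_sums_per_client:
        for v, s in partial.items():
            totals[v] = totals.get(v, 0) + s
    return totals


# Toy usage: two clients, one cross-client edge (node 0 -- node 2).
c0 = Client([0, 1], {0: np.ones(4), 1: np.full(4, 2.0)}, {0: [1, 2], 1: [0]})
c1 = Client([2],    {2: np.full(4, 3.0)},                {0: [2], 2: [0]})
parts = [c.local_neighbor_sums([0, 1, 2]) for c in (c0, c1)]
agg = server_accumulate(parts)   # agg[0] == features[1] + features[2]
```

Because each client only ever transmits sums of feature vectors, and only once before training begins, the server never observes an individual node's raw features, which is consistent with the abstract's claim that homomorphic encryption can be layered on top to further enhance privacy.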
