Paper Title
Towards Private Learning on Decentralized Graphs with Local Differential Privacy
Paper Authors
Paper Abstract
Many real-world networks are inherently decentralized. For example, in social networks, each user maintains a local view of a social graph, such as a list of friends and her profile. It is typical to collect these local views of social graphs and conduct graph learning tasks. However, learning over graphs can raise privacy concerns, as these local views often contain sensitive information. In this paper, we seek to ensure private graph learning on a decentralized network graph. Towards this objective, we propose {\em Solitude}, a new privacy-preserving learning framework based on graph neural networks (GNNs), with formal privacy guarantees based on edge local differential privacy. The crux of {\em Solitude} is a set of new delicate mechanisms that calibrate the noise introduced into the decentralized graph collected from the users. The principle behind the calibration is the intrinsic properties shared by many real-world graphs, such as sparsity. Unlike existing work on locally private GNNs, our new framework can simultaneously protect node feature privacy and edge privacy, and can be seamlessly incorporated with any GNN while retaining privacy-utility guarantees. Extensive experiments on benchmark datasets show that {\em Solitude} can retain the generalization capability of the learned GNN while preserving the users' data privacy under given privacy budgets.
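The abstract does not spell out {\em Solitude}'s calibrated mechanisms, but edge local differential privacy is commonly instantiated by having each user perturb her own adjacency bits with Warner's randomized response before reporting them. The sketch below illustrates that standard baseline only (function names and the unbiased-degree estimator are illustrative, not the paper's method):

```python
import math
import random

def randomized_response(bits, epsilon, rng=random):
    """Flip each adjacency bit independently with probability
    p = 1 / (1 + e^epsilon), which satisfies epsilon-LDP per bit.
    `bits` is a user's local view: bits[j] = 1 iff j is a neighbor."""
    p = 1.0 / (1.0 + math.exp(epsilon))
    return [b ^ (rng.random() < p) for b in bits]

def estimate_degree(noisy_bits, epsilon):
    """Unbiased estimate of the true number of 1s (the node's degree):
    E[sum(noisy)] = m(1 - 2p) + n*p, so invert that affine map."""
    p = 1.0 / (1.0 + math.exp(epsilon))
    n = len(noisy_bits)
    return (sum(noisy_bits) - n * p) / (1.0 - 2.0 * p)

# Hypothetical local view of one user over an 8-node graph.
neighbors = [1, 0, 1, 1, 0, 0, 0, 1]
noisy = randomized_response(neighbors, epsilon=1.0)
```

A smaller epsilon flips more bits (stronger privacy, noisier graph); the calibration step the abstract alludes to exploits graph sparsity to denoise such reports before GNN training.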