Paper Title
Privacy-preserving Collaborative Learning with Automatic Transformation Search
Paper Authors
Paper Abstract
Collaborative learning has gained great popularity due to its benefit of data privacy protection: participants can jointly train a Deep Learning model without sharing their training sets. However, recent works discovered that an adversary can fully recover the sensitive training samples from the shared gradients. Such reconstruction attacks pose severe threats to collaborative learning. Hence, effective mitigation solutions are urgently desired. In this paper, we propose to leverage data augmentation to defeat reconstruction attacks: by preprocessing sensitive images with carefully-selected transformation policies, it becomes infeasible for the adversary to extract any useful information from the corresponding gradients. We design a novel search method to automatically discover qualified policies. We adopt two new metrics to quantify the impacts of transformations on data privacy and model usability, which can significantly accelerate the search speed. Comprehensive evaluations demonstrate that the policies discovered by our method can defeat existing reconstruction attacks in collaborative learning, with high efficiency and negligible impact on the model performance.
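The abstract does not specify the searched policies or any implementation details; the sketch below only illustrates, under an assumed PyTorch setup, the general defense it describes: each participant transforms its sensitive images with an augmentation policy before computing the gradients it shares. The concrete transforms, the `policy` object, and the `local_gradients` helper are hypothetical, not the paper's method.

```python
# Minimal sketch (assumed PyTorch setup): share gradients computed on
# transformed images rather than on the raw sensitive images.
import torch
import torch.nn as nn
from torchvision import transforms

# Hypothetical transformation policy; the paper's automatically searched
# policies are not given in the abstract, so this is purely illustrative.
policy = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.6, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
])

def local_gradients(model, loss_fn, images, labels):
    """Compute the gradients a participant would share, on preprocessed inputs."""
    transformed = torch.stack([policy(img) for img in images])  # apply the policy per image
    loss = loss_fn(model(transformed), labels)
    return torch.autograd.grad(loss, model.parameters())

# Toy usage with a CIFAR-sized random batch and a linear classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.rand(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
grads = local_gradients(model, nn.CrossEntropyLoss(), images, labels)
```

In this reading of the abstract, only `grads` (computed from the transformed batch) would leave the participant, so a reconstruction attack on the shared gradients would at best recover the augmented images rather than the original sensitive ones.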