Paper Title
How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers
Paper Authors
Paper Abstract
As a booming research area in the past decade, deep learning technologies have been driven by big data collected and processed on an unprecedented scale. However, privacy concerns arise due to the potential leakage of sensitive information from the training data. Recent research has revealed that deep learning models are vulnerable to various privacy attacks, including membership inference attacks, attribute inference attacks, and gradient inversion attacks. Notably, the efficacy of these attacks varies from model to model. In this paper, we answer a fundamental question: Does model architecture affect model privacy? By investigating representative model architectures from convolutional neural networks (CNNs) to Transformers, we demonstrate that Transformers generally exhibit higher vulnerability to privacy attacks than CNNs. Additionally, we identify the micro design of activation layers, stem layers, and layer normalization (LN) layers as major factors contributing to the resilience of CNNs against privacy attacks, while the presence of attention modules is another major factor exacerbating the privacy vulnerability of Transformers. Our findings offer valuable insights for defending deep learning models against privacy attacks and inspire the research community to develop privacy-friendly model architectures.