Paper Title
Learning to be adversarially robust and differentially private
Paper Authors
Paper Abstract
We study the difficulties in learning that arise from robust and differentially private optimization. We first study the convergence of gradient-descent-based adversarial training with differential privacy, taking a simple binary classification task on linearly separable data as an illustrative example. We compare the gap between adversarial and nominal risk in both the private and non-private settings, showing that the data-dimensionality-dependent term introduced by private optimization compounds the difficulty of learning a robust model. We then discuss which components of adversarial training and differential privacy hurt optimization, identifying that both the size of the adversarial perturbation and the clipping norm in differential privacy increase the curvature of the loss landscape, implying poorer generalization performance.
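The setting the abstract describes can be sketched in code. The following is an illustrative sketch, not the paper's exact procedure: one epoch of DP-SGD-style adversarial training for a linear classifier on linearly separable binary data, combining an L-infinity worst-case perturbation of size `eps` with the two DP-SGD ingredients named in the abstract, per-example gradient clipping to norm `C` and Gaussian noise. The function name, the logistic loss, and all hyperparameter values are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_adv_train_epoch(w, X, y, eps=0.1, C=1.0, sigma=1.0, lr=0.1, rng=None):
    """One full-batch step of (hypothetical) DP adversarial training.

    w: (d,) weights of a linear classifier; X: (n, d) features;
    y: (n,) labels in {-1, +1}. Returns the updated weights.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = X.shape

    # Worst-case L-inf perturbation of radius eps for a linear model:
    # move each example against its margin, in the direction sign(w).
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]

    # Per-example logistic-loss gradients on the perturbed inputs.
    margins = y * (X_adv @ w)
    grads = -(sigmoid(-margins) * y)[:, None] * X_adv  # shape (n, d)

    # DP-SGD ingredient 1: clip each per-example gradient to L2 norm <= C.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))

    # DP-SGD ingredient 2: average and add Gaussian noise scaled by C.
    # The noise is added independently in every coordinate, which is the
    # source of the d-dependent term the abstract refers to.
    noisy_grad = grads.mean(axis=0) + rng.normal(0.0, sigma * C / n, size=d)
    return w - lr * noisy_grad
```

Note how the noise vector has `d` independent components: its expected squared norm grows linearly with the data dimensionality, which is one way to see why the private term compounds the difficulty of robust learning as `d` increases.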