Paper Title
Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors
Paper Authors
Paper Abstract
As data becomes increasingly vital, a company would be very cautious about releasing its data, because competitors could use it to train high-performance models, thereby posing a tremendous threat to the company's commercial competitiveness. To prevent good models from being trained on the data, we could add imperceptible perturbations to it. Since such perturbations aim to hurt the entire training process, they should reflect the vulnerability of DNN training rather than that of a single model. Based on this new idea, we seek perturbed examples that are always unrecognized (never correctly classified) throughout training. In this paper, we uncover them via model checkpoints' gradients, forming the proposed self-ensemble protection (SEP), which is highly effective because (1) learning on examples ignored during normal training tends to yield DNNs that ignore normal examples; and (2) checkpoints' cross-model gradients are close to orthogonal, meaning the checkpoints are as diverse as DNNs with different architectures. In other words, SEP attains the performance of an ensemble at only the computational cost of training a single model. Extensive experiments with 9 baselines on 3 datasets and 5 architectures verify that SEP sets a new state of the art; e.g., our small $\ell_\infty=2/255$ perturbations reduce the accuracy of a CIFAR-10 ResNet18 from 94.56% to 14.68%, compared to 41.35% for the best prior method. Code is available at https://github.com/Sizhe-Chen/SEP.
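To make the checkpoint-ensemble idea concrete, below is a minimal PyTorch sketch of crafting such protective perturbations. It is an illustrative assumption, not the official SEP implementation: the function name `craft_sep_perturbation`, the hyperparameters, and the PGD-style ascent on the loss averaged over saved checkpoints are placeholders for the procedure the abstract describes.

```python
# Minimal sketch (not the official SEP code): craft perturbations that stay
# misclassified by every checkpoint of a single training run, assuming the
# objective is a PGD-style ascent on the loss averaged over checkpoints.
import torch
import torch.nn.functional as F

def craft_sep_perturbation(model, checkpoint_paths, images, labels,
                           eps=2/255, alpha=0.5/255, steps=30):
    """Hypothetical self-ensemble perturbation crafting for one batch."""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        grad_sum = torch.zeros_like(images)
        for path in checkpoint_paths:
            # Intermediate checkpoints saved during one training run serve
            # as a "free" ensemble: no extra models need to be trained.
            model.load_state_dict(torch.load(path, map_location=images.device))
            model.eval()
            loss = F.cross_entropy(model(images + delta), labels)
            grad_sum += torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            # Ascend the averaged loss so the perturbed examples remain
            # unrecognized by all checkpoints, then project back into the
            # ell_inf ball of radius eps (pixel-range clamping omitted).
            delta += alpha * grad_sum.sign()
            delta.clamp_(-eps, eps)
    return delta.detach()
```

Because the checkpoints' cross-model gradients are close to orthogonal (point (2) in the abstract), averaging them plays a role similar to attacking an ensemble of differently-architected models, which is why a single training run suffices.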