Paper Title

Robust Pre-Training by Adversarial Contrastive Learning

Authors

Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang

Abstract

Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness. In this work, we improve robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations. Our approach leverages a recent contrastive learning framework, which learns representations by maximizing feature consistency under differently augmented views. This fits particularly well with the goal of adversarial robustness, as one cause of adversarial fragility is the lack of feature invariance, i.e., small input perturbations can result in undesirably large changes in features or even predicted labels. We explore various options to formulate the contrastive task, and demonstrate that by injecting adversarial perturbations, contrastive pre-training can lead to models that are both label-efficient and robust. We empirically evaluate the proposed Adversarial Contrastive Learning (ACL) and show it can consistently outperform existing methods. For example, on the CIFAR-10 dataset, ACL outperforms the previous state-of-the-art unsupervised robust pre-training approach by 2.99% in robust accuracy and 2.14% in standard accuracy. We further demonstrate that ACL pre-training can improve semi-supervised adversarial training, even when only a few labeled examples are available. Our code and pre-trained models have been released at: https://github.com/VITA-Group/Adversarial-Contrastive-Learning.
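To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of how adversarial perturbations could be injected into a SimCLR-style contrastive objective, as the abstract describes. It is not the authors' released implementation (see the repository linked above); the function names, the NT-Xent formulation, and the PGD hyper-parameters (`epsilon`, `alpha`, `pgd_steps`) are illustrative assumptions. The key point it illustrates is that the adversary maximizes the contrastive loss, so pre-training enforces feature consistency under worst-case input perturbations without using any labels.

```python
# Minimal sketch of adversarial contrastive pre-training (illustrative only).
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR) contrastive loss between two batches of projected features."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                               # (2N, d)
    sim = z @ z.t() / temperature                                # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))                   # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                         # positives are the paired views

def adversarial_view(encoder, x, z_ref, epsilon=8/255, alpha=2/255, pgd_steps=5):
    """PGD that perturbs x to *maximize* the contrastive loss against a reference view."""
    delta = torch.zeros_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(pgd_steps):
        loss = nt_xent_loss(encoder(x + delta), z_ref.detach())
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()

def acl_training_step(encoder, optimizer, x_aug1, x_aug2):
    """One unsupervised step: contrast a clean augmented view with an adversarial one."""
    with torch.no_grad():
        z_ref = encoder(x_aug1)                                  # reference features, no gradient
    x_adv = adversarial_view(encoder, x_aug2, z_ref)             # worst-case version of x_aug2
    loss = nt_xent_loss(encoder(x_aug1), encoder(x_adv))         # enforce feature consistency
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the adversarial view plays the role of one of the two augmented views in the contrastive pair; the paper explores several ways of combining clean and adversarial views, and this is only one such pairing under the stated assumptions.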
