Fixing Data Augmentation to Improve Adversarial Robustness


Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training. In this paper, we focus on both heuristics-driven and data-driven augmentations as a means to reduce robust overfitting. First, we demonstrate that, contrary to previous findings, when combined with model weight averaging, data augmentation can significantly boost robust accuracy. Second, we explore how state-of-the-art generative models can be leveraged to artificially increase the size of the training set and further improve adversarial robustness. Finally, we evaluate our approach on CIFAR-10 against $l_\infty$ and $l_2$ norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$, respectively. We show large absolute improvements of +5.53% and +4.47% in robust accuracy compared to previous state-of-the-art methods. In particular, against $l_\infty$ norm-bounded perturbations, our model reaches 62.67% robust accuracy without using any external data, beating most prior works that use external data.
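The abstract pairs data augmentation with model weight averaging. A common form of weight averaging is an exponential moving average (EMA) of the online model's parameters, evaluated in place of the raw weights. The sketch below is a minimal, framework-free illustration of that idea; the flat `name -> list of floats` parameter layout and the decay value are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: exponential moving average (EMA) of model weights,
# one common realisation of "model weight averaging".
# Parameters are modelled as a hypothetical flat dict: name -> list of floats.

def ema_update(avg_params, online_params, decay=0.999):
    """Blend the averaged weights toward the current online weights.

    Elementwise: avg <- decay * avg + (1 - decay) * online.
    """
    return {
        name: [decay * a + (1.0 - decay) * w
               for a, w in zip(avg_params[name], online_params[name])]
        for name in avg_params
    }

# Toy usage: after each (adversarial) training step, fold the freshly
# updated online weights into the running average; at evaluation time,
# the averaged weights would be used instead of the online ones.
avg = {"w": [0.0, 0.0]}
for step in range(3):
    online = {"w": [1.0, 2.0]}  # stand-in for the optimiser's output
    avg = ema_update(avg, online, decay=0.5)

print(avg["w"])  # converges toward the online weights [1.0, 2.0]
```

With a large decay (e.g. 0.999), the average changes slowly and smooths out the late-training fluctuations associated with robust overfitting.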