Batch-wise Logit-Similarity: Generalizing Logit-Squeezing and Label-Smoothing

Ali Shafahi (University of Maryland), Mohammad Amin Ghiasi (University of Maryland), Mahyar Najibi (University of Maryland), Furong Huang (University of Maryland), John Dickerson (University of Maryland), Tom Goldstein (University of Maryland)

Abstract
We study how cheap regularization methods can increase adversarial robustness. In particular, we introduce logit-similarity, which can be seen as a generalization of label-smoothing and logit-squeezing. Our version of logit-squeezing applies a batch-wise penalty and allows penalizing the logits aggressively. By measuring the robustness of our models against various gradient-based and gradient-free attacks, we experimentally show that, with the correct choice of hyper-parameters, regularized models can be as robust as adversarially trained models on the CIFAR-10 and CIFAR-100 datasets when robustness is measured against L-Infinity norm attacks. Unlike conventional adversarial training, these regularization methods keep training time short, and the resulting models are robust against L-2 norm attacks in addition to L-Infinity norm attacks.
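To make the idea concrete, the following is a minimal sketch of a logit-squeezing objective of the kind the abstract describes: a standard cross-entropy loss plus a batch-wise penalty on the magnitude of the logits. The coefficient `beta` and the exact form of the penalty (mean squared L2 norm over the batch) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def logit_squeeze_loss(logits, labels, beta=0.05):
    """Cross-entropy plus a batch-wise logit-squeezing penalty.

    Illustrative sketch only: `beta` and the squared-L2 penalty are
    assumptions standing in for the paper's hyper-parameters.

    logits: (batch, classes) array of raw network outputs
    labels: (batch,) array of integer class labels
    """
    # Numerically stable log-softmax for the cross-entropy term.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    cross_entropy = -log_probs[np.arange(len(labels)), labels].mean()

    # Batch-wise squeezing term: penalize large logit magnitudes,
    # averaged over the whole batch rather than per example.
    squeeze = beta * np.mean(np.sum(logits ** 2, axis=1))
    return cross_entropy + squeeze
```

With `beta = 0`, this reduces to ordinary cross-entropy training; increasing `beta` pushes all logits toward zero, which (as the abstract notes) can be applied aggressively.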

DOI
10.5244/C.33.110
https://dx.doi.org/10.5244/C.33.110

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
title={Batch-wise Logit-Similarity: Generalizing Logit-Squeezing and Label-Smoothing},
author={Ali Shafahi and Mohammad Amin Ghiasi and Mahyar Najibi and Furong Huang and John Dickerson and Tom Goldstein},
year={2019},
month={September},
pages={110.1--110.12},
articleno={110},
numpages={12},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.110},
url={https://dx.doi.org/10.5244/C.33.110}
}