We study how cheap regularization methods can increase adversarial robustness. In particular, we introduce logit-similarity, which can be seen as a generalization of label-smoothing and logit-squeezing. Our version of logit-squeezing applies a batch-wise penalty and allows the logits to be penalized aggressively. By measuring the robustness of our models against various gradient-based and gradient-free attacks, we show experimentally that, with the correct choice of hyper-parameters, regularized models can be as robust as adversarially trained models on the CIFAR-10 and CIFAR-100 datasets when robustness is measured against L-Infinity norm attacks. Unlike conventional adversarial training, these regularization methods keep training time short and yield models that are robust against L-2 norm attacks in addition to L-Infinity norm attacks.
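To make the logit-squeezing variant concrete, the following PyTorch sketch adds a batch-wise squared-norm penalty on the logits to the standard cross-entropy loss. The penalty form and the coefficient `beta` are illustrative assumptions; the abstract does not fix the exact formulation, and the logit-similarity generalization is not shown here.

```python
import torch
import torch.nn.functional as F

def logit_squeezing_loss(logits: torch.Tensor, targets: torch.Tensor,
                         beta: float = 0.5) -> torch.Tensor:
    """Cross-entropy plus a batch-wise logit-squeezing penalty (a sketch).

    The penalty is the squared L2 norm of each example's logits, averaged
    over the batch; a larger `beta` (an assumed hyper-parameter) squeezes
    the logits more aggressively.
    """
    ce = F.cross_entropy(logits, targets)
    squeeze = logits.pow(2).sum(dim=1).mean()  # batch-wise mean of ||logits||^2
    return ce + beta * squeeze

# Usage: loss = logit_squeezing_loss(model(x), y); loss.backward()
```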