Deep convolutional neural networks (CNNs) are typically trained with the softmax cross-entropy loss, which yields strong performance on various image classification tasks. While much research effort has gone into improving the building blocks of CNNs, the classification margin in the loss has attracted less attention for optimizing CNNs, in contrast to kernel-based methods such as the SVM. In this paper, we propose a novel method to induce a large-margin CNN that improves classification performance. By analyzing the formulation of the softmax loss, we characterize the margin embedded in the loss and its connection to the distribution of the softmax logits. Based on this analysis, the proposed method is formulated as a regularizer imposed on the logits that induces a large-margin classifier in a form compatible with the softmax loss. Experimental results on image classification with various CNN architectures demonstrate that the proposed method favorably improves performance compared to other large-margin losses.
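The general idea of combining the softmax cross-entropy loss with a logit-level margin regularizer can be sketched as follows. This is only an illustrative sketch, not the paper's actual formulation: the regularizer `large_margin_regularizer`, the margin parameter `margin`, and the weight `lam` are hypothetical names chosen for the example, and the hinge-style penalty on the gap between the true-class logit and the largest competing logit is one plausible instantiation of a margin constraint on logits.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Standard softmax cross-entropy for a single example."""
    z = logits - logits.max()  # subtract max for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def large_margin_regularizer(logits, label, margin=1.0):
    """Hypothetical margin penalty on the logits: charge a cost when the
    true-class logit does not exceed the best competing logit by `margin`."""
    competing = np.delete(logits, label)
    gap = logits[label] - competing.max()
    return max(0.0, margin - gap)

def total_loss(logits, label, lam=0.1, margin=1.0):
    """Softmax loss plus the (assumed) logit-margin regularizer."""
    return (softmax_cross_entropy(logits, label)
            + lam * large_margin_regularizer(logits, label, margin))
```

Because the regularizer is an additive term on the logits, it can be dropped into any softmax-trained CNN without changing the network architecture; only the training objective is modified.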