MixConv: Mixed Depthwise Convolutional Kernels

Mingxing Tan (Google Brain), Quoc Le (Google Brain)

Abstract
Depthwise convolution is becoming increasingly popular in modern efficient ConvNets, but its kernel size is often overlooked. In this paper, we systematically study the impact of different kernel sizes, and observe that combining the benefits of multiple kernel sizes can lead to better accuracy and efficiency. Based on this observation, we propose a new mixed depthwise convolution (MixConv), which naturally mixes up multiple kernel sizes in a single convolution. As a simple drop-in replacement for vanilla depthwise convolution, our MixConv improves the accuracy and efficiency of existing MobileNets on both ImageNet classification and COCO object detection. By integrating MixConv into the AutoML search space, we further develop a new family of models, named MixNets, which significantly outperform previous models including MobileNetV2 [19] (ImageNet top-1 accuracy +4.2%), ShuffleNetV2 [15] (+3.5%), MnasNet [25] (+1.3%), ProxylessNAS [2] (+2.2%), and FBNet [26] (+2.0%). In particular, our MixNet-L achieves a new state-of-the-art 78.9% ImageNet top-1 accuracy under typical mobile settings (<600M FLOPs). Code is at https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet/mixnet.
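
Code Example
The snippet below is a minimal, unofficial sketch of the MixConv operation described in the abstract: the input channels are split into groups, each group is processed by a depthwise convolution with its own kernel size, and the results are concatenated back together. It is written against TensorFlow/Keras for illustration only; the function name, the roughly even channel-partitioning scheme, and the example kernel sizes are assumptions, and the authors' actual implementation lives in the repository linked above.

import tensorflow as tf

def mixconv(inputs, kernel_sizes=(3, 5, 7), strides=1):
    # Split the input channels into one group per kernel size (roughly even
    # split; the exact partition scheme here is an assumption).
    channels = int(inputs.shape[-1])
    num_groups = len(kernel_sizes)
    splits = [channels // num_groups] * num_groups
    splits[0] += channels - sum(splits)
    groups = tf.split(inputs, splits, axis=-1)
    # Apply a depthwise convolution with a different kernel size to each group.
    outputs = [
        tf.keras.layers.DepthwiseConv2D(kernel_size=k, strides=strides,
                                        padding='same', use_bias=False)(x)
        for x, k in zip(groups, kernel_sizes)
    ]
    # Concatenate the per-group outputs back along the channel dimension.
    return tf.concat(outputs, axis=-1)

# Usage: drop-in replacement for a vanilla depthwise convolution layer.
features = tf.random.normal([1, 112, 112, 48])
mixed = mixconv(features, kernel_sizes=(3, 5, 7))
print(mixed.shape)  # (1, 112, 112, 48)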

DOI
10.5244/C.33.116
https://dx.doi.org/10.5244/C.33.116

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
title={{MixConv}: Mixed Depthwise Convolutional Kernels},
author={Mingxing Tan and Quoc Le},
year={2019},
month={September},
pages={116.1--116.13},
articleno={116},
numpages={13},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.116},
url={https://dx.doi.org/10.5244/C.33.116}
}