RecNets: Channel-wise Recurrent Convolutional Neural Networks

George Retsinas (National Technical University of Athens), Athena Elafrou (National Technical University of Athens), Georgios Goumas (National Technical University of Athens), Petros Maragos (National Technical University of Athens)

Abstract
In this paper, we introduce channel-wise recurrent convolutional neural networks (RecNets), a family of novel, compact neural network architectures for computer vision, inspired by recurrent neural networks (RNNs). RecNets build upon Channel-wise Recurrent Convolutional (CRC) layers, a novel type of convolutional layer that splits the input channels into disjoint segments and processes them in a recurrent fashion. In this way, RecNets simulate wide yet compact models, since the parameter sharing of the RNN formulation vastly reduces the number of parameters. Experimental results on the CIFAR-10 and CIFAR-100 image classification tasks demonstrate the superior size/accuracy trade-off of RecNets compared to other compact state-of-the-art architectures.
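To make the idea concrete, the following is a minimal NumPy sketch of the channel-segment recurrence the abstract describes: the input channels are split into disjoint segments that are processed sequentially with one shared pair of weight matrices, so the parameter count scales with the segment width rather than the full channel width. This is an illustrative assumption-laden reading of the abstract, not the paper's exact layer; the shared convolutions are simplified to 1x1 (pointwise) kernels, and the actual kernel sizes, recurrence, and nonlinearity in the paper may differ.

```python
import numpy as np

def crc_layer(x, w_in, w_hid, num_segments):
    """Sketch of a channel-wise recurrent convolutional (CRC) layer.

    x:      input feature map of shape (C, H, W)
    w_in:   shared input-to-hidden 1x1 kernel, shape (C_seg, C_seg)
    w_hid:  shared hidden-to-hidden 1x1 kernel, shape (C_seg, C_seg)
    Splits the C channels into `num_segments` disjoint segments and
    processes them recurrently, reusing the same two weight matrices
    at every step (hypothetical formulation for illustration).
    """
    c, h, w = x.shape
    seg = c // num_segments
    hidden = np.zeros((seg, h, w))
    outputs = []
    for t in range(num_segments):
        x_t = x[t * seg:(t + 1) * seg]            # t-th channel segment
        # A 1x1 convolution is a channel-mixing matmul at each pixel.
        pre = (np.einsum('oc,chw->ohw', w_in, x_t)
               + np.einsum('oc,chw->ohw', w_hid, hidden))
        hidden = np.maximum(pre, 0.0)             # ReLU nonlinearity
        outputs.append(hidden)
    # Concatenating the per-step hidden states yields a "wide" output
    # while only 2 * C_seg**2 weights were used in total.
    return np.concatenate(outputs, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))                # 8 channels, 4x4 map
w_in = 0.1 * rng.standard_normal((2, 2))          # shared across segments
w_hid = 0.1 * rng.standard_normal((2, 2))
y = crc_layer(x, w_in, w_hid, num_segments=4)
print(y.shape)                                    # (8, 4, 4)
```

The contrast with a plain convolution is the source of the compactness claim: a standard 1x1 layer mapping 8 channels to 8 channels needs an 8x8 weight matrix (64 parameters), while this sketch reuses two 2x2 matrices (8 parameters) across all four segments.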

DOI
10.5244/C.33.214
https://dx.doi.org/10.5244/C.33.214

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
title={RecNets: Channel-wise Recurrent Convolutional Neural Networks},
author={George Retsinas and Athena Elafrou and Georgios Goumas and Petros Maragos},
year={2019},
month={September},
pages={214.1--214.13},
articleno={214},
numpages={13},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.214},
url={https://dx.doi.org/10.5244/C.33.214}
}