XNOR-Net++: Improved binary neural networks

Adrian Bulat (Samsung AI Center, Cambridge), Georgios Tzimiropoulos (Samsung AI Center, Cambridge)

Abstract
This paper proposes an improved training algorithm for binary neural networks, in which both weights and activations are binary. A key but largely overlooked feature of the current state-of-the-art method, XNOR-Net [28], is its use of analytically calculated real-valued scaling factors for re-weighting the output of binary convolutions. We argue that analytic calculation of these factors is sub-optimal. Instead, in this work, we make the following contributions: (a) we propose to fuse the activation and weight scaling factors into a single factor that is learned discriminatively via backpropagation; (b) more importantly, we explore several ways of constructing the shape of these scale factors while keeping the computational budget fixed; (c) we empirically measure the accuracy of our approximations and show that they are significantly more accurate than the analytically calculated ones; (d) we show that our approach significantly outperforms XNOR-Net within the same computational budget when tested on the challenging task of ImageNet classification, offering up to a 6% accuracy gain.
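The core idea in contribution (a) can be sketched with a toy example: binarize inputs and weights to {-1, +1}, compute the binary product, and rescale the output with a single learned factor rather than XNOR-Net's analytically computed ones. This is a minimal NumPy illustration under assumed names (`binarize`, `binary_linear`, a per-output-unit `gamma`), not the authors' implementation, which uses convolutions and learns the scale shapes explored in (b):

```python
import numpy as np

def binarize(x):
    """Sign binarization to {-1, +1} (zero mapped to +1)."""
    return np.where(x >= 0, 1.0, -1.0)

def binary_linear(x, w_real, gamma):
    """Toy binary layer: binarize inputs and weights, take their
    +/-1 products (an XNOR + popcount in hardware), then rescale
    the output with gamma, a factor learned via backpropagation
    instead of being computed analytically as in XNOR-Net."""
    xb = binarize(x)        # binary activations
    wb = binarize(w_real)   # binary weights
    y = xb @ wb.T           # integer-valued +/-1 dot products
    return y * gamma        # single fused, learned scaling factor

# Illustrative usage with random data (gamma would be trained in practice).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))       # 4 samples, 8 features
w = rng.standard_normal((3, 8))       # 3 output units
gamma = np.full(3, 0.1)               # one learned scale per output unit
out = binary_linear(x, w, gamma)
print(out.shape)                      # (4, 3)
```

Because the scale is a plain multiplicative parameter, its gradient is available for free during backpropagation, which is what lets it be learned discriminatively rather than derived from weight and activation statistics.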

DOI
10.5244/C.33.15
https://dx.doi.org/10.5244/C.33.15

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
  title={XNOR-Net++: Improved binary neural networks},
  author={Adrian Bulat and Georgios Tzimiropoulos},
  year={2019},
  month={September},
  pages={15.1--15.12},
  articleno={15},
  numpages={12},
  booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
  publisher={BMVA Press},
  editor={Kirill Sidorov and Yulia Hicks},
  doi={10.5244/C.33.15},
  url={https://dx.doi.org/10.5244/C.33.15}
}