Push for Quantization: Deep Fisher Hashing

Yunqiang Li (Delft University of Technology), Wenjie Pei (Tencent), Yufei Zha (Air Force Engineering University), Jan van Gemert (Delft University of Technology)

Abstract
Current massive datasets demand light-weight access for analysis. Discrete hashing methods are thus beneficial because they map high-dimensional data to compact binary codes that are efficient to store and process, while preserving semantic similarity. To optimize powerful deep learning methods for image hashing, gradient-based methods are required. Binary codes, however, are discrete and thus have no continuous derivatives. Relaxing the problem by solving it in a continuous space and then quantizing the solution is not guaranteed to yield separable binary codes. The quantization needs to be included in the optimization. In this paper we push for quantization: We optimize maximum class separability in the binary space. To do so, we introduce a margin on distances between dissimilar image pairs as measured in the binary space. In addition to pair-wise distances, we draw inspiration from Fisher's Linear Discriminant Analysis (Fisher LDA) to maximize the binary distances between classes and at the same time minimize the binary distance of images within the same class. Experimental results on CIFAR-10, NUS-WIDE and ImageNet100 show that our approach leads to compact codes and compares favorably to the current state of the art.
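As a rough illustration only (not the authors' exact objective, which is defined in the paper itself), the two ingredients described above — a margin pushing apart dissimilar pairs in the binary space, and Fisher-style minimization of within-class scatter alongside maximization of between-class scatter — might be sketched on relaxed codes as follows. All function names and weightings here are illustrative assumptions:

```python
import numpy as np

def fisher_hash_loss(codes, labels, margin=4.0):
    """Illustrative loss on relaxed binary codes in [-1, 1].

    Combines (1) a hinge that pushes squared distances between
    dissimilar pairs beyond `margin`, and (2) Fisher-style terms
    that shrink within-class scatter and grow between-class scatter.
    Names and equal weighting are assumptions, not from the paper.
    """
    codes = np.asarray(codes, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)

    # Pairwise squared Euclidean distance: on exact +/-1 codes this is
    # proportional to the Hamming distance.
    sq = ((codes[:, None, :] - codes[None, :, :]) ** 2).sum(-1)
    same = labels[:, None] == labels[None, :]

    # (1) Margin hinge: penalize dissimilar pairs closer than `margin`.
    pair_term = np.maximum(0.0, margin - sq[~same]).mean()

    # (2) Fisher-style scatter around class means vs. the overall mean.
    mu = codes.mean(0)
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        cls = codes[labels == c]
        mc = cls.mean(0)
        within += ((cls - mc) ** 2).sum()
        between += len(cls) * ((mc - mu) ** 2).sum()
    return pair_term + within / n - between / n
```

Minimizing such a loss rewards codes that are identical within a class and far apart (beyond the margin) across classes, which is the separability property the abstract argues plain relax-then-quantize pipelines do not guarantee.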

DOI
10.5244/C.33.182
https://dx.doi.org/10.5244/C.33.182

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
title={Push for Quantization: Deep Fisher Hashing},
author={Yunqiang Li and Wenjie Pei and Yufei Zha and Jan van Gemert},
year={2019},
month={September},
pages={182.1--182.12},
articleno={182},
numpages={12},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.182},
url={https://dx.doi.org/10.5244/C.33.182}
}