Image Captioning with Unseen Objects

Berkan Demirel (HAVELSAN Inc. & METU), Ramazan Gokberk Cinbis (METU), Nazli Ikizler-Cinbis (Hacettepe University)

Abstract
Image caption generation is a long-standing and challenging problem at the intersection of computer vision and natural language processing. A number of recently proposed approaches utilize a fully supervised object recognition model within the captioning pipeline. Such models, however, tend to generate sentences that mention only the objects predicted by the recognition model, excluding instances of classes without labelled training examples. In this paper, we propose a challenging new scenario that targets the image captioning problem in a fully zero-shot learning setting, where the goal is to generate captions for test images containing objects that are not seen during training. The proposed approach jointly uses a novel zero-shot object detection model and a template-based sentence generator. Our experiments show promising results on the COCO dataset.

DOI
10.5244/C.33.17
https://dx.doi.org/10.5244/C.33.17

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
  title     = {Image Captioning with Unseen Objects},
  author    = {Berkan Demirel and Ramazan Gokberk Cinbis and Nazli Ikizler-Cinbis},
  year      = {2019},
  month     = {September},
  pages     = {17.1--17.15},
  articleno = {17},
  numpages  = {15},
  booktitle = {Proceedings of the British Machine Vision Conference (BMVC)},
  publisher = {BMVA Press},
  editor    = {Kirill Sidorov and Yulia Hicks},
  doi       = {10.5244/C.33.17},
  url       = {https://dx.doi.org/10.5244/C.33.17}
}