Look and Modify: Modification Networks for Image Captioning

Fawaz Sammani (Multimedia University), Mahmoud Elsayed (Multimedia University)

Abstract
Attention-based neural encoder-decoder frameworks have been widely used for image captioning. Many of these frameworks generate the caption entirely from scratch, relying solely on global image features or object-detection region features. In this paper, we introduce a framework that learns to modify existing captions produced by a given captioning model by modeling the residual information: at each timestep, the model learns what to keep, remove, or add to the existing caption, allowing it to focus fully on "what to modify" rather than on "what to predict". We evaluate our method on the COCO dataset, training it on top of several image captioning frameworks, and show that our model successfully modifies captions, yielding better captions with improved evaluation scores.
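
The keep/remove/add mechanism described above can be pictured as a gated residual fusion of an existing caption with a decoder state. The short PyTorch sketch below is a rough illustration only, not the paper's implementation; the module name ModificationGate and all dimensions are hypothetical.

import torch
import torch.nn as nn

class ModificationGate(nn.Module):
    """Illustrative residual modification step: fuse the decoder's
    hidden state with the embedding of the existing caption's word
    at the same timestep, gating how much of that word to keep."""

    def __init__(self, hidden_size, embed_size, vocab_size):
        super().__init__()
        # Gate deciding how much of the existing caption's word to keep
        # versus how much new (residual) information to add.
        self.gate = nn.Linear(hidden_size + embed_size, embed_size)
        self.proj = nn.Linear(hidden_size + embed_size, vocab_size)

    def forward(self, hidden, prev_word_embed):
        # hidden: (batch, hidden_size) decoder state at this timestep
        # prev_word_embed: (batch, embed_size) embedding of the word
        # from the existing caption at the same timestep
        fused = torch.cat([hidden, prev_word_embed], dim=-1)
        keep = torch.sigmoid(self.gate(fused))   # per-dimension keep gate in [0, 1]
        kept = keep * prev_word_embed            # portion of the old word to keep
        logits = self.proj(torch.cat([hidden, kept], dim=-1))
        return logits                            # scores over the vocabulary

# Usage with made-up sizes:
gate = ModificationGate(hidden_size=512, embed_size=300, vocab_size=10000)
h = torch.randn(4, 512)   # decoder states for a batch of 4
e = torch.randn(4, 300)   # existing-caption word embeddings
logits = gate(h, e)       # shape (4, 10000)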

DOI
10.5244/C.33.120
https://dx.doi.org/10.5244/C.33.120

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
  title={Look and Modify: Modification Networks for Image Captioning},
  author={Fawaz Sammani and Mahmoud Elsayed},
  year={2019},
  month={September},
  pages={120.1--120.12},
  articleno={120},
  numpages={12},
  booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
  publisher={BMVA Press},
  editor={Kirill Sidorov and Yulia Hicks},
  doi={10.5244/C.33.120},
  url={https://dx.doi.org/10.5244/C.33.120}
}