SC-RANK: Improving Convolutional Image Captioning with Self-Critical Learning and Ranking Metric-based Reward

Shiyang Yan (Queen's University Belfast), Yang Hua (Queen's University Belfast), Neil Robertson (Queen's University Belfast)

Abstract
Image captioning usually employs a Recurrent Neural Network (RNN) to decode image features from a Convolutional Neural Network (CNN) into a sentence. The RNN is typically trained with Maximum Likelihood Estimation (MLE). However, this approach suffers from inherent issues such as the complex memorising mechanism of RNNs and the exposure bias introduced by MLE. Recently, convolutional captioning models have shown advantages, offering a simpler architecture and parallel training. Nevertheless, MLE training still introduces exposure bias, which prevents the model from achieving better performance. In this paper, we show that the self-critical algorithm can optimise the CNN-based model to alleviate this problem. We propose a ranking metric-based reward, denoted SC-RANK, which uses sentence embeddings from a pre-trained language model to generate more diversified captions. Applying SC-RANK avoids the tedious tuning of a specially designed language model, and the knowledge transferred from the pre-trained language model proves helpful for image captioning. State-of-the-art results are obtained on the MSCOCO dataset with the proposed SC-RANK.
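
As a rough illustration of the training signal described in the abstract, the sketch below pairs self-critical learning with an embedding-similarity reward. The `embed` function is a hypothetical stand-in for the pre-trained language model encoder, and cosine similarity between the generated and reference captions is one plausible reading of a ranking metric-based reward; the actual SC-RANK reward may be defined differently in the paper.

```python
import numpy as np

def embed(sentence: str) -> np.ndarray:
    """Stand-in for a pre-trained sentence encoder (hypothetical).
    In practice this would be an embedding from a pre-trained language
    model; here tokens are hashed into a fixed-size vector so the
    sketch runs without external dependencies."""
    vec = np.zeros(64)
    for tok in sentence.lower().split():
        vec[hash(tok) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def ranking_reward(caption: str, reference: str) -> float:
    """Cosine similarity between sentence embeddings, used here as a
    proxy for the ranking metric-based reward (an assumption)."""
    return float(embed(caption) @ embed(reference))

def self_critical_loss(sample_logprob: float,
                       sampled_caption: str,
                       greedy_caption: str,
                       reference: str) -> float:
    """Self-critical policy gradient: the greedily decoded caption
    serves as the reward baseline, so a sampled caption contributes a
    positive learning signal only if it beats the model's own
    test-time (greedy) output."""
    advantage = (ranking_reward(sampled_caption, reference)
                 - ranking_reward(greedy_caption, reference))
    return -advantage * sample_logprob

# Toy usage: the sampled caption is closer to the reference than the
# greedy one, so the advantage is positive and the loss is negative
# (gradient ascent on the reward).
ref = "a dog runs across the grass"
print(self_critical_loss(-4.2, "a dog running on grass",
                         "a cat sits inside", ref))
```

Because the baseline is the model's own greedy output rather than a learned critic, this scheme needs no extra value network; the choice of reward function (here, embedding similarity) is the only component that must be swapped in to match the paper's definition.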

DOI
10.5244/C.33.181
https://dx.doi.org/10.5244/C.33.181

Files
Paper (PDF)
Supplementary material (ZIP)

BibTeX
@inproceedings{BMVC2019,
title={SC-RANK: Improving Convolutional Image Captioning with Self-Critical Learning and Ranking Metric-based Reward},
author={Shiyang Yan and Yang Hua and Neil Robertson},
year={2019},
month={September},
pages={181.1--181.14},
articleno={181},
numpages={14},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.181},
url={https://dx.doi.org/10.5244/C.33.181}
}