MS-GAN: Text to Image Synthesis with Attention-Modulated Generators and Similarity-aware Discriminators

Fengling Mao (Chinese Academy of Sciences), Bingpeng Ma (Chinese Academy of Sciences), Hong Chang (Chinese Academy of Sciences), Shiguang Shan (Chinese Academy of Sciences), Xilin Chen (Chinese Academy of Sciences)

Abstract
Existing approaches to text-to-image synthesis often produce images that either contain artifacts or fail to match the text when the input description is complex. In this paper, we propose a novel model named MS-GAN, composed of multi-stage attention-Modulated generators and Similarity-aware discriminators, to address these problems. The proposed generator consists of multiple convolutional blocks that are modulated by both globally and locally attended features computed between the output image and the text. With this attention modulation, the generator can better preserve the semantic information of the text during the text-to-image transformation. Moreover, we propose a similarity-aware discriminator that explicitly constrains the semantic consistency between the text and the synthesized image. Experimental results on the Caltech-UCSD Birds and MS-COCO datasets demonstrate that our model generates images that look more realistic and better match the given text description, compared with state-of-the-art models.
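To make the two components concrete, below is a minimal PyTorch sketch of (i) a generator block whose convolutional features are modulated by text features attended against the current image features, and (ii) a discriminator with an explicit text-image similarity head in addition to a real/fake head. All module names, dimensions, and the exact modulation scheme (channel-wise scale/shift from the pooled attended features plus a per-location additive term) are illustrative assumptions, not the paper's published architecture.

# Minimal illustrative sketch of the two ideas in the abstract; names,
# dimensions, and the modulation scheme are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnModulatedBlock(nn.Module):
    # Conv block whose activations are modulated by word features
    # attended with respect to the current image features.
    def __init__(self, channels, text_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.query = nn.Conv2d(channels, text_dim, 1)   # image -> query space
        self.to_local = nn.Linear(text_dim, channels)   # local additive term
        self.to_scale = nn.Linear(text_dim, channels)   # global scale
        self.to_shift = nn.Linear(text_dim, channels)   # global shift

    def forward(self, x, words):
        # x: (B, C, H, W) image features; words: (B, T, text_dim) word embeddings.
        h = self.conv(x)
        B, C, H, W = h.shape
        q = self.query(h).flatten(2).transpose(1, 2)          # (B, H*W, text_dim)
        attn = torch.softmax(q @ words.transpose(1, 2), -1)   # (B, H*W, T)
        attended = attn @ words                               # (B, H*W, text_dim)
        # Locally attended features give a per-location additive term; their
        # pooled (global) version gives channel-wise scale and shift.
        local = self.to_local(attended).transpose(1, 2).reshape(B, C, H, W)
        pooled = attended.mean(dim=1)                         # (B, text_dim)
        scale = self.to_scale(pooled).view(B, C, 1, 1)
        shift = self.to_shift(pooled).view(B, C, 1, 1)
        return F.relu(h * (1 + scale) + shift + local)

class SimilarityAwareDiscriminator(nn.Module):
    # Discriminator with a real/fake head plus an explicit
    # text-image similarity head (cosine similarity).
    def __init__(self, channels, text_dim):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.realness = nn.Linear(channels, 1)
        self.img_proj = nn.Linear(channels, text_dim)

    def forward(self, img, sentence):
        # img: (B, 3, H, W); sentence: (B, text_dim) sentence embedding.
        feat = self.backbone(img).flatten(1)                  # (B, channels)
        real_logit = self.realness(feat)                      # realism score
        sim = F.cosine_similarity(self.img_proj(feat), sentence, dim=1)
        return real_logit, sim

# Smoke test with toy shapes: batch 2, 8 words, 128-d text features.
words = torch.randn(2, 8, 128)
feats = AttnModulatedBlock(32, 128)(torch.randn(2, 32, 64, 64), words)
logit, sim = SimilarityAwareDiscriminator(32, 128)(torch.randn(2, 3, 64, 64), words.mean(1))

The design choice mirrored here is that text conditioning enters the generator through attention over the words rather than simple concatenation, and that the discriminator scores semantic consistency explicitly instead of relying only on a real/fake decision.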

DOI
10.5244/C.33.82
https://dx.doi.org/10.5244/C.33.82

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
  title={MS-GAN: Text to Image Synthesis with Attention-Modulated Generators and Similarity-aware Discriminators},
  author={Fengling Mao and Bingpeng Ma and Hong Chang and Shiguang Shan and Xilin Chen},
  year={2019},
  month={September},
  pages={82.1--82.12},
  articleno={82},
  numpages={12},
  booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
  publisher={BMVA Press},
  editor={Kirill Sidorov and Yulia Hicks},
  doi={10.5244/C.33.82},
  url={https://dx.doi.org/10.5244/C.33.82}
}