Abstract

In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details in different sub-regions of the image by paying attention to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. For the first time, it shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.
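The word-level attention the abstract describes pairs each image sub-region with a weighted combination of word features. A minimal NumPy sketch of that idea follows; it is an illustration, not the paper's implementation, and the array names, shapes, and the assumption that word features are already projected into the region feature space are all hypothetical:

```python
import numpy as np

def word_context(regions, words):
    """Compute a word-context vector for each image sub-region.

    regions: (N, D) array of sub-region features.
    words:   (T, D) array of word features (assumed already projected
             into the same D-dimensional space as the regions).
    Returns an (N, D) array: for each region, a softmax-weighted sum
    of the word features, emphasizing the most relevant words.
    """
    scores = regions @ words.T                      # (N, T) similarities
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over words
    return attn @ words                             # (N, D) context vectors

# Toy usage with random features (illustrative shapes only).
rng = np.random.default_rng(0)
regions = rng.standard_normal((4, 8))   # 4 sub-regions
words = rng.standard_normal((5, 8))     # 5 words
context = word_context(regions, words)  # shape (4, 8)
```

In the full model this context vector conditions the next refinement stage, so different sub-regions are generated under different word-level conditions.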

Keywords

Computer science, Generative grammar, Boosting (machine learning), Generator (circuit theory), Generative adversarial network, Artificial intelligence, Image (mathematics), Similarity (geometry), Adversarial system, Matching (statistics), Deep learning, Natural language processing, Image synthesis, Word (group theory), Pattern recognition (psychology)

Publication Info

Year: 2018
Type: article
Citations: 1812
Access: Closed

Citation Metrics

OpenAlex: 1812
Influential: 320
CrossRef: 1285

Cite This

Tao Xu, Pengchuan Zhang, Qiuyuan Huang et al. (2018). AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2018.00143

Identifiers

DOI: 10.1109/cvpr.2018.00143
arXiv: 1711.10485

Data Quality

Data completeness: 84%