Abstract

Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
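The deterministic training mentioned above corresponds to "soft" attention: the decoder forms a context vector as a softmax-weighted average of spatial CNN features, so the whole pipeline stays differentiable and trains with ordinary backpropagation. The sketch below illustrates that attention step only; it is written in PyTorch for clarity and is not the authors' implementation (which was released in Theano). Names such as `SoftAttention`, `feature_dim`, and `attn_dim` are assumptions for this example.

```python
# Minimal sketch of deterministic ("soft") attention over image features,
# as described in the abstract. Illustrative only, not the original code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftAttention(nn.Module):
    """Score each of L annotation vectors against the decoder state,
    softmax the scores, and return the expected annotation vector."""

    def __init__(self, feature_dim: int, hidden_dim: int, attn_dim: int):
        super().__init__()
        self.feat_proj = nn.Linear(feature_dim, attn_dim)   # projects each a_i
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)  # projects h_{t-1}
        self.score = nn.Linear(attn_dim, 1)                  # scalar alignment score

    def forward(self, features: torch.Tensor, hidden: torch.Tensor):
        # features: (batch, L, feature_dim); hidden: (batch, hidden_dim)
        scores = self.score(torch.tanh(
            self.feat_proj(features) + self.hidden_proj(hidden).unsqueeze(1)
        )).squeeze(-1)                                        # (batch, L)
        alpha = F.softmax(scores, dim=1)                      # attention weights
        context = (alpha.unsqueeze(-1) * features).sum(dim=1)  # (batch, feature_dim)
        return context, alpha


if __name__ == "__main__":
    # Toy example: 14x14 = 196 spatial locations with 512-d CNN features.
    attn = SoftAttention(feature_dim=512, hidden_dim=256, attn_dim=128)
    feats = torch.randn(2, 196, 512)
    h_prev = torch.randn(2, 256)
    z, alpha = attn(feats, h_prev)
    print(z.shape, alpha.shape)  # torch.Size([2, 512]) torch.Size([2, 196])
```

The stochastic variant ("hard" attention) instead samples a single location from the weights `alpha` and trains by maximizing a variational lower bound, since the sampling step is not differentiable; visualizing `alpha` is what lets the model's gaze be inspected per generated word.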

Keywords

Computer science, Artificial intelligence, Machine learning, Computer vision, Pattern recognition, Artificial neural network, Machine translation, Object detection, Visualization, Backpropagation, Attention, Image captioning

Publication Info

Year: 2015
Type: article
Volume: 3
Pages: 2048-2057
Citations: 7493
Access: Closed

Citation Metrics

7493 citations (OpenAlex)

Cite This

Kelvin Xu, Jimmy Ba, Ryan Kiros et al. (2015). Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. arXiv (Cornell University), 3, 2048-2057. https://doi.org/10.48550/arxiv.1502.03044

Identifiers

DOI: 10.48550/arxiv.1502.03044