Actor-Critic Sequence Training for Image Captioning

Advances in Neural Information Processing Systems (NIPS), Workshop on Visually-Grounded Interaction and Language, Long Beach, California, USA, December 2017.

[arXiv]
[Workshop PDF]

Generating natural language descriptions of images is an important capability for a robot or other visually intelligent AI agent that may need to communicate with human users about what it is seeing. Such image captioning methods are typically trained by maximising the likelihood of a ground-truth annotated caption given the image. While simple and easy to implement, this approach does not directly maximise the language quality metrics we care about, such as CIDEr. In this paper we investigate training image captioning methods with actor-critic reinforcement learning in order to directly optimise non-differentiable quality metrics of interest. By formulating a per-token advantage and value computation strategy in this novel reinforcement-learning-based captioning model, we show that it is possible to achieve state-of-the-art performance on the widely used MSCOCO benchmark.
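
To illustrate the general idea, below is a minimal sketch (not the authors' code) of per-token actor-critic training for sequence generation in PyTorch. All names (`Actor`, `Critic`, `caption_reward`), sizes, and the placeholder reward are assumptions for illustration; in the paper the terminal reward would be a non-differentiable metric such as CIDEr, and the networks would be conditioned on image features.

```python
# Hedged sketch of actor-critic sequence training, assuming a simple
# GRU actor (policy) and GRU critic (per-token value estimator).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, MAX_LEN = 1000, 256, 16

class Actor(nn.Module):
    """Policy: predicts a distribution over the next caption token."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def step(self, tok, h):
        h = self.rnn(self.embed(tok), h)
        return self.out(h), h  # logits over next token, new state

class Critic(nn.Module):
    """Value function: estimates expected return at each time step."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.v = nn.Linear(HIDDEN, 1)

    def step(self, tok, h):
        h = self.rnn(self.embed(tok), h)
        return self.v(h).squeeze(-1), h

def caption_reward(tokens):
    # Placeholder for a non-differentiable sequence-level metric
    # (e.g. CIDEr against reference captions); dummy scalar here.
    return torch.rand(tokens.size(0))

actor, critic = Actor(), Critic()
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4)

B = 4                                   # batch size
h_a = torch.zeros(B, HIDDEN)            # would come from image features
h_c = torch.zeros(B, HIDDEN)
tok = torch.zeros(B, dtype=torch.long)  # BOS token id (assumed 0)

log_probs, values = [], []
for _ in range(MAX_LEN):                # sample a caption token by token
    logits, h_a = actor.step(tok, h_a)
    dist = torch.distributions.Categorical(logits=logits)
    tok = dist.sample()
    log_probs.append(dist.log_prob(tok))
    v, h_c = critic.step(tok, h_c)
    values.append(v)

R = caption_reward(tok)                 # terminal, sequence-level reward
values = torch.stack(values, dim=1)     # (B, T) per-token value estimates
log_probs = torch.stack(log_probs, dim=1)

# Per-token advantage: the terminal reward baselined by the critic's
# value at each step (a simplification of the paper's formulation).
advantage = R.unsqueeze(1) - values
actor_loss = -(advantage.detach() * log_probs).mean()
critic_loss = F.mse_loss(values, R.unsqueeze(1).expand_as(values))

opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
```

Because the reward only arrives when the caption is complete, the critic's per-token values act as a learned baseline that assigns credit to individual tokens, which is what lets the policy gradient optimise a sequence-level metric like CIDEr directly.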
