STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset (ACL2017 Short)
Yuya Yoshikawa, Yutaro Shigeto and Akikazu Takeuchi. STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset. Annual Meeting of the Association for Computational Linguistics (ACL), Short Paper, 2017. (to appear) [arXiv]
In recent years, automatic generation of image descriptions (captions), that is, image captioning, has attracted a great deal of attention. In this paper, we consider generating Japanese captions for images. Most studies on image captioning target the English language, and there are few image caption datasets in Japanese. To tackle this problem, we construct a large-scale Japanese image caption dataset based on images from MS-COCO. Our dataset consists of 820,310 Japanese captions for 164,062 images. In our experiments, we show that a neural network trained on our dataset generates more natural and accurate Japanese captions than those obtained by first generating English captions and then applying English-to-Japanese machine translation.
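Since the dataset is built on MS-COCO images, a natural assumption is that its annotations follow the MS-COCO caption JSON layout (`images` and `annotations` lists keyed by `image_id`). The sketch below groups captions by image under that assumption; the field names and the tiny inline sample are illustrative, not taken from the paper.

```python
# A minimal sketch, assuming a COCO-style caption annotation layout.
# The sample dict below is made up for illustration only.
sample = {
    "images": [
        {"id": 1, "file_name": "COCO_train2014_000000000001.jpg"},
    ],
    "annotations": [
        {"id": 10, "image_id": 1, "caption": "公園でフリスビーを持つ男性"},
        {"id": 11, "image_id": 1, "caption": "芝生の上に立っている人"},
    ],
}

def captions_by_image(data):
    """Group caption strings by their image_id."""
    grouped = {}
    for ann in data["annotations"]:
        grouped.setdefault(ann["image_id"], []).append(ann["caption"])
    return grouped

grouped = captions_by_image(sample)
print(len(grouped[1]))  # number of captions attached to image 1
```

In the real dataset, `sample` would instead be loaded with `json.load` from the released annotation file, and each MS-COCO image would map to five Japanese captions on average (820,310 / 164,062).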