# MUNIChus - A Multilingual News Image Captioning Benchmark
This repository introduces MUNIChus, the first multilingual news image captioning benchmark, comprising over 154,000 news images across nine languages. Alongside MUNIChus, we evaluate more than 20 state-of-the-art multimodal large language models (MLLMs) on three news image captioning approaches: (1) Zero-shot Prompting, (2) Few-shot Prompting (Random and Similar), and (3) Instruction Fine-tuning. MUNIChus addresses the lack of multilingual resources for news image captioning, offering valuable benchmarks and resources for advancing multilingual multimodal NLP, particularly for low-resource languages such as Sinhala and Urdu. MUNIChus is the largest publicly available multilingual news image captioning dataset.
## Data Collection
For version 1.0, we collected news articles, images, captions, and headlines from the British Broadcasting Corporation (BBC) across nine languages, published before December 31, 2024. To ensure data quality, we removed images with a height or width of less than 180 pixels, and retained only those examples whose captions contain more than three words. The following table provides details across languages.
| Language | Family | Train | Test | Unique Articles | Avg Images/Article | Avg Content Tokens | Avg Caption Tokens | Avg Title Tokens |
|---|---|---|---|---|---|---|---|---|
| Arabic | Afro-Asiatic (Semitic) | 5,119 | 999 | 2,289 | 2.67 | 1,010 | 12.70 | 10.85 |
| Chinese | Sino-Tibetan | 9,389 | 999 | 2,922 | 3.56 | 1,519 | 17.28 | 14.38 |
| English | Indo-European (Germanic) | 79,195 | 1,000 | 38,558 | 2.08 | 461 | 13.66 | 7.76 |
| French | Indo-European (Romance) | 10,247 | 999 | 2,853 | 3.94 | 1,510 | 17.23 | 14.23 |
| Hindi | Indo-European (Indo-Aryan) | 12,566 | 1,000 | 2,760 | 4.92 | 1,968 | 12.32 | 14.03 |
| Indonesian | Austronesian | 12,137 | 1,000 | 1,952 | 6.73 | 1,794 | 16.24 | 14.35 |
| Japanese | Japonic | 7,641 | 1,000 | 3,805 | 2.27 | 1,287 | 20.63 | 18.12 |
| Sinhala | Indo-European (Indo-Aryan) | 2,418 | 998 | 1,046 | 3.27 | 1,194 | 16.39 | 10.16 |
| Urdu | Indo-European (Indo-Aryan) | 6,602 | 998 | 2,478 | 3.07 | 1,792 | 15.68 | 18.28 |
| Total | — | 145,314 | 8,993 | 58,663 | 3.61 | 1,412 | 15.98 | 13.68 |
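The quality filters described above (minimum image size of 180 pixels and captions longer than three words) can be sketched as follows. This is an illustrative snippet, not the actual collection pipeline; the `keep_example` helper and its field names are hypothetical, and the whitespace word count is only a stand-in for however captions were counted in practice.

```python
def keep_example(image_width, image_height, caption):
    """Return True if an example passes the quality filters:
    both image dimensions at least 180 px, and a caption of
    more than three (whitespace-separated) words."""
    if image_width < 180 or image_height < 180:
        return False
    if len(caption.split()) <= 3:
        return False
    return True

# Toy examples: only the first passes both filters.
examples = [
    {"width": 640, "height": 480, "caption": "Crowds gather outside the parliament building"},
    {"width": 120, "height": 480, "caption": "Crowds gather outside the parliament building"},
    {"width": 640, "height": 480, "caption": "Parliament building"},
]
kept = [ex for ex in examples if keep_example(ex["width"], ex["height"], ex["caption"])]
print(len(kept))  # prints 1
```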
Languages are categorised as high-resource (English, French, Chinese), mid-resource (Arabic, Hindi, Japanese), and low-resource (Indonesian, Sinhala, Urdu) following Joshi et al. (2020). Chinese was tokenised using Jieba and Japanese using MeCab.
## Data
All instances across the nine languages were concatenated to create the final dataset. MUNIChus is available on HuggingFace and can be downloaded using the following code.
```python
from datasets import load_dataset

munichus = load_dataset('tharindu/MUNIChus', split='train').to_pandas()
```
To load the test split, use:
```python
munichus_test = load_dataset('tharindu/MUNIChus', split='test').to_pandas()
```
To load a specific language subset (e.g., Sinhala):
```python
munichus_si = load_dataset('tharindu/MUNIChus', 'sinhala', split='train').to_pandas()
```
## Models
We release fine-tuned checkpoints for two models on HuggingFace:
- Aya-vision-8b (fine-tuned): alita9/xl-munichus-CohereLabs-aya-vision-8b
- Llama-3.2-11B-Vision-Instruct (fine-tuned): alita9/xl-munichus-meta-llama-Llama-3.2-11B-Vision-Instruct
## Evaluation
We use BLEU-4 and CIDEr as the primary evaluation metrics, applied consistently across all nine languages. For Chinese and Japanese, we apply language-specific word segmentation (Jieba and MeCab respectively) before metric computation.
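To make the segment-then-score pipeline concrete, here is a minimal sentence-level BLEU-4 sketch with a brevity penalty. This is not the evaluation code used for the reported numbers (those would come from standard toolkits); the `bleu4` helper is illustrative only. The inputs are token lists, so for Chinese and Japanese the tokens would come from Jieba or MeCab, while for the other languages a whitespace split suffices.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of the given order in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(reference, hypothesis):
    """Sentence-level BLEU-4 (uniform weights, single reference).
    Both arguments are lists of tokens, i.e. segmentation happens first."""
    precisions = []
    for n in range(1, 5):
        hyp_counts = ngrams(hypothesis, n)
        ref_counts = ngrams(reference, n)
        # Clipped n-gram precision: each hypothesis n-gram is credited at most
        # as many times as it appears in the reference.
        overlap = sum(min(count, ref_counts[g]) for g, count in hyp_counts.items())
        precisions.append(overlap / max(sum(hyp_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / 4)
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = 1.0 if len(hypothesis) > len(reference) else math.exp(
        1 - len(reference) / max(len(hypothesis), 1))
    return bp * geo_mean

ref = "a man walks past the bank".split()
hyp = "a man walks past the bank".split()
print(bleu4(ref, hyp))  # prints 1.0 for an exact match
```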
## License
MUNIChus is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License (CC BY-NC-SA 4.0).
## Citation
If you use the dataset or the models, please cite the following paper.
```bibtex
@inproceedings{Munichus2026,
  author    = {Chen, Yuji and Plum, Alistair and Hettiarachchi, Hansi and Kanojia, Diptesh and Basnet, Saroj and Zampieri, Marcos and Ranasinghe, Tharindu},
  title     = {{MUNIChus: Multilingual News Image Captioning Benchmark}},
  booktitle = {The Fifteenth Biennial Language Resources and Evaluation Conference (LREC 2026)},
  year      = {2026}
}
```
## Acknowledgements
Hansi Hettiarachchi is partially supported by the CA21167 COST action UniDive, funded by COST (European Cooperation in Science and Technology). We acknowledge the EuroHPC Joint Undertaking for awarding us access to Leonardo at CINECA, Italy, and to the MeluXina high-performance computing infrastructure (granted by the University of Luxembourg on the EuroHPC supercomputer hosted by LuxProvide) for running our experiments.