RAVENEA
Paper | Project Page | GitHub
RAVENEA is a multimodal benchmark for comprehensively evaluating the cultural understanding of vision-language models (VLMs) through retrieval-augmented generation (RAG), introduced in RAVENEA: A Benchmark for Multimodal Retrieval-Augmented Visual Culture Understanding.
It provides:
A large-scale cultural retrieval-generation corpus featuring 1,868 culturally grounded images paired with over 10,000 human-ranked Wikipedia documents.
Two downstream tasks for assessing culture-focused visual question answering (cVQA) and culture-informed image captioning (cIC).
Broad cross-cultural coverage spanning 8 countries and 11 categories, including China, India, Indonesia, Korea, Mexico, Nigeria, Russia, and Spain. The benchmark encompasses a diverse taxonomic spectrum: Architecture, Cuisine, History, Art, Daily Life, Companies, Sports & Recreation, Transportation, Religion, Nature, and Tools.
Dataset Structure
The dataset is organized as follows:
ravenea/
├── images/                  # Directory containing all images
├── metadata_train.jsonl     # Training split metadata
├── metadata_val.jsonl       # Validation split metadata
├── metadata_test.jsonl      # Test split metadata
├── metadata.jsonl           # Full metadata
├── cic_downstream.jsonl     # Culture-informed image captioning task
├── cvqa_downstream.jsonl    # Culture-centric visual question answering task
└── wiki_documents.jsonl     # Corpus of Wikipedia articles for retrieval
Schema
Metadata (metadata_*.jsonl)
Each line is a JSON object representing a data sample:
- file_name: Path to the image file (e.g., ./ravenea/images/ccub_101_China_38.jpg).
- country: Country of origin for the cultural content.
- task_type: Task category (e.g., cIC for image captioning/QA).
- category: Broad cultural category (e.g., Daily Life).
- human_captions: Human-written caption describing the image.
- questions: List of questions associated with the image.
- options: Multiple-choice options for the questions.
- answers: Correct answers for the questions.
- enwiki_ids: List of relevant Wikipedia article IDs.
- culture_relevance: Score or indicator of cultural relevance.
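For cVQA-style samples, a natural reading of the schema is that questions, options, and answers align by index. A minimal sketch with a hypothetical record (all field values below are illustrative, not taken from the dataset):

```python
# Hypothetical record following the schema above; values are illustrative only.
sample = {
    "file_name": "./ravenea/images/ccub_101_China_38.jpg",
    "country": "China",
    "task_type": "cVQA",
    "category": "Daily Life",
    "questions": ["What festival is shown in the image?"],
    "options": [["Lantern Festival", "Mid-Autumn Festival"]],
    "answers": ["Lantern Festival"],
}

# Assumption: questions, options, and answers align positionally.
for question, options, answer in zip(
    sample["questions"], sample["options"], sample["answers"]
):
    print(f"Q: {question}  options: {options}  A: {answer}")
```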
Wikipedia Corpus (wiki_documents.jsonl)
Contains the knowledge base for retrieval:
- id: Unique identifier for the article (e.g., enwiki/65457597).
- text: Full text content of the Wikipedia article.
- date_modified: Last modification date of the article.
Usage
Download the Dataset
Please download the dataset and then unzip it into the current directory:
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="jaagli/ravenea",
    filename="ravenea.zip",
    repo_type="dataset",
    local_dir="./",
)
print(f"File downloaded to: {local_path}")
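The downloaded archive can then be extracted with Python's standard library. A minimal sketch (the stand-in zip created first only makes the snippet self-contained; with the real download, skip that step and extract ravenea.zip directly):

```python
import os
import zipfile

# Stand-in archive so this snippet runs anywhere; in practice, ravenea.zip
# comes from the hf_hub_download step above.
with zipfile.ZipFile("ravenea.zip", "w") as zf:
    zf.writestr("ravenea/metadata.jsonl", "{}\n")

# Extract the archive into the current directory; it is expected to
# unpack into ./ravenea/.
with zipfile.ZipFile("ravenea.zip") as zf:
    zf.extractall(".")

print(os.path.exists("ravenea/metadata.jsonl"))  # True once extracted
```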
Loading the Data
You can load the dataset using standard Python libraries:
import json
from pathlib import Path
def load_jsonl(file_path):
    """Load a JSON Lines file into a list of dicts."""
    data = []
    with open(file_path, 'r', encoding='utf-8') as f:
        for line in f:
            data.append(json.loads(line))
    return data
# Load metadata
train_data = load_jsonl("./ravenea/metadata_train.jsonl")
test_data = load_jsonl("./ravenea/metadata_test.jsonl")
# Load Wikipedia corpus
wiki_docs = load_jsonl("./ravenea/wiki_documents.jsonl")
doc_id_to_text = {doc['id']: doc['text'] for doc in wiki_docs}
# Example: Accessing a sample
sample = train_data[0]
print(f"Image: {sample['file_name']}")
print(f"Caption: {sample['human_captions']}")
print(f"Docs: {sample['enwiki_ids']}")
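A sample's enwiki_ids can be resolved against the doc_id_to_text mapping to assemble the retrieval context for RAG. A self-contained sketch with stand-in records (the id format follows the schema above; the text is illustrative):

```python
# Stand-in corpus and sample mirroring the dataset schema.
wiki_docs = [
    {"id": "enwiki/65457597", "text": "Example article text.", "date_modified": "2024-01-01"},
]
doc_id_to_text = {doc["id"]: doc["text"] for doc in wiki_docs}

sample = {
    "file_name": "./ravenea/images/ccub_101_China_38.jpg",
    "enwiki_ids": ["enwiki/65457597"],
}

# Resolve the sample's ranked document ids to full article text.
retrieved = [doc_id_to_text[i] for i in sample["enwiki_ids"] if i in doc_id_to_text]
print(retrieved)  # ['Example article text.']
```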
BibTeX Citation
@inproceedings{
li2026ravenea,
title={{RAVENEA}: A Benchmark for Multimodal Retrieval-Augmented Visual Culture Understanding},
author={Jiaang Li and Yifei Yuan and Wenyan Li and Mohammad Aliannejadi and Daniel Hershcovich and Anders S{\o}gaard and Ivan Vuli{\'c} and Wenxuan Zhang and Paul Pu Liang and Yang Deng and Serge Belongie},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=4zAbkxQ23i}
}