R3-Bench: Read the Room Reasoning Benchmark
This is the official repository for R3-Bench, introduced in the paper:
"Read the Room: Video Social Reasoning with Mental-Physical Causal Chains" (ICLR 2026).
R3-Bench is an evaluation benchmark with fine-grained annotations of belief, intent, desire, emotion, and their causal chains in complex social scenarios.
Dataset Structure
The dataset is organized as follows:
- videos.csv: Contains video IDs and their corresponding start and end times (in seconds).
- r3-bench-hard.json: Contains the question-answer pairs in R3-Bench-Hard.
```
{
  submission_id: {
    "Human_Annotation_Data": {
      "QID": str,
      "Question": str,
      "Answer_Index": int,
      "Options": List[str],
    },
    "YouTube_ID": str,
    "Start_Seconds": int,
    "End_Seconds": int,
  }
}
```
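As a minimal sketch of consuming this schema, the snippet below recovers each question's correct option text by indexing Options with Answer_Index. The submission ID and option strings are hypothetical placeholders, not real dataset entries:

```python
import json

def correct_answers(data):
    """Map each submission ID to (QID, correct option text),
    following the r3-bench-hard.json schema above."""
    out = {}
    for submission_id, entry in data.items():
        qa = entry["Human_Annotation_Data"]
        out[submission_id] = (qa["QID"], qa["Options"][qa["Answer_Index"]])
    return out

# Hypothetical entry mirroring the schema; real IDs and values differ.
data = {
    "sub_0001": {
        "Human_Annotation_Data": {
            "QID": "Q1",
            "Question": "Why does the speaker pause?",
            "Answer_Index": 2,
            "Options": ["(A) ...", "(B) ...", "(C) correct", "(D) ...", "(E) ..."],
        },
        "YouTube_ID": "abc123",
        "Start_Seconds": 10,
        "End_Seconds": 45,
    }
}

print(correct_answers(data))  # {'sub_0001': ('Q1', '(C) correct')}
```

With the real file, replace the inline dict with `data = json.load(open("r3-bench-hard.json"))`.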
- r3-bench-dx.json: Contains the chain structures and question-answer pairs in R3-Bench-DX. A node named "Belief-X-I", "Intent-X-I", "Desire-X-I", or "Emotion-X-I" denotes the I-th mental state of person X; "Event-I" denotes the I-th event; "Sub-Chain-I" denotes the I-th sub-chain.
```
{
  submission_id: {
    chain_id: {
      "Referents": {
        Referent-X: str, the referent of person X,
      },
      "Nodes": {
        "Belief-X-I/Intent-X-I/Desire-X-I/Emotion-X-I/Event-I": {
          "QID": str,
          "Description": str,
          "Question": str,
          "Answer_Index": int,
          "Options": List[str],
        }
      },
      "Sub-Chains": {
        Sub-Chain-I: {
          "Why_QA": {
            "QID": str,
            "Question": str,
            "Answer_Index": int,
            "Options": List[str],
          },
          "How/What_QA": {
            "QID": str,
            "Question": str,
            "Answer_Index": int,
            "Options": List[str],
          },
          "Reasons": list of node ids, the reasons in the sub-chain,
          "Result": node id, the result in the sub-chain,
        },
      }
    },
    "YouTube_ID": str,
    "Start_Seconds": int,
    "End_Seconds": int,
  }
}
```
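The Reasons/Result fields of each sub-chain define directed causal edges. A small sketch of extracting them, using a hypothetical chain fragment whose node IDs follow the naming scheme above:

```python
def subchain_edges(chain):
    """List (reason_node_id, result_node_id) pairs for every
    sub-chain in one chain entry of r3-bench-dx.json."""
    edges = []
    for sub in chain["Sub-Chains"].values():
        for reason in sub["Reasons"]:
            edges.append((reason, sub["Result"]))
    return edges

# Hypothetical chain fragment; QA dicts elided for brevity.
chain = {
    "Referents": {"Referent-A": "the host"},
    "Nodes": {},
    "Sub-Chains": {
        "Sub-Chain-1": {
            "Why_QA": {},
            "How/What_QA": {},
            "Reasons": ["Event-1", "Belief-A-1"],
            "Result": "Emotion-A-1",
        }
    },
}

print(subchain_edges(chain))
# [('Event-1', 'Emotion-A-1'), ('Belief-A-1', 'Emotion-A-1')]
```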
Download Videos
We provide video IDs and their corresponding start and end times in videos.csv. You can download the videos using the YouTube API or any YouTube video downloader by specifying the video ID and the time range.
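One way to do this (a sketch, not the authors' prescribed method) is yt-dlp's `--download-sections` flag. The column names `video_id`, `start_seconds`, and `end_seconds` below are assumptions; check the actual header of videos.csv:

```python
import csv
import subprocess

def ytdlp_command(video_id, start, end, out_dir="clips"):
    """Build a yt-dlp invocation that downloads only the
    [start, end] section (in seconds) of a YouTube video."""
    url = f"https://www.youtube.com/watch?v={video_id}"
    return [
        "yt-dlp",
        "--download-sections", f"*{start}-{end}",
        "-o", f"{out_dir}/{video_id}.%(ext)s",
        url,
    ]

def download_all(csv_path="videos.csv"):
    # Assumed columns: video_id, start_seconds, end_seconds.
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            subprocess.run(
                ytdlp_command(
                    row["video_id"], row["start_seconds"], row["end_seconds"]
                ),
                check=True,
            )
```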
Evaluation Settings
Our evaluation was conducted using VLMEvalKit.
The evaluation prompt without subtitles is:
```
These are the frames of a video. Select the best answer to the following multiple-choice question based on the video. Based on your understanding, respond with only the letter (A, B, C, D, or E) of the correct option.
Question: {question}
{options (separated with '\n')}
Answer:
```
We use Whisper large-v2 to extract subtitles from the videos. The evaluation prompt with subtitles is:
```
These are the frames of a video. This video's subtitles are listed below:
{subtitles}
Select the best answer to the following multiple-choice question based on the video. Based on your understanding, respond with only the letter (A, B, C, D, or E) of the correct option.
Question: {question}
{options (separated with '\n')}
Answer:
```
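The two templates can be assembled programmatically. This is a sketch: exact whitespace between segments is an assumption, so verify against VLMEvalKit's own formatting before comparing scores:

```python
def build_prompt(question, options, subtitles=None):
    """Assemble the evaluation prompt, with or without subtitles,
    following the two templates above."""
    lines = ["These are the frames of a video."]
    if subtitles is not None:
        lines[0] += " This video's subtitles are listed below:"
        lines.append(subtitles)
    lines.append(
        "Select the best answer to the following multiple-choice question "
        "based on the video. Based on your understanding, respond with only "
        "the letter (A, B, C, D, or E) of the correct option."
    )
    lines.append(f"Question: {question}")
    lines.append("\n".join(options))  # options separated with '\n'
    lines.append("Answer:")
    return "\n".join(lines)
```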
For each video, we extract 16 frames at a resolution of 640x360.
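The frame count and resolution come from the setup above; uniform temporal spacing is an assumption here, shown as a small index-sampling helper:

```python
def sample_indices(total_frames, num_frames=16):
    """Evenly spaced frame indices over [0, total_frames - 1].
    Uniform spacing is an assumption, not the paper's stated scheme."""
    if total_frames <= num_frames:
        return list(range(total_frames))
    step = (total_frames - 1) / (num_frames - 1)
    return [round(i * step) for i in range(num_frames)]

# e.g. a 300-frame clip yields 16 indices from 0 up to 299;
# each selected frame is then resized to 640x360.
```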
License
- Our dataset is licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).
- The videos are collected from public sources (YouTube) and are subject to their respective original licenses. We only provide video IDs and corresponding start and end times for academic research purposes.
Citation
If you find our dataset or paper useful in your research, please consider citing:
```
@inproceedings{niu2026read,
  title={Read the Room: Video Social Reasoning with Mental-Physical Causal Chains},
  author={Lixing Niu and Jiapeng Li and Xingping Yu and Xinyi Dong and Shu Wang and Ruining Feng and Bo Wu and Ping Wei and Yisen Wang and Lifeng Fan},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=TJilJnZjpw}
}
```
Contact
For any questions, feedback, or issues regarding the dataset, please open an issue in this repository or contact:
Lixing Niu: lxniu@stu.pku.edu.cn