---
dataset_info:
  features:
  - name: images
    sequence: image
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: correct_answer
    dtype: string
  - name: question_type
    dtype: string
  splits:
  - name: train
    num_bytes: 5167070090.512
    num_examples: 172384
  - name: static
    num_bytes: 3140831722.665
    num_examples: 127405
  - name: val
    num_bytes: 305661617.158
    num_examples: 4001
  - name: test
    num_bytes: 125653489.0
    num_examples: 150
  download_size: 2182325666
  dataset_size: 8739216919.335
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: static
    path: data/static-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
---
# SAT-v2 Dataset
## Paper
**SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models**
This dataset is part of the SAT (Spatial Aptitude Training) project, which introduces a dynamic benchmark for evaluating and improving spatial reasoning capabilities in multimodal language models.
- **Project Page**: [https://arijitray.com/SAT/](https://arijitray.com/SAT/)
- **Paper**: [arXiv:2412.07755](https://arxiv.org/abs/2412.07755)
## Dataset Description
SAT-v2 is a comprehensive spatial reasoning benchmark containing over 300,000 questions across four splits (train, static, val, and test). The dataset tests various aspects of spatial understanding, including perspective-taking, object relationships, and dynamic scene understanding.
## Loading the Dataset
```python
from datasets import load_dataset
# Load the training split
dataset = load_dataset("array/SAT-v2", split="train")
# Or load a specific split
val_dataset = load_dataset("array/SAT-v2", split="val")
static_dataset = load_dataset("array/SAT-v2", split="static")
test_dataset = load_dataset("array/SAT-v2", split="test")
# Access a sample
sample = dataset[0]
print(sample["question"])
print(sample["answers"])
print(sample["correct_answer"])
```
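If you want to score models yourself, the fields can be assembled into a multiple-choice prompt. The snippet below is a minimal, illustrative sketch (not an official prompt format; the `to_multiple_choice_prompt` helper and lettering scheme are assumptions), assuming `answers` holds the candidate options and `correct_answer` matches one of them exactly:
```python
# Minimal sketch of turning a sample into a lettered multiple-choice prompt.
# Not the official prompt format; assumes `answers` lists the candidate
# options and `correct_answer` is the exact text of the correct option.
def to_multiple_choice_prompt(sample):
    letters = "ABCDEFGH"
    options = "\n".join(
        f"{letters[i]}. {ans}" for i, ans in enumerate(sample["answers"])
    )
    prompt = f"{sample['question']}\n{options}\nAnswer with the letter only."
    target = letters[sample["answers"].index(sample["correct_answer"])]
    return prompt, target

prompt, target = to_multiple_choice_prompt(dataset[0])
print(prompt)
print("Expected answer:", target)
```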
## Dataset Splits
- **train**: 172,384 examples - Dynamic training questions
- **static**: 127,405 examples - Static spatial reasoning questions
- **val**: 4,001 examples - Validation set
- **test**: 150 examples - Test set
**Important Note on Test Set Evaluation:** When evaluating on the test set, please use circular evaluation by switching the position of the correct answer to avoid position bias. If you're using lmms-eval, refer to the implementation here: [https://github.com/arijitray1993/lmms-eval/tree/main/lmms_eval/tasks/sat_real](https://github.com/arijitray1993/lmms-eval/tree/main/lmms_eval/tasks/sat_real)
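As a rough illustration of the idea, circular evaluation re-asks each question with the answer options rotated so that the correct answer occupies every position, and counts the question as solved only if the model is correct under every rotation. The sketch below is a simplified version of that idea, not the lmms-eval implementation; `model_predict` is a hypothetical callable, and the prompt format is illustrative only.
```python
# Simplified sketch of circular evaluation; the authoritative protocol is the
# lmms-eval task linked above. `model_predict(images, prompt)` is a
# hypothetical callable that returns the predicted option letter.
def circular_correct(sample, model_predict):
    letters = "ABCDEFGH"
    answers = sample["answers"]
    for shift in range(len(answers)):
        # Rotate the options so the correct answer cycles through every slot.
        rotated = answers[shift:] + answers[:shift]
        options = "\n".join(f"{letters[i]}. {a}" for i, a in enumerate(rotated))
        prompt = f"{sample['question']}\n{options}\nAnswer with the letter only."
        gold = letters[rotated.index(sample["correct_answer"])]
        if model_predict(sample["images"], prompt) != gold:
            return False  # a single miss under any ordering counts as wrong
    return True

# Accuracy over the test split:
# accuracy = sum(circular_correct(s, model_predict) for s in test_dataset) / len(test_dataset)
```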
## Citation
If you use this dataset, please cite:
```bibtex
@misc{ray2025satdynamicspatialaptitude,
      title={SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models},
      author={Arijit Ray and Jiafei Duan and Ellis Brown and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
      year={2025},
      eprint={2412.07755},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.07755},
}
```