Add paper link, robotics task category, and sample usage (#3)
opened by nielsr (HF Staff)

README.md CHANGED
---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- robotics
- visual-question-answering
- image-to-text
tags:
- spatial-reasoning
- 3d-scenes
- vision-language
- benchmark
---

# Theory of Space: Visual Scene Dataset

This dataset provides pre-rendered 3D multi-room environments for evaluating spatial reasoning in Vision Language Models (VLMs). It is designed to support the **Theory of Space (ToS)** benchmark, which tests whether foundation models can actively construct spatial beliefs through exploration.

**Paper**: [Theory of Space: Can Foundation Models Construct Spatial Beliefs through Active Exploration?](https://huggingface.co/papers/2602.07055)
**Project Page**: [https://theory-of-space.github.io](https://theory-of-space.github.io)
**GitHub Repository**: [https://github.com/mll-lab-nu/Theory-of-Space](https://github.com/mll-lab-nu/Theory-of-Space)

## Dataset Overview

## Usage

### Download
Download via Hugging Face CLI:

```bash
cd Theory-of-Space
source setup.sh
```
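
If you prefer a programmatic download, the same snapshot can also be pulled with the `huggingface_hub` Python library. This is only a sketch: the `repo_id` below is a placeholder, so substitute this dataset's actual identifier on the Hub.

```python
# Sketch of a programmatic download via huggingface_hub.
# NOTE: "ORG/theory-of-space" is a placeholder repo_id, not the real one;
# replace it with this dataset's actual Hub identifier.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ORG/theory-of-space",  # placeholder -- substitute the real dataset id
    repo_type="dataset",            # fetch from the dataset hub, not the model hub
)
print(f"Snapshot downloaded to: {local_dir}")
```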

### Sample Usage
To run a full pipeline evaluation (explore + eval + cogmap) using the provided scripts:

```bash
python scripts/SpatialGym/spatial_run.py \
    --phase all \
    --model-name gpt-5.2 \
    --num 25 \
    --data-dir room_data/3-room/ \
    --output-root result/ \
    --render-mode vision,text \
    --exp-type active,passive \
    --inference-mode batch
```
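
The command above runs every stage in one go; given the `--phase all` flag and the stage names in the description (explore, eval, cogmap), the script presumably also accepts the individual phases, but refer to the GitHub repository for the authoritative list of options.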

## File Structure

| File | Description |
|---|---|
| `*_fbexp.png` | Images rendered after false-belief modifications |
| `top_down*.png` | Bird's-eye view for visualization and debugging |
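
Once the data is on disk, the rendered views can be read with standard image tooling. A minimal sketch, assuming the snapshot sits in a local `tos-data/` directory and that the filename patterns from the table above hold:

```python
# Minimal sketch: enumerate the rendered views described in the table above.
# Assumes the dataset lives under a local "tos-data/" directory and that the
# PNG naming patterns (*_fbexp.png, top_down*.png) apply throughout.
from pathlib import Path

from PIL import Image

data_root = Path("tos-data")

fb_renders = sorted(data_root.rglob("*_fbexp.png"))       # false-belief renders
top_down_views = sorted(data_root.rglob("top_down*.png"))  # bird's-eye debug views

print(f"{len(fb_renders)} false-belief renders, {len(top_down_views)} top-down views")

# Peek at a few false-belief renders to confirm they load correctly.
for path in fb_renders[:3]:
    with Image.open(path) as img:
        print(path.name, img.size)
```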

## Citation

```bibtex
@inproceedings{zhang2026theoryofspace,
  title     = {Theory of Space: Can Foundation Models Construct Spatial Beliefs through Active Exploration?},
  author    = {Zhang, Pingyue and Huang, Zihan and Wang, Yue and Zhang, Jieyu and Xue, Letian and Wang, Zihan and Wang, Qineng and Chandrasegaran, Keshigeyan and Zhang, Ruohan and Choi, Yejin and Krishna, Ranjay and Wu, Jiajun and Li, Fei-Fei and Li, Manling},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2026}
}
```