Update README with model/dataset documentation

#2
by hunarbatra - opened
Files changed (1)
  1. README.md +162 -5
README.md CHANGED
@@ -1,7 +1,164 @@
  ---
- datasets:
- - OX-PIXL/STVQA-7K
- base_model:
- - Qwen/Qwen2.5-VL-3B-Instruct
+ license: apache-2.0
+ language:
+ - en
+ tags:
+ - spatial-reasoning
+ - multimodal
+ - vision-language
+ - scene-graph
+ - reinforcement-learning
+ base_model: Qwen/Qwen2.5-VL-3B-Instruct
+ pipeline_tag: image-text-to-text
  ---
- Paper: https://arxiv.org/abs/2511.07403
+
+ # SpatialThinker-3B
+
+ <p align="center">
+   <a href="https://arxiv.org/abs/2511.07403">
+     <img src="https://img.shields.io/badge/arXiv-2511.07403-b31b1b.svg" alt="arXiv">
+   </a>
+   <a href="https://hunarbatra.com/SpatialThinker">
+     <img src="https://img.shields.io/badge/🌐%20Project%20Page-blue.svg" alt="Project Page">
+   </a>
+   <a href="https://github.com/hunarbatra/SpatialThinker">
+     <img src="https://img.shields.io/badge/GitHub-Repository-black.svg" alt="GitHub">
+   </a>
+ </p>
+
+ **SpatialThinker-3B** is a 3D-aware multimodal large language model (MLLM) trained with reinforcement learning to integrate structured spatial grounding with multi-step reasoning. The model simulates human-like spatial perception by constructing a scene graph of task-relevant objects and spatial relations and then reasoning toward an answer, a behaviour reinforced during training by dense spatial rewards.
+
+ ## Model Description
+
+ - **Base Model**: Qwen2.5-VL-3B-Instruct
+ - **Training**: GRPO (Group Relative Policy Optimization) with dense spatial rewards
+ - **Training Data**: STVQA-7K (7,587 spatial VQA samples); a loading sketch follows this list
+ - **Authors**: Hunar Batra, Haoqin Tu, Hardy Chen, Yuanze Lin, Cihang Xie, Ronald Clark
+ - **Institutions**: University of Oxford, UC Santa Cruz
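+
+ The training set is linked at the bottom of this card. As a minimal sketch, it can be inspected with the `datasets` library; the split and field names are not documented here, so the snippet only prints whatever the first example contains:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load STVQA-7K from the Hugging Face Hub.
+ # "train" is an assumed split name; adjust if the dataset uses a different one.
+ ds = load_dataset("OX-PIXL/STVQA-7K", split="train")
+
+ print(len(ds))       # number of samples
+ print(ds[0].keys())  # inspect the available fields before relying on any of them
+ ```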
+
+ ## Key Features
+
+ - **Structured Spatial Reasoning**: Constructs question-focused scene subgraphs with objects, bounding boxes, and relations
+ - **Dense Spatial Rewards**: Multi-objective reward function enforcing format, count, accuracy, and spatial grounding (an illustrative sketch follows this list)
+ - **9 Spatial Reasoning Categories**: Relations, reach, size, orientation, instance location, depth, distance, count, and existence
+ - **Outperforms GPT-4o** on spatial understanding benchmarks while using only 7K training samples
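+
+ The exact reward formulation is specified in the paper. Purely as an illustration of a dense multi-objective reward of this kind, the sketch below combines per-sample terms for format, count, answer accuracy, and spatial grounding with a weighted sum; the weights and term definitions are placeholders, not the paper's values:
+
+ ```python
+ # Illustrative only: not the reward actually used to train SpatialThinker-3B.
+ # Each term is assumed to be pre-computed and normalised to [0, 1].
+ def dense_spatial_reward(
+     format_ok: bool,         # output follows the <observe>/<scene>/<think>/<answer> structure
+     count_score: float,      # e.g. agreement on the number of predicted objects/relations
+     answer_correct: bool,    # final answer matches the ground-truth option
+     grounding_score: float,  # e.g. overlap between predicted and reference bounding boxes
+ ) -> float:
+     w_format, w_count, w_acc, w_ground = 0.1, 0.1, 0.5, 0.3  # assumed weights
+     return (w_format * float(format_ok)
+             + w_count * count_score
+             + w_acc * float(answer_correct)
+             + w_ground * grounding_score)
+ ```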
+
+ ## Inference Template
+
+ Use the following template for inference:
+
+ ```
+ You FIRST observe the image in <observe> </observe> tags, then visualise the relevant scene graph in <scene> </scene> tags, followed by thinking about the reasoning process as an internal monologue within <think> </think> tags and then provide the final answer. The final answer MUST BE put within <answer> </answer> tags, and only return the final choice including the correct option and answer within the answer tags, e.g., <answer> (A) cat </answer>.
+
+ Image size: {Width} x {Height}
+ ```
+
+ ## Output Format
+
+ The model generates structured output with four components (see the example and parsing sketch below):
+
+ 1. **`<observe>`**: Scene description covering relevant objects
+ 2. **`<scene>`**: JSON scene graph with objects (id, bbox) and relationships (subject, predicate, object)
+ 3. **`<think>`**: Step-by-step reasoning as internal monologue
+ 4. **`<answer>`**: Final answer with option letter and text
+
+ ### Example Output
+
+ ```
+ <observe>
+ The image shows a living room with a couch, a coffee table, and a cat sitting on the floor.
+ </observe>
+ <scene>
+ {
+   "objects": [
+     {"id": "couch.1", "bbox": [50, 100, 400, 350]},
+     {"id": "cat.1", "bbox": [200, 300, 280, 400]},
+     {"id": "table.1", "bbox": [150, 250, 350, 320]}
+   ],
+   "relationships": [
+     {"subject": "cat.1", "predicate": "in front of", "object": "couch.1"},
+     {"subject": "cat.1", "predicate": "beside", "object": "table.1"}
+   ]
+ }
+ </scene>
+ <think>
+ Looking at the scene graph, the cat is positioned in front of the couch and beside the coffee table. The bounding box coordinates show the cat is at y=300-400 while the couch extends to y=350, confirming the cat is on the floor in front of the couch.
+ </think>
+ <answer> (B) in front of the couch </answer>
+ ```
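+
+ Because the output is consistently tagged, the blocks can be pulled apart with simple string handling. A minimal sketch is shown below, assuming the model adheres to the format; real outputs may occasionally omit a tag or emit malformed JSON, so the scene parse is guarded:
+
+ ```python
+ import json
+ import re
+
+ def parse_output(text: str) -> dict:
+     """Extract the <observe>, <scene>, <think>, and <answer> blocks from a response."""
+     parsed = {}
+     for tag in ("observe", "scene", "think", "answer"):
+         match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
+         parsed[tag] = match.group(1).strip() if match else None
+
+     # The scene block is expected to be JSON; fall back to the raw string if it is not.
+     if parsed["scene"] is not None:
+         try:
+             parsed["scene"] = json.loads(parsed["scene"])
+         except json.JSONDecodeError:
+             pass
+     return parsed
+
+ # e.g. with the generated text from the Usage section below:
+ # parse_output(output)["answer"]  ->  "(B) in front of the couch"
+ ```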
+
+ ## Usage
+
+ ```python
+ from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
+ from PIL import Image
+
+ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+     "OX-PIXL/SpatialThinker-3B",
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ processor = AutoProcessor.from_pretrained("OX-PIXL/SpatialThinker-3B")
+
+ # Load image
+ image = Image.open("your_image.jpg")
+ width, height = image.size
+
+ # Prepare prompt with template
+ template = f"""You FIRST observe the image in <observe> </observe> tags, then visualise the relevant scene graph in <scene> </scene> tags, followed by thinking about the reasoning process as an internal monologue within <think> </think> tags and then provide the final answer. The final answer MUST BE put within <answer> </answer> tags, and only return the final choice including the correct option and answer within the answer tags, e.g., <answer> (A) cat </answer>.
+
+ Image size: {width} x {height}"""
+
+ question = "Where is the cat relative to the couch? (A) on top of (B) in front of (C) behind (D) beside"
+
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "image": image},
+             {"type": "text", "text": template + "\n\n" + question},
+         ],
+     }
+ ]
+
+ text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(**inputs, max_new_tokens=1024)
+ output = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(output)
+ ```
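+
+ Note that `batch_decode` above returns the prompt together with the generation. If you only want the newly generated reasoning, you can slice off the prompt tokens first, using the standard Transformers pattern for chat-style generation:
+
+ ```python
+ # Keep only the tokens generated after the prompt.
+ trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
+ response = processor.batch_decode(trimmed, skip_special_tokens=True)[0]
+ print(response)  # <observe> ... </observe> <scene> ... </scene> <think> ... </think> <answer> ... </answer>
+ ```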
+
+ ## Evaluation Results
+
+ SpatialThinker-3B achieves state-of-the-art performance on spatial reasoning benchmarks:
+
+ | Benchmark | Result |
+ |-----------|--------|
+ | CV-Bench (3D) | Strong performance |
+ | BLINK-Spatial | Outperforms GPT-4o |
+ | SpatialBench | SOTA results |
+ | RealWorldQA | Competitive |
+
+ See the [paper](https://arxiv.org/abs/2511.07403) for detailed results.
+
+ ## Citation
+
+ ```bibtex
+ @misc{batra2025spatialthinkerreinforcing3dreasoning,
+     title={SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards},
+     author={Hunar Batra and Haoqin Tu and Hardy Chen and Yuanze Lin and Cihang Xie and Ronald Clark},
+     year={2025},
+     eprint={2511.07403},
+     archivePrefix={arXiv},
+     primaryClass={cs.CV},
+     url={https://arxiv.org/abs/2511.07403},
+ }
+ ```
+
+ ## Links
+
+ - 📄 **Paper**: [arXiv:2511.07403](https://arxiv.org/abs/2511.07403)
+ - 🌐 **Project Page**: [hunarbatra.com/SpatialThinker](https://hunarbatra.com/SpatialThinker)
+ - 💻 **GitHub**: [github.com/hunarbatra/SpatialThinker](https://github.com/hunarbatra/SpatialThinker)
+ - 🤗 **Dataset**: [OX-PIXL/STVQA-7K](https://huggingface.co/datasets/OX-PIXL/STVQA-7K)