---
annotations_creators:
- expert-generated
language:
- en
license: mit
task_categories:
- text-to-3d
- text-to-video
- other
tags:
- blender
- procedural-generation
- physics-simulation
- 4d-generation
- code-generation
pretty_name: Code4D Benchmark
size_categories:
- n<1K
---

# Dataset Card for Code4D (Code2Worlds)

## Dataset Description

- **Paper:** [Code2Worlds: Empowering Coding LLMs for 4D World Generation](https://arxiv.org/abs/2602.11757)
- **Repository:** [GitHub](https://github.com/AIGeeksGroup/Code2Worlds)

### Dataset Summary

**Code4D** is a benchmark for evaluating the ability of Large Language Models (LLMs) to generate physically grounded 4D environments. It pairs natural language prompts with complex 3D scenes (provided here as `.blend` files) that exhibit temporal evolution, physical interactions, and atmospheric changes.

Unlike existing text-to-3D datasets that focus solely on static structures, Code4D challenges models on dynamic fidelity, including fluid dynamics, particle systems, rigid-body dynamics, and soft-body simulations.

This dataset supports the **Code2Worlds** framework, which formulates 4D generation as language-to-simulation code generation using a dual-stream architecture (Object Stream and Scene Stream).

### Supported Tasks and Leaderboards

- **Text-to-4D Scene Generation:** Generating dynamic 3D scenes from text descriptions.
- **Procedural Code Generation:** Evaluating LLMs on generating Blender/Infinigen API calls (an illustrative snippet follows this list).
- **Physics Simulation Benchmarking:** Assessing the realism of generated physical interactions.
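
To make the second task concrete, the snippet below shows the flavor of code a model is expected to produce: plain Blender `bpy` calls that build geometry and attach physics. It is an illustrative sketch written for this card, not an excerpt from the benchmark, and it omits Infinigen-specific asset calls.

```python
# Illustrative only: the kind of Blender API code the benchmark asks models to generate.
import bpy

# Passive ground plane for collisions.
bpy.ops.mesh.primitive_plane_add(size=10, location=(0, 0, 0))
bpy.ops.rigidbody.object_add()
bpy.context.active_object.rigid_body.type = 'PASSIVE'

# Active rigid body dropped from above.
bpy.ops.mesh.primitive_cube_add(size=1, location=(0, 0, 5))
bpy.ops.rigidbody.object_add()
bpy.context.active_object.rigid_body.type = 'ACTIVE'

# Simulate 120 frames of motion.
bpy.context.scene.frame_end = 120
```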

### Languages

The prompts and documentation are in **English**.

---

## Dataset Structure

### Data Instances

Each instance in the dataset consists of a text prompt and its corresponding Blender project file (`.blend`).

**Example:**

* **Prompt:** "A breeze stirs through the autumn forest, gently swaying the entire tree as leaves dance in the wind."
* **File:** `scene_1.blend`

### Data Fields

- `prompt` (string): The natural language instruction describing the scene and desired dynamics.
- `blend_file` (file): The Blender 3D project file containing the scene layout, assets, and simulation settings.
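
Because the scenes are shipped as binary `.blend` files rather than a tabular split, the most direct way to fetch them is through `huggingface_hub`. The sketch below is a minimal example; the repo id and file layout are assumptions, so check the repository's file listing for the actual names.

```python
# Minimal download sketch. The repo id below is a placeholder -- replace it with
# the actual dataset repository id shown on this page.
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="AIGeeksGroup/Code4D",  # hypothetical repo id
    repo_type="dataset",
)

# Collect the Blender scene files for downstream rendering or evaluation.
blend_files = sorted(Path(local_dir).glob("**/*.blend"))
print(f"Fetched {len(blend_files)} .blend scenes into {local_dir}")
```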

---

## Dataset Creation

### Curation Rationale

The dataset was constructed to address the "semantic-physical execution gap" in generative models. It specifically targets scenarios where monolithic generation fails, requiring precise control over both local object structures and global environmental layouts.

---

## Considerations for Using the Data

### Software Dependencies

To open and render the `.blend` files properly, you need:
- **Blender 4.3** or higher.
- The **Infinigen** procedural generation library.
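
With Blender on the PATH, scenes can be opened and rendered headlessly, e.g. `blender -b scene_1.blend -P render.py`. The script below is a minimal `bpy` sketch; the output path and image format are illustrative choices, not part of the benchmark.

```python
# render.py -- run headlessly: blender -b scene_1.blend -P render.py
# Minimal rendering sketch; output path and format are illustrative.
import bpy

scene = bpy.context.scene
scene.render.filepath = "//renders/frame_"        # path relative to the .blend file
scene.render.image_settings.file_format = "PNG"

# Render the full animation using the frame range stored in the scene.
bpy.ops.render.render(animation=True)
```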

### Computational Requirements

The benchmark scenes are designed for high-fidelity rendering.
- **Nature Scenes:** Configured for 1920x1080 resolution, 240 frames, 128 samples.
- **Indoor Scenes:** Configured for 1920x1080 resolution, 120 frames, 196 samples.
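
These settings are stored inside each `.blend` file and can be inspected, or lowered for quick local previews, with a short `bpy` script. The sketch below assumes the scenes use the Cycles engine; the preview values are illustrative.

```python
# check_config.py -- run headlessly: blender -b scene_1.blend -P check_config.py
# Prints the stored render configuration and (optionally) lowers it for previews.
import bpy

scene = bpy.context.scene
print("resolution :", scene.render.resolution_x, "x", scene.render.resolution_y)
print("frame range:", scene.frame_start, "-", scene.frame_end)
print("engine     :", scene.render.engine)

if scene.render.engine == "CYCLES":
    print("samples    :", scene.cycles.samples)
    # Illustrative preview tweak: halve resolution and reduce samples for fast checks.
    scene.render.resolution_percentage = 50
    scene.cycles.samples = 32
```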

---

## Citation

If you use this dataset in your research, please cite the following paper:

```bibtex
@article{zhang2026code2worlds,
  title={Code2Worlds: Empowering Coding LLMs for 4D World Generation},
  author={Zhang, Yi and Wang, Yunshuang and Zhang, Zeyu and Tang, Hao},
  journal={arXiv preprint arXiv:2602.11757},
  year={2026}
}
```