---
license: apache-2.0
task_categories:
- other
language:
- en
- ko
tags:
- world-model
- embodied-ai
- benchmark
- agi
- cognitive-evaluation
- vidraft
- prometheus
- wm-bench
- final-bench-family
pretty_name: World Model Bench (WM Bench)
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: wm_bench.jsonl
---
# 🌍 World Model Bench (WM Bench) v1.0
> **Beyond FID: Measuring Intelligence, Not Just Motion**
**WM Bench** is the world's first benchmark for evaluating the **cognitive capabilities** of World Models and Embodied AI systems.
[Leaderboard](https://huggingface.co/spaces/FINAL-Bench/worldmodel-bench) · [Live Demo](https://huggingface.co/spaces/FINAL-Bench/World-Model) · [FINAL Bench](https://huggingface.co/datasets/VIDraft/FINAL-Bench) · [License](LICENSE)
---
## 🎯 Why WM Bench?
Existing world model evaluations focus on:
- **FID / FVD**: image and video quality ("Does it look real?")
- **Atari scores**: performance in fixed game environments

**WM Bench measures something different: Does the model *think* correctly?**
| Existing Benchmarks | WM Bench |
|---|---|
| FID: "Does it look real?" | "Does it understand the scene?" |
| FVD: "Is the video smooth?" | "Does it predict threats correctly?" |
| Atari: Fixed game environment | Any environment via JSON input |
| No emotion modeling | Emotion escalation measurement |
| No memory testing | Contextual memory utilization |
---
## 📊 Benchmark Structure
### 3 Pillars · 10 Categories · 100 Scenarios
```
WM Score (0–1000)
├── 👁 P1: Perception   250 pts → C01, C02
├── 🧠 P2: Cognition    450 pts → C03, C04, C05, C06, C07
└── 🏃 P3: Embodiment   300 pts → C08, C09, C10
```
**Why Cognition is 45%:** Existing world model evaluations measure perception and motion, but not **judgment**. WM Bench is the only benchmark that measures the quality of a model's decisions.
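As a rough illustration of the point budgets above, pillar subtotals can be aggregated like this (a minimal sketch only, not the official `wm_bench_scoring.py` engine, which defines the actual category-level scoring rules):

```python
# Pillar budgets and category assignments as listed in this README.
PILLARS = {
    "P1_Perception": {"max": 250, "categories": ["C01", "C02"]},
    "P2_Cognition":  {"max": 450, "categories": ["C03", "C04", "C05", "C06", "C07"]},
    "P3_Embodiment": {"max": 300, "categories": ["C08", "C09", "C10"]},
}

def wm_score(category_points: dict) -> int:
    """Sum per-category points into a 0-1000 WM Score, capped per pillar."""
    total = 0
    for pillar in PILLARS.values():
        subtotal = sum(category_points.get(c, 0) for c in pillar["categories"])
        total += min(subtotal, pillar["max"])  # a pillar cannot exceed its budget
    return total
```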
| Cat | Name | World First? |
|-----|------|-------------|
| C01 | Environmental Awareness | |
| C02 | Entity Recognition & Classification | |
| C03 | Prediction-Based Reasoning | ★ |
| C04 | Threat-Type Differentiated Response | ★ |
| C05 | Autonomous Emotion Escalation | ★★ |
| C06 | Contextual Memory Utilization | ★ |
| C07 | Post-Threat Adaptive Recovery | ★ |
| C08 | Motion-Emotion Expression | ★ |
| C09 | Real-Time Cognitive-Action Performance | |
| C10 | Body-Swap Extensibility | ★★ |

★ = First defined in this benchmark · ★★ = No prior research exists
### Grade Scale
| Grade | Score | Label |
|-------|-------|-------|
| S | 900+ | Superhuman |
| A | 750+ | Advanced |
| B | 600+ | Baseline |
| C | 400+ | Capable |
| D | 200+ | Developing |
| F | <200 | Failing |
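The grade thresholds above reduce to a simple lookup, sketched here for reference:

```python
def grade(score: int) -> str:
    """Map a 0-1000 WM Score to a letter grade per the table above."""
    for threshold, letter in [(900, "S"), (750, "A"), (600, "B"),
                              (400, "C"), (200, "D")]:
        if score >= threshold:
            return letter
    return "F"
```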
---
## 🚀 How to Participate
**No 3D environment needed.** WM Bench evaluates via text I/O only:
```
INPUT: scene_context JSON
OUTPUT: PREDICT: left=danger(wall), right=safe(open), fwd=danger(beast), back=safe
MOTION: a person sprinting right in desperate terror
```
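In Python, the round trip looks roughly like this (the `scene_context` field names here are hypothetical; `wm_bench.jsonl` defines the real schema):

```python
import json

# Hypothetical scene: field names are illustrative, not the official schema.
scene = {
    "scenario_id": "C03-07",  # hypothetical ID
    "entities": [
        {"type": "wall",  "direction": "left"},
        {"type": "beast", "direction": "forward", "threat": True},
    ],
}
prompt = "scene_context: " + json.dumps(scene)

# A reply in the documented two-line shape parses into a simple dict:
reply = ("PREDICT: left=danger(wall), right=safe(open), "
         "fwd=danger(beast), back=safe\n"
         "MOTION: a person sprinting right in desperate terror")
lines = dict(l.split(": ", 1) for l in reply.splitlines())
```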
### Participation Tracks
| Track | Description | Max Score |
|-------|-------------|-----------|
| **A** | Text-only (API) | 750 / 1000 |
| **B** | Text + performance metrics | 1000 / 1000 |
| **C** | Text + performance + live demo | 1000 / 1000 + ✅ Verified |
### Quick Start
```bash
git clone https://huggingface.co/datasets/VIDraft/wm-bench-dataset
cd wm-bench-dataset
python example_submission.py \
--api_url https://api.openai.com/v1/chat/completions \
--api_key YOUR_KEY \
--model YOUR_MODEL \
--output my_submission.json
```
Then upload `my_submission.json` to the [WM Bench Leaderboard](https://huggingface.co/spaces/FINAL-Bench/worldmodel-bench).
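A submission file might look like the sketch below. The layout and field names here are hypothetical, for orientation only; `example_submission.py` in the dataset repo defines the canonical format:

```python
import json

submission = {
    "model": "my-world-model-v1",  # hypothetical model name
    "track": "A",                  # A = text-only
    "responses": [
        {
            "scenario_id": "C01-01",  # hypothetical scenario ID
            "output": "PREDICT: left=safe, right=safe, fwd=safe, back=safe\n"
                      "MOTION: a person walking forward calmly",
        },
    ],
}

with open("my_submission.json", "w", encoding="utf-8") as f:
    json.dump(submission, f, indent=2)
```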
---
## 🏆 Current Leaderboard

| Rank | Model | Org | WM Score | Grade | Track |
|------|-------|-----|----------|-------|-------|
| 1 | VIDRAFT PROMETHEUS v1.0 | VIDRAFT | 726 | B | C ✅ |


*Submit your model at the [WM Bench Leaderboard](https://huggingface.co/spaces/FINAL-Bench/worldmodel-bench)*
---
## 🔥 PROMETHEUS World Model: Live Demo
**WM Bench is powered by VIDRAFT PROMETHEUS**, the world's first real-time embodied AI that combines FloodDiffusion motion generation with a Kimi K2.5 cognitive brain.

> Perceive → Predict → Decide → Act



👉 **Try it live:** [FINAL-Bench/World-Model](https://huggingface.co/spaces/FINAL-Bench/World-Model)
---
## 📦 Dataset Files
```
wm-bench-dataset/
├── wm_bench.jsonl          # 100 scenarios + ground truth
├── example_submission.py   # Participation template
├── wm_bench_scoring.py     # Scoring engine (fully open)
├── wm_bench_eval.py        # Evaluation runner
└── README.md
```
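Since `wm_bench.jsonl` is standard JSON Lines, the scenarios can be loaded with a few lines of stdlib Python (record fields follow the dataset's own schema; none are assumed here):

```python
import json

def load_scenarios(path="wm_bench.jsonl"):
    """Read one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```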
---
## 🔬 FINAL Bench Family
WM Bench is part of the **FINAL Bench Family**, a suite of AGI evaluation benchmarks by VIDRAFT:
| Benchmark | Measures | Status |
|-----------|----------|--------|
| [FINAL Bench](https://huggingface.co/datasets/VIDraft/FINAL-Bench) | Text AGI (metacognition) | 🏆 HF Global Top 5 · 4 press features |
| **WM Bench** | **Embodied AGI (world models)** | **🚀 Live** |