Vision2Web / README.md
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- agent
size_categories:
- n<1K
configs:
- config_name: webpage
data_files:
- split: test
path: "webpage/test.parquet"
- config_name: frontend
data_files:
- split: test
path: "frontend/test.parquet"
- config_name: website
data_files:
- split: test
path: "website/test.parquet"
---
# Vision2Web: A Hierarchical Benchmark for Visual Website Development with Agent Verification
![Web Development](https://img.shields.io/badge/Task-Web%20Development-red)
![Multi-Modal](https://img.shields.io/badge/Task-Multi--Modal-red)
![Vision2Web](https://img.shields.io/badge/Dataset-Vision2Web-blue)
<div align='center'>
[[๐Ÿ  Project Page](https://vision2web-bench.github.io/)] [[๐Ÿ“– arXiv Paper](https://arxiv.org/abs/2603.26648)] [[๐Ÿ† Leaderboard](https://vision2web-bench.github.io/#leaderboard)] [[๐Ÿ“ฎ Submit Results](https://huggingface.co/datasets/zai-org/Vision2Web-Leaderboard)]
</div>
<p align="center">
<img src="./docs/images/vision2web-cover.png" width="85%">
</p>
Vision2Web is a comprehensive benchmark designed to evaluate multimodal coding agents on **visual website development tasks spanning the full software development lifecycle**.
This dataset repository contains the **benchmark tasks, UI prototypes, test workflows, and resources** used to evaluate agent performance.
---
# 👀 Introduction
Vision2Web is a hierarchical benchmark for evaluating multimodal coding agents on **end-to-end visual website development**. It measures their ability to integrate:
- UI understanding
- requirements reasoning
- interactive logic
- full-stack implementation

within **long-horizon development scenarios**.
<p align="center">
<img src="./docs/images/compare_bench.png" width="70%">
</p>
The benchmark is organized into three progressive levels:
### Level 1 – Static Webpage
Generate responsive, executable webpages from multi-device UI prototypes (desktop / tablet / mobile).
**Metric**
- Visual Score (VS)
---
### Level 2 – Interactive Frontend
Develop multi-page interactive frontends from multiple prototypes and textual specifications.
**Metrics**
- Visual Score (VS)
- Functional Score (FS)
---
### Level 3 – Full-Stack Website
Build complete full-stack web systems from requirement documents and UI prototypes.
Agents must implement:
- backend logic
- state management
- frontend interactions
**Metrics**
- Visual Score (VS)
- Functional Score (FS)
---
Evaluation uses a **workflow-based agent verification paradigm** combining:
- **GUI Agent verifiers** for functional correctness
- **VLM-based judges** for visual fidelity
This enables **scalable and implementation-agnostic evaluation** across increasing levels of complexity.
---
# 📊 Benchmark Statistics
Vision2Web contains:
- **193 tasks**
- **16 subcategories**
- **4 major domains**
Domains include:
- E-Commerce
- SaaS
- Content Platforms
- Public Service
The dataset includes:
- **918 prototype images**
- **1,255 functional test cases**
<table align="center">
<tr>
<td align="center" width="50%">
<img src="./docs/images/task_distribution.png" width="100%"/>
</td>
<td align="center" width="50%">
<img src="./docs/images/test_case_distribution.png" width="100%"/><br/><br/>
<img src="./docs/images/compare_task.png" width="80%"/>
</td>
</tr>
</table>
---
# 📥 Using the Dataset
The dataset can be downloaded directly from Hugging Face.
After downloading, extract the dataset and place it in your project directory with the following structure:
```
datasets/
├── webpage/    # Level 1: Static Webpage (100 tasks)
├── frontend/   # Level 2: Interactive Frontend (66 tasks)
└── website/    # Level 3: Full-Stack Website (27 tasks)
```
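The parquet configs declared in this card's YAML header (`webpage`, `frontend`, `website`, each with a single `test` split) can also be loaded with the 🤗 `datasets` library. A minimal sketch; the `repo_id` below is a placeholder and must be replaced with this dataset's actual Hugging Face id:

```python
def load_level(config_name: str, repo_id: str = "<org>/Vision2Web"):
    """Load one benchmark level ("webpage", "frontend", or "website").

    `repo_id` is a placeholder -- substitute the actual Hugging Face
    dataset id for this repository.
    """
    # Lazy import: requires `pip install datasets`.
    from datasets import load_dataset
    # Each config exposes a single "test" split, per the YAML header above.
    return load_dataset(repo_id, name=config_name, split="test")
```

Note that loading via `datasets` gives you the parquet metadata; the prototype images, resources, and workflows referenced by each task live in the extracted directory structure shown above.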
Each task directory contains the following components:
| File / Folder | Description |
|---|---|
| `prototypes/` | UI prototype images (desktop / tablet / mobile) |
| `resources/` | Multimedia assets used in tasks |
| `workflow.json` | Functional test workflow specification |
| `prompt.txt` | Textual requirements (Level 2 only) |
| `prd.md` | Requirement document (Level 3 only) |
Once extracted, ensure the dataset directory is placed at the root of the Vision2Web project so that the evaluation pipeline can locate the benchmark tasks correctly.
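As a quick consistency check, the per-level layout and task counts documented above can be encoded in a small helper (the function and variable names are illustrative; the counts and file names come from this README):

```python
# Task counts per level, as documented in the directory tree above.
LEVEL_TASKS = {"webpage": 100, "frontend": 66, "website": 27}

# Components present in every task directory, per the table above.
COMMON_FILES = {"prototypes/", "resources/", "workflow.json"}

def expected_components(level: str) -> set:
    """Return the files/folders a task directory should contain for `level`."""
    extra = {
        "frontend": {"prompt.txt"},  # textual requirements (Level 2 only)
        "website": {"prd.md"},       # requirement document (Level 3 only)
    }
    return COMMON_FILES | extra.get(level, set())

# The per-level counts sum to the 193 tasks reported in the statistics section.
assert sum(LEVEL_TASKS.values()) == 193
```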
---
# โš ๏ธ License
Vision2Web is released under the **CC-BY-NC-SA-4.0 license**.
---
# โœ’๏ธ Citation
If you find Vision2Web useful in your research, please cite:
```bibtex
@misc{he2026vision2webhierarchicalbenchmarkvisual,
title={Vision2Web: A Hierarchical Benchmark for Visual Website Development with Agent Verification},
author={Zehai He and Wenyi Hong and Zhen Yang and Ziyang Pan and Mingdao Liu and Xiaotao Gu and Jie Tang},
year={2026},
eprint={2603.26648},
archivePrefix={arXiv},
primaryClass={cs.SE},
url={https://arxiv.org/abs/2603.26648},
}
```