---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- video-text-to-text
tags:
- GUI
- CUA
- Agents
- action prediction
- multimodal
- computer-use
- video-demonstrations
- desktop-automation
---
# VideoCUA

The largest open, human-annotated video corpus for desktop computer use.
Part of CUA-Suite: Massive Human-annotated Video Demonstrations for Computer-Use Agents
Paper • Project Page • GitHub • UI-Vision • GroundCUA
## Overview
VideoCUA is the largest open expert video corpus for desktop computer use, comprising ~10K tasks, 55 hours of continuous 30 fps screen recordings, and 6 million frames across 87 professional desktop applications spanning 12 categories.
Unlike sparse screenshot datasets, VideoCUA preserves the full temporal dynamics of human interaction — every mouse movement, click, drag, scroll, and keystroke is logged with millisecond precision alongside continuous video. This enables research in action prediction, imitation learning, visual world models, and video-based reward modeling.
VideoCUA is part of CUA-Suite, a unified ecosystem that also includes:
- UI-Vision — A desktop-centric benchmark evaluating element grounding, layout understanding, and action prediction.
- GroundCUA — A large-scale pixel-precise UI grounding dataset with 5M+ human-verified element annotations.
## Usage
To process the raw video data and action logs into trajectories for training or evaluation, you can use the synthesis pipeline provided in the GitHub repository.
### 1. Download & Extract

```bash
bash download_data.sh --repo ServiceNow/VideoCUA --output_dir ./VideoCUA
```
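Alternatively, the same files can be fetched with the Hugging Face client directly. A minimal sketch, assuming the dataset repo id from the command above:

```python
# Sketch: download the dataset with huggingface_hub instead of download_data.sh.
# Assumes the dataset repo id ServiceNow/VideoCUA from the command above; the
# per-application zips under raw_data/ still need to be extracted afterwards.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ServiceNow/VideoCUA",
    repo_type="dataset",
    local_dir="./VideoCUA",
    # allow_patterns=["raw_data/7-Zip.zip"],  # optionally fetch a single app
)
```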
### 2. Convert to Trace Format
To extract video frames at each action timestamp and produce standardized trajectories:
```bash
python convert_videocua.py \
  --data_dir ./VideoCUA/data \
  --output_dir ./videocua_processed \
  --num_workers 4
```
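Conceptually, this step seeks the video to each logged action's timestamp and saves that frame. A minimal sketch of that pairing with OpenCV, assuming a task folder laid out as in the Data Format section below (the actual `convert_videocua.py` pipeline may differ in detail):

```python
# Sketch: pair each logged action with the video frame at its timestamp.
# task_dir is a hypothetical extracted task folder laid out as described
# under "Data Format" below; convert_videocua.py may differ in detail.
import json

import cv2

task_dir = "./VideoCUA/data/7-Zip/45525"
with open(f"{task_dir}/action_log.json") as f:
    log = json.load(f)

cap = cv2.VideoCapture(f"{task_dir}/video/video.mp4")
for i, action in enumerate(log["action_log"]):
    # Timestamps are in seconds; OpenCV seeks by milliseconds.
    cap.set(cv2.CAP_PROP_POS_MSEC, action["timestamp"] * 1000)
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"frame_{i:04d}_{action['action_type']}.png", frame)
cap.release()
```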
### 3. Generate CoT Annotations

```bash
python gen_cot.py \
  --task_list_path ./videocua_processed/task_list.json \
  --model claude-sonnet-4.5 \
  --num_threads 4 \
  --suffix cot_v1
```
## Repository Structure

```
.
├── assets/
│   ├── cua-suite-logo.png
│   └── cua-suite-teaser.png
├── raw_data/                 # One zip per application (87 total)
│   ├── 7-Zip.zip
│   ├── Affine.zip
│   ├── Anki.zip
│   ├── ...
│   └── draw.io.zip
└── README.md
```
## Data Format

Each application zip in `raw_data/` contains multiple task folders identified by numeric task IDs. Each task folder has the following structure:

```
<task_id>/
├── action_log.json           # Task metadata and timestamped actions
└── video/
    ├── video.mp4             # Continuous 30 fps screen recording (1920×1080)
    └── video_metadata.json   # Video properties (fps, duration, resolution, etc.)
```
### `action_log.json`

```json
{
  "task_id": 45525,
  "task_instruction": "Open test.7z present in archive it and see the contents",
  "platform": "7-Zip",
  "action_log": [
    {
      "action_type": "CLICK",
      "timestamp": 2.581,
      "action_params": {
        "x": 47,
        "y": 242,
        "numClicks": 2
      },
      "groundcua_id": "9a7daeed..."
    }
  ]
}
```
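For a quick look at a trajectory, the log can be summarized directly; a minimal sketch over the schema shown above:

```python
# Sketch: summarize one trajectory from its action log (schema as shown above).
import json
from collections import Counter

with open("action_log.json") as f:
    log = json.load(f)

actions = log["action_log"]
counts = Counter(a["action_type"] for a in actions)
duration = actions[-1]["timestamp"] if actions else 0.0
print(log["task_instruction"])
print(f"{len(actions)} actions over {duration:.1f}s: {dict(counts)}")
```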
Each action entry includes a `groundcua_id` field: the unique identifier of the corresponding screenshot in the GroundCUA repository. Using this ID, you can look up the fully annotated screenshot in GroundCUA, linking the video action trajectory to dense UI grounding annotations.
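Once the GroundCUA annotations are indexed by element id, the join is a dictionary lookup. In this sketch, `groundcua_annotations.json` and its structure are hypothetical stand-ins for however you export GroundCUA; only the `groundcua_id` key comes from the log format above:

```python
# Sketch: join video actions to GroundCUA annotations by groundcua_id.
# groundcua_annotations.json and its {"id": ...} records are hypothetical
# stand-ins for an exported GroundCUA index; only groundcua_id is real.
import json

with open("action_log.json") as f:
    log = json.load(f)
with open("groundcua_annotations.json") as f:  # hypothetical export
    annotations = {a["id"]: a for a in json.load(f)}

for action in log["action_log"]:
    ann = annotations.get(action["groundcua_id"])
    if ann is not None:
        print(action["action_type"], "->", ann)
```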
## Citation

If you find VideoCUA or any of the other works in CUA-Suite useful for your research, please cite the corresponding papers:
```bibtex
@inproceedings{jian2026cuasuite,
  title={{CUA}-Suite: Massive Human-annotated Video Demonstrations for Computer-Use Agents},
  author={Xiangru Jian and Shravan Nayak and Kevin Qinghong Lin and Aarash Feizi and Kaixin Li and Patrice Bechard and Spandana Gella and Sai Rajeswar},
  booktitle={ICLR 2026 Workshop on Lifelong Agents: Learning, Aligning, Evolving},
  year={2026},
  url={https://openreview.net/forum?id=IgTUGrZfMr}
}

@inproceedings{feizi2026grounding,
  title={Grounding Computer Use Agents on Human Demonstrations},
  author={Aarash Feizi and Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Kaixin Li and Rabiul Awal and Xing Han L{\`u} and Johan Obando-Ceron and Juan A. Rodriguez and Nicolas Chapados and David Vazquez and Adriana Romero-Soriano and Reihaneh Rabbany and Perouz Taslakian and Christopher Pal and Spandana Gella and Sai Rajeswar},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=9WiPZy3Kro}
}

@inproceedings{nayak2025uivision,
  title={{UI}-Vision: A Desktop-centric {GUI} Benchmark for Visual Perception and Interaction},
  author={Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Juan A. Rodriguez and Montek Kalsi and Nicolas Chapados and M. Tamer {\"O}zsu and Aishwarya Agrawal and David Vazquez and Christopher Pal and Perouz Taslakian and Spandana Gella and Sai Rajeswar},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://openreview.net/forum?id=5Rtj4mYH1C}
}
```
## License
This dataset is released under the MIT License.