---
pretty_name: Doom Frame Dataset
tags:
  - doom
  - vizdoom
  - reinforcement-learning
  - imitation-learning
  - webdataset
configs:
  - config_name: preview
    data_files:
      - split: train
        path: data/train-000000.tar
  - config_name: full
    data_files:
      - split: train
        path: data/train-*.tar
---
# DoomFrameDataset
DoomFrameDataset is a ViZDoom frame-action dataset generated from policy rollouts. It is packaged as WebDataset tar shards for streaming training, imitation learning, behavior cloning, and offline reinforcement-learning experiments.
The dataset contains RGB game frames paired with the action selected by the rollout policy and per-step metadata such as reward, episode id, step id, terminal flag, and value estimate.
## Dataset Size
| Config | Files | Samples | Intended use |
|---|---|---|---|
| `preview` | 1 shard | ~79k | Hugging Face preview and quick sanity checks |
| `full` | 31 shards | 2,398,745 | Training and full streaming reads |
The packaged dataset is about 68 GB.
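A back-of-envelope estimate from the figures above (actual shard sizes will vary somewhat):

```python
total_bytes = 68 * 10**9   # ~68 GB packaged size
shards = 31
samples = 2_398_745

print(f"~{total_bytes / shards / 10**9:.1f} GB per shard")        # ~2.2 GB per shard
print(f"~{total_bytes / samples / 1000:.0f} kB per sample")       # ~28 kB per sample (PNG + JSON)
```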
## Files

```
data/
  train-000000.tar
  train-000001.tar
  ...
  train-000030.tar
action_map.json
README.md
```
Each tar shard contains paired files with the same numeric key:
```
000000000000.png
000000000000.json
000000000001.png
000000000001.json
...
```
The PNG is the game frame. The JSON is the metadata for that frame.
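The key-pairing convention can be illustrated with nothing but the standard library. The sketch below builds a tiny synthetic shard (placeholder bytes stand in for real PNG frames) and reads it back, grouping members by their shared numeric key:

```python
import io
import json
import tarfile

# Build a tiny synthetic shard using the same key-pairing convention.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for key in ("000000000000", "000000000001"):
        png = b"\x89PNG placeholder"  # a real shard stores an actual PNG frame
        meta = json.dumps({"webdataset_key": key}).encode()
        for suffix, payload in ((".png", png), (".json", meta)):
            info = tarfile.TarInfo(name=key + suffix)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

# Read it back, grouping members by their shared numeric key.
buf.seek(0)
samples = {}
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        key, ext = member.name.rsplit(".", 1)
        samples.setdefault(key, {})[ext] = tar.extractfile(member).read()

print(sorted(samples))                   # ['000000000000', '000000000001']
print(sorted(samples["000000000000"]))   # ['json', 'png']
```

This is exactly the grouping that WebDataset performs for you; the examples below use the higher-level loaders instead.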
## Sample Metadata

```json
{
  "action_id": 1,
  "action_name": "TURN_RIGHT",
  "action_vector": [0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
  "curriculum_level": 1,
  "done": false,
  "episode": 1,
  "frame_path": "frames/episode_001/step_000000.png",
  "global_step": 0,
  "reward": 0.0,
  "source_frame_path": "frames/episode_001/step_000000.png",
  "step": 0,
  "value": 1.7968196868896484,
  "webdataset_key": "000000000000"
}
```
See `action_map.json` for the full mapping from action ids to action names and action vectors.
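The exact schema of `action_map.json` is not reproduced here; as an illustration only, assuming it maps action ids to names and ViZDoom button-activation vectors (consistent with the sample metadata above), a lookup helper might look like:

```python
import json

# Assumed (unverified) shape of action_map.json, consistent with the
# sample metadata above: id -> {name, button-activation vector}.
# In practice you would load it from the repo:
#   with open("action_map.json") as f:
#       ACTION_MAP = json.load(f)
ACTION_MAP = {
    "1": {"name": "TURN_RIGHT", "vector": [0.0, 0.0, 0.0, 0.0, 1.0, 0.0]},
}

def lookup_action(action_id: int):
    """Return (action_name, button_vector) for a rollout action id."""
    entry = ACTION_MAP[str(action_id)]
    return entry["name"], entry["vector"]

name, vector = lookup_action(1)
print(name, vector.index(1.0))   # TURN_RIGHT 4
```

Note that `action_id` indexes the discrete action set, while `action_vector` is the per-button activation pattern, so the two need not align index-for-index.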
## Load The Preview Config
Use preview when you only want to verify the dataset or inspect examples in the Hugging Face Dataset Viewer.
```python
from datasets import load_dataset

ds = load_dataset(
    "brahmandam/DoomFrameDataset",
    "preview",
    split="train",
    streaming=True,
)

sample = next(iter(ds))
print(sample.keys())
print(sample["json"])
image = sample["png"]
```
## Stream The Full Dataset
Use full for training.
```python
from datasets import load_dataset

ds = load_dataset(
    "brahmandam/DoomFrameDataset",
    "full",
    split="train",
    streaming=True,
)

for sample in ds:
    image = sample["png"]
    metadata = sample["json"]
    action_id = metadata["action_id"]
    break
```
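Streamed samples arrive one at a time; for imitation learning you typically group them into fixed-size batches before feeding a model. A minimal framework-agnostic batching sketch (it works on any sample iterable, shown here with dummy samples in place of the real stream):

```python
from itertools import islice

def batched(samples, batch_size):
    """Yield lists of up to batch_size samples from any iterable."""
    it = iter(samples)
    while batch := list(islice(it, batch_size)):
        yield batch

# Dummy stand-ins for streamed {"png": ..., "json": ...} samples.
dummy = ({"png": None, "json": {"action_id": i % 6}} for i in range(10))
for batch in batched(dummy, 4):
    actions = [s["json"]["action_id"] for s in batch]
    print(len(batch), actions)
# 4 [0, 1, 2, 3]
# 4 [4, 5, 0, 1]
# 2 [2, 3]
```

In a real training loop you would replace `dummy` with the streaming `ds` above and stack the decoded frames into tensors for your framework of choice.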
You can also read the shards directly with WebDataset:
```python
import webdataset as wds

urls = "https://huggingface.co/datasets/brahmandam/DoomFrameDataset/resolve/main/data/train-{000000..000030}.tar"

dataset = (
    wds.WebDataset(urls)
    .decode("pil")
    .to_tuple("png", "json")
)

image, metadata = next(iter(dataset))
```
## Notes
The preview config intentionally points to a single shard so the Hub can inspect a small part of the dataset without processing the full 68 GB. For training, use the full config.
This dataset was generated from automated ViZDoom policy rollouts. It should be treated as gameplay observation/action data, not human demonstrations.