# HLTV CS2 POV Rendered Dataset — test_small

A test slice (16 chunks across one match, one map, three players) of `blanchon/cs2_dataset_render`, designed end-to-end for 10 TB+ scale and parallel render fleets.
Each chunk is a one-minute Counter-Strike 2 player POV clip, with all four streams accessible per row:
- `video` (mp4, 1280×720, 32 fps)
- `audio` (wav, tick-aligned)
- `inputs` (per-tick player input — keys, mouse, weapon, view angles)
- `worlds` (per-tick player world state — XYZ, velocity, health, …)
Plus a low-resolution, low-fps preview (~500 KB) for fast browsing, a set of derived chunk-level summaries (weapon, side, damage, shots, distance) for byte-frugal filtering, and small relational indexes for cross-cutting queries by match, team, round, etc.
## TL;DR

```python
from datasets import load_dataset

REPO = "blanchon/cs2_dataset_render_test_small"

# The default config is `previews` — ~500 KB per row, perfect for
# scrubbing through the dataset:
ds = load_dataset(REPO, split="train", streaming=True)
for row in ds.take(3):
    show(row["preview_video"])

# When you want full media, opt into the `chunks` config:
ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["video", "inputs"],   # ← bytes are pushed down
    filters=[("player", "==", 3), ("chunk_index", "==", 0)],
)
for row in ds:
    train_step(row["video"], row["inputs"])
```
Bytes pulled scale with the filtered output, not the dataset size.
Verified end-to-end: a single 50 MB matched video pulls ~42.5 MB;
single-row inputs queries pull ~96 KB; non-matching filters pull
~1.3 MB of metadata only.
## Configs and relationships
Four overlapping views of the same data. Each child carries the parent's keys as columns, so queries chain top-down with foreign keys.
```
┌──────────────────────────────────────────────────────────────────────┐
│ previews (one row per chunk, ~500 KB low-res mp4 + scalars)          │ ← YAML default
│   row_id, match_id, map_name, round, player, chunk_index,            │
│   preview_video, primary_weapon, player_side, survived_chunk, …      │
└──────────┬───────────────────────────────────────────────────────────┘
           │ FK: row_id, match_id, round, player, chunk_index
           ▼
┌──────────────────────────────────────────────────────────────────────┐
│ chunks (one row per chunk — the heavy data)                          │
│   match_id, map_name, round, player, chunk_index,                    │
│   video, audio, inputs, worlds,                                      │
│   primary_weapon, weapons_used, player_side, survived_chunk,         │
│   damage_taken, n_damage_events, shots_fired,                        │
│   weapon_switches, distance_traveled, …                              │
└──────────▲───────────────────────────────────────────────────────────┘
           │ FK: match_id, map_name, round
           │
┌──────────┴───────────────────────────────────────────────────────────┐
│ rounds (one row per (match, map, round))                             │
│   match_id, map_name, round, demo_round,                             │
│   round_start_tick, freeze_end_tick, round_end_tick,                 │
│   round_duration_ticks                                               │
└──────────▲───────────────────────────────────────────────────────────┘
           │ FK: match_id, map_name
           │
┌──────────┴───────────────────────────────────────────────────────────┐
│ matches (one row per (match, map))                                   │
│   match_id, map_name, team1, team2, event, score, winner,            │
│   match_date, patch_version, …                                       │
└──────────────────────────────────────────────────────────────────────┘
```
| config | rows (10 TB) | total size | per-row | use case |
|---|---|---|---|---|
| `previews` | ~75 M | ~37 GB | ~500 KB | quick scrubbing UI, training-set vetting |
| `matches` | ~30 k | ~6 MB | ~200 B | filter by team / event / date / score |
| `rounds` | ~750 k | ~30–100 MB | ~150 B | filter by round duration / timing |
| `chunks` | ~75 M | ~10 TB | ~91 MB | full media, training, per-tick gameplay |
`previews` is the YAML default so:

- The Hub Dataset Viewer landing page renders inline preview videos.
- `load_dataset(REPO)` with no config arg loads previews — cheap by default.
- Full media is one explicit config away: `load_dataset(REPO, "chunks")`.
## Filesystem layout

```
README.md
data/
  match=<match_id>/map=<map>/player=<player>/
    chunks-full-<machine_id>-<uuid8>.parquet      # heavy: ~91 MB / row, full streams
    chunks-preview-<machine_id>-<uuid8>.parquet   # light: ~500 KB / row, low-res mp4
index/
  manifest-<machine_id>-<uuid8>.parquet           # match-level scalars (no media)
  rounds-<machine_id>-<uuid8>.parquet             # round-level scalars (no media)
state/
  processed/<input_metadata_stem>/<match_id>.json
  failed/...
  skipped/...
```

Both `chunks-full-*` and `chunks-preview-*` live under the same `(match, map, player)` partition path, so the hive prune (`player == N`) applies to both configs identically.
## Per-config schemas

### previews (default)

| Column | Type | Notes |
|---|---|---|
| `preview_video` | `Video()` (bytes embedded) | ~500 KB / row, 320×180 @ 8 fps + low-bitrate audio. Hub Viewer renders inline. |
| `row_id` | string | FK → `chunks.row_id` (one-to-one). |
| `match_id`, `map_name`, `round`, `player`, `chunk_index` | string / int | Hive partition + sort keys. |
| `shard_id` | string | Worker contribution id. |
| `primary_weapon`, `player_side`, `survived_chunk` | string / bool | Cheap filters — replicated from `chunks`. |
| `duration_s`, `fps`, `width`, `height` | float / int | Properties of the original clip (not the preview). |
| `uploaded_at` | timestamp[ms, UTC] | |
### matches

| Column | Type | Notes |
|---|---|---|
| `match_id`, `map_name`, `map_index` | string / int | One row per (match, map). |
| `shard_id` | string | |
| `hltv_demo_id`, `match_url` | string | Source provenance. |
| `event`, `team1`, `team2` | string | The two teams + tournament. |
| `score1`, `score2` | int32 | Final score on this map. |
| `winner`, `winner_side`, `format`, `stars` | string / int | Match-level outcome. |
| `match_date`, `patch_version`, `rounds_played` | mixed | |
| `uploaded_at` | timestamp[ms, UTC] | |
### rounds

| Column | Type | Notes |
|---|---|---|
| `match_id`, `map_name`, `round` | string / int | FK to `matches`; one row per (match, map, round). |
| `demo_round`, `shard_id` | int / string | |
| `round_start_tick`, `freeze_end_tick`, `round_end_tick` | int64 | |
| `round_duration_ticks` | int64 | Pre-computed `round_end_tick - round_start_tick`. |
| `uploaded_at` | timestamp[ms, UTC] | |
### chunks (heavy)

| Column | Type | Notes |
|---|---|---|
| `video` | `Video()` (bytes embedded) | `torchcodec` `VideoDecoder` on read. |
| `audio` | `Audio()` (bytes embedded) | `AudioDecoder` / `{array, sampling_rate, path}`. |
| `inputs` | struct of arrays | Per-tick player input. |
| `worlds` | struct of arrays | Per-tick player world state. |
| `chunk_metadata` | string | Raw JSON of the per-chunk `metadata.json` (redundant with the typed columns; kept for archival). |
| **Chunk-level summaries (filterable scalars)** | | |
| `weapons_used` | `list<string>` | Distinct weapons during the chunk. |
| `primary_weapon` | string | Most-held weapon — filter "AWP gameplays". |
| `player_side` | string | `"T"` / `"CT"` / `"unknown"` from `worlds.team_num` mode. |
| `survived_chunk` | bool | `is_alive` did not flip to `False` during this chunk. |
| `damage_taken` | int32 | Σ HP losses during the chunk. |
| `n_damage_events` | int32 | Count of HP-drop events. |
| `shots_fired` | int32 | Edge count of attack press events. |
| `weapon_switches` | int32 | Number of weapon transitions. |
| `distance_traveled` | float32 | Σ Euclidean XYZ distance (Hammer units). |
| **Identifiers** | | |
| `row_id`, `shard_id` | string | |
| `match_id`, `map_name`, `player` | string / int | Hive partition keys. |
| `map_index`, `round`, `demo_round`, `chunk_index`, `spec_slot` | int32 | |
| **Match metadata (also in `matches` config)** | | |
| `event`, `team1`, `team2`, `winner`, `winner_side`, `format`, `stars`, `match_url`, `hltv_demo_id`, `patch_version` | mixed | |
| `score1`, `score2`, `rounds_played` | int32 | |
| `match_date` | timestamp[ms, UTC] | |
| **Timing / geometry (round timing also in `rounds`)** | | |
| `start_frame`, `end_frame`, `start_tick`, `end_tick` | int64 | |
| `round_start_tick`, `freeze_end_tick`, `round_end_tick`, `death_tick` | int64 | |
| `survived_round`, `segment_start_tick`, `segment_end_tick`, `video_frames` | mixed | |
| `fps`, `duration_s` | float32 | |
| `width`, `height` | int32 | |
| **Worker provenance** | | |
| `machine_id`, `source_job_id`, `input_metadata_file` | string | |
| `machine_index`, `machine_count`, `match_slot` | int32 | |
| `uploaded_at` | timestamp[ms, UTC] | |
Rows inside each per-player parquet are sorted by `(round, chunk_index)`, which is monotonic with `start_tick` — iterating yields chronological play.
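A quick way to confirm this while streaming (a minimal sketch against the test slice; lexicographic tuple comparison suffices because `start_tick` is monotonic in `(round, chunk_index)`):

```python
from datasets import load_dataset

ds = load_dataset(
    "blanchon/cs2_dataset_render_test_small", "chunks",
    split="train", streaming=True,
    columns=["round", "chunk_index", "start_tick"],
    filters=[("match_id", "==", "2393343"), ("player", "==", 3)],
)
prev = None
for row in ds:
    key = (row["round"], row["chunk_index"], row["start_tick"])
    assert prev is None or prev <= key, "rows arrived out of chronological order"
    prev = key
```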
## Recipes — common queries by use case

> **Pattern reminder:** always pass `columns=[...]` and `filters=[...]` as kwargs to `load_dataset`, never as `IterableDataset.select_columns(...)` / `.filter(lambda …)`. The kwargs push down to the parquet reader and turn into byte-range skips at the Hub. The post-iteration form pulls 1000× more bytes.
### A. Browse / scrub

#### A1. Random preview gallery

```python
ds = load_dataset(REPO, split="train", streaming=True)  # default config = previews
for row in ds.shuffle(seed=42, buffer_size=200).take(20):
    show(row["preview_video"])  # ~500 KB pulled per preview
```
#### A2. Pretty grid of "AWP plays of player 3 on dust2"

```python
ds = load_dataset(
    REPO, split="train", streaming=True,
    columns=["preview_video", "row_id", "match_id", "round", "chunk_index"],
    filters=[
        ("map_name", "==", "de_dust2"),
        ("player", "==", 3),
        ("primary_weapon", "==", "AWP"),
    ],
)
gallery = list(ds.take(100))  # ≤ 50 MB pulled regardless of dataset size
```
### B. Train

#### B1. Streaming PyTorch loop, video + inputs, all data

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset(REPO, "chunks", split="train", streaming=True,
                  columns=["video", "inputs"])
ds = ds.with_format("torch")
for batch in DataLoader(ds, batch_size=4, num_workers=4):
    train_step(batch["video"], batch["inputs"])
```
#### B2. Train only on a subset that matches some criterion

```python
ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["video", "audio", "inputs"],
    filters=[
        ("player_side", "==", "T"),
        ("survived_chunk", "==", True),
        ("shots_fired", ">", 5),
    ],
)
```
#### B3. Per-tick analysis without media (inputs/worlds only)

```python
ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["match_id", "round", "player", "chunk_index", "inputs", "worlds"],
    filters=[("map_name", "==", "de_dust2")],
)
# ~210 KB of `inputs` + ~210 KB of `worlds` per matched row, no video bytes.
```
### C. Filter by match-level attributes

#### C1. By team name (uses the matches config)

`team1` / `team2` aren't partition keys (they would explode path combinatorics), so we resolve them via the small `matches` index first:

```python
matches = load_dataset(
    REPO, "matches", split="train",
    columns=["match_id", "team1", "team2"],
    filters=[("team1", "==", "Akimbo")],
)
match_ids = sorted({r["match_id"] for r in matches})

ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["video", "inputs"],
    filters=[("match_id", "in", match_ids), ("player", "==", 3)],
)
```
#### C2. By event / date range

```python
matches = load_dataset(
    REPO, "matches", split="train",
    columns=["match_id", "event", "match_date"],
    filters=[("event", "==", "Dust2.us Eagle Masters Series 7")],
)
```
### D. Filter by round-level attributes

#### D1. Long rounds only (>60 s of action)

```python
rounds = load_dataset(
    REPO, "rounds", split="train",
    columns=["match_id", "round", "round_duration_ticks"],
    filters=[("round_duration_ticks", ">", 64 * 60)],  # 64 tickrate × 60 s
)
keys = {(r["match_id"], r["round"]) for r in rounds}

# Note: two independent `in` filters select the cross-product of match_ids ×
# rounds — a superset of the exact (match, round) pairs in `keys`.
ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["video", "match_id", "round", "player"],
    filters=[
        ("match_id", "in", sorted({m for m, _ in keys})),
        ("round", "in", sorted({r for _, r in keys})),
    ],
)
```
### E. Three-level chained query (the full recipe)
"For every match Akimbo played, on long rounds, give me player 3's AWP chunks — and pull only the video bytes."
```python
from datasets import load_dataset

REPO = "blanchon/cs2_dataset_render_test_small"

# 1. matches: team filter
matches = load_dataset(
    REPO, "matches", split="train",
    columns=["match_id"], filters=[("team1", "==", "Akimbo")],
)
match_ids = sorted({r["match_id"] for r in matches})

# 2. rounds: timing filter, scoped by match_ids
rounds = load_dataset(
    REPO, "rounds", split="train",
    columns=["match_id", "round", "round_duration_ticks"],
    filters=[("match_id", "in", match_ids),
             ("round_duration_ticks", ">", 64 * 60)],
)
round_numbers = sorted({r["round"] for r in rounds})

# 3. chunks: gameplay filter, scoped by both
ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["video", "match_id", "round", "player", "chunk_index"],
    filters=[
        ("match_id", "in", match_ids),
        ("round", "in", round_numbers),
        ("player", "==", 3),
        ("primary_weapon", "==", "AWP"),
    ],
)
```
Each step pushes its `columns=` / `filters=` down to the parquet reader. At 10 TB scale this is single-digit MB of metadata + (matched_chunks × 50 MB) of video.
### F. Specific-row lookups

#### F1. By exact row_id (after preview pre-selection)

```python
prevs = load_dataset(REPO, split="train", streaming=True,
                     columns=["row_id", "preview_video"])
chosen = [r["row_id"] for r in user_picks_from(prevs.take(50))]

ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["video", "audio", "inputs", "worlds"],
    filters=[("row_id", "in", chosen)],
)
```
#### F2. Single (match, map, player), in chronological order

The cleanest possible query — three filters all resolve at the partition path → exactly one parquet is opened, rows are pre-sorted:

```python
ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["video", "inputs", "round", "chunk_index", "start_tick"],
    filters=[
        ("match_id", "==", "2393343"),
        ("map_name", "==", "de_ancient"),
        ("player", "==", 3),
    ],
)
# rows arrive sorted by (round, chunk_index), monotonic with start_tick
```
### G. Aggregate / summary queries

#### G1. Count chunks per primary weapon

```python
from collections import Counter

ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["primary_weapon"],
)
counts = Counter(r["primary_weapon"] for r in ds)
```
Pulls only the `primary_weapon` column (~kB of parquet metadata per matched file + tens of bytes per row) — whole-dataset aggregation costs MB, not TB.
## Training loop with PyTorch DataLoader

```python
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset

REPO = "blanchon/cs2_dataset_render_test_small"

ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["video", "inputs", "match_id", "round", "player", "chunk_index"],
    filters=[("player_side", "==", "T")],
).with_format("torch")

loader = DataLoader(ds, batch_size=4, num_workers=4, prefetch_factor=2)
for batch in loader:
    video = batch["video"]    # decoded by torchcodec, returned as a torch tensor
    inputs = batch["inputs"]  # dict of torch tensors, one per inputs subfield
    train_step(video, inputs)
```
Tips:

- Pass `columns=` / `filters=` at `load_dataset` time, not to the DataLoader.
- Use `streaming=True` for full datasets larger than local disk.
- For small datasets you want in RAM, drop `streaming=True` and the dataset materialises into an Arrow table.
## Streaming tuning for high throughput

The default fsspec block size (32 MiB) underutilises bandwidth on multi-GB shards. Bump to 128 MiB blocks + prefetch:

```python
import pyarrow as pa
import pyarrow.dataset as pads
from datasets import load_dataset

opts = pads.ParquetFragmentScanOptions(
    cache_options=pa.CacheOptions(prefetch_limit=1, range_size_limit=128 << 20)
)
ds = load_dataset(
    REPO, "chunks", split="train", streaming=True,
    columns=["video", "inputs"],
    filters=[("player", "==", 3)],
    fragment_scan_options=opts,
)
```
Roughly 2× streaming throughput on multi-GB shards.
## Bare pyarrow.dataset and DuckDB SQL

For raw-bytes / cross-config / SQL workflows, skip the `datasets` layer:

### pyarrow.dataset

```python
import pyarrow.dataset as pads
from huggingface_hub import HfFileSystem

ds = pads.dataset(
    "datasets/blanchon/cs2_dataset_render_test_small/data",
    filesystem=HfFileSystem(),
    format="parquet",
    partitioning="hive",  # match=, map=, player= → partition columns
)
table = ds.to_table(
    columns=["row_id", "video"],
    filter=(
        (pads.field("map") == "de_ancient")
        & (pads.field("player") == 3)
        & (pads.field("chunk_index") == 0)
    ),
)
for i in range(table.num_rows):
    rec = table.column("video")[i].as_py()
    with open(f"clip_{table.column('row_id')[i].as_py()}.mp4", "wb") as f:
        f.write(rec["bytes"])
```
### DuckDB

```sql
-- Browse the manifest cross-match
INSTALL httpfs;
LOAD httpfs;

-- All Akimbo matches in 2026
SELECT match_id, event, team1, team2, score1, score2, match_date
FROM 'hf://datasets/blanchon/cs2_dataset_render_test_small/index/manifest-*.parquet'
WHERE team1 = 'Akimbo' OR team2 = 'Akimbo'
ORDER BY match_date DESC;

-- Filter chunks-full directly via partition pruning + column projection
SELECT row_id, primary_weapon, shots_fired, distance_traveled
FROM 'hf://datasets/blanchon/cs2_dataset_render_test_small/data/match=2393343/**/chunks-full-*.parquet'
WHERE player = 3 AND chunk_index = 0;
```
DuckDB inherits the same partition/predicate-pushdown wins.
## Partial download via the hf CLI

```bash
# Download all clips (full + previews) for one match's de_ancient map
hf download blanchon/cs2_dataset_render_test_small --repo-type dataset \
  --include "data/match=2393343/map=de_ancient/**"

# Download only previews for the whole dataset
hf download blanchon/cs2_dataset_render_test_small --repo-type dataset \
  --include "data/**/chunks-preview-*.parquet"

# Download only the index parquets (fastest dataset-wide scan)
hf download blanchon/cs2_dataset_render_test_small --repo-type dataset \
  --include "index/*"
```
## How this is built — every optimisation we applied

A summary of the choices baked into this dataset, in the order they matter at scale.

### 1. `Video()` + `Audio()` features with embedded bytes

Per-row `video` and `audio` columns store the canonical `{bytes: binary, path: string}` struct that `datasets.Video()` / `Audio()` decode into `torchcodec.VideoDecoder` / `AudioDecoder` objects. The Hub auto-conversion strips the bytes back to URLs in the viewer-side parquet, so the Dataset Viewer renders inline while user-side `load_dataset` decodes from the original embedded bytes.
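On the producer side, a minimal sketch of building such rows (assumes a recent `datasets` release with the `Video` feature; `chunk.mp4` / `chunk.wav` are placeholder local files, not part of this repo):

```python
from datasets import Audio, Dataset, Features, Value, Video

features = Features({
    "row_id": Value("string"),
    "video": Video(),  # stored as {bytes, path}; decoded on read
    "audio": Audio(),
})
ds = Dataset.from_dict(
    {"row_id": ["example"], "video": ["chunk.mp4"], "audio": ["chunk.wav"]},
    features=features,
)
# ds.push_to_hub(...) embeds the media bytes into the parquet shards by default
```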
### 2. `inputs` / `worlds` as struct-of-arrays

Per-tick streams are nested struct-of-arrays — `row["inputs"]["view"]["pitch"]` returns a list. Vectorisation-friendly, with no extra `hf_hub_download` round-trips and no manual sidecar parsing.
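For example, one chunk's per-tick view angles vectorise directly (a sketch — `pitch` follows the subfield path above; the sibling `yaw` subfield is assumed):

```python
import numpy as np

row = next(iter(ds))  # one row from the `chunks` config
pitch = np.asarray(row["inputs"]["view"]["pitch"], dtype=np.float32)
yaw = np.asarray(row["inputs"]["view"]["yaw"], dtype=np.float32)  # assumed subfield
turn_rate = np.abs(np.diff(yaw))  # per-tick angular speed, no sidecar files involved
```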
### 3. Three-axis hive partitioning (match, map, player)

`data/match=<m>/map=<map>/player=<p>/...` — file-level pruning on these keys means non-matching parquets are never opened. A `player == 3` filter reads 1/10 of the files at scale.
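The prune is visible directly in the repo paths — a sketch using `HfFileSystem.glob`:

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
# Only player=3 directories match; other players' parquets are never even listed.
print(fs.glob(
    "datasets/blanchon/cs2_dataset_render_test_small"
    "/data/match=*/map=*/player=3/chunks-full-*.parquet"
))
```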
### 4. Sorted rows + `row_group_size=1`

Rows in each per-player parquet are sorted by `(round, chunk_index)` (monotonic with `start_tick`). With `row_group_size=1` (one row per row group) every scalar column gets exact min/max statistics — predicate pushdown skips non-matching row groups at the byte-range level. Verified: zero-match filters cost ~1.3 MB of metadata only.
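You can inspect those per-row-group stats with plain `pyarrow.parquet` (a sketch, reusing the glob above):

```python
import pyarrow.parquet as pq
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
path = fs.glob(
    "datasets/blanchon/cs2_dataset_render_test_small/data/**/chunks-full-*.parquet"
)[0]
with fs.open(path) as f:
    meta = pq.ParquetFile(f).metadata
    rg = meta.row_group(0)  # row_group_size=1 → one row per group
    for i in range(rg.num_columns):
        col = rg.column(i)
        if col.path_in_schema == "round":  # leaf column of the scalar `round`
            print(rg.num_rows, col.statistics.min, col.statistics.max)
```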
### 5. Derived chunk-level summaries (filterable scalars)

| Column | Why |
|---|---|
| `weapons_used` | List of distinct weapons during the chunk. |
| `primary_weapon` | Most-held weapon — filter "AWP gameplays". |
| `player_side` | `"T"` / `"CT"` — filter by side. |
| `survived_chunk` | Filter survival / death events. |
| `damage_taken`, `n_damage_events` | Filter clutch / engaged moments. |
| `shots_fired` | Filter active fights. |
| `weapon_switches` | Filter buy phase / repositioning. |
| `distance_traveled` | Filter mobile / static gameplay. |
All scalars → first-class predicate pushdown via row-group stats.
### 6. Four relational configs with FK relationships

`previews` → `matches` → `rounds` → `chunks`. Every child carries the parent's keys as columns, so multi-step filtering chains cheaply — see recipe E. Tiny configs answer broad questions (team, event, round duration); the heavy `chunks` config is only opened when needed.
### 7. Co-located `chunks-full-*` and `chunks-preview-*`

Both live under `data/match=…/map=…/player=…/`. The same hive prune applies to both configs identically, so a `player == 3` filter on the preview config reads exactly one ~3 MB preview parquet at scale.
### 8. Page index + Content-Defined Chunking

```python
pq.write_table(
    table, out_path,
    compression="zstd",
    row_group_size=1,
    write_statistics=True,
    write_page_index=True,              # Viewer + random access
    use_content_defined_chunking=True,  # Xet dedup at upload
)
```
CDC means re-uploading the same dataset transfers near-zero bytes; a re-rendered chunk transfers only its own ~50 MB of new mp4 plus tiny parquet diff. Verified: re-pushing a 1.16 GB parquet with one row changed transferred ~2.7 MB of "new data".
### 9. Tuned 128 MiB streaming blocks

```python
opts = pads.ParquetFragmentScanOptions(
    cache_options=pa.CacheOptions(prefetch_limit=1, range_size_limit=128 << 20)
)
```
≈ 2× throughput vs the fsspec default on multi-GB shards.
### 10. Multi-machine sharding pattern

Each worker (`machine_id` + uuid8) writes only into its own filenames — `chunks-full-<shard>.parquet`, `chunks-preview-<shard>.parquet`, `manifest-<shard>.parquet`, `rounds-<shard>.parquet`. Workers never collide on file paths; `upload_large_folder` runs in parallel for hundreds of machines. State markers under `state/{processed,failed,skipped}/...` coordinate work assignment.
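A sketch of the shard-suffix scheme (illustrative names, matching the pattern above):

```python
import uuid

def shard_suffix(machine_id: str) -> str:
    # <machine_id>-<uuid8>: unique per contribution, so parallel workers
    # can never write to the same path
    return f"{machine_id}-{uuid.uuid4().hex[:8]}"

suffix = shard_suffix("cs2-lock-cz-01")
print(f"chunks-full-{suffix}.parquet", f"chunks-preview-{suffix}.parquet")
```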
### 11. Xet storage backend

The repo is on Xet (verified via the `x-xet-hash` header and the `cas-bridge.xethub.hf.co` redirect). CDC + Xet together mean a partial re-render of one chunk uploads only the changed bytes.
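One way to check this yourself (a sketch — assumes the header is surfaced on the `resolve` endpoint, as described above):

```python
import requests

url = ("https://huggingface.co/datasets/blanchon/cs2_dataset_render_test_small"
       "/resolve/main/README.md")
r = requests.head(url, allow_redirects=True)
print("xet-backed:", any(k.lower() == "x-xet-hash" for k in r.headers))
```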
## Measured byte counts

End-to-end measurements from a fresh Docker container (no caches), with `HfFileSystem` instrumented to count every byte over the wire.

### chunks config (heavy data) — selective reads
| Operation | Bytes pulled | HTTP reads | Time |
|---|---|---|---|
| Reference: full row (1.12 GB shard, baseline) | 1,124,642,845 | 32 | 5 s |
| `columns=["video"]` (1 row) | 823,092,299 | 18 | — |
| `columns=["inputs"]` (1 row) | 566,198 | 10 | — |
| `columns=["worlds"]` (1 row) | 646,938 | 10 | — |
| `columns=[13 scalars]` | 209,763 | 10 | — |
| Filter `player=3, chunk=0`, `columns=["video"]` (1 row) | 207 MB | 10 | — |
| Filter `map=non-existent` (no rows) | 0 | 0 | — |
| `load_dataset(streaming=True, columns=["inputs"])` | 95,585 | 2 | 2.1 s |
| `load_dataset(streaming, columns + filters)` | 269,829 | 5 | 2.8 s |
| `load_dataset(streaming, player=3 chunk=0, video only)` | 42.5 MB | 6 | 6.3 s |
| `load_dataset(streaming, player=3, inputs only)` (7 rows) | 1.45 MB | 29 | 13 s |
| ❌ `load_dataset().select_columns(["inputs"])` (post-iter) | 181.5 MB | 10 | 21 s |
The post-iteration `.select_columns` form pulls 1000× more bytes — always pass projection as a `load_dataset` kwarg.

### v4 layout — derived-column filters (Docker, fresh)
| Filter (always `columns=[inputs, …]`) | Bytes | Rows |
|---|---|---|
| `primary_weapon=="AWP"` | 1.34 MB | 0 |
| `primary_weapon=="USP-S"` | 1.60 MB | 9 |
| `primary_weapon=="M9 Bayonet"` | 1.50 MB | 6 |
| `player_side=="T"` | 1.97 MB | 21 |
| `player_side=="CT"` | 2.12 MB | 28 |
| `survived_chunk==True` | 2.58 MB | 44 |
| `survived_chunk==False` | 1.50 MB | 5 |
| `shots_fired > 5` | 1.56 MB | 6 |
| `damage_taken > 0` | 1.55 MB | 6 |
| `matches` config scan (manifest) | 13.7 KB | 1 |
Zero-match filters cost ~1.3 MB of metadata only — the parquet reader
short-circuits before pulling any data pages. Filter-matched queries
cost roughly (matched_rows × per-row column size) + metadata.
## Producing the dataset (worker spec for the scale-up)

The render fleet pattern: each machine claims matches by index (`match_slot mod machine_count` — see the sketch below), runs `cs-recorder`, groups its chunks per player, generates previews, and uploads:

```
data/match=<match_id>/map=<map>/player=<player>/chunks-full-<machine>-<uuid8>.parquet
data/match=<match_id>/map=<map>/player=<player>/chunks-preview-<machine>-<uuid8>.parquet
index/manifest-<machine>-<uuid8>.parquet          # one row per (match, map)
index/rounds-<machine>-<uuid8>.parquet            # one row per (match, map, round)
state/processed/<metadata_stem>/<match_id>.json   # write LAST
```
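The claim rule itself is a one-liner (a sketch — `all_matches`, `machine_index`, and `machine_count` come from worker config; `match_slot` is the match's stable integer index):

```python
def claim(all_matches: list[dict], machine_index: int, machine_count: int) -> list[dict]:
    # Deterministic split: worker k of N takes matches with match_slot ≡ k (mod N),
    # so every match is claimed by exactly one worker — no coordinator needed.
    return [m for m in all_matches if m["match_slot"] % machine_count == machine_index]
```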
Worker checklist:

- Unique filenames per `(match, map, player)` and per shard, with `machine_id` + a short uuid in the filename.
- Sort rows by `(round, chunk_index)` before writing each per-player `chunks-full-*.parquet` (player is fixed per file).
- Write each parquet with the optimised flags:

  ```python
  pq.write_table(
      table, out_path,
      compression="zstd",
      row_group_size=1,
      write_statistics=True,
      write_page_index=True,
      use_content_defined_chunking=True,
  )
  ```

- Generate previews alongside the full mp4 (one ffmpeg pass per chunk: 320×180 @ 8 fps + low-bitrate audio + optional inputs overlay):

  ```bash
  ffmpeg -i video.mp4 -i audio.wav \
    -map 0:v:0 -map 1:a:0 \
    -vf "scale=320:180:flags=lanczos,fps=8" \
    -c:v libx264 -crf 32 -preset veryfast -pix_fmt yuv420p -movflags +faststart \
    -c:a aac -b:a 32k -ac 1 -ar 16000 \
    -y preview.mp4
  ```

- Emit all four parquets per contribution: `chunks-full-*`, `chunks-preview-*`, `manifest-*`, `rounds-*`.
- Check `state/processed/<metadata_stem>/<match_id>.json` before processing — skip if present. Upload that marker LAST so a half-done contribution isn't claimed as complete (see the marker-protocol sketch after this list).
- Use `api.upload_large_folder(...)` on the worker's full staging directory; different shard filenames never collide.
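A sketch of that state-marker protocol (the render/staging step itself is elided; the `HfApi` calls shown are standard `huggingface_hub` methods):

```python
import json
from huggingface_hub import HfApi

api = HfApi()
REPO = "blanchon/cs2_dataset_render_test_small"

def already_done(metadata_stem: str, match_id: str) -> bool:
    marker = f"state/processed/{metadata_stem}/{match_id}.json"
    return api.file_exists(REPO, marker, repo_type="dataset")

def finish(metadata_stem: str, match_id: str, staging_dir: str) -> None:
    # Parquets first, marker LAST: a crash mid-upload leaves the match unclaimed.
    api.upload_large_folder(repo_id=REPO, repo_type="dataset", folder_path=staging_dir)
    api.upload_file(
        path_or_fileobj=json.dumps({"match_id": match_id}).encode(),
        path_in_repo=f"state/processed/{metadata_stem}/{match_id}.json",
        repo_id=REPO,
        repo_type="dataset",
    )
```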
## FAQ / common pitfalls

**Q. Can I use `IterableDataset.select_columns(...)` instead of `load_dataset(columns=...)`?**

A. Don't. The former is a post-iteration projection — the parquet reader still reads full rows, then drops columns in Python. Verified: a single-row `inputs` query reads 181.5 MB via `select_columns` vs 96 KB via `load_dataset(columns=["inputs"])`. Always pass projection at `load_dataset` time.
**Q. Why is the YAML default `previews` and not `chunks`?**

A. So the Hub Dataset Viewer landing page renders ~500 KB inline videos instead of streaming 50 MB clips. Users opt in to heavy data with `load_dataset(REPO, "chunks")`.
**Q. How do I re-sort by `start_tick`?**

A. You don't need to. Within each per-player parquet, rows are already sorted by `(round, chunk_index)`, which is monotonic with `start_tick`.
**Q. Filtering by team name is slow — is that the layout's fault?**

A. `team1` / `team2` aren't partition keys. Use the `matches` config to resolve team → match_ids first (recipe C1). At 10 TB scale, a team-name query via the manifest costs ~6 MB; without it, ~600 MB of parquet footer reads.
**Q. The Hub Viewer shows my heavy clips, not the previews.**

A. Make sure the YAML has `default: true` on the `previews` config. That flag is what makes `previews` the Viewer landing page.
**Q. Can I add custom derived columns?**

A. Yes — extend the worker's `derive_chunk_summary()` with any scalar derivation. First-class scalar columns in the parquet get tight row-group stats automatically (because of `row_group_size=1`), so they become first-class filter targets.
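For instance, a hypothetical extra scalar (a sketch — this `derive_chunk_summary` body and the `worlds` velocity subfield names are illustrative, not the dataset's actual helper):

```python
import numpy as np

def derive_chunk_summary(worlds: dict) -> dict:
    # New scalar: peak speed during the chunk (Hammer units per tick).
    vel = np.stack(
        [worlds["velocity_x"], worlds["velocity_y"], worlds["velocity_z"]], axis=1
    )
    return {"max_speed": float(np.linalg.norm(vel, axis=1).max())}
```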
**Q. The per-player parquet is large (~500 MB). Can I cache it locally?**

A. Use `load_dataset(streaming=False)` (the default) for cached random access, or `streaming=True` for memory-bounded sequential access.
**Q. Re-uploads are taking forever.**

A. Make sure (1) the repo is on Xet (look for `x-xet-hash` headers on file URLs), and (2) the writer uses `use_content_defined_chunking=True`. With both enabled, re-uploading a 1.16 GB parquet with one chunk changed transfers ~2.7 MB.
Built with ❤️ for 10 TB scale. Questions / issues → open a discussion on the Hub repo.