# DROID RGB Dataloader (Standalone)
Minimal, self-contained PyTorch loader for DROID video (RGB only). Pairs with our preprocessed DROID mirror:

Dataset: `SeonghuJeon/droid-1.0.1-preprocessed`
**Do NOT use `lerobot/droid_1.0.1` directly.** That HF release has:

- Broken `from_timestamp`/`to_timestamp` fields in `meta/episodes/*.parquet` — UNIX-epoch drift of up to ~54 years, so any naive video seek lands on garbage.
- 30,815 episodes (32%) with missing or truncated mp4 files that throw random `av.error.*` exceptions deep into training.

Our preprocessed mirror bakes in the timestamp fix and ships our blacklist + non-idle range sidecars. Same LeRobot v3.0 layout, drop-in replacement.
## What this repo (`droid-rgb-loader`) ships

| File | Purpose |
|---|---|
| `droid_rgb_dataset.py` | `DroidRGBDataset`, ~275 LoC; only deps are torch/numpy/pandas/pyarrow/PyAV |
| `example_usage.py` | End-to-end sanity check + DataLoader example |
| `_stats/droid_blacklist_eps.json` | 30,815 episodes to skip |
| `_stats/droid_nonidle_ranges.json` | Per-episode `{start, end}` frame range where the arm is moving |

The stats files are also mirrored inside `droid-1.0.1-preprocessed` for convenience, but you can feed either copy to the loader via `stats_dir=`.
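For orientation, here is a minimal sketch of consuming the two stats sidecars, assuming the blacklist file is a flat JSON list of episode indices and the ranges file maps episode index to `{start, end}` (both schemas are our assumptions; the sample data is hypothetical):

```python
import json

# Hypothetical sample data standing in for the two sidecar files:
#   droid_blacklist_eps.json  -> flat list of episode indices to skip
#   droid_nonidle_ranges.json -> {"<episode_index>": {"start": int, "end": int}}
blacklist_json = '[7, 42]'
nonidle_json = '{"0": {"start": 31, "end": 180}, "1": {"start": 12, "end": 95}}'

blacklist = set(json.loads(blacklist_json))
nonidle = {int(k): (v["start"], v["end"])
           for k, v in json.loads(nonidle_json).items()}

def usable(ep: int) -> bool:
    """Episode survives the blacklist and has a non-empty non-idle range."""
    if ep in blacklist or ep not in nonidle:
        return False  # blacklisted, or missing range => treated as fully idle
    start, end = nonidle[ep]
    return end > start

print(usable(0))  # not blacklisted, range (31, 180) -> True
print(usable(7))  # blacklisted -> False
```

This mirrors the filtering the loader applies when `apply_blacklist` and `apply_nonidle` are enabled.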
## Step 1 — Download the preprocessed DROID dataset

```shell
HF_XET_HIGH_PERFORMANCE=1 \
hf download SeonghuJeon/droid-1.0.1-preprocessed \
  --repo-type dataset \
  --local-dir /path/to/droid-1.0.1-preprocessed
```

This is ~1.2 TB (all three cameras + framecache sidecars + fixed meta). A video-only download (skip the framecache, keep raw mp4 + meta) is smaller:

```shell
hf download SeonghuJeon/droid-1.0.1-preprocessed \
  --repo-type dataset \
  --local-dir /path/to/droid-1.0.1-preprocessed \
  --include "meta/*" "data/*" "videos/**/*.mp4"
```

If you only want one camera:

```shell
hf download SeonghuJeon/droid-1.0.1-preprocessed \
  --repo-type dataset \
  --local-dir /path/to/droid-1.0.1-preprocessed \
  --include "meta/*" "data/*" "videos/observation.images.exterior_2_left/**/*.mp4"
```
## Step 2 — Install dependencies

```shell
pip install torch numpy pandas pyarrow av pillow
```

`av` is PyAV, the Python binding for ffmpeg. It is the only decode dependency — no decord, no torchcodec, no torchvision `VideoReader`.
## Step 3 — Get this loader repo

```shell
hf download SeonghuJeon/droid-rgb-loader --repo-type dataset --local-dir ./droid-rgb-loader
cd droid-rgb-loader
```
## Step 4 — Run the example

```shell
python example_usage.py \
  --root /path/to/droid-1.0.1-preprocessed \
  --n-frames 8 --stride 3 --num-workers 2
```

Expected output:

```
usable episodes: 63908
exterior: shape=(2, 8, 224, 224, 3) dtype=torch.uint8
wrist: shape=(2, 8, 224, 224, 3) dtype=torch.uint8
```

If `usable episodes` is much lower than ~64k, something is wrong with your paths or stats dir.
## Using DroidRGBDataset in your own code

```python
from droid_rgb_dataset import DroidRGBDataset
from torch.utils.data import DataLoader

ds = DroidRGBDataset(
    root="/path/to/droid-1.0.1-preprocessed",
    stats_dir="./_stats",
    camera_keys=("observation.images.exterior_2_left",
                 "observation.images.wrist_left"),
    n_frames=16,            # window length
    stride=1,               # 1 = 15 Hz, 3 = 5 Hz
    image_size=(224, 224),
    apply_blacklist=True,   # mandatory
    apply_nonidle=True,     # skip idle pre/post-roll
)
loader = DataLoader(ds, batch_size=4, num_workers=4, shuffle=True)

for batch in loader:
    ext = batch["observation.images.exterior_2_left"]  # (B, T, H, W, 3) uint8
    ...
```

Each `__getitem__` picks a random valid window from the i-th usable episode. `__len__` returns the number of usable episodes. Iterate the DataLoader multiple times per epoch if you want more diverse windows than one per episode.
## Why the blacklist is non-negotiable

DROID on the original HF release has ~30,815 episodes (32%) whose mp4 files are missing, truncated, or fail to decode, yet which remain silently present in the metadata parquets. We rebuilt the blacklist by walking every `observation.images.*` mp4 on disk; we re-encode nothing — we just skip those episodes. The preprocessed mirror on `SeonghuJeon/droid-1.0.1-preprocessed` only contains video files we verified decode end-to-end, but the blacklist is still applied at dataloader time as a belt-and-suspenders check.
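The verification pass can be sketched as follows. This is not the repo's actual script — the function name, the `episode_to_mp4` mapping, and the error handling are ours — but it shows the criterion: a file counts as good only if every frame decodes.

```python
def decodes_end_to_end(path: str) -> bool:
    """Return True only if every frame of the mp4 decodes without error."""
    try:
        import av  # PyAV, the repo's only decode dependency
        with av.open(path) as container:
            stream = container.streams.video[0]
            for _ in container.decode(stream):
                pass  # decoding each frame is the check itself
        return True
    except Exception:
        # Missing/truncated/corrupt files surface here as av.error.* exceptions.
        return False

# A blacklist is then just the episodes whose files fail, e.g. (hypothetical):
# blacklist = sorted(ep for ep, path in episode_to_mp4.items()
#                    if not decodes_end_to_end(path))
```

Decoding every frame is deliberately stricter than opening the container: truncated files often open fine and only fail partway through, which is exactly the failure mode that would otherwise surface mid-training.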
## Why apply_nonidle matters

DROID was collected via VR teleop, and episodes start and end with the operator sitting still (several seconds of no-op frames). `droid_nonidle_ranges.json` gives the `[start, end)` frame range where the arm is actually moving, so you don't waste training capacity on motionless pre-roll. We built it from gripper + cartesian velocity thresholds.

88,097 of the non-blacklisted episodes have a non-empty non-idle range. Episodes missing from this file are treated as fully idle and dropped by the loader.
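A hedged sketch of how such a range can be derived from velocity thresholds. The threshold values and the exact rule (first/last frame where either signal exceeds its threshold) are our assumptions for illustration, not the parameters used to build the shipped file:

```python
import numpy as np

def nonidle_range(cart_vel: np.ndarray, grip_vel: np.ndarray,
                  cart_thresh: float = 0.01, grip_thresh: float = 0.005):
    """Half-open [start, end) frame range where either signal exceeds its
    threshold, or None when the whole episode is idle.

    cart_vel: (T, D) cartesian velocities; grip_vel: (T,) gripper velocity.
    Thresholds are illustrative only.
    """
    moving = (np.abs(cart_vel).max(axis=1) > cart_thresh) | \
             (np.abs(grip_vel) > grip_thresh)
    idx = np.flatnonzero(moving)
    if idx.size == 0:
        return None  # fully idle -> episode would be dropped by the loader
    return int(idx[0]), int(idx[-1]) + 1
```

For example, an episode that only moves during frames 3-6 yields `(3, 7)`, and an all-zeros episode yields `None`, matching the "missing range means fully idle" convention above.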
## Camera keys

DROID stores three exterior views. We strongly recommend:

- `observation.images.exterior_2_left` (primary third-person view)
- `observation.images.wrist_left` (wrist-mounted)

Do NOT use `observation.images.exterior_1_left`: it is mis-calibrated or occluded in a large fraction of episodes. We excluded it in all our runs.
## File layout (LeRobot v3.0)

```
<root>/
  meta/info.json
  meta/episodes/chunk-NNN/file-NNN.parquet            <- FIXED timestamps (mirror only)
  data/chunk-NNN/file-NNN.parquet
  videos/<camera_key>/chunk-NNN/file-NNN.mp4
  videos/<camera_key>/chunk-NNN/file-NNN.framecache/  (optional speedup, mirror only)
```

One mp4 holds many episodes. Each episode's window inside the mp4 is given by `videos/<key>/from_timestamp` in the episode meta parquet. The loader seeks there and decodes `n_frames` frames at the requested stride.
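That seek-and-decode step can be sketched roughly as below. `read_window` and its signature are hypothetical, not the loader's API, and the sketch glosses over keyframe alignment: a real PyAV seek lands on the keyframe at or before the target, so a production loader must also skip forward to the exact first frame.

```python
import numpy as np

def window_frame_indices(start: int, n_frames: int, stride: int) -> list:
    """Episode-relative frame indices covered by one training window."""
    return [start + i * stride for i in range(n_frames)]

def read_window(mp4_path: str, from_timestamp: float,
                start: int, n_frames: int, stride: int) -> np.ndarray:
    """Seek to an episode's offset inside a shared mp4 and decode one window.

    `from_timestamp` is the episode's offset in seconds (from the episode
    meta parquet); `start` is relative to the episode's first frame.
    """
    import av  # imported lazily so the index helper works without PyAV

    wanted = set(window_frame_indices(start, n_frames, stride))
    last = max(wanted)
    frames = []
    # One container per call, as in the loader: PyAV is not fork-safe.
    with av.open(mp4_path) as container:
        stream = container.streams.video[0]
        # PyAV seeks in stream.time_base units, not seconds.
        container.seek(int(from_timestamp / stream.time_base), stream=stream)
        for i, frame in enumerate(container.decode(stream)):
            if i in wanted:
                frames.append(frame.to_ndarray(format="rgb24"))
            if i >= last:
                break
    return np.stack(frames)  # (n_frames, H, W, 3) uint8
```

Opening and closing a fresh container per call is the same safe-by-default choice the loader makes for multi-worker use.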
## Performance notes

- PyAV seek + decode: ~10-30 ms per window on a local SSD. Fast enough for training at `num_workers=8+`.
- Stride 3 = 5 Hz matches the effective control rate most DROID policies use (the native 15 Hz is over-sampled for slow VR teleop data).
- Framecache: the preprocessed mirror ships `.framecache` sidecars next to each mp4 (one pre-decoded JPEG per frame). This loader does not use them, but if you want roughly 3× faster random-window reads, ask us for the framecache reader — it's not included here to keep the code standalone.
- Multi-worker: PyAV containers are not fork-safe, so the loader opens and closes a container per `__getitem__`. This is the safe default.
## Known limitations

- No language instructions are returned (DROID has them in the data parquet; add a column read in `_build_index` if you need them).
- No action / state is returned. If you need them later, the spec lives in `meta/info.json` → `features`, and the relevant columns are in `data/chunk-NNN/file-NNN.parquet`, keyed by `(episode_index, frame_index)`.
- Windows are sampled randomly per episode at `__getitem__` time, not exhaustively enumerated; `len(ds) == num_usable_episodes`, not the number of windows.
## Contact

Built by Seonghu Jeon. If you hit a broken mp4 that's not in the blacklist, or a non-idle range that looks wrong, please send us the episode index so we can update the stats.