PointWorld-DROID
Dataset Description:
PointWorld-DROID is the packaged DROID-derived annotation release used for training and evaluating the 3D world model, PointWorld. It contains precomputed 3D annotations derived from the official DROID dataset, including episode-level 3D point trajectories, optimized camera metadata, optional downsampled depth, and the released expert confidence artifact used by the PointWorld DROID evaluation pipeline.
This dataset is for research and development only.
This Hugging Face repository hosts the packaged release, not the original raw DROID scenes and not prebuilt WebDataset shards. After download, users should restore the packaged archives to the canonical HDF5/JSON layout. From there, they can either use the files directly as 3D annotations or follow the PointWorld repository workflow for integrity checking, conversion, training, and evaluation.
Dataset Owner(s):
NVIDIA Corporation
Dataset Creation Date:
04/15/2026
License/Terms of Use:
NVIDIA License
Intended Usage:
Research and development for world modeling, 3D vision, and robotic manipulation.
This release is intended for two common use cases:
- Using the restored data with the released PointWorld codebase, and
- Using the restored 3D annotations directly, for example to access depth, camera poses, visibility masks, or tracked 3D point trajectories without using the PointWorld training pipeline.
Dataset Characterization
Data Collection Method
- Hybrid: Human, Automatic/Sensors
Labeling Method
- Automatic/Sensors
Dataset Format
The release is distributed in two layers:
- packaged Hugging Face download artifacts:
- split compressed archives package.tar.zst.part-*
- restore helper script recover_dataset_from_parts.sh
- restored dataset contents:
- episode-level HDF5 files (.h5)
- camera metadata JSON sidecars (.json)
- one auxiliary confidence HDF5 file for released DROID evaluation
Restored canonical layout:
droid/
  cameras/
    *_cameras.json
  confidence/
    expert_confidence-seed=42.h5
  depth_320x180/
    *_depth.h5
  flows-fs-optimized/
    *_flows.h5
Local conversion to WebDataset train/test shards is supported by the PointWorld repository tooling.
Dataset Quantification
Counts for the current release, inspected from the restored release tree:
- 42,935 episode-level *_flows.h5 files
- 42,935 episode-level *_depth.h5 files
- 42,935 camera JSON sidecars
- 1 released expert-confidence HDF5 file
- restored size on disk: about 4.65 TB
- packaged download size: about 3.91 TB
Each *_flows.h5 file contains multiple clip groups keyed as "{start}:{end}", so the corpus contains substantially more clips than files.
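Because clip groups are keyed as "{start}:{end}", plain lexicographic sorting can misorder them (e.g. "112:176" sorts before "48:112"). A minimal sketch of numeric clip-key enumeration, using an in-memory HDF5 file with hypothetical clip ranges rather than a real episode file:

```python
import h5py

# Build a small in-memory HDF5 file mimicking the documented layout:
# clip groups keyed "{start}:{end}" (ranges here are hypothetical).
f = h5py.File("episode_flows.h5", "w", driver="core", backing_store=False)
for key in ["112:176", "0:64", "48:112"]:
    f.create_group(key)

# Parse the start frame and sort numerically instead of lexicographically.
clip_keys = sorted(
    (k for k in f.keys() if ":" in k),
    key=lambda k: int(k.split(":")[0]),
)
print(clip_keys)  # ['0:64', '48:112', '112:176']
f.close()
```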
Measurement of Total Data Storage: 3.91 TB
Reference(s):
- Project Website: https://point-world.github.io/
- Paper: https://arxiv.org/abs/2601.03782
- Code: https://github.com/NVlabs/PointWorld
- Original DROID Dataset: https://droid-dataset.github.io/
- Planned release location: NVIDIA Hugging Face PointWorld-DROID
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with the applicable terms of service, developers should work with their internal developer teams to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
Packaged Release Layout
Archive groups in the packaged release:
- droid/flows-fs-optimized/package.tar.zst.part-*
- droid/depth_320x180/package.tar.zst.part-*
- droid/cameras/package.tar.zst.part-*
- droid/confidence/package.tar.zst.part-*
In the inspected packaged build, these resolved to:
- 545 parts for flows-fs-optimized
- 252 parts for depth_320x180
- 1 part for cameras
- 1 part for confidence
Restoring The Packaged Release
huggingface-cli download nvidia/PointWorld-DROID \
    --repo-type dataset \
    --local-dir /path/to/PointWorld-DROID

cd /path/to/PointWorld-DROID
bash recover_dataset_from_parts.sh \
    --packages . \
    --out /path/to/pointworld_droid_restored \
    --threads 0
After restoration, the canonical dataset root is:
/path/to/pointworld_droid_restored/droid
Use With The Released PointWorld Code
For end-to-end instructions on integrity checking, HDF5-to-WebDataset conversion, training, and evaluation, please use the PointWorld GitHub repository.
Direct Use Of The 3D Annotations
If you only want the annotations and do not plan to use the PointWorld training code, the restored files can be read directly.
*_flows.h5
Each DROID flow file is an episode-level HDF5 file. Clip groups are stored under keys of the form "{start}:{end}".
Per-clip robot-state datasets typically include:
- gripper_open: (T, 1) bool
- gripper_pose: (T, 7) float32
- gripper_positions: (T,) float32
- joint_positions: (T, 7) float32
- optional arrays such as joint_velocities and joint_torques
Each camera group, such as camera_20103212_ext, contains:
- initial_rgb: (1,) HDF5 object dataset containing JPEG bytes
- initial_depth: (180, 320) uint16 depth in millimeters
- intrinsic: (3, 3) float32
- extrinsic: (4, 4) float32
- scene_flows: (T, N, 3) float16
- scene_colors: (T, N, 3) uint8
- scene_normals: (T, N, 3) int8
- scene_visibility: (T, N) bool
- scene_depth_valid_mask: (T, N) bool
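Given the per-camera intrinsic and initial_depth datasets above, the depth map can be back-projected into camera-frame 3D points. A sketch under the documented shapes, with synthetic stand-ins (the depth values and intrinsic entries below are hypothetical, not taken from the release):

```python
import numpy as np

# Synthetic stand-ins: a 180x320 uint16 depth map in millimeters and a
# 3x3 pinhole intrinsic matrix (values are illustrative only).
H, W = 180, 320
depth_mm = np.full((H, W), 1000, dtype=np.uint16)  # flat plane at 1 m
K = np.array([[260.0, 0.0, 160.0],
              [0.0, 260.0, 90.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)

# Back-project every pixel to camera-frame 3D: X = z * K^-1 @ [u, v, 1]^T.
z = depth_mm.astype(np.float32) / 1000.0            # millimeters -> meters
u, v = np.meshgrid(np.arange(W), np.arange(H))
pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float32)  # (H, W, 3)
points = z[..., None] * (pix @ np.linalg.inv(K).T)                   # (H, W, 3)

print(points.shape)  # (180, 320, 3)
```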
Important notes:
- DROID flow camera names ending in _ext are external cameras.
- Wrist-camera depth may exist in *_depth.h5, but wrist-camera point flows are not annotated in flows-fs-optimized.
- In these files, scene_flows stores tracked 3D point positions over time.
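Since scene_flows stores tracked 3D positions rather than motion vectors, frame-to-frame motion is a difference of consecutive positions, and each displacement is only trustworthy when the point is visible in both frames. A sketch on synthetic arrays with the documented dtypes and shapes (T frames, N points; the values are random stand-ins):

```python
import numpy as np

# Synthetic stand-ins matching the documented layout.
T, N = 4, 5
rng = np.random.default_rng(0)
scene_flows = rng.standard_normal((T, N, 3)).astype(np.float16)  # 3D positions
scene_visibility = rng.random((T, N)) > 0.3                      # bool mask

# Positions, not deltas: displacement is a difference of consecutive frames.
positions = scene_flows.astype(np.float32)
displacement = positions[1:] - positions[:-1]          # (T-1, N, 3)

# Only trust a displacement when the point is visible in both frames.
valid = scene_visibility[1:] & scene_visibility[:-1]   # (T-1, N)
mean_motion = (
    np.linalg.norm(displacement, axis=-1)[valid].mean() if valid.any() else 0.0
)
print(displacement.shape, valid.shape)
```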
*_depth.h5
Each depth file stores per-frame downsampled depth at 320x180. Camera groups typically contain:
- depth: (F, 180, 320) uint16
- timestamps: (F,) int64
The optional depth package is useful for analysis and downstream research. It is not intended to replace the full raw-to-annotation generation pipeline if your goal is to reproduce PointWorld's annotation generation from raw DROID data.
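The uint16-millimeter convention above can be applied directly when reading a depth stack. A sketch on a synthetic (F, 180, 320) array rather than a real *_depth.h5 file, treating zero as missing depth per the data notes below:

```python
import numpy as np

# Synthetic depth stack with the documented layout: (F, 180, 320) uint16
# millimeters, with zeros marking missing measurements.
depth_mm = np.zeros((2, 180, 320), dtype=np.uint16)
depth_mm[:, 60:120, 100:200] = 1500  # a region at 1.5 m

depth_m = depth_mm.astype(np.float32) / 1000.0  # millimeters -> meters
valid = depth_mm > 0                            # treat zero as missing depth

print(depth_m[valid].min(), depth_m[valid].max())  # 1.5 1.5
```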
*_cameras.json
Each JSON sidecar stores optimized camera metadata for one episode and includes fields such as:
- uuid
- scene_path
- optimization_success
- optimization_summary
- one entry per camera serial containing optimized_extrinsics
The released camera JSONs are retained for optimization-success episodes only.
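The exact sidecar schema is best confirmed against a restored file; the sketch below assumes only the fields listed above, with a hypothetical camera serial and the 4x4 extrinsics stored as nested lists:

```python
import json
import numpy as np

# Hypothetical sidecar content following the fields listed above.
sidecar = json.loads("""
{
  "uuid": "episode-0001",
  "scene_path": "droid/flows-fs-optimized/episode-0001_flows.h5",
  "optimization_success": true,
  "optimization_summary": {"final_error": 0.01},
  "20103212": {"optimized_extrinsics": [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]}
}
""")

# Per-camera entries are keyed by serial; skip the episode-level fields.
episode_fields = {"uuid", "scene_path", "optimization_success", "optimization_summary"}
extrinsics = {
    serial: np.asarray(entry["optimized_extrinsics"], dtype=np.float32)
    for serial, entry in sidecar.items()
    if serial not in episode_fields
}
print(extrinsics["20103212"].shape)  # (4, 4)
```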
expert_confidence-seed=42.h5
This file is an auxiliary DROID evaluation artifact used by the released PointWorld DROID test pipeline. If you are not reproducing the PointWorld filtered DROID metrics, you can ignore it.
Reading The Files In Python
import io
import h5py
import numpy as np
from PIL import Image
def decode_jpeg_object(entry) -> np.ndarray:
if isinstance(entry, np.ndarray):
jpeg_bytes = entry.astype(np.uint8, copy=False).tobytes()
elif isinstance(entry, (bytes, bytearray, memoryview)):
jpeg_bytes = bytes(entry)
elif hasattr(entry, "tobytes"):
jpeg_bytes = entry.tobytes()
else:
jpeg_bytes = bytes(entry)
return np.array(Image.open(io.BytesIO(jpeg_bytes)).convert("RGB"))
path = "/path/to/pointworld_droid_restored/droid/flows-fs-optimized/example_flows.h5"
with h5py.File(path, "r") as f:
clip_key = sorted([k for k in f.keys() if ":" in k])[0]
clip = f[clip_key]
cam_key = sorted([k for k in clip.keys() if k.startswith("camera_")])[0]
cam = clip[cam_key]
rgb = decode_jpeg_object(cam["initial_rgb"][0])
depth_m = cam["initial_depth"][:].astype(np.float32) / 1000.0
scene_flows = cam["scene_flows"][:].astype(np.float32)
scene_colors = cam["scene_colors"][:]
scene_normals = cam["scene_normals"][:].astype(np.float32) / 127.0
scene_visibility = cam["scene_visibility"][:]
scene_depth_valid_mask = cam["scene_depth_valid_mask"][:]
Data Notes
- initial_rgb decodes to standard RGB image values in [0, 255].
- initial_depth and depth stacks are stored as uint16 millimeters. Convert to meters with depth_mm.astype(np.float32) / 1000.0.
- In tested DROID samples, initial_depth included zero-valued pixels, so downstream code should treat zero as potentially missing depth where appropriate.
- scene_normals are stored as int8 using [-127, 127] quantization. A typical decode is normals_i8.astype(np.float32) / 127.0.
Citation
If you use this dataset, please cite the PointWorld paper and the original DROID dataset.
@article{huang2026pointworld,
  title={PointWorld: Scaling 3D World Models for In-The-Wild Robotic Manipulation},
  author={Huang, Wenlong and Chao, Yu-Wei and Mousavian, Arsalan and Liu, Ming-Yu and Fox, Dieter and Mo, Kaichun and Li, Fei-Fei},
  journal={arXiv preprint arXiv:2601.03782},
  year={2026}
}