
PointWorld-BEHAVIOR

Dataset Description:

PointWorld-BEHAVIOR is the packaged BEHAVIOR-derived annotation release used for training and evaluating the 3D world model PointWorld. It contains precomputed 3D annotations derived from BEHAVIOR simulation episodes, organized as episode-level HDF5 files that store robot state, camera parameters, initial RGB-D observations, and rigid-body scene geometry annotations.

This Hugging Face repository hosts the packaged release, not the original raw BEHAVIOR episodes or the prebuilt WebDataset shards. After download, restore the packaged archives to the canonical HDF5 layout. From there, the files can be used directly as 3D annotations, or fed through the PointWorld repository workflow for integrity checking, conversion, training, and evaluation.

This dataset is for research and development only.

Dataset Owner(s):

NVIDIA Corporation

Dataset Creation Date:

04/15/2026

License/Terms of Use:

NVIDIA License

Intended Usage:

This release is intended for research and development in world modeling, 3D vision, and robotic manipulation.

It is intended for two common use cases:

  1. Using the restored data with the released PointWorld codebase, and
  2. Using the restored 3D annotations directly, for example to access depth, camera poses, visibility masks, or tracked 3D point trajectories without using the PointWorld training pipeline.

Dataset Characterization

Data Collection Method: Synthetic

Labeling Method: Synthetic

Dataset Format

The release is distributed in two layers:

  1. Packaged Hugging Face download artifacts:
    • Split compressed archives package.tar.zst.part-*
    • Restore helper script recover_dataset_from_parts.sh
  2. Restored dataset contents:
    • Episode-level HDF5 files (.hdf5)

Restored canonical layout:

behavior/
  flows/
    task-0000/
      episode_*.hdf5
    task-0001/
      episode_*.hdf5
    ...

Local conversion to WebDataset train/test shards is supported by the PointWorld repository tooling.

Dataset Quantification

Counts for the current release package, inspected from the restored release tree:

  • 7,842 episode-level episode_*.hdf5 files
  • 50 canonical task directories in the restored layout
  • 43 populated task directories in the current packaged release
  • Restored size on disk: approx. 1.55 TB
  • Packaged download size: approx. 718 GB

Each episode file contains multiple clip groups keyed as "{start}:{end}", so the corpus contains substantially more clips than files.

Measurement of Total Data Storage: 718 GB

Reference(s):

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with the applicable terms of service, developers should work with their internal developer teams to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report quality issues, risks, security vulnerabilities, or NVIDIA AI concerns here.

Packaged Release Layout

The packaged release is sharded by task. Typical archive paths are:

  • behavior/flows/task-0000/package.tar.zst.part-*
  • behavior/flows/task-0001/package.tar.zst.part-*

Restoring The Packaged Release

huggingface-cli download nvidia/PointWorld-BEHAVIOR \
  --repo-type dataset \
  --local-dir /path/to/PointWorld-BEHAVIOR

cd /path/to/PointWorld-BEHAVIOR

bash recover_dataset_from_parts.sh \
  --packages . \
  --out /path/to/pointworld_behavior_restored \
  --threads 0

After restoration, the canonical dataset root is:

/path/to/pointworld_behavior_restored/behavior

Use With The Released PointWorld Code

For end-to-end instructions on integrity checking, HDF5-to-WebDataset conversion, training, and evaluation, please refer to the PointWorld GitHub repository.

Direct Use Of The 3D Annotations

If you only want the annotations and do not plan to use the PointWorld training code, the restored HDF5 files can be read directly.

Each BEHAVIOR file is an episode-level HDF5 file under:

behavior/flows/task-XXXX/episode_YYYYYYYY.hdf5

Clip groups are stored under keys of the form "{start}:{end}".
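
Because clip keys are strings, a plain lexicographic sort can misorder them (e.g. "100:200" sorts before "20:80" as text). A minimal sketch of iterating clips in temporal order instead; the helper name `sorted_clip_keys` is illustrative, not part of the release:

```python
def sorted_clip_keys(f):
    """Return clip group keys sorted numerically by start frame.

    `f` is an open h5py.File (or any mapping with string keys);
    clip groups are keyed "{start}:{end}".
    """
    keys = [k for k in f.keys() if ":" in k]
    return sorted(keys, key=lambda k: int(k.split(":")[0]))

# Typical use:
# with h5py.File(path, "r") as f:
#     for clip_key in sorted_clip_keys(f):
#         clip = f[clip_key]
```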

Per-clip non-camera datasets

Typical robot-state fields include:

  • world_to_robot: (4, 4) float32
  • base_pose: (T, 7) float32
  • left_gripper_open: (T, 1) bool
  • left_gripper_pose: (T, 7) float32
  • left_is_grasping: (T, 1) bool
  • right_gripper_open: (T, 1) bool
  • right_gripper_pose: (T, 7) float32
  • right_is_grasping: (T, 1) bool
  • joint_names: (J,) string array
  • joint_positions: (T, J) float32
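
The robot-state fields above can be collected into plain numpy arrays with a small helper. This is a sketch, not release code: `read_robot_state` is an illustrative name, and `clip` is assumed to be an open h5py group for one "{start}:{end}" clip:

```python
import numpy as np

def read_robot_state(clip):
    """Gather per-clip robot-state fields into numpy arrays and Python strings."""
    names = clip["joint_names"][:]
    return {
        "world_to_robot": np.asarray(clip["world_to_robot"][:]),          # (4, 4)
        "base_pose": np.asarray(clip["base_pose"][:]),                    # (T, 7)
        "left_gripper_pose": np.asarray(clip["left_gripper_pose"][:]),    # (T, 7)
        "right_gripper_pose": np.asarray(clip["right_gripper_pose"][:]),  # (T, 7)
        "left_is_grasping": np.asarray(clip["left_is_grasping"][:], dtype=bool),
        "right_is_grasping": np.asarray(clip["right_is_grasping"][:], dtype=bool),
        # HDF5 string storage may yield bytes; decode to Python str
        "joint_names": [n.decode() if isinstance(n, bytes) else str(n) for n in names],
        "joint_positions": np.asarray(clip["joint_positions"][:]),        # (T, J)
    }
```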

Per-camera groups

Each clip contains camera groups such as:

  • camera_head
  • camera_left
  • camera_right

Each camera group contains:

  • initial_rgb: (1,) HDF5 object dataset containing JPEG bytes
  • initial_depth: (180, 320) uint16 depth in millimeters
  • intrinsic: (3, 3) float32
  • extrinsic: (4, 4) float32
  • extrinsic_trajectory: (T, 4, 4) float32
  • local_scene_points: group mapping mesh_name -> (N_i, 3) float16
  • local_scene_colors: group mapping mesh_name -> (N_i, 3) uint8
  • local_scene_normals: group mapping mesh_name -> (N_i, 3) int8
  • scene_mesh_trajectories: group mapping mesh_name -> (T, 7) float32

Unlike PointWorld-DROID, PointWorld-BEHAVIOR does not store one dense scene_flows array for the whole scene. Instead, it stores mesh-local point samples plus a rigid pose trajectory for each mesh over time. This is more storage-efficient for simulated rigid-body scenes while still allowing exact reconstruction of per-frame scene geometry.

Reconstructing Per-Frame Scene Geometry

To recover per-frame points for a given camera and timestep:

  1. Read local_scene_points[mesh_name]
  2. Read the mesh pose from scene_mesh_trajectories[mesh_name][t]
  3. Apply the pose to the local points
  4. Concatenate meshes if you want a single scene point cloud

The same mesh keys are shared across:

  • local_scene_points
  • local_scene_colors
  • local_scene_normals
  • scene_mesh_trajectories
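
The reconstruction steps above can be sketched in code. The layout of the 7-dim pose is an assumption here: translation (tx, ty, tz) followed by a unit quaternion in (qx, qy, qz, qw) order; verify the convention against the PointWorld repository before relying on it. Both helper names are illustrative:

```python
import numpy as np

def quat_to_rotmat(q):
    # Unit quaternion -> 3x3 rotation matrix.
    # ASSUMPTION: (qx, qy, qz, qw) component ordering.
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ], dtype=np.float32)

def mesh_points_at_t(local_points, pose_traj, t):
    # local_points: (N, 3) mesh-local samples (float16 in storage; cast up).
    # pose_traj:    (T, 7) per-mesh trajectory, assumed [tx, ty, tz, qx, qy, qz, qw].
    trans, quat = pose_traj[t, :3], pose_traj[t, 3:]
    R = quat_to_rotmat(quat)
    return local_points.astype(np.float32) @ R.T + trans
```

Concatenating `mesh_points_at_t(...)` over all mesh keys yields a single scene point cloud for timestep t.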

Reading The Files In Python

import io
import h5py
import numpy as np
from PIL import Image


def decode_jpeg_object(entry) -> np.ndarray:
    """Decode a JPEG stored as an HDF5 object/bytes entry into an (H, W, 3) uint8 RGB array."""
    if isinstance(entry, np.ndarray):
        jpeg_bytes = entry.astype(np.uint8, copy=False).tobytes()
    elif isinstance(entry, (bytes, bytearray, memoryview)):
        jpeg_bytes = bytes(entry)
    elif hasattr(entry, "tobytes"):
        jpeg_bytes = entry.tobytes()
    else:
        jpeg_bytes = bytes(entry)
    return np.array(Image.open(io.BytesIO(jpeg_bytes)).convert("RGB"))


path = "/path/to/pointworld_behavior_restored/behavior/flows/task-0000/episode_00000000.hdf5"

with h5py.File(path, "r") as f:
    # Clip groups are keyed "{start}:{end}"; pick the first clip
    clip_key = sorted([k for k in f.keys() if ":" in k])[0]
    clip = f[clip_key]
    cam = clip["camera_head"]

    # JPEG bytes -> (H, W, 3) uint8 RGB; uint16 millimeters -> float32 meters
    rgb = decode_jpeg_object(cam["initial_rgb"][0])
    depth_m = cam["initial_depth"][:].astype(np.float32) / 1000.0

    # Mesh-local geometry and the per-mesh rigid pose trajectory
    mesh_name = sorted(cam["local_scene_points"].keys())[0]
    local_points = cam["local_scene_points"][mesh_name][:].astype(np.float32)
    local_colors = cam["local_scene_colors"][mesh_name][:]
    local_normals = cam["local_scene_normals"][mesh_name][:].astype(np.float32) / 127.0
    mesh_pose_traj = cam["scene_mesh_trajectories"][mesh_name][:].astype(np.float32)

Data Notes

  • initial_rgb decodes to standard RGB image values in [0, 255].
  • initial_depth is stored as uint16 millimeters. Convert to meters with depth_mm.astype(np.float32) / 1000.0.
  • local_scene_normals are stored as int8 using [-127, 127] quantization. A typical decode is normals_i8.astype(np.float32) / 127.0.
  • scene_mesh_trajectories stores per-mesh pose trajectories over time; the same mesh names index local_scene_points, local_scene_colors, and local_scene_normals.
  • HDF5 internal compression is transparent to readers after restoration; no extra decompression step is needed in user code.
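
The initial_depth and intrinsic fields together give a camera-frame point cloud via standard pinhole backprojection. The card does not ship such a helper; this is a sketch (`backproject_depth` is an illustrative name) assuming the intrinsic matrix uses the usual [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] layout:

```python
import numpy as np

def backproject_depth(depth_m, K):
    # Pinhole backprojection: pixel (u, v) with depth z maps to the
    # camera-frame point ((u - cx) * z / fx, (v - cy) * z / fy, z).
    h, w = depth_m.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)  # (H, W, 3) camera-frame points
    return points[z > 0]                   # drop invalid zero-depth pixels

# Usage with the fields above (depth is stored as uint16 millimeters):
# depth_m = cam["initial_depth"][:].astype(np.float32) / 1000.0
# pts = backproject_depth(depth_m, cam["intrinsic"][:])
```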

Citation

If you use this dataset, please cite the PointWorld paper and the original BEHAVIOR dataset.

@article{huang2026pointworld,
  title={PointWorld: Scaling 3D World Models for In-The-Wild Robotic Manipulation},
  author={Huang, Wenlong and Chao, Yu-Wei and Mousavian, Arsalan and Liu, Ming-Yu and Fox, Dieter and Mo, Kaichun and Li, Fei-Fei},
  journal={arXiv preprint arXiv:2601.03782},
  year={2026}
}