# RoboInter-Data: LeRobot v2.1 Format (Actions + Annotations + Videos)

The primary data format of RoboInter-Data. It contains robot actions, camera observations, and rich intermediate-representation annotations in LeRobot v2.1 format (parquet + MP4 videos), ready for policy training. In particular, for DROID we compute the delta EEF (gripper) action rather than the original joint-velocity or base-frame Cartesian action.
| Sub-dataset | Source | Robot | Episodes | Frames | Tasks | Image Size | Raw Image Size |
|---|---|---|---|---|---|---|---|
| lerobot_droid_anno | DROID | Franka + Robotiq | 152,986 | 46,259,014 | 43,026 | 320 x 180 | 640 x 360 |
| lerobot_rh20t_anno | RH20T | Multiple | 82,894 | 40,755,632 | 146 | 320 x 180 | 640 x 360 |
Both datasets share fps=10, chunks_size=1000, and the same annotation schema.
## Directory Layout

```
lerobot_droid_anno/ (or lerobot_rh20t_anno/)
├── meta/
│   ├── info.json             # Dataset metadata (fps, features, shapes, etc.)
│   ├── episodes.jsonl        # Per-episode info (one JSON object per line)
│   ├── episodes_stats.jsonl  # Per-episode statistics
│   └── tasks.jsonl           # Task/instruction mapping
├── data/
│   └── chunk-{NNN}/          # Parquet data chunks (1,000 episodes per chunk)
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       └── ...
└── videos/
    └── chunk-{NNN}/
        ├── observation.images.primary/
        │   └── episode_{NNNNNN}.mp4
        └── observation.images.wrist/
            └── episode_{NNNNNN}.mp4
```
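For orientation, here is a minimal sketch (plain Python, with an illustrative path) that inspects the metadata and locates an episode's parquet file without the dataloader; the key names follow the standard LeRobot v2.1 meta schema:

```python
import json
from pathlib import Path

root = Path("path/to/lerobot_droid_anno")  # illustrative path

# info.json holds global metadata: fps, feature schema, chunks_size, etc.
info = json.loads((root / "meta" / "info.json").read_text())
print(info["fps"])  # 10

# episodes.jsonl has one JSON object per episode (index, length, tasks, ...).
with open(root / "meta" / "episodes.jsonl") as f:
    episodes = [json.loads(line) for line in f]
print(f"{len(episodes)} episodes")

# Episodes are grouped 1,000 per chunk, so chunk id = episode_index // 1000.
ep = episodes[0]["episode_index"]
parquet_path = root / "data" / f"chunk-{ep // 1000:03d}" / f"episode_{ep:06d}.parquet"
print(parquet_path)
```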
## Data Fields

### Core Fields (Shared by DROID & RH20T)

| Field | Shape | Type | Description |
|---|---|---|---|
| action | (7,) | float64 | Delta EEF action: [delta_x, delta_y, delta_z, delta_rx, delta_ry, delta_rz, gripper_command] |
| state | (7,) | float64 | EEF state: [x, y, z, rx, ry, rz, gripper_state] |
| observation.images.primary | (180, 320, 3) | video (H.264) | Primary camera RGB video |
| observation.images.wrist | (180, 320, 3) | video (H.264) | Wrist camera RGB video |
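To make the action convention concrete, here is a hedged sketch of how a delta EEF action might be integrated onto the current state. It assumes rx/ry/rz are "xyz" Euler angles and that the rotational delta composes on the left in the base frame; neither convention is stated above, so verify against the source data before relying on it (requires scipy, which is not in the installation list below):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def apply_delta_eef(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Integrate action = [dx, dy, dz, drx, dry, drz, gripper_command]
    onto state = [x, y, z, rx, ry, rz, gripper_state].

    Assumptions (not confirmed by the dataset docs): "xyz" Euler angles,
    left-composition of the rotational delta in the base frame.
    """
    next_state = state.copy()
    next_state[:3] += action[:3]  # translational delta
    rot = R.from_euler("xyz", action[3:6]) * R.from_euler("xyz", state[3:6])
    next_state[3:6] = rot.as_euler("xyz")  # rotational delta
    next_state[6] = action[6]  # gripper command becomes the next gripper state
    return next_state
```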
### Metadata Fields (Shared)

| Field | Type | Description |
|---|---|---|
| episode_name | string | Unique episode identifier, e.g. "3072_exterior_image_1_left" |
| camera_view | string | Camera perspective, e.g. "exterior_image_1_left" |
| task | string | Task language description (via task_index -> tasks.jsonl) |
| episode_index | int64 | Episode index in dataset |
| frame_index | int64 | Frame index within episode |
| timestamp | float32 | Timestamp in seconds (frame_index / fps) |
| index | int64 | Global frame index across all episodes |
| task_index | int64 | Index into tasks.jsonl |
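The task string is resolved through task_index; a minimal lookup sketch, assuming the standard LeRobot v2.1 tasks.jsonl schema of one {"task_index": ..., "task": ...} object per line:

```python
import json
from pathlib import Path

root = Path("path/to/lerobot_droid_anno")

# Build the task_index -> instruction mapping from tasks.jsonl.
with open(root / "meta" / "tasks.jsonl") as f:
    tasks = {row["task_index"]: row["task"] for row in map(json.loads, f)}

print(tasks[0])  # language instruction for task_index 0
```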
### Other Information Fields — DROID Only

lerobot_droid_anno contains the following additional fields from the original DROID dataset:

| Field | Shape | Type | Description |
|---|---|---|---|
| other_information.language_instruction_2 | (1,) | string | Alternative language instruction (source 2) |
| other_information.language_instruction_3 | (1,) | string | Alternative language instruction (source 3) |
| other_information.action_delta_tcp_pose | (7,) | float64 | Delta TCP pose action: [dx, dy, dz, drx, dry, drz, gripper] |
| other_information.action_delta_wrist_pose | (7,) | float64 | Delta wrist pose action: [dx, dy, dz, drx, dry, drz, gripper] |
| other_information.action_tcp_pose | (7,) | float64 | Absolute TCP pose: [x, y, z, rx, ry, rz, gripper] |
| other_information.action_wrist_pose | (7,) | float64 | Absolute wrist pose: [x, y, z, rx, ry, rz, gripper] |
| other_information.action_gripper_velocity | (1,) | float64 | Gripper velocity |
| other_information.action_joint_position | (7,) | float64 | Joint position action: [j1..j7] |
| other_information.action_joint_velocity | (7,) | float64 | Joint velocity action: [j1..j7] |
| other_information.action_cartesian_velocity | (6,) | float64 | Cartesian velocity: [vx, vy, vz, wx, wy, wz] |
| other_information.observation_joint_position | (7,) | float64 | Observed joint positions: [j1..j7] |
| other_information.observation_gripper_position | (1,) | float64 | Observed gripper position |
| other_information.observation_gripper_open_state | (1,) | float64 | Gripper open state |
| other_information.observation_gripper_pose6d | (6,) | float64 | Gripper 6D pose: [x, y, z, rx, ry, rz] |
| other_information.observation_tcp_pose6d | (6,) | float64 | TCP 6D pose: [x, y, z, rx, ry, rz] |
| other_information.is_first | (1,) | bool | First frame flag |
| other_information.is_last | (1,) | bool | Last frame flag |
| other_information.is_terminal | (1,) | bool | Terminal state flag |
### Other Information Fields — RH20T Only

lerobot_rh20t_anno contains the following additional fields from the original RH20T dataset:

| Field | Shape | Type | Description |
|---|---|---|---|
| other_information.action_delta_tcp_pose | (7,) | float64 | Delta TCP pose action: [dx, dy, dz, drx, dry, drz, gripper] |
| other_information.action_tcp_pose | (7,) | float64 | Absolute TCP pose: [x, y, z, rx, ry, rz, gripper] |
| other_information.gripper_command | (1,) | float64 | Gripper command |
| other_information.observation_joint_position | (14,) | float64 | Observed joint positions: [j1..j14] |
| other_information.observation_gripper_open_state | (1,) | float64 | Gripper open state |
| other_information.observation_gripper_pose6d | (6,) | float64 | Gripper 6D pose: [x, y, z, rx, ry, rz] |
| other_information.tcp_camera | (7,) | float64 | TCP in camera frame: [x, y, z, qx, qy, qz, qw] |
| other_information.tcp_base | (7,) | float64 | TCP in base frame: [x, y, z, qx, qy, qz, qw] |
| other_information.gripper | (1,) | string | Gripper metadata (JSON) |
| other_information.is_first | (1,) | bool | First frame flag |
| other_information.is_last | (1,) | bool | Last frame flag |
| other_information.is_terminal | (1,) | bool | Terminal state flag |
Key difference: DROID has 7-DoF joint positions and richer action representations (wrist pose, joint/cartesian velocities). RH20T has 14-DoF joint positions, TCP transforms in camera/base frames, and gripper metadata JSON.
### Annotation Fields (Shared by DROID & RH20T)

All annotation fields are prefixed with annotation. and stored as JSON strings; a parsing sketch follows the table below. An empty string "" means no annotation is available for that frame.

| Field | Format | Description |
|---|---|---|
| annotation.time_clip | [[start, end], ...] | Subtask temporal segments (frame ranges) |
| annotation.instruction_add | string | Structured task language instruction |
| annotation.substask | string | Current subtask description |
| annotation.primitive_skill | string | Primitive skill label (pick, place, push, twist, etc.) |
| annotation.segmentation | string | Segmentation reference (path) |
| annotation.object_box | [[x1, y1], [x2, y2]] | Manipulated object bounding box |
| annotation.placement_proposal | [[x1, y1], [x2, y2]] | Target placement bounding box |
| annotation.trace | [[x, y], ...] | Future 10-frame gripper trajectory waypoints |
| annotation.gripper_box | [[x1, y1], [x2, y2]] | Gripper bounding box |
| annotation.contact_frame | int / -1 | Frame index when the gripper contacts the object (-1 = contact already in the past) |
| annotation.state_affordance | [x, y, z, rx, ry, rz] | 6D EEF state at the contact frame |
| annotation.affordance_box | [[x1, y1], [x2, y2]] | Gripper bounding box at the contact frame |
| annotation.contact_points | [x, y] | Contact point in pixel coordinates |
| annotation.origin_shape | [h, w] | Original image resolution for coordinate reference |
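Since every non-empty annotation value is a JSON string, a minimal parsing sketch (reading one frame straight from an illustrative parquet file with pyarrow) looks like this:

```python
import json
import pyarrow.parquet as pq

table = pq.read_table("path/to/lerobot_droid_anno/data/chunk-000/episode_000000.parquet")
frame = table.slice(0, 1).to_pylist()[0]  # first frame of the episode

def parse_annotation(value: str):
    """Decode one annotation field; "" means no annotation for this frame."""
    return json.loads(value) if value else None

object_box = parse_annotation(frame["annotation.object_box"])  # [[x1, y1], [x2, y2]] or None
trace = parse_annotation(frame["annotation.trace"])            # [[x, y], ...] or None
```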
#### Bounding Box Format

All bounding boxes use pixel coordinates with the origin at the top-left:

```
[[x1, y1], [x2, y2]]  // [top-left, bottom-right]
```
#### Trace Format

10 future waypoints for gripper trajectory prediction:

```
[[110, 66], [112, 68], [115, 70], [118, 72], [120, 75], [122, 78], [125, 80], [128, 82], [130, 85], [132, 88]]
```
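Annotation pixel coordinates refer to annotation.origin_shape, which can differ from the stored 320 x 180 frames. The sketch below rescales a box and a trace and draws them with OpenCV; the box, trace, and blank frame are illustrative stand-ins:

```python
import cv2
import numpy as np

def rescale_points(points, origin_shape, target_shape):
    """Map [x, y] pixel coordinates from origin_shape (h, w) to target_shape (h, w)."""
    (oh, ow), (th, tw) = origin_shape, target_shape
    pts = np.asarray(points, dtype=np.float64)
    pts[..., 0] *= tw / ow  # x scales with width
    pts[..., 1] *= th / oh  # y scales with height
    return pts.round().astype(np.int32)

frame = np.zeros((180, 320, 3), dtype=np.uint8)  # stand-in for a decoded frame
box = rescale_points([[110, 66], [180, 140]], origin_shape=(360, 640), target_shape=(180, 320))
trace = rescale_points([[110, 66], [112, 68], [115, 70]], (360, 640), (180, 320))

cv2.rectangle(frame, tuple(map(int, box[0])), tuple(map(int, box[1])), (0, 255, 0), 1)
cv2.polylines(frame, [trace.reshape(-1, 1, 2)], isClosed=False, color=(255, 0, 0), thickness=1)
```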
### Q_Annotation Fields (Quality Indicators, Shared)

Each annotation has a corresponding quality indicator prefixed with Q_annotation.:

| Field | Values | Description |
|---|---|---|
| Q_annotation.instruction_add | "Primary" / "Secondary" / "" | Instruction quality |
| Q_annotation.substask | "Primary" / "Secondary" / "" | Subtask quality |
| Q_annotation.primitive_skill | "Primary" / "Secondary" / "" | Primitive skill quality |
| Q_annotation.segmentation | "Primary" / "Secondary" / "" | Segmentation quality |
| Q_annotation.object_box | "Primary" / "Secondary" / "" | Object box quality |
| Q_annotation.placement_proposal | "Primary" / "Secondary" / "" | Placement proposal quality |
| Q_annotation.trace | "Primary" / "Secondary" / "" | Trace quality |
| Q_annotation.gripper_box | "Primary" / "Secondary" / "" | Gripper box quality |
| Q_annotation.contact_frame | "Primary" / "Secondary" / "" | Contact frame quality |
| Q_annotation.state_affordance | "Primary" / "Secondary" / "" | State affordance quality |
| Q_annotation.affordance_box | "Primary" / "Secondary" / "" | Affordance box quality |
| Q_annotation.contact_points | "Primary" / "Secondary" / "" | Contact points quality |
- Primary: High-confidence annotation
- Secondary: Acceptable quality, may have minor errors
- "" (empty): No annotation available
## Quick Start

The dataloader code is at RoboInterData/lerobot_dataloader.

### Installation

```bash
pip install numpy torch pyarrow av opencv-python
```

### Basic Usage

```python
from lerobot_dataloader import create_dataloader

dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    batch_size=32,
    action_horizon=16,
)

for batch in dataloader:
    images = batch["observation.images.primary"]  # (B, H, W, 3)
    actions = batch["action"]                     # (B, 16, 7)
    trace = batch["annotation.trace"]             # Parsed JSON lists
    skill = batch["annotation.primitive_skill"]   # List of strings
    break
```
### Multiple Datasets (DROID + RH20T)

```python
from lerobot_dataloader import create_dataloader

dataloader = create_dataloader(
    [
        "path/to/lerobot_droid_anno",
        "path/to/lerobot_rh20t_anno",
    ],
    batch_size=32,
    action_horizon=16,
)

for batch in dataloader:
    print(batch["dataset_name"])  # Source dataset identifier
    break
```
## Data Filtering

### Frame Range Filtering

Remove idle frames at episode start/end using range_nop.json:

```python
from lerobot_dataloader import create_dataloader

dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    range_nop_path="path/to/range_nop.json",
)
```

Format of range_nop.json:

```json
{
    "3072_exterior_image_1_left": [12, 217, 206]
}
```

Each value is [start_frame, end_frame, valid_length]; frames outside this range are idle/stationary.
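If you are not using the dataloader, the same filter is easy to apply by hand. A sketch, assuming end_frame is inclusive (verify against the data) and that episodes absent from the file are kept in full:

```python
import json

with open("path/to/range_nop.json") as f:
    range_nop = json.load(f)

def is_active_frame(episode_name: str, frame_index: int) -> bool:
    """True if the frame lies inside the non-idle range for this episode."""
    if episode_name not in range_nop:
        return True  # no range recorded: keep the whole episode
    start_frame, end_frame, _valid_length = range_nop[episode_name]
    return start_frame <= frame_index <= end_frame  # assumes inclusive end

print(is_active_frame("3072_exterior_image_1_left", 100))  # True
print(is_active_frame("3072_exterior_image_1_left", 5))    # False (idle lead-in)
```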
### Q_Annotation Filtering

Select episodes by annotation quality:

```python
from lerobot_dataloader import create_dataloader, QAnnotationFilter

# Only Primary quality
dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    q_filters=[
        QAnnotationFilter("Q_annotation.instruction_add", ["Primary"]),
        QAnnotationFilter("Q_annotation.gripper_box", ["Primary"]),
    ],
)

# Any non-empty annotation
dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    q_filters=[
        QAnnotationFilter("Q_annotation.trace", ["not_empty"]),
    ],
)
```
### Combined Filtering

```python
from lerobot_dataloader import FilterConfig, QAnnotationFilter, create_dataloader

config = FilterConfig(
    range_nop_path="path/to/range_nop.json",
    q_filters=[
        QAnnotationFilter("Q_annotation.trace", ["Primary", "Secondary"]),
    ],
    q_filter_mode="all",  # "all" = AND, "any" = OR
)

dataloader = create_dataloader("path/to/lerobot_droid_anno", filter_config=config)
```
## Transforms

```python
from lerobot_dataloader import (
    Compose, Normalize, ResizeImages, ToTensorImages, LeRobotDataset, create_dataloader,
)
from lerobot_dataloader.transforms import compute_stats

# Compute normalization stats
dataset = LeRobotDataset("path/to/lerobot_droid_anno", load_videos=False)
stats = compute_stats(dataset)

# Create transform pipeline
transform = Compose([
    ResizeImages(height=224, width=224),
    ToTensorImages(),  # (H,W,C) uint8 -> (C,H,W) float32
    Normalize(stats),
])

dataloader = create_dataloader("path/to/lerobot_droid_anno", transform=transform)
```
## Direct Dataset Access

```python
from lerobot_dataloader import LeRobotDataset
from lerobot_dataloader.transforms import ParseAnnotations

dataset = LeRobotDataset(
    "path/to/lerobot_droid_anno",
    transform=ParseAnnotations(),
)

print(f"Total frames: {len(dataset)}")
print(f"Total episodes: {dataset.num_episodes}")
print(f"FPS: {dataset.fps}")

sample = dataset[0]
print(f"Action: {sample['action']}")
print(f"Object box: {sample['annotation.object_box']}")
print(f"Skill: {sample['annotation.primitive_skill']}")
```
## Format Conversion

The LeRobot v2.1 format was converted from the original data plus LMDB annotations using the conversion scripts linked under Related Resources (convert_to_lerobot).
## Related Resources
| Resource | Link |
|---|---|
| RoboInter-Data (parent dataset) | HuggingFace |
| RoboInter Project | GitHub |
| DataLoader Code | lerobot_dataloader |
| Conversion Scripts | convert_to_lerobot |
| Demo Visualizer | RoboInterData-Demo |
| DROID Dataset | droid-dataset.github.io |
| RH20T Dataset | rh20t.github.io |
## License
Please refer to the original dataset licenses for RoboInter, DROID, and RH20T.