# RoboInter-Data: LeRobot v2.1 Format (Actions + Annotations + Videos)
The primary data format of [RoboInter-Data](https://huggingface.co/datasets/InternRobotics/RoboInter-Data). Contains robot **actions**, camera **observations**, and rich **intermediate representation annotations** in [LeRobot v2.1](https://github.com/huggingface/lerobot) format (parquet + MP4 videos), ready for policy training. In particular, for DROID we provide delta EEF (gripper) actions instead of the original joint-velocity or base-frame Cartesian actions (a sketch of this conversion appears below the table).
| Sub-dataset | Source | Robot | Episodes | Frames | Tasks | Image Size | Raw Image Size |
|-------------|--------|-------|----------|--------|-------|------------|-------|
| `lerobot_droid_anno` | [DROID](https://droid-dataset.github.io/) | Franka + Robotiq | 152,986 | 46,259,014 | 43,026 | 320 x 180 | 640 x 360 |
| `lerobot_rh20t_anno` | [RH20T](https://rh20t.github.io/) | Multiple | 82,894 | 40,755,632 | 146 | 320 x 180 | 640 x 360 |
Both datasets share `fps=10`, `chunks_size=1000`, and the same annotation schema.
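Conceptually, each delta EEF action is the pose difference between consecutive frames plus the gripper command. A minimal sketch of this idea, assuming an `xyz` Euler convention for the rotational part (the exact convention used by the conversion scripts may differ):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def delta_eef_action(pose_t, pose_t1, gripper_command):
    """Illustrative delta EEF action between two absolute EEF poses.

    pose_t, pose_t1: [x, y, z, rx, ry, rz] at frames t and t+1.
    NOTE: the Euler convention ("xyz") is an assumption for this
    sketch; see the conversion scripts for the exact definition.
    """
    d_pos = np.asarray(pose_t1[:3]) - np.asarray(pose_t[:3])
    # Relative rotation from frame t to frame t+1, as Euler angles.
    r_t = R.from_euler("xyz", pose_t[3:6])
    r_t1 = R.from_euler("xyz", pose_t1[3:6])
    d_rot = (r_t.inv() * r_t1).as_euler("xyz")
    return np.concatenate([d_pos, d_rot, [gripper_command]])
```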
## Directory Layout
```
lerobot_droid_anno/  (or lerobot_rh20t_anno/)
├── meta/
│   ├── info.json             # Dataset metadata (fps, features, shapes, etc.)
│   ├── episodes.jsonl        # Per-episode info (one JSON object per line)
│   ├── episodes_stats.jsonl  # Per-episode statistics
│   └── tasks.jsonl           # Task/instruction mapping
├── data/
│   └── chunk-{NNN}/          # Parquet data chunks (1,000 episodes per chunk)
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       └── ...
└── videos/
    └── chunk-{NNN}/
        ├── observation.images.primary/
        │   └── episode_{NNNNNN}.mp4
        └── observation.images.wrist/
            └── episode_{NNNNNN}.mp4
```
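The `meta/` files are plain JSON/JSONL and can be inspected without the dataloader; a minimal sketch (path illustrative, field names per the LeRobot v2.1 convention):

```python
import json
from pathlib import Path

root = Path("path/to/lerobot_droid_anno")

# Dataset-level metadata: fps, feature schema, shapes, counts.
info = json.loads((root / "meta" / "info.json").read_text())
print(info["fps"], info["total_episodes"])

# One JSON object per line, one line per episode.
with open(root / "meta" / "episodes.jsonl") as f:
    episodes = [json.loads(line) for line in f]
print(episodes[0])
```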
---
## Data Fields
### Core Fields (Shared by DROID & RH20T)
| Field | Shape | Type | Description |
|-------|-------|------|-------------|
| `action` | (7,) | float64 | Delta EEF action: [delta_x, delta_y, delta_z, delta_rx, delta_ry, delta_rz, gripper_command] |
| `state` | (7,) | float64 | EEF state: [x, y, z, rx, ry, rz, gripper_state] |
| `observation.images.primary` | (180, 320, 3) | video (H.264) | Primary camera RGB video |
| `observation.images.wrist` | (180, 320, 3) | video (H.264) | Wrist camera RGB video |
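Each episode's frames live in a single parquet file, so the core fields can be read directly with `pyarrow`; a minimal sketch (path illustrative):

```python
import pyarrow.parquet as pq

table = pq.read_table(
    "path/to/lerobot_droid_anno/data/chunk-000/episode_000000.parquet"
)
print(table.num_rows, table.column_names)

# `action` is stored per frame as a length-7 vector.
actions = table.column("action").to_pylist()
print(actions[0])  # [dx, dy, dz, drx, dry, drz, gripper_command]
```

Note that video observations are not stored in the parquet rows; they live in the sibling `videos/` MP4s and are decoded by the dataloader.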
### Metadata Fields (Shared)
| Field | Type | Description |
|-------|------|-------------|
| `episode_name` | string | Unique episode identifier, e.g. `"3072_exterior_image_1_left"` |
| `camera_view` | string | Camera perspective, e.g. `"exterior_image_1_left"` |
| `task` | string | Task language description (via `task_index` -> `tasks.jsonl`) |
| `episode_index` | int64 | Episode index in dataset |
| `frame_index` | int64 | Frame index within episode |
| `timestamp` | float32 | Timestamp in seconds (`frame_index / fps`) |
| `index` | int64 | Global frame index across all episodes |
| `task_index` | int64 | Index into `tasks.jsonl` |
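The `task` string is resolved through `task_index`; a minimal sketch of the lookup (paths illustrative):

```python
import json
import pyarrow.parquet as pq

# tasks.jsonl maps task_index -> language instruction.
with open("path/to/lerobot_droid_anno/meta/tasks.jsonl") as f:
    tasks = {rec["task_index"]: rec["task"] for rec in map(json.loads, f)}

table = pq.read_table(
    "path/to/lerobot_droid_anno/data/chunk-000/episode_000000.parquet"
)
print(tasks[table.column("task_index")[0].as_py()])
```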
---
### Other Information Fields — DROID Only
`lerobot_droid_anno` contains the following additional fields from the original DROID dataset:
| Field | Shape | Type | Description |
|-------|-------|------|-------------|
| `other_information.language_instruction_2` | (1,) | string | Alternative language instruction (source 2) |
| `other_information.language_instruction_3` | (1,) | string | Alternative language instruction (source 3) |
| `other_information.action_delta_tcp_pose` | (7,) | float64 | Delta TCP pose action: [dx, dy, dz, drx, dry, drz, gripper] |
| `other_information.action_delta_wrist_pose` | (7,) | float64 | Delta wrist pose action: [dx, dy, dz, drx, dry, drz, gripper] |
| `other_information.action_tcp_pose` | (7,) | float64 | Absolute TCP pose: [x, y, z, rx, ry, rz, gripper] |
| `other_information.action_wrist_pose` | (7,) | float64 | Absolute wrist pose: [x, y, z, rx, ry, rz, gripper] |
| `other_information.action_gripper_velocity` | (1,) | float64 | Gripper velocity |
| `other_information.action_joint_position` | (7,) | float64 | Joint position action: [j1..j7] |
| `other_information.action_joint_velocity` | (7,) | float64 | Joint velocity action: [j1..j7] |
| `other_information.action_cartesian_velocity` | (6,) | float64 | Cartesian velocity: [vx, vy, vz, wx, wy, wz] |
| `other_information.observation_joint_position` | (7,) | float64 | Observed joint positions: [j1..j7] |
| `other_information.observation_gripper_position` | (1,) | float64 | Observed gripper position |
| `other_information.observation_gripper_open_state` | (1,) | float64 | Gripper open state |
| `other_information.observation_gripper_pose6d` | (6,) | float64 | Gripper 6D pose: [x, y, z, rx, ry, rz] |
| `other_information.observation_tcp_pose6d` | (6,) | float64 | TCP 6D pose: [x, y, z, rx, ry, rz] |
| `other_information.is_first` | (1,) | bool | First frame flag |
| `other_information.is_last` | (1,) | bool | Last frame flag |
| `other_information.is_terminal` | (1,) | bool | Terminal state flag |
### Other Information Fields — RH20T Only
`lerobot_rh20t_anno` contains the following additional fields from the original RH20T dataset:
| Field | Shape | Type | Description |
|-------|-------|------|-------------|
| `other_information.action_delta_tcp_pose` | (7,) | float64 | Delta TCP pose action: [dx, dy, dz, drx, dry, drz, gripper] |
| `other_information.action_tcp_pose` | (7,) | float64 | Absolute TCP pose: [x, y, z, rx, ry, rz, gripper] |
| `other_information.gripper_command` | (1,) | float64 | Gripper command |
| `other_information.observation_joint_position` | (14,) | float64 | Observed joint positions: [j1..j14] |
| `other_information.observation_gripper_open_state` | (1,) | float64 | Gripper open state |
| `other_information.observation_gripper_pose6d` | (6,) | float64 | Gripper 6D pose: [x, y, z, rx, ry, rz] |
| `other_information.tcp_camera` | (7,) | float64 | TCP in camera frame: [x, y, z, qx, qy, qz, qw] |
| `other_information.tcp_base` | (7,) | float64 | TCP in base frame: [x, y, z, qx, qy, qz, qw] |
| `other_information.gripper` | (1,) | string | Gripper metadata (JSON) |
| `other_information.is_first` | (1,) | bool | First frame flag |
| `other_information.is_last` | (1,) | bool | Last frame flag |
| `other_information.is_terminal` | (1,) | bool | Terminal state flag |
> **Key difference:** DROID has 7-DoF joint positions and richer action representations (wrist pose, joint/cartesian velocities). RH20T has 14-DoF joint positions, TCP transforms in camera/base frames, and gripper metadata JSON.
---
### Annotation Fields (Shared by DROID & RH20T)
All annotation fields are prefixed with `annotation.` and stored as JSON strings. Empty string `""` means no annotation is available for that frame.
| Field | Format | Description |
|-------|--------|-------------|
| `annotation.time_clip` | `[[start, end], ...]` | Subtask temporal segments (frame ranges) |
| `annotation.instruction_add` | string | Structured task language instruction |
| `annotation.substask` | string | Current subtask description |
| `annotation.primitive_skill` | string | Primitive skill label (pick, place, push, twist, etc.) |
| `annotation.segmentation` | string | Segmentation reference (path) |
| `annotation.object_box` | `[[x1, y1], [x2, y2]]` | Manipulated object bounding box |
| `annotation.placement_proposal` | `[[x1, y1], [x2, y2]]` | Target placement bounding box |
| `annotation.trace` | `[[x, y], ...]` | Future 10-frame gripper trajectory waypoints |
| `annotation.gripper_box` | `[[x1, y1], [x2, y2]]` | Gripper bounding box |
| `annotation.contact_frame` | int / -1 | Frame index at which the gripper contacts the object (-1 if contact has already occurred) |
| `annotation.state_affordance` | `[x, y, z, rx, ry, rz]` | 6D EEF state at contact frame |
| `annotation.affordance_box` | `[[x1, y1], [x2, y2]]` | Gripper bounding box at contact frame |
| `annotation.contact_points` | `[x, y]` | Contact point in pixel coordinates |
| `annotation.origin_shape` | `[h, w]` | Original image resolution for coordinate reference |
#### Bounding Box Format
All bounding boxes use pixel coordinates with origin at top-left:
```json
[[x1, y1], [x2, y2]] // [top-left, bottom-right]
```
#### Trace Format
10 future waypoints for gripper trajectory prediction:
```json
[[110, 66], [112, 68], [115, 70], [118, 72], [120, 75], [122, 78], [125, 80], [128, 82], [130, 85], [132, 88]]
```
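Since every `annotation.*` field is a JSON string (`""` when absent), decoding and coordinate rescaling are one-liners. A minimal sketch, assuming box/trace coordinates are expressed at the `annotation.origin_shape` resolution and need rescaling to the stored 320 x 180 frames:

```python
import json

def parse_annotation(value):
    """Decode an `annotation.*` JSON string; '' means no annotation."""
    return json.loads(value) if value else None

def rescale_points(points, origin_shape, target_hw=(180, 320)):
    """Rescale [[x, y], ...] from origin_shape (h, w) to target_hw (h, w)."""
    sy = target_hw[0] / origin_shape[0]
    sx = target_hw[1] / origin_shape[1]
    return [[x * sx, y * sy] for x, y in points]

box = parse_annotation('[[100, 50], [220, 140]]')  # e.g. annotation.object_box
print(rescale_points(box, origin_shape=[360, 640]))
# [[50.0, 25.0], [110.0, 70.0]]
```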
---
### Q_Annotation Fields (Quality Indicators, Shared)
Each annotation has a corresponding quality indicator prefixed with `Q_annotation.`:
| Field | Values | Description |
|-------|--------|-------------|
| `Q_annotation.instruction_add` | `"Primary"` / `"Secondary"` / `""` | Instruction quality |
| `Q_annotation.substask` | `"Primary"` / `"Secondary"` / `""` | Subtask quality |
| `Q_annotation.primitive_skill` | `"Primary"` / `"Secondary"` / `""` | Primitive skill quality |
| `Q_annotation.segmentation` | `"Primary"` / `"Secondary"` / `""` | Segmentation quality |
| `Q_annotation.object_box` | `"Primary"` / `"Secondary"` / `""` | Object box quality |
| `Q_annotation.placement_proposal` | `"Primary"` / `"Secondary"` / `""` | Placement proposal quality |
| `Q_annotation.trace` | `"Primary"` / `"Secondary"` / `""` | Trace quality |
| `Q_annotation.gripper_box` | `"Primary"` / `"Secondary"` / `""` | Gripper box quality |
| `Q_annotation.contact_frame` | `"Primary"` / `"Secondary"` / `""` | Contact frame quality |
| `Q_annotation.state_affordance` | `"Primary"` / `"Secondary"` / `""` | State affordance quality |
| `Q_annotation.affordance_box` | `"Primary"` / `"Secondary"` / `""` | Affordance box quality |
| `Q_annotation.contact_points` | `"Primary"` / `"Secondary"` / `""` | Contact points quality |
- **Primary**: High-confidence annotation
- **Secondary**: Acceptable quality, may have minor errors
- **""** (empty): No annotation available
---
## Download & Extract
The `data/` and `videos/` directories are distributed as `.tar` archives (one per chunk) to reduce the number of files during transfer. After downloading, extract them in place:
```bash
cd Annotation_with_action_lerobotv21
for dataset in lerobot_droid_anno lerobot_rh20t_anno; do
  for subdir in data videos; do
    cd "${dataset}/${subdir}"
    for f in *.tar; do tar xf "$f" && rm "$f"; done
    cd ../..
  done
done
```
After extraction, each `data/` will contain `chunk-000/`, `chunk-001/`, ... with `.parquet` files, and each `videos/` will contain `chunk-000/`, `chunk-001/`, ... with `.mp4` files. The `meta/` directories are ready to use without extraction.
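A quick sanity check after extraction is to count the files per chunk; a minimal sketch (path illustrative):

```python
from pathlib import Path

root = Path("path/to/lerobot_droid_anno")

# Count extracted parquet files per data chunk.
for chunk in sorted((root / "data").glob("chunk-*")):
    print(chunk.name, len(list(chunk.glob("*.parquet"))), "parquet files")

# Count videos across all chunks and camera views.
print(len(list((root / "videos").rglob("*.mp4"))), "videos total")
```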
## Quick Start
The dataloader code is at [RoboInterData/lerobot_dataloader](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData/lerobot_dataloader).
### Installation
```bash
pip install numpy torch pyarrow av opencv-python
```
### Basic Usage
```python
from lerobot_dataloader import create_dataloader

dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    batch_size=32,
    action_horizon=16,
)

for batch in dataloader:
    images = batch["observation.images.primary"]  # (B, H, W, 3)
    actions = batch["action"]                     # (B, 16, 7)
    trace = batch["annotation.trace"]             # Parsed JSON lists
    skill = batch["annotation.primitive_skill"]   # List of strings
    break
```
### Multiple Datasets (DROID + RH20T)
```python
dataloader = create_dataloader(
    [
        "path/to/lerobot_droid_anno",
        "path/to/lerobot_rh20t_anno",
    ],
    batch_size=32,
    action_horizon=16,
)

for batch in dataloader:
    print(batch["dataset_name"])  # Source dataset identifier
    break
```
### Data Filtering
#### Frame Range Filtering
Remove idle frames at episode start/end using `range_nop.json`:
```python
dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    range_nop_path="path/to/range_nop.json",
)
```
Format of `range_nop.json`:
```json
{
    "3072_exterior_image_1_left": [12, 217, 206]
}
```
`[start_frame, end_frame, valid_length]` — frames outside this range are idle/stationary.
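Note that `valid_length == end_frame - start_frame + 1` in the example above (217 - 12 + 1 = 206), which suggests the range is inclusive. If you are not using the dataloader, the same trimming can be applied by hand; a minimal sketch (paths illustrative, inclusive-range assumption as above):

```python
import json
import pyarrow.parquet as pq

with open("path/to/range_nop.json") as f:
    range_nop = json.load(f)

table = pq.read_table(
    "path/to/lerobot_droid_anno/data/chunk-000/episode_000000.parquet"
)
episode_name = table.column("episode_name")[0].as_py()

if episode_name in range_nop:
    start, end, valid_length = range_nop[episode_name]
    trimmed = table.slice(start, end - start + 1)  # keep non-idle frames
    print(trimmed.num_rows, "==", valid_length)
```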
#### Q_Annotation Filtering
Select episodes by annotation quality:
```python
from lerobot_dataloader import create_dataloader, QAnnotationFilter

# Only Primary quality
dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    q_filters=[
        QAnnotationFilter("Q_annotation.instruction_add", ["Primary"]),
        QAnnotationFilter("Q_annotation.gripper_box", ["Primary"]),
    ],
)

# Any non-empty annotation
dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    q_filters=[
        QAnnotationFilter("Q_annotation.trace", ["not_empty"]),
    ],
)
```
#### Combined Filtering
```python
from lerobot_dataloader import FilterConfig, QAnnotationFilter, create_dataloader

config = FilterConfig(
    range_nop_path="path/to/range_nop.json",
    q_filters=[
        QAnnotationFilter("Q_annotation.trace", ["Primary", "Secondary"]),
    ],
    q_filter_mode="all",  # "all" = AND, "any" = OR
)

dataloader = create_dataloader("path/to/lerobot_droid_anno", filter_config=config)
```
### Transforms
```python
from lerobot_dataloader import Compose, Normalize, ResizeImages, ToTensorImages, LeRobotDataset, create_dataloader
from lerobot_dataloader.transforms import compute_stats

# Compute normalization stats
dataset = LeRobotDataset("path/to/lerobot_droid_anno", load_videos=False)
stats = compute_stats(dataset)

# Create transform pipeline
transform = Compose([
    ResizeImages(height=224, width=224),
    ToTensorImages(),  # (H,W,C) uint8 -> (C,H,W) float32
    Normalize(stats),
])

dataloader = create_dataloader("path/to/lerobot_droid_anno", transform=transform)
```
### Direct Dataset Access
```python
from lerobot_dataloader import LeRobotDataset
from lerobot_dataloader.transforms import ParseAnnotations

dataset = LeRobotDataset(
    "path/to/lerobot_droid_anno",
    transform=ParseAnnotations(),
)

print(f"Total frames: {len(dataset)}")
print(f"Total episodes: {dataset.num_episodes}")
print(f"FPS: {dataset.fps}")

sample = dataset[0]
print(f"Action: {sample['action']}")
print(f"Object box: {sample['annotation.object_box']}")
print(f"Skill: {sample['annotation.primitive_skill']}")
```
---
## Format Conversion
The LeRobot v2.1 format was converted from original data + LMDB annotations using:
- **DROID**: [convert_droid_to_lerobot_anno_fast.py](https://github.com/InternRobotics/RoboInter/blob/main/RoboInterData/convert_to_lerobot/convert_droid_to_lerobot_anno_fast.py)
- **RH20T**: [convert_rh20t_to_lerobot_anno_fast.py](https://github.com/InternRobotics/RoboInter/blob/main/RoboInterData/convert_to_lerobot/convert_rh20t_to_lerobot_anno_fast.py)
---
## Related Resources
| Resource | Link |
|----------|------|
| RoboInter-Data (parent dataset) | [HuggingFace](https://huggingface.co/datasets/InternRobotics/RoboInter-Data) |
| RoboInter Project | [GitHub](https://github.com/InternRobotics/RoboInter) |
| DataLoader Code | [lerobot_dataloader](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData/lerobot_dataloader) |
| Conversion Scripts | [convert_to_lerobot](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData/convert_to_lerobot) |
| Demo Visualizer | [RoboInterData-Demo](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterData-Demo) |
| DROID Dataset | [droid-dataset.github.io](https://droid-dataset.github.io/) |
| RH20T Dataset | [rh20t.github.io](https://rh20t.github.io/) |
## License
Please refer to the original dataset licenses for [RoboInter](https://github.com/InternRobotics/RoboInter), [DROID](https://droid-dataset.github.io/), and [RH20T](https://rh20t.github.io/).