# Structured Human Action and Intent Dataset - Telemetry - xAPI

A real-world task (VR forklift operation), capturing aligned state → action → outcome trajectories.

The data includes explicit intent, task structure, and reward signals (success/failure, safety events), making it directly usable for policy learning, RLHF, and training agents for physical AI and world models.
## Dataset Statistics
| Split | Episodes | Timesteps (50 Hz) | Shards | Size |
|---|---|---|---|---|
| train | 9 | 384,950 | 3 | 183 MB |
| validation | 1 | 34,690 | 1 | 17 MB |
| test | 2 | 84,205 | 1 | 42 MB |
Total: 12 episodes, 503,845 timesteps
## Schema

Each row is one physics timestep (50 Hz). Columns:
### Episode identifiers (4 columns)

| Column | Type | Description |
|---|---|---|
| `session_id` | string | UUID of the recording session |
| `episode_id` | string | UUID of the episode within the session |
| `exercise_id` | string | Task label — primary grouping key for ML |
| `episode_step` | int32 | 0-based row index within this episode; use to reconstruct sequences across shard boundaries |
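Because a single episode can span shard boundaries, rows may arrive out of temporal order after concatenating shards. A minimal sketch of restoring order with `episode_step` (toy frames standing in for real shards):

```python
import pandas as pd

# Toy rows from two shards; real shards share the same columns.
shard_a = pd.DataFrame({"episode_id": ["ep1", "ep1"], "episode_step": [2, 0], "t_sim": [0.04, 0.00]})
shard_b = pd.DataFrame({"episode_id": ["ep1"], "episode_step": [1], "t_sim": [0.02]})

df = pd.concat([shard_a, shard_b], ignore_index=True)

# episode_step is the 0-based row index within an episode, so sorting on it
# restores temporal order regardless of shard layout.
episodes = {
    eid: ep.sort_values("episode_step").reset_index(drop=True)
    for eid, ep in df.groupby("episode_id")
}
```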
### Time index (2 columns)

| Column | Type | Description |
|---|---|---|
| `fixed_step_index` | int64 | Physics step counter (monotonic within episode) |
| `t_sim` | float64 | Simulation time in seconds since episode start |
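At 50 Hz each physics step advances simulation time by 0.02 s, so within an episode `t_sim` should track `episode_step / 50`. A quick sanity check on synthetic values:

```python
import numpy as np

HZ = 50                       # physics rate from the schema
episode_step = np.arange(5)   # 0-based row index within one episode
t_sim = episode_step / HZ     # expected simulation time in seconds

# Consecutive physics steps should be 1/50 = 0.02 s apart
dt = np.diff(t_sim)
```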
### Observations (171 columns)

| Group | Columns | Dim | Notes |
|---|---|---|---|
| Forklift body | `obs_pos_x/y/z`, `obs_rot_x/y/z/w`, `obs_lin_vel_x/y/z`, `obs_ang_vel_x/y/z`, `obs_steer_angle`, `obs_motor_torque`, `obs_parking_brake`, `obs_gear` | 17 | World-frame pose + drivetrain state |
| Mast & carriage | `obs_mast_height/tilt/side`, `obs_carriage_pos_x/y/z`, `obs_carriage_rot_x/y/z/w` | 10 | Fork assembly state (forklift frame) |
| HMD head pose | `obs_hmd_pos_x/y/z`, `obs_hmd_rot_x/y/z/w`, `obs_hmd_tracked` | 8 | Interpolated to physics rate, quaternion renormalized |
| Gaze | `obs_gaze_dir_x/y/z`, `obs_gaze_hit_distance` | 4 | Eye-tracking direction + surface hit distance |
| Hand controllers | `obs_hand_{left,right}_pos_x/y/z`, `obs_hand_{left,right}_rot_x/y/z/w`, `obs_hand_{left,right}_trigger/grip/tracked` | 20 | Pose interpolated; trigger/grip/tracked forward-filled |
| Environment rigidbodies | `obs_rb_{slot}_{pos,rot,lin_vel,ang_vel}_*`, `obs_rb_{slot}_present` | 112 | 8 role slots × 14 cols each (see below) |
### Rigidbody role slots (`obs_rb_*`)

Each dynamic scene object is mapped to a fixed slot so the schema is uniform across exercises.
`present=1` when the entity provided data at that timestep; columns are zeroed when `present=0`.

| Slot | Matched entity | Notes |
|---|---|---|
| `rb_vehicle` | `vehicle_cb` / `vehicle_rt` | Forklift chassis — always present |
| `rb_carriage` | `carriage_cb_default` / `CarriageRail_` | Mast carriage body |
| `rb_pivot_reach` | `PIVOT_reach` | Reach-truck only; `present=0` on counterbalance |
| `rb_pivot_tilt` | `PIVOT_tilt` | Reach-truck only; `present=0` on counterbalance |
| `rb_crate_0..3` | `TargetCrate_<block>.<step>`, sorted | Up to 4 crates; unused slots zeroed |

Per-slot columns (prefix `obs_{slot}_`): `pos_x/y/z`, `rot_x/y/z/w`, `lin_vel_x/y/z`, `ang_vel_x/y/z`, `present`.
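Because absent slots are zero-filled rather than NaN, zeroed poses can bias statistics if treated as real observations. A sketch of masking them out with the `present` flag (toy columns; the real prefix follows `obs_rb_{slot}_`):

```python
import pandas as pd

# Toy slot: two timesteps with data, one where the crate was absent (zeroed).
df = pd.DataFrame({
    "obs_rb_crate_0_pos_x": [1.0, 2.0, 0.0],
    "obs_rb_crate_0_present": [1.0, 1.0, 0.0],
})

# Replace rows with present=0 by NaN so they drop out of statistics.
pos = df["obs_rb_crate_0_pos_x"].where(df["obs_rb_crate_0_present"] == 1.0)

# Mean over timesteps where the crate actually provided data: (1 + 2) / 2
mean_pos = pos.mean()  # pandas skips NaN by default
```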
### Actions (7D float32)

| Column | Range | Derivation |
|---|---|---|
| `act_throttle` | [-1, 1] | `motor_torque / 4600 × gear_sign` |
| `act_steer` | [-1, 1] | `steer_angle / 70` |
| `act_brake` | [0, 1] | Prefer `forklift_state.input_brake`; fallback to `human_controls` brake axis/button |
| `act_lift` | [-1, 1] | Direct from `input_lift` |
| `act_tilt` | [-1, 1] | Direct from `input_tilt` |
| `act_sideshift` | [-1, 1] | Direct from `input_sideshift` |
| `act_boost` | [0, 1] | Direct from `input_boost` |
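The throttle and steer derivations above can be reproduced from raw drivetrain signals. A sketch, where the constants 4600 (Nm) and 70 (degrees) come from the table and `gear_sign` (+1 forward, -1 reverse) and the clipping are assumptions about how the normalization is applied:

```python
import numpy as np

MAX_TORQUE = 4600.0   # Nm, from the act_throttle derivation
MAX_STEER = 70.0      # degrees, from the act_steer derivation

def derive_actions(motor_torque, gear_sign, steer_angle):
    """Map raw drivetrain state to the normalized [-1, 1] action ranges."""
    act_throttle = np.clip(motor_torque / MAX_TORQUE * gear_sign, -1.0, 1.0)
    act_steer = np.clip(steer_angle / MAX_STEER, -1.0, 1.0)
    return act_throttle, act_steer

# Half torque in reverse, half of full steering lock
thr, steer = derive_actions(motor_torque=2300.0, gear_sign=-1, steer_angle=35.0)
```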
### Rewards (4 columns)

| Column | Type | Description |
|---|---|---|
| `reward_collision` | float32 | -0.1 × max_collision_velocity per step |
| `reward_step_completed` | float64 | +1.0 at timesteps where a step completes |
| `reward_task` | float64 | +10.0 success / -5.0 failure on final timestep (from xAPI) |
| `reward_time` | float64 | -0.001 per timestep |
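The four reward columns are components of one scalar reward; for RL they are typically summed per step, optionally followed by a discounted return per episode. A minimal sketch on a toy 3-step episode using the sign conventions from the table:

```python
import numpy as np

# Toy per-step components (success bonus lands on the final timestep).
reward_collision = np.array([0.0, -0.05, 0.0])
reward_step_completed = np.array([0.0, 1.0, 0.0])
reward_task = np.array([0.0, 0.0, 10.0])
reward_time = np.full(3, -0.001)

# Scalar per-step reward
reward = reward_collision + reward_step_completed + reward_task + reward_time

# Discounted return G = sum_t gamma^t * r_t
gamma = 0.99
G = float(np.sum(reward * gamma ** np.arange(len(reward))))
```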
### Episode signals (4 columns)

| Column | Type | Description |
|---|---|---|
| `step_token` | string | Active exercise step (forward-filled, `""` between steps) |
| `done` | bool | True on final timestep |
| `truncated` | bool | True if episode ended without a clean episode_end marker |
| `paused` | bool | True during paused intervals (rows excluded by default at build time) |
## Normalization

Observation vectors are not globally normalized — values are in Unity world-space units (metres, rad/s, Nm). The action vector is normalized: `act_throttle` and `act_steer` are scaled to [-1, 1]; mast/fork inputs are direct joystick values in [-1, 1].

For training, normalize observations using per-column statistics computed from the training split.
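Per-column standardization from training-split statistics might look like this (toy arrays; the key point is that validation and test reuse the train-split mean/std rather than re-fitting, to avoid leakage):

```python
import numpy as np

obs_train = np.array([[0.0, 10.0], [2.0, 30.0], [4.0, 50.0]])  # (T, D) toy observations
obs_test = np.array([[2.0, 20.0]])

# Statistics come from the training split only.
mean = obs_train.mean(axis=0)
std = obs_train.std(axis=0) + 1e-8   # epsilon guards constant columns

obs_train_n = (obs_train - mean) / std
obs_test_n = (obs_test - mean) / std  # same transform, no re-fitting
```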
## Coordinate System

Unity left-handed: X right, Y up, Z forward. All positions in metres. Rotations are quaternions in (x, y, z, w) component order.
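If a downstream pipeline expects right-handed coordinates, one common convention is to mirror the frame across the YZ plane (negate X); this target frame is an assumption for illustration, not part of the dataset. Under an X mirror, the reflected rotation keeps the quaternion's x and w components and negates y and z:

```python
def unity_to_right_handed(pos, quat):
    """Mirror Unity's left-handed frame across the YZ plane (negate X).

    pos is (x, y, z); quat is (x, y, z, w) component order, as in the dataset.
    Mirroring along X maps quaternion (qx, qy, qz, qw) -> (qx, -qy, -qz, qw).
    """
    x, y, z = pos
    qx, qy, qz, qw = quat
    return (-x, y, z), (qx, -qy, -qz, qw)

# Identity rotation stays identity; only the position's x flips sign.
p, q = unity_to_right_handed((1.0, 2.0, 3.0), (0.0, 0.0, 0.0, 1.0))
```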
## Known Limitations

- VR-only data: Episodes were recorded in a Unity VR simulator. Physics are high-fidelity but do not include all real-world sensor noise.
- `obs_rb_*_present` masks: Rigidbody slots that are absent for an exercise type (e.g. `obs_rb_pivot_reach_*` on counterbalance trucks) have `present=0` and zeroed pose columns for those timesteps.
## Loading the Dataset

```python
import pandas as pd

# Load a single shard
df = pd.read_parquet("data/train-00000-of-00003.parquet")

# Reconstruct per-episode sequences
for episode_id, episode in df.groupby("episode_id"):
    obs = episode[[c for c in episode.columns if c.startswith("obs_")]].values  # (T, 171)
    act = episode[[c for c in episode.columns if c.startswith("act_")]].values  # (T, 7)
    reward = (
        episode["reward_collision"]
        + episode["reward_step_completed"]
        + episode["reward_task"]
        + episode["reward_time"]
    ).values  # (T,)
    done = episode["done"].values  # (T,)
```

```python
# Load with the Hugging Face datasets library
from datasets import load_dataset

ds = load_dataset("path/to/dataset", split="train")
```
## Companion Annotation Tables

Two companion configs provide structured event data that can be joined back to the trajectory via `(session_id, episode_id, t_sim)`.

`xapi` config — one row per xAPI statement (attempted, completed, passed/failed):

```python
xapi = pd.read_parquet("xapi/xapi-00000-of-00001.parquet")
# columns: session_id, episode_id, exercise_id, statement_id, timestamp, verb,
#          actor_name, activity_id, step_token, success, completion, duration,
#          duration_seconds, score_scaled, registration, extensions_json
```

`rule_events` config — one row per rule firing (collision, procedure violation, etc.):

```python
rules = pd.read_parquet("rule_events/rule_events-00000-of-00001.parquet")
# columns: session_id, episode_id, exercise_id, t_sim, event_type, rule_name,
#          rule_version, severity, completion, competency_category, competency_type,
#          section_id, objective_index, pos_x/y/z, rot_x/y/z, localization_key
```
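Rule events carry a `t_sim` timestamp that generally falls between physics steps, so they can be aligned to the last preceding timestep with `pandas.merge_asof`. A sketch on toy frames standing in for the real parquet files:

```python
import pandas as pd

# Toy trajectory rows (one episode, three physics steps at 50 Hz)
traj = pd.DataFrame({
    "episode_id": ["ep1"] * 3,
    "t_sim": [0.00, 0.02, 0.04],
    "episode_step": [0, 1, 2],
})

# Toy rule event fired between physics steps 1 and 2
rules = pd.DataFrame({
    "episode_id": ["ep1"],
    "t_sim": [0.031],
    "event_type": ["collision"],
})

# Attach each event to the last physics step at or before its timestamp.
joined = pd.merge_asof(
    rules.sort_values("t_sim"),
    traj.sort_values("t_sim"),
    on="t_sim",
    by="episode_id",
    direction="backward",
)
```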
## Companion Catalog

The `catalog` config provides episode-level metadata (`exercise_id`, duration, quality flags, stream inventory) without downloading any trajectory data:

```python
catalog = pd.read_parquet("catalog/episodes.parquet")
```
## License

[PLACEHOLDER: license]

## Citation

[PLACEHOLDER: citation]