From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning (DICE-RL)
Project Website | Paper | GitHub
This repository contains the datasets used in DICE-RL, a framework that uses reinforcement learning as a "distribution contraction" operator to refine pretrained generative robot policies. The data includes both pretraining data (for behavior cloning, BC) and finetuning data (for DICE-RL) across various Robomimic environments.
Dataset Structure
The datasets are provided in NumPy format; each split folder typically contains `train.npy` and `normalization.npz`. The data is organized as follows:
```
data_dir/
└── robomimic
    ├── {env_name}-low-dim
    │   ├── ph_pretrain
    │   └── ph_finetune
    └── {env_name}-img
        ├── ph_pretrain
        └── ph_finetune
```
- ph_pretrain: Contains the datasets used for pretraining the BC policies.
- ph_finetune: Contains the datasets used for finetuning the DICE-RL policies. These mirror the pretraining sets, but each trajectory is truncated to end at exactly one success, which keeps value learning consistent between offline and online data.
- low-dim: State-based observations.
- img: High-dimensional pixel (image) observations.
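A minimal loading sketch for one of the split folders above. Note this is illustrative only: the key names inside `normalization.npz` (here assumed to be `mean` and `std`) and the exact layout of `train.npy` are assumptions, not guaranteed by the DICE-RL repository.

```python
import os
import numpy as np

def load_split(split_dir):
    """Load one split folder (e.g. .../ph_pretrain) containing
    train.npy and normalization.npz.

    allow_pickle=True is used in case the trajectories are stored
    as object arrays (an assumption about the on-disk format)."""
    train = np.load(os.path.join(split_dir, "train.npy"), allow_pickle=True)
    stats = np.load(os.path.join(split_dir, "normalization.npz"))
    return train, stats

def normalize(obs, stats, eps=1e-8):
    """Z-score normalize observations with the stored statistics.
    The 'mean'/'std' keys are hypothetical names for illustration."""
    return (obs - stats["mean"]) / (stats["std"] + eps)
```

Usage would then look like `train, stats = load_split("data_dir/robomimic/lift-low-dim/ph_pretrain")`, with `normalize` applied to observation arrays before feeding them to the policy.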
Usage
You can download the datasets using the scripts provided in the GitHub repository:
```bash
bash script/download_hf.sh
```
For more details on generating your own data or processing raw Robomimic datasets, please refer to the project's dataset processing guide.
Citation
```bibtex
@article{sun2026prior,
  title={From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning},
  author={Sun, Zhanyi and Song, Shuran},
  journal={arXiv preprint arXiv:2603.10263},
  year={2026}
}
```