From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning (DICE-RL)
This repository contains the datasets used in the paper From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning.
Project Website | GitHub Repository
Dataset Description
Distribution Contractive Reinforcement Learning (DICE-RL) is a framework that uses reinforcement learning (RL) to refine pretrained generative robot policies. This repository hosts the data used for pretraining Behavior Cloning (BC) policies and finetuning them with DICE-RL across various Robomimic environments.
The data covers both:
- Low-dimensional (state-based) observations.
- Image-based (pixel-based) observations.
Data Splits
- `ph_pretrain`: Datasets used for pretraining the BC policies for broad behavioral coverage.
- `ph_finetune`: Datasets used for DICE-RL finetuning. These trajectories are truncated to end at exactly one success, ensuring consistent value learning.
Dataset Structure
The datasets are provided in NumPy format. Once downloaded, they follow this structure:

```
data_dir/
└── robomimic
    ├── {env_name}-low-dim
    │   ├── ph_pretrain
    │   └── ph_finetune
    └── {env_name}-img
        ├── ph_pretrain
        └── ph_finetune
```
Each folder contains:
- `train.npy`: The trajectory data.
- `normalization.npz`: Statistics used for data normalization.
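The files above can be read with standard NumPy calls. The sketch below is a minimal, hypothetical example of that loading pattern: the field names (`obs`, `actions`), array shapes, and the `mean`/`std` keys in `normalization.npz` are assumptions for illustration, not documented by the authors, so it builds stand-in files first.

```python
import os
import tempfile

import numpy as np

# --- Stand-in for a downloaded {env_name}-low-dim/ph_finetune folder. ---
# Field names and shapes here are illustrative assumptions only.
data_dir = tempfile.mkdtemp()
np.save(
    os.path.join(data_dir, "train.npy"),
    np.array(
        [{"obs": np.zeros((10, 19)), "actions": np.zeros((10, 7))}],
        dtype=object,
    ),
)
np.savez(
    os.path.join(data_dir, "normalization.npz"),
    mean=np.zeros(19),
    std=np.ones(19),
)

# --- Loading pattern ---
# allow_pickle=True is required because the trajectories are stored
# as an object array of per-episode dicts in this sketch.
trajs = np.load(os.path.join(data_dir, "train.npy"), allow_pickle=True)
stats = np.load(os.path.join(data_dir, "normalization.npz"))

# Normalize observations with the provided statistics.
obs = trajs[0]["obs"]
obs_norm = (obs - stats["mean"]) / stats["std"]
print(obs_norm.shape)  # (10, 19)
```

If the real `train.npy` holds a plain numeric array rather than an object array, `allow_pickle` can be omitted; inspect one file to confirm the layout before building a data pipeline around it.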
Sample Usage
To download the datasets as intended by the authors, you can use the script provided in the official repository:
```bash
bash script/download_hf.sh
```
Citation
```bibtex
@article{sun2026prior,
  title={From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning},
  author={Sun, Zhanyi and Song, Shuran},
  journal={arXiv preprint arXiv:2603.10263},
  year={2026}
}
```