This model was released on 2024-10-31 and added to Hugging Face Transformers on 2026-03-16.
PI0
PI0 is a vision-language-action model for robotics manipulation. It jointly processes visual observations and language instructions to generate robot actions.
The abstract from the paper is as follows:

*Robot learning holds tremendous promise to unlock the full potential of flexible, general, and dexterous robot systems, as well as to address some of the deepest questions in artificial intelligence. However, bringing robot learning to the level of generality required for effective real-world systems faces major obstacles in terms of data, generalization, and robustness. In this paper, we discuss how generalist robot policies (i.e., robot foundation models) can address these challenges, and how we can design effective generalist robot policies for complex and highly dexterous tasks. We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge. We then discuss how this model can be trained on a large and diverse dataset from multiple dexterous robot platforms, including single-arm robots, dual-arm robots, and mobile manipulators. We evaluate our model in terms of its ability to perform tasks in zero shot after pre-training, follow language instructions from people and from a high-level VLM policy, and its ability to acquire new skills via fine-tuning. Our results cover a wide variety of tasks, such as laundry folding, table cleaning, and assembling boxes.*
This model was contributed by Molbap and RaushanTurganbay. The original code can be found here.
You can find all the checkpoints under the PI0 collection.
Usage examples
import torch
from transformers.image_utils import load_image
from transformers import PI0Processor, PI0ForConditionalGeneration

# Load the pretrained PI0 policy.
model = PI0ForConditionalGeneration.from_pretrained(
    "lerobot/pi0_base",
    dtype=torch.float32,
    device_map="auto",
    attn_implementation="sdpa",
)
processor = PI0Processor.from_pretrained("google/paligemma2-3b-mix-224")

# A language instruction and a camera observation.
prompt = "Pick up the object"
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/vla_pi0.jpg")
inputs = processor(image, prompt, return_tensors="pt")

# Proprioceptive robot state; replace with your robot's actual state.
state = torch.randn(1, 32)

# Flow matching inference: iteratively denoise random noise into an action chunk.
actions = model.sample_actions(**inputs, state=state, num_steps=3)
print(actions)
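For training, `forward()` also accepts ground-truth actions and returns a flow matching loss. The snippet below is a hedged sketch reusing `model`, `inputs`, and `state` from above; the action shape is illustrative, following the `chunk_size` and `max_action_dim` defaults (50 and 32):

# Training-style forward pass (sketch): supervising with ground-truth actions
# yields a flow matching (MSE) loss; shape (batch, chunk_size, max_action_dim) is illustrative.
actions = torch.randn(1, 50, 32)
outputs = model(**inputs, state=state, actions=actions)
print(outputs.loss)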
PI0Config
class transformers.PI0Config
( output_hidden_states: bool | None = False, return_dict: bool | None = True, dtype: str | torch.dtype | None = None, chunk_size_feed_forward: int = 0, is_encoder_decoder: bool = False, id2label: dict[int, str] | dict[str, str] | None = None, label2id: dict[str, int] | dict[str, str] | None = None, problem_type: Literal['regression', 'single_label_classification', 'multi_label_classification'] | None = None, tokenizer_class: str | None = None, vlm_config: dict | PreTrainedConfig | None = None, dit_config: dict | PreTrainedConfig | None = None, chunk_size: int = 50, max_state_dim: int = 32, max_action_dim: int = 32, num_inference_steps: int = 10, time_sampling_beta_alpha: float = 1.5, time_sampling_beta_beta: float = 1.0, time_sampling_scale: float = 0.999, time_sampling_offset: float = 0.001, min_period: float = 0.004, max_period: float = 4.0, loss_reduction: str = 'mean' )
Parameters

- output_hidden_states (`bool`, *optional*, defaults to `False`) — Whether or not the model should return all hidden states.
- return_dict (`bool`, *optional*, defaults to `True`) — Whether to return a `ModelOutput` (dataclass) instead of a plain tuple.
- dtype (`Union[str, torch.dtype]`, *optional*) — The `dtype` of the weights. This attribute can be used to initialize the model to a non-default `dtype` (which is normally `float32`) and thus allow for optimal storage allocation. For example, if the saved model is `float16`, ideally we want to load it back using the minimal amount of memory needed to load `float16` weights.
- chunk_size_feed_forward (`int`, *optional*, defaults to `0`) — The chunk size of all feed forward layers in the residual attention blocks. A chunk size of `0` means that the feed forward layer is not chunked. A chunk size of `n` means that the feed forward layer processes `n` < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work?.
- is_encoder_decoder (`bool`, *optional*, defaults to `False`) — Whether the model is used as an encoder/decoder or not.
- id2label (`Union[dict[int, str], dict[str, str]]`, *optional*) — A map from index (for instance prediction index, or target index) to label.
- label2id (`Union[dict[str, int], dict[str, str]]`, *optional*) — A map from label to index for the model.
- problem_type (`Literal['regression', 'single_label_classification', 'multi_label_classification']`, *optional*) — Problem type for `XxxForSequenceClassification` models. Can be one of `"regression"`, `"single_label_classification"` or `"multi_label_classification"`.
- tokenizer_class (`str`, *optional*) — The class name of the model's tokenizer.
- vlm_config (`dict`, *optional*) — Configuration for the VLM backbone (PaliGemmaModel).
- dit_config (`dict`, *optional*) — Configuration for the DiT backbone. Defaults to a Gemma 300M variant.
- chunk_size (`int`, *optional*, defaults to 50) — Number of action steps to predict per chunk.
- max_state_dim (`int`, *optional*, defaults to 32) — Maximum state vector dimension (shorter vectors are zero-padded).
- max_action_dim (`int`, *optional*, defaults to 32) — Maximum action vector dimension (shorter vectors are zero-padded).
- num_inference_steps (`int`, *optional*, defaults to 10) — Number of denoising steps during inference.
- time_sampling_beta_alpha (`float`, *optional*, defaults to 1.5) — Alpha parameter of the Beta distribution used to sample diffusion time during training.
- time_sampling_beta_beta (`float`, *optional*, defaults to 1.0) — Beta parameter of the Beta distribution used to sample diffusion time during training.
- time_sampling_scale (`float`, *optional*, defaults to 0.999) — Scale factor for sampled time values.
- time_sampling_offset (`float`, *optional*, defaults to 0.001) — Offset added to sampled time values.
- min_period (`float`, *optional*, defaults to 0.004) — Minimum period for the sinusoidal time embedding.
- max_period (`float`, *optional*, defaults to 4.0) — Maximum period for the sinusoidal time embedding.
- loss_reduction (`str`, *optional*, defaults to `"mean"`) — The reduction to apply to the MSE loss.
This is the configuration class to store the configuration of a PI0Model. It is used to instantiate a PI0 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of lerobot/pi0_base.
Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
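The following is a short sketch of the usual configuration workflow, assuming the keyword arguments mirror the parameter list above; a model initialized from a configuration has random weights:

>>> from transformers import PI0Config, PI0ForConditionalGeneration

>>> # A default configuration, similar to lerobot/pi0_base
>>> configuration = PI0Config()

>>> # Overriding selected fields (names follow the parameter list above)
>>> configuration = PI0Config(chunk_size=25, num_inference_steps=5)

>>> # Initializing a model (with random weights) from the configuration
>>> model = PI0ForConditionalGeneration(configuration)
>>> model.config.chunk_size
25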
PI0Processor
class transformers.PI0Processor
( image_processor=None, tokenizer=None, chat_template=None, **kwargs )
Constructs a PI0Processor which wraps an image processor and a tokenizer into a single processor.

PI0Processor offers all the functionalities of PI0ImageProcessorFast and the underlying tokenizer. See the documentation of PI0ImageProcessorFast and the tokenizer class for more information.
__call__
( images: ImageInput | None = None, text: str | list[str] | list[list[str]] | None = None, actions: list | np.ndarray | torch.Tensor | None = None, state: list | np.ndarray | torch.Tensor | None = None, **kwargs: Unpack[PI0ProcessorKwargs] ) → BatchFeature
Parameters

- images (`ImageInput`, *optional*) — Image to preprocess. Expects a single image or a batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set `do_rescale=False`.
- text (`Union[str, list[str], list[list[str]]]`, *optional*) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If you pass a pretokenized input, set `is_split_into_words=True` to avoid ambiguity with batched inputs.
- actions (`list | np.ndarray | torch.Tensor`, *optional*) — Ground-truth actions to be predicted by the model. If provided, padding, mean and std normalization will be applied.
- state (`list | np.ndarray | torch.Tensor`, *optional*) — Robot states to be fed to the model. If provided, padding, mean and std normalization will be applied.
- return_tensors (`str` or `TensorType`, *optional*) — If set, will return tensors of a particular framework. Acceptable values are: `'pt'` to return PyTorch `torch.Tensor` objects, `'np'` to return NumPy `np.ndarray` objects.
- **kwargs (`ProcessingKwargs`, *optional*) — Additional processing options for each modality (text, images, videos, audio). Model-specific parameters are listed above; see the TypedDict class for the complete list of supported arguments.
Returns

A BatchFeature with the following fields:

- input_ids — List of token ids to be fed to a model. Returned when `text` is not `None`. If `suffix` is provided, the `input_ids` will also contain the suffix input ids.
- attention_mask — List of indices specifying which tokens should be attended to by the model (when `return_attention_mask=True` or if "attention_mask" is in `self.model_input_names` and if `text` is not `None`).
- pixel_values — Pixel values to be fed to a model. Returned when `images` is not `None`.
- pixel_attention_mask — Pixel values padding mask to be fed to a model. Returned when `images` is not `None`.
- state — Robot state compatible with the model. Returned when `state` is not `None`.
- actions — Label actions compatible with training. Returned when `actions` is not `None`.
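Below is a brief, hedged sketch of preparing a batch with the processor; the tokenizer checkpoint and the state shape follow the usage example above and are illustrative:

>>> import torch
>>> from transformers import PI0Processor
>>> from transformers.image_utils import load_image

>>> processor = PI0Processor.from_pretrained("google/paligemma2-3b-mix-224")
>>> image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/vla_pi0.jpg")
>>> # Providing `state` triggers the padding and normalization described above.
>>> batch = processor(images=image, text="Pick up the object", state=torch.randn(1, 32), return_tensors="pt")
>>> list(batch.keys())  # expect input_ids, attention_mask, pixel_values, state, ...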
PI0Model
class transformers.PI0Model
( config: PI0Config )
Parameters
- config (PI0Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Pi0 Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
( action_embeds: torch.Tensor, input_ids: torch.Tensor | None = None, pixel_values: torch.Tensor | None = None, attention_mask: torch.Tensor | None = None, pixel_attention_mask: torch.Tensor | None = None, position_ids: torch.LongTensor | None = None, inputs_embeds: torch.Tensor | None = None, past_key_values: Cache | None = None, **kwargs ) → BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters

- action_embeds (`torch.Tensor`, *optional*) — The embeddings of input actions and robot states.
- input_ids (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- pixel_values (`torch.Tensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) — The tensors corresponding to the input images. Pixel values can be obtained using PI0ImageProcessorFast. See `PI0ImageProcessorFast.__call__` for details (PI0Processor uses PI0ImageProcessorFast for processing images).
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- pixel_attention_mask (`torch.Tensor`, *optional*) — The mask indicating padded positions in the input image.
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`.
- inputs_embeds (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- past_key_values (`~cache_utils.Cache`, *optional*) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values` returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`. Only a Cache instance is allowed as input; see the kv cache guide. If no `past_key_values` are passed, DynamicCache will be initialized by default. The model will output the same cache format that is fed as input. If `past_key_values` are used, the user is expected to input only unprocessed `input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, unprocessed_length)` instead of all `input_ids` of shape `(batch_size, sequence_length)`.
Returns

BaseModelOutputWithPast or `tuple(torch.FloatTensor)`

A BaseModelOutputWithPast or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (PI0Config) and inputs.

The PI0Model forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

- last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — Sequence of hidden-states at the output of the last layer of the model. If `past_key_values` is used, only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output.
- past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — It is a Cache instance. For more details, see the kv cache guide. Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally, if `config.is_encoder_decoder=True`, in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
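Since `forward()` requires precomputed `action_embeds`, which PI0ForConditionalGeneration normally builds internally, the hedged sketch below only illustrates constructing the bare backbone from a configuration; weights are randomly initialized and the printed value follows the `chunk_size` default:

>>> from transformers import PI0Config, PI0Model

>>> # Build the bare backbone from a default configuration (random weights).
>>> model = PI0Model(PI0Config())
>>> model.config.chunk_size
50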
PI0ForConditionalGeneration
PI0 model with action projection heads and flow matching.
forward
( state: torch.FloatTensor, noise: torch.FloatTensor | None = None, timestep: torch.FloatTensor | None = None, input_ids: torch.Tensor | None = None, pixel_values: torch.Tensor | None = None, pixel_attention_mask: torch.BoolTensor | None = None, attention_mask: torch.Tensor | None = None, position_ids: torch.LongTensor | None = None, inputs_embeds: torch.Tensor | None = None, past_key_values: Cache | None = None, actions: torch.FloatTensor | None = None, **kwargs ) → CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters

- state (`torch.Tensor`, *optional*) — Current robot state.
- noise (`torch.Tensor`, *optional*) — Random noise at the current timestep that needs to be denoised.
- timestep (`torch.Tensor`, *optional*) — Current denoising timestep.
- input_ids (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- pixel_values (`torch.Tensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) — The tensors corresponding to the input images. Pixel values can be obtained using PI0ImageProcessorFast. See `PI0ImageProcessorFast.__call__` for details (PI0Processor uses PI0ImageProcessorFast for processing images).
- pixel_attention_mask (`torch.Tensor`, *optional*) — The mask indicating padded positions in the input image.
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) — Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`.
- inputs_embeds (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- past_key_values (`~cache_utils.Cache`, *optional*) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values` returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`. Only a Cache instance is allowed as input; see the kv cache guide. If no `past_key_values` are passed, DynamicCache will be initialized by default. The model will output the same cache format that is fed as input. If `past_key_values` are used, the user is expected to input only unprocessed `input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, unprocessed_length)` instead of all `input_ids` of shape `(batch_size, sequence_length)`.
- actions (`torch.Tensor`, *optional*) — Input actions that need to be predicted. Used only during training to compute the loss.
Returns

CausalLMOutputWithPast or `tuple(torch.FloatTensor)`

A CausalLMOutputWithPast or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (PI0Config) and inputs.

The PI0ForConditionalGeneration forward method overrides the `__call__` special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

- loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `actions` is provided) — Flow matching (MSE) loss over the predicted action chunk.
- logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) — It is a Cache instance. For more details, see the kv cache guide. Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from PIL import Image
>>> from transformers import AutoProcessor, PI0ForConditionalGeneration
>>> model = PI0ForConditionalGeneration.from_pretrained("lerobot/pi0_base")
>>> processor = AutoProcessor.from_pretrained("lerobot/pi0_base")
>>> messages = [
... {
... "role": "user", "content": [
... {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"},
... {"type": "text", "text": "Where is the cat standing?"},
... ]
... },
... ]
>>> inputs = processor.apply_chat_template(
... messages,
... tokenize=True,
... return_dict=True,
... return_tensors="pt",
... add_generation_prompt=True
... )
>>> # Generate
>>> generate_ids = model.generate(**inputs)
>>> processor.batch_decode(generate_ids, skip_special_tokens=True)[0]

sample_actions
( state: torch.FloatTensor, input_ids: torch.LongTensor, pixel_values: torch.FloatTensor, noise: torch.FloatTensor | None = None, attention_mask: torch.Tensor | None = None, pixel_attention_mask: torch.BoolTensor | None = None, num_steps: int | None = None, **kwargs )
Run flow matching inference to generate actions.
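A short, hedged sketch of typical use, reusing `model`, `inputs`, and `state` from the usage example above; the printed shape is illustrative, following the `chunk_size` and `max_action_dim` defaults:

# Denoise random noise into a chunk of future actions; `num_steps` sets the number
# of flow matching integration steps (when omitted, presumably config.num_inference_steps).
actions = model.sample_actions(**inputs, state=state, num_steps=10)
print(actions.shape)  # e.g. torch.Size([1, 50, 32])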