# Pipeline
## ModularPipeline[[diffusers.ModularPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.ModularPipeline</name><anchor>diffusers.ModularPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/modular_pipelines/modular_pipeline.py#L1423</source><parameters>[{"name": "blocks", "val": ": typing.Optional[diffusers.modular_pipelines.modular_pipeline.ModularPipelineBlocks] = None"}, {"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike, NoneType] = None"}, {"name": "components_manager", "val": ": typing.Optional[diffusers.modular_pipelines.components_manager.ComponentsManager] = None"}, {"name": "collection", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **blocks** -- ModularPipelineBlocks, the blocks to be used in the pipeline</paramsdesc><paramgroups>0</paramgroups></docstring>
Base class for all Modular pipelines.
> [!WARNING]
> This is an experimental feature and is likely to change in the future.
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>from_pretrained</name><anchor>diffusers.ModularPipeline.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/modular_pipelines/modular_pipeline.py#L1604</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike, NoneType]"}, {"name": "trust_remote_code", "val": ": typing.Optional[bool] = None"}, {"name": "components_manager", "val": ": typing.Optional[diffusers.modular_pipelines.components_manager.ComponentsManager] = None"}, {"name": "collection", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike`, optional) --
Path to a pretrained pipeline configuration. It first tries to load the config from
`modular_model_index.json`, then falls back to `model_index.json` for compatibility with standard
non-modular repositories. If the repo does not contain any pipeline config, the config is set to None
during initialization.
- **trust_remote_code** (`bool`, optional) --
Whether to trust remote code when loading the pipeline. Must be set to `True` to create
pipeline blocks from custom code in `pretrained_model_name_or_path`.
- **components_manager** (`ComponentsManager`, optional) --
A ComponentsManager instance for managing components across different pipelines and applying
offloading strategies.
- **collection** (`str`, optional) --
Collection name for organizing components in the ComponentsManager.</paramsdesc><paramgroups>0</paramgroups></docstring>
Load a ModularPipeline from a Hugging Face Hub repository.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>get_component_spec</name><anchor>diffusers.ModularPipeline.get_component_spec</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/modular_pipelines/modular_pipeline.py#L1947</source><parameters>[{"name": "name", "val": ": str"}]</parameters><retdesc>- a copy of the ComponentSpec object for the given component name</retdesc></docstring>
</div>
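`get_component_spec` returns a *copy* of the spec, so callers can modify it without affecting the pipeline's internal state. A minimal sketch of that copy semantics (hypothetical `Spec` stand-in, not the actual diffusers implementation):

```python
import copy
from dataclasses import dataclass, field


# Hypothetical stand-in for a ComponentSpec, for illustration only.
@dataclass
class Spec:
    name: str
    config: dict = field(default_factory=dict)


def get_component_spec(specs: dict, name: str) -> Spec:
    # Return a deep copy so the caller can mutate the result
    # without touching the pipeline's internal _component_specs.
    return copy.deepcopy(specs[name])


specs = {"guider": Spec("guider", {"guidance_scale": 5.0})}
spec = get_component_spec(specs, "guider")
spec.config["guidance_scale"] = 7.5  # internal spec stays at 5.0
```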
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>load_components</name><anchor>diffusers.ModularPipeline.load_components</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/modular_pipelines/modular_pipeline.py#L2085</source><parameters>[{"name": "names", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **names** -- List of component names to load. If None, will load all components with
default_creation_method == "from_pretrained". If provided as a list or string, will load only the
specified components.
- ****kwargs** -- additional kwargs passed to `from_pretrained()`. Can be:
- a single value applied to all components being loaded, e.g. `torch_dtype=torch.bfloat16`
- a dict mapping component names to values, e.g. `torch_dtype={"unet": torch.bfloat16, "default": torch.float32}`
- a loading field that overrides the ComponentSpec, e.g. `repo`,
`variant`, `revision`, etc.</paramsdesc><paramgroups>0</paramgroups></docstring>
Load selected components from specs.
</div>
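The single-value versus dict form of these kwargs can be illustrated with a small helper (hypothetical sketch of the resolution rule, not the actual diffusers code):

```python
# Resolve a load_components-style kwarg for one component:
# a plain value applies to every component, while a dict is looked up
# by component name with a "default" fallback.
def resolve_kwarg(value, component_name):
    if isinstance(value, dict):
        return value.get(component_name, value.get("default"))
    return value


torch_dtype = {"unet": "bfloat16", "default": "float32"}
print(resolve_kwarg(torch_dtype, "unet"))          # bfloat16
print(resolve_kwarg(torch_dtype, "text_encoder"))  # float32
print(resolve_kwarg("float16", "vae"))             # float16
```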
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>register_components</name><anchor>diffusers.ModularPipeline.register_components</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/modular_pipelines/modular_pipeline.py#L1742</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- ****kwargs** -- Keyword arguments where keys are component names and values are component objects.
E.g., register_components(unet=unet_model, text_encoder=encoder_model)</paramsdesc><paramgroups>0</paramgroups></docstring>
Register components with their corresponding specifications.
This method:
1. Sets component objects as attributes on the loader (e.g., `self.unet = unet`)
2. Updates the config dict, which is saved as `modular_model_index.json` during `save_pretrained` (only
for from_pretrained components)
3. Adds components to the components manager if one is attached (only for from_pretrained components)
This method is called when:
- Components are first initialized in `__init__`:
  - from_pretrained components are not loaded during `__init__`, so they are registered as None
  - non-from_pretrained components are created during `__init__` and registered as the objects themselves
- Components are updated with the `update_components()` method, e.g. `loader.update_components(unet=unet)` or
`loader.update_components(guider=guider_spec)`
- (from_pretrained) Components are loaded with the `load_components()` method, e.g.
`loader.load_components(names=["unet"])` or `loader.load_components()` to load all default components
Notes:
- Registering None for a component sets the attribute to None but still syncs the spec with the config
dict, which is saved as `modular_model_index.json` during `save_pretrained`
- component_specs are updated to match the new component outside of this method, e.g. in the
`update_components()` method
</div>
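The registration mechanics above can be sketched with a minimal loader (hypothetical class, not the actual diffusers implementation):

```python
# Minimal sketch of register_components: set attributes and sync a
# config dict that would be saved as modular_model_index.json.
class Loader:
    def __init__(self):
        self.config = {}

    def register_components(self, **kwargs):
        for name, component in kwargs.items():
            # 1. expose the component as an attribute, e.g. self.unet = unet
            setattr(self, name, component)
            # 2. sync the config dict; registering None still records the slot
            self.config[name] = None if component is None else type(component).__name__


loader = Loader()
loader.register_components(unet=object(), text_encoder=None)
print(loader.config)  # {'unet': 'object', 'text_encoder': None}
```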
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>save_pretrained</name><anchor>diffusers.ModularPipeline.save_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/modular_pipelines/modular_pipeline.py#L1698</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
Path to the directory where the pipeline will be saved.
- **push_to_hub** (`bool`, optional) --
Whether to push the pipeline to the Hugging Face Hub.
- ****kwargs** -- Additional arguments passed to `save_config()` method</paramsdesc><paramgroups>0</paramgroups></docstring>
Save the pipeline configuration to a directory. It does not save the components; save them separately.
</div>
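Since only the configuration is written, the core of the operation can be sketched as a plain JSON dump (hypothetical helper; the real method goes through `save_config()`):

```python
import json
import pathlib
import tempfile


# Sketch: write the pipeline config as modular_model_index.json.
# Component weights are NOT written; save those separately.
def save_config(config: dict, save_directory: str) -> pathlib.Path:
    directory = pathlib.Path(save_directory)
    directory.mkdir(parents=True, exist_ok=True)
    out = directory / "modular_model_index.json"
    out.write_text(json.dumps(config, indent=2))
    return out


with tempfile.TemporaryDirectory() as tmp:
    path = save_config({"unet": None, "requires_safety_checker": False}, tmp)
    print(path.name)  # modular_model_index.json
```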
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>to</name><anchor>diffusers.ModularPipeline.to</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/modular_pipelines/modular_pipeline.py#L2164</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **dtype** (`torch.dtype`, *optional*) --
Returns a pipeline with the specified
[`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)
- **device** (`torch.Device`, *optional*) --
Returns a pipeline with the specified
[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device)
- **silence_dtype_warnings** (`bool`, *optional*, defaults to `False`) --
Whether to omit warnings if the target `dtype` is not compatible with the target `device`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiffusionPipeline](/docs/diffusers/pr_12595/en/api/pipelines/overview#diffusers.DiffusionPipeline)</rettype><retdesc>The pipeline converted to the specified `dtype` and/or `device`.</retdesc></docstring>
Performs pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the
arguments of `self.to(*args, **kwargs)`.
> [!TIP]
> If the pipeline already has the correct torch.dtype and torch.device, it is returned as is.
> Otherwise, the returned pipeline is a copy of self with the desired torch.dtype and torch.device.
Here are the ways to call `to`:
- `to(dtype, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the specified
[`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)
- `to(device, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the specified
[`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device)
- `to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the
specified [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) and
[`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)
</div>
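The three call patterns above can be sketched as a small argument parser (hypothetical sketch; the real method inspects actual torch types, here strings stand in for devices and any other value stands in for a dtype):

```python
# Sort positional arguments to `to()` into a device and a dtype.
def parse_to_args(*args, device=None, dtype=None):
    for arg in args:
        if isinstance(arg, str):
            device = arg  # e.g. "cuda" or "cpu"
        else:
            dtype = arg   # dtype stand-in
    return device, dtype


print(parse_to_args("cuda"))         # ('cuda', None)
print(parse_to_args("cpu", float))   # ('cpu', <class 'float'>)
print(parse_to_args(device="cuda", dtype=float))  # ('cuda', <class 'float'>)
```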
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>update_components</name><anchor>diffusers.ModularPipeline.update_components</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/modular_pipelines/modular_pipeline.py#L1954</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- ****kwargs** -- Component objects, ComponentSpec objects, or configuration values to update:
- Component objects: Only supports components whose specs can be extracted with the
`ComponentSpec.from_component()` method, i.e. components created with `ComponentSpec.load()` or
ConfigMixin subclasses that aren't nn.Modules (e.g., `unet=new_unet, text_encoder=new_encoder`)
- ComponentSpec objects: Only supports default_creation_method == "from_config", will call create()
method to create a new component (e.g., `guider=ComponentSpec(name="guider",
type_hint=ClassifierFreeGuidance, config={...}, default_creation_method="from_config")`)
- Configuration values: Simple values to update configuration settings (e.g.,
`requires_safety_checker=False`)</paramsdesc><paramgroups>0</paramgroups><raises>- ``ValueError`` -- If a component object is not supported by the `ComponentSpec.from_component()` method:
- nn.Module components without a valid `_diffusers_load_id` attribute
- Non-ConfigMixin components without a valid `_diffusers_load_id` attribute</raises><raisederrors>``ValueError``</raisederrors></docstring>
Update components, configuration values, and their specs after the pipeline has been instantiated.
This method allows you to:
1. Replace existing components with new ones (e.g., updating `self.unet` or `self.text_encoder`)
2. Update configuration values (e.g., changing `self.requires_safety_checker` flag)
In addition to updating the components and configuration values as pipeline attributes, the method also
updates:
- the corresponding specs in `_component_specs` and `_config_specs`
- the `config` dict, which will be saved as `modular_model_index.json` during `save_pretrained`
<ExampleCodeBlock anchor="diffusers.ModularPipeline.update_components.example">
Examples:
```python
# Update multiple components at once
pipeline.update_components(unet=new_unet_model, text_encoder=new_text_encoder)

# Update configuration values
pipeline.update_components(requires_safety_checker=False)

# Update both components and configs together
pipeline.update_components(unet=new_unet_model, requires_safety_checker=False)

# Update with ComponentSpec objects (from_config only)
pipeline.update_components(
    guider=ComponentSpec(
        name="guider",
        type_hint=ClassifierFreeGuidance,
        config={"guidance_scale": 5.0},
        default_creation_method="from_config",
    )
)
```
</ExampleCodeBlock>
Notes:
- Components with trained weights must be created with `ComponentSpec.load()`. If the component has not been
shared on the Hugging Face Hub and you don't have loading specs, you can upload it with `push_to_hub()`
- ConfigMixin objects without weights (e.g., schedulers, guiders) can be passed directly
- ComponentSpec objects with default_creation_method="from_pretrained" are not supported in
`update_components()`
</div></div>
<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/modular_diffusers/pipeline.md" />
