Pipeline
ModularPipeline[[diffusers.ModularPipeline]]
Base class for all Modular pipelines.
> This is an experimental feature and is likely to change in the future.
from_pretrained[[diffusers.ModularPipeline.from_pretrained]]
Source: https://github.com/huggingface/diffusers/blob/vr_12652/src/diffusers/modular_pipelines/modular_pipeline.py#L1788

from_pretrained( pretrained_model_name_or_path: str | os.PathLike | None, trust_remote_code: bool | None = None, components_manager: ComponentsManager | None = None, collection: str | None = None, **kwargs )

- pretrained_model_name_or_path (str or os.PathLike, optional) --
Path to a pretrained pipeline configuration. Loading first tries modular_model_index.json, then falls back to model_index.json for compatibility with standard non-modular repositories. If pretrained_model_name_or_path does not contain any pipeline config, it is set to None during initialization.
- trust_remote_code (bool, optional) -- Whether to trust remote code when loading the pipeline. Must be set to True if you want to create pipeline blocks from the custom code in pretrained_model_name_or_path.
- components_manager (ComponentsManager, optional) -- A ComponentsManager instance for managing components across different pipelines and applying offloading strategies.
- collection (str, optional) -- Collection name for organizing components in the ComponentsManager.
Load a ModularPipeline from a Hugging Face Hub repository.
Parameters:
blocks : ModularPipelineBlocks, the blocks to be used in the pipeline
get_component_spec[[diffusers.ModularPipeline.get_component_spec]]
Returns:
A copy of the ComponentSpec object for the given component name.
load_components[[diffusers.ModularPipeline.load_components]]
Load selected components from specs.
Parameters:
names : List of component names to load. If None, loads all components whose default_creation_method is "from_pretrained". If provided as a list or string, loads only the specified components.
- **kwargs : Additional kwargs passed to from_pretrained(). Can be:
  - a single value applied to all components being loaded, e.g. torch_dtype=torch.bfloat16
  - a dict with per-component values, e.g. torch_dtype={"unet": torch.bfloat16, "default": torch.float32}
  - loading fields (e.g. pretrained_model_name_or_path, variant, revision) that, if passed, override the corresponding fields in the ComponentSpec
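The single-value vs. per-component-dict kwarg behaviour described above can be sketched as follows. The function name is hypothetical and dtypes are shown as strings so the sketch stays self-contained; only the `"default"`-key convention comes from the docs.

```python
# Illustrative resolution of a load_components-style kwarg for one component:
# a dict maps component names to values (with an optional "default" entry),
# while any other value applies to every component being loaded.
def resolve_kwarg_for(component_name: str, value):
    """Resolve a load_components-style kwarg for a single component."""
    if isinstance(value, dict):
        return value.get(component_name, value.get("default"))
    return value  # single value: applied to all components

# Usage, mirroring the torch_dtype example in the parameter description:
dtype_spec = {"unet": "bfloat16", "default": "float32"}
print(resolve_kwarg_for("unet", dtype_spec))          # -> bfloat16
print(resolve_kwarg_for("text_encoder", dtype_spec))  # -> float32
print(resolve_kwarg_for("vae", "float16"))            # -> float16
```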
register_components[[diffusers.ModularPipeline.register_components]]
Register components with their corresponding specifications.
This method is responsible for:
- Setting component objects as attributes on the loader (e.g., self.unet = unet)
- Updating the config dict, which will be saved as modular_model_index.json during save_pretrained() (only for from_pretrained components)
- Adding components to the components manager if one is attached (only for from_pretrained components)
This method is called when:
- Components are first initialized in __init__:
  - from_pretrained components are not loaded during init, so they are registered as None
  - non-from_pretrained components are created during init and registered as the objects themselves
- Components are updated with the update_components() method: e.g. loader.update_components(unet=unet) or loader.update_components(guider=guider_spec)
- (from_pretrained) Components are loaded with the load_components() method: e.g. loader.load_components(names=["unet"]) or loader.load_components() to load all default components
Notes:
- When registering None for a component, the attribute is set to None, but the spec is still synced with the config dict, which will be saved as modular_model_index.json during save_pretrained()
- component_specs are updated to match the new component outside of this method, e.g. in the update_components() method
Parameters:
- **kwargs : Keyword arguments where keys are component names and values are component objects. E.g., register_components(unet=unet_model, text_encoder=encoder_model)
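The note above (registering None still syncs the spec into the config dict) can be sketched with a minimal mock. The class and its fields are purely illustrative stand-ins for the loader's internal state, not diffusers code.

```python
# Minimal sketch of register_components(): set the component as an attribute,
# and sync its spec into the config dict even when the component is None, so
# save_pretrained() can still serialize the reference later.
class LoaderSketch:
    def __init__(self):
        self.config = {}            # stands in for modular_model_index.json content
        self._component_specs = {}  # component name -> loading spec

    def register_components(self, **kwargs):
        for name, component in kwargs.items():
            setattr(self, name, component)  # e.g. self.unet = unet (or None)
            # None still keeps the spec synced with the config dict.
            self.config[name] = self._component_specs.get(name, {})

loader = LoaderSketch()
loader._component_specs["unet"] = {"repo": "some/repo", "subfolder": "unet"}
loader.register_components(unet=None)
print(loader.unet)              # -> None
print("unet" in loader.config)  # -> True
```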
save_pretrained[[diffusers.ModularPipeline.save_pretrained]]
Save the pipeline and all its components to a directory, so that it can be re-loaded using the from_pretrained() class method.
Parameters:
save_directory (str or os.PathLike) : Directory to save the pipeline to. Will be created if it doesn't exist.
safe_serialization (bool, optional, defaults to True) : Whether to save the model using safetensors or the traditional PyTorch way with pickle.
variant (str, optional) : If specified, weights are saved in the format pytorch_model.<variant>.bin.
max_shard_size (int or str, defaults to None) : The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, it needs to be digits followed by a unit (like "5GB"). If expressed as an integer, the unit is bytes.
push_to_hub (bool, optional, defaults to False) : Whether to push the pipeline to the Hugging Face model hub after saving it.
- **kwargs : Additional keyword arguments:
  - overwrite_modular_index (bool, optional, defaults to False) : When saving a Modular Pipeline, its components in modular_model_index.json may reference repos different from the destination repo. Setting this to True updates all component references in modular_model_index.json so they point to the repo specified by repo_id.
  - repo_id (str, optional) : The repository ID to push the pipeline to. Defaults to the last component of save_directory.
  - commit_message (str, optional) : Commit message for the push to hub operation.
  - private (bool, optional) : Whether the repository should be private.
  - create_pr (bool, optional, defaults to False) : Whether to create a pull request instead of pushing directly.
  - token (str, optional) : The Hugging Face token to use for authentication.
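The max_shard_size parameter above accepts either an int (bytes) or digits plus a unit like "5GB". A sketch of that normalisation, assuming decimal units; the helper name is hypothetical and the real parsing lives inside the Hugging Face libraries:

```python
# Normalise a max_shard_size value ("5GB" or 1024) to a byte count.
UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9}

def shard_size_to_bytes(size) -> int:
    if isinstance(size, int):
        return size  # integers are already bytes
    digits = size.rstrip("KMGB")     # strip the trailing unit letters
    unit = size[len(digits):]
    return int(digits) * UNITS[unit]

print(shard_size_to_bytes("5GB"))  # -> 5000000000
print(shard_size_to_bytes(1024))   # -> 1024
```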
to[[diffusers.ModularPipeline.to]]
Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the
arguments of self.to(*args, **kwargs).
> If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, the returned pipeline is a copy of self with the desired torch.dtype and torch.device.
Here are the ways to call to():
- to(dtype, silence_dtype_warnings=False) → DiffusionPipeline : returns a pipeline with the specified dtype
- to(device, silence_dtype_warnings=False) → DiffusionPipeline : returns a pipeline with the specified device
- to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline : returns a pipeline with the specified device and dtype
Parameters:
dtype (torch.dtype, optional) : Returns a pipeline with the specified dtype
device (torch.Device, optional) : Returns a pipeline with the specified device
silence_dtype_warnings (bool, optional, defaults to False) : Whether to omit warnings if the target dtype is not compatible with the target device.
Returns:
[DiffusionPipeline](/docs/diffusers/pr_12652/en/api/pipelines/overview#diffusers.DiffusionPipeline)
The pipeline converted to the specified dtype and/or device.
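The overloads above infer what each positional argument means. A self-contained sketch of that classification, using strings in place of real torch.dtype/torch.device objects; the function name and the dtype whitelist are illustrative only:

```python
# Sketch of to()'s argument inference: each positional argument is either a
# dtype or a device, and keywords can supply either explicitly.
DTYPES = {"float16", "bfloat16", "float32"}

def infer_to_args(*args, device=None, dtype=None):
    """Classify positional args into (device, dtype), mirroring to()'s overloads."""
    for arg in args:
        if arg in DTYPES:
            dtype = arg   # the to(dtype) form
        else:
            device = arg  # the to(device) form
    return device, dtype

print(infer_to_args("cuda"))                   # -> ('cuda', None)
print(infer_to_args("float16"))                # -> (None, 'float16')
print(infer_to_args("cuda", dtype="float16"))  # -> ('cuda', 'float16')
```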
update_components[[diffusers.ModularPipeline.update_components]]
Update components and configuration values and specs after the pipeline has been instantiated.
This method allows you to:
- Replace existing components with new ones (e.g., updating self.unet or self.text_encoder)
- Update configuration values (e.g., changing the self.requires_safety_checker flag)
In addition to updating the components and configuration values as pipeline attributes, the method also updates:
- the corresponding specs in _component_specs and _config_specs
- the config dict, which will be saved as modular_model_index.json during save_pretrained()
Examples:
# Update pre-trained model
pipeline.update_components(unet=new_unet_model, text_encoder=new_text_encoder)
# Update configuration values
pipeline.update_components(requires_safety_checker=False)
Notes:
- Components loaded with AutoModel.from_pretrained() or ComponentSpec.load() will have loading specs preserved for serialization. Custom or locally loaded components without Hub references will have their modular_model_index.json entries updated automatically during save_pretrained().
- ConfigMixin objects without weights (e.g., schedulers, guiders) can be passed directly.
Parameters:
- **kwargs : Component objects or configuration values to update:
  - Component objects: Models loaded with AutoModel.from_pretrained() or ComponentSpec.load() are automatically tagged with loading information. ConfigMixin objects without weights (e.g., schedulers, guiders) can be passed directly.
  - Configuration values: Simple values to update configuration settings (e.g., requires_safety_checker=False)
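The split described above, between component objects and plain configuration values passed through the same **kwargs, can be sketched with a toy heuristic. Treating "has a config attribute" as the component test is purely illustrative and is not the actual diffusers check:

```python
# Sketch of update_components()'s dispatch: component-like objects go to the
# component specs, plain values go to the configuration specs.
class FakeComponent:
    config = {"sample_size": 64}  # marks this toy object as a "component"

def split_updates(**kwargs):
    components, config_values = {}, {}
    for name, value in kwargs.items():
        if hasattr(value, "config"):
            components[name] = value     # would update _component_specs
        else:
            config_values[name] = value  # would update _config_specs
    return components, config_values

comps, cfg = split_updates(unet=FakeComponent(), requires_safety_checker=False)
print(list(comps))  # -> ['unet']
print(cfg)          # -> {'requires_safety_checker': False}
```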