# LoRA

LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the denoiser, the text encoder, or both. The denoiser usually corresponds to a UNet ([UNet2DConditionModel](/docs/diffusers/pr_12229/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel), for example) or a Transformer ([SD3Transformer2DModel](/docs/diffusers/pr_12229/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel), for example). There are several classes for loading LoRA weights:

- `StableDiffusionLoraLoaderMixin` provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model.
- `StableDiffusionXLLoraLoaderMixin` is a [Stable Diffusion XL (SDXL)](../../api/pipelines/stable_diffusion/stable_diffusion_xl) version of the `StableDiffusionLoraLoaderMixin` class for loading and saving LoRA weights. It can only be used with the SDXL model.
- `SD3LoraLoaderMixin` provides similar functions for [Stable Diffusion 3](https://huggingface.co/blog/sd3).
- `FluxLoraLoaderMixin` provides similar functions for [Flux](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux).
- `CogVideoXLoraLoaderMixin` provides similar functions for [CogVideoX](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogvideox).
- `Mochi1LoraLoaderMixin` provides similar functions for [Mochi](https://huggingface.co/docs/diffusers/main/en/api/pipelines/mochi).
- `AuraFlowLoraLoaderMixin` provides similar functions for [AuraFlow](https://huggingface.co/fal/AuraFlow).
- `LTXVideoLoraLoaderMixin` provides similar functions for [LTX-Video](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).
- `SanaLoraLoaderMixin` provides similar functions for [Sana](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana).
- `HunyuanVideoLoraLoaderMixin` provides similar functions for [HunyuanVideo](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video).
- `Lumina2LoraLoaderMixin` provides similar functions for [Lumina2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2).
- `WanLoraLoaderMixin` provides similar functions for [Wan](https://huggingface.co/docs/diffusers/main/en/api/pipelines/wan).
- `SkyReelsV2LoraLoaderMixin` provides similar functions for [SkyReels-V2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/skyreels_v2).
- `CogView4LoraLoaderMixin` provides similar functions for [CogView4](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogview4).
- `AmusedLoraLoaderMixin` is for the [AmusedPipeline](/docs/diffusers/pr_12229/en/api/pipelines/amused#diffusers.AmusedPipeline).
- `HiDreamImageLoraLoaderMixin` provides similar functions for [HiDream Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hidream).
- `QwenImageLoraLoaderMixin` provides similar functions for [Qwen Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/qwen).
- `LoraBaseMixin` provides a base class with several utility methods to fuse, unfuse, unload LoRAs, and more.

> [!TIP]
> To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
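To see why LoRA files are so much smaller than full checkpoints, it helps to look at the underlying math: instead of updating a full weight matrix `W`, LoRA trains two low-rank factors `A` and `B` whose product is added to `W`. The following NumPy sketch is purely illustrative; the hidden size and rank are made-up numbers, not taken from any particular model:

```python
import numpy as np

d, r = 1024, 8  # hypothetical hidden size and LoRA rank
W = np.random.randn(d, d).astype(np.float32)  # frozen base weight
A = np.random.randn(r, d).astype(np.float32)  # trainable down-projection
B = np.zeros((d, r), dtype=np.float32)        # trainable up-projection (zero-initialized)

# Effective weight used at inference: base weight plus the low-rank update
scale = 1.0
W_eff = W + scale * (B @ A)

# The adapter only stores the two small factors instead of the full matrix
full_params = W.size           # 1024 * 1024 = 1,048,576
lora_params = A.size + B.size  # 2 * 8 * 1024 = 16,384, a 64x reduction
print(full_params, lora_params)
```

Because `B` starts at zero, the update is a no-op until training moves it away from zero, which is why a freshly initialized LoRA leaves the base model's behavior unchanged.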
## LoraBaseMixin[[diffusers.loaders.lora_base.LoraBaseMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.loaders.lora_base.LoraBaseMixin</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L478</source><parameters>[]</parameters></docstring>
Utility class for handling LoRAs.
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>delete_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.delete_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L838</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}]</parameters><paramsdesc>- **adapter_names** (`Union[List[str], str]`) --
The names of the adapters to delete.</paramsdesc><paramgroups>0</paramgroups></docstring>
Delete an adapter's LoRA layers from the pipeline.
<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.delete_adapters.example">
Example:
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.delete_adapters("cinematic")
```
</ExampleCodeBlock>
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>disable_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.disable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L778</source><parameters>[]</parameters></docstring>
Disables the active LoRA layers of the pipeline.
<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.disable_lora.example">
Example:
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.disable_lora()
```
</ExampleCodeBlock>
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>enable_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.enable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L808</source><parameters>[]</parameters></docstring>
Enables the active LoRA layers of the pipeline.
<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.enable_lora.example">
Example:
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.enable_lora()
```
</ExampleCodeBlock>
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>enable_lora_hotswap</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.enable_lora_hotswap</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L985</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **target_rank** (`int`) --
The highest rank among all the adapters that will be loaded.
- **check_compiled** (`str`, *optional*, defaults to `"error"`) --
How to handle a model that is already compiled. The check can return the following messages:
- "error" (default): raise an error
- "warn": issue a warning
- "ignore": do nothing</paramsdesc><paramgroups>0</paramgroups></docstring>
Enables hotswapping adapters without triggering recompilation of the model, even when the ranks of the loaded
adapters differ.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>fuse_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L536</source><parameters>[{"name": "components", "val": ": typing.List[str] = []"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** (`List[str]`) --
List of LoRA-injectable components to fuse the LoRAs into.
- **lora_scale** (`float`, defaults to 1.0) --
Controls how much to influence the outputs with the LoRA parameters.
- **safe_fusing** (`bool`, defaults to `False`) --
Whether to check fused weights for NaN values before fusing, and to skip fusing them if NaN values are found.
- **adapter_names** (`List[str]`, *optional*) --
Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused.</paramsdesc><paramgroups>0</paramgroups></docstring>
Fuses the LoRA parameters into the original parameters of the corresponding blocks.

> [!WARNING]
> This is an experimental API.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora.example">
Example:
```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.fuse_lora(lora_scale=0.7)
```
</ExampleCodeBlock>
</div>
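Conceptually, fusing folds the scaled low-rank update into the base weight, so the fused layer runs with no extra matmul at inference time, and unfusing subtracts the same delta to restore the original weight. A minimal NumPy sketch of this idea (shapes and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4
W = rng.normal(size=(d, d)).astype(np.float32)  # base weight
A = rng.normal(size=(r, d)).astype(np.float32)  # LoRA down-projection
B = rng.normal(size=(d, r)).astype(np.float32)  # LoRA up-projection

lora_scale = 0.7
delta = lora_scale * (B @ A)

# Fuse: fold the scaled LoRA update into the base weight
W_fused = W + delta

# The fused forward pass matches base weight + separate LoRA branch
x = rng.normal(size=(d,)).astype(np.float32)
y_separate = W @ x + lora_scale * (B @ (A @ x))
y_fused = W_fused @ x

# Unfuse: subtract the same delta to recover the original weight
W_unfused = W_fused - delta
```

This also shows why `lora_scale` must be applied at fuse time: once the delta is folded in, there is no separate LoRA branch left to rescale.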
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>get_active_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.get_active_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L876</source><parameters>[]</parameters></docstring>
Gets the list of the currently active adapters.
<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.get_active_adapters.example">
Example:
```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
).to("cuda")
pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipeline.get_active_adapters()
```
</ExampleCodeBlock>
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>get_list_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.get_list_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L909</source><parameters>[]</parameters></docstring>
Gets the current list of all available adapters in the pipeline.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>set_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.set_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L675</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}, {"name": "adapter_weights", "val": ": typing.Union[float, typing.Dict, typing.List[float], typing.List[typing.Dict], NoneType] = None"}]</parameters><paramsdesc>- **adapter_names** (`List[str]` or `str`) --
The names of the adapters to use.
- **adapter_weights** (`Union[List[float], float]`, *optional*) --
The adapter(s) weights to use with the UNet. If `None`, the weights are set to `1.0` for all the
adapters.</paramsdesc><paramgroups>0</paramgroups></docstring>
Set the currently active adapters for use in the pipeline.
<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.set_adapters.example">
Example:
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5])
```
</ExampleCodeBlock>
</div>
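The effect of combining adapters with per-adapter weights can be sketched as a weighted sum of low-rank updates on top of the frozen base weight. The following NumPy sketch is an illustration of that idea only, not the actual diffusers implementation; all names and shapes here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 32, 4
W = rng.normal(size=(d, d)).astype(np.float32)  # frozen base weight
# Two hypothetical adapters, each stored as (B, A) low-rank factors
adapters = {
    "cinematic": (rng.normal(size=(d, r)).astype(np.float32), rng.normal(size=(r, d)).astype(np.float32)),
    "pixel": (rng.normal(size=(d, r)).astype(np.float32), rng.normal(size=(r, d)).astype(np.float32)),
}

def effective_weight(W, adapters, weights):
    """Combine active adapters: W + sum_i w_i * (B_i @ A_i)."""
    W_eff = W.copy()
    for name, w in weights.items():
        B, A = adapters[name]
        W_eff += w * (B @ A)
    return W_eff

# Roughly what set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) expresses:
W_both = effective_weight(W, adapters, {"cinematic": 0.5, "pixel": 0.5})
# A weight of 0.0 removes that adapter's contribution entirely
W_cine_only = effective_weight(W, adapters, {"cinematic": 0.5, "pixel": 0.0})
```

Because the updates are additive, blending weights trade off the influence of each adapter rather than switching between them discretely.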
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>set_lora_device</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.set_lora_device</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L931</source><parameters>[{"name": "adapter_names", "val": ": typing.List[str]"}, {"name": "device", "val": ": typing.Union[torch.device, str, int]"}]</parameters><paramsdesc>- **adapter_names** (`List[str]`) --
List of adapters to send to the device.
- **device** (`Union[torch.device, str, int]`) --
Device to send the adapters to. Can be either a torch device, a str or an integer.</paramsdesc><paramgroups>0</paramgroups></docstring>
Moves the LoRAs listed in `adapter_names` to a target device. Useful for offloading the LoRA to the CPU in case
you want to load multiple adapters and free some GPU memory.
After offloading the LoRA adapters to CPU, as long as the rest of the model is still on GPU, the LoRA adapters
can no longer be used for inference, as that would cause a device mismatch. Remember to set the device back to
GPU before using those LoRA adapters for inference.
<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.set_lora_device.example">
```python
>>> pipe.load_lora_weights(path_1, adapter_name="adapter-1")
>>> pipe.load_lora_weights(path_2, adapter_name="adapter-2")
>>> pipe.set_adapters("adapter-1")
>>> image_1 = pipe(**kwargs)
>>> # switch to adapter-2, offload adapter-1
>>> pipe.set_lora_device(adapter_names=["adapter-1"], device="cpu")
>>> pipe.set_lora_device(adapter_names=["adapter-2"], device="cuda:0")
>>> pipe.set_adapters("adapter-2")
>>> image_2 = pipe(**kwargs)
>>> # switch back to adapter-1, offload adapter-2
>>> pipe.set_lora_device(adapter_names=["adapter-2"], device="cpu")
>>> pipe.set_lora_device(adapter_names=["adapter-1"], device="cuda:0")
>>> pipe.set_adapters("adapter-1")
>>> ...
```
</ExampleCodeBlock>
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L622</source><parameters>[{"name": "components", "val": ": typing.List[str] = []"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** (`List[str]`) -- List of LoRA-injectable components to unfuse LoRA from.
- **unfuse_unet** (`bool`, defaults to `True`) -- Whether to unfuse the UNet LoRA parameters.
- **unfuse_text_encoder** (`bool`, defaults to `True`) --
Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn't monkey-patched with the
LoRA parameters then it won't have any effect.</paramsdesc><paramgroups>0</paramgroups></docstring>
Reverses the effect of
[`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora).

> [!WARNING]
> This is an experimental API.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>unload_lora_weights</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L513</source><parameters>[]</parameters></docstring>
Unloads the LoRA parameters.
<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights.example">
Examples:
```python
>>> # Assuming `pipeline` is already loaded with the LoRA parameters.
>>> pipeline.unload_lora_weights()
>>> ...
```
</ExampleCodeBlock>
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>write_lora_layers</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.write_lora_layers</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L1008</source><parameters>[{"name": "state_dict", "val": ": typing.Dict[str, torch.Tensor]"}, {"name": "save_directory", "val": ": str"}, {"name": "is_main_process", "val": ": bool"}, {"name": "weight_name", "val": ": str"}, {"name": "save_function", "val": ": typing.Callable"}, {"name": "safe_serialization", "val": ": bool"}, {"name": "lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>
Writes the state dict of the LoRA layers (optionally with metadata) to disk.
</div></div>
## StableDiffusionLoraLoaderMixin[[diffusers.loaders.StableDiffusionLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.loaders.StableDiffusionLoraLoaderMixin</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L128</source><parameters>[]</parameters></docstring>
Load LoRA layers into Stable Diffusion [UNet2DConditionModel](/docs/diffusers/pr_12229/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) and
[`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel).
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>load_lora_into_text_encoder</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L411</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "text_encoder", "val": ""}, {"name": "prefix", "val": " = None"}, {"name": "lora_scale", "val": " = 1.0"}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) --
A standard state dict containing the lora layer parameters. The keys should be prefixed with an
additional `text_encoder` to distinguish them from the unet lora layers.
- **network_alphas** (`Dict[str, float]`) --
The value of the network alpha used for stable learning and preventing underflow. This value has the
same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **text_encoder** (`CLIPTextModel`) --
The text encoder model to load the LoRA layers into.
- **prefix** (`str`) --
Expected prefix of the `text_encoder` in the `state_dict`.
- **lora_scale** (`float`) --
How much to scale the output of the lora linear layer before it is added to the output of the regular
layer.
- **adapter_name** (`str`, *optional*) --
Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
`default_{i}` where `i` is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
weights.
- **hotswap** (`bool`, *optional*) --
See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).
- **metadata** (`dict`) --
Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>
This will load the LoRA layers specified in `state_dict` into `text_encoder`.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>load_lora_into_unet</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L350</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "unet", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) --
A standard state dict containing the lora layer parameters. The keys can either be indexed directly
into the unet or prefixed with an additional `unet`, which can be used to distinguish them from the
text encoder lora layers.
- **network_alphas** (`Dict[str, float]`) --
The value of the network alpha used for stable learning and preventing underflow. This value has the
same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **unet** (`UNet2DConditionModel`) --
The UNet model to load the LoRA layers into.
- **adapter_name** (`str`, *optional*) --
Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
`default_{i}` where `i` is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
weights.
- **hotswap** (`bool`, *optional*) --
See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).
- **metadata** (`dict`) --
Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>
This will load the LoRA layers specified in `state_dict` into `unet`.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L138</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).
- **adapter_name** (`str`, *optional*) --
Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
`default_{i}` where `i` is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
weights.
- **hotswap** (`bool`, *optional*) --
Defaults to `False`. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter
in-place. This means that, instead of loading an additional adapter, this will take the existing
adapter weights and replace them with the weights of the new adapter. This can be faster and more
memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
torch.compile, loading the new adapter does not require recompilation of the model. When using
hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.
If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
to call an additional method before loading the adapter:
```py
pipeline = ...  # load diffusers pipeline
max_rank = ...  # the highest rank among all LoRAs that you want to load
# call *before* compiling and loading the LoRA adapter
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(file_name)
# optionally compile the model now
```
Note that hotswapping adapters of the text encoder is not yet supported. There are some further
limitations to this technique, which are documented here:
https://huggingface.co/docs/peft/main/en/package_reference/hotswap
- **kwargs** (`dict`, *optional*) --
See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).</paramsdesc><paramgroups>0</paramgroups></docstring>
Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and
`self.text_encoder`.
All kwargs are forwarded to `self.lora_state_dict`.
See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details on how the state dict is
loaded.
See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details on how the state dict is
loaded into `self.unet`.
See [load_lora_into_text_encoder()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder) for more details on how the state
dict is loaded into `self.text_encoder`.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L239</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
Can be either:
- A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
the Hub.
- A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
with [ModelMixin.save_pretrained()](/docs/diffusers/pr_12229/en/api/models/overview#diffusers.ModelMixin.save_pretrained).
- A [torch state
dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist.
- **proxies** (`Dict[str, str]`, *optional*) --
A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
Whether to only load local model weights and configuration files or not. If set to `True`, the model
won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
`diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
The subfolder location of a model file within a larger model repository on the Hub or locally.
- **weight_name** (`str`, *optional*, defaults to `None`) --
Name of the serialized state dict file.
- **return_lora_metadata** (`bool`, *optional*, defaults to `False`) --
When enabled, additionally return the LoRA adapter metadata, typically found in the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>
Return state dict for lora weights and the network alphas.

> [!WARNING]
> We support loading A1111 formatted LoRA checkpoints in a limited capacity.
>
> This function is experimental and might change in the future.
</div>
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L469</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "unet_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "unet_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) -- | |
| Directory to save LoRA parameters to. Will be created if it doesn't exist. | |
| - **unet_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) -- | |
| State dict of the LoRA layers corresponding to the `unet`. | |
| - **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) -- | |
| State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text | |
| encoder LoRA state dict because it comes from 🤗 Transformers. | |
| - **is_main_process** (`bool`, *optional*, defaults to `True`) -- | |
| Whether the process calling this is the main process or not. Useful during distributed training when you | |
| need to call this function on all processes. In this case, set `is_main_process=True` only on the main | |
| process to avoid race conditions. | |
| - **save_function** (`Callable`) -- | |
| The function to use to save the state dictionary. Useful during distributed training when you need to | |
| replace `torch.save` with another method. Can be configured with the environment variable | |
| `DIFFUSERS_SAVE_MODE`. | |
| - **safe_serialization** (`bool`, *optional*, defaults to `True`) -- | |
| Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. | |
| - **unet_lora_adapter_metadata** -- | |
| LoRA adapter metadata associated with the unet to be serialized with the state dict. | |
| - **text_encoder_lora_adapter_metadata** -- | |
| LoRA adapter metadata associated with the text encoder to be serialized with the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Save the LoRA parameters corresponding to the UNet and text encoder. | |
| </div></div> | |
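A combined LoRA checkpoint stores the UNet and text encoder layers under distinct key prefixes (`unet.` and `text_encoder.`), which is how the loader tells them apart. As a rough illustrative sketch, not a diffusers API (the helper name and toy keys below are invented, and plain floats stand in for tensors), the split can be pictured as:

```python
def split_lora_state_dict(state_dict):
    """Split a flat LoRA state dict into UNet and text encoder parts by key prefix."""
    unet_sd = {
        k.removeprefix("unet."): v
        for k, v in state_dict.items()
        if k.startswith("unet.")
    }
    te_sd = {
        k.removeprefix("text_encoder."): v
        for k, v in state_dict.items()
        if k.startswith("text_encoder.")
    }
    return unet_sd, te_sd

# Toy keys shaped like real LoRA checkpoint keys; values would normally be tensors.
combined = {
    "unet.down_blocks.0.attn.to_q.lora_A.weight": 0.1,
    "text_encoder.text_model.encoder.layers.0.q_proj.lora_A.weight": 0.2,
}
unet_sd, te_sd = split_lora_state_dict(combined)
```

After the split, each sub-dict can be handed to the component-specific loading path (`load_lora_into_unet` or `load_lora_into_text_encoder`).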
| ## StableDiffusionXLLoraLoaderMixin[[diffusers.loaders.StableDiffusionXLLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.StableDiffusionXLLoraLoaderMixin</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L592</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into Stable Diffusion XL [UNet2DConditionModel](/docs/diffusers/pr_12229/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel), | |
| [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), and | |
| [`CLIPTextModelWithProjection`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L958</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['unet', 'text_encoder', 'text_encoder_2']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_text_encoder</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_into_text_encoder</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L851</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "text_encoder", "val": ""}, {"name": "prefix", "val": " = None"}, {"name": "lora_scale", "val": " = 1.0"}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) -- | |
| A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an | |
| additional `text_encoder` to distinguish them from the UNet LoRA layers. | |
| - **network_alphas** (`Dict[str, float]`) -- | |
| The value of the network alpha used for stable learning and preventing underflow. This value has the | |
| same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this | |
| link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). | |
| - **text_encoder** (`CLIPTextModel`) -- | |
| The text encoder model to load the LoRA layers into. | |
| - **prefix** (`str`) -- | |
| Expected prefix of the `text_encoder` in the `state_dict`. | |
| - **lora_scale** (`float`) -- | |
| How much to scale the output of the LoRA linear layer before it is added to the output of the regular | |
| layer. | |
| - **adapter_name** (`str`, *optional*) -- | |
| Adapter name to be used for referencing the loaded adapter model. If not specified, it will use | |
| `default_{i}` where `i` is the total number of adapters being loaded. | |
| - **low_cpu_mem_usage** (`bool`, *optional*) -- | |
| Speed up model loading by only loading the pretrained LoRA weights and not initializing the random | |
| weights. | |
| - **hotswap** (`bool`, *optional*) -- | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights). | |
| - **metadata** (`dict`) -- | |
| Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived | |
| from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| This will load the LoRA layers specified in `state_dict` into `text_encoder`. | |
| </div> | |
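The `lora_scale` parameter above weights the LoRA branch before it is added to the frozen layer's output, so `0.0` disables the adapter and `1.0` applies it fully. A minimal numeric sketch of that formula (toy numbers, no real model; the helper is illustrative, not a diffusers function):

```python
def forward_with_lora(base_out, lora_out, lora_scale=1.0):
    # Output of the regular (frozen) layer plus the scaled LoRA branch.
    return [b + lora_scale * l for b, l in zip(base_out, lora_out)]

base = [1.0, 2.0]   # stand-in for the frozen layer's output
lora = [0.5, -0.5]  # stand-in for the LoRA branch's output

full = forward_with_lora(base, lora, lora_scale=1.0)  # [1.5, 1.5]
half = forward_with_lora(base, lora, lora_scale=0.5)  # [1.25, 1.75]
off = forward_with_lora(base, lora, lora_scale=0.0)   # [1.0, 2.0]
```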
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_unet</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_into_unet</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L789</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "unet", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) -- | |
| A standard state dict containing the LoRA layer parameters. The keys can either index directly | |
| into the UNet or be prefixed with an additional `unet`, which can be used to distinguish them from | |
| the text encoder LoRA layers. | |
| - **network_alphas** (`Dict[str, float]`) -- | |
| The value of the network alpha used for stable learning and preventing underflow. This value has the | |
| same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this | |
| link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). | |
| - **unet** (`UNet2DConditionModel`) -- | |
| The UNet model to load the LoRA layers into. | |
| - **adapter_name** (`str`, *optional*) -- | |
| Adapter name to be used for referencing the loaded adapter model. If not specified, it will use | |
| `default_{i}` where `i` is the total number of adapters being loaded. | |
| - **low_cpu_mem_usage** (`bool`, *optional*) -- | |
| Speed up model loading by only loading the pretrained LoRA weights and not initializing the random | |
| weights. | |
| - **hotswap** (`bool`, *optional*) -- | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights). | |
| - **metadata** (`dict`) -- | |
| Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived | |
| from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| This will load the LoRA layers specified in `state_dict` into `unet`. | |
| </div> | |
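The `network_alphas` values follow the kohya-ss convention, where the LoRA update applied to a weight is scaled by `alpha / rank`: smaller alphas damp the update, which helps training stability and avoids underflow. A small pure-Python sketch under that assumption (the helpers and toy matrices are invented for illustration):

```python
def matmul(X, Y):
    # Plain-Python matrix multiply standing in for torch.matmul.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_delta(A, B, network_alpha, rank):
    # LoRA weight update (B @ A) scaled by alpha / rank, per the kohya-ss convention.
    scale = network_alpha / rank
    return [[scale * v for v in row] for row in matmul(B, A)]

# Rank-1 toy example: B is (2 x 1), A is (1 x 2), so B @ A is a (2 x 2) update.
A = [[1.0, 2.0]]
B = [[3.0], [4.0]]
delta = lora_delta(A, B, network_alpha=0.5, rank=1)  # [[1.5, 3.0], [2.0, 4.0]]
```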
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L603</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L677</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) -- | |
| Can be either: | |
| - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on | |
| the Hub. | |
| - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved | |
| with [ModelMixin.save_pretrained()](/docs/diffusers/pr_12229/en/api/models/overview#diffusers.ModelMixin.save_pretrained). | |
| - A [torch state | |
| dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict). | |
| - **cache_dir** (`Union[str, os.PathLike]`, *optional*) -- | |
| Path to a directory where a downloaded pretrained model configuration is cached if the standard cache | |
| is not used. | |
| - **force_download** (`bool`, *optional*, defaults to `False`) -- | |
| Whether or not to force the (re-)download of the model weights and configuration files, overriding the | |
| cached versions if they exist. | |
| - **proxies** (`Dict[str, str]`, *optional*) -- | |
| A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', | |
| 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. | |
| - **local_files_only** (`bool`, *optional*, defaults to `False`) -- | |
| Whether to only load local model weights and configuration files or not. If set to `True`, the model | |
| won't be downloaded from the Hub. | |
| - **token** (`str` or `bool`, *optional*) -- | |
| The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from | |
| `diffusers-cli login` (stored in `~/.huggingface`) is used. | |
| - **revision** (`str`, *optional*, defaults to `"main"`) -- | |
| The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier | |
| allowed by Git. | |
| - **subfolder** (`str`, *optional*, defaults to `""`) -- | |
| The subfolder location of a model file within a larger model repository on the Hub or locally. | |
| - **weight_name** (`str`, *optional*, defaults to `None`) -- | |
| Name of the serialized state dict file. | |
| - **return_lora_metadata** (`bool`, *optional*, defaults to `False`) -- | |
| When enabled, additionally return the LoRA adapter metadata, typically found in the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Return the state dict for the LoRA weights and the network alphas. | |
| > [!WARNING] | |
| > We support loading A1111 formatted LoRA checkpoints in a limited capacity. | |
| > | |
| > This function is experimental and might change in the future. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L910</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "unet_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_2_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "unet_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_2_lora_adapter_metadata", "val": " = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L977</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['unet', 'text_encoder', 'text_encoder_2']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
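`fuse_lora()` merges the scaled LoRA update directly into the base weights so no extra branch runs at inference time, and `unfuse_lora()` subtracts it back out to restore the original weights. A toy sketch of that round trip (plain float lists stand in for weight matrices, and the helpers are illustrative, not the diffusers implementation):

```python
def fuse(weight, delta, lora_scale=1.0):
    # Merge the scaled LoRA update into the base weight.
    return [w + lora_scale * d for w, d in zip(weight, delta)]

def unfuse(weight, delta, lora_scale=1.0):
    # Subtract the same scaled update to recover the base weight.
    return [w - lora_scale * d for w, d in zip(weight, delta)]

base = [1.0, -2.0, 0.5]
delta = [0.125, 0.25, -0.5]  # stands in for the B @ A LoRA update
fused = fuse(base, delta, lora_scale=2.0)      # [1.25, -1.5, -0.5]
restored = unfuse(fused, delta, lora_scale=2.0)  # back to base
```

This is also why `unfuse_lora()` needs the same components (and effectively the same scale) that were fused: it reverses the exact addition performed by `fuse_lora()`.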
| ## SD3LoraLoaderMixin[[diffusers.loaders.SD3LoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.SD3LoraLoaderMixin</name><anchor>diffusers.loaders.SD3LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L984</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [SD3Transformer2DModel](/docs/diffusers/pr_12229/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel), | |
| [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), and | |
| [`CLIPTextModelWithProjection`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection). | |
| Specific to [StableDiffusion3Pipeline](/docs/diffusers/pr_12229/en/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3Pipeline). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1256</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer', 'text_encoder', 'text_encoder_2']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_text_encoder</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.load_lora_into_text_encoder</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1147</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "text_encoder", "val": ""}, {"name": "prefix", "val": " = None"}, {"name": "lora_scale", "val": " = 1.0"}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) -- | |
| A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an | |
| additional `text_encoder` to distinguish them from the UNet LoRA layers. | |
| - **network_alphas** (`Dict[str, float]`) -- | |
| The value of the network alpha used for stable learning and preventing underflow. This value has the | |
| same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this | |
| link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). | |
| - **text_encoder** (`CLIPTextModel`) -- | |
| The text encoder model to load the LoRA layers into. | |
| - **prefix** (`str`) -- | |
| Expected prefix of the `text_encoder` in the `state_dict`. | |
| - **lora_scale** (`float`) -- | |
| How much to scale the output of the LoRA linear layer before it is added to the output of the regular | |
| layer. | |
| - **adapter_name** (`str`, *optional*) -- | |
| Adapter name to be used for referencing the loaded adapter model. If not specified, it will use | |
| `default_{i}` where `i` is the total number of adapters being loaded. | |
| - **low_cpu_mem_usage** (`bool`, *optional*) -- | |
| Speed up model loading by only loading the pretrained LoRA weights and not initializing the random | |
| weights. | |
| - **hotswap** (`bool`, *optional*) -- | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights). | |
| - **metadata** (`dict`) -- | |
| Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived | |
| from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| This will load the LoRA layers specified in `state_dict` into `text_encoder`. | |
| </div> | |
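When `adapter_name` is omitted, the loaders fall back to a generated name of the form `default_{i}`, where `i` reflects how many adapters are already loaded. A tiny sketch of that naming rule (the helper is illustrative, not a diffusers function, and assumes the simplest interpretation of the documented behavior):

```python
def default_adapter_name(existing_adapters):
    # Generate default_{i}, where i is the number of adapters already loaded.
    return f"default_{len(existing_adapters)}"

loaded = []
for _ in range(2):
    loaded.append(default_adapter_name(loaded))
# loaded == ["default_0", "default_1"]
```

Passing an explicit `adapter_name` instead is what makes it possible to reference, enable, or disable a specific adapter later.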
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1116</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1051</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": " = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L997</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1206</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_2_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_2_lora_adapter_metadata", "val": " = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1276</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer', 'text_encoder', 'text_encoder_2']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## FluxLoraLoaderMixin[[diffusers.loaders.FluxLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.FluxLoraLoaderMixin</name><anchor>diffusers.loaders.FluxLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1483</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [FluxTransformer2DModel](/docs/diffusers/pr_12229/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel) and | |
| [`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel). | |
| Specific to [FluxPipeline](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1955</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_text_encoder</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.load_lora_into_text_encoder</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1832</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "text_encoder", "val": ""}, {"name": "prefix", "val": " = None"}, {"name": "lora_scale", "val": " = 1.0"}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) -- | |
| A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an | |
| additional `text_encoder` to distinguish them from the transformer LoRA layers. | |
| - **network_alphas** (`Dict[str, float]`) -- | |
| The value of the network alpha used for stable learning and preventing underflow. This value has the | |
| same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this | |
| link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning). | |
| - **text_encoder** (`CLIPTextModel`) -- | |
| The text encoder model to load the LoRA layers into. | |
| - **prefix** (`str`) -- | |
| Expected prefix of the `text_encoder` in the `state_dict`. | |
| - **lora_scale** (`float`) -- | |
| How much to scale the output of the LoRA linear layer before it is added to the output of the regular | |
| layer. | |
| - **adapter_name** (`str`, *optional*) -- | |
| Adapter name to be used for referencing the loaded adapter model. If not specified, it will use | |
| `default_{i}` where `i` is the total number of adapters being loaded. | |
| - **low_cpu_mem_usage** (`bool`, *optional*) -- | |
| Speed up model loading by only loading the pretrained LoRA weights and not initializing the random | |
| weights. | |
| - **hotswap** (`bool`, *optional*) -- | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights). | |
| - **metadata** (`dict`) -- | |
| Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived | |
| from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| This will load the LoRA layers specified in `state_dict` into `text_encoder`. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1746</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "metadata", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1621</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) -- | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict). | |
| - **adapter_name** (`str`, *optional*) -- | |
| Adapter name to be used for referencing the loaded adapter model. If not specified, it will use | |
| `default_{i}` where `i` is the total number of adapters being loaded. | |
| - **low_cpu_mem_usage** (`bool`, *optional*) -- | |
| Speed up model loading by only loading the pretrained LoRA weights and not initializing the random | |
| weights. | |
| - **hotswap** (`bool`, *optional*) -- | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights). | |
| - **kwargs** (`dict`, *optional*) -- | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.transformer` and | |
| `self.text_encoder`. | |
| All kwargs are forwarded to `self.lora_state_dict`. | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details on how the state dict is | |
| loaded. | |
| See `~loaders.FluxLoraLoaderMixin.load_lora_into_transformer` for more details on how the state | |
| dict is loaded into `self.transformer`. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1496</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "return_alphas", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1891</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) -- | |
| Directory to save LoRA parameters to. Will be created if it doesn't exist. | |
| - **transformer_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) -- | |
| State dict of the LoRA layers corresponding to the `transformer`. | |
| - **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) -- | |
| State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text | |
| encoder LoRA state dict because it comes from 🤗 Transformers. | |
| - **is_main_process** (`bool`, *optional*, defaults to `True`) -- | |
| Whether the process calling this is the main process. Useful during distributed training, where this | |
| function must be called on all processes. In that case, set `is_main_process=True` only on the main | |
| process to avoid race conditions. | |
| - **save_function** (`Callable`) -- | |
| The function to use to save the state dictionary. Useful during distributed training when you need to | |
| replace `torch.save` with another method. Can be configured with the environment variable | |
| `DIFFUSERS_SAVE_MODE`. | |
| - **safe_serialization** (`bool`, *optional*, defaults to `True`) -- | |
| Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. | |
| - **transformer_lora_adapter_metadata** -- | |
| LoRA adapter metadata associated with the transformer to be serialized with the state dict. | |
| - **text_encoder_lora_adapter_metadata** -- | |
| LoRA adapter metadata associated with the text encoder to be serialized with the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Save the LoRA parameters corresponding to the transformer and text encoder. | |
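`transformer_lora_layers` and `text_encoder_lora_layers` are plain state dicts mapping module paths to LoRA tensors. A minimal sketch of what such a dict looks like, using toy sizes and an illustrative key (the exact Flux module paths differ):

```python
import torch

rank = 4
hidden = 64  # toy size for illustration; the real Flux hidden dim is larger

# Hypothetical LoRA pair for a single projection layer. The
# "<module path>.lora_A.weight" / "<module path>.lora_B.weight" key shape
# mirrors the PEFT convention; the module path itself is made up here.
transformer_lora_layers = {
    "transformer_blocks.0.attn.to_q.lora_A.weight": torch.zeros(rank, hidden),
    "transformer_blocks.0.attn.to_q.lora_B.weight": torch.zeros(hidden, rank),
}
```

A dict shaped like this would then be passed as `transformer_lora_layers` to `save_lora_weights`, along with a `save_directory`.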
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1987</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer', 'text_encoder']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** (`List[str]`) -- List of LoRA-injectable components to unfuse LoRA from.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Reverses the effect of | |
| [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). | |
| > [!WARNING] | |
| > This is an experimental API. | |
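Conceptually, fusing folds the low-rank update into the base weight (`W + scale * B @ A`) and unfusing subtracts it back out. A numeric sketch of that round trip in plain torch (not the diffusers internals, which handle this per injected module):

```python
import torch

torch.manual_seed(0)
W = torch.randn(8, 8)        # base layer weight
A = torch.randn(2, 8) * 0.1  # LoRA down-projection (rank 2)
B = torch.randn(8, 2) * 0.1  # LoRA up-projection
scale = 1.0

fused = W + scale * (B @ A)        # what fusing does to each layer
unfused = fused - scale * (B @ A)  # unfusing reverses it

# the round trip recovers the original weight (up to float error)
assert torch.allclose(unfused, W, atol=1e-6)
```

This is why `unfuse_lora` only makes sense after a matching `fuse_lora` call with the same scale and adapters.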
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unload_lora_weights</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.unload_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2004</source><parameters>[{"name": "reset_to_overwritten_params", "val": " = False"}]</parameters><paramsdesc>- **reset_to_overwritten_params** (`bool`, defaults to `False`) -- Whether to reset the LoRA-loaded modules | |
| to their original params. Refer to the [Flux | |
| documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) to learn more.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Unloads the LoRA parameters. | |
| <ExampleCodeBlock anchor="diffusers.loaders.FluxLoraLoaderMixin.unload_lora_weights.example"> | |
| Examples: | |
| ```python | |
| >>> # Assuming `pipeline` is already loaded with the LoRA parameters. | |
| >>> pipeline.unload_lora_weights() | |
| >>> ... | |
| ``` | |
| </ExampleCodeBlock> | |
| </div></div> | |
| ## CogVideoXLoraLoaderMixin[[diffusers.loaders.CogVideoXLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.CogVideoXLoraLoaderMixin</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2436</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [CogVideoXTransformer3DModel](/docs/diffusers/pr_12229/en/api/models/cogvideox_transformer3d#diffusers.CogVideoXTransformer3DModel). Specific to [CogVideoXPipeline](/docs/diffusers/pr_12229/en/api/pipelines/cogvideox#diffusers.CogVideoXPipeline). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2606</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2540</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2499</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2444</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2572</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2625</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## Mochi1LoraLoaderMixin[[diffusers.loaders.Mochi1LoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.Mochi1LoraLoaderMixin</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2632</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [MochiTransformer3DModel](/docs/diffusers/pr_12229/en/api/models/mochi_transformer3d#diffusers.MochiTransformer3DModel). Specific to [MochiPipeline](/docs/diffusers/pr_12229/en/api/pipelines/mochi#diffusers.MochiPipeline). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2805</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2737</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2696</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2640</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2769</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2825</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## AuraFlowLoraLoaderMixin[[diffusers.loaders.AuraFlowLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.AuraFlowLoraLoaderMixin</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1283</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [AuraFlowTransformer2DModel](/docs/diffusers/pr_12229/en/api/models/aura_flow_transformer2d#diffusers.AuraFlowTransformer2DModel). Specific to [AuraFlowPipeline](/docs/diffusers/pr_12229/en/api/pipelines/aura_flow#diffusers.AuraFlowPipeline). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1456</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1388</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1347</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1291</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1420</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L1476</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer', 'text_encoder']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## LTXVideoLoraLoaderMixin[[diffusers.loaders.LTXVideoLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.LTXVideoLoraLoaderMixin</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2832</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [LTXVideoTransformer3DModel](/docs/diffusers/pr_12229/en/api/models/ltx_video_transformer3d#diffusers.LTXVideoTransformer3DModel). Specific to [LTXPipeline](/docs/diffusers/pr_12229/en/api/pipelines/ltx_video#diffusers.LTXPipeline). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3008</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2940</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2899</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2840</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2972</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3028</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## SanaLoraLoaderMixin[[diffusers.loaders.SanaLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.SanaLoraLoaderMixin</name><anchor>diffusers.loaders.SanaLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3035</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [SanaTransformer2DModel](/docs/diffusers/pr_12229/en/api/models/sana_transformer2d#diffusers.SanaTransformer2DModel). Specific to [SanaPipeline](/docs/diffusers/pr_12229/en/api/pipelines/sana#diffusers.SanaPipeline). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3208</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3140</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3099</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3043</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3172</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3228</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## HunyuanVideoLoraLoaderMixin[[diffusers.loaders.HunyuanVideoLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.HunyuanVideoLoraLoaderMixin</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3235</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [HunyuanVideoTransformer3DModel](/docs/diffusers/pr_12229/en/api/models/hunyuan_video_transformer_3d#diffusers.HunyuanVideoTransformer3DModel). Specific to [HunyuanVideoPipeline](/docs/diffusers/pr_12229/en/api/pipelines/hunyuan_video#diffusers.HunyuanVideoPipeline). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3411</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3343</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3302</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3243</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3375</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3431</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## Lumina2LoraLoaderMixin[[diffusers.loaders.Lumina2LoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.Lumina2LoraLoaderMixin</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3438</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [Lumina2Transformer2DModel](/docs/diffusers/pr_12229/en/api/models/lumina2_transformer2d#diffusers.Lumina2Transformer2DModel). Specific to `Lumina2Text2ImgPipeline`. | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3615</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3547</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3506</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3446</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3579</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3635</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## CogView4LoraLoaderMixin[[diffusers.loaders.CogView4LoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.CogView4LoraLoaderMixin</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4193</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [CogView4Transformer2DModel](/docs/diffusers/pr_12229/en/api/models/cogview4_transformer2d#diffusers.CogView4Transformer2DModel). Specific to [CogView4Pipeline](/docs/diffusers/pr_12229/en/api/pipelines/cogview4#diffusers.CogView4Pipeline). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4366</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4298</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4257</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4201</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4330</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4386</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## WanLoraLoaderMixin[[diffusers.loaders.WanLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.WanLoraLoaderMixin</name><anchor>diffusers.loaders.WanLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3642</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [WanTransformer3DModel](/docs/diffusers/pr_12229/en/api/models/wan_transformer_3d#diffusers.WanTransformer3DModel). Specific to [WanPipeline](/docs/diffusers/pr_12229/en/api/pipelines/wan#diffusers.WanPipeline) and `WanImageToVideoPipeline`. | |
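Numerically, `fuse_lora()` merges the low-rank update into the base weights so inference needs no extra matmul, and `unfuse_lora()` subtracts it back out. A minimal sketch of that round trip, assuming the standard LoRA parameterization (shapes chosen only for illustration):

```python
import torch

torch.manual_seed(0)
d_out, d_in, rank = 8, 8, 2
W = torch.randn(d_out, d_in)   # base linear weight
A = torch.randn(rank, d_in)    # lora_A (down projection)
B = torch.randn(d_out, rank)   # lora_B (up projection)
lora_scale = 0.5

# fuse_lora: fold the scaled low-rank update into the weight in place.
W_fused = W + lora_scale * (B @ A)

# unfuse_lora: subtract the same update to recover the original weight.
W_restored = W_fused - lora_scale * (B @ A)
assert torch.allclose(W_restored, W, atol=1e-6)
```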
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.WanLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3889</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.WanLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3821</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.WanLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3756</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.WanLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3650</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.WanLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3853</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.WanLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3909</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## SkyReelsV2LoraLoaderMixin[[diffusers.loaders.SkyReelsV2LoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.SkyReelsV2LoraLoaderMixin</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3916</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [SkyReelsV2Transformer3DModel](/docs/diffusers/pr_12229/en/api/models/skyreels_v2_transformer_3d#diffusers.SkyReelsV2Transformer3DModel). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4166</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4098</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4033</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L3924</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4130</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4186</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## AmusedLoraLoaderMixin[[diffusers.loaders.AmusedLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.AmusedLoraLoaderMixin</name><anchor>diffusers.loaders.AmusedLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2284</source><parameters>[]</parameters></docstring> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.AmusedLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2289</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "metadata", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.AmusedLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L2381</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) -- | |
| Directory to save LoRA parameters to. Will be created if it doesn't exist. | |
| - **transformer_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) -- | |
| State dict of the LoRA layers corresponding to the `transformer`. | |
| - **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) -- | |
| State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text | |
| encoder LoRA state dict because it comes from 🤗 Transformers. | |
| - **is_main_process** (`bool`, *optional*, defaults to `True`) -- | |
| Whether the process calling this is the main process. Useful during distributed training, when this | |
| function must be called on all processes. In that case, set `is_main_process=True` only on the main | |
| process to avoid race conditions. | |
| - **save_function** (`Callable`) -- | |
| The function to use to save the state dictionary. Useful during distributed training when you need to | |
| replace `torch.save` with another method. Can be configured with the environment variable | |
| `DIFFUSERS_SAVE_MODE`. | |
| - **safe_serialization** (`bool`, *optional*, defaults to `True`) -- | |
| Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Save the LoRA parameters corresponding to the transformer and text encoder. | |
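The `is_main_process` guard described above can be sketched as follows. The `RANK` environment variable and the JSON stand-in for tensors are assumptions for illustration; real launchers expose rank differently and diffusers serializes with `safetensors` or `torch.save`:

```python
import json
import os
import tempfile

# Only the main process writes the checkpoint, so concurrent ranks
# don't race on the same file. RANK here is an illustrative stand-in
# for whatever your distributed launcher provides.
rank = int(os.environ.get("RANK", "0"))
is_main_process = rank == 0

lora_layers = {"to_q.lora_A.weight": [[0.0, 0.0]]}  # stand-in for real tensors
with tempfile.TemporaryDirectory() as save_directory:
    path = os.path.join(save_directory, "lora_weights.json")
    if is_main_process:
        with open(path, "w") as f:
            json.dump(lora_layers, f)
    wrote_file = os.path.exists(path)
```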
| </div></div> | |
| ## HiDreamImageLoraLoaderMixin[[diffusers.loaders.HiDreamImageLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.HiDreamImageLoraLoaderMixin</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4393</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [HiDreamImageTransformer2DModel](/docs/diffusers/pr_12229/en/api/models/hidream_image_transformer#diffusers.HiDreamImageTransformer2DModel). Specific to [HiDreamImagePipeline](/docs/diffusers/pr_12229/en/api/pipelines/hidream#diffusers.HiDreamImagePipeline). | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4569</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4501</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4460</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4401</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4533</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4589</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## QwenImageLoraLoaderMixin[[diffusers.loaders.QwenImageLoraLoaderMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.QwenImageLoraLoaderMixin</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4596</source><parameters>[]</parameters></docstring> | |
| Load LoRA layers into [QwenImageTransformer2DModel](/docs/diffusers/pr_12229/en/api/models/qwenimage_transformer2d#diffusers.QwenImageTransformer2DModel). Specific to [QwenImagePipeline](/docs/diffusers/pr_12229/en/api/pipelines/qwenimage#diffusers.QwenImagePipeline). | |
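| A minimal loading sketch (the base checkpoint id matches the pipeline's documentation, but the LoRA repository id below is a hypothetical placeholder): | |
| ```py | |
| import torch | |
| from diffusers import QwenImagePipeline | |
| pipeline = QwenImagePipeline.from_pretrained( | |
|     "Qwen/Qwen-Image", torch_dtype=torch.bfloat16 | |
| ).to("cuda") | |
| # Load LoRA weights into the transformer; the repo id is illustrative only. | |
| pipeline.load_lora_weights("your-username/qwen-image-lora", adapter_name="style") | |
| ``` | |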
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4774</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `fuse_lora()` for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4706</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring> | |
| See [load_lora_into_unet()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>load_lora_weights</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4665</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [load_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>lora_state_dict</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4604</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See [lora_state_dict()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>save_lora_weights</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4738</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| See [save_lora_weights()](/docs/diffusers/pr_12229/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information. | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_pipeline.py#L4794</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring> | |
| See `unfuse_lora()` for more details. | |
| </div></div> | |
| ## LoraBaseMixin[[diffusers.loaders.lora_base.LoraBaseMixin]] | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>class diffusers.loaders.lora_base.LoraBaseMixin</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L478</source><parameters>[]</parameters></docstring> | |
| Utility class for handling LoRAs. | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>delete_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.delete_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L838</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}]</parameters><paramsdesc>- **adapter_names** (`Union[List[str], str]`) -- | |
| The names of the adapters to delete.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Delete an adapter's LoRA layers from the pipeline. | |
| <ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.delete_adapters.example"> | |
| Example: | |
| ```py | |
| from diffusers import AutoPipelineForText2Image | |
| import torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained( | |
| "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 | |
| ).to("cuda") | |
| pipeline.load_lora_weights( | |
| "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic" | |
| ) | |
| pipeline.delete_adapters("cinematic") | |
| ``` | |
| </ExampleCodeBlock> | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>disable_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.disable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L778</source><parameters>[]</parameters></docstring> | |
| Disables the active LoRA layers of the pipeline. | |
| <ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.disable_lora.example"> | |
| Example: | |
| ```py | |
| from diffusers import AutoPipelineForText2Image | |
| import torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained( | |
| "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 | |
| ).to("cuda") | |
| pipeline.load_lora_weights( | |
| "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" | |
| ) | |
| pipeline.disable_lora() | |
| ``` | |
| </ExampleCodeBlock> | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>enable_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.enable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L808</source><parameters>[]</parameters></docstring> | |
| Enables the active LoRA layers of the pipeline. | |
| <ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.enable_lora.example"> | |
| Example: | |
| ```py | |
| from diffusers import AutoPipelineForText2Image | |
| import torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained( | |
| "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 | |
| ).to("cuda") | |
| pipeline.load_lora_weights( | |
| "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" | |
| ) | |
| pipeline.enable_lora() | |
| ``` | |
| </ExampleCodeBlock> | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>enable_lora_hotswap</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.enable_lora_hotswap</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L985</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **target_rank** (`int`) -- | |
| The highest rank among all the adapters that will be loaded. | |
| - **check_compiled** (`str`, *optional*, defaults to `"error"`) -- | |
| How to handle a model that is already compiled. The following options are available: | |
| - "error" (default): raise an error | |
| - "warn": issue a warning | |
| - "ignore": do nothing</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Hotswap adapters without triggering recompilation of the model, even when the ranks of the loaded adapters | |
| differ. | |
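| A sketch of the intended call order when combining hotswapping with `torch.compile` (the second LoRA repository id is a hypothetical placeholder): | |
| ```py | |
| import torch | |
| from diffusers import DiffusionPipeline | |
| pipeline = DiffusionPipeline.from_pretrained( | |
|     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 | |
| ).to("cuda") | |
| # Call before loading the first adapter and compiling; target_rank should be | |
| # the highest rank among all adapters you plan to load. | |
| pipeline.enable_lora_hotswap(target_rank=64) | |
| pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") | |
| pipeline.unet = torch.compile(pipeline.unet) | |
| # Later, replace the adapter in place without triggering recompilation. | |
| pipeline.load_lora_weights("your-username/other-lora", adapter_name="pixel", hotswap=True) | |
| ``` | |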
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>fuse_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L536</source><parameters>[{"name": "components", "val": ": typing.List[str] = []"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** (`List[str]`) -- List of LoRA-injectable components to fuse the LoRAs into. | |
| - **lora_scale** (`float`, defaults to 1.0) -- | |
| Controls how much to influence the outputs with the LoRA parameters. | |
| - **safe_fusing** (`bool`, defaults to `False`) -- | |
| Whether to check the fused weights for NaN values before fusing, and to skip fusing any weights that contain NaN values. | |
| - **adapter_names** (`List[str]`, *optional*) -- | |
| Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Fuses the LoRA parameters into the original parameters of the corresponding blocks. | |
| > [!WARNING] | |
| > This is an experimental API. | |
| <ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora.example"> | |
| Example: | |
| ```py | |
| from diffusers import DiffusionPipeline | |
| import torch | |
| pipeline = DiffusionPipeline.from_pretrained( | |
| "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 | |
| ).to("cuda") | |
| pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") | |
| pipeline.fuse_lora(lora_scale=0.7) | |
| ``` | |
| </ExampleCodeBlock> | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>get_active_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.get_active_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L876</source><parameters>[]</parameters></docstring> | |
| Gets the list of the currently active adapters. | |
| <ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.get_active_adapters.example"> | |
| Example: | |
| ```python | |
| from diffusers import DiffusionPipeline | |
| pipeline = DiffusionPipeline.from_pretrained( | |
| "stabilityai/stable-diffusion-xl-base-1.0", | |
| ).to("cuda") | |
| pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") | |
| pipeline.get_active_adapters() | |
| ``` | |
| </ExampleCodeBlock> | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>get_list_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.get_list_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L909</source><parameters>[]</parameters></docstring> | |
| Gets the current list of all available adapters in the pipeline. | |
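| For example, after loading two adapters into an SDXL pipeline, the method returns a mapping from component name to the adapter names registered on it (the exact components depend on where each LoRA was injected): | |
| ```py | |
| from diffusers import DiffusionPipeline | |
| import torch | |
| pipeline = DiffusionPipeline.from_pretrained( | |
|     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 | |
| ).to("cuda") | |
| pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") | |
| pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") | |
| pipeline.get_list_adapters() | |
| # e.g. {"unet": ["toy", "pixel"], ...} | |
| ``` | |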
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>set_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.set_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L675</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}, {"name": "adapter_weights", "val": ": typing.Union[float, typing.Dict, typing.List[float], typing.List[typing.Dict], NoneType] = None"}]</parameters><paramsdesc>- **adapter_names** (`List[str]` or `str`) -- | |
| The names of the adapters to use. | |
| - **adapter_weights** (`Union[List[float], float]`, *optional*) -- | |
| The weight(s) to apply to each adapter. If `None`, the weights are set to `1.0` for all the | |
| adapters.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Set the currently active adapters for use in the pipeline. | |
| <ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.set_adapters.example"> | |
| Example: | |
| ```py | |
| from diffusers import AutoPipelineForText2Image | |
| import torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained( | |
| "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 | |
| ).to("cuda") | |
| pipeline.load_lora_weights( | |
| "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" | |
| ) | |
| pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") | |
| pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) | |
| ``` | |
| </ExampleCodeBlock> | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>set_lora_device</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.set_lora_device</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L931</source><parameters>[{"name": "adapter_names", "val": ": typing.List[str]"}, {"name": "device", "val": ": typing.Union[torch.device, str, int]"}]</parameters><paramsdesc>- **adapter_names** (`List[str]`) -- | |
| List of adapters to move to the target device. | |
| - **device** (`Union[torch.device, str, int]`) -- | |
| Device to send the adapters to. Can be either a torch device, a str or an integer.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Moves the LoRAs listed in `adapter_names` to a target device. Useful for offloading LoRAs to the CPU when you | |
| want to load multiple adapters and free some GPU memory. | |
| While the rest of the model remains on the GPU, LoRA adapters offloaded to the CPU cannot be used for | |
| inference, as that would cause a device mismatch. Remember to move them back to the GPU before running | |
| inference with them. | |
| <ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.set_lora_device.example"> | |
| ```python | |
| >>> pipe.load_lora_weights(path_1, adapter_name="adapter-1") | |
| >>> pipe.load_lora_weights(path_2, adapter_name="adapter-2") | |
| >>> pipe.set_adapters("adapter-1") | |
| >>> image_1 = pipe(**kwargs) | |
| >>> # switch to adapter-2, offload adapter-1 | |
| >>> pipe.set_lora_device(adapter_names=["adapter-1"], device="cpu") | |
| >>> pipe.set_lora_device(adapter_names=["adapter-2"], device="cuda:0") | |
| >>> pipe.set_adapters("adapter-2") | |
| >>> image_2 = pipe(**kwargs) | |
| >>> # switch back to adapter-1, offload adapter-2 | |
| >>> pipe.set_lora_device(adapter_names=["adapter-2"], device="cpu") | |
| >>> pipe.set_lora_device(adapter_names=["adapter-1"], device="cuda:0") | |
| >>> pipe.set_adapters("adapter-1") | |
| >>> ... | |
| ``` | |
| </ExampleCodeBlock> | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unfuse_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L622</source><parameters>[{"name": "components", "val": ": typing.List[str] = []"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** (`List[str]`) -- List of LoRA-injectable components to unfuse LoRA from. | |
| - **unfuse_unet** (`bool`, defaults to `True`) -- Whether to unfuse the UNet LoRA parameters. | |
| - **unfuse_text_encoder** (`bool`, defaults to `True`) -- | |
| Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn't monkey-patched with the | |
| LoRA parameters, this argument has no effect.</paramsdesc><paramgroups>0</paramgroups></docstring> | |
| Reverses the effect of | |
| [`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora). | |
| > [!WARNING] | |
| > This is an experimental API. | |
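| A sketch of the fuse/unfuse round trip: | |
| ```py | |
| from diffusers import DiffusionPipeline | |
| import torch | |
| pipeline = DiffusionPipeline.from_pretrained( | |
|     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 | |
| ).to("cuda") | |
| pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") | |
| pipeline.fuse_lora(lora_scale=0.7)  # bake the LoRA into the base weights | |
| # ... run inference with the fused weights ... | |
| pipeline.unfuse_lora()  # restore the original base weights | |
| ``` | |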
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>unload_lora_weights</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L513</source><parameters>[]</parameters></docstring> | |
| Unloads the LoRA parameters. | |
| <ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights.example"> | |
| Examples: | |
| ```python | |
| >>> # Assuming `pipeline` is already loaded with the LoRA parameters. | |
| >>> pipeline.unload_lora_weights() | |
| >>> ... | |
| ``` | |
| </ExampleCodeBlock> | |
| </div> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <docstring><name>write_lora_layers</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.write_lora_layers</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/loaders/lora_base.py#L1008</source><parameters>[{"name": "state_dict", "val": ": typing.Dict[str, torch.Tensor]"}, {"name": "save_directory", "val": ": str"}, {"name": "is_main_process", "val": ": bool"}, {"name": "weight_name", "val": ": str"}, {"name": "save_function", "val": ": typing.Callable"}, {"name": "safe_serialization", "val": ": bool"}, {"name": "lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring> | |
| Writes the state dict of the LoRA layers (optionally with metadata) to disk. | |
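| A minimal sketch, assuming the method can be called as a static utility with an already-assembled state dict (the key names below are illustrative, not a required naming scheme): | |
| ```py | |
| import torch | |
| from diffusers.loaders.lora_base import LoraBaseMixin | |
| # Toy LoRA state dict with low-rank A/B matrices for a single attention projection. | |
| state_dict = { | |
|     "transformer.blocks.0.attn.to_q.lora_A.weight": torch.zeros(4, 64), | |
|     "transformer.blocks.0.attn.to_q.lora_B.weight": torch.zeros(64, 4), | |
| } | |
| LoraBaseMixin.write_lora_layers( | |
|     state_dict=state_dict, | |
|     save_directory="./my_lora", | |
|     is_main_process=True, | |
|     weight_name="pytorch_lora_weights.safetensors", | |
|     save_function=None, | |
|     safe_serialization=True, | |
| ) | |
| ``` | |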
| </div></div> | |
| <EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/loaders/lora.md" /> |