<meta charset="utf-8" /><meta name="hf:doc:metadata" content="{"title":"Merge LoRAs","local":"merge-loras","sections":[{"title":"set_adapters","local":"setadapters","sections":[],"depth":2},{"title":"add_weighted_adapter","local":"addweightedadapter","sections":[],"depth":2},{"title":"fuse_lora","local":"fuselora","sections":[{"title":"torch.compile","local":"torchcompile","sections":[],"depth":3}],"depth":2},{"title":"Next steps","local":"next-steps","sections":[],"depth":2}],"depth":1}">
<h1 class="relative group"><a id="merge-loras" class="header-link" href="#merge-loras"></a> <span>Merge LoRAs</span></h1> <p data-svelte-h="svelte-cyt3kz">It can be fun and creative to use multiple <a href="https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora" rel="nofollow">LoRAs</a> together to generate something entirely new and unique. 
This works by merging multiple LoRA weights to produce images that blend different styles. Diffusers provides a few methods to merge LoRAs depending on <em>how</em> you want to merge their weights, which can affect image quality.</p> <p data-svelte-h="svelte-1kqk8ok">This guide will show you how to merge LoRAs using the <a href="/docs/diffusers/pr_10083/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.set_adapters">set_adapters()</a> and <a href="https://huggingface.co/docs/peft/package_reference/lora#peft.LoraModel.add_weighted_adapter" rel="nofollow">add_weighted_adapter</a> methods. To improve inference speed and reduce the memory usage of merged LoRAs, you’ll also see how to use the <code>fuse_lora()</code> method to fuse the LoRA weights with the original weights of the underlying model.</p> <p data-svelte-h="svelte-8s8xsb">For this guide, load a Stable Diffusion XL (SDXL) checkpoint and the <a href="https://huggingface.co/ostris/ikea-instructions-lora-sdxl" rel="nofollow">ostris/ikea-instructions-lora-sdxl</a> and <a href="https://huggingface.co/lordjia/by-feng-zikai" rel="nofollow">lordjia/by-feng-zikai</a> LoRAs with the <a href="/docs/diffusers/pr_10083/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights">load_lora_weights()</a> method. 
You’ll need to assign each LoRA an <code>adapter_name</code> to combine them later.</p> <div class="code-block relative"><pre class="">from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea")
pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng")</pre></div> <h2 class="relative group"><a id="setadapters" class="header-link" href="#setadapters"></a> <span>set_adapters</span></h2> <p data-svelte-h="svelte-xr7oix">The <a href="/docs/diffusers/pr_10083/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.set_adapters">set_adapters()</a> method merges LoRA adapters by concatenating their weighted matrices. Use the adapter names to specify which LoRAs to merge, and the <code>adapter_weights</code> parameter to control the scaling of each LoRA. For example, if <code>adapter_weights=[0.5, 0.5]</code>, the merged LoRA output is an average of both LoRAs. Try adjusting the adapter weights to see how they affect the generated image!</p> <div class="code-block relative"><pre class="">pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8])
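# The weights above scale each LoRA's contribution before merging; for example,
# rerunning with adapter_weights=[1.0, 0.5] would emphasize the "ikea" style
# over the "feng" style (illustrative values, not from the original guide).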
generator = torch.manual_seed(0)
prompt = "A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai"
image = pipeline(prompt, generator=generator, cross_attention_kwargs={"scale": 1.0}).images[0]
image</pre></div> <div class="flex justify-center" data-svelte-h="svelte-rp1f80"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lora_merge_set_adapters.png"></div> <h2 class="relative group"><a id="addweightedadapter" class="header-link" href="#addweightedadapter"></a> <span>add_weighted_adapter</span></h2> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p data-svelte-h="svelte-ukm7y9">This is an experimental method that adds PEFT’s <a href="https://huggingface.co/docs/peft/package_reference/lora#peft.LoraModel.add_weighted_adapter" rel="nofollow">add_weighted_adapter</a> method to Diffusers to enable more efficient merging methods. 
Check out this <a href="https://github.com/huggingface/diffusers/issues/6892" rel="nofollow">issue</a> if you’re interested in learning more about the motivation and design behind this integration.</p></div> <p data-svelte-h="svelte-159dnvr">The <a href="https://huggingface.co/docs/peft/package_reference/lora#peft.LoraModel.add_weighted_adapter" rel="nofollow">add_weighted_adapter</a> method provides access to more efficient merging methods such as <a href="https://huggingface.co/docs/peft/developer_guides/model_merging" rel="nofollow">TIES and DARE</a>. To use these merging methods, make sure you have the latest stable versions of Diffusers and PEFT installed.</p> <div class="code-block relative"><pre class="">pip install -U diffusers peft</pre></div> <p data-svelte-h="svelte-1pepmug">There are three steps to merge LoRAs with the <a href="https://huggingface.co/docs/peft/package_reference/lora#peft.LoraModel.add_weighted_adapter" rel="nofollow">add_weighted_adapter</a> method:</p> <ol data-svelte-h="svelte-1n5q3v7"><li>Create a <a href="https://huggingface.co/docs/peft/package_reference/peft_model#peft.PeftModel" rel="nofollow">PeftModel</a> from the underlying model and LoRA checkpoint.</li> <li>Load a base UNet model and the LoRA adapters.</li> <li>Merge the adapters using the <a href="https://huggingface.co/docs/peft/package_reference/lora#peft.LoraModel.add_weighted_adapter" rel="nofollow">add_weighted_adapter</a> method and the merging method of your choice.</li></ol> <p data-svelte-h="svelte-qks3mn">Let’s dive deeper into what these steps entail.</p> <ol data-svelte-h="svelte-143mnwl"><li>Load a UNet that corresponds to the UNet in the LoRA checkpoint. In this case, both LoRAs use the SDXL UNet as their base model.</li></ol> <div class="code-block relative"><pre class="">from diffusers import UNet2DConditionModel
import torch

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    subfolder="unet",
).to("cuda")</pre></div> <p data-svelte-h="svelte-12jcnpd">Load the SDXL pipeline and the LoRA checkpoints, starting with the <a href="https://huggingface.co/ostris/ikea-instructions-lora-sdxl" rel="nofollow">ostris/ikea-instructions-lora-sdxl</a> LoRA.</p> <div class="code-block relative"><pre class="">from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16,
    unet=unet
).to("cuda")
pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea")</pre></div> <p data-svelte-h="svelte-kj4wso">Now you’ll create a <a href="https://huggingface.co/docs/peft/package_reference/peft_model#peft.PeftModel" rel="nofollow">PeftModel</a> from the loaded LoRA checkpoint by combining the SDXL UNet and the LoRA UNet from the pipeline.</p> <div class="code-block relative"><pre class="">from peft import get_peft_model, LoraConfig
import copy

sdxl_unet = copy.deepcopy(unet)
ikea_peft_model = get_peft_model(
    sdxl_unet,
    pipeline.unet.peft_config["ikea"],
    adapter_name="ikea"
)

original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()}
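# get_peft_model wraps sdxl_unet, so the PeftModel's state_dict keys carry a
# "base_model.model." prefix; the remapping above aligns the pipeline UNet's
# keys with that naming so the strict load below succeeds.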
ikea_peft_model.load_state_dict(original_state_dict, strict=True)</pre></div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1m0icqe">You can optionally push the <code>ikea_peft_model</code> to the Hub by calling <code>ikea_peft_model.push_to_hub("ikea_peft_model", token=TOKEN)</code>.</p></div> <p data-svelte-h="svelte-92zrkz">Repeat this process to create a <a href="https://huggingface.co/docs/peft/package_reference/peft_model#peft.PeftModel" rel="nofollow">PeftModel</a> from the <a href="https://huggingface.co/lordjia/by-feng-zikai" rel="nofollow">lordjia/by-feng-zikai</a> LoRA.</p> <div class="code-block relative"><pre class="">pipeline.delete_adapters("ikea")
sdxl_unet.delete_adapters("ikea")

pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng")
pipeline.set_adapters(adapter_names="feng")

feng_peft_model = get_peft_model(
    sdxl_unet,
    pipeline.unet.peft_config["feng"],
    adapter_name="feng"
)

original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()}
feng_peft_model.load_state_dict(original_state_dict, strict=True)</pre></div> <ol start="2" data-svelte-h="svelte-1c06g8d"><li>Load a base UNet model and then load the adapters onto it.</li></ol> <div class="code-block relative"><pre class="">from peft import PeftModel
base_unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    subfolder="unet",
).to("cuda")

model = PeftModel.from_pretrained(base_unet, "stevhliu/ikea_peft_model", use_safetensors=True, subfolder="ikea", adapter_name="ikea")
model.load_adapter("stevhliu/feng_peft_model", use_safetensors=True, subfolder="feng", adapter_name="feng")</pre></div> <ol start="3" data-svelte-h="svelte-4c31b3"><li>Merge the adapters using the <a href="https://huggingface.co/docs/peft/package_reference/lora#peft.LoraModel.add_weighted_adapter" rel="nofollow">add_weighted_adapter</a> method and the merging method of your choice (learn more about other merging methods in this <a href="https://huggingface.co/blog/peft_merging" rel="nofollow">blog post</a>). For this example, let’s use the <code>"dare_linear"</code> method to merge the LoRAs.</li></ol> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p data-svelte-h="svelte-1g1f7t1">Keep in mind the LoRAs need to have the same rank to be merged!</p></div> <div class="code-block relative"><pre class="">model.add_weighted_adapter(
    adapters=["ikea", "feng"],
    weights=[1.0, 1.0],
    combination_type="dare_linear",
    adapter_name="ikea-feng"
)
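# Optional sanity check (sketch): merging requires matching LoRA ranks, which
# you can verify with model.peft_config["ikea"].r == model.peft_config["feng"].r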
model.set_adapters("ikea-feng")</pre></div> <p data-svelte-h="svelte-18i0m5d">Now you can generate an image with the merged LoRA.</p> <div class="code-block relative"><pre class="">model = model.to(dtype=torch.float16, device="cuda")

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=model, variant="fp16", torch_dtype=torch.float16,
).to("cuda")

image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0]
image</pre></div> <div class="flex justify-center" data-svelte-h="svelte-o7lfk9"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ikea-feng-dare-linear.png"></div> <h2 class="relative group"><a id="fuselora" class="header-link" href="#fuselora"></a> <span>fuse_lora</span></h2> <p data-svelte-h="svelte-137rgaf">Both the <a href="/docs/diffusers/pr_10083/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.set_adapters">set_adapters()</a> and <a href="https://huggingface.co/docs/peft/package_reference/lora#peft.LoraModel.add_weighted_adapter" rel="nofollow">add_weighted_adapter</a> methods require loading the base model and the LoRA adapters separately, which incurs some overhead. The <a href="/docs/diffusers/pr_10083/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora">fuse_lora()</a> method allows you to fuse the LoRA weights directly with the original weights of the underlying model. 
This way, you’re only loading the model once, which can speed up inference and lower memory usage.</p> <p data-svelte-h="svelte-13l1pwq">You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the <a href="/docs/diffusers/pr_10083/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora">fuse_lora()</a> method, which can lead to a speed-up in inference and lower VRAM usage.</p> <p data-svelte-h="svelte-clyzop">For example, if you have a base model and adapters loaded and set as active with the following adapter weights:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline | |
| <span class="hljs-keyword">import</span> torch | |
| pipeline = DiffusionPipeline.from_pretrained(<span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>) | |
| pipeline.load_lora_weights(<span class="hljs-string">"ostris/ikea-instructions-lora-sdxl"</span>, weight_name=<span class="hljs-string">"ikea_instructions_xl_v1_5.safetensors"</span>, adapter_name=<span class="hljs-string">"ikea"</span>) | |
| pipeline.load_lora_weights(<span class="hljs-string">"lordjia/by-feng-zikai"</span>, weight_name=<span class="hljs-string">"fengzikai_v1.0_XL.safetensors"</span>, adapter_name=<span class="hljs-string">"feng"</span>) | |
| pipeline.set_adapters([<span class="hljs-string">"ikea"</span>, <span class="hljs-string">"feng"</span>], adapter_weights=[<span class="hljs-number">0.7</span>, <span class="hljs-number">0.8</span>])<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1kv26v4">Fuse these LoRAs into the UNet with the <a href="/docs/diffusers/pr_10083/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora">fuse_lora()</a> method. The <code>lora_scale</code> parameter controls how much to scale the output by with the LoRA weights. It is important to make the <code>lora_scale</code> adjustments in the <a href="/docs/diffusers/pr_10083/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora">fuse_lora()</a> method because it won’t work if you try to pass <code>scale</code> to the <code>cross_attention_kwargs</code> in the pipeline.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; 
"></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.fuse_lora(adapter_names=[<span class="hljs-string">"ikea"</span>, <span class="hljs-string">"feng"</span>], lora_scale=<span class="hljs-number">1.0</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-eaf3hj">Then you should use <a href="/docs/diffusers/pr_10083/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights">unload_lora_weights()</a> to unload the LoRA weights since they’ve already been fused with the underlying base model. Finally, call <a href="/docs/diffusers/pr_10083/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained">save_pretrained()</a> to save the fused pipeline locally or you could call <a href="/docs/diffusers/pr_10083/en/api/pipelines/overview#diffusers.utils.PushToHubMixin.push_to_hub">push_to_hub()</a> to push the fused pipeline to the Hub.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" 
style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.unload_lora_weights() | |
| <span class="hljs-comment"># save locally</span> | |
| pipeline.save_pretrained(<span class="hljs-string">"path/to/fused-pipeline"</span>) | |
| <span class="hljs-comment"># save to the Hub</span> | |
| pipeline.push_to_hub(<span class="hljs-string">"fused-ikea-feng"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1nux4z4">Now you can quickly load the fused pipeline and use it for inference without needing to separately load the LoRA adapters.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline = DiffusionPipeline.from_pretrained( | |
| <span class="hljs-string">"username/fused-ikea-feng"</span>, torch_dtype=torch.float16, | |
| ).to(<span class="hljs-string">"cuda"</span>) | |
| image = pipeline(<span class="hljs-string">"A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai"</span>, generator=torch.manual_seed(<span class="hljs-number">0</span>)).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-hir9f5">You can call <a href="/docs/diffusers/pr_10083/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.unfuse_lora">unfuse_lora()</a> to restore the original model’s weights (for example, if you want to use a different <code>lora_scale</code> value). However, this only works if you’ve fused a single LoRA adapter to the original model. If you’ve fused multiple LoRAs, you’ll need to reload the model.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.unfuse_lora()<!-- HTML_TAG_END --></pre></div> <h3 class="relative group"><a id="torchcompile" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#torchcompile"><span><svg class="" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>torch.compile</span></h3> <p data-svelte-h="svelte-h1d3ky"><a href="../optimization/torch2.0#torchcompile">torch.compile</a> can speed up your pipeline even more, but the LoRA weights must be fused first and then unloaded. Typically, the UNet is compiled because it is such a computationally intensive component of the pipeline.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 
opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline | |
| <span class="hljs-keyword">import</span> torch | |
| <span class="hljs-comment"># load base model and LoRAs</span> | |
| pipeline = DiffusionPipeline.from_pretrained(<span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>) | |
| pipeline.load_lora_weights(<span class="hljs-string">"ostris/ikea-instructions-lora-sdxl"</span>, weight_name=<span class="hljs-string">"ikea_instructions_xl_v1_5.safetensors"</span>, adapter_name=<span class="hljs-string">"ikea"</span>) | |
| pipeline.load_lora_weights(<span class="hljs-string">"lordjia/by-feng-zikai"</span>, weight_name=<span class="hljs-string">"fengzikai_v1.0_XL.safetensors"</span>, adapter_name=<span class="hljs-string">"feng"</span>) | |
| <span class="hljs-comment"># activate both LoRAs and set adapter weights</span> | |
| pipeline.set_adapters([<span class="hljs-string">"ikea"</span>, <span class="hljs-string">"feng"</span>], adapter_weights=[<span class="hljs-number">0.7</span>, <span class="hljs-number">0.8</span>]) | |
| <span class="hljs-comment"># fuse LoRAs and unload weights</span> | |
| pipeline.fuse_lora(adapter_names=[<span class="hljs-string">"ikea"</span>, <span class="hljs-string">"feng"</span>], lora_scale=<span class="hljs-number">1.0</span>) | |
| pipeline.unload_lora_weights() | |
| <span class="hljs-comment"># torch.compile</span> | |
| pipeline.unet.to(memory_format=torch.channels_last) | |
| pipeline.unet = torch.<span class="hljs-built_in">compile</span>(pipeline.unet, mode=<span class="hljs-string">"reduce-overhead"</span>, fullgraph=<span class="hljs-literal">True</span>) | |
| image = pipeline(<span class="hljs-string">"A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai"</span>, generator=torch.manual_seed(<span class="hljs-number">0</span>)).images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-106h2ac">Learn more about torch.compile in the <a href="../tutorials/fast_diffusion#torchcompile">Accelerate inference of text-to-image diffusion models</a> guide.</p> <h2 class="relative group"><a id="next-steps" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#next-steps"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Next steps</span></h2> <p data-svelte-h="svelte-1j7n00j">For more conceptual details about how each merging method works, take a look at the <a href="https://huggingface.co/blog/peft_merging#concatenation-cat" rel="nofollow">🤗 PEFT welcomes new merging methods</a> blog post!</p> <p></p> | |
| <script> | |
| { | |
| __sveltekit_1p97lbw = { | |
| assets: "/docs/diffusers/pr_10083/en", | |
| base: "/docs/diffusers/pr_10083/en", | |
| env: {} | |
| }; | |
| const element = document.currentScript.parentElement; | |
| const data = [null,null]; | |
| Promise.all([ | |
| import("/docs/diffusers/pr_10083/en/_app/immutable/entry/start.3ed1a0f4.js"), | |
| import("/docs/diffusers/pr_10083/en/_app/immutable/entry/app.f0e18a17.js") | |
| ]).then(([kit, app]) => { | |
| kit.start(app, element, { | |
| node_ids: [0, 237], | |
| data, | |
| form: null, | |
| error: null | |
| }); | |
| }); | |
| } | |
| </script> | |
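<p>Conceptually, the fusing workflow above comes down to simple linear algebra: each active LoRA adapter contributes a low-rank update that can be folded into the base weight once, instead of being applied as a side path at every inference step, and unfusing subtracts the same update back out. The following NumPy sketch is illustrative only (it is not the diffusers implementation, and the shapes and names are hypothetical), using the same adapter weights and <code>lora_scale</code> values as the example above:</p>

```python
import numpy as np

# Illustrative sketch only (not the diffusers implementation): fusing LoRA
# adapters folds their low-rank updates into the base weight,
#   W_fused = W + lora_scale * sum_i(adapter_weight_i * B_i @ A_i),
# so inference afterwards is a single matmul with no per-step adapter overhead.
rng = np.random.default_rng(0)
d, r = 8, 2                          # feature dim, LoRA rank (r << d)
W = rng.standard_normal((d, d))      # frozen base weight
A1, B1 = rng.standard_normal((r, d)), rng.standard_normal((d, r))  # adapter 1
A2, B2 = rng.standard_normal((r, d)), rng.standard_normal((d, r))  # adapter 2
adapter_weights = [0.7, 0.8]         # as in set_adapters(...) above
lora_scale = 1.0                     # as in fuse_lora(...) above

x = rng.standard_normal(d)

# Unfused: base path plus one low-rank side path per active adapter
y_unfused = W @ x + lora_scale * (
    adapter_weights[0] * (B1 @ (A1 @ x)) + adapter_weights[1] * (B2 @ (A2 @ x))
)

# Fused: fold the updates into the weight once
delta = adapter_weights[0] * (B1 @ A1) + adapter_weights[1] * (B2 @ A2)
W_fused = W + lora_scale * delta
y_fused = W_fused @ x
assert np.allclose(y_unfused, y_fused)

# Unfusing subtracts the same update to restore the original weights
W_restored = W_fused - lora_scale * delta
assert np.allclose(W_restored, W)
```

<p>This is also why <code>lora_scale</code> must be chosen at fuse time: once the update is folded into <code>W_fused</code>, there is no separate adapter term left to rescale at inference.</p>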