# T2I-Adapter

[T2I-Adapter](https://huggingface.co/papers/2302.08453) is an adapter that enables controllable generation like [ControlNet](./controlnet). A T2I-Adapter works by learning a *mapping* between a control signal (for example, a depth map) and a pretrained model's internal knowledge. The adapter is plugged in to the base model to provide extra guidance based on the control signal during generation.

Load a T2I-Adapter conditioned on a specific control, such as canny edge, and pass it to the pipeline in [from_pretrained()](/docs/diffusers/pr_11797/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).

```py
import torch
from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, AutoencoderKL

t2i_adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-canny-sdxl-1.0",
    torch_dtype=torch.float16,
)
```
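Other control types load with the same pattern. For example, the depth adapter used later in this guide could be loaded like this (an optional sketch, not needed for the canny example below):

```py
# Optional: a depth-conditioned adapter loads with the same API (not used in this example).
depth_adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-depth-midas-sdxl-1.0",
    torch_dtype=torch.float16,
)
```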
Generate a canny image with [opencv-python](https://github.com/opencv/opencv-python).

```py
import cv2
import numpy as np
from PIL import Image
from diffusers.utils import load_image

original_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png"
)
image = np.array(original_image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
# replicate the single edge channel into an RGB image so it can be used as a control image
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
```
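If you need to preprocess more than one image, the steps above can be wrapped in a small helper. This is only a convenience sketch (the `make_canny` name and default thresholds are arbitrary), not part of the Diffusers API:

```py
def make_canny(image, low_threshold=100, high_threshold=200):
    """Convert a PIL image into a 3-channel canny edge map usable as a control image."""
    edges = cv2.Canny(np.array(image), low_threshold, high_threshold)
    edges = np.concatenate([edges[:, :, None]] * 3, axis=2)  # replicate edges to RGB
    return Image.fromarray(edges)

canny_image = make_canny(original_image)
```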
Pass the canny image to the pipeline to generate an image.

```py
# fp16-friendly SDXL VAE to avoid numerical issues in half precision
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=t2i_adapter,
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = """
A photorealistic overhead image of a cat reclining sideways in a flamingo pool floatie holding a margarita.
The cat is floating leisurely in the pool and completely relaxed and happy.
"""
pipeline(
    prompt,
    image=canny_image,
    num_inference_steps=100,
    guidance_scale=10,
).images[0]
```
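The call returns a standard PIL image, so the result can be kept with `save()`. If GPU memory is tight, Diffusers pipelines can also offload submodules to the CPU instead of moving everything to the GPU up front. A minimal sketch (the output filename is arbitrary):

```py
image = pipeline(
    prompt,
    image=canny_image,
    num_inference_steps=100,
    guidance_scale=10,
).images[0]
image.save("t2i_adapter_canny_cat.png")  # arbitrary output path

# Optional: trade speed for memory by offloading submodules to the CPU
# instead of calling .to("cuda") on the whole pipeline.
# pipeline.enable_model_cpu_offload()
```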
<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png" width="300" alt="Original image"/>
    <figcaption style="text-align: center;">original image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png" width="300" alt="Canny control image"/>
    <figcaption style="text-align: center;">canny image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/t2i-canny-cat-generated.png" width="300" alt="Generated image (T2I-Adapter + prompt)"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

## MultiAdapter

You can compose multiple controls, such as a canny image and a depth map, with the `MultiAdapter` class.

The example below composes a canny image and a depth map.

Load the control images and T2I-Adapters as a list.
```py
import torch
from diffusers.utils import load_image
from diffusers import StableDiffusionXLAdapterPipeline, AutoencoderKL, MultiAdapter, T2IAdapter

canny_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png"
)
depth_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl_depth_image.png"
)
controls = [canny_image, depth_image]
prompt = ["""
a relaxed rabbit sitting on a striped towel next to a pool with a tropical drink nearby,
bright sunny day, vacation scene, 35mm photograph, film, professional, 4k, highly detailed
"""]

# keep the adapters in the same order as the control images above
adapters = MultiAdapter(
    [
        T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16),
        T2IAdapter.from_pretrained("TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16),
    ]
)
```
Pass the adapters, prompt, and control images to [StableDiffusionXLAdapterPipeline](/docs/diffusers/pr_11797/en/api/pipelines/stable_diffusion/adapter#diffusers.StableDiffusionXLAdapterPipeline). Use the `adapter_conditioning_scale` parameter to determine how much weight to assign to each control.

```py
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    vae=vae,
    adapter=adapters,
).to("cuda")

pipeline(
    prompt,
    image=controls,
    height=1024,
    width=1024,
    adapter_conditioning_scale=[0.7, 0.7],
).images[0]
```
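The two scales do not have to match. Weighting one control more heavily pushes the result toward that signal; the values below are illustrative only:

```py
# Emphasize the canny edges over the depth map (illustrative weights).
pipeline(
    prompt,
    image=controls,
    height=1024,
    width=1024,
    adapter_conditioning_scale=[1.0, 0.5],
).images[0]
```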
<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png" width="300" alt="Canny control image"/>
    <figcaption style="text-align: center;">canny image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl_depth_image.png" width="300" alt="Depth map control image"/>
    <figcaption style="text-align: center;">depth map</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/t2i-multi-rabbit.png" width="300" alt="Generated image (MultiAdapter + prompt)"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>