Depth-to-image

The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the image structure.
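If you already have a depth estimate, you can pass it directly as depth_map instead of letting the pipeline run its built-in estimator. The sketch below uses the 🤗 Transformers depth-estimation pipeline with an Intel/dpt-large checkpoint purely as one illustrative way to produce such a map; any (1, H, W) depth tensor works, since the pipeline resizes and normalizes it internally:

>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline as hf_pipeline
>>> from diffusers import StableDiffusionDepth2ImgPipeline

>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
... ).to("cuda")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> init_image = Image.open(requests.get(url, stream=True).raw)

>>> # Precompute a depth map once so it can be reused across generations
>>> estimator = hf_pipeline("depth-estimation", model="Intel/dpt-large")
>>> depth_map = estimator(init_image)["predicted_depth"]  # torch.Tensor, shape (1, H, W)

>>> image = pipe(prompt="two tigers", image=init_image, depth_map=depth_map).images[0]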

Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

If you're interested in using one of the official checkpoints for a task, explore the CompVis and Stability AI Hub organizations!

StableDiffusionDepth2ImgPipeline[[diffusers.StableDiffusionDepth2ImgPipeline]]

diffusers.StableDiffusionDepth2ImgPipeline[[diffusers.StableDiffusionDepth2ImgPipeline]]

Source

Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:

  • load_textual_inversion() for loading textual inversion embeddings
  • load_lora_weights() for loading LoRA weights
  • save_lora_weights() for saving LoRA weights

__call__[[diffusers.StableDiffusionDepth2ImgPipeline.__call__]]

Source

__call__(prompt=None, image=None, depth_map=None, strength=0.8, num_inference_steps=50, guidance_scale=7.5, negative_prompt=None, num_images_per_prompt=1, eta=0.0, generator=None, prompt_embeds=None, negative_prompt_embeds=None, output_type='pil', return_dict=True, cross_attention_kwargs=None, clip_skip=None, callback_on_step_end=None, callback_on_step_end_tensor_inputs=['latents'], **kwargs)

Parameters:

  • prompt (str or List[str], optional) -- The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.

  • image (torch.Tensor, PIL.Image.Image, np.ndarray, List[torch.Tensor], List[PIL.Image.Image], or List[np.ndarray]) -- Image or tensor representing an image batch to be used as the starting point. Can accept image latents as image only if depth_map is not None.
  • depth_map (torch.Tensor, optional) -- Depth prediction to be used as additional conditioning for the image generation process. If not defined, it automatically predicts the depth with self.depth_estimator.
  • strength (float, optional, defaults to 0.8) -- Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a starting point and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image.
  • num_inference_steps (int, optional, defaults to 50) -- The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter is modulated by strength.
  • guidance_scale (float, optional, defaults to 7.5) -- A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
  • negative_prompt (str or List[str], optional) -- The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).

The call function to the pipeline for generation.

Examples:

>>> import torch
>>> import requests
>>> from PIL import Image

>>> from diffusers import StableDiffusionDepth2ImgPipeline

>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-depth",
...     torch_dtype=torch.float16,
... )
>>> pipe.to("cuda")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> init_image = Image.open(requests.get(url, stream=True).raw)
>>> prompt = "two tigers"
>>> n_prompt = "bad, deformed, ugly, bad anotomy"
>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
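Note how strength interacts with num_inference_steps in the call above. As a sketch of the standard img2img timestep truncation (not a pipeline API), strength=0.7 means only the last 35 of the 50 scheduled steps actually run:

>>> num_inference_steps, strength = 50, 0.7
>>> init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
>>> t_start = max(num_inference_steps - init_timestep, 0)
>>> num_inference_steps - t_start  # effective denoising steps
35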

Parameters:

vae (AutoencoderKL) : Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder (CLIPTextModel) : Frozen text-encoder (clip-vit-large-patch14).

tokenizer (CLIPTokenizer) : A CLIPTokenizer to tokenize text.

unet (UNet2DConditionModel) : A UNet2DConditionModel to denoise the encoded image latents.

scheduler (SchedulerMixin) : A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Returns:

[StableDiffusionPipelineOutput](/docs/diffusers/pr_11739/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images.

enable_attention_slicing[[diffusers.StableDiffusionDepth2ImgPipeline.enable_attention_slicing]]

Source

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> ⚠️ Don't enable attention slicing if you're already using scaled_dot_product_attention (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient, so you won't need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!

Examples:

>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... ).to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]

Parameters:

slice_size (str or int, optional, defaults to "auto") : When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.
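For example (the head dimension in the comments is purely illustrative):

>>> pipe.enable_attention_slicing()       # "auto": compute attention in two steps
>>> pipe.enable_attention_slicing("max")  # one slice at a time, maximum memory savings
>>> pipe.enable_attention_slicing(2)      # attention_head_dim // 2 slices; head dim must be divisible by 2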

disable_attention_slicing[[diffusers.StableDiffusionDepth2ImgPipeline.disable_attention_slicing]]

Source

Disable sliced attention computation. If enable_attention_slicing was previously called, attention is computed in one step.

enable_xformers_memory_efficient_attention[[diffusers.StableDiffusionDepth2ImgPipeline.enable_xformers_memory_efficient_attention]]

Source

Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.

> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.

Examples:

>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround: Flash Attention does not support the VAE's attention shape, so fall back to the default op
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

Parameters:

attention_op (Callable, optional) : Override the default None operator for use as op argument to the memory_efficient_attention() function of xFormers.

disable_xformers_memory_efficient_attention[[diffusers.StableDiffusionDepth2ImgPipeline.disable_xformers_memory_efficient_attention]]

Source

Disable memory efficient attention from xFormers.

load_textual_inversion[[diffusers.StableDiffusionDepth2ImgPipeline.load_textual_inversion]]

Source

Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).

Example:

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A  backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first (for example from civitAI) and then load it locally:

from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")

Parameters:

pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) : Can be either one of the following or a list of them: - A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a pretrained model hosted on the Hub. - A path to a directory (for example ./my_text_inversion_directory/) containing the textual inversion weights. - A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. - A torch state dict.

token (str or List[str], optional) : Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a list, then token must also be a list of equal length.

text_encoder (CLIPTextModel, optional) : Frozen text-encoder (clip-vit-large-patch14). If not specified, function will take self.text_encoder.

tokenizer (CLIPTokenizer, optional) : A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer.

weight_name (str, optional) : Name of a custom weight file. This should be used when: - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight name such as text_inv.bin. - The saved textual inversion file is in the Automatic1111 format.

cache_dir (Union[str, os.PathLike], optional) : Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.

force_download (bool, optional, defaults to False) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (Dict[str, str], optional) : A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

local_files_only (bool, optional, defaults to False) : Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.

hf_token (str or bool, optional) : The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.

revision (str, optional, defaults to "main") : The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.

subfolder (str, optional, defaults to "") : The subfolder location of a model file within a larger model repository on the Hub or locally.

mirror (str, optional) : Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.

load_lora_weights[[diffusers.StableDiffusionDepth2ImgPipeline.load_lora_weights]]

Source

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.

See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.
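A minimal usage sketch; the repository id and adapter name below are hypothetical placeholders for any LoRA weights in a supported format:

>>> import torch
>>> from diffusers import StableDiffusionDepth2ImgPipeline

>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
... ).to("cuda")
>>> # hypothetical Hub repository containing LoRA weights
>>> pipe.load_lora_weights("some-user/some-depth2img-lora", adapter_name="style")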

Parameters:

pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) : See lora_state_dict().

adapter_name (str, optional) : Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.

low_cpu_mem_usage (bool, optional) : Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

hotswap (bool, optional, defaults to False) : Whether to substitute an existing (LoRA) adapter with the newly loaded adapter in-place. This means that, instead of loading an additional adapter, this will take the existing adapter weights and replace them with the weights of the new adapter. This can be faster and more memory efficient. However, the main advantage of hotswapping is that when the model is compiled with torch.compile, loading the new adapter does not require recompilation of the model. When using hotswapping, the passed adapter_name should be the name of an already loaded adapter. If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need to call an additional method before loading the adapter:

pipeline = ...  # load diffusers pipeline
max_rank = ...  # the highest rank among all LoRAs that you want to load
# call *before* compiling and loading the LoRA adapter
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(file_name)
# optionally compile the model now

Note that hotswapping adapters of the text encoder is not yet supported. There are some further limitations to this technique, which are documented here: https://huggingface.co/docs/peft/main/en/package_reference/hotswap

kwargs (dict, optional) : See lora_state_dict().

save_lora_weights[[diffusers.StableDiffusionDepth2ImgPipeline.save_lora_weights]]

Source

Save the LoRA parameters corresponding to the UNet and text encoder.
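A minimal sketch, assuming you already hold LoRA state dicts from a training run or a loaded adapter (the variable names are illustrative):

>>> from diffusers import StableDiffusionDepth2ImgPipeline

>>> # unet_lora_state_dict / text_encoder_lora_state_dict are illustrative names
>>> # for state dicts produced elsewhere (e.g. by a training loop)
>>> StableDiffusionDepth2ImgPipeline.save_lora_weights(
...     save_directory="./my-depth2img-lora",
...     unet_lora_layers=unet_lora_state_dict,
...     text_encoder_lora_layers=text_encoder_lora_state_dict,
...     safe_serialization=True,
... )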

Parameters:

save_directory (str or os.PathLike) : Directory to save LoRA parameters to. Will be created if it doesn't exist.

unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) : State dict of the LoRA layers corresponding to the unet.

text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) : State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.

is_main_process (bool, optional, defaults to True) : Whether the process calling this is the main process or not. Useful during distributed training and you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.

save_function (Callable) : The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.

safe_serialization (bool, optional, defaults to True) : Whether to save the model using safetensors or the traditional PyTorch way with pickle.

unet_lora_adapter_metadata : LoRA adapter metadata associated with the unet to be serialized with the state dict.

text_encoder_lora_adapter_metadata : LoRA adapter metadata associated with the text encoder to be serialized with the state dict.

encode_prompt[[diffusers.StableDiffusionDepth2ImgPipeline.encode_prompt]]

Source

Encodes the prompt into text encoder hidden states.
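A minimal sketch of precomputing embeddings and feeding them back to the pipeline, reusing pipe and init_image from the examples above and assuming encode_prompt returns the (prompt_embeds, negative_prompt_embeds) pair:

>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     prompt="two tigers",
...     device="cuda",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="bad anatomy",
... )
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     image=init_image,
... ).images[0]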

Parameters:

prompt (str or List[str], optional) : The prompt to be encoded.

device (torch.device) : The torch device.

num_images_per_prompt (int) : number of images that should be generated per prompt

do_classifier_free_guidance (bool) : whether to use classifier free guidance or not

negative_prompt (str or List[str], optional) : The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

prompt_embeds (torch.Tensor, optional) : Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.

negative_prompt_embeds (torch.Tensor, optional) : Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.

lora_scale (float, optional) : A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

clip_skip (int, optional) : Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

Source

Output class for Stable Diffusion pipelines.

Parameters:

images (List[PIL.Image.Image] or np.ndarray) : List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).

nsfw_content_detected (List[bool]) : List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or None if safety checking could not be performed.
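For instance, with return_dict=True (the default), a pipeline call yields this output class (sketch reusing pipe and init_image from the examples above):

>>> output = pipe(prompt="two tigers", image=init_image)
>>> output.images[0].save("tigers.png")
>>> output.nsfw_content_detected  # None when safety checking is not performed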
