SD3ControlNetModel

SD3ControlNetModel is an implementation of ControlNet for Stable Diffusion 3.

The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.

The abstract from the paper is:

We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.

Loading from the original format

By default, the SD3ControlNetModel should be loaded with from_pretrained().

from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel

# Load the ControlNet weights, then hand the model to the SD3 pipeline.
# SD3MultiControlNetModel is the analogous wrapper for combining several ControlNets.
controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
pipe = StableDiffusion3ControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet)
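
Once the pipeline is assembled, generation works like the standard SD3 pipeline with an extra control image. The sketch below is illustrative only: the control image URL, prompt, dtype, and sampling settings are assumptions rather than part of this reference.

import torch
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel
from diffusers.utils import load_image

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A Canny edge map that spatially conditions the generation (replace with your own image).
control_image = load_image("https://huggingface.co/InstantX/SD3-Controlnet-Canny/resolve/main/canny.jpg")

image = pipe(
    "a photo of a bird perched on a branch",  # illustrative prompt
    control_image=control_image,
    controlnet_conditioning_scale=0.7,
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_controlnet_canny.png")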

SD3ControlNetModel[[diffusers.SD3ControlNetModel]]

Source

ControlNet model for Stable Diffusion 3.

Parameters:

sample_size (int, defaults to 128) : The width/height of the latents. This is fixed during training since it is used to learn a number of position embeddings.

patch_size (int, defaults to 2) : Patch size to turn the input data into small patches.

in_channels (int, defaults to 16) : The number of latent channels in the input.

num_layers (int, defaults to 18) : The number of layers of transformer blocks to use.

attention_head_dim (int, defaults to 64) : The number of channels in each head.

num_attention_heads (int, defaults to 18) : The number of heads to use for multi-head attention.

joint_attention_dim (int, defaults to 4096) : The embedding dimension to use for joint text-image attention.

caption_projection_dim (int, defaults to 1152) : The embedding dimension of caption embeddings.

pooled_projection_dim (int, defaults to 2048) : The embedding dimension of pooled text projections.

out_channels (int, defaults to 16) : The number of latent channels in the output.

pos_embed_max_size (int, defaults to 96) : The maximum latent height/width of positional embeddings.

extra_conditioning_channels (int, defaults to 0) : The number of extra channels to use for conditioning in the patch embedding.

dual_attention_layers (Tuple[int, ...], defaults to ()) : The number of dual-stream transformer blocks to use.

qk_norm (str, optional, defaults to None) : The normalization to use for query and key in the attention layer. If None, no normalization is used.

pos_embed_type (str, defaults to "sincos") : The type of positional embedding to use. Choose between "sincos" and None.

use_pos_embed (bool, defaults to True) : Whether to use positional embeddings.

force_zeros_for_pooled_projection (bool, defaults to True) : Whether to force zeros for pooled projection embeddings. This is handled in the pipelines by reading the config value of the ControlNet model.

enable_forward_chunking[[diffusers.SD3ControlNetModel.enable_forward_chunking]]

Source

Sets the attention processor to use feed-forward chunking.

Parameters:

chunk_size (int, optional) : The chunk size of the feed-forward layers. If not specified, the feed-forward layer is run individually over each tensor of dim=dim.

dim (int, optional, defaults to 0) : The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length).
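
Forward chunking trades a little speed for lower peak memory by running the feed-forward layers over smaller chunks. A minimal sketch of enabling it on a loaded ControlNet (the checkpoint name and chunk settings are illustrative):

from diffusers.models import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")

# Chunk the feed-forward computation over the sequence dimension (dim=1),
# processing chunk_size rows at a time to reduce peak activation memory.
controlnet.enable_forward_chunking(chunk_size=1, dim=1)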

forward[[diffusers.SD3ControlNetModel.forward]]

Source

The SD3ControlNetModel forward method.

Parameters:

hidden_states (torch.Tensor of shape (batch size, channel, height, width)) : Input hidden_states.

controlnet_cond (torch.Tensor) : The conditional input tensor of shape (batch_size, sequence_length, hidden_size).

conditioning_scale (float, defaults to 1.0) : The scale factor for ControlNet outputs.

encoder_hidden_states (torch.Tensor of shape (batch size, sequence_len, embed_dims)) : Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.

pooled_projections (torch.Tensor of shape (batch_size, projection_dim)) : Embeddings projected from the embeddings of input conditions.

timestep (torch.LongTensor) : Used to indicate the denoising step.

joint_attention_kwargs (dict, optional) : A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.

return_dict (bool, optional, defaults to True) : Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

Returns:

If return_dict is True, an ~models.transformer_2d.Transformer2DModelOutput is returned, otherwise a tuple where the first element is the sample tensor.
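
To make the data flow concrete, the sketch below paraphrases how a pipeline can wire the ControlNet into one denoising step: the ControlNet returns per-block residuals that the base SD3 transformer consumes. The helper function and variable names are illustrative, and passing the residuals via the transformer's block_controlnet_hidden_states argument is a sketch of the wiring, not the exact pipeline implementation.

import torch
from diffusers import SD3Transformer2DModel
from diffusers.models import SD3ControlNetModel

def controlled_denoise_step(
    transformer: SD3Transformer2DModel,
    controlnet: SD3ControlNetModel,
    latents: torch.Tensor,        # (batch, channel, height, width) noisy latents
    control_cond: torch.Tensor,   # preprocessed conditioning input for the ControlNet
    prompt_embeds: torch.Tensor,  # (batch, sequence_len, embed_dims)
    pooled_embeds: torch.Tensor,  # (batch, projection_dim)
    timestep: torch.LongTensor,
    scale: float = 1.0,
) -> torch.Tensor:
    # The ControlNet produces one residual per transformer block,
    # already multiplied by `conditioning_scale`.
    block_samples = controlnet(
        hidden_states=latents,
        controlnet_cond=control_cond,
        conditioning_scale=scale,
        encoder_hidden_states=prompt_embeds,
        pooled_projections=pooled_embeds,
        timestep=timestep,
        return_dict=False,
    )[0]

    # The base transformer adds those residuals to its block outputs
    # while predicting the noise for this step.
    return transformer(
        hidden_states=latents,
        encoder_hidden_states=prompt_embeds,
        pooled_projections=pooled_embeds,
        timestep=timestep,
        block_controlnet_hidden_states=block_samples,
        return_dict=False,
    )[0]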

fuse_qkv_projections[[diffusers.SD3ControlNetModel.fuse_qkv_projections]]

Source

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.

> This API is 🧪 experimental.

unfuse_qkv_projections[[diffusers.SD3ControlNetModel.unfuse_qkv_projections]]

Source

Disables the fused QKV projection if enabled.

> This API is 🧪 experimental.
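
The two methods are meant to be used as a matched pair around inference; a brief sketch (the checkpoint name is just an example):

from diffusers.models import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")

# Fuse query/key/value projections into a single projection per attention module.
controlnet.fuse_qkv_projections()

# ... run inference with the fused projections ...

# Restore the original, unfused projection layers.
controlnet.unfuse_qkv_projections()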

SD3ControlNetOutput[[diffusers.models.controlnets.SD3ControlNetOutput]]

Source
