# SD3ControlNetModel
SD3ControlNetModel is an implementation of ControlNet for Stable Diffusion 3.
The ControlNet model was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.
The abstract from the paper is:
*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*
## Loading from the original format
By default, `SD3ControlNetModel` should be loaded with `from_pretrained()`.
```python
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet
)
```
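Once loaded, the pipeline can be run with a conditioning image. A minimal sketch, assuming the `control_image` and `controlnet_conditioning_scale` arguments of `StableDiffusion3ControlNetPipeline`; the image URL is a placeholder for a precomputed Canny edge map:

```python
import torch
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel
from diffusers.utils import load_image

controlnet = SD3ControlNetModel.from_pretrained(
    "InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# A precomputed Canny edge map; replace with a real image URL or path.
control_image = load_image("https://example.com/canny_edges.png")

image = pipe(
    prompt="a photo of a modern house, high quality",
    control_image=control_image,
    controlnet_conditioning_scale=0.7,  # see conditioning_scale in forward()
    num_inference_steps=28,
).images[0]
image.save("output.png")
```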
## SD3ControlNetModel[[diffusers.SD3ControlNetModel]]
class diffusers.SD3ControlNetModel

Parameters:

- sample_size (`int`, defaults to `128`) -- The width/height of the latents. This is fixed during training since it is used to learn a number of position embeddings.
- patch_size (`int`, defaults to `2`) -- Patch size to turn the input data into small patches.
- in_channels (`int`, defaults to `16`) -- The number of latent channels in the input.
- num_layers (`int`, defaults to `18`) -- The number of layers of transformer blocks to use.
- attention_head_dim (`int`, defaults to `64`) -- The number of channels in each head.
- num_attention_heads (`int`, defaults to `18`) -- The number of heads to use for multi-head attention.
- joint_attention_dim (`int`, defaults to `4096`) -- The embedding dimension to use for joint text-image attention.
- caption_projection_dim (`int`, defaults to `1152`) -- The embedding dimension of caption embeddings.
- pooled_projection_dim (`int`, defaults to `2048`) -- The embedding dimension of pooled text projections.
- out_channels (`int`, defaults to `16`) -- The number of latent channels in the output.
- pos_embed_max_size (`int`, defaults to `96`) -- The maximum latent height/width of positional embeddings.
- extra_conditioning_channels (`int`, defaults to `0`) -- The number of extra channels to use for conditioning for patch embedding.
- dual_attention_layers (`Tuple[int, ...]`, defaults to `()`) -- The number of dual-stream transformer blocks to use.
- qk_norm (`str`, *optional*, defaults to `None`) -- The normalization to use for query and key in the attention layer. If `None`, no normalization is used.
- pos_embed_type (`str`, defaults to `"sincos"`) -- The type of positional embedding to use. Choose between `"sincos"` and `None`.
- use_pos_embed (`bool`, defaults to `True`) -- Whether to use positional embeddings.
- force_zeros_for_pooled_projection (`bool`, defaults to `True`) -- Whether to force zeros for pooled projection embeddings. This is handled in the pipelines by reading the config value of the ControlNet model.

ControlNet model for Stable Diffusion 3.
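These values can be read back from a pretrained checkpoint's `config` attribute; for instance, pipelines consult `force_zeros_for_pooled_projection` at runtime, as noted above. A minimal sketch (the printed values depend on the checkpoint):

```python
from diffusers.models import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")

# Every parameter listed above is available on the config object.
print(controlnet.config.num_layers)
print(controlnet.config.extra_conditioning_channels)

# Pipelines read this flag to decide how to build the pooled projections.
print(controlnet.config.force_zeros_for_pooled_projection)
```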
### enable_forward_chunking[[diffusers.SD3ControlNetModel.enable_forward_chunking]]

Parameters:

- chunk_size (`int`, *optional*) -- The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually over each tensor of dim=`dim`.
- dim (`int`, *optional*, defaults to `0`) -- The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length).

Enables feed-forward chunking, processing the feed-forward layers in smaller chunks to reduce peak memory usage.
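A minimal usage sketch; chunking trades a little speed for lower peak memory in the feed-forward layers:

```python
from diffusers.models import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")

# Process the feed-forward layers one example at a time over the batch dimension.
controlnet.enable_forward_chunking(chunk_size=1, dim=0)
```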
### forward[[diffusers.SD3ControlNetModel.forward]]

Parameters:

- hidden_states (`torch.Tensor` of shape `(batch_size, channel, height, width)`) -- Input `hidden_states`.
- controlnet_cond (`torch.Tensor`) -- The conditional input tensor of shape `(batch_size, sequence_length, hidden_size)`.
- conditioning_scale (`float`, defaults to `1.0`) -- The scale factor for ControlNet outputs.
- encoder_hidden_states (`torch.Tensor` of shape `(batch_size, sequence_len, embed_dims)`) -- Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- pooled_projections (`torch.Tensor` of shape `(batch_size, projection_dim)`) -- Embeddings projected from the embeddings of input conditions.
- timestep (`torch.LongTensor`) -- Used to indicate denoising step.
- joint_attention_kwargs (`dict`, *optional*) -- A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in diffusers.models.attention_processor.
- return_dict (`bool`, *optional*, defaults to `True`) -- Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain tuple.

Returns: If `return_dict` is `True`, a `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a `tuple` where the first element is the sample tensor.

The `SD3Transformer2DModel` forward method.
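As an illustration, `forward` can be exercised on a tiny, randomly initialized model. The miniature configuration and all tensor shapes below are assumptions chosen only to keep the example small; note that in this sketch the conditioning input is given the same latent shape as `hidden_states`, which is what the SD3 conditioning patch embedding consumes when `extra_conditioning_channels` is `0`:

```python
import torch
from diffusers.models import SD3ControlNetModel

# Hypothetical tiny config; real checkpoints use the defaults listed above.
controlnet = SD3ControlNetModel(
    sample_size=32,
    num_layers=2,
    attention_head_dim=64,
    num_attention_heads=2,       # inner dim = 2 * 64 = 128
    caption_projection_dim=128,  # must match the inner dim
)

hidden_states = torch.randn(1, 16, 32, 32)        # noisy latents
controlnet_cond = torch.randn(1, 16, 32, 32)      # conditioning latents
encoder_hidden_states = torch.randn(1, 77, 4096)  # prompt embeddings
pooled_projections = torch.randn(1, 2048)         # pooled text embedding
timestep = torch.tensor([500], dtype=torch.long)

out = controlnet(
    hidden_states=hidden_states,
    controlnet_cond=controlnet_cond,
    conditioning_scale=1.0,
    encoder_hidden_states=encoder_hidden_states,
    pooled_projections=pooled_projections,
    timestep=timestep,
)
# One residual tensor per transformer block, scaled by conditioning_scale.
print(len(out.controlnet_block_samples))  # 2
```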
### fuse_qkv_projections[[diffusers.SD3ControlNetModel.fuse_qkv_projections]]
Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.
> This API is 🧪 experimental.
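A minimal usage sketch; `unfuse_qkv_projections()` (documented below) restores the original projection matrices:

```python
from diffusers.models import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")

# Fuse QKV projections before inference to reduce the number of matmul calls.
controlnet.fuse_qkv_projections()

# ... run inference ...

# Restore the original, unfused projection matrices.
controlnet.unfuse_qkv_projections()
```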
### set_attn_processor[[diffusers.SD3ControlNetModel.set_attn_processor]]

Parameters:

- processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`) -- The instantiated processor class or a dictionary of processor classes that will be set as the processor for all `Attention` layers. If `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
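A sketch of both calling conventions, assuming `JointAttnProcessor2_0` (the SD3-style joint attention processor from `diffusers.models.attention_processor`) as the processor class:

```python
from diffusers.models import SD3ControlNetModel
from diffusers.models.attention_processor import JointAttnProcessor2_0

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")

# Option 1: one processor instance shared by every attention layer.
controlnet.set_attn_processor(JointAttnProcessor2_0())

# Option 2: a dict keyed by module path, one entry per attention layer.
processors = {name: JointAttnProcessor2_0() for name in controlnet.attn_processors}
controlnet.set_attn_processor(processors)
```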
### unfuse_qkv_projections[[diffusers.SD3ControlNetModel.unfuse_qkv_projections]]

Disables the fused QKV projection if enabled.

> This API is 🧪 experimental.
## SD3ControlNetOutput[[diffusers.models.controlnets.SD3ControlNetOutput]]

class diffusers.models.controlnets.SD3ControlNetOutput