# SparseControlNetModel
SparseControlNetModel is an implementation of ControlNet for [AnimateDiff](https://huggingface.co/papers/2307.04725).
ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.
The SparseCtrl version of ControlNet was introduced in [SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models](https://huggingface.co/papers/2311.16933) for achieving controlled generation in text-to-video diffusion models by Yuwei Guo, Ceyuan Yang, Anyi Rao, Maneesh Agrawala, Dahua Lin, and Bo Dai.
The abstract from the paper is:
*The development of text-to-video (T2V), i.e., generating videos with a given text prompt, has been significantly advanced in recent years. However, relying solely on text prompts often results in ambiguous frame composition due to spatial uncertainty. The research community thus leverages the dense structure signals, e.g., per-frame depth/edge sequences, to enhance controllability, whose collection accordingly increases the burden of inference. In this work, we present SparseCtrl to enable flexible structure control with temporally sparse signals, requiring only one or a few inputs, as shown in Figure 1. It incorporates an additional condition encoder to process these sparse signals while leaving the pre-trained T2V model untouched. The proposed approach is compatible with various modalities, including sketches, depth maps, and RGB images, providing more practical control for video generation and promoting applications such as storyboarding, depth rendering, keyframe animation, and interpolation. Extensive experiments demonstrate the generalization of SparseCtrl on both original and personalized T2V generators. Codes and models will be publicly available at [this https URL](https://guoyww.github.io/projects/SparseCtrl).*
## Example for loading SparseControlNetModel
```python
import torch
from diffusers import SparseControlNetModel

# Load the fp32 checkpoints in float16 precision
# 1. Scribble checkpoint
controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectrl-scribble", torch_dtype=torch.float16)

# 2. RGB checkpoint
controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectrl-rgb", torch_dtype=torch.float16)

# To load the fp16 variant of the weights instead, pass `variant="fp16"` as an additional argument
```
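These checkpoints are meant to be used together with an AnimateDiff pipeline rather than on their own. The following is a minimal sketch of that setup, assuming the `AnimateDiffSparseControlNetPipeline` API and the example repositories shown here (`SG161222/Realistic_Vision_V5.1_noVAE` as the base model, `guoyww/animatediff-motion-adapter-v1-5-3` as the motion adapter); swap in your own base model and conditioning image as needed.

```python
import torch

from diffusers import AnimateDiffSparseControlNetPipeline, DPMSolverMultistepScheduler
from diffusers.models import MotionAdapter, SparseControlNetModel
from diffusers.utils import export_to_gif, load_image

# Load the motion adapter and the scribble SparseCtrl ControlNet in float16
motion_adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16)
controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectrl-scribble", torch_dtype=torch.float16)

# Assemble the pipeline around a Stable Diffusion 1.5-style base model
pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", algorithm_type="dpmsolver++", use_karras_sigmas=True
)

# SparseCtrl only needs one (or a few) conditioning frames; here a single
# scribble is applied to the first generated frame
conditioning_frame = load_image("path/to/scribble.png")  # replace with your own sketch/image

video = pipe(
    prompt="an aerial view of a futuristic city at night, neon lights, high quality",
    negative_prompt="low quality, worst quality",
    num_inference_steps=25,
    conditioning_frames=[conditioning_frame],
    controlnet_frame_indices=[0],
    controlnet_conditioning_scale=1.0,
    generator=torch.Generator().manual_seed(42),
).frames[0]
export_to_gif(video, "output.gif")
```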
## SparseControlNetModel[[diffusers.SparseControlNetModel]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.SparseControlNetModel</name><anchor>diffusers.SparseControlNetModel</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L96</source><parameters>[{"name": "in_channels", "val": ": int = 4"}, {"name": "conditioning_channels", "val": ": int = 4"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion')"}, {"name": "only_cross_attention", "val": ": typing.Union[bool, typing.Tuple[bool]] = False"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (320, 640, 1280, 1280)"}, {"name": "layers_per_block", "val": ": int = 2"}, {"name": "downsample_padding", "val": ": int = 1"}, {"name": "mid_block_scale_factor", "val": ": float = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": typing.Optional[int] = 32"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "cross_attention_dim", "val": ": int = 768"}, {"name": "transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 1"}, {"name": "transformer_layers_per_mid_block", "val": ": typing.Union[int, typing.Tuple[int], NoneType] = None"}, {"name": "temporal_transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 1"}, {"name": "attention_head_dim", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 8"}, {"name": "num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int, ...], NoneType] = None"}, {"name": "use_linear_projection", "val": ": bool = False"}, {"name": "upcast_attention", "val": ": bool = False"}, {"name": "resnet_time_scale_shift", "val": ": str = 'default'"}, {"name": "conditioning_embedding_out_channels", "val": ": typing.Optional[typing.Tuple[int, ...]] = (16, 32, 96, 256)"}, {"name": "global_pool_conditions", "val": ": bool = False"}, {"name": "controlnet_conditioning_channel_order", "val": ": str = 'rgb'"}, {"name": "motion_max_seq_length", "val": ": int = 32"}, {"name": "motion_num_attention_heads", "val": ": int = 8"}, {"name": "concat_conditioning_mask", "val": ": bool = True"}, {"name": "use_simplified_condition_embedding", "val": ": bool = True"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to 4) --
The number of channels in the input sample.
- **conditioning_channels** (`int`, defaults to 4) --
The number of input channels in the controlnet conditional embedding module. If
`concat_conditioning_mask` is `True`, the value provided here is incremented by 1.
- **flip_sin_to_cos** (`bool`, defaults to `True`) --
Whether to flip the sin to cos in the time embedding.
- **freq_shift** (`int`, defaults to 0) --
The frequency shift to apply to the time embedding.
- **down_block_types** (`tuple[str]`, defaults to `("CrossAttnDownBlockMotion", "CrossAttnDownBlockMotion", "CrossAttnDownBlockMotion", "DownBlockMotion")`) --
The tuple of downsample blocks to use.
- **only_cross_attention** (`Union[bool, Tuple[bool]]`, defaults to `False`) --
- **block_out_channels** (`tuple[int]`, defaults to `(320, 640, 1280, 1280)`) --
The tuple of output channels for each block.
- **layers_per_block** (`int`, defaults to 2) --
The number of layers per block.
- **downsample_padding** (`int`, defaults to 1) --
The padding to use for the downsampling convolution.
- **mid_block_scale_factor** (`float`, defaults to 1) --
The scale factor to use for the mid block.
- **act_fn** (`str`, defaults to "silu") --
The activation function to use.
- **norm_num_groups** (`int`, *optional*, defaults to 32) --
The number of groups to use for the normalization. If `None`, the normalization and activation layers are
skipped in post-processing.
- **norm_eps** (`float`, defaults to 1e-5) --
The epsilon to use for the normalization.
- **cross_attention_dim** (`int`, defaults to 768) --
The dimension of the cross attention features.
- **transformer_layers_per_block** (`int` or `Tuple[int]`, *optional*, defaults to 1) --
The number of transformer blocks of type `BasicTransformerBlock`. Only relevant for
`~models.unet_2d_blocks.CrossAttnDownBlock2D`, `~models.unet_2d_blocks.CrossAttnUpBlock2D`,
`~models.unet_2d_blocks.UNetMidBlock2DCrossAttn`.
- **transformer_layers_per_mid_block** (`int` or `Tuple[int]`, *optional*, defaults to `None`) --
The number of transformer layers to use in each layer in the middle block.
- **attention_head_dim** (`int` or `Tuple[int]`, defaults to 8) --
The dimension of the attention heads.
- **num_attention_heads** (`int` or `Tuple[int]`, *optional*) --
The number of heads to use for multi-head attention.
- **use_linear_projection** (`bool`, defaults to `False`) --
- **upcast_attention** (`bool`, defaults to `False`) --
- **resnet_time_scale_shift** (`str`, defaults to `"default"`) --
Time scale shift config for ResNet blocks (see `ResnetBlock2D`). Choose from `default` or `scale_shift`.
- **conditioning_embedding_out_channels** (`Tuple[int]`, defaults to `(16, 32, 96, 256)`) --
The tuple of output channel for each block in the `conditioning_embedding` layer.
- **global_pool_conditions** (`bool`, defaults to `False`) --
This parameter is currently unused.
- **controlnet_conditioning_channel_order** (`str`, defaults to `rgb`) --
- **motion_max_seq_length** (`int`, defaults to `32`) --
The maximum sequence length to use in the motion module.
- **motion_num_attention_heads** (`int` or `Tuple[int]`, defaults to `8`) --
The number of heads to use in each attention layer of the motion module.
- **concat_conditioning_mask** (`bool`, defaults to `True`) --
- **use_simplified_condition_embedding** (`bool`, defaults to `True`) --</paramsdesc><paramgroups>0</paramgroups></docstring>
A SparseControlNet model as described in [SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion
Models](https://huggingface.co/papers/2311.16933).
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>forward</name><anchor>diffusers.SparseControlNetModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L593</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "controlnet_cond", "val": ": Tensor"}, {"name": "conditioning_scale", "val": ": float = 1.0"}, {"name": "timestep_cond", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "conditioning_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
The noisy input tensor.
- **timestep** (`Union[torch.Tensor, float, int]`) --
The timestep at which to denoise the input.
- **encoder_hidden_states** (`torch.Tensor`) --
The encoder hidden states.
- **controlnet_cond** (`torch.Tensor`) --
The conditional input tensor containing the sparse conditioning frames.
- **conditioning_scale** (`float`, defaults to `1.0`) --
The scale factor for ControlNet outputs.
- **class_labels** (`torch.Tensor`, *optional*, defaults to `None`) --
Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
- **timestep_cond** (`torch.Tensor`, *optional*, defaults to `None`) --
Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the
timestep_embedding passed through the `self.time_embedding` layer to obtain the final timestep
embeddings.
- **attention_mask** (`torch.Tensor`, *optional*, defaults to `None`) --
An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
negative values to the attention scores corresponding to "discard" tokens.
- **added_cond_kwargs** (`dict`) --
Additional conditions for the Stable Diffusion XL UNet.
- **cross_attention_kwargs** (`dict[str]`, *optional*, defaults to `None`) --
A kwargs dictionary that if specified is passed along to the `AttnProcessor`.
- **guess_mode** (`bool`, defaults to `False`) --
In this mode, the ControlNet encoder tries its best to recognize the content of the input even if
you remove all prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **return_dict** (`bool`, defaults to `True`) --
Whether or not to return a `ControlNetOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`ControlNetOutput` **or** `tuple`</rettype><retdesc>If `return_dict` is `True`, a `ControlNetOutput` is returned, otherwise a tuple is
returned where the first element is the sample tensor.</retdesc></docstring>
The [SparseControlNetModel](/docs/diffusers/pr_12595/en/api/models/controlnet_sparsectrl#diffusers.SparseControlNetModel) forward method.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>from_unet</name><anchor>diffusers.SparseControlNetModel.from_unet</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L387</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet_conditioning_channel_order", "val": ": str = 'rgb'"}, {"name": "conditioning_embedding_out_channels", "val": ": typing.Optional[typing.Tuple[int, ...]] = (16, 32, 96, 256)"}, {"name": "load_weights_from_unet", "val": ": bool = True"}, {"name": "conditioning_channels", "val": ": int = 3"}]</parameters><paramsdesc>- **unet** (`UNet2DConditionModel`) --
The UNet model weights to copy to the [SparseControlNetModel](/docs/diffusers/pr_12595/en/api/models/controlnet_sparsectrl#diffusers.SparseControlNetModel). All configuration options are also
copied where applicable.</paramsdesc><paramgroups>0</paramgroups></docstring>
Instantiate a [SparseControlNetModel](/docs/diffusers/pr_12595/en/api/models/controlnet_sparsectrl#diffusers.SparseControlNetModel) from [UNet2DConditionModel](/docs/diffusers/pr_12595/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel).
</div>
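A common way to construct a SparseControlNetModel is to initialize it from the weights of an existing text-to-image UNet. A minimal sketch, assuming a Stable Diffusion 1.5-style UNet (the repository id is only an example):

```python
from diffusers import SparseControlNetModel, UNet2DConditionModel

# Load a Stable Diffusion 1.5 UNet and copy its matching weights and config
# into a freshly initialized SparseControlNetModel
unet = UNet2DConditionModel.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet")

# conditioning_channels=3 matches pixel-space conditioning frames (the default)
controlnet = SparseControlNetModel.from_unet(unet, conditioning_channels=3)
```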
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>set_attention_slice</name><anchor>diffusers.SparseControlNetModel.set_attention_slice</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L528</source><parameters>[{"name": "slice_size", "val": ": typing.Union[str, int, typing.List[int]]"}]</parameters><paramsdesc>- **slice_size** (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`) --
When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If
`"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is
provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Enable sliced attention computation.
When this option is enabled, the attention module splits the input tensor in slices to compute attention in
several steps. This is useful for saving some memory in exchange for a small decrease in speed.
</div>
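For example, a small sketch reusing one of the checkpoints from the loading example above:

```python
import torch
from diffusers import SparseControlNetModel

controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectrl-rgb", torch_dtype=torch.float16)

# "auto" halves the attention input so attention is computed in two steps,
# lowering peak memory at a small speed cost
controlnet.set_attention_slice("auto")
```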
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>set_attn_processor</name><anchor>diffusers.SparseControlNetModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L477</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
The instantiated processor class or a dictionary of processor classes that will be set as the processor
for **all** `Attention` layers.
If `processor` is a dict, the key needs to define the path to the corresponding cross attention
processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>
Sets the attention processor to use to compute attention.
</div>
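A short sketch switching every `Attention` layer to the torch 2.x scaled-dot-product attention processor and then restoring the defaults with `set_default_attn_processor()` (documented below), again reusing a checkpoint from the loading example:

```python
import torch
from diffusers import SparseControlNetModel
from diffusers.models.attention_processor import AttnProcessor2_0

controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectrl-rgb", torch_dtype=torch.float16)

# Apply a single processor instance to all Attention layers
controlnet.set_attn_processor(AttnProcessor2_0())

# Revert to the library's default attention implementation
controlnet.set_default_attn_processor()
```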
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>set_default_attn_processor</name><anchor>diffusers.SparseControlNetModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L512</source><parameters>[]</parameters></docstring>
Disables custom attention processors and sets the default attention implementation.
</div></div>
## SparseControlNetOutput[[diffusers.models.controlnet_sparsectrl.SparseControlNetOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.models.controlnet_sparsectrl.SparseControlNetOutput</name><anchor>diffusers.models.controlnet_sparsectrl.SparseControlNetOutput</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/controlnet_sparsectrl.py#L30</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
</div>