# ControlNet with Stable Diffusion 3

StableDiffusion3ControlNetPipeline is an implementation of ControlNet for Stable Diffusion 3.

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that preserves the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This ControlNet code is mainly implemented by [The InstantX Team](https://huggingface.co/InstantX). The inpainting-related code was developed by [The Alimama Creative Team](https://huggingface.co/alimama-creative). You can find pre-trained checkpoints for SD3-ControlNet in the table below:
| ControlNet type | Developer | Link |
| --------------- | --------- | ---- |
| Canny | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Canny) |
| Depth | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Depth) |
| Pose | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Pose) |
| Tile | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Tile) |
| Inpainting | [The Alimama Creative Team](https://huggingface.co/alimama-creative) | [Link](https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting) |

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
## StableDiffusion3ControlNetPipeline[[diffusers.StableDiffusion3ControlNetPipeline]]

#### diffusers.StableDiffusion3ControlNetPipeline[[diffusers.StableDiffusion3ControlNetPipeline]]

[Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L143)
#### __call__[[diffusers.StableDiffusion3ControlNetPipeline.__call__]]

[Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L818)

- **prompt** (`str` or `List[str]`, *optional*) --
The prompt or prompts to guide the image generation. If not defined, `prompt_embeds` must be passed
instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is
used instead.
- **height** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
Guidance scale as defined in [Classifier-Free Diffusion
Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
of the [Imagen paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
`guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
linked to the text `prompt`, usually at the expense of lower image quality.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
The percentage of total steps at which the ControlNet stops applying.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
The ControlNet input condition to provide guidance to the `transformer` for generation. If the type is
specified as `torch.Tensor`, it is passed to the ControlNet as is. `PIL.Image.Image` can also be
accepted as an image. The dimensions of the output image default to `image`'s dimensions. If height
and/or width are passed, `image` is resized accordingly. If multiple ControlNets are specified in
`init`, images must be passed as a list such that each element of the list can be correctly batched for
input to a single ControlNet.
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
to the residual in the original `transformer`. If multiple ControlNets are specified in `init`, you can
set the corresponding scale as a list.
- **controlnet_pooled_projections** (`torch.FloatTensor` of shape `(batch_size, projection_dim)`) --
Embeddings projected from the embeddings of the ControlNet input conditions.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
The prompt or prompts not to guide the image generation. If not defined, `negative_prompt_embeds` must
be passed instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
The prompt or prompts not to guide the image generation, sent to `tokenizer_2` and
`text_encoder_2`. If not defined, `negative_prompt` is used instead.
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
The prompt or prompts not to guide the image generation, sent to `tokenizer_3` and
`text_encoder_3`. If not defined, `negative_prompt` is used instead.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
provided, text embeddings will be generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input
argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
If not provided, pooled text embeddings will be generated from the `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
weighting. If not provided, `negative_pooled_prompt_embeds` will be generated from the `negative_prompt`
input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
Pre-generated image embeddings for IP-Adapter. Should be a tensor of shape `(batch_size, num_images,
emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to
`True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
The output format of the generated image. Choose between
[PIL](https://pillow.readthedocs.io/en/stable/) (`PIL.Image.Image`) or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
Whether or not to return a `~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` instead
of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under
`self.processor` in
[diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
A function that is called at the end of each denoising step during inference. The function is called
with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
`callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
will be passed as the `callback_kwargs` argument. You will only be able to include variables listed in
the `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 256) --
Maximum sequence length to use with the `prompt`.

Function invoked when calling the pipeline for generation.
Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusion3ControlNetPipeline
>>> from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel
>>> from diffusers.utils import load_image

>>> controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16)

>>> pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> pipe.to("cuda")
>>> control_image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png"
... )
>>> prompt = "A bird in space"
>>> image = pipe(
...     prompt, control_image=control_image, height=1024, width=768, controlnet_conditioning_scale=0.7
... ).images[0]
>>> image.save("sd3.png")
```
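The example above imports `SD3MultiControlNetModel` without using it. As a hedged sketch (not an official example) of combining several ControlNets, assuming the Canny and Pose checkpoints from the table above and user-prepared control images `canny_image` and `pose_image`:

```py
>>> import torch
>>> from diffusers import StableDiffusion3ControlNetPipeline
>>> from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel

>>> # Wrap two ControlNets; their outputs are added together during denoising
>>> controlnet = SD3MultiControlNetModel(
...     [
...         SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16),
...         SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Pose", torch_dtype=torch.float16),
...     ]
... )
>>> pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet, torch_dtype=torch.float16
... ).to("cuda")

>>> # One control image and one conditioning scale per ControlNet
>>> image = pipe(
...     "A bird in space",
...     control_image=[canny_image, pose_image],
...     controlnet_conditioning_scale=[0.5, 0.7],
... ).images[0]
```

Passing a plain `List[SD3ControlNetModel]` to `controlnet=` should also work, since the pipeline wraps such a list itself.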
**Parameters:**

transformer ([SD3Transformer2DModel](/docs/diffusers/pr_11739/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel)) : Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
scheduler ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/pr_11739/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) : A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
vae ([AutoencoderKL](/docs/diffusers/pr_11739/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) : Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (`CLIPTextModelWithProjection`) : [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant, with an additional projection layer that is initialized with a diagonal matrix with the `hidden_size` as its dimension.
text_encoder_2 (`CLIPTextModelWithProjection`) : [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) variant.
text_encoder_3 (`T5EncoderModel`) : Frozen text encoder. Stable Diffusion 3 uses [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
tokenizer (`CLIPTokenizer`) : Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
tokenizer_2 (`CLIPTokenizer`) : Second tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
tokenizer_3 (`T5TokenizerFast`) : Tokenizer of class [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
controlnet ([SD3ControlNetModel](/docs/diffusers/pr_11739/en/api/models/controlnet_sd3#diffusers.SD3ControlNetModel) or `List[SD3ControlNetModel]` or `SD3MultiControlNetModel`) : Provides additional conditioning to the `transformer` during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning.
image_encoder (`SiglipVisionModel`, *optional*) : Pre-trained vision model for IP Adapter.
feature_extractor (`SiglipImageProcessor`, *optional*) : Image processor for IP Adapter.
**Returns:**

`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` or `tuple`:
`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.
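The `callback_on_step_end` mechanism described above can be sketched as follows; this is a hedged illustration, assuming a `pipe` and `control_image` prepared as in the example, with the function name `log_step` purely illustrative:

```py
>>> # Inspect (or modify) tensors at the end of every denoising step
>>> def log_step(pipeline, step, timestep, callback_kwargs):
...     latents = callback_kwargs["latents"]  # available because "latents" is in callback_on_step_end_tensor_inputs
...     print(f"step {step}, timestep {timestep}, latents shape {tuple(latents.shape)}")
...     return callback_kwargs  # returned dict is fed back into the pipeline

>>> image = pipe(
...     "A bird in space",
...     control_image=control_image,
...     callback_on_step_end=log_step,
...     callback_on_step_end_tensor_inputs=["latents"],
... ).images[0]
```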
#### encode_image[[diffusers.StableDiffusion3ControlNetPipeline.encode_image]]

[Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L741)

Encodes the given image into a feature representation using a pre-trained image encoder.

**Parameters:**

image (`PipelineImageInput`) : Input image to be encoded.
device (`torch.device`) : Torch device.

**Returns:**

`torch.Tensor`: The encoded image feature representation.
#### encode_prompt[[diffusers.StableDiffusion3ControlNetPipeline.encode_prompt]]

[Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L364)

**Parameters:**

prompt (`str` or `List[str]`, *optional*) : Prompt to be encoded.
prompt_2 (`str` or `List[str]`, *optional*) : The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in all text encoders.
prompt_3 (`str` or `List[str]`, *optional*) : The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is used in all text encoders.
device (`torch.device`) : Torch device.
num_images_per_prompt (`int`) : Number of images that should be generated per prompt.
do_classifier_free_guidance (`bool`) : Whether to use classifier-free guidance or not.
negative_prompt (`str` or `List[str]`, *optional*) : The prompt or prompts not to guide the image generation. If not defined, `negative_prompt_embeds` must be passed instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
negative_prompt_2 (`str` or `List[str]`, *optional*) : The prompt or prompts not to guide the image generation, sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in all the text encoders.
negative_prompt_3 (`str` or `List[str]`, *optional*) : The prompt or prompts not to guide the image generation, sent to `tokenizer_3` and `text_encoder_3`. If not defined, `negative_prompt` is used in all the text encoders.
prompt_embeds (`torch.FloatTensor`, *optional*) : Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
negative_prompt_embeds (`torch.FloatTensor`, *optional*) : Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
pooled_prompt_embeds (`torch.FloatTensor`, *optional*) : Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled text embeddings will be generated from the `prompt` input argument.
negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*) : Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, `negative_pooled_prompt_embeds` will be generated from the `negative_prompt` input argument.
clip_skip (`int`, *optional*) : Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
lora_scale (`float`, *optional*) : A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
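As a hedged usage sketch (argument names follow the parameter list above, but check your installed diffusers version): `encode_prompt` can be called directly to precompute embeddings once and reuse them across pipeline calls via the `prompt_embeds` family of arguments.

```py
>>> # Precompute embeddings once; the method returns four tensors for SD3's three text encoders
>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="A bird in space",
...     prompt_2=None,
...     prompt_3=None,
...     device="cuda",
...     do_classifier_free_guidance=True,
... )

>>> # Reuse the embeddings instead of the raw prompt
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
...     control_image=control_image,
... ).images[0]
```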
#### prepare_ip_adapter_image_embeds[[diffusers.StableDiffusion3ControlNetPipeline.prepare_ip_adapter_image_embeds]]

[Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L761)

Prepares image embeddings for use in the IP-Adapter.

Either `ip_adapter_image` or `ip_adapter_image_embeds` must be passed.

**Parameters:**

ip_adapter_image (`PipelineImageInput`, *optional*) : The input image to extract features from for the IP-Adapter.
ip_adapter_image_embeds (`torch.Tensor`, *optional*) : Precomputed image embeddings.
device (`torch.device`, *optional*) : Torch device.
num_images_per_prompt (`int`, defaults to 1) : Number of images that should be generated per prompt.
do_classifier_free_guidance (`bool`, defaults to `True`) : Whether to use classifier-free guidance or not.
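A hedged sketch of how this method could be used, assuming a `pipe` with an IP-Adapter and image encoder loaded, and a user-provided `reference_image` (the variable names are illustrative):

```py
>>> # Precompute IP-Adapter image embeddings once, then reuse across pipeline calls
>>> image_embeds = pipe.prepare_ip_adapter_image_embeds(
...     ip_adapter_image=reference_image,
...     ip_adapter_image_embeds=None,
...     device="cuda",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
... )
>>> image = pipe(
...     "A bird in space",
...     control_image=control_image,
...     ip_adapter_image_embeds=image_embeds,
... ).images[0]
```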
## StableDiffusion3ControlNetInpaintingPipeline[[diffusers.StableDiffusion3ControlNetInpaintingPipeline]]

#### diffusers.StableDiffusion3ControlNetInpaintingPipeline[[diffusers.StableDiffusion3ControlNetInpaintingPipeline]]

[Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L166)
#### __call__[[diffusers.StableDiffusion3ControlNetInpaintingPipeline.__call__]]

[Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L985)

- **prompt** (`str` or `List[str]`, *optional*) --
The prompt or prompts to guide the image generation. If not defined, `prompt_embeds` must be passed
instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is
used instead.
- **height** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
Guidance scale as defined in [Classifier-Free Diffusion
Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
of the [Imagen paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
`guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
linked to the text `prompt`, usually at the expense of lower image quality.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
The percentage of total steps at which the ControlNet stops applying.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`) --
`Image`, numpy array or tensor representing an image batch to be inpainted (the parts of the image to
be masked out with `control_mask` and repainted according to `prompt`). For both numpy arrays and
pytorch tensors, the expected value range is between `[0, 1]`. If it's a tensor or a list of tensors,
the expected shape should be `(B, C, H, W)`. If it is a numpy array or a list of arrays, the expected
shape should be `(B, H, W, C)` or `(H, W, C)`.
- **control_mask** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`) --
`Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)` and
for a numpy array `(B, H, W, 1)`, `(B, H, W)`, `(H, W, 1)`, or `(H, W)`.
| - **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) -- | |
| The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added | |
| to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set | |
| the corresponding scale as a list. | |
| - **controlnet_pooled_projections** (`torch.FloatTensor` of shape `(batch_size, projection_dim)`) -- | |
| Embeddings projected from the embeddings of controlnet input conditions. | |
| - **negative_prompt** (`str` or `List[str]`, *optional*) -- | |
| The prompt or prompts not to guide the image generation. If not defined, one has to pass | |
| `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is | |
| less than `1`). | |
| - **negative_prompt_2** (`str` or `List[str]`, *optional*) -- | |
| The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and | |
| `text_encoder_2`. If not defined, `negative_prompt` is used instead. | |
| - **negative_prompt_3** (`str` or `List[str]`, *optional*) -- | |
| The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and | |
| `text_encoder_3`. If not defined, `negative_prompt` is used instead. | |
| - **num_images_per_prompt** (`int`, *optional*, defaults to 1) -- | |
| The number of images to generate per prompt. | |
| - **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) -- | |
| One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) | |
| to make generation deterministic. | |
| - **latents** (`torch.FloatTensor`, *optional*) -- | |
| Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image | |
| generation. Can be used to tweak the same generation with different prompts. If not provided, a latents | |
| tensor will be generated by sampling using the supplied random `generator`. | |
| - **prompt_embeds** (`torch.FloatTensor`, *optional*) -- | |
| Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not | |
| provided, text embeddings will be generated from `prompt` input argument. | |
| - **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) -- | |
| Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt | |
| weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input | |
| argument. | |
| - **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) -- | |
| Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. | |
| If not provided, pooled text embeddings will be generated from `prompt` input argument. | |
| - **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) -- | |
| Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt | |
| weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` | |
| input argument. | |
| - **ip_adapter_image** (`PipelineImageInput`, *optional*) -- | |
| Optional image input to work with IP Adapters. | |
| - **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) -- | |
| Pre-generated image embeddings for IP-Adapter. Should be a tensor of shape `(batch_size, num_images, | |
| emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to | |
| `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument. | |
| - **output_type** (`str`, *optional*, defaults to `"pil"`) -- | |
| The output format of the generated image. Choose between | |
| [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. | |
| - **return_dict** (`bool`, *optional*, defaults to `True`) -- | |
| Whether or not to return a `~pipelines.stable_diffusion_3.pipeline_output.StableDiffusion3PipelineOutput` | |
| instead of a plain tuple. | |
| - **joint_attention_kwargs** (`dict`, *optional*) -- | |
| A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under | |
| `self.processor` in | |
| [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). | |
| - **callback_on_step_end** (`Callable`, *optional*) -- | |
| A function that is called at the end of each denoising step during inference. The function is called | |
| with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, | |
| callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by | |
| `callback_on_step_end_tensor_inputs`. | |
| - **callback_on_step_end_tensor_inputs** (`List`, *optional*) -- | |
| The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list | |
| will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the | |
| `._callback_tensor_inputs` attribute of your pipeline class. | |
| - **max_sequence_length** (`int`, *optional*, defaults to 256) -- Maximum sequence length to use with the `prompt`. | |
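The accepted `control_mask` layouts listed above can be sketched as a small normalization helper. This is a hypothetical utility, not part of the diffusers API, showing how any of the accepted numpy shapes maps onto the canonical `(B, 1, H, W)` layout:

```python
import numpy as np

def normalize_mask(mask: np.ndarray) -> np.ndarray:
    """Map any accepted mask shape to (B, 1, H, W). Hypothetical helper, not diffusers API."""
    if mask.ndim == 2:                       # (H, W)
        mask = mask[None, None]              # -> (1, 1, H, W)
    elif mask.ndim == 3:
        if mask.shape[-1] == 1:              # (H, W, 1)
            mask = mask[None].transpose(0, 3, 1, 2)
        else:                                # (B, H, W)
            mask = mask[:, None]
    elif mask.ndim == 4:                     # (B, H, W, 1)
        mask = mask.transpose(0, 3, 1, 2)
    return mask
```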
| Function invoked when calling the pipeline for generation. | |
| Examples: | |
| ```py | |
| >>> import torch | |
| >>> from diffusers.utils import load_image | |
| >>> from diffusers.pipelines import StableDiffusion3ControlNetInpaintingPipeline | |
| >>> from diffusers.models.controlnet_sd3 import SD3ControlNetModel | |
| >>> controlnet = SD3ControlNetModel.from_pretrained( | |
| ... "alimama-creative/SD3-Controlnet-Inpainting", use_safetensors=True, extra_conditioning_channels=1 | |
| ... ) | |
| >>> pipe = StableDiffusion3ControlNetInpaintingPipeline.from_pretrained( | |
| ... "stabilityai/stable-diffusion-3-medium-diffusers", | |
| ... controlnet=controlnet, | |
| ... torch_dtype=torch.float16, | |
| ... ) | |
| >>> pipe.text_encoder.to(torch.float16) | |
| >>> pipe.controlnet.to(torch.float16) | |
| >>> pipe.to("cuda") | |
| >>> image = load_image( | |
| ... "https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog.png" | |
| ... ) | |
| >>> mask = load_image( | |
| ... "https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog_mask.png" | |
| ... ) | |
| >>> width = 1024 | |
| >>> height = 1024 | |
| >>> prompt = "A cat is sitting next to a puppy." | |
| >>> generator = torch.Generator(device="cuda").manual_seed(24) | |
| >>> res_image = pipe( | |
| ... negative_prompt="deformed, distorted, disfigured, poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, mutated hands and fingers, disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, NSFW", | |
| ... prompt=prompt, | |
| ... height=height, | |
| ... width=width, | |
| ... control_image=image, | |
| ... control_mask=mask, | |
| ... num_inference_steps=28, | |
| ... generator=generator, | |
| ... controlnet_conditioning_scale=0.95, | |
| ... guidance_scale=7, | |
| ... ).images[0] | |
| >>> res_image.save("sd3.png") | |
| ``` | |
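As a sketch of the `callback_on_step_end` hook described above, the following callback (an illustrative example, not part of diffusers) reads the intermediate latents at each step and returns the tensors back to the pipeline:

```python
def log_latents(pipe, step, timestep, callback_kwargs):
    # `callback_kwargs` holds the tensors requested via
    # `callback_on_step_end_tensor_inputs`; here we only read them.
    latents = callback_kwargs["latents"]
    print(f"step {step} (t={timestep}): latents mean = {float(latents.mean()):.4f}")
    # The callback must return a dict; returned entries overwrite pipeline tensors.
    return callback_kwargs

# Usage (assuming `pipe` is loaded as in the example above):
# pipe(..., callback_on_step_end=log_latents,
#      callback_on_step_end_tensor_inputs=["latents"])
```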
| **Parameters:** | |
| transformer ([SD3Transformer2DModel](/docs/diffusers/pr_11739/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel)) : Conditional Transformer (MMDiT) architecture to denoise the encoded image latents. | |
| scheduler ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/pr_11739/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) : A scheduler to be used in combination with `transformer` to denoise the encoded image latents. | |
| vae ([AutoencoderKL](/docs/diffusers/pr_11739/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) : Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. | |
| text_encoder (`CLIPTextModelWithProjection`) : [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant, with an additional projection layer that is initialized with a diagonal matrix of dimension `hidden_size`. | |
| text_encoder_2 (`CLIPTextModelWithProjection`) : [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) variant. | |
| text_encoder_3 (`T5EncoderModel`) : Frozen text-encoder. Stable Diffusion 3 uses [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant. | |
| tokenizer (`CLIPTokenizer`) : Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). | |
| tokenizer_2 (`CLIPTokenizer`) : Second Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). | |
| tokenizer_3 (`T5TokenizerFast`) : Tokenizer of class [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer). | |
| controlnet ([SD3ControlNetModel](/docs/diffusers/pr_11739/en/api/models/controlnet_sd3#diffusers.SD3ControlNetModel) or `List[SD3ControlNetModel]` or `SD3MultiControlNetModel`) : Provides additional conditioning to the `transformer` during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning. | |
| image_encoder (`PreTrainedModel`, *optional*) : Pre-trained Vision Model for IP Adapter. | |
| feature_extractor (`BaseImageProcessor`, *optional*) : Image processor for IP Adapter. | |
| **Returns:** | |
| `~pipelines.stable_diffusion_3.pipeline_output.StableDiffusion3PipelineOutput` or `tuple` | |
| `StableDiffusion3PipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When returning a | |
| tuple, the first element is a list with the generated images. | |
| #### encode_image[[diffusers.StableDiffusion3ControlNetInpaintingPipeline.encode_image]] | |
| [Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L908) | |
| Encodes the given image into a feature representation using a pre-trained image encoder. | |
| **Parameters:** | |
| image (`PipelineImageInput`) : Input image to be encoded. | |
| device (`torch.device`) : Torch device. | |
| **Returns:** | |
| ``torch.Tensor`` | |
| The encoded image feature representation. | |
| #### encode_prompt[[diffusers.StableDiffusion3ControlNetInpaintingPipeline.encode_prompt]] | |
| [Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L386) | |
| **Parameters:** | |
| prompt (`str` or `List[str]`, *optional*) : The prompt to be encoded. | |
| prompt_2 (`str` or `List[str]`, *optional*) : The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in all text-encoders | |
| prompt_3 (`str` or `List[str]`, *optional*) : The prompt or prompts to be sent to the `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is used in all text-encoders | |
| device (`torch.device`) : Torch device. | |
| num_images_per_prompt (`int`) : number of images that should be generated per prompt | |
| do_classifier_free_guidance (`bool`) : Whether to use classifier-free guidance or not. | |
| negative_prompt (`str` or `List[str]`, *optional*) : The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). | |
| negative_prompt_2 (`str` or `List[str]`, *optional*) : The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders. | |
| negative_prompt_3 (`str` or `List[str]`, *optional*) : The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `negative_prompt` is used in all the text-encoders. | |
| prompt_embeds (`torch.FloatTensor`, *optional*) : Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. | |
| negative_prompt_embeds (`torch.FloatTensor`, *optional*) : Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. | |
| pooled_prompt_embeds (`torch.FloatTensor`, *optional*) : Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled text embeddings will be generated from `prompt` input argument. | |
| negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*) : Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` input argument. | |
| clip_skip (`int`, *optional*) : Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. | |
| lora_scale (`float`, *optional*) : A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. | |
| #### prepare_ip_adapter_image_embeds[[diffusers.StableDiffusion3ControlNetInpaintingPipeline.prepare_ip_adapter_image_embeds]] | |
| [Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L928) | |
| Prepares image embeddings for use in the IP-Adapter. | |
| Either `ip_adapter_image` or `ip_adapter_image_embeds` must be passed. | |
| **Parameters:** | |
| ip_adapter_image (`PipelineImageInput`, *optional*) : The input image to extract features from for IP-Adapter. | |
| ip_adapter_image_embeds (`torch.Tensor`, *optional*) : Precomputed image embeddings. | |
| device (`torch.device`, *optional*) : Torch device. | |
| num_images_per_prompt (`int`, defaults to 1) : Number of images that should be generated per prompt. | |
| do_classifier_free_guidance (`bool`, defaults to `True`) : Whether to use classifier-free guidance or not. | |
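The note above that the precomputed embeddings "should contain the negative image embedding" when classifier-free guidance is active can be sketched as follows. This is an illustrative assumption about the layout (using numpy in place of torch): the negative and positive embeddings are stacked along the batch axis and later split back into two halves for the unguided and guided passes:

```python
import numpy as np

batch, num_images, emb_dim = 1, 4, 8
neg_embeds = np.zeros((batch, num_images, emb_dim))  # "unconditional" image embedding
pos_embeds = np.ones((batch, num_images, emb_dim))   # embedding from ip_adapter_image

# With do_classifier_free_guidance=True, both halves travel together...
stacked = np.concatenate([neg_embeds, pos_embeds], axis=0)  # shape (2, 4, 8)

# ...and are split again before the two transformer passes.
neg_half, pos_half = np.split(stacked, 2, axis=0)
```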
| ## StableDiffusion3PipelineOutput[[diffusers.pipelines.stable_diffusion_3.pipeline_output.StableDiffusion3PipelineOutput]] | |
| #### diffusers.pipelines.stable_diffusion_3.pipeline_output.StableDiffusion3PipelineOutput[[diffusers.pipelines.stable_diffusion_3.pipeline_output.StableDiffusion3PipelineOutput]] | |
| [Source](https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/pipelines/stable_diffusion_3/pipeline_output.py#L11) | |
| Output class for Stable Diffusion pipelines. | |
| **Parameters:** | |
| images (`List[PIL.Image.Image]` or `np.ndarray`) : List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline. | |