# AutoencoderKL
The variational autoencoder (VAE) model with KL loss was introduced in [Auto-Encoding Variational Bayes](https://huggingface.co/papers/1312.6114v11) by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images.
The abstract from the paper is:
*How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.*
## Loading from the original format
By default, [AutoencoderKL](/docs/diffusers/pr_12229/en/api/models/autoencoderkl#diffusers.AutoencoderKL) should be loaded with [from_pretrained()](/docs/diffusers/pr_12229/en/api/models/overview#diffusers.ModelMixin.from_pretrained), but it can also be loaded
from the original format using `FromOriginalModelMixin.from_single_file` as follows:
```py
from diffusers import AutoencoderKL
url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file
model = AutoencoderKL.from_single_file(url)
```
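For comparison, loading a Diffusers-format checkpoint with [from_pretrained()](/docs/diffusers/pr_12229/en/api/models/overview#diffusers.ModelMixin.from_pretrained) looks like the sketch below; the repository id is illustrative, and any Diffusers-format KL VAE (or a local path) works:
```py
from diffusers import AutoencoderKL

# Load a VAE stored in the Diffusers format; swap in your own repo id or local path.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
```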
## AutoencoderKL[[diffusers.AutoencoderKL]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.AutoencoderKL</name><anchor>diffusers.AutoencoderKL</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L38</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "down_block_types", "val": ": typing.Tuple[str] = ('DownEncoderBlock2D',)"}, {"name": "up_block_types", "val": ": typing.Tuple[str] = ('UpDecoderBlock2D',)"}, {"name": "block_out_channels", "val": ": typing.Tuple[int] = (64,)"}, {"name": "layers_per_block", "val": ": int = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "latent_channels", "val": ": int = 4"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "sample_size", "val": ": int = 32"}, {"name": "scaling_factor", "val": ": float = 0.18215"}, {"name": "shift_factor", "val": ": typing.Optional[float] = None"}, {"name": "latents_mean", "val": ": typing.Optional[typing.Tuple[float]] = None"}, {"name": "latents_std", "val": ": typing.Optional[typing.Tuple[float]] = None"}, {"name": "force_upcast", "val": ": bool = True"}, {"name": "use_quant_conv", "val": ": bool = True"}, {"name": "use_post_quant_conv", "val": ": bool = True"}, {"name": "mid_block_add_attention", "val": ": bool = True"}]</parameters><paramsdesc>- **in_channels** (int, *optional*, defaults to 3) -- Number of channels in the input image.
- **out_channels** (int, *optional*, defaults to 3) -- Number of channels in the output.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`) --
Tuple of downsample block types.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`) --
Tuple of upsample block types.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64,)`) --
Tuple of block output channels.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **latent_channels** (`int`, *optional*, defaults to 4) -- Number of channels in the latent space.
- **sample_size** (`int`, *optional*, defaults to `32`) -- Sample input size.
- **scaling_factor** (`float`, *optional*, defaults to 0.18215) --
The component-wise standard deviation of the trained latent space, computed using the first batch of the
training set. This is used to scale the latent space to have unit variance when training the diffusion
model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
diffusion model. When decoding, the latents are scaled back to the original scale with the formula
`z = 1 / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution
Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
- **force_upcast** (`bool`, *optional*, defaults to `True`) --
If enabled, forces the VAE to run in float32 for high-resolution image pipelines such as SD-XL. The VAE
can be fine-tuned / trained to a lower range without losing too much precision, in which case `force_upcast`
can be set to `False` (see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix).
- **mid_block_add_attention** (`bool`, *optional*, defaults to `True`) --
If enabled, the mid_block of the Encoder and Decoder will have attention blocks. If set to `False`, the
mid_block will only have resnet blocks.</paramsdesc><paramgroups>0</paramgroups></docstring>
A VAE model with KL loss for encoding images into latents and decoding latent representations into images.
This model inherits from [ModelMixin](/docs/diffusers/pr_12229/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).
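A minimal encode/decode round trip, assuming a Diffusers-format checkpoint (the repo id and random input below are illustrative stand-ins):
```py
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

# Stand-in for a preprocessed image batch scaled to [-1, 1].
image = torch.randn(1, 3, 512, 512)

with torch.no_grad():
    # encode() returns an AutoencoderKLOutput whose latent_dist can be sampled.
    latents = vae.encode(image).latent_dist.sample()
    # Scale for the diffusion model, then undo the scaling before decoding.
    latents = latents * vae.config.scaling_factor
    reconstruction = vae.decode(latents / vae.config.scaling_factor).sample
```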
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKL.decode</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKL.encode</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKL.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L163</source><parameters>[]</parameters></docstring>
Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKL.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L149</source><parameters>[]</parameters></docstring>
Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKL.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L156</source><parameters>[]</parameters></docstring>
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKL.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L141</source><parameters>[{"name": "use_tiling", "val": ": bool = True"}]</parameters></docstring>
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
</div>
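A short sketch of the memory-saving toggles above (the checkpoint id is illustrative):
```py
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

vae.enable_slicing()  # decode a batch one slice at a time
vae.enable_tiling()   # process large images tile by tile

# ... run encode/decode as usual; tiled outputs can differ slightly near tile seams ...

vae.disable_tiling()
vae.disable_slicing()
```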
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>forward</name><anchor>diffusers.AutoencoderKL.forward</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L501</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **sample_posterior** (`bool`, *optional*, defaults to `False`) --
Whether to sample from the posterior.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
Whether or not to return a `DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups></docstring>
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>fuse_qkv_projections</name><anchor>diffusers.AutoencoderKL.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L530</source><parameters>[]</parameters></docstring>
Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.
> [!WARNING]
> This API is 🧪 experimental.
</div>
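A hedged usage sketch, assuming inference-only use (the checkpoint id is illustrative; the separate projections are restored afterwards):
```py
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

vae.fuse_qkv_projections()    # fuse query/key/value into one projection per attention module
# ... run encode/decode here ...
vae.unfuse_qkv_projections()  # restore the separate projections
```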
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>set_attn_processor</name><anchor>diffusers.AutoencoderKL.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L196</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
The instantiated processor class or a dictionary of processor classes that will be set as the processor
for **all** `Attention` layers.
If `processor` is a dict, the key needs to define the path to the corresponding cross attention
processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>
Sets the attention processor to use to compute attention.
</div>
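For example, to apply one processor to every `Attention` layer (a minimal sketch; `AttnProcessor2_0` is one of the built-in processors, and the checkpoint id is illustrative):
```py
from diffusers import AutoencoderKL
from diffusers.models.attention_processor import AttnProcessor2_0

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# Same processor instance for all Attention layers.
vae.set_attn_processor(AttnProcessor2_0())

# Alternatively, pass a dict keyed by attention module path; the current
# mapping can be inspected via `vae.attn_processors`.
```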
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>set_default_attn_processor</name><anchor>diffusers.AutoencoderKL.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L231</source><parameters>[]</parameters></docstring>
Disables custom attention processors and sets the default attention implementation.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>tiled_decode</name><anchor>diffusers.AutoencoderKL.tiled_decode</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L452</source><parameters>[{"name": "z", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **z** (`torch.Tensor`) -- Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.vae.DecoderOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is
returned.</retdesc></docstring>
Decode a batch of images using a tiled decoder.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>tiled_encode</name><anchor>diffusers.AutoencoderKL.tiled_encode</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L390</source><parameters>[{"name": "x", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **x** (`torch.Tensor`) -- Input batch of images.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
Whether or not to return a `~models.autoencoder_kl.AutoencoderKLOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.autoencoder_kl.AutoencoderKLOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.autoencoder_kl.AutoencoderKLOutput` is returned, otherwise a plain
`tuple` is returned.</retdesc></docstring>
Encode a batch of images using a tiled encoder.
When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several
steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is
different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the
tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the
output, but they should be much less noticeable.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.AutoencoderKL.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/autoencoder_kl.py#L552</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.
> [!WARNING]
> This API is 🧪 experimental.
</div></div>
## AutoencoderKLOutput[[diffusers.models.modeling_outputs.AutoencoderKLOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.models.modeling_outputs.AutoencoderKLOutput</name><anchor>diffusers.models.modeling_outputs.AutoencoderKLOutput</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/modeling_outputs.py#L7</source><parameters>[{"name": "latent_dist", "val": ": DiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`DiagonalGaussianDistribution`) --
Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
`DiagonalGaussianDistribution` allows for sampling latents from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>
Output of AutoencoderKL encoding method.
</div>
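A short sketch of consuming this output (the checkpoint id and random input are illustrative):
```py
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

out = vae.encode(torch.randn(1, 3, 256, 256))
dist = out.latent_dist   # DiagonalGaussianDistribution
z = dist.sample()        # stochastic latents
z_mode = dist.mode()     # deterministic alternative (the distribution mean)
```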
## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>
Output of decoding method.
</div>