# AutoencoderKLWan

The 3D variational autoencoder (VAE) model with KL loss used in Wan 2.1 by the Alibaba Wan Team.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AutoencoderKLWan

vae = AutoencoderKLWan.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
```
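As a quick smoke test, the loaded VAE can round-trip a small random video tensor. This is a minimal sketch assuming the usual diffusers VAE API (`encode(...).latent_dist` and `decode(...).sample`) and the default 4x temporal / 8x spatial compression factors; the frame count and shapes are illustrative:

```python
import torch

# Dummy video: (batch, channels, frames, height, width).
# Wan's causal 3D VAE is typically fed 1 + 4k frames, e.g. 9, 17, 81.
video = torch.randn(1, 3, 9, 64, 64)

with torch.no_grad():
    # reuses `vae` from the loading snippet above
    latents = vae.encode(video).latent_dist.sample()  # -> (1, 16, 3, 8, 8) with default factors
    reconstruction = vae.decode(latents).sample       # -> (1, 3, 9, 64, 64)
```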

## AutoencoderKLWan

### class diffusers.AutoencoderKLWan

[source](https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L954)

```
( base_dim: int = 96,
  decoder_base_dim: Optional[int] = None,
  z_dim: int = 16,
  dim_mult: Tuple[int] = [1, 2, 4, 4],
  num_res_blocks: int = 2,
  attn_scales: List[float] = [],
  temperal_downsample: List[bool] = [False, True, True],
  dropout: float = 0.0,
  latents_mean: List[float] = [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921],
  latents_std: List[float] = [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916],
  is_residual: bool = False,
  in_channels: int = 3,
  out_channels: int = 3,
  patch_size: Optional[int] = None,
  scale_factor_temporal: Optional[int] = 4,
  scale_factor_spatial: Optional[int] = 8 )
```

A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos. Introduced in [Wan 2.1].

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
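The `latents_mean` and `latents_std` config values are per-channel statistics of the latent space. Downstream Wan pipelines use them to normalize latents after encoding and denormalize them before decoding; a minimal sketch of that pattern, reusing `vae` and `latents` from the snippets above (variable names are illustrative):

```python
import torch

mean = torch.tensor(vae.config.latents_mean).view(1, vae.config.z_dim, 1, 1, 1)
std = torch.tensor(vae.config.latents_std).view(1, vae.config.z_dim, 1, 1, 1)

normalized = (latents - mean) / std      # after vae.encode(...)
denormalized = normalized * std + mean   # before vae.decode(...)
```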

#### decode

[source](https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/utils/accelerate_utils.py#L43)

`decode(*args, **kwargs)`

#### enable_tiling

[source](https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1086)

`enable_tiling(tile_sample_min_height: Optional[int] = None, tile_sample_min_width: Optional[int] = None, tile_sample_stride_height: Optional[float] = None, tile_sample_stride_width: Optional[float] = None)`

Parameters:

- **tile_sample_min_height** (`int`, *optional*) -- The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) -- The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_stride_height** (`int`, *optional*) -- The minimum amount of overlap between two consecutive vertical tiles. This is to ensure that there are no tiling artifacts produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) -- The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling artifacts produced across the width dimension.

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and allows processing larger images.
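For example, tiling can be enabled on the VAE before decoding large latents. A short sketch, reusing `vae` and `latents` from above; when no arguments are passed, the defaults from the source are used:

```python
# Split decoding into overlapping tiles to reduce peak memory.
vae.enable_tiling()  # optionally pass tile_sample_min_height/width and stride arguments

with torch.no_grad():
    video = vae.decode(latents).sample  # decoding now runs tile by tile
```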

#### forward

[source](https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1396)

`forward(sample: torch.Tensor, sample_posterior: bool = False, return_dict: bool = True, generator: Optional[torch.Generator] = None)`

Parameters:

- **sample** (`torch.Tensor`) -- Input sample.
- **return_dict** (`bool`, *optional*, defaults to `True`) -- Whether or not to return a `DecoderOutput` instead of a plain tuple.
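Assuming `forward` follows the usual diffusers VAE convention of encode, optionally sample from the posterior, then decode, a full round trip might look like this (reusing `vae` and `video` from the earlier sketch):

```python
import torch

out = vae(sample=video, sample_posterior=True, generator=torch.Generator().manual_seed(0))
reconstruction = out.sample  # a DecoderOutput is returned when return_dict=True
```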

#### tiled_decode

[source](https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1318)

`tiled_decode(z: torch.Tensor, return_dict: bool = True)`

Parameters:

- **z** (`torch.Tensor`) -- Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) -- Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.

Returns: `~models.vae.DecoderOutput` or `tuple` -- If `return_dict` is `True`, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is returned.

Decode a batch of images using a tiled decoder.

#### tiled_encode

[source](https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1252)

`tiled_encode(x: torch.Tensor)`

Parameters:

- **x** (`torch.Tensor`) -- Input batch of videos.

Returns: `torch.Tensor` -- The latent representation of the encoded videos.

Encode a batch of images using a tiled encoder.

## DecoderOutput

### class diffusers.models.autoencoders.vae.DecoderOutput

[source](https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/autoencoders/vae.py#L47)

`DecoderOutput(sample: torch.Tensor, commit_loss: Optional[torch.FloatTensor] = None)`

Parameters:

- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) -- The decoded output sample from the last layer of the model.

Output of decoding method.
