# AutoencoderKLKVAEVideo

The 3D variational autoencoder (VAE) model with KL loss.

The model can be loaded with the following code snippet:

```python
import torch
from diffusers import AutoencoderKLKVAEVideo

vae = AutoencoderKLKVAEVideo.from_pretrained("kandinskylab/KVAE-3D-1.0", subfolder="diffusers", torch_dtype=torch.float16)
```
AutoencoderKLKVAEVideo[[diffusers.AutoencoderKLKVAEVideo]]
A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos. Used in KVAE.
This model inherits from `ModelMixin`. Check the superclass documentation for the generic methods implemented for all models (such as downloading or saving).
decode[[diffusers.AutoencoderKLKVAEVideo.decode]]

( *args, **kwargs )

Parameters:
- **ch** (`int`, *optional*, defaults to 128): Base channel count.
- **ch_mult** (`Tuple[int]`, *optional*, defaults to `(1, 2, 4, 8)`): Channel multipliers per level.
- **num_res_blocks** (`int`, *optional*, defaults to 2): Number of residual blocks per level.
- **in_channels** (`int`, *optional*, defaults to 3): Number of input channels.
- **out_ch** (`int`, *optional*, defaults to 3): Number of output channels.
- **z_channels** (`int`, *optional*, defaults to 16): Number of latent channels.
- **temporal_compress_times** (`int`, *optional*, defaults to 4): Temporal compression factor.
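With the defaults above, each of the `len(ch_mult) - 1 = 3` downsampling levels halves the height and width (an 8x spatial factor), while `temporal_compress_times = 4` compresses the frame axis. A minimal pure-Python sketch of the resulting latent grid; the 8x spatial factor inferred from `ch_mult` and the "keep the first frame" rounding (common in causal video VAEs) are assumptions, not taken from the model code:

```python
def latent_shape(num_frames, height, width,
                 ch_mult=(1, 2, 4, 8), z_channels=16,
                 temporal_compress_times=4):
    """Estimate the latent grid produced by the 3D VAE (sketch).

    Assumes each of the len(ch_mult) - 1 levels halves H and W, and
    that the temporal axis keeps the first frame and compresses the
    rest by temporal_compress_times (an assumption, not model code).
    """
    spatial = 2 ** (len(ch_mult) - 1)  # 2**3 = 8x spatial downsampling
    latent_frames = 1 + (num_frames - 1) // temporal_compress_times
    return (z_channels, latent_frames, height // spatial, width // spatial)

print(latent_shape(17, 480, 720))  # (16, 5, 60, 90)
```

This makes the memory trade-off concrete: a 17-frame 480x720 clip becomes a 16-channel latent of 5 frames at 60x90.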
disable_slicing[[diffusers.AutoencoderKLKVAEVideo.disable_slicing]]

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing decoding in one step.

enable_slicing[[diffusers.AutoencoderKLKVAEVideo.enable_slicing]]

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
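Sliced decoding trades one large batched decode for several per-sample decodes, bounding peak memory by a single sample. A minimal sketch of the mechanism, where `latents` is a plain list standing in for a batched tensor and `decode_one` is a stand-in for the single-sample decoder (both names are illustrative, not the actual diffusers API):

```python
def decode_sliced(latents, decode_one, use_slicing=True):
    """Decode a batch one sample at a time when slicing is enabled.

    `latents` stands in for a batched tensor and `decode_one` for the
    single-sample decoder; both are illustrative, not diffusers API.
    """
    if use_slicing and len(latents) > 1:
        # Peak memory now scales with one sample, not the whole batch.
        return [decode_one(z) for z in latents]
    return decode_one(latents)

# Toy decoder: "decoding" just scales each latent value.
decoded = decode_sliced([[1, 2], [3, 4]], lambda z: [2 * v for v in z])
print(decoded)  # [[2, 4], [6, 8]]
```

With slicing disabled (`use_slicing=False`), the whole batch goes through the decoder in one call, which is faster but uses more memory.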