CogView3PlusTransformer2DModel
A Diffusion Transformer model for 2D data from CogView3Plus was introduced in CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion by Tsinghua University & ZhipuAI.
The model can be loaded with the following code snippet.
import torch
from diffusers import CogView3PlusTransformer2DModel

transformer = CogView3PlusTransformer2DModel.from_pretrained("THUDM/CogView3Plus-3b", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
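In typical use the transformer is not called directly but driven by a pipeline. As a minimal sketch (assuming the checkpoint ships the remaining pipeline components under the same repo id), the loaded transformer can be handed to CogView3PlusPipeline:

import torch
from diffusers import CogView3PlusPipeline, CogView3PlusTransformer2DModel

# Load the transformer separately, then let the pipeline pull the VAE, text
# encoder, and scheduler from the same checkpoint.
transformer = CogView3PlusTransformer2DModel.from_pretrained(
    "THUDM/CogView3Plus-3b", subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = CogView3PlusPipeline.from_pretrained(
    "THUDM/CogView3Plus-3b", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")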
class diffusers.CogView3PlusTransformer2DModel
The Transformer model introduced in CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion.
Parameters:
patch_size (int, defaults to 2) : The size of the patches to use in the patch embedding layer.
in_channels (int, defaults to 16) : The number of channels in the input.
num_layers (int, defaults to 30) : The number of layers of Transformer blocks to use.
attention_head_dim (int, defaults to 40) : The number of channels in each head.
num_attention_heads (int, defaults to 64) : The number of heads to use for multi-head attention.
out_channels (int, defaults to 16) : The number of channels in the output.
text_embed_dim (int, defaults to 4096) : Input dimension of text embeddings from the text encoder.
time_embed_dim (int, defaults to 512) : Output dimension of timestep embeddings.
condition_dim (int, defaults to 256) : The embedding dimension of the input SDXL-style resolution conditions (original_size, target_size, crop_coords).
pos_embed_max_size (int, defaults to 128) : The maximum resolution of the positional embeddings, from which slices of shape H x W are taken and added to input patched latents, where H and W are the latent height and width respectively. A value of 128 means that the maximum supported height and width for image generation is 128 * vae_scale_factor * patch_size => 128 * 8 * 2 => 2048.
sample_size (int, defaults to 128) : The base resolution of input latents. If height/width is not provided during generation, this value is used to determine the resolution as sample_size * vae_scale_factor => 128 * 8 => 1024.

forward

The CogView3PlusTransformer2DModel forward method.

forward(hidden_states: Tensor, encoder_hidden_states: Tensor, timestep: LongTensor, original_size: Tensor, target_size: Tensor, crop_coords: Tensor, return_dict: bool = True)

Source: https://github.com/huggingface/diffusers/blob/vr_12652/src/diffusers/models/transformers/transformer_cogview3plus.py#L225

Parameters:
hidden_states (torch.Tensor) : Input hidden_states of shape (batch_size, channel, height, width).
encoder_hidden_states (torch.Tensor) : Conditional embeddings (embeddings computed from the input conditions such as prompts) of shape (batch_size, sequence_len, text_embed_dim).
timestep (torch.LongTensor) : Used to indicate the denoising step.
original_size (torch.Tensor) : CogView3 uses SDXL-like micro-conditioning for the original image size, as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
target_size (torch.Tensor) : CogView3 uses SDXL-like micro-conditioning for the target image size, as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
crop_coords (torch.Tensor) : CogView3 uses SDXL-like micro-conditioning for crop coordinates, as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
return_dict (bool, optional, defaults to True) : Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

Returns:
torch.Tensor or ~models.transformer_2d.Transformer2DModelOutput : The denoised latents using the provided inputs as conditioning.
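For a quick shape check, here is a minimal smoke-test sketch of a single forward call with dummy tensors. The shapes follow the parameter docs above; the text sequence length (16) and the timestep value are arbitrary placeholders, and in real use the pipeline's VAE and text encoder produce these inputs.

import torch
from diffusers import CogView3PlusTransformer2DModel

transformer = CogView3PlusTransformer2DModel.from_pretrained(
    "THUDM/CogView3Plus-3b", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")

batch = 1
# A 1024 x 1024 image maps to 1024 / vae_scale_factor = 1024 / 8 = 128 latent pixels per side.
hidden_states = torch.randn(batch, 16, 128, 128, dtype=torch.bfloat16, device="cuda")
# Sequence length 16 is an arbitrary placeholder; text_embed_dim is 4096.
encoder_hidden_states = torch.randn(batch, 16, 4096, dtype=torch.bfloat16, device="cuda")
timestep = torch.tensor([999], device="cuda")
# SDXL-style micro-conditioning: (height, width) pairs and top-left crop coordinates.
original_size = torch.tensor([[1024.0, 1024.0]], dtype=torch.bfloat16, device="cuda")
target_size = torch.tensor([[1024.0, 1024.0]], dtype=torch.bfloat16, device="cuda")
crop_coords = torch.tensor([[0.0, 0.0]], dtype=torch.bfloat16, device="cuda")

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        timestep=timestep,
        original_size=original_size,
        target_size=target_size,
        crop_coords=crop_coords,
        return_dict=True,
    )
print(output.sample.shape)  # torch.Size([1, 16, 128, 128]) -- denoised latents, out_channels = 16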
class diffusers.models.modeling_outputs.Transformer2DModelOutput
The output of Transformer2DModel.
Parameters:
sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) : The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.
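As a tiny sketch of how this wrapper behaves, it can be constructed directly (the shape below is illustrative); in practice it is returned by the transformer's forward when return_dict=True.

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# Build the output wrapper directly to show its single field.
output = Transformer2DModelOutput(sample=torch.zeros(1, 16, 128, 128))
print(output.sample.shape)  # torch.Size([1, 16, 128, 128])
# With return_dict=False the forward returns a plain tuple instead, i.e. (sample,).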