CosmosTransformer3DModel
A Diffusion Transformer model for 3D video-like data was introduced in [Cosmos World Foundation Model Platform for Physical AI](https://huggingface.co/papers/2501.03575) by NVIDIA.
The model can be loaded with the following code snippet.
```python
import torch
from diffusers import CosmosTransformer3DModel

transformer = CosmosTransformer3DModel.from_pretrained("nvidia/Cosmos-1.0-Diffusion-7B-Text2World", subfolder="transformer", torch_dtype=torch.bfloat16)
```
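Once loaded, the transformer can be handed to a Cosmos pipeline like any other diffusers component. A minimal sketch, assuming CosmosTextToWorldPipeline is exposed by your installed diffusers version and that its output carries a frames field:

```python
import torch
from diffusers import CosmosTextToWorldPipeline, CosmosTransformer3DModel

# Load the transformer separately (e.g., a fine-tuned or quantized variant)
# and pass it to the pipeline in place of the default weights.
transformer = CosmosTransformer3DModel.from_pretrained(
    "nvidia/Cosmos-1.0-Diffusion-7B-Text2World", subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = CosmosTextToWorldPipeline.from_pretrained(
    "nvidia/Cosmos-1.0-Diffusion-7B-Text2World", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

video = pipe(prompt="A robot arm stacking wooden blocks on a table").frames[0]
```

Passing the transformer explicitly is useful when you want to swap in your own variant while reusing the rest of the pretrained pipeline.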
CosmosTransformer3DModel[[diffusers.CosmosTransformer3DModel]]
A Transformer model for video-like data used in Cosmos.
Parameters:
in_channels (int, defaults to 16) : The number of channels in the input.
out_channels (int, defaults to 16) : The number of channels in the output.
num_attention_heads (int, defaults to 32) : The number of heads to use for multi-head attention.
attention_head_dim (int, defaults to 128) : The number of channels in each attention head.
num_layers (int, defaults to 28) : The number of layers of transformer blocks to use.
mlp_ratio (float, defaults to 4.0) : The ratio of the hidden layer size to the input size in the feedforward network.
text_embed_dim (int, defaults to 4096) : Input dimension of text embeddings from the text encoder.
adaln_lora_dim (int, defaults to 256) : The hidden dimension of the Adaptive LayerNorm LoRA layer.
max_size (Tuple[int, int, int], defaults to (128, 240, 240)) : The maximum size of the input latent tensors in the temporal, height, and width dimensions.
patch_size (Tuple[int, int, int], defaults to (1, 2, 2)) : The patch size to use for patchifying the input latent tensors in the temporal, height, and width dimensions.
rope_scale (Tuple[float, float, float], defaults to (2.0, 1.0, 1.0)) : The scaling factor to use for RoPE in the temporal, height, and width dimensions.
concat_padding_mask (bool, defaults to True) : Whether to concatenate the padding mask to the input latent tensors.
extra_pos_embed_type (str, optional, defaults to learnable) : The type of extra positional embeddings to use. Can be one of None or learnable.
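Because these parameters map one-to-one onto the model config, a tiny randomly initialized instance is handy for sanity-checking tensor shapes before downloading a multi-billion-parameter checkpoint. The sketch below is illustrative only: the forward argument names (hidden_states, timestep, encoder_hidden_states, padding_mask) and the exact input shapes are assumptions inferred from this parameter list, not an authoritative API reference.

```python
import torch
from diffusers import CosmosTransformer3DModel

# Tiny config for a quick shape check; real checkpoints use much larger values.
transformer = CosmosTransformer3DModel(
    in_channels=16,
    out_channels=16,
    num_attention_heads=2,
    attention_head_dim=16,
    num_layers=2,
    text_embed_dim=32,
)

batch, frames, height, width = 1, 2, 8, 8
hidden_states = torch.randn(batch, 16, frames, height, width)  # latent video: (B, C, T, H, W)
encoder_hidden_states = torch.randn(batch, 77, 32)             # text embeddings: (B, seq_len, text_embed_dim)
timestep = torch.randint(0, 1000, (batch,))
padding_mask = torch.zeros(batch, 1, height, width)            # all-zeros mask; concatenated since concat_padding_mask=True

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        timestep=timestep,
        encoder_hidden_states=encoder_hidden_states,
        padding_mask=padding_mask,
    )
print(output.sample.shape)  # expected: (B, out_channels, T, H, W)
```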
Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]
The output of Transformer2DModel.
Parameters:
sample (torch.Tensor of shape (batch_size, num_channels, height, width) or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) : The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.
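Continuing the hypothetical forward pass sketched above, the tensor of interest is the sample field; with return_dict=False (the standard diffusers convention) the same tensor comes back as the first element of a plain tuple:

```python
# With return_dict=True (the default) the forward call returns a
# Transformer2DModelOutput; the predicted latents live in .sample.
output = transformer(
    hidden_states=hidden_states,
    timestep=timestep,
    encoder_hidden_states=encoder_hidden_states,
    padding_mask=padding_mask,
)
predicted_latents = output.sample

# With return_dict=False the same tensor is the first tuple element.
(predicted_latents,) = transformer(
    hidden_states=hidden_states,
    timestep=timestep,
    encoder_hidden_states=encoder_hidden_states,
    padding_mask=padding_mask,
    return_dict=False,
)
```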