DiTTransformer2DModel
A Transformer model for image-like data from DiT.
class diffusers.DiTTransformer2DModel
A 2D Transformer model as introduced in DiT (https://huggingface.co/papers/2212.09748).
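As a quick usage sketch, the pretrained DiT weights on the Hub can be loaded directly into this class. The checkpoint id and the "transformer" subfolder below are assumptions about the repository layout, not something stated on this page.

```python
import torch
from diffusers import DiTTransformer2DModel

# Load pretrained DiT transformer weights (repo id and subfolder are assumptions;
# adjust them to the checkpoint you actually use).
model = DiTTransformer2DModel.from_pretrained(
    "facebook/DiT-XL-2-256",
    subfolder="transformer",
    torch_dtype=torch.float16,
)
model.eval()
```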
Parameters:
num_attention_heads (int, optional, defaults to 16) : The number of heads to use for multi-head attention.
attention_head_dim (int, optional, defaults to 72) : The number of channels in each head.
in_channels (int, defaults to 4) : The number of channels in the input.
out_channels (int, optional) : The number of channels in the output. Specify this parameter if the output channel number differs from the input.
num_layers (int, optional, defaults to 28) : The number of layers of Transformer blocks to use.
dropout (float, optional, defaults to 0.0) : The dropout probability to use within the Transformer blocks.
norm_num_groups (int, optional, defaults to 32) : Number of groups for group normalization within Transformer blocks.
attention_bias (bool, optional, defaults to True) : Whether the attention in the Transformer blocks should contain a bias parameter.
sample_size (int, defaults to 32) : The width of the latent images. This parameter is fixed during training.
patch_size (int, defaults to 2) : Size of the patches the model processes, relevant for architectures working on non-sequential data.
activation_fn (str, optional, defaults to "gelu-approximate") : Activation function to use in feed-forward networks within Transformer blocks.
num_embeds_ada_norm (int, optional, defaults to 1000) : Number of embeddings for AdaLayerNorm, fixed during training and affecting the maximum number of denoising steps during inference.
upcast_attention (bool, optional, defaults to False) : If true, upcasts the attention mechanism dimensions for potentially improved performance.
norm_type (str, optional, defaults to "ada_norm_zero") : Specifies the type of normalization used; can be 'ada_norm_zero'.
norm_elementwise_affine (bool, optional, defaults to False) : If true, enables element-wise affine parameters in the normalization layers.
norm_eps (float, optional, defaults to 1e-5) : A small constant added to the denominator in normalization layers to prevent division by zero.

forward

diffusers.DiTTransformer2DModel.forward (source: https://github.com/huggingface/diffusers/blob/vr_12249/src/diffusers/models/transformers/dit_transformer_2d.py#L148)

forward(hidden_states: torch.Tensor, timestep: Optional[torch.LongTensor] = None, class_labels: Optional[torch.LongTensor] = None, cross_attention_kwargs: Dict[str, Any] = None, return_dict: bool = True)

The DiTTransformer2DModel forward method.

Parameters:
hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) : Input hidden_states.
timestep (torch.LongTensor, optional) : Used to indicate the denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.
class_labels (torch.LongTensor of shape (batch size, num classes), optional) : Used to indicate class-label conditioning. Optional class labels to be applied as an embedding in AdaLayerNormZero.
cross_attention_kwargs (Dict[str, Any], optional) : A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
return_dict (bool, optional, defaults to True) : Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

Returns:
If return_dict is True, a ~models.transformer_2d.Transformer2DModelOutput is returned, otherwise a tuple where the first element is the sample tensor.
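The sketch below ties the constructor parameters and the forward signature together. It builds a deliberately tiny, randomly initialized model (the reduced head and layer counts are illustrative choices, not the documented defaults) and runs one denoising-style forward pass with dummy inputs.

```python
import torch
from diffusers import DiTTransformer2DModel

# Tiny illustrative configuration; the pretrained DiT-XL/2 checkpoints use the
# documented defaults (16 heads, head dim 72, 28 layers) instead.
model = DiTTransformer2DModel(
    num_attention_heads=2,
    attention_head_dim=8,
    in_channels=4,
    num_layers=2,
    sample_size=32,
    patch_size=2,
    num_embeds_ada_norm=1000,
    norm_type="ada_norm_zero",
)
model.eval()

batch = 2
latents = torch.randn(batch, 4, 32, 32)          # (batch, in_channels, sample_size, sample_size)
timestep = torch.randint(0, 1000, (batch,))      # denoising step, embedded via AdaLayerNorm
class_labels = torch.randint(0, 1000, (batch,))  # class conditioning, embedded via AdaLayerNormZero

with torch.no_grad():
    out = model(
        latents,
        timestep=timestep,
        class_labels=class_labels,
        return_dict=True,
    )

# With return_dict=True a Transformer2DModelOutput is returned; its .sample field
# has shape (batch, out_channels, sample_size, sample_size).
print(out.sample.shape)
```

With return_dict=False the same call returns a plain tuple whose first element is the sample tensor.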