DiTTransformer2DModel
A Transformer model for image-like data from DiT.
DiTTransformer2DModel
class diffusers.DiTTransformer2DModel
- attention_head_dim (int, optional, defaults to 72) -- The number of channels in each head.
- in_channels (int, defaults to 4) -- The number of channels in the input.
- out_channels (int, optional) -- The number of channels in the output. Specify this parameter if the output channel number differs from the input.
- num_layers (int, optional, defaults to 28) -- The number of layers of Transformer blocks to use.
- dropout (float, optional, defaults to 0.0) -- The dropout probability to use within the Transformer blocks.
- norm_num_groups (int, optional, defaults to 32) -- Number of groups for group normalization within Transformer blocks.
- attention_bias (bool, optional, defaults to True) -- Whether the Transformer blocks' attention layers should contain a bias parameter.
- sample_size (int, defaults to 32) -- The width of the latent images. This parameter is fixed during training.
- patch_size (int, defaults to 2) -- Size of the patches the model processes, relevant for architectures working on non-sequential data.
- activation_fn (str, optional, defaults to "gelu-approximate") -- Activation function to use in feed-forward networks within Transformer blocks.
- num_embeds_ada_norm (int, optional, defaults to 1000) -- Number of embeddings for AdaLayerNorm, fixed during training and affects the maximum denoising steps during inference.
- upcast_attention (bool, optional, defaults to False) -- If true, upcasts the attention mechanism dimensions for potentially improved performance.
- norm_type (str, optional, defaults to "ada_norm_zero") -- Specifies the type of normalization to use; 'ada_norm_zero' is the supported option.
- norm_elementwise_affine (bool, optional, defaults to False) -- If true, enables element-wise affine parameters in the normalization layers.
- norm_eps (float, optional, defaults to 1e-5) -- A small constant added to the denominator in normalization layers to prevent division by zero.
A 2D Transformer model as introduced in DiT (https://huggingface.co/papers/2212.09748).
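The snippet below is a minimal sketch of constructing the model with the defaults documented above, assuming a diffusers release that exports DiTTransformer2DModel; every keyword shown maps directly to the parameter list.

```python
from diffusers import DiTTransformer2DModel

# A minimal sketch: build the model with the documented defaults
# (assumes a diffusers version that exports DiTTransformer2DModel).
model = DiTTransformer2DModel(
    attention_head_dim=72,         # channels per attention head
    in_channels=4,                 # latent input channels
    num_layers=28,                 # number of Transformer blocks
    dropout=0.0,
    norm_num_groups=32,
    attention_bias=True,
    sample_size=32,                # latent width, fixed at training time
    patch_size=2,
    activation_fn="gelu-approximate",
    num_embeds_ada_norm=1000,      # AdaLayerNorm embedding table size
    norm_type="ada_norm_zero",
    norm_elementwise_affine=False,
    norm_eps=1e-5,
)
```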
forward
- hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) -- Input hidden_states.
- timestep (torch.LongTensor, optional) -- Used to indicate the denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.
- class_labels (torch.LongTensor of shape (batch size, num classes), optional) -- Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in AdaLayerNormZero.
- cross_attention_kwargs (Dict[str, Any], optional) -- A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- return_dict (bool, optional, defaults to True) -- Whether or not to return a Transformer2DModelOutput instead of a plain tuple.
If return_dict is True, a ~models.transformer_2d.Transformer2DModelOutput is returned, otherwise a tuple where the first element is the sample tensor.
The DiTTransformer2DModel forward method.
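As a follow-up, here is a hedged sketch of a forward call on the model built above; the tensor shapes and the class id are illustrative assumptions, and with out_channels unset the output keeps the input's channel count.

```python
import torch

# Illustrative shapes only: a single 4-channel 32x32 latent.
latents = torch.randn(1, 4, 32, 32)                   # (batch, channel, height, width)
timestep = torch.tensor([999], dtype=torch.long)      # current denoising step
class_labels = torch.tensor([207], dtype=torch.long)  # one label id per batch element

with torch.no_grad():
    output = model(
        hidden_states=latents,
        timestep=timestep,
        class_labels=class_labels,
        return_dict=True,  # return a Transformer2DModelOutput
    )

print(output.sample.shape)  # torch.Size([1, 4, 32, 32]); out_channels defaults to in_channels
```

With return_dict=False, the same call instead returns a tuple whose first element is the sample tensor.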