# DiTTransformer2DModel
A Transformer model for image-like data from [DiT](https://huggingface.co/papers/2212.09748).
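A pretrained instance can be loaded from a DiT pipeline repository with `from_pretrained()`. The sketch below assumes the `facebook/DiT-XL-2-256` checkpoint and its `transformer` subfolder; substitute your own repository id as needed.

```python
import torch
from diffusers import DiTTransformer2DModel

# Load only the transformer component of a DiT pipeline checkpoint
# (repository id and subfolder name are assumptions for this sketch).
transformer = DiTTransformer2DModel.from_pretrained(
    "facebook/DiT-XL-2-256", subfolder="transformer", torch_dtype=torch.float16
)
```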
## DiTTransformer2DModel[[diffusers.DiTTransformer2DModel]]
#### diffusers.DiTTransformer2DModel[[diffusers.DiTTransformer2DModel]]
[Source](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/dit_transformer_2d.py#L31)
A 2D Transformer model as introduced in [DiT](https://huggingface.co/papers/2212.09748).
##### forward[[diffusers.DiTTransformer2DModel.forward]]
[Source](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/dit_transformer_2d.py#L148)

`forward(hidden_states: Tensor, timestep: torch.LongTensor | None = None, class_labels: torch.LongTensor | None = None, cross_attention_kwargs: dict = None, return_dict: bool = True)`

- **hidden_states** (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.FloatTensor` of shape `(batch size, channel, height, width)` if continuous) --
Input `hidden_states`.
- **timestep** (`torch.LongTensor`, *optional*) --
Used to indicate the denoising step. An optional timestep applied as an embedding in `AdaLayerNorm`.
- **class_labels** (`torch.LongTensor` of shape `(batch size, num classes)`, *optional*) --
Used to indicate class-label conditioning. Optional class labels applied as an embedding in
`AdaLayerNormZero`.
- **cross_attention_kwargs** (`dict[str, Any]`, *optional*) --
A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under
`self.processor` in
[diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
tuple.
The [DiTTransformer2DModel](/docs/diffusers/main/en/api/models/dit_transformer2d#diffusers.DiTTransformer2DModel) forward method.
**Parameters:**
- **num_attention_heads** (`int`, *optional*, defaults to 16) -- The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, *optional*, defaults to 72) -- The number of channels in each head.
- **in_channels** (`int`, defaults to 4) -- The number of channels in the input.
- **out_channels** (`int`, *optional*) -- The number of channels in the output. Specify this parameter if the output channel number differs from the input.
- **num_layers** (`int`, *optional*, defaults to 28) -- The number of Transformer blocks to use.
- **dropout** (`float`, *optional*, defaults to 0.0) -- The dropout probability to use within the Transformer blocks.
- **norm_num_groups** (`int`, *optional*, defaults to 32) -- Number of groups for group normalization within the Transformer blocks.
- **attention_bias** (`bool`, *optional*, defaults to `True`) -- Whether the attention layers in the Transformer blocks should contain a bias parameter.
- **sample_size** (`int`, defaults to 32) -- The width of the latent images. This parameter is fixed during training.
- **patch_size** (`int`, defaults to 2) -- Size of the patches the model processes, relevant for architectures working on non-sequential data.
- **activation_fn** (`str`, *optional*, defaults to `"gelu-approximate"`) -- Activation function to use in the feed-forward networks within the Transformer blocks.
- **num_embeds_ada_norm** (`int`, *optional*, defaults to 1000) -- Number of embeddings for `AdaLayerNorm`; fixed during training and affects the maximum number of denoising steps during inference.
- **upcast_attention** (`bool`, *optional*, defaults to `False`) -- If `True`, upcasts the attention computation for potentially improved numerical stability.
- **norm_type** (`str`, *optional*, defaults to `"ada_norm_zero"`) -- The type of normalization to use; currently only `"ada_norm_zero"` is supported.
- **norm_elementwise_affine** (`bool`, *optional*, defaults to `False`) -- If `True`, enables element-wise affine parameters in the normalization layers.
- **norm_eps** (`float`, *optional*, defaults to 1e-5) -- A small constant added to the denominator in normalization layers to prevent division by zero.
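The defaults above correspond to the DiT-XL/2 configuration, so a randomly initialized model can be built by spelling them out; every keyword in this minimal sketch is taken from the parameter list above.

```python
from diffusers import DiTTransformer2DModel

# Randomly initialized model built from the documented defaults
# (16 heads x 72 channels, 28 layers, 4 latent channels, patch size 2).
model = DiTTransformer2DModel(
    num_attention_heads=16,
    attention_head_dim=72,
    in_channels=4,
    num_layers=28,
    sample_size=32,
    patch_size=2,
    norm_type="ada_norm_zero",
    num_embeds_ada_norm=1000,
)
```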
**Returns:**
If `return_dict` is `True`, a `~models.transformer_2d.Transformer2DModelOutput` is returned; otherwise a
`tuple` is returned where the first element is the sample tensor.
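A minimal end-to-end sketch of the forward pass, assuming a small randomly initialized configuration (reduced layer and head counts so it runs quickly) and integer class ids of shape `(batch,)` as consumed by the label embedder:

```python
import torch
from diffusers import DiTTransformer2DModel

# Small model so the example runs quickly; the documented defaults
# correspond to the much larger DiT-XL/2 configuration.
model = DiTTransformer2DModel(
    num_attention_heads=2,
    attention_head_dim=32,
    in_channels=4,
    num_layers=2,
    sample_size=32,
    patch_size=2,
    norm_type="ada_norm_zero",
    num_embeds_ada_norm=1000,
).eval()

hidden_states = torch.randn(2, 4, 32, 32)    # (batch, channels, height, width) latents
timestep = torch.randint(0, 1000, (2,))      # one denoising step per sample
class_labels = torch.randint(0, 1000, (2,))  # class ids for AdaLayerNormZero conditioning

with torch.no_grad():
    output = model(
        hidden_states=hidden_states,
        timestep=timestep,
        class_labels=class_labels,
        return_dict=True,
    )

print(output.sample.shape)  # latents with the same spatial size as the input
```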