DiTTransformer2DModel

A Transformer model for image-like data from DiT.

DiTTransformer2DModel

class diffusers.DiTTransformer2DModel(num_attention_heads: int = 16, attention_head_dim: int = 72, in_channels: int = 4, out_channels: Optional[int] = None, num_layers: int = 28, dropout: float = 0.0, norm_num_groups: int = 32, attention_bias: bool = True, sample_size: int = 32, patch_size: int = 2, activation_fn: str = 'gelu-approximate', num_embeds_ada_norm: Optional[int] = 1000, upcast_attention: bool = False, norm_type: str = 'ada_norm_zero', norm_elementwise_affine: bool = False, norm_eps: float = 1e-05)

Source: https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/transformers/dit_transformer_2d.py#L364

Parameters:

  • num_attention_heads (int, optional, defaults to 16) -- The number of heads to use for multi-head attention.

  • attention_head_dim (int, optional, defaults to 72) -- The number of channels in each head.
  • in_channels (int, defaults to 4) -- The number of channels in the input.
  • out_channels (int, optional) -- The number of channels in the output. Specify this parameter if the output channel number differs from the input.
  • num_layers (int, optional, defaults to 28) -- The number of layers of Transformer blocks to use.
  • dropout (float, optional, defaults to 0.0) -- The dropout probability to use within the Transformer blocks.
  • norm_num_groups (int, optional, defaults to 32) -- Number of groups for group normalization within Transformer blocks.
  • attention_bias (bool, optional, defaults to True) -- Configure if the Transformer blocks' attention should contain a bias parameter.
  • sample_size (int, defaults to 32) -- The width of the latent images. This parameter is fixed during training.
  • patch_size (int, defaults to 2) -- Size of the patches the model processes, relevant for architectures working on non-sequential data.
  • activation_fn (str, optional, defaults to "gelu-approximate") -- Activation function to use in feed-forward networks within Transformer blocks.
  • num_embeds_ada_norm (int, optional, defaults to 1000) -- Number of embeddings for AdaLayerNorm, fixed during training and affects the maximum denoising steps during inference.
  • upcast_attention (bool, optional, defaults to False) -- If true, upcasts the attention mechanism dimensions for potentially improved performance.
  • norm_type (str, optional, defaults to "ada_norm_zero") -- Specifies the type of normalization used, can be 'ada_norm_zero'.
  • norm_elementwise_affine (bool, optional, defaults to False) -- If true, enables element-wise affine parameters in the normalization layers.
  • norm_eps (float, optional, defaults to 1e-5) -- A small constant added to the denominator in normalization layers to prevent division by zero.

A 2D Transformer model as introduced in DiT (https://huggingface.co/papers/2212.09748).
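The following is a minimal usage sketch: it instantiates the model with the default configuration listed above (16 heads of 72 channels, 28 layers, 2x2 patches over 32x32, 4-channel latents). The commented `from_pretrained` call and the checkpoint name it references are assumptions to adapt to the checkpoint you actually use.

```python
from diffusers import DiTTransformer2DModel

# Instantiate with the default configuration documented above
# (16 heads x 72 dims = 1152 hidden size, 28 layers, 2x2 patches on 32x32 latents).
model = DiTTransformer2DModel(
    num_attention_heads=16,
    attention_head_dim=72,
    in_channels=4,
    num_layers=28,
    sample_size=32,
    patch_size=2,
    num_embeds_ada_norm=1000,
)

# Alternatively, load pretrained weights from a DiT checkpoint on the Hub.
# The repository name below is an assumption; replace it with your checkpoint:
# model = DiTTransformer2DModel.from_pretrained(
#     "facebook/DiT-XL-2-256", subfolder="transformer"
# )
```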

forward

forward(hidden_states: torch.Tensor, timestep: Optional[torch.LongTensor] = None, class_labels: Optional[torch.LongTensor] = None, cross_attention_kwargs: Dict[str, Any] = None, return_dict: bool = True)

Source: https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/transformers/dit_transformer_2d.py#L481

Parameters:

  • hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) -- Input hidden_states.

  • timestep ( torch.LongTensor, optional) -- Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.
  • class_labels ( torch.LongTensor of shape (batch size, num classes), optional) -- Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in AdaLayerNormZero.
  • cross_attention_kwargs ( Dict[str, Any], optional) -- A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • return_dict (bool, optional, defaults to True) -- Whether or not to return a Transformer2DModelOutput instead of a plain tuple.

Returns: If return_dict is True, a ~models.transformer_2d.Transformer2DModelOutput is returned, otherwise a tuple where the first element is the sample tensor.

The DiTTransformer2DModel forward method.
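For illustration, below is a sketch of a class-conditional forward pass, assuming the `model` built in the example above with its default configuration. The batch size, label range (1000 classes, matching `num_embeds_ada_norm`), and latent shape are assumptions chosen to match those defaults, not requirements.

```python
import torch

batch_size = 2
latents = torch.randn(batch_size, 4, 32, 32)        # (batch, channel, height, width), matches in_channels/sample_size defaults
timesteps = torch.randint(0, 1000, (batch_size,))   # one denoising step index per sample
class_labels = torch.randint(0, 1000, (batch_size,))  # class ids, assumed to lie in [0, num_embeds_ada_norm)

with torch.no_grad():
    output = model(
        hidden_states=latents,
        timestep=timesteps,
        class_labels=class_labels,
        return_dict=True,
    )

# With return_dict=True a Transformer2DModelOutput is returned; its `.sample`
# attribute holds the model prediction with the same spatial size as the input.
print(output.sample.shape)  # expected: (batch_size, out_channels or in_channels, 32, 32)
```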
