AuraFlowTransformer2DModel
A Transformer model for image-like data from AuraFlow.
diffusers.AuraFlowTransformer2DModel[[diffusers.AuraFlowTransformer2DModel]]
A 2D Transformer model as introduced in AuraFlow (https://blog.fal.ai/auraflow/).
fuse_qkv_projections[[diffusers.AuraFlowTransformer2DModel.fuse_qkv_projections]]
Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.
> This API is 🧪 experimental.
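Conceptually, fusing QKV projections means concatenating the separate query, key, and value weight matrices along their output dimension so that a single matrix multiply produces all three activations at once. The following is a minimal, stdlib-only sketch of that idea, not the diffusers implementation (which fuses `nn.Linear` layers inside the attention processors):

```python
# Sketch of QKV fusion: three projection matrices are concatenated so one
# matrix multiply yields q, k, and v together. Plain Python lists stand in
# for real tensors; this only illustrates the linear-algebra idea.

def matmul(x, w):
    # x: (n, d) times w: (d, m) -> (n, m)
    return [[sum(row[k] * w[k][j] for k in range(len(w)))
             for j in range(len(w[0]))] for row in x]

def fuse(w_q, w_k, w_v):
    # Concatenate along the output dimension: (d, m) x 3 -> (d, 3m)
    return [rq + rk + rv for rq, rk, rv in zip(w_q, w_k, w_v)]

d, m = 4, 2
w_q = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(d)]
w_k = [[float(i + j) for j in range(m)] for i in range(d)]
w_v = [[float(i * j) for j in range(m)] for i in range(d)]
x = [[1.0, 2.0, 3.0, 4.0]]

w_qkv = fuse(w_q, w_k, w_v)
out = matmul(x, w_qkv)  # one matmul instead of three
q, k, v = out[0][:m], out[0][m:2 * m], out[0][2 * m:]

# The fused result matches the three separate projections.
assert q == matmul(x, w_q)[0]
assert k == matmul(x, w_k)[0]
assert v == matmul(x, w_v)[0]
```

The benefit is fewer, larger matrix multiplies, which GPUs generally execute more efficiently than several small ones.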
Parameters:
sample_size (int) : The width of the latent images. This is fixed during training since it is used to learn a number of position embeddings.
patch_size (int) : Patch size to turn the input data into small patches.
in_channels (int, optional, defaults to 4) : The number of channels in the input.
num_mmdit_layers (int, optional, defaults to 4) : The number of layers of MMDiT Transformer blocks to use.
num_single_dit_layers (int, optional, defaults to 32) : The number of single-stream Transformer blocks to use. These blocks operate on concatenated image and text representations.
attention_head_dim (int, optional, defaults to 256) : The number of channels in each head.
num_attention_heads (int, optional, defaults to 12) : The number of heads to use for multi-head attention.
joint_attention_dim (int, optional) : The number of encoder_hidden_states dimensions to use.
caption_projection_dim (int) : Number of dimensions to use when projecting the encoder_hidden_states.
out_channels (int, defaults to 4) : Number of output channels.
pos_embed_max_size (int, defaults to 1024) : Maximum positions to embed from the image latents.
unfuse_qkv_projections[[diffusers.AuraFlowTransformer2DModel.unfuse_qkv_projections]]
Disables the fused QKV projection if enabled.
> This API is 🧪 experimental.