Normalization layers
Customized normalization layers for supporting various models in 🤗 Diffusers.
AdaLayerNorm[[diffusers.models.normalization.AdaLayerNorm]]
class diffusers.models.normalization.AdaLayerNorm

Parameters:
- embedding_dim (int) -- The size of each embedding vector.
- num_embeddings (int, optional) -- The size of the embeddings dictionary.
- output_dim (int, optional) -- The size of the output vector.
- norm_elementwise_affine (bool, defaults to False) -- Whether the norm applies learnable per-element affine parameters.
- norm_eps (float, defaults to 1e-5) -- The epsilon value to use for numerical stability.
- chunk_dim (int, defaults to 0) -- The dimension along which the projected embedding is chunked into scale and shift.
Norm layer modified to incorporate timestep embeddings.
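The core idea can be sketched in plain Python: normalize the input, then modulate it with a scale and shift derived from the timestep embedding. In the actual layer the scale and shift come from a learned linear projection of the embedding; here they are passed in directly as an assumption, and the math is shown on a single feature vector rather than a batched tensor.

```python
import math

def layer_norm(x, eps=1e-5):
    # Plain layer norm over the feature dimension, no learned affine.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def ada_layer_norm(x, scale, shift, eps=1e-5):
    # Timestep-conditioned modulation: scale and shift would normally be
    # produced by a learned projection of the timestep embedding.
    return [h * (1 + s) + b for h, s, b in zip(layer_norm(x, eps), scale, shift)]
```

With zero scale and shift the layer reduces to a plain, unconditioned layer norm.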
AdaLayerNormZero[[diffusers.models.normalization.AdaLayerNormZero]]
class diffusers.models.normalization.AdaLayerNormZero

Parameters:
- embedding_dim (int) -- The size of each embedding vector.
- num_embeddings (int) -- The size of the embeddings dictionary.

Norm layer with adaptive layer norm zero (adaLN-Zero).
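A minimal sketch of the adaLN-Zero pattern, assuming the shift, scale, and gate have already been regressed from the conditioning embedding (in the real layer a learned projection produces them): the normalized input is modulated, passed through a sublayer, and the sublayer's output is gated before the residual add. Because the gate is initialized to zero, the block starts out as the identity.

```python
import math

def layer_norm(x, eps=1e-6):
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def ada_ln_zero(x, shift, scale, gate, sublayer):
    # adaLN-Zero: modulate the normalized input, run the sublayer, then
    # gate its output before the residual add. With gate == 0 the whole
    # block reduces to the identity, which is how it is initialized.
    h = [v * (1 + s) + b for v, s, b in zip(layer_norm(x), scale, shift)]
    return [xi + g * yi for xi, g, yi in zip(x, gate, sublayer(h))]
```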
AdaLayerNormSingle[[diffusers.models.normalization.AdaLayerNormSingle]]
class diffusers.models.normalization.AdaLayerNormSingle

Parameters:
- embedding_dim (int) -- The size of each embedding vector.
- use_additional_conditions (bool) -- Whether to use additional conditions for normalization.

Norm layer with adaptive layer norm single (adaLN-single).
As proposed in PixArt-Alpha (see: https://huggingface.co/papers/2310.00426; Section 2.3).
AdaGroupNorm[[diffusers.models.normalization.AdaGroupNorm]]
class diffusers.models.normalization.AdaGroupNorm

Parameters:
- embedding_dim (int) -- The size of each embedding vector.
- num_embeddings (int) -- The size of the embeddings dictionary.
- num_groups (int) -- The number of groups to separate the channels into.
- act_fn (str, optional, defaults to None) -- The activation function to use.
- eps (float, optional, defaults to 1e-5) -- The epsilon value to use for numerical stability.
GroupNorm layer modified to incorporate timestep embeddings.
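The same modulation idea applied to group norm can be sketched as follows, on a flat list of channel values rather than a real feature map. As above, the per-channel scale and shift are assumed to have been projected from the timestep embedding and are passed in directly.

```python
import math

def group_norm(x, num_groups, eps=1e-5):
    # Normalize the channels within each group independently.
    group_size = len(x) // num_groups
    out = []
    for g in range(num_groups):
        chunk = x[g * group_size:(g + 1) * group_size]
        mean = sum(chunk) / len(chunk)
        var = sum((v - mean) ** 2 for v in chunk) / len(chunk)
        out.extend((v - mean) / math.sqrt(var + eps) for v in chunk)
    return out

def ada_group_norm(x, num_groups, scale, shift):
    # Timestep-derived per-channel scale and shift, applied after the
    # group norm (assumed precomputed here).
    return [v * (1 + s) + b
            for v, s, b in zip(group_norm(x, num_groups), scale, shift)]
```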
AdaLayerNormContinuous[[diffusers.models.normalization.AdaLayerNormContinuous]]
class diffusers.models.normalization.AdaLayerNormContinuous

Parameters:
- embedding_dim (int) -- Embedding dimension to use during projection.
- conditioning_embedding_dim (int) -- Dimension of the input condition.
- elementwise_affine (bool, defaults to True) -- Boolean flag to denote if affine transformation should be applied.
- eps (float, defaults to 1e-5) -- Epsilon factor.
- bias (bool, defaults to True) -- Boolean flag to denote if bias should be used.
- norm_type (str, defaults to "layer_norm") -- Normalization layer to use. Supported values: "layer_norm", "rms_norm".
Adaptive normalization layer with a norm layer (layer_norm or rms_norm).
RMSNorm[[diffusers.models.normalization.RMSNorm]]
class diffusers.models.normalization.RMSNorm

Parameters:
- dim (int) -- Number of dimensions to use for weights. Only effective when elementwise_affine is True.
- eps (float) -- Small value to use when calculating the reciprocal of the square-root.
- elementwise_affine (bool, defaults to True) -- Boolean flag to denote if affine transformation should be applied.
- bias (bool, defaults to False) -- Whether to also train a bias parameter.
RMS Norm as introduced in https://huggingface.co/papers/1910.07467 by Zhang et al.
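The RMSNorm computation itself is simple enough to sketch in a few lines: rescale the input by its root-mean-square, with no mean subtraction (this is the key difference from LayerNorm). The optional weight corresponds to the elementwise_affine=True case; this is an illustration of the math on a plain list, not the library's tensor implementation.

```python
import math

def rms_norm(x, weight=None, eps=1e-6):
    # RMSNorm: divide by the root-mean-square of the input.
    # Unlike LayerNorm there is no mean subtraction.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    out = [v / rms for v in x]
    if weight is not None:
        # Learned per-element scale (the elementwise_affine=True case).
        out = [o * w for o, w in zip(out, weight)]
    return out
```

After normalization the output has unit root-mean-square (up to eps).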
GlobalResponseNorm[[diffusers.models.normalization.GlobalResponseNorm]]
class diffusers.models.normalization.GlobalResponseNorm

Parameters:
- dim (int) -- Number of dimensions to use for the gamma and beta parameters.
Global response normalization as introduced in ConvNeXt-v2 (https://huggingface.co/papers/2301.00808).
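The GRN computation from the ConvNeXt-v2 paper can be sketched as follows, with each channel's spatial positions flattened into a list: aggregate a per-channel L2 norm, divisively normalize it across channels, then apply learned gamma/beta with a residual connection. This is an illustration of the formula, not the library's tensor implementation.

```python
import math

def global_response_norm(x, gamma, beta, eps=1e-6):
    # x: one list of spatial values per channel.
    # Gx: per-channel L2 norm aggregated over spatial positions.
    gx = [math.sqrt(sum(v * v for v in ch)) for ch in x]
    mean_gx = sum(gx) / len(gx)
    # Nx: divisive normalization of Gx across channels.
    nx = [g / (mean_gx + eps) for g in gx]
    # Learned gamma/beta plus the residual connection.
    return [[ga * v * n + be + v for v in ch]
            for ch, n, ga, be in zip(x, nx, gamma, beta)]
```

With gamma and beta initialized to zero the layer starts out as the identity.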
LuminaLayerNormContinuous[[diffusers.models.normalization.LuminaLayerNormContinuous]]
class diffusers.models.normalization.LuminaLayerNormContinuous
SD35AdaLayerNormZeroX[[diffusers.models.normalization.SD35AdaLayerNormZeroX]]
class diffusers.models.normalization.SD35AdaLayerNormZeroX

Parameters:
- embedding_dim (int) -- The size of each embedding vector.
- num_embeddings (int) -- The size of the embeddings dictionary.

Norm layer with adaptive layer norm zero (AdaLN-Zero).
AdaLayerNormZeroSingle[[diffusers.models.normalization.AdaLayerNormZeroSingle]]
class diffusers.models.normalization.AdaLayerNormZeroSingle

Parameters:
- embedding_dim (int) -- The size of each embedding vector.
- num_embeddings (int) -- The size of the embeddings dictionary.

Norm layer with adaptive layer norm zero (adaLN-Zero).
LuminaRMSNormZero[[diffusers.models.normalization.LuminaRMSNormZero]]
class diffusers.models.normalization.LuminaRMSNormZero

Parameters:
- embedding_dim (int) -- The size of each embedding vector.

Norm layer with adaptive RMS normalization zero.
LpNorm[[diffusers.models.normalization.LpNorm]]
class diffusers.models.normalization.LpNorm
CogView3PlusAdaLayerNormZeroTextImage[[diffusers.models.normalization.CogView3PlusAdaLayerNormZeroTextImage]]
class diffusers.models.normalization.CogView3PlusAdaLayerNormZeroTextImage

Parameters:
- embedding_dim (int) -- The size of each embedding vector.
- num_embeddings (int) -- The size of the embeddings dictionary.

Norm layer with adaptive layer norm zero (adaLN-Zero).
CogVideoXLayerNormZero[[diffusers.models.normalization.CogVideoXLayerNormZero]]
class diffusers.models.normalization.CogVideoXLayerNormZero
MochiRMSNormZero[[diffusers.models.transformers.transformer_mochi.MochiRMSNormZero]]
class diffusers.models.transformers.transformer_mochi.MochiRMSNormZero

Parameters:
- embedding_dim (int) -- The size of each embedding vector.
Adaptive RMS Norm used in Mochi.
MochiRMSNorm[[diffusers.models.normalization.MochiRMSNorm]]
class diffusers.models.normalization.MochiRMSNorm