Quantization
Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that normally wouldn't fit into memory, and speeds up inference.
Learn how to quantize models in the Quantization guide.
PipelineQuantizationConfig[[diffusers.PipelineQuantizationConfig]]
class diffusers.PipelineQuantizationConfig

Parameters:
- quant_backend (str) -- Quantization backend to be used. When using this option, we assume that the backend is available to both diffusers and transformers.
- quant_kwargs (dict) -- Params to initialize the quantization backend class.
- components_to_quantize (list) -- Components of a pipeline to be quantized.
- quant_mapping (dict) -- Mapping defining the quantization specs to be used for the pipeline components. When using this argument, users are not expected to provide quant_backend, quant_kwargs, and components_to_quantize.

Configuration class to be used when applying quantization on-the-fly to from_pretrained().
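For illustration, a minimal sketch of on-the-fly pipeline quantization with this config; the checkpoint id, the bitsandbytes_4bit backend string, and the choice of components are placeholders for the example, not requirements:

import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# Quantize selected pipeline components with the bitsandbytes 4-bit backend
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["transformer", "text_encoder_2"],
)

# The config is applied while the pipeline is loaded
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder checkpoint
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)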
BitsAndBytesConfig[[diffusers.BitsAndBytesConfig]]
class diffusers.BitsAndBytesConfig

Parameters:
- load_in_8bit (bool, optional, defaults to False) -- This flag is used to enable 8-bit quantization with LLM.int8().
- load_in_4bit (bool, optional, defaults to False) -- This flag is used to enable 4-bit quantization by replacing the Linear layers with FP4/NF4 layers from bitsandbytes.
- llm_int8_threshold (float, optional, defaults to 6.0) -- This corresponds to the outlier threshold for outlier detection as described in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper (https://huggingface.co/papers/2208.07339). Any hidden state value above this threshold is considered an outlier, and the operation on those values is done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).
- llm_int8_skip_modules (List[str], optional) -- An explicit list of the modules that we do not want to convert to 8-bit. This is useful for models such as Jukebox that have several heads in different places and not necessarily at the last position. For example, for CausalLM models, the last lm_head is typically kept in its original dtype.
- llm_int8_enable_fp32_cpu_offload (bool, optional, defaults to False) -- This flag is used for advanced use cases and users that are aware of this feature. If you want to split your model into different parts and run some parts in int8 on GPU and some parts in fp32 on CPU, you can use this flag. This is useful for offloading large models such as google/flan-t5-xxl. Note that the int8 operations will not be run on CPU.
- llm_int8_has_fp16_weight (bool, optional, defaults to False) -- This flag runs LLM.int8() with 16-bit main weights. This is useful for fine-tuning as the weights do not have to be converted back and forth for the backward pass.
- bnb_4bit_compute_dtype (torch.dtype or str, optional, defaults to torch.float32) -- This sets the computational type, which might be different than the input type. For example, inputs might be fp32, but computation can be set to bf16 for speedups.
- bnb_4bit_quant_type (str, optional, defaults to "fp4") -- This sets the quantization data type in the bnb.nn.Linear4Bit layers. Options are the FP4 and NF4 data types, specified by fp4 or nf4.
- bnb_4bit_use_double_quant (bool, optional, defaults to False) -- This flag is used for nested quantization, where the quantization constants from the first quantization are quantized again.
- bnb_4bit_quant_storage (torch.dtype or str, optional, defaults to torch.uint8) -- This sets the storage type used to pack the quantized 4-bit params.
- kwargs (Dict[str, Any], optional) -- Additional parameters from which to initialize the configuration object.
This is a wrapper class for all possible attributes and features that you can play with in a model that has been loaded using bitsandbytes.
It replaces load_in_8bit or load_in_4bit; the two options are therefore mutually exclusive.
Currently only supports LLM.int8(), FP4, and NF4 quantization. If more methods are added to bitsandbytes,
then more arguments will be added to this class.
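As a rough sketch of typical usage, assuming a Flux checkpoint as a placeholder and 4-bit NF4 settings:

import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# 4-bit NF4 quantization with bf16 compute
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)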
is_quantizable[[diffusers.BitsAndBytesConfig.is_quantizable]]
Returns True if the model is quantizable, False otherwise.
post_init[[diffusers.BitsAndBytesConfig.post_init]]
Safety checker that the arguments are correct; also replaces some NoneType arguments with their default values.
quantization_method[[diffusers.BitsAndBytesConfig.quantization_method]]
This method returns the quantization method used for the model. If the model is not quantizable, it returns
None.
to_diff_dict[[diffusers.BitsAndBytesConfig.to_diff_dict]]
Returns: Dict[str, Any] -- Dictionary of all the attributes that make up this configuration instance.
Removes all attributes from the config which correspond to the default config attributes, for better readability, and serializes to a Python dictionary.
GGUFQuantizationConfig[[diffusers.GGUFQuantizationConfig]]
class diffusers.GGUFQuantizationConfig

Parameters:
- compute_dtype (torch.dtype, defaults to torch.float32) -- This sets the computational type, which might be different than the input type. For example, inputs might be fp32, but computation can be set to bf16 for speedups.
This is a config class for GGUF Quantization techniques.
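A minimal sketch of loading a GGUF checkpoint with this config via from_single_file; the community GGUF file URL below is only an example:

import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

# Example community GGUF checkpoint (placeholder URL)
ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"

transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)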
QuantoConfig[[diffusers.QuantoConfig]]
class diffusers.QuantoConfig

Parameters:
- weights_dtype (str, optional, defaults to "int8") -- The target dtype for the weights after quantization. Supported values are "float8", "int8", "int4", and "int2".
- modules_to_not_convert (list, optional, defaults to None) -- The list of modules to not quantize, useful for quantizing models that explicitly require some modules to be left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers).

This is a wrapper class for all possible attributes and features that you can play with in a model that has been loaded using quanto.
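A minimal sketch, assuming the weights_dtype argument name above and a Flux checkpoint as a placeholder:

import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

# Quantize the weights to int8 with optimum-quanto
quant_config = QuantoConfig(weights_dtype="int8")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)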
post_init[[diffusers.QuantoConfig.post_init]]
Safety checker that the arguments are correct.
TorchAoConfig[[diffusers.TorchAoConfig]]
class diffusers.TorchAoConfig

Parameters:
- quant_type (str or AOBaseConfig) -- The type of quantization we want to use, currently supporting:
  - Integer quantization:
    - Full function names: int4_weight_only, int8_dynamic_activation_int4_weight, int8_weight_only, int8_dynamic_activation_int8_weight
    - Shorthands: int4wo, int4dq, int8wo, int8dq
  - Floating point 8-bit quantization:
    - Full function names: float8_weight_only, float8_dynamic_activation_float8_weight, float8_static_activation_float8_weight
    - Shorthands: float8wo, float8wo_e5m2, float8wo_e4m3, float8dq, float8dq_e4m3, float8_e4m3_tensor, float8_e4m3_row
  - Floating point X-bit quantization:
    - Full function names: fpx_weight_only
    - Shorthands: fpX_eAwB, where X is the number of bits (between 1 and 7), A is the number of exponent bits, and B is the number of mantissa bits. The constraint X == A + B + 1 must be satisfied for a given shorthand notation.
  - Unsigned integer quantization:
    - Full function names: uintx_weight_only
    - Shorthands: uint1wo, uint2wo, uint3wo, uint4wo, uint5wo, uint6wo, uint7wo
  - An AOBaseConfig instance: for more advanced configuration options.
- modules_to_not_convert (List[str], optional, defaults to None) -- The list of modules to not quantize, useful for quantizing models that explicitly require some modules to be left in their original precision.
- kwargs (Dict[str, Any], optional) -- The keyword arguments for the chosen type of quantization; for example, int4_weight_only quantization currently supports two keyword arguments, group_size and inner_k_tiles. More API examples and documentation of arguments can be found at https://github.com/pytorch/ao/tree/main/torchao/quantization#other-available-quantization-techniques

This is a config class for torchao quantization/sparsity techniques.
Example:

import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig

# AOBaseConfig-based configuration
from torchao.quantization import Int8WeightOnlyConfig
quantization_config = TorchAoConfig(Int8WeightOnlyConfig())

# String-based config
quantization_config = TorchAoConfig("int8wo")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/Flux.1-Dev",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
from_dict[[diffusers.TorchAoConfig.from_dict]]
get_apply_tensor_subclass[[diffusers.TorchAoConfig.get_apply_tensor_subclass]]
to_dict[[diffusers.TorchAoConfig.to_dict]]
DiffusersQuantizer[[diffusers.DiffusersQuantizer]]
class diffusers.DiffusersQuantizer

Abstract class of the HuggingFace quantizer. For now, it supports quantizing HF diffusers models for inference and/or quantization. This class is used only for diffusers.models.modeling_utils.ModelMixin.from_pretrained and cannot easily be used outside the scope of that method yet.
Attributes
quantization_config (diffusers.quantizers.quantization_config.QuantizationConfigMixin):
The quantization config that defines the quantization parameters of your model that you want to quantize.
modules_to_not_convert (List[str], optional):
The list of module names to not convert when quantizing the model.
required_packages (List[str], optional):
The list of required pip packages to install prior to using the quantizer.
requires_calibration (bool):
Whether the quantization method requires calibrating the model before using it.
adjust_max_memory[[diffusers.DiffusersQuantizer.adjust_max_memory]]
adjust_target_dtype[[diffusers.DiffusersQuantizer.adjust_target_dtype]]

Parameters:
- torch_dtype (torch.dtype, optional) -- The torch_dtype that is used to compute the device_map.

Override this method if you want to adjust the target_dtype variable used in from_pretrained to compute the device_map in case the device_map is a str. E.g. for bitsandbytes we force-set target_dtype to torch.int8, and for 4-bit we pass a custom enum, accelerate.CustomDtype.int4.
check_if_quantized_param[[diffusers.DiffusersQuantizer.check_if_quantized_param]]
Checks if a loaded state_dict component is part of a quantized param, plus some validation; only defined for quantization methods that require creating new parameters for quantization.
check_quantized_param_shape[[diffusers.DiffusersQuantizer.check_quantized_param_shape]]
Checks if the quantized param has the expected shape.
create_quantized_param[[diffusers.DiffusersQuantizer.create_quantized_param]]
Takes the needed components from state_dict and creates a quantized param.
dequantize[[diffusers.DiffusersQuantizer.dequantize]]
Potentially dequantize the model to retrieve the original model, with some loss in accuracy / performance. Note that not all quantization schemes support this.
get_cuda_warm_up_factor[[diffusers.DiffusersQuantizer.get_cuda_warm_up_factor]]
The factor to be used in caching_allocator_warmup to get the number of bytes to pre-allocate to warm up CUDA. A factor of 2 means we allocate all the bytes in the empty model (since we allocate in fp16), a factor of 4 means we allocate half the memory of the weights residing in the empty model, etc.
get_special_dtypes_update[[diffusers.DiffusersQuantizer.get_special_dtypes_update]]

Parameters:
- model (~diffusers.models.modeling_utils.ModelMixin) -- The model to quantize.
- torch_dtype (torch.dtype) -- The dtype passed in the from_pretrained method.

Returns the dtypes for modules that are not quantized - used for the computation of the device_map in case one passes a str as a device_map. The method will use the modules_to_not_convert that is modified in _process_model_before_weight_loading. diffusers models don't have any modules_to_not_convert attributes yet, but this can change soon in the future.
postprocess_model[[diffusers.DiffusersQuantizer.postprocess_model]]

Parameters:
- model (~diffusers.models.modeling_utils.ModelMixin) -- The model to quantize.
- kwargs (dict, optional) -- The keyword arguments that are passed along to _process_model_after_weight_loading.

Post-process the model after weight loading. Make sure to override the abstract method _process_model_after_weight_loading.
preprocess_model[[diffusers.DiffusersQuantizer.preprocess_model]]

Parameters:
- model (~diffusers.models.modeling_utils.ModelMixin) -- The model to quantize.
- kwargs (dict, optional) -- The keyword arguments that are passed along to _process_model_before_weight_loading.

Sets model attributes and/or converts the model before weight loading. At this point the model should be initialized on the meta device, so you can freely manipulate the skeleton of the model in order to replace modules in-place. Make sure to override the abstract method _process_model_before_weight_loading.
update_device_map[[diffusers.DiffusersQuantizer.update_device_map]]

Parameters:
- device_map (Union[dict, str], optional) -- The device_map that is passed through the from_pretrained method.

Override this method if you want to override the existing device map with a new one. E.g. for bitsandbytes, since accelerate is a hard requirement, if no device_map is passed, the device_map is set to "auto".
update_missing_keys[[diffusers.DiffusersQuantizer.update_missing_keys]]

Parameters:
- missing_keys (List[str], optional) -- The list of missing keys in the checkpoint compared to the state dict of the model.

Override this method if you want to adjust the missing_keys.
update_torch_dtype[[diffusers.DiffusersQuantizer.update_torch_dtype]]

Parameters:
- torch_dtype (torch.dtype) -- The input dtype that is passed in from_pretrained.

Some quantization methods require explicitly setting the dtype of the model to a target dtype. You need to override this method if you want to make sure that behavior is preserved.
validate_environment[[diffusers.DiffusersQuantizer.validate_environment]]
This method is used to check for potential conflicts with arguments that are passed in from_pretrained. You need to define it for all future quantizers that are integrated with diffusers. If no explicit check is needed, simply return nothing.
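Since DiffusersQuantizer is abstract, a backend integration subclasses it and fills in the hooks documented above. A rough, hypothetical skeleton (the class name, attribute values, and method bodies are illustrative only; only the hook names come from this page):

from diffusers.quantizers.base import DiffusersQuantizer


class MyBackendQuantizer(DiffusersQuantizer):
    # Hypothetical backend integration; the values below are placeholders
    requires_calibration = False
    required_packages = ["my_backend"]  # hypothetical package name

    def validate_environment(self, *args, **kwargs):
        # Check for conflicts with arguments passed to from_pretrained and for missing packages
        pass

    def _process_model_before_weight_loading(self, model, **kwargs):
        # Model is still on the meta device: swap modules in-place for quantized variants
        return model

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Finalize the model once the checkpoint weights have been loaded
        return model

    @property
    def is_serializable(self):
        # Assumed property on the base class; whether the quantized model can be saved
        return False

    @property
    def is_trainable(self):
        # Assumed property on the base class; whether the quantized model supports training
        return False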