diffusion_model_quanto_Fp16_&_int8.safetensors

#3
by haihung - opened

As the title suggests, if possible, could you please provide a version for older graphics card architectures?

Would a mixed-precision FP16/INT8 version be feasible? For example:

ltx-2.3-22b-distilled-1.1_diffusion_model_quanto_"Fp16_int8".safetensors

It should already work with older cards, since the bf16 weights are converted to fp16 when needed. Are you running into any issue?
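For context on why that cast is usually safe: bf16 keeps the full fp32 exponent range but only 8 mantissa bits, while fp16 has 11 mantissa bits (so higher precision) but a much smaller range, overflowing above 65504. A minimal stdlib-only sketch of the two encodings (illustrative only, not the actual loader code):

```python
import struct

def to_bf16_bits(x: float) -> int:
    # bf16 is just the upper 16 bits of the fp32 encoding (truncation).
    (bits32,) = struct.unpack("<I", struct.pack("<f", x))
    return bits32 >> 16

def bf16_to_float(bits16: int) -> float:
    # Restore a bf16 value by padding the low 16 mantissa bits with zeros.
    (x,) = struct.unpack("<f", struct.pack("<I", bits16 << 16))
    return x

def to_fp16_roundtrip(x: float) -> float:
    # struct's "e" format is IEEE binary16 (fp16); pack/unpack rounds
    # to the nearest representable fp16 value.
    (y,) = struct.unpack("<e", struct.pack("<e", x))
    return y

v = 0.1
print(bf16_to_float(to_bf16_bits(v)))  # 0.099609375  (coarse mantissa)
print(to_fp16_roundtrip(v))            # 0.0999755859375 (finer mantissa)

# Caveat: fp16 cannot represent magnitudes above 65504, so a bf16 -> fp16
# cast only works when the weights stay within that range (typical for
# normalized diffusion model weights).
try:
    struct.pack("<e", 1e6)
except OverflowError:
    print("1e6 overflows fp16")
```

Since bf16 -> fp16 gains mantissa bits and only loses range, the conversion is lossless for in-range values, which is why the distributed bf16 checkpoint runs on pre-Ampere GPUs that lack native bf16 support.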
