## Use from the Diffusers library

```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "second-state/3dAnimationDiffusion_v10-GGUF",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```

# 3Danimation-GGUF

## Original Model

Yntec/3Danimation

## Run with sd-api-server

See the sd-api-server repository for setup and usage instructions.

## Quantized GGUF Models

| Name | Quant method | Bits | Size |
|------|--------------|------|------|
| 3dAnimationDiffusion_v10-Q4_0.gguf | Q4_0 | 4 | 1.57 GB |
| 3dAnimationDiffusion_v10-Q4_1.gguf | Q4_1 | 4 | 1.59 GB |
| 3dAnimationDiffusion_v10-Q5_0.gguf | Q5_0 | 5 | 1.62 GB |
| 3dAnimationDiffusion_v10-Q5_1.gguf | Q5_1 | 5 | 1.64 GB |
| 3dAnimationDiffusion_v10-Q8_0.gguf | Q8_0 | 8 | 1.76 GB |
| 3dAnimationDiffusion_v10-f16.gguf | f16 | 16 | 2.13 GB |
| vae-Q8_0.gguf | Q8_0 | 8 | 165 MB |
| vae-f16.gguf | f16 | 16 | 167 MB |

Quantized with stable-diffusion.cpp master-697d000.
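For a rough sense of the disk savings, the file sizes in the table can be compared against the f16 baseline. The sketch below uses only the sizes listed above (diffusion model files only, not the VAE); it says nothing about image quality, which you should judge for your own prompts:

```python
# File sizes in GB, copied from the quantization table above (model files only).
sizes_gb = {
    "Q4_0": 1.57,
    "Q4_1": 1.59,
    "Q5_0": 1.62,
    "Q5_1": 1.64,
    "Q8_0": 1.76,
    "f16": 2.13,
}

f16 = sizes_gb["f16"]
for name, gb in sizes_gb.items():
    # Each quantized file as a fraction of the unquantized f16 file.
    print(f"{name}: {gb:.2f} GB ({gb / f16:.0%} of f16)")
```

Q4_0 comes in at roughly three quarters of the f16 file size, so even the smallest quant here is not a 4x reduction; parts of the model are typically kept at higher precision.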
