Instructions for using Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Sana
How to use Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers with Sana (note that `app.sana_pipeline` is provided by the Sana code repository rather than a PyPI package, so the snippet assumes that repository is installed):
```python
# Load the model and infer image from text
import torch
from app.sana_pipeline import SanaPipeline
from torchvision.utils import save_image

sana = SanaPipeline("configs/sana_config/1024ms/Sana_1600M_img1024.yaml")
sana.from_pretrained("hf://Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers")

image = sana(
    prompt='a cyberpunk cat with a neon sign that says "Sana"',
    height=1024,
    width=1024,
    guidance_scale=5.0,
    pag_guidance_scale=2.0,
    num_inference_steps=18,
)
```
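The snippet imports `save_image` but never calls it. A minimal completion for writing the result to disk, assuming (as in the upstream Sana examples) that the pipeline returns an image tensor normalized to [-1, 1]:

```python
from torchvision.utils import save_image

# Rescale from the assumed [-1, 1] range to [0, 1] and save as PNG.
save_image(image, "sana.png", nrow=1, normalize=True, value_range=(-1, 1))
```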
  - Diffusers

How to use Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch to device_map="mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
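If the checkpoint does not fit in GPU memory, Diffusers' standard model offloading can be used instead of `device_map="cuda"`. A minimal sketch using the stock `enable_model_cpu_offload` helper (a generic Diffusers API, not specific to this model; it trades speed for lower peak VRAM):

```python
import torch
from diffusers import DiffusionPipeline

# Load on CPU, then let Diffusers shuttle submodules to the GPU on demand.
pipe = DiffusionPipeline.from_pretrained(
    "Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers",
    dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # lower peak VRAM, slower inference

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```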
- Inference
- Notebooks
  - Google Colab
  - Kaggle
The bundled autoencoder configuration (`AutoencoderDC`, from `mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers`):

```json
{
"_class_name": "AutoencoderDC",
"_diffusers_version": "0.33.0.dev0",
"_name_or_path": "mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers",
"attention_head_dim": 32,
"decoder_act_fns": "silu",
"decoder_block_out_channels": [
128,
256,
512,
512,
1024,
1024
],
"decoder_block_types": [
"ResBlock",
"ResBlock",
"ResBlock",
"EfficientViTBlock",
"EfficientViTBlock",
"EfficientViTBlock"
],
"decoder_layers_per_block": [
3,
3,
3,
3,
3,
3
],
"decoder_norm_types": "rms_norm",
"decoder_qkv_multiscales": [
[],
[],
[],
[
5
],
[
5
],
[
5
]
],
"downsample_block_type": "Conv",
"encoder_block_out_channels": [
128,
256,
512,
512,
1024,
1024
],
"encoder_block_types": [
"ResBlock",
"ResBlock",
"ResBlock",
"EfficientViTBlock",
"EfficientViTBlock",
"EfficientViTBlock"
],
"encoder_layers_per_block": [
2,
2,
2,
3,
3,
3
],
"encoder_qkv_multiscales": [
[],
[],
[],
[
5
],
[
5
],
[
5
]
],
"in_channels": 3,
"latent_channels": 32,
"scaling_factor": 0.41407,
"upsample_block_type": "interpolate"
}
```
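The `f32c32` in `_name_or_path` matches this config: 32x spatial downsampling across the six encoder stages into `latent_channels: 32`, so a 1024x1024 RGB input encodes to a 32x32x32 latent. A minimal sketch of loading just this autoencoder with Diffusers' `AutoencoderDC` class; the `subfolder="vae"` layout is assumed from the standard Diffusers pipeline convention, and `encode()` is assumed to return an object with a `.latent` field as in recent Diffusers releases:

```python
import torch
from diffusers import AutoencoderDC

# Load only the DC-AE autoencoder from the pipeline repository.
vae = AutoencoderDC.from_pretrained(
    "Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers",
    subfolder="vae",  # assumed standard pipeline layout
    torch_dtype=torch.bfloat16,
).to("cuda")

# f32c32: 32x spatial downsampling into 32 latent channels,
# so a 1024x1024 RGB image encodes to a 32x32x32 latent.
x = torch.randn(1, 3, 1024, 1024, dtype=torch.bfloat16, device="cuda")
with torch.no_grad():
    latent = vae.encode(x).latent * vae.config.scaling_factor  # scaling_factor = 0.41407
print(latent.shape)  # torch.Size([1, 32, 32, 32])
```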