ACE-Step 1.5 Models (FFMPEGA Mirror)

Mirror of ACE-Step/Ace-Step1.5 models for use with ComfyUI-FFMPEGA.

Contents

Directory              | Size    | Description
-----------------------|---------|----------------------------------------
acestep-v15-turbo/     | ~4.8 GB | DiT turbo model (8-step inference)
acestep-5Hz-lm-1.7B/   | ~3.8 GB | 1.7B language model for lyric planning
acestep-5Hz-lm-0.6B/   | ~1.4 GB | 0.6B language model (lower-VRAM option)
Qwen3-Embedding-0.6B/  | ~1.2 GB | Text encoder
vae/                   | ~0.3 GB | VAE decoder
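Summing the table above, the full set of checkpoints takes roughly 11.5 GB of disk space. A quick sketch (directory names and sizes copied from the table; all sizes are approximate):

```python
# Approximate checkpoint sizes in GB, as listed in the Contents table.
sizes_gb = {
    "acestep-v15-turbo": 4.8,      # DiT turbo model (8-step inference)
    "acestep-5Hz-lm-1.7B": 3.8,    # 1.7B lyric-planning language model
    "acestep-5Hz-lm-0.6B": 1.4,    # 0.6B lower-VRAM language model
    "Qwen3-Embedding-0.6B": 1.2,   # text encoder
    "vae": 0.3,                    # VAE decoder
}

total_gb = round(sum(sizes_gb.values()), 1)
print(f"Total disk footprint: ~{total_gb} GB")  # ~11.5 GB
```

If disk space is tight, the 0.6B language model can stand in for the 1.7B one, saving about 2.4 GB.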

Usage

These models are downloaded automatically by ComfyUI-FFMPEGA's ACE-Step integration. To install them manually instead, place the contents of checkpoints/ in:

ComfyUI/models/acestep/checkpoints/
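For a manual download, something like the following should work (a sketch assuming the huggingface_hub CLI is installed; adjust the ComfyUI path to match your install):

```shell
# Install the Hugging Face CLI if it is not already present (assumption).
pip install -U "huggingface_hub[cli]"

# Mirror the whole repository into the ComfyUI checkpoints directory.
huggingface-cli download AEmotionStudio/acestep-models \
  --local-dir ComfyUI/models/acestep/checkpoints
```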

License

MIT — same as the original ACE-Step 1.5 release.

Credits
