How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "LyliaEngine/Pony_Diffusion_V6_XL", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("uncropped/ruby")

prompt = "zPDXL3, rubyj4y, realistic, brown hair, long hair, brown eyes, jewelry, lips, earrings, breasts, midriff, bare shoulders, sitting, freckles, smile, teeth, <lora:ruby_jay:1>"
image = pipe(prompt).images[0]


Prompt
zPDXL3, rubyj4y, realistic, brown hair, long hair, brown eyes, jewelry, lips, earrings, breasts, midriff, bare shoulders, sitting, freckles, smile, teeth, <lora:ruby_jay:1>
Negative Prompt
NEGATIVE_HANDS, watermark, web address,

Trigger words

You should use rubyj4y to trigger the image generation.

Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
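The weights can also be fetched programmatically with `huggingface_hub`. A sketch, assuming the file is named `ruby_jay.safetensors` (a hypothetical name; check the Files & versions tab for the actual filename):

```python
from huggingface_hub import hf_hub_download

# "ruby_jay.safetensors" is an assumed filename -- look it up in the
# repo's Files & versions tab before running.
lora_path = hf_hub_download(
    repo_id="uncropped/ruby",
    filename="ruby_jay.safetensors",
)
print(lora_path)  # local cache path; can be passed to pipe.load_lora_weights(...)
```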
