How to use from the 🧨 Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("stablediffusionapi/my-stablediffusion-lora-6484")

prompt = "photo of ambika0 man"
image = pipe(prompt).images[0]
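The snippet above hard-codes "cuda"; a small device-selection helper (a sketch, assuming only stock PyTorch) makes it portable across NVIDIA, Apple Silicon, and CPU-only machines:

```python
import torch

# Pick the best available backend: NVIDIA GPU, Apple Silicon (MPS), else CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

# pipe = pipe.to(device)  # move the pipeline once, instead of hard-coding "cuda"
```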

ModelsLab LoRA DreamBooth Training - stablediffusionapi/my-stablediffusion-lora-6484

These are LoRA adaptation weights for stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on the prompt "photo of ambika0 man" using ModelsLab. LoRA training for the text encoder was not enabled.
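For intuition: LoRA freezes the base weights and learns only a low-rank update B @ A on top of them, which is why the adapter file is tiny compared with the full model. A minimal NumPy sketch of the idea (illustrative sizes, not the real UNet shapes):

```python
import numpy as np

# Frozen base weight of a hypothetical projection layer (illustrative size)
d_out, d_in, rank = 320, 320, 4
W = np.random.randn(d_out, d_in)

# LoRA learns two small matrices whose product is the weight update
A = np.random.randn(rank, d_in)   # down-projection
B = np.random.randn(d_out, rank)  # up-projection

# Effective weight at inference time: base plus low-rank correction
W_adapted = W + B @ A

# The adapter stores far fewer parameters than the full matrix
full_params = W.size              # 320 * 320 = 102400
lora_params = A.size + B.size     # 4 * 320 + 320 * 4 = 2560
```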

Use it with the 🧨 diffusers library

!pip install -q transformers accelerate peft diffusers
from diffusers import DiffusionPipeline
import torch

pipe_id = "Lykon/DreamShaper"
pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("stablediffusionapi/my-stablediffusion-lora-6484", weight_name="pytorch_lora_weights.safetensors", adapter_name="abc")
prompt = "abc of a hacker with a hoodie"
lora_scale = 0.9
image = pipe(
    prompt,
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.9},
    generator=torch.manual_seed(0)
).images[0]
image
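The `scale` entry in `cross_attention_kwargs` linearly weights the LoRA contribution: 0.0 disables the adapter, 1.0 applies it fully. A NumPy sketch of that behavior (toy shapes and a hypothetical `lora_forward` helper, not diffusers internals):

```python
import numpy as np

def lora_forward(x, W, A, B, scale):
    # Base projection plus a scaled low-rank correction, mirroring how
    # cross_attention_kwargs={"scale": ...} weights the adapter output
    return x @ W.T + scale * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # frozen base weight
A = rng.standard_normal((2, 8))   # LoRA down-projection
B = rng.standard_normal((8, 2))   # LoRA up-projection
x = rng.standard_normal((1, 8))   # toy input

base = lora_forward(x, W, A, B, scale=0.0)  # adapter off: base model output
full = lora_forward(x, W, A, B, scale=1.0)  # full adapter contribution
```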