How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("carsonkatri/stable-diffusion-2-depth-diffusers", dtype=torch.bfloat16, device_map="cuda")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
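The pipeline returns a list of PIL images, so the result can be saved directly (the filename here is arbitrary):

image.save("cat_to_dog.png")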

This model was converted from stabilityai/stable-diffusion-2-depth using the conversion script from 🤗 Diffusers.
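For reference, a conversion along these lines would invoke the script shipped with the diffusers repository; the checkpoint and config filenames below are assumptions based on the original stabilityai/stable-diffusion-2-depth release, not a record of the exact command used:

python scripts/convert_original_stable_diffusion_to_diffusers.py \
    --checkpoint_path 512-depth-ema.ckpt \
    --original_config_file v2-midas-inference.yaml \
    --dump_path stable-diffusion-2-depth-diffusers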

You can use it with the custom pipeline defined in this Gist: https://gist.github.com/carson-katri/f51532b9d5162928d5cacbaee081a799

# The StableDiffusionDepthPipeline class is defined in the Gist linked above

from PIL import Image

model_id = "carsonkatri/stable-diffusion-2-depth-diffusers"

# Use the pipeline from this GH Gist: https://gist.github.com/carson-katri/f51532b9d5162928d5cacbaee081a799
pipe = StableDiffusionDepthPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

image = pipe(
    prompt="a photo of a stormtrooper from star wars",
    depth_image=Image.open("depth.png"),  # black-and-white depth map (a sketch for producing one follows the example)
    image=Image.open("emad.png"),  # optional init image, used together with the strength argument
    width=768,
    height=512,
).images[0]

image.save("stormtrooper.png")
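
The depth_image argument expects a grayscale depth map. If you don't already have one, here is a minimal sketch of producing it with a monocular depth estimator from 🤗 Transformers; the Intel/dpt-large checkpoint and the filenames are assumptions chosen to match the example above:

from transformers import pipeline
from PIL import Image

# Any depth-estimation checkpoint should work; Intel/dpt-large is one option.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
result = depth_estimator(Image.open("emad.png"))
result["depth"].save("depth.png")  # grayscale PIL depth map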