Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("GreeneryScenery/SheepsControlV4", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
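Conditioning images are usually resized to the resolution the pipeline expects before inference. A minimal, self-contained sketch of that preprocessing step (the 512×512 target size, the helper name, and the use of PIL directly instead of `diffusers.utils.load_image` are assumptions for illustration, not part of this model's documented API):

```python
from PIL import Image


def prepare_condition_image(image: Image.Image, size: int = 512) -> Image.Image:
    """Convert a conditioning image to RGB and resize it to the target resolution.

    Hypothetical helper: the 512x512 default is an assumption for illustration.
    """
    return image.convert("RGB").resize((size, size), Image.BICUBIC)


# Synthetic placeholder image standing in for a downloaded conditioning image
raw = Image.new("RGBA", (640, 480), (255, 255, 255, 255))
prepared = prepare_condition_image(raw)
print(prepared.size, prepared.mode)  # (512, 512) RGB
```

The prepared image can then be passed as the `image` argument in the snippet above.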

V4

Trained for 3 epochs. 🤗 Best model yet. The model is also available on Replicate.

Examples


1. Conditioning image:

   Images:

   arafed airplane flying in the sky with a green tail

   arafed jet flying in the air with a royal air force logo on it

   Jet

   Plane

2. Conditioning image:

   Image:

   Cute turtle

3. Conditioning image:

   Image:

   A sheep

4. Conditioning image:

   Image:

   A dog sitting down


Dataset used to train GreeneryScenery/SheepsControlV4