Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("Yw22/BlobCtrl", torch_dtype=torch.bfloat16).to("cuda")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]

BlobCtrl

Overview

BlobCtrl enables precise, user-friendly element-level visual manipulation. Main features: 🦉 element-level add, remove, move, replace, enlarge, and shrink operations.
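As a rough intuition for how element-level editing can be expressed, blob-based methods describe each element as an oriented 2D ellipse ("blob") and reduce edits like move or enlarge to changes of the blob parameters. The class and function names below are purely illustrative, not BlobCtrl's actual API:

```python
from dataclasses import dataclass

# Illustrative blob parameterization: an oriented ellipse in normalized
# image coordinates. Field names are assumptions for this sketch.
@dataclass
class Blob:
    cx: float     # center x, in [0, 1]
    cy: float     # center y, in [0, 1]
    a: float      # semi-major axis
    b: float      # semi-minor axis
    theta: float  # orientation in radians

def move(blob: Blob, dx: float, dy: float) -> Blob:
    """Move: translate the blob center."""
    return Blob(blob.cx + dx, blob.cy + dy, blob.a, blob.b, blob.theta)

def enlarge(blob: Blob, scale: float) -> Blob:
    """Enlarge/Shrink: scale both ellipse axes."""
    return Blob(blob.cx, blob.cy, blob.a * scale, blob.b * scale, blob.theta)

cat = Blob(cx=0.5, cy=0.5, a=0.2, b=0.1, theta=0.0)
shifted = move(cat, 0.1, 0.0)   # element moved right
bigger = enlarge(cat, 2.0)      # element enlarged 2x
```

In the actual model, such parameter changes condition the diffusion process, so the edited element is re-rendered coherently with the rest of the image.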

Video

Watch the introduction video on our project page or on YouTube.

Code

Please check our GitHub repository for code.

Model

Download the model checkpoint using huggingface_hub (Version 0.1 as example):

import os
from huggingface_hub import snapshot_download

# download blobctrl models
BlobCtrl_path = "examples/blobctrl/models"
if not (os.path.exists(f"{BlobCtrl_path}/blobnet") and os.path.exists(f"{BlobCtrl_path}/unet_lora")):
    BlobCtrl_path = snapshot_download(
        repo_id="Yw22/BlobCtrl",
        local_dir=BlobCtrl_path,
        token=os.getenv("HF_TOKEN"),
    )
print(f"BlobCtrl checkpoints downloaded to {BlobCtrl_path}")

The downloaded BlobCtrl checkpoints (blobnet and unet_lora) can be found under BlobCtrl_path.
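Before wiring the checkpoints into a pipeline, it can be useful to verify that both expected folders actually landed under the download path. A minimal sketch using only the standard library (the helper name is ours, not part of BlobCtrl):

```python
import os

def check_checkpoints(root: str, subdirs=("blobnet", "unet_lora")) -> dict:
    """Report which expected checkpoint folders exist under `root`."""
    return {sub: os.path.isdir(os.path.join(root, sub)) for sub in subdirs}

# Example usage with the path from the snippet above:
status = check_checkpoints("examples/blobctrl/models")
for name, found in status.items():
    print(f"{name}: {'found' if found else 'missing'}")
```

If either folder is reported missing, re-run the snapshot_download call above.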

Demo

You can try the demo here.
