Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch "cuda" to "mps" on Apple silicon devices
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.load_lora_weights("RIAL-AI/wolf-kontext")

prompt = "Replace him with WHNK"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
image.save("output.png")

Wolf Kontext

About this LoRA

This is a LoRA for the FLUX.1-Kontext-dev image-to-image model. It can be used with diffusers or ComfyUI.

It was trained on Replicate using: https://replicate.com/replicate/fast-flux-kontext-trainer/train

Prompt instruction

You should include the trigger phrase "Replace him with WHNK" in your prompt when using this LoRA for image-to-image editing.
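One way to make sure the trigger phrase is never forgotten is to build prompts through a small helper. This is a minimal sketch; the helper name and the extra-details example are hypothetical, only the trigger phrase comes from this model card.

```python
# Trigger phrase required by this LoRA (from the model card).
TRIGGER = "Replace him with WHNK"

def build_prompt(details: str = "") -> str:
    """Return an edit prompt that always starts with the trigger phrase.

    `details` is an optional, free-form addition (hypothetical example below).
    """
    return f"{TRIGGER}, {details}" if details else TRIGGER

# build_prompt("standing in a snowy forest")
# -> "Replace him with WHNK, standing in a snowy forest"
```

The returned string can be passed directly as the `prompt` argument in the diffusers snippet above.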

Training details

  • Steps: 3000
  • Learning rate: 0.001
  • LoRA rank: 16
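The rank of 16 determines how large the adapter is: for a linear layer with a d_out × d_in weight, a LoRA of rank r adds two trainable matrices B (d_out × r) and A (r × d_in), i.e. r · (d_out + d_in) extra parameters while the base weight stays frozen. A quick sketch of the arithmetic (the 3072 × 3072 layer shape below is illustrative, not taken from this model card):

```python
def lora_param_count(d_out: int, d_in: int, rank: int = 16) -> int:
    # LoRA factorizes the weight update as B @ A, where
    # B has shape (d_out, rank) and A has shape (rank, d_in).
    return rank * (d_out + d_in)

# Illustrative 3072 x 3072 projection at rank 16:
# 16 * (3072 + 3072) = 98,304 trainable parameters,
# versus 3072 * 3072 = 9,437,184 in the frozen base weight.
```

Higher ranks can capture more detail at the cost of a larger adapter file and more trainable parameters.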

Contribute your own examples

You can use the community tab to add images that show off what you’ve made with this LoRA.
