Use it with the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("leonel4rd/DBZFLUX")

prompt = "UNICODE\u0000\u0000<\u0000l\u0000o\u0000r\u0000a\u0000:\u0000A\u0000k\u0000i\u0000r\u0000a\u0000_\u0000T\u0000o\u0000r\u0000i\u0000y\u0000a\u0000m\u0000a\u0000_\u0000S\u0000t\u0000y\u0000l\u0000e\u0000_\u0000F\u0000X\u0000-\u00000\u00000\u00000\u00000\u00000\u00001\u0000:\u00001\u0000>\u0000B\u0000e\u0000n\u0000d\u0000e\u0000r\u0000 \u0000i\u0000n\u0000 \u0000t\u0000o\u0000r\u0000i\u0000y\u0000a\u0000m\u0000a\u0000_\u0000s\u0000t\u0000y\u0000l\u0000e\u0000"
image = pipe(prompt).images[0]

DBZFLUX

Prompt
<lora:Akira_Toriyama_Style_FX-000001:1>Bender in toriyama_style
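The gallery prompt above uses the A1111-style `<lora:name:weight>` tag syntax. Diffusers pipelines do not interpret these tags (the LoRA is loaded with `load_lora_weights` instead), so the tag would be passed to the text encoder as literal text. A minimal sketch of cleaning such prompts; the helper name `strip_lora_tags` is our own:

```python
import re

def strip_lora_tags(prompt: str) -> str:
    """Remove A1111-style <lora:name:weight> tags, which Diffusers
    would otherwise treat as literal prompt text."""
    return re.sub(r"<lora:[^>]+>", "", prompt).strip()

print(strip_lora_tags("<lora:Akira_Toriyama_Style_FX-000001:1>Bender in toriyama_style"))
# → Bender in toriyama_style
```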

Trigger words

You should include toriyama_style in your prompt to trigger the style.
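Since prompts without the trigger word will not activate the adapter's style, a tiny hypothetical guard (the name `ensure_trigger` is our own) can append it when it is missing:

```python
def ensure_trigger(prompt: str, trigger: str = "toriyama_style") -> str:
    """Append the trigger word if the prompt does not already contain it."""
    return prompt if trigger in prompt else f"{prompt}, {trigger}"

print(ensure_trigger("Bender"))                    # → Bender, toriyama_style
print(ensure_trigger("Bender in toriyama_style"))  # → Bender in toriyama_style
```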

Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.

