Text-to-Image
ControlNet
Diffusers
Safetensors
Flux.1-dev
Stable Diffusion
image-generation
English
Instructions to use Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
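
For multi-condition Union ControlNets like this one, the condition type is selected with an integer `control_mode` passed alongside the control image. A minimal sketch of a lookup helper, assuming the index order given on the model card (canny=0, tile=1, depth=2, blur=3, pose=4, gray=5, low quality=6); the helper name and the dict are illustrative, so verify the indices against the card before relying on them:

```python
# Control-mode indices for FLUX.1-dev-ControlNet-Union-Pro, assuming the
# ordering listed on the model card (an assumption; verify against the card).
CONTROL_MODES = {
    "canny": 0,
    "tile": 1,
    "depth": 2,
    "blur": 3,
    "pose": 4,
    "gray": 5,
    "low_quality": 6,
}

def control_mode(name: str) -> int:
    """Hypothetical helper: map a condition name to its control_mode index."""
    try:
        return CONTROL_MODES[name]
    except KeyError:
        raise ValueError(f"Unknown control mode {name!r}; expected one of {sorted(CONTROL_MODES)}")
```

Passing the wrong index silently conditions on the wrong modality, which looks like the control image being ignored.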
Why does my pose not work?
#28
by YearW - opened
How do I use a pose to control the image? I am just doing this:
```python
import torch
from diffusers.utils import load_image

# `pipe` is a FLUX ControlNet pipeline loaded with this Union ControlNet
prompt = "A fashion model"
control_image_pose = load_image("pose_images/standing_19.png")
control_mode_pose = 4
width, height = 720, 720

image = pipe(
    prompt,
    control_image=[control_image_pose],
    control_mode=[control_mode_pose],
    width=width,
    height=height,
    controlnet_conditioning_scale=[0.6],
    num_inference_steps=24,
    guidance_scale=3.5,
    generator=torch.manual_seed(1),
).images[0]
```
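
One thing worth checking before blaming the model: the control image should be an OpenPose-style skeleton rendering, and its size should match the requested generation resolution. A small sketch of that preprocessing step (the helper name is hypothetical, and the size-mismatch diagnosis is an assumption on my part, not something confirmed in this thread):

```python
from PIL import Image

def prepare_control_image(img: Image.Image, width: int, height: int) -> Image.Image:
    # Resize the pose map to the generation resolution. A size or aspect-ratio
    # mismatch between control_image and width/height is a common reason the
    # pose appears to be ignored (an assumption, not confirmed by this thread).
    return img.convert("RGB").resize((width, height), Image.LANCZOS)
```

With that in place, `prepare_control_image(load_image("pose_images/standing_19.png"), 720, 720)` would feed the pipeline a pose map at the same 720x720 resolution it is asked to generate.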