Tags: Text-to-Image · ControlNet · Diffusers · Safetensors · Flux.1-dev · image-generation · Stable Diffusion · English
Instructions to use Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
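The generic snippet above loads the repository with `DiffusionPipeline`, but this checkpoint is a ControlNet, so in practice it is paired with the FLUX.1-dev base model. A minimal sketch of that pattern, assuming the `FluxControlNetModel` / `FluxControlNetPipeline` classes from recent diffusers releases (verify the class names and the Union mode indices against your installed version and the model card):

```python
def build_union_pro_pipeline():
    """Sketch: pair FLUX.1-dev with the Union-Pro ControlNet via diffusers.

    Imports are kept inside the function because torch and diffusers are
    heavy dependencies; nothing is downloaded until this is called.
    """
    import torch
    from diffusers import FluxControlNetModel, FluxControlNetPipeline

    controlnet = FluxControlNetModel.from_pretrained(
        "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro",
        torch_dtype=torch.bfloat16,
    )
    pipe = FluxControlNetPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",  # gated base model; requires HF access
        controlnet=controlnet,
        torch_dtype=torch.bfloat16,
    )
    pipe.to("cuda")  # switch to "mps" for Apple devices
    return pipe

# Usage (requires a GPU and access to the base model):
# pipe = build_union_pro_pipeline()
# image = pipe(
#     prompt="Astronaut in a jungle, cold color palette, detailed, 8k",
#     control_image=canny_image,   # a preprocessed PIL control image
#     control_mode=0,              # assumed Union mode index for canny
#     controlnet_conditioning_scale=0.7,
# ).images[0]
```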
How to use in ComfyUI
#1
by VLRevolution - opened
Is there a node that is verified working with this controlnet model? :)
Help much appreciated!
And I have to say, amazing work with this! It works so much better than controlnet v3
We are not experts on ComfyUI, so any community support would be greatly appreciated.
This one only works with the XLabs nodes.
May I know how to install FLUX.1-dev-ControlNet-Union-Pro ?
Okay, thank you.
How do you stack multiple ControlNets together (canny + depth)?
I'd also like to know. Can anyone provide a workflow that uses canny + depth simultaneously?
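For the Diffusers side (outside ComfyUI), one way to stack canny + depth is to reuse the union checkpoint once per control input through a multi-ControlNet wrapper. A hedged sketch, assuming the `FluxMultiControlNetModel` class from recent diffusers releases and the mode indices listed on the model card (0 = canny, 2 = depth; verify both against your setup):

```python
def build_canny_depth_pipeline():
    """Sketch: stack canny + depth with the Union-Pro checkpoint in diffusers.

    The union model supports several control modes, so the same weights are
    listed twice and the mode is selected per-input via control_mode.
    Imports live inside the function to avoid loading heavy dependencies
    until the pipeline is actually built.
    """
    import torch
    from diffusers import (
        FluxControlNetModel,
        FluxControlNetPipeline,
        FluxMultiControlNetModel,
    )

    union = FluxControlNetModel.from_pretrained(
        "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro",
        torch_dtype=torch.bfloat16,
    )
    # one list entry per control input (canny, depth)
    controlnet = FluxMultiControlNetModel([union, union])

    pipe = FluxControlNetPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",  # gated base model; requires HF access
        controlnet=controlnet,
        torch_dtype=torch.bfloat16,
    ).to("cuda")
    return pipe

# Usage (canny_image / depth_image are preprocessed PIL control images):
# pipe = build_canny_depth_pipeline()
# image = pipe(
#     prompt="...",
#     control_image=[canny_image, depth_image],
#     control_mode=[0, 2],  # assumed indices: 0 = canny, 2 = depth
#     controlnet_conditioning_scale=[0.6, 0.5],
# ).images[0]
```

Lower per-net conditioning scales are commonly used when combining controls, since the two signals compound.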