Tags: Text-to-Image, ControlNet, Diffusers, Safetensors, Flux.1-dev, Stable Diffusion, image-generation, English
Instructions for using Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro",
    torch_dtype=torch.bfloat16,
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
"Model type not found" error
#18
by tript - opened
I'm getting this error when deploying to Spaces as well, once the container starts to run:
There's an error in the input stream and the logs cannot be accessed.
For the past few hours, almost all Spaces on HF have been building successfully but failing to run. I don't know whether it's a problem with this model or not.
https://status.huggingface.co/
https://discuss.huggingface.co/t/504-gateway-time-out/107971/
I looked into it. There are several problems with this repo's setup.

- The pipeline is incorrectly configured. In the README metadata (https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro/edit/main/README.md), the tag is currently:

  pipeline_tag: text-to-image

  This should probably be:

  pipeline_tag: image-to-image

  (see https://huggingface.co/tasks/image-to-image)
- Serverless Inference of the ControlNet does not work in many cases, so it is sometimes better to disable Inference for the repo entirely.
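For reference, the `pipeline_tag` lives in the YAML front matter at the top of the repo's README.md; the suggested fix would look roughly like this (the `library_name` line is shown only as a typical companion field, not something confirmed for this repo):

```yaml
---
pipeline_tag: image-to-image
library_name: diffusers
---
```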
- Other minor issues
So, is there any workaround to run the inference?
Unless the repo author or HF fixes it, the only way is a paid Endpoint API.
