Instructions to use ostris/OpenFLUX.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use ostris/OpenFLUX.1 with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "ostris/OpenFLUX.1", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Update diffusers
Do you plan to submit your changes to the Flux pipeline upstream to the diffusers library? That would be preferable to maintaining a separate pipeline, and your changes could easily be marked as conditional on the underlying model.
If not, using your pipeline is not an issue, but then the same changes would need to be propagated to all of the other linked pipelines: this one is txt2img only, and there are img2img, ControlNet, etc.
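The "conditional based on the underlying model" idea can be sketched in plain Python. Everything here is hypothetical illustration, not actual diffusers API: a single pipeline checks which checkpoint it was loaded from and only enables true classifier-free guidance for de-distilled models such as OpenFLUX.1.

```python
# Hypothetical sketch of model-conditional behavior inside one pipeline.
# None of these names come from diffusers; they only illustrate the idea.

DEDISTILLED_MODELS = {"ostris/OpenFLUX.1"}  # checkpoints that need true CFG


def uses_true_cfg(model_id: str, guidance_scale: float) -> bool:
    """Enable real classifier-free guidance only for de-distilled
    checkpoints; distilled FLUX models use embedded guidance instead."""
    return model_id in DEDISTILLED_MODELS and guidance_scale > 1.0


print(uses_true_cfg("ostris/OpenFLUX.1", 3.5))             # True
print(uses_true_cfg("black-forest-labs/FLUX.1-dev", 3.5))  # False
```

With a check like this, one upstream pipeline could serve both model families instead of forking the code path per checkpoint.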
There's already a FluxCFGPipeline.
Yes, but as a community example (diffusers/examples/community/pipeline_flux_with_cfg.py), not as an actual top-level pipeline.
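For context, what that community CFG pipeline adds back is ordinary classifier-free guidance: a conditional and an unconditional forward pass combined at each step. A minimal numeric sketch of that combination (plain Python on lists, not diffusers code):

```python
def cfg_combine(cond, uncond, scale):
    """Classic classifier-free guidance: push the prediction away from
    the unconditional output by `scale` times the difference."""
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]


# scale = 1.0 reproduces the conditional prediction exactly
print(cfg_combine([2.0, 4.0], [1.0, 2.0], 1.0))  # [2.0, 4.0]

# larger scales amplify the prompt's influence
print(cfg_combine([2.0, 4.0], [1.0, 2.0], 3.0))  # [4.0, 8.0]
```

Distilled FLUX checkpoints bake guidance into the model, so the stock pipeline skips the second pass; a de-distilled model like OpenFLUX.1 needs this two-pass combination again.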
My question remains, especially since this pipeline would need img2img, inpaint, txt+control, img+control, etc. variations for it to be useful in the long run.
I guess the diffusers team might not have wanted to support it; if it's a community pipeline, that is typically the reason.
From what I gather, the author didn't want to go through the much stricter review process for modifying a built-in pipeline, so he opted for a community pipeline as that is easier: https://github.com/huggingface/diffusers/pull/9445
Anyhow, that's not really critical here. The question is how OpenFLUX will be supported in the long run: via standalone code as it is today, via a community pipeline, or via built-in pipelines?
And how will all the pipeline variations be created and maintained?