Update README.md
#10 by hinairo - opened

README.md CHANGED

@@ -1,45 +1,63 @@
---
-license:
base_model:
- black-forest-labs/FLUX.1-dev
base_model_relation: quantized
pipeline_tag: text-to-image
---

-#
-* Provide the fastest models and service for self-hosting.
-* Provide flexibility in cost vs quality selection for inference.
-* Provide clear quality and latency benchmarks.
-* Provide interface of HF libraries: transformers and diffusers with a single line of code.
-* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.

-![image](./flux_demo_image.png)

-To infer our models, you just need to replace `diffusers` import with `elastic_models.diffusers`:

```python
import torch
@@ -53,6 +71,8 @@ pipeline = FluxPipeline.from_pretrained(
    mode_name,
    torch_dtype=torch.bfloat16,
    token=hf_token,
    mode='S'
)
pipeline.to(device)
@@ -64,69 +84,387 @@ for prompt, output_image in zip(prompts, output.images):
    output_image.save((prompt.replace(' ', '_') + '.png'))
```

-### Installation

-* GPUs: H100, L40s, B200, 5090
-* CPU: AMD, Intel
-* Python: 3.10-3.12

-pip install 'thestage-elastic-models[blackwell]' --extra-index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple
-pip install -U --pre torch --index-url https://download.pytorch.org/whl/nightly/cu128
-pip install -U --pre torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
```

-Then go to [app.thestage.ai](https://app.thestage.ai), login and generate API token from your profile page. Set up API token as follows:
```

-##

-| PSNR | 30.22 | 30.24 | 30.38 | inf | inf |
-| SSIM | 0.72 | 0.72 | 0.76 | 1.0 | 1.0 |
-| CLIP | 12.49 | 12.51 | 12.69 | 12.41 | 12.41 |
## Links

* __Platform__: [app.thestage.ai](https://app.thestage.ai)
-<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
-* __Contact email__: contact@thestage.ai


---
license: other
base_model:
- black-forest-labs/FLUX.1-dev
base_model_relation: quantized
pipeline_tag: text-to-image
---

# Elastic model: FLUX.1-dev

## Overview

----

ElasticModels are the models produced by TheStage AI ANNA, the Automated Neural Networks Accelerator. ANNA lets you control model size, latency and quality with a simple slider movement, routing different compression algorithms to different layers. For each supported model we produce a series of optimized variants:

- **XL**: Mathematically equivalent neural network, optimized with our DNN compiler.
- **L**: Near-lossless model, with less than 1% degradation on the corresponding benchmarks.
- **M**: Faster model, with accuracy degradation of less than 1.5%.
- **S**: The fastest model, with accuracy degradation of less than 2%.

Models can be accessed via the TheStage AI Python SDK (ElasticModels) or deployed as Docker containers with REST API endpoints (see the serving and deployment sections below).

## Installation

---

### System Requirements

| **Property** | **Value** |
| --- | --- |
| **GPU** | L40s, RTX 5090, H100, B200 |
| **Python Version** | 3.10-3.12 |
| **CPU** | Intel/AMD x86_64 |
| **CUDA Version** | 12.8+ |

### TheStage AI access token setup

Install the TheStage AI CLI and set up your API token:

```bash
pip install thestage
thestage config set --access-token <YOUR_ACCESS_TOKEN>
```

### ElasticModels installation

Install the TheStage Elastic Models package:

```bash
pip install 'thestage-elastic-models[nvidia]' \
  --extra-index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple
```
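
As a quick sanity check (the command below is illustrative, not part of the official setup), you can verify that the package imports:

```bash
# Should finish without an ImportError if the installation succeeded
python -c "import elastic_models; print('elastic_models imported OK')"
```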

## Usage example

----

Elastic Models provides the same interface as HuggingFace Diffusers. Here is an example of how to use the FLUX.1-dev model:

```python
import torch
from elastic_models.diffusers import FluxPipeline

mode_name = 'black-forest-labs/FLUX.1-dev'
hf_token = ''
device = torch.device("cuda")

pipeline = FluxPipeline.from_pretrained(
    mode_name,
    torch_dtype=torch.bfloat16,
    token=hf_token,
    # 'original' for original model
    # 'S', 'M', 'L', 'XL' for accelerated models
    mode='S'
)
pipeline.to(device)

prompts = ["Kitten eating a banana"]
output = pipeline(prompt=prompts)

for prompt, output_image in zip(prompts, output.images):
    output_image.save((prompt.replace(' ', '_') + '.png'))
```

## Quality Benchmarks

------------

We used the PartiPrompts and DrawBench datasets to evaluate the quality of images generated by the different sizes of FLUX.1-dev (S, M, L, XL) against the original model. The evaluation metrics include ARNIQA, CLIP IQA, PSNR, SSIM, and VQA faithfulness.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6799fc8e150f5a4014b030ca/YfIrJZEs0ttO-QCRCBiQi.png)

### Quality Benchmark Results

| **Metric/Model Size** | **S** | **M** | **L** | **XL** | **Original** |
| --- | --- | --- | --- | --- | --- |
| **ARNIQA (PartiPrompts)** | 64.1 | 63.2 | 61.9 | 66.8 | 66.9 |
| **ARNIQA (DrawBench)** | 64.3 | 63.5 | 63.6 | 68.2 | 68.5 |
| **CLIP IQA (PartiPrompts)** | 85.5 | 86.4 | 83.8 | 88.3 | 87.9 |
| **CLIP IQA (DrawBench)** | 86.4 | 86.5 | 84.5 | 89.5 | 90.0 |
| **VQA Faithfulness (PartiPrompts)** | 87.5 | 85.5 | 85.5 | 85.5 | 88.6 |
| **VQA Faithfulness (DrawBench)** | 69.3 | 64.7 | 64.8 | 67.8 | 65.2 |
| **PSNR (PartiPrompts)** | 30.22 | 30.24 | 30.38 | N/A | N/A |
| **SSIM (PartiPrompts)** | 0.72 | 0.72 | 0.76 | 1.0 | 1.0 |

## Datasets

-------

- **PartiPrompts**: A benchmark dataset created by Google Research, containing 1,632 diverse and challenging prompts that test various aspects of text-to-image generation models. It includes categories such as abstract concepts, complex compositions, properties and attributes, counting and numbers, text rendering, artistic styles, and fine-grained details.

- **DrawBench**: A comprehensive benchmark dataset developed by Google Research, containing 200 carefully curated prompts designed to test specific capabilities and challenge areas of diffusion models. It includes categories such as colors, counting, conflicting requirements, DALL-E inspired prompts, detailed descriptions, misspellings, positional relationships, rare words, Reddit user prompts, and text generation.

## Metrics

----------

- **ARNIQA**: No-reference image quality assessment metric that predicts perceptual quality without reference images.
- **CLIP IQA**: No-reference image quality metric using contrastive learning to assess image quality without references.
- **VQA Faithfulness**: Metric measuring how accurately generated images represent the text prompts.
- **PSNR**: Peak Signal-to-Noise Ratio, measuring the similarity between images generated by an accelerated model and by the original model (a computation sketch follows this list).
- **SSIM**: Structural Similarity Index, measuring the perceptual similarity between images generated by an accelerated model and by the original model.
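
For illustration, a minimal sketch of how the paired PSNR/SSIM comparison could be computed with `torchmetrics` (an assumption for this sketch; the actual evaluation harness is not part of this README):

```python
# Compare an accelerated model's image against the original model's image
# generated from the same prompt and seed.
import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
from torchvision.transforms.functional import to_tensor

def compare(image_fast, image_ref):
    # PIL images -> float tensors of shape (1, C, H, W) with values in [0, 1]
    fast = to_tensor(image_fast).unsqueeze(0)
    ref = to_tensor(image_ref).unsqueeze(0)
    psnr = PeakSignalNoiseRatio(data_range=1.0)
    ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
    return psnr(fast, ref).item(), ssim(fast, ref).item()
```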

## Latency Benchmarks

-----

We measured the latency of the different sizes of the FLUX.1-dev model (S, M, L, XL, original) on various GPUs. The measurements were taken for generating images of size 1024x1024 pixels.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6799fc8e150f5a4014b030ca/W0Bmhm6y7ICWwLDgTHpgK.png)

### Latency Benchmark Results

Latency (in seconds) for generating a 1024x1024 image using different model sizes on various hardware setups.

| **GPU/Model Size** | **S** | **M** | **L** | **XL** | **Original** |
| --- | --- | --- | --- | --- | --- |
| **H100** | 2.88 | 3.06 | 3.25 | 4.18 | 6.46 |
| **L40s** | 9.22 | 10.07 | 10.67 | 14.39 | 16 |
| **B200** | 1.93 | 2.04 | 2.15 | 2.77 | 4.52 |
| **GeForce RTX 5090** | 5.79 | N/A | N/A | N/A | N/A |

## Benchmarking Methodology

----

The benchmarking was performed on a single GPU with a batch size of 1. Each model was run for 10 iterations, and the average latency was calculated.

> **Algorithm summary:**
> 1. Load the FLUX.1-dev model with the specified size (S, M, L, XL, original).
> 2. Move the model to the GPU.
> 3. Prepare a sample prompt for image generation.
> 4. Run the model for a number of iterations (e.g., 10) and measure the time taken for each iteration. On each iteration:
>    - Synchronize the GPU to flush any previous operations.
>    - Record the start time.
>    - Generate the image using the model.
>    - Synchronize the GPU again.
>    - Record the end time and calculate the latency for that iteration.
> 5. Calculate the average latency over all iterations.

## Reproduce benchmarking

----

```python
import time

import torch
from elastic_models.diffusers import FluxPipeline

mode_name = 'black-forest-labs/FLUX.1-dev'
hf_token = ''
device = torch.device("cuda")

pipeline = FluxPipeline.from_pretrained(
    mode_name,
    torch_dtype=torch.bfloat16,
    token=hf_token,
    # 'original' for original model
    # 'S', 'M', 'L', 'XL' for accelerated models
    mode='S'
)
pipeline.to(device)

prompt = ["Kitten eating a banana"]
generate_kwargs = {
    "height": 1024,
    "width": 1024,
    "num_inference_steps": 28,
    "guidance_scale": 0.0
}

def evaluate_pipeline():
    # Synchronize before and after generation so the timing covers
    # the full GPU work, not just the kernel launches.
    torch.cuda.synchronize()
    start_time = time.time()
    output = pipeline(
        prompt=prompt,
        **generate_kwargs
    )
    torch.cuda.synchronize()
    end_time = time.time()

    return end_time - start_time

# Warm-up
for _ in range(5):
    evaluate_pipeline()

# Benchmarking
num_runs = 10
total_time = 0.0

for _ in range(num_runs):
    latency = evaluate_pipeline()
    total_time += latency

average_latency = total_time / num_runs
print(f"Average Latency over {num_runs} runs: {average_latency} seconds")
```

## Serving with Docker Image

------------

For serving on NVIDIA GPUs, we provide ready-to-go Docker containers with OpenAI-compatible API endpoints.
Using our containers, you can set up an inference endpoint on any cloud or serverless provider, as well as on on-premise servers.
You can also use this container to run inference through the TheStage AI platform.

### Prebuilt image from ECR

| **GPU** | **Docker image name** |
| --- | --- |
| H100, L40s | `public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-nvidia-24.09b` |
| B200, RTX 5090 | `public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-blackwell-24.09b` |

Pull the Docker image for your NVIDIA GPU and start the inference container:

```bash
docker pull <IMAGE_NAME>
```
```bash
docker run --rm -ti \
  --name serving_thestage_model \
  -p 8000:80 \
  -e AUTH_TOKEN=<AUTH_TOKEN> \
  -e MODEL_REPO=black-forest-labs/FLUX.1-dev \
  -e MODEL_SIZE=<MODEL_SIZE> \
  -e MODEL_BATCH=<MAX_BATCH_SIZE> \
  -e HUGGINGFACE_ACCESS_TOKEN=<HUGGINGFACE_ACCESS_TOKEN> \
  -e THESTAGE_AUTH_TOKEN=<THESTAGE_ACCESS_TOKEN> \
  -v /mnt/hf_cache:/root/.cache/huggingface \
  <IMAGE_NAME>
```

| **Parameter** | **Description** |
| --- | --- |
| `<MODEL_SIZE>` | Available: S, M, L, XL. |
| `<MAX_BATCH_SIZE>` | Maximum batch size to process in parallel. |
| `<HUGGINGFACE_ACCESS_TOKEN>` | Hugging Face access token. |
| `<THESTAGE_ACCESS_TOKEN>` | TheStage token generated on the platform (Profile -> Access tokens). |
| `<AUTH_TOKEN>` | Token for endpoint authentication. You can set it to any random string; it must match the value used by the client. |
| `<IMAGE_NAME>` | The image name you pulled for your GPU. |
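
For illustration, here is the same command with hypothetical values filled in (every value below is a placeholder, not a working credential; depending on your Docker setup you may also need `--gpus all` so the container can access the GPU):

```bash
# Hypothetical example for an H100/L40s machine; substitute your own tokens.
docker run --rm -ti \
  --name serving_thestage_model \
  --gpus all \
  -p 8000:80 \
  -e AUTH_TOKEN=my-random-client-token \
  -e MODEL_REPO=black-forest-labs/FLUX.1-dev \
  -e MODEL_SIZE=S \
  -e MODEL_BATCH=1 \
  -e HUGGINGFACE_ACCESS_TOKEN=<HUGGINGFACE_ACCESS_TOKEN> \
  -e THESTAGE_AUTH_TOKEN=<THESTAGE_ACCESS_TOKEN> \
  -v /mnt/hf_cache:/root/.cache/huggingface \
  public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-nvidia-24.09b
```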

## Invocation

------

You can invoke the endpoint using curl as follows:

```bash
curl -X POST http://127.0.0.1:8000/v1/images/generations \
  -H "Authorization: Bearer <AUTH_TOKEN>" \
  -H "Content-Type: application/json" \
  -H "X-Model-Name: flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>" \
  -d '{
    "prompt": "Cat eating banana",
    "seed": 12,
    "aspect_ratio": "1:1",
    "guidance_scale": 6.5,
    "num_inference_steps": 4
  }' \
  --output image.webp -D -
```

Or using Python requests:

```python
import requests
import json

url = "http://127.0.0.1:8000/v1/images/generations"
payload = json.dumps({
    "prompt": "sunset",
    "seed": 12,
    "aspect_ratio": "1:1",
    "guidance_scale": 6.5,
    "num_inference_steps": 4
})
headers = {
    'Authorization': 'Bearer <AUTH_TOKEN>',
    'Content-Type': 'application/json',
    'X-Model-Name': 'flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>'
}
response = requests.request("POST", url, headers=headers, data=payload)
with open("sunset.webp", "wb") as f:
    f.write(response.content)
```

Or using the OpenAI Python client:

```python
from openai import OpenAI

BASE_URL = "http://<your_ip>/v1"
# The client sends this as "Authorization: Bearer <API_KEY>",
# so it must match the AUTH_TOKEN set at container startup.
API_KEY = "<AUTH_TOKEN>"
MODEL = "flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>"

client = OpenAI(
    api_key=API_KEY,
    base_url=BASE_URL,
    default_headers={"X-Model-Name": MODEL}
)

response = client.with_raw_response.images.generate(
    model=MODEL,
    prompt="Cat eating banana",
    n=1,
    extra_body={
        "seed": 111,
        "aspect_ratio": "1:1",
        "guidance_scale": 3.5,
        "num_inference_steps": 4
    },
)

with open("thestage_image.webp", "wb") as f:
    f.write(response.content)
```

## Endpoint Parameters

-------------

### Method

> **POST** `/v1/images/generations`

### Header Parameters

> `Authorization`: `string`
>
> Bearer token for authentication. Must match the `AUTH_TOKEN` set during container startup.

> `Content-Type`: `string`
>
> Must be set to `application/json`.

> `X-Model-Name`: `string`
>
> Specifies the model to use for generation. Format: `flux-1-dev-<size>-bs<batch_size>`, where `<size>` is one of `S`, `M`, `L`, `XL`, `original` and `<batch_size>` is the maximum batch size configured during container startup.

### Input Body

> `prompt`: `string`
>
> The text prompt to generate an image for.

> `seed`: `int32`
>
> Random seed for generation.

> `num_inference_steps`: `int32`
>
> Number of diffusion steps to use for generation. Higher values yield better quality but take longer. Default is 28.

> `aspect_ratio`: `string`
>
> Aspect ratio of the generated image. Supported values (width, height):
> ```
> "1:1": (1024, 1024),
> "16:9": (1280, 736),
> "21:9": (1280, 544),
> "3:2": (1248, 832),
> "2:3": (832, 1248),
> "4:3": (1184, 896),
> "3:4": (896, 1184),
> "5:4": (1152, 928),
> "4:5": (928, 1152),
> "9:16": (736, 1280),
> "9:21": (544, 1280)
> ```

> `guidance_scale`: `float32`
>
> Guidance scale for classifier-free guidance. Higher values increase adherence to the prompt.

## Deploy on Modal

-----------------------

For more details, please see the tutorial: [Modal deployment](https://docs.thestage.ai/tutorials/source/modal_thestage.html)

### Clone modal serving code

```shell
git clone https://github.com/TheStageAI/ElasticModels.git
cd ElasticModels/examples/modal
```

### Configuration of environment variables

Set your environment variables in `modal_serving.py`:

```python
# modal_serving.py

ENVS = {
    "MODEL_REPO": "black-forest-labs/FLUX.1-dev",
    "MODEL_BATCH": "4",
    "THESTAGE_AUTH_TOKEN": "",
    "HUGGINGFACE_ACCESS_TOKEN": "",
    "PORT": "80",
    "PORT_HEALTH": "80",
    "HF_HOME": "/cache/huggingface",
}
```

### Configuration of GPUs

Set your desired GPU type and autoscaling parameters in `modal_serving.py`:

```python
# modal_serving.py

@app.function(
    image=image,
    gpu="B200",
    min_containers=8,
    max_containers=8,
    timeout=10000,
    ephemeral_disk=600 * 1024,
    volumes={"/opt/project/.cache": HF_CACHE},
    startup_timeout=60*20
)
@modal.web_server(
    80,
    label="black-forest-labs/FLUX.1-dev-test",
    startup_timeout=60*20
)
def serve():
    pass
```

### Run serving

```shell
modal serve modal_serving.py
```
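
Once `modal serve` reports that the app is running, the deployed web endpoint can be invoked the same way as the Docker endpoint above. A sketch with a placeholder URL (Modal prints the real one on startup; add an `Authorization` header if your setup also configures `AUTH_TOKEN`):

```bash
# <your-modal-app-url> is a placeholder; the model name assumes
# MODEL_SIZE=S with the MODEL_BATCH=4 value from the ENVS example.
curl -X POST https://<your-modal-app-url>/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "X-Model-Name: flux-1-dev-S-bs4" \
  -d '{"prompt": "sunset", "aspect_ratio": "1:1", "num_inference_steps": 28}' \
  --output sunset.webp
```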

## Links

* __Platform__: [app.thestage.ai](https://app.thestage.ai)
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: contact@thestage.ai