Nekochu committed on
Commit 620e7e1
1 Parent(s): 414de73

Qwen-Image-Edit Rapid-AIO CPU Space

Files changed (4)
  1. Dockerfile +25 -0
  2. README.md +75 -7
  3. app.py +355 -0
  4. requirements.txt +4 -0
Dockerfile ADDED
@@ -0,0 +1,25 @@
+ FROM python:3.10-slim
+ 
+ RUN apt-get update && apt-get install -y \
+     build-essential cmake git libopenblas-dev ffmpeg \
+     && rm -rf /var/lib/apt/lists/*
+ 
+ WORKDIR /app
+ 
+ COPY requirements.txt .
+ 
+ # stable-diffusion-cpp-python compiles sd.cpp from source; a single CMake job
+ # keeps the build's memory use within a small builder
+ ENV CMAKE_BUILD_PARALLEL_LEVEL=1
+ RUN pip install --no-cache-dir -r requirements.txt
+ 
+ COPY . .
+ 
+ # run as the non-root uid 1000 expected by HF Spaces
+ RUN useradd -m -u 1000 user
+ USER user
+ 
+ ENV HOME=/home/user
+ ENV GRADIO_SERVER_NAME=0.0.0.0
+ ENV GRADIO_SERVER_PORT=7860
+ 
+ EXPOSE 7860
+ 
+ CMD ["python", "app.py"]
README.md CHANGED
@@ -1,12 +1,80 @@
  ---
- title: Qwen Image CPU
- emoji: 👀
- colorFrom: gray
- colorTo: yellow
- sdk: gradio
- sdk_version: 6.12.0
+ title: Qwen-Image-Edit CPU
+ emoji: 🎨
+ colorFrom: yellow
+ colorTo: indigo
+ sdk: docker
  app_file: app.py
+ suggested_hardware: cpu-basic
+ startup_duration_timeout: 1h
  pinned: false
+ license: mit
+ tags:
+   - image-editing
+   - text-to-image
+   - qwen-image
+   - cpu
+   - gguf
+   - mcp-server
+ short_description: Qwen-Image-Edit Rapid-AIO on CPU via GGUF
+ models:
+   - Qwen/Qwen-Image-Edit-2511
+   - Qwen/Qwen-Image-2512
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Qwen-Image-Edit Rapid-AIO / Free CPU
+ 
+ Image editing and text-to-image via GGUF on free CPU hardware.
+ 
+ - Diffusion: Rapid-AIO v23 Q3_K (Lightning pre-fused, 4 steps) from Arunk25
+ - Text encoder: Qwen2.5-VL-7B-Instruct-abliterated Q3_K_M from mradermacher
+ - VAE: qwen_image_vae from Comfy-Org
+ - Alt model: Image-2512 Q3_K_M (txt2img only), switchable via dropdown
+ - Engine: stable-diffusion.cpp (C++ inference, no PyTorch)
+ - Speed: ~45 min per 512x512 image on 2 vCPU
+ - RAM: ~14.9GB model + ~1GB activations
+ 
+ ## API
+ 
+ ```python
+ from gradio_client import Client, handle_file
+ 
+ client = Client("WeReCooking/Qwen-Image-Edit-CPU")
+ result = client.predict(
+     prompt="transform into anime style",
+     negative_prompt="worst quality, blurry",
+     init_image=handle_file("photo.png"),
+     model_choice="Rapid-AIO-v23 Q3 (edit)",
+     aspect_ratio="Auto (match input, max 512px)",
+     steps=4,
+     cfg_scale=2.5,
+     guidance=3.0,
+     seed=-1,
+     api_name="/infer",
+ )
+ print(result)
+ ```
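+ 
+ A single run can take ~45 minutes, so a non-blocking call is often more practical. A minimal sketch of the job-based variant, reusing `client` and `handle_file` from the snippet above and assuming omitted parameters fall back to the endpoint defaults:
+ 
+ ```python
+ job = client.submit(
+     prompt="transform into anime style",
+     init_image=handle_file("photo.png"),
+     api_name="/infer",
+ )
+ for image, status in job:  # /infer streams (image, status); image is None until the final yield
+     print(status)
+ ```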
+ 
+ ## MCP
+ 
+ ```json
+ {
+   "mcpServers": {
+     "qwen-image-edit": {"url": "https://werecooking-qwen-image-edit-cpu.hf.space/gradio_api/mcp/sse"}
+   }
+ }
+ ```
+ 
+ ## CLI
+ 
+ ```bash
+ python app.py "transform into anime style" -i photo.png -o edited.png
+ python app.py "a cat on a windowsill" -o cat.png --model "Image-2512 (txt2img)"
+ ```
+ 
+ ## Credits
+ 
+ - [Rapid-AIO](https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO) by Phr00t
+ - [GGUF](https://huggingface.co/Arunk25/Qwen-Image-Edit-Rapid-AIO-GGUF) by Arunk25
+ - [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp) by leejet
+ - [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image-Edit-2511) by Qwen
app.py ADDED
@@ -0,0 +1,355 @@
+ """Qwen-Image-Edit Rapid-AIO / Free CPU via GGUF + stable-diffusion.cpp
+ Lightning pre-fused, 4 steps natively."""
+ 
+ import os, sys, time, gc, argparse, signal, threading
+ from PIL import Image
+ 
+ sys.stdout.reconfigure(line_buffering=True)
+ sys.stderr.reconfigure(line_buffering=True)
+ 
+ def log_mem():
+     try:
+         with open("/proc/self/status") as f:
+             for line in f:
+                 if line.startswith(("VmRSS", "VmPeak")):
+                     print(f"  [mem] {line.strip()}", flush=True)
+     except Exception: pass
+ 
+ def sighandler(signum, frame):
+     print(f"\n[FATAL] Signal {signum} ({signal.Signals(signum).name})", flush=True)
+     log_mem()
+     sys.exit(128 + signum)
+ 
+ for sig in (signal.SIGTERM, signal.SIGINT, signal.SIGABRT):
+     try: signal.signal(sig, sighandler)
+     except Exception: pass
+ 
+ def get_cpu_count() -> int:
+     try:
+         with open("/sys/fs/cgroup/cpu.max") as f:
+             q, p = f.read().strip().split()
+         if q != "max": return max(1, int(q) // int(p))
+     except Exception: pass
+     try:
+         with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f: q = int(f.read().strip())
+         with open("/sys/fs/cgroup/cpu/cpu.cfs_period_us") as f: p = int(f.read().strip())
+         if q > 0: return max(1, q // p)
+     except Exception: pass
+     return max(1, os.cpu_count() or 2)
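+ # get_cpu_count reads the container quota: cgroup v2 exposes "<quota> <period>"
+ # in a single cpu.max file (e.g. "200000 100000" -> 2 vCPUs), while cgroup v1
+ # splits the same two numbers across cfs_quota_us and cfs_period_us (-1 = unlimited).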
+ 
+ N_THREADS = get_cpu_count()
+ for k in ["OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS"]:
+     os.environ.setdefault(k, str(N_THREADS))
+ print(f"[init] CPU threads: {N_THREADS}")
+ 
+ MODELS = {
+     "Rapid-AIO-v23 Q3 (edit)": {
+         "repo": "Arunk25/Qwen-Image-Edit-Rapid-AIO-GGUF",
+         "file": "v23/Qwen-Rapid-NSFW-v23_Q3_K.gguf",
+         "needs_image": True,
+     },
+     "Image-2512 (txt2img)": {
+         "repo": "unsloth/Qwen-Image-2512-GGUF",
+         "file": "qwen-image-2512-Q3_K_M.gguf",
+         "needs_image": False,
+     },
+ }
+ DEFAULT_MODEL = "Rapid-AIO-v23 Q3 (edit)"
+ 
+ LLM_REPO = "mradermacher/Qwen2.5-VL-7B-Instruct-abliterated-GGUF"
+ LLM_FILE = "Qwen2.5-VL-7B-Instruct-abliterated.Q3_K_M.gguf"
+ 
+ VAE_REPO = "Comfy-Org/Qwen-Image_ComfyUI"
+ VAE_FILE = "split_files/vae/qwen_image_vae.safetensors"
+ 
+ DEFAULT_NEG = "worst quality, low quality, blurry, watermark, text, signature, jpeg artifacts"
+ 
+ MAX_INPUT_PX = 768
+ 
+ from huggingface_hub import hf_hub_download
+ from stable_diffusion_cpp import StableDiffusion
+ 
+ def ensure_model(repo_id: str, filename: str) -> str:
+     print(f"[init] Resolving {repo_id}/{filename}...", flush=True)
+     t0 = time.time()
+     path = hf_hub_download(repo_id=repo_id, filename=filename)
+     dt = time.time() - t0
+     if dt > 1:
+         print(f"[init] Downloaded in {dt:.1f}s", flush=True)
+     return path
+ 
+ print("[init] Downloading shared models...", flush=True)
+ llm_path = ensure_model(LLM_REPO, LLM_FILE)
+ vae_path = ensure_model(VAE_REPO, VAE_FILE)
+ 
+ print("[init] Pre-caching diffusion models to disk...", flush=True)
+ model_paths = {}
+ for name, cfg in MODELS.items():
+     model_paths[name] = ensure_model(cfg["repo"], cfg["file"])
+     print(f"[init] {name}: OK", flush=True)
+ 
+ SD_ENGINE = None
+ CURRENT_MODEL = None
+ 
+ def load_engine(model_name=None):
+     model_name = model_name or DEFAULT_MODEL
+     if model_name not in model_paths:
+         raise ValueError(f"Unknown model: {model_name!r}. Available: {list(model_paths)}")
+     global SD_ENGINE, CURRENT_MODEL
+     if SD_ENGINE is not None and CURRENT_MODEL == model_name:
+         return SD_ENGINE
+     if SD_ENGINE is not None:
+         print(f"[engine] Unloading {CURRENT_MODEL}...", flush=True)
+         del SD_ENGINE
+         SD_ENGINE = None
+         gc.collect()
+     print(f"[engine] Loading {model_name}...", flush=True)
+     t0 = time.time()
+     SD_ENGINE = StableDiffusion(
+         diffusion_model_path=model_paths[model_name],
+         llm_path=llm_path,
+         vae_path=vae_path,
+         offload_params_to_cpu=True,
+         diffusion_flash_attn=True,
+         n_threads=N_THREADS,
+         verbose=True,
+     )
+     CURRENT_MODEL = model_name
+     print(f"[engine] Loaded in {time.time()-t0:.1f}s", flush=True)
+     log_mem()
+     return SD_ENGINE
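+ # load_engine keeps exactly one engine resident: each GGUF model is roughly
+ # 15GB of weights, so holding two at once would exceed the 16GB RAM of the
+ # cpu-basic tier.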
+ 
+ load_engine(DEFAULT_MODEL)
+ 
+ ASPECT_PRESETS = {
+     "Auto (match input, max 512px)": None,
+     "1:1 512x512": (512, 512),
+     "16:9 576x320": (576, 320),
+     "9:16 320x576": (320, 576),
+     "4:3 576x432": (576, 432),
+     "3:4 432x576": (432, 576),
+ }
+ 
+ MAX_PIXELS = 512 * 512
+ ALIGN = 16
+ VAE_STEP_THRESHOLD_S = 120
+ 
+ def calc_output_size(img_w, img_h):
+     img_w = max(1, img_w)
+     img_h = max(1, img_h)
+     ratio = img_w / img_h
+     area = min(img_w * img_h, MAX_PIXELS)
+     h = max(ALIGN, int((area / ratio) ** 0.5))
+     w = max(ALIGN, int(h * ratio))
+     w = (w // ALIGN) * ALIGN
+     h = (h // ALIGN) * ALIGN
+     MIN_DIM = ALIGN * 4
+     while w * h > MAX_PIXELS and (w > MIN_DIM or h > MIN_DIM):
+         if w >= h and w > MIN_DIM: w -= ALIGN
+         elif h > MIN_DIM: h -= ALIGN
+         else: break
+     return w, h
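+ # Worked example for calc_output_size: a 1024x768 input has ratio 4:3; area is
+ # capped at 512*512 = 262144 px, giving h ~ 443 and w ~ 590, which snap down to
+ # the 16-px grid as 576x432 (248832 px, under the cap, so the loop never runs).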
+ 
+ def safe_load_image(path, max_px=MAX_INPUT_PX, crop_ratio=None):
+     img = Image.open(path).convert("RGB") if isinstance(path, str) else path.convert("RGB")
+     w, h = img.size
+     if max(w, h) > max_px:
+         scale = max_px / max(w, h)
+         img = img.resize((int(w * scale), int(h * scale)), Image.Resampling.LANCZOS)
+         print(f"[gen] Downscaled input {w}x{h} -> {img.size[0]}x{img.size[1]}", flush=True)
+         w, h = img.size
+     if crop_ratio is not None:
+         target_w, target_h = crop_ratio
+         tr = target_w / target_h
+         ir = w / h
+         if abs(tr - ir) > 0.01:
+             if ir > tr:
+                 new_w = int(h * tr)
+                 left = (w - new_w) // 2
+                 img = img.crop((left, 0, left + new_w, h))
+             else:
+                 new_h = int(w / tr)
+                 top = (h - new_h) // 2
+                 img = img.crop((0, top, w, top + new_h))
+             print(f"[gen] Center-cropped to {img.size[0]}x{img.size[1]} for {target_w}:{target_h} ratio", flush=True)
+     return img
+ 
+ def generate(prompt, negative_prompt, init_image, model_choice, aspect_ratio, steps, cfg_scale, guidance, seed):
+     """Generator: yields (image or None, status) pairs so both Gradio and the CLI can stream progress."""
+     gc.collect()
+     print(f"\n{'='*60}", flush=True)
+     print(f"[gen] START {time.strftime('%H:%M:%S')}", flush=True)
+     log_mem()
+     sd = load_engine(model_choice)
+     steps = int(steps)
+     try: seed = int(seed)
+     except (TypeError, ValueError): seed = -1
+     if seed < 0: seed = -1
+ 
+     preset = ASPECT_PRESETS.get(aspect_ratio)
+     pil_input = None
+     if init_image is not None:
+         pil_input = safe_load_image(init_image, crop_ratio=preset)
+     elif MODELS.get(model_choice, {}).get("needs_image"):
+         yield None, "Error: this model requires an input image"
+         return
+ 
+     if preset:
+         w, h = preset
+     elif pil_input is not None:
+         w, h = calc_output_size(*pil_input.size)
+     else:
+         w, h = 512, 512
+ 
+     kwargs = dict(
+         prompt=prompt,
+         negative_prompt=negative_prompt or "",
+         width=w, height=h,
+         sample_steps=steps,
+         cfg_scale=cfg_scale,
+         guidance=guidance,
+         sample_method="euler",
+         seed=seed,
+         vae_tiling=True,
+     )
+ 
+     if pil_input is not None:
+         kwargs["init_image"] = pil_input
+ 
+     mode = "edit" if pil_input else "txt2img"
+     print(f"[gen] {mode} {w}x{h} steps={steps} cfg={cfg_scale} guidance={guidance} seed={seed}", flush=True)
+     if negative_prompt:
+         print(f"[gen] neg: {negative_prompt[:100]}", flush=True)
+ 
+     state = {"phase": "starting...", "step_times": [], "small_step_rounds": 0}
+     result_holder = {"images": None, "error": None}
+ 
+     def step_cb(step, steps_total, t_step):
+         # sd.cpp routes several phases through one progress callback: a pass
+         # with far more steps than requested is labeled preparation, fast
+         # early steps on an edit run as VAE encode, the rest as diffusion.
+         if steps_total > steps * 2:
+             pct = int(step / max(steps_total, 1) * 100)
+             state["phase"] = f"preparing {pct}%"
+             return
+         is_vae = (t_step < VAE_STEP_THRESHOLD_S and state["small_step_rounds"] == 0 and init_image is not None)
+         if is_vae:
+             state["phase"] = f"VAE encode {step}/{steps_total}"
+             print(f"  [VAE {step}/{steps_total}] {t_step:.1f}s", flush=True)
+             if step >= steps_total:
+                 state["small_step_rounds"] += 1
+         else:
+             state["phase"] = f"diffusion {step}/{steps_total}"
+             state["step_times"].append(t_step)
+             print(f"  [diffusion {step}/{steps_total}] {t_step:.1f}s", flush=True)
+ 
+     def run_inference():
+         try:
+             result_holder["images"] = sd.generate_image(**kwargs, progress_callback=step_cb)
+         except Exception as e:
+             import traceback; traceback.print_exc()
+             result_holder["error"] = e
+ 
+     t0 = time.time()
+     thread = threading.Thread(target=run_inference)
+     thread.start()
+ 
+     yield None, f"Starting {mode} {w}x{h}..."
+ 
+     while thread.is_alive():
+         thread.join(timeout=10)
+         elapsed = time.time() - t0
+         mins = int(elapsed // 60)
+         secs = int(elapsed % 60)
+         eta = ""
+         if state["step_times"]:
+             avg = sum(state["step_times"]) / len(state["step_times"])
+             done = len(state["step_times"])
+             remaining = (steps - done) * avg
+             if remaining > 0:
+                 eta_m = int(remaining // 60)
+                 eta = f" | ~{eta_m}m left"
+         yield None, f"[{mins}m{secs:02d}s] {state['phase']}{eta}"
+ 
+     elapsed = time.time() - t0
+ 
+     if result_holder["error"]:
+         print(f"[gen] EXCEPTION: {result_holder['error']}", flush=True)
+         log_mem(); gc.collect()
+         yield None, f"Error after {elapsed:.0f}s: {result_holder['error']}"
+         return
+ 
+     images = result_holder["images"]
+     print(f"[gen] Done, {len(images) if images else 0} images", flush=True)
+     log_mem(); gc.collect()
+     status = f"Done in {elapsed:.0f}s | {mode} {w}x{h}, {steps} steps, seed {seed}"
+     print(f"[gen] {status}", flush=True)
+     yield (images[0] if images else None), status
+ 
+ def cli_main():
+     parser = argparse.ArgumentParser(description="Qwen-Image-Edit Rapid-AIO CPU")
+     parser.add_argument("prompt", help="Text prompt")
+     parser.add_argument("-o", "--output", default="output.png")
+     parser.add_argument("-i", "--init-image", default=None)
+     parser.add_argument("-n", "--negative", default=DEFAULT_NEG)
+     parser.add_argument("--model", default=DEFAULT_MODEL, choices=list(MODELS.keys()))
+     parser.add_argument("--aspect", default="Auto (match input, max 512px)", choices=list(ASPECT_PRESETS.keys()))
+     parser.add_argument("--steps", type=int, default=4)
+     parser.add_argument("--cfg", type=float, default=2.5)
+     parser.add_argument("--guidance", type=float, default=3.0)
+     parser.add_argument("--seed", type=int, default=-1)
+     args = parser.parse_args()
+ 
+     for img, status in generate(args.prompt, args.negative, args.init_image, args.model, args.aspect, args.steps, args.cfg, args.guidance, args.seed):
+         if img:
+             img.save(args.output)
+             print(f"Saved: {args.output} ({status})")
+             return
+         print(f"  {status}", flush=True)
+     print("Failed")
+     sys.exit(1)
+ 
+ def gradio_main():
+     import gradio as gr
+ 
+     def on_model_change(choice):
+         return gr.update(visible=MODELS[choice]["needs_image"])
+ 
+     with gr.Blocks(title="Qwen-Image-Edit CPU", theme="Nymbo/Alyx_Theme") as demo:
+         with gr.Row(equal_height=False):
+             with gr.Column(variant="panel", scale=1, min_width=280):
+                 prompt = gr.Textbox(label="Prompt / Qwen-Image-Edit Lightning (~45m/512x512)", lines=2, placeholder="e.g. transform into anime style")
+                 with gr.Accordion("Negative prompt", open=False):
+                     negative_prompt = gr.Textbox(value=DEFAULT_NEG, lines=1, show_label=False)
+                 init_image = gr.Image(label="Input Image", type="filepath", visible=True, height=160)
+                 gen_btn = gr.Button("Generate", variant="primary", size="lg")
+                 with gr.Row():
+                     model_choice = gr.Dropdown(choices=list(MODELS.keys()), value=DEFAULT_MODEL, label="Model", scale=2)
+                     aspect_ratio = gr.Dropdown(choices=list(ASPECT_PRESETS.keys()), value="Auto (match input, max 512px)", label="Aspect (crop)", scale=2)
+                 with gr.Row():
+                     steps = gr.Slider(1, 50, value=4, step=1, label="Steps", scale=1)
+                     cfg_scale = gr.Slider(1.0, 7.0, value=2.5, step=0.5, label="CFG", scale=1)
+                     guidance = gr.Slider(1.0, 10.0, value=3.0, step=0.5, label="Guidance", scale=1)
+                     seed = gr.Number(value=-1, label="Seed", precision=0, scale=1)
+             with gr.Column(variant="panel", scale=1, min_width=280):
+                 output_image = gr.Image(label="Output", type="pil", height=380)
+                 status_text = gr.Textbox(label="Status", interactive=False, lines=1)
+                 gr.Markdown(
+                     "[Rapid-AIO](https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO) · "
+                     "[GGUF](https://huggingface.co/Arunk25/Qwen-Image-Edit-Rapid-AIO-GGUF) · "
+                     "[sd.cpp](https://github.com/leejet/stable-diffusion.cpp)")
+ 
+         model_choice.change(fn=on_model_change, inputs=[model_choice], outputs=[init_image])
+ 
+         gen_btn.click(
+             fn=generate,
+             inputs=[prompt, negative_prompt, init_image, model_choice, aspect_ratio, steps, cfg_scale, guidance, seed],
+             outputs=[output_image, status_text],
+             api_name="infer", concurrency_limit=1,
+         )
+ 
+     demo.queue(default_concurrency_limit=1).launch(ssr_mode=False, show_error=True, mcp_server=True, max_threads=1)
+ 
+ if __name__ == "__main__":
+     if len(sys.argv) > 1 and not sys.argv[1].startswith("--"):
+         cli_main()
+     else:
+         gradio_main()
+ else:
+     # imported rather than executed: still start the UI
+     gradio_main()
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ stable-diffusion-cpp-python==0.4.5
+ gradio[mcp]
+ Pillow
+ huggingface-hub