---
license: creativeml-openrail-m
pretty_name: echosingularity workflows
tags:
  - comfyui
  - workflow
  - image-generation
  - text-to-image
  - image-to-video
  - video-generation
  - wan
  - qwen
  - z-image
  - zit
  - sdxl
  - ai-art
  - generative-ai
---

## WAN 2.2 – I2V Workflow (Optimized for 12GB GPUs)

A fast, clean, and VRAM-efficient Image-to-Video workflow built around WAN 2.2, with fast render times on mid-range GPUs. I tried to keep it simple and easy to use while maintaining good results: it relies on well-known nodes, minimizes node bloat, and is commented throughout with a clear flow.

- **v1.0** – Base workflow; renders 5-second clips in one iteration (very fast for 12GB).
- **v1.1** – Improved stability; can run 100 times consecutively over 8 hours.
- **v1.2** – Renders 20-second videos; wire cleanup.
- **v1.3** – MMAudio added.
- **v1.4** – 2x upscaling, color correction, and sharpening between passes for quality consistency.
- **v1.5** – Fixed MMAudio; updated controls; easy selection of 5-, 10-, 15-, and 20-second videos; split RIFE between phases; fixed prompts; cleaned up the workflow.

---

## QWEN Image Edit Workflow (Optimized for 12GB GPUs)

Designed to run large AIO QWEN checkpoints (≈28GB) while still generating high-resolution outputs on 12GB VRAM GPUs.
The focus here is:

- Image editing / guided edits
- Very low step counts
- Stable results at low CFG
- Aggressive memory management
- Clean upscale + post polish

---

## Z-Image Turbo Workflow (Optimized, Multi-Phase)

Designed to extract maximum detail, edge fidelity, and material realism on 12GB VRAM GPUs. This workflow also adds seed variance to the conditioning, so outputs with the same prompt show more variety, similar to SDXL, Pony, and IL models.

It uses controlled sigma shaping, Res-2 samplers, and phased refinement passes to stabilize detail while avoiding common ZIT artifacts such as:

- Over-etched hair
- Shimmering edges
- Checkerboard blockiness
- CFG-induced harshness

The result is clean, high-contrast output that scales well across portraits, fashion, cinematic scenes, and hard-surface material tests.

---

## Auto IMG Batch Caption Workflow

Automatically generates clean, structured image captions by combining WD14 tagging, Florence-style natural-language descriptions, and a custom trigger token for training consistency. The goal is proven, one-click captioning of datasets for training; I have made many high-quality LoRAs from the datasets this workflow outputs.
- Uses WD14 to extract high-quality tag metadata
- Uses Florence to generate a natural-language image description
- Injects a custom trigger token at the start of every caption
- Outputs both tags and descriptive text in a single caption block
- Saves captions to a user-defined folder inside ComfyUI/output

### Important Setup Note (VERY IMPORTANT)

You must create a folder inside `ComfyUI/input/`, for example:

```
ComfyUI/input/Captions
```

Then select that folder in the caption loader node.

Captions follow this format:

```
TRIGGER, wd14_tags_here, florence_generated_description_here
```
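The caption assembly above can be sketched in plain Python. This is a minimal illustration of the output format only, not the workflow's actual nodes: `build_caption` and `save_caption` are hypothetical helper names, and the trigger, tags, and description values are made-up examples standing in for the WD14 and Florence outputs.

```python
from pathlib import Path

def build_caption(trigger: str, wd14_tags: list[str], florence_desc: str) -> str:
    # Caption format from the workflow:
    # TRIGGER, wd14_tags_here, florence_generated_description_here
    return ", ".join([trigger, ", ".join(wd14_tags), florence_desc])

def save_caption(caption: str, image_name: str, out_dir: str) -> Path:
    # Write the caption as a .txt file named after the image,
    # the pairing convention most LoRA trainers expect.
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    txt_path = out / (Path(image_name).stem + ".txt")
    txt_path.write_text(caption, encoding="utf-8")
    return txt_path

# Example with placeholder values (not real workflow output):
caption = build_caption(
    "mytrigger",
    ["1girl", "red_hair", "outdoors"],
    "A woman with red hair standing in a sunlit park.",
)
print(caption)
# mytrigger, 1girl, red_hair, outdoors, A woman with red hair standing in a sunlit park.
```

One caption file per image, named after the image, is the common dataset layout; adjust the output folder to match whatever you select in the caption loader node.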