StableI2I
Official implementation of StableI2I: Spotting Unintended Changes in Image-to-Image Transition
ICML 2026
This HuggingFace repository provides the checkpoint used in the paper.
For the latest code, demo, inference scripts, and the version with explicit score output, please refer to the official GitHub repository:
https://github.com/Henry-Lee-real/StableI2I. This model is associated with our paper: https://arxiv.org/abs/2605.04453
For any questions, please contact us by email: lijiayang.cs@gmail.com
Looking forward to your stars!
TODOs
- Release code
- Release checkpoint
- Release pip package
- Release arXiv version
- Release ICML camera-ready paper
- Release HuggingFace project page
News
- StableI2I has been accepted to ICML 2026.
- This HuggingFace repository hosts the checkpoint used in the paper.
- The latest codebase is maintained in the official GitHub repository.
- If you need the version with explicit score output, please use the latest GitHub code.
Core Concept
In most real-world image-to-image (I2I) scenarios, existing evaluations primarily focus on instruction following and perceptual quality or aesthetics of the generated images. However, they often fail to assess whether the output image faithfully preserves the semantic correspondence, spatial structure, and low-level appearance of the input image.
To address this limitation, we propose StableI2I, a unified and dynamic evaluation framework for measuring content fidelity and pre-to-post consistency in image-to-image transitions. StableI2I does not require reference images and applies to a wide range of I2I tasks, including image editing and image restoration.
StableI2I evaluates unintended changes from three complementary perspectives:
- Semantic level: checks whether the output introduces unintended object-level or meaning-level changes, such as object addition, removal, replacement, or identity drift.
- Structure level: checks whether the output preserves spatial layout and geometric consistency, flagging misalignment, deformation, repainting, and structural distortion.
- Low-level appearance: checks whether the output introduces unintended visual degradation, such as blur, noise, color cast, exposure shifts, or artifacts.
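The three perspectives above each yield a per-level judgment that can be combined into one fidelity report. The sketch below is purely illustrative; the field names, score ranges, and equal-weight aggregation are assumptions for exposition, not the scoring scheme defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class FidelityReport:
    # Per-level scores in [0, 1]; higher means fewer unintended changes.
    semantic: float    # object addition/removal/replacement, identity drift
    structure: float   # misalignment, deformation, repainting, distortion
    appearance: float  # blur, noise, color cast, exposure shifts, artifacts

    def overall(self, weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
        """Weighted average of the three per-level scores."""
        ws, wt, wa = weights
        return ws * self.semantic + wt * self.structure + wa * self.appearance

report = FidelityReport(semantic=0.9, structure=0.8, appearance=0.95)
print(round(report.overall(), 3))  # → 0.883
```

Keeping the three scores separate before aggregation is what makes the evaluation interpretable: a low structure score with high semantic and appearance scores, for example, points directly at layout drift rather than object changes or visual degradation.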
In addition, we construct StableI2I-Bench, a benchmark designed to systematically evaluate the ability of MLLMs to judge content fidelity and consistency in image-to-image tasks.
Extensive experiments show that StableI2I provides accurate, fine-grained, and interpretable evaluations, with strong correlations to human subjective judgments. It serves as a practical evaluation tool for diagnosing content consistency and benchmarking real-world I2I systems.
Model Checkpoint
This HuggingFace repository provides the checkpoint used in the StableI2I paper.
Please note:
- The checkpoint corresponds to the paper version.
- For the latest inference pipeline, API interface, and score-supported output format, please refer to the official GitHub repository.
- The model is built upon the Qwen3-VL environment and follows the Qwen3-VL inference style.
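Since the checkpoint follows the Qwen3-VL inference style, evaluation can be framed as a multimodal chat turn that shows the model both images. The sketch below is a hypothetical example: the prompt text, the `build_eval_messages` helper, and the generation settings are illustrative assumptions; for the official pipeline and score output format, use the GitHub code.

```python
def build_eval_messages(src_image: str, edited_image: str, instruction: str) -> list:
    """Build a Qwen3-VL-style chat payload asking the model to spot
    unintended changes between a source image and its edited version.
    (Illustrative prompt; not the official StableI2I prompt.)"""
    prompt = (
        "You are given a source image and an edited image produced by the "
        f"instruction: '{instruction}'. Report any unintended semantic, "
        "structural, or low-level appearance changes."
    )
    return [{
        "role": "user",
        "content": [
            {"type": "image", "image": src_image},
            {"type": "image", "image": edited_image},
            {"type": "text", "text": prompt},
        ],
    }]

RUN_MODEL = False  # set True after downloading the checkpoint
if RUN_MODEL:
    from transformers import AutoModelForImageTextToText, AutoProcessor

    repo = "lijiayangCS/StableI2I"  # this model card's repository
    processor = AutoProcessor.from_pretrained(repo)
    model = AutoModelForImageTextToText.from_pretrained(repo, device_map="auto")

    messages = build_eval_messages("input.png", "output.png", "make the sky a sunset")
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=512)
    print(processor.batch_decode(
        out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True,
    )[0])
```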
Model tree for lijiayangCS/StableI2I
- Base model: Qwen/Qwen3-VL-8B-Instruct