# Reason2Gen GPT-based Full Evaluation
This document describes how to use a GPT-5.1 model (via the OpenAI API) to evaluate image-generation outputs on the Reason2Gen benchmark.

The evaluation script:

- Reads each task directory under your Reason2Gen benchmark.
- For every sample, loads:
  - the question / input prompt,
  - the target (ground-truth) image,
  - the generated image from your method (e.g., Bagel / FLUX2).
- Asks GPT to judge whether the generated image correctly solves the puzzle or instruction, given the prompt and the reference target image.
- Counts +1 for a correct image and 0 for an incorrect one, then reports accuracy per task and overall.
## 1. Directory Layout

The benchmark directory looks like this:

```
<base_dir>/
  hanoi/
    hanoi.json
    question/
      question_0000.png
      ...
    answer/
      answer_0000.png
      ...
  clock/
    clock.json
    question/
    answer/
  ...
```
Your method's generated images are assumed to be in:

```
<result_root>/
  hanoi/
    <method_name>/
      edited/
        answer_0000_<suffix>.png   # generated image for that sample
  clock/
    <method_name>/
      edited/
        ...
  ...
```
Where:

- `base_dir` = path to the Reason2Gen benchmark (JSON + question/answer images).
- `result_root` = root directory where you saved your outputs.
- `method_name` = name of your method (e.g., `bagel`, `flux2`).
- The script matches JSON entries to files by `image_target` or `target_image`, then looks for an edited image with a fixed suffix (you can change this).
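The suffix-based matching described above can be sketched as follows. This is an illustrative helper, not the script's actual code; `find_generated_image` and its argument names are hypothetical:

```python
import os

def find_generated_image(result_root, task, method_name, target_name, suffix):
    """Derive the generated-image path from a target filename.

    E.g. target "answer_0000.png" with suffix "_bagel.png" maps to
    <result_root>/<task>/<method_name>/edited/answer_0000_bagel.png
    """
    # Strip directory and extension: "answer/answer_0000.png" -> "answer_0000"
    stem, _ = os.path.splitext(os.path.basename(target_name))
    candidate = os.path.join(result_root, task, method_name, "edited", stem + suffix)
    # Return the path only if the generated file actually exists.
    return candidate if os.path.exists(candidate) else None
```

Samples whose generated file is missing return `None` and can then be skipped, matching the behavior described in Section 3.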
## 2. OpenAI API Configuration

You need:

- An OpenAI API key with access to the `gpt-5.1` (or similar) model.
- The Python `openai` package (a >= 1.0.0 style client).

Set your API key via an environment variable:

```bash
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxx"
```

Or in Windows PowerShell:

```powershell
$env:OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxx"
```
The script uses the official client, for example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5.1",
    messages=[...],
)
```

You can change the model name (e.g., to `gpt-4.1-mini`) in the script if desired.
## 3. Evaluation Script (`reason2gen_gpt_eval.py`)

Place `reason2gen_gpt_eval.py` next to this README.
The script will:

1. Discover all tasks under `base_dir` (each subfolder with a `<task>.json`).
2. For each task:
   - Load the JSON list of samples.
   - For each sample:
     - Read the instruction / textual description.
     - Locate the `question` image (optional GPT context).
     - Locate the `answer` (target) image.
     - Locate the generated image in your result folder.
     - If any image is missing, skip that sample.
     - Build a GPT prompt including:
       - The task name.
       - The natural-language instruction / description from the JSON.
       - A short description of the evaluation rule (exact matching vs. conceptual).
       - Optionally, some few-shot examples (you can add these).
     - Send all three images as `image_url` / `input_image` parts in the Chat Completions API:
       - the question image,
       - the target (answer) image,
       - the generated image.
     - Parse GPT's response as a strict JSON decision:
       - `{"label": 1}` → correct
       - `{"label": 0}` → incorrect
3. Accumulate `correct_count[task]` and `total_count[task]`.
4. Print:
   - Accuracy per task.
   - Macro-average accuracy over all tasks.
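The final accumulation and reporting step can be sketched like this. It is a simplified stand-in for the script, assuming per-task lists of 0/1 labels have already been collected; `summarize` is a hypothetical name:

```python
def summarize(judgments):
    """judgments: dict mapping task name -> list of 0/1 GPT labels."""
    per_task = {}
    for task, labels in judgments.items():
        correct, total = sum(labels), len(labels)
        per_task[task] = {
            "correct": correct,
            "total": total,
            "accuracy": correct / total if total else 0.0,
        }
    # Macro average: mean of per-task accuracies, each task weighted equally
    # (a small task counts as much as a large one).
    accs = [v["accuracy"] for v in per_task.values()]
    macro = sum(accs) / len(accs) if accs else 0.0
    return per_task, macro
```

Note that the macro average over tasks generally differs from the pooled `total_correct / total_samples` ratio when tasks have different sizes.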
## 4. How GPT Is Prompted

The core idea: GPT sees both the target and your generated image, and is instructed to:

- Compare the generated image against the target.
- Decide whether the generated image is semantically correct for the puzzle, not just visually similar.
- Output only a JSON structure with `label` equal to `1` or `0`.
Example system message (simplified):

```json
{
  "role": "system",
  "content": "You are an automatic judge for puzzle-like images..."
}
```

Example user message (simplified):

```json
{
  "role": "user",
  "content": [
    {"type": "text", "text": "...task description..."},
    {"type": "image_url", "image_url": {"url": "file://.../question.png"}},
    {"type": "image_url", "image_url": {"url": "file://.../answer.png"}},
    {"type": "image_url", "image_url": {"url": "file://.../generated.png"}}
  ]
}
```

The assistant must answer:

```json
{"label": 1}
```

or:

```json
{"label": 0}
```
If parsing fails, that sample is counted as incorrect by default (configurable).
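That fallback can be implemented with a small parser that extracts the first JSON-looking object from the reply and treats anything unparseable as incorrect. This is a sketch under those assumptions, not the script's exact code; `parse_label` is a hypothetical name:

```python
import json
import re

def parse_label(reply_text, default=0):
    """Extract {"label": 0 or 1} from a GPT reply; fall back to `default`."""
    # Grab the first brace-delimited object, tolerating surrounding chatter.
    match = re.search(r"\{[^{}]*\}", reply_text)
    if match:
        try:
            obj = json.loads(match.group(0))
            if obj.get("label") in (0, 1):
                return obj["label"]
        except json.JSONDecodeError:
            pass  # malformed JSON: fall through to the default
    return default
```

Replies like `Sure! {"label": 0}` still parse, while pure free-text answers count as incorrect by default.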
## 5. Running the Evaluator

### 5.1. Install dependencies

Create a Python environment and install:

```bash
pip install openai pillow tqdm
```
The OpenAI API cannot fetch arbitrary local file paths, so if your images live on disk, the script must send them either as encoded bytes or via hosted URLs. The reference implementation in `reason2gen_gpt_eval.py` reads local files and uploads them as `input_image` parts via the client.
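One common way to inline a local image is to encode it as a base64 data URL in an `image_url` part. A minimal sketch, assuming data URLs are accepted for the chosen model; `to_data_url` is a hypothetical helper:

```python
import base64
import mimetypes

def to_data_url(path):
    """Encode a local image file as a base64 data URL for the API."""
    # Fall back to PNG if the extension is unknown.
    mime = mimetypes.guess_type(path)[0] or "image/png"
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Used in a message part as:
# {"type": "image_url", "image_url": {"url": to_data_url("answer.png")}}
```

This avoids needing a web server to host the question, answer, and generated images.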
### 5.2. Example command

```bash
python reason2gen_gpt_eval.py \
  --base_dir /path/to/Reason2Gen \
  --result_root /path/to/Reason2Gen_outputs \
  --method_name bagel \
  --image_suffix _bagel.png \
  --model gpt-5.1 \
  --max_samples_per_task 0
```

In this repo (copy-paste):

```bash
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxx"

python /mnt/bn/yuanpengtu/svgthink/benchmark/Eval/Reason2GenBench/reason2gen_gpt_eval.py \
  --base_dir /mnt/bn/yuanpengtu/svgthink/benchmark/Reason2Gen \
  --result_root /mnt/bn/yuanpengtu/svgthink/benchmark/Eval/Reason2Gen_outputs \
  --method_name bagel \
  --image_suffix _bagel.png \
  --model gpt-5.1 \
  --max_samples_per_task 0 \
  --json_mode
```
Arguments:

- `--base_dir`: root of the Reason2Gen benchmark.
- `--result_root`: root of all generated outputs.
- `--method_name`: subdirectory under each task where your edited images live.
- `--image_suffix`: suffix appended to the target filename to get your generated filename.
- `--model`: which OpenAI vision-capable model to use.
- `--max_samples_per_task`: optional cap; `0` or omitted means "all".

You can also restrict the run to specific tasks:

```bash
python reason2gen_gpt_eval.py \
  --base_dir /path/to/Reason2Gen \
  --result_root /path/to/Reason2Gen_outputs \
  --method_name flux2 \
  --tasks hanoi clock pipe
```
## 6. Output Format

At the end, the script prints something like:

```
===== Per-task accuracy =====
Task hanoi: 73.2% ( 293 / 400 )
Task clock: 65.0% ( 130 / 200 )
Task pipe:  70.5% ( 141 / 200 )
...
===== Overall =====
Total: 69.1% ( 564 / 816 ) across 7 tasks
```
It can also optionally write results to a JSON file:

```json
{
  "per_task": {
    "hanoi": {"correct": 293, "total": 400, "accuracy": 0.7325},
    "clock": {"correct": 130, "total": 200, "accuracy": 0.65},
    "...": {}
  },
  "overall": {
    "correct": 564,
    "total": 816,
    "accuracy": 0.691
  },
  "config": {
    "base_dir": "...",
    "result_root": "...",
    "method_name": "bagel",
    "model": "gpt-5.1"
  }
}
```
(Enable this by passing `--save_json /path/to/results.json`.)
## 7. Notes & Tips

- **Cost & speed**: Vision GPT calls with 3 images per sample can be expensive for large benchmarks. You can:
  - Lower `max_samples_per_task`.
  - Use a cheaper model like `gpt-4.1-mini`.
  - Cache judgments (the script supports an optional cache file).
- **Determinism**: Set `temperature=0` for the GPT calls to get deterministic behavior.
- **Robustness**:
  - If a sample's images are missing, it is skipped and not counted.
  - If GPT output cannot be parsed as JSON with a `label`, that sample is treated as incorrect.
- **Strict vs. lenient criteria**: You can adjust the instructions to GPT to be more strict (exact final state) or more lenient (any valid solution).
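The judgment cache mentioned in the tips can be as simple as a JSON file keyed by sample ID, so re-runs skip samples that were already judged. A minimal sketch; `load_cache` / `save_cache` and the key format are illustrative, and the script's actual cache format may differ:

```python
import json
import os

def load_cache(path):
    """Load cached {sample_id: label} judgments, if the file exists."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def save_cache(path, cache):
    """Persist judgments so interrupted or repeated runs reuse prior API calls."""
    with open(path, "w") as f:
        json.dump(cache, f)
```

In the evaluation loop you would check `cache.get(sample_id)` before calling the API, and write the cache back after each judgment (or batch of judgments).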
## 8. Minimal Configuration Checklist

1. Reason2Gen benchmark present at `BASE_DIR`:
   - Contains subfolders (e.g., `hanoi`, `clock`, …).
   - Each subfolder has `<task>.json`, `question/`, `answer/`.
2. Generated images present at `RESULT_ROOT`:
   - `RESULT_ROOT/<task>/<method_name>/edited/…`
3. Set the environment variable:
   ```bash
   export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxx"
   ```
4. Run the evaluator:
   ```bash
   python reason2gen_gpt_eval.py --base_dir BASE_DIR --result_root RESULT_ROOT --method_name METHOD --image_suffix _METHOD.png --model gpt-5.1
   ```
5. Read accuracies from the terminal (and from the JSON file, if saved).
## 9. Extending the Script

- **Different file naming scheme**: modify how output filenames are derived from `image_target`.
- **Extra context in prompts**: inject additional text from the JSON (e.g., reasoning steps) into the GPT prompt.
- **Comparing multiple methods**: run the script separately for each `method_name` and compare overall accuracies.