
Add metadata and improve model card

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +31 -42
README.md CHANGED
@@ -1,27 +1,30 @@
  ---
  license: mit
- base_model:
- - Qwen/Qwen3-VL-8B-Instruct
  ---

- <h1 align="center">PVC-Judge is a state-of-the-art 8B assessment model for evaluating image editing models in visual consistency.</h1>

  <p align="center">
  <a href="https://arxiv.org/abs/2603.28547"><img src="https://img.shields.io/badge/Paper-arXiv%3A2603.28547-b31b1b?logo=arxiv&logoColor=red"></a>
  <a href="https://zhangqijiang07.github.io/gedit2_web/"><img src="https://img.shields.io/badge/%F0%9F%8C%90%20Project%20Page-Website-8A2BE2"></a>
  <a href="https://huggingface.co/datasets/GEditBench-v2/GEditBench-v2"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20HF-GEditBench v2-blue"></a>
- <a href="https://huggingface.co/datasets/GEditBench-v2/VCReward-Bench"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20HF-VCReward Bench-blue"></a>
- ## 🚀 Quick Start!
- ### Clone github repo
- ```bash
- git clone https://github.com/ZhangqiJiang07/GEditBench_v2.git
- cd GEditBench_v2
- ```

- ### Option 1: Packaged as an online client
- - Merge LoRA weights to models, required env `torch/peft/transformers`
  ```bash
  python ./scripts/merge_lora.py \
  --base-model-path /path/to/Qwen3/VL/8B/Instruct \
@@ -29,40 +32,26 @@ python ./scripts/merge_lora.py \
  --model-save-dir /path/to/save/PVC/Judge/model
  ```
- - Implement online server via vLLM
- ```bash
- python -m vllm.entrypoints.openai.api_server \
- --model /path/to/save/PVC/Judge/model \
- --served-model-name PVC-Judge \
- --tensor-parallel-size 1 \
- --mm-encoder-tp-mode data \
- --limit-mm-per-prompt.video 0 \
- --host 0.0.0.0 \
- --port 25930 \
- --dtype bfloat16 \
- --gpu-memory-utilization 0.80 \
- --max_num_seqs 32 \
- --max-model-len 48000 \
- --distributed-executor-backend mp
- ```
-
- - Use `autopipeline` for inference.
-
- See our [repo](https://github.com/ZhangqiJiang07/GEditBench_v2/tree/main) for detailed usage!
- ### Option 2: Offline Inference

  ```bash
- # For local judge inference
  conda env create -f environments/pvc_judge.yml
  conda activate pvc_judge
- # or:
- python3.12 -m venv .venvs/pvc_judge
- source .venvs/pvc_judge/bin/activate
- python -m pip install -r environments/requirements/pvc_judge.lock.txt
-
- # Run
  bash ./scripts/local_eval.sh vc_reward
  ```
  ---
+ base_model: Qwen/Qwen3-VL-8B-Instruct
  license: mit
+ library_name: peft
+ pipeline_tag: image-text-to-text
  ---

+ # PVC-Judge: Pairwise Visual Consistency Judge
+
+ PVC-Judge is a state-of-the-art 8B assessment model for evaluating image editing models on visual consistency. It is a pairwise preference model that assesses how well an edit preserves the identity, structure, and semantic coherence of the original image.
+
+ The model was introduced in the paper [GEditBench v2: A Human-Aligned Benchmark for General Image Editing](https://arxiv.org/abs/2603.28547) and is implemented as a LoRA adapter for [Qwen/Qwen3-VL-8B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct).

  <p align="center">
  <a href="https://arxiv.org/abs/2603.28547"><img src="https://img.shields.io/badge/Paper-arXiv%3A2603.28547-b31b1b?logo=arxiv&logoColor=red"></a>
  <a href="https://zhangqijiang07.github.io/gedit2_web/"><img src="https://img.shields.io/badge/%F0%9F%8C%90%20Project%20Page-Website-8A2BE2"></a>
+ <a href="https://github.com/ZhangqiJiang07/GEditBench_v2"><img src="https://img.shields.io/badge/GitHub-Code-black?logo=github"></a>
  <a href="https://huggingface.co/datasets/GEditBench-v2/GEditBench-v2"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20HF-GEditBench v2-blue"></a>
+ </p>

+ ## 🚀 Quick Start

+ To use PVC-Judge, first merge the LoRA weights into the base model.
+
+ ### 1. Merge LoRA weights
+ This step requires `torch`, `peft`, and `transformers`.
 
 
  ```bash
  python ./scripts/merge_lora.py \
  --base-model-path /path/to/Qwen3/VL/8B/Instruct \

  --model-save-dir /path/to/save/PVC/Judge/model
  ```
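Conceptually, merging folds the trained low-rank update back into the base weight matrices, so the adapter disappears and inference runs a vanilla model. A pure-`torch` sketch of that arithmetic follows; it is not the actual `merge_lora.py`, which applies the same idea to the full Qwen3-VL checkpoint via `peft`, and the sizes below are illustrative:

```python
import torch

torch.manual_seed(0)
d, r, alpha = 16, 4, 8  # hidden size, LoRA rank, LoRA scaling (illustrative values)

W = torch.randn(d, d)          # a frozen base-model weight matrix
A = torch.randn(r, d) * 0.02   # trained LoRA down-projection
B = torch.randn(d, r) * 0.02   # trained LoRA up-projection

# With the adapter attached, each adapted linear layer computes
#   y = x @ (W + (alpha / r) * B @ A).T
# Merging precomputes that sum once, so the LoRA matrices can be dropped:
W_merged = W + (alpha / r) * (B @ A)

x = torch.randn(2, d)
y_adapter = x @ (W + (alpha / r) * (B @ A)).T
y_merged = x @ W_merged.T
assert torch.allclose(y_adapter, y_merged, atol=1e-6)
```

The merged checkpoint behaves identically to base-plus-adapter but loads and serves like an ordinary model, which is why the vLLM step below takes a single model path.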

+ ### 2. Deployment or Local Inference
+ You can serve the merged model via vLLM, or run local evaluation as described in the [official repository](https://github.com/ZhangqiJiang07/GEditBench_v2).
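For serving, the vLLM command from the previous revision of this card can be reused for the merged checkpoint (ports, parallelism, and memory settings will need tuning for your hardware):

```shell
python -m vllm.entrypoints.openai.api_server \
  --model /path/to/save/PVC/Judge/model \
  --served-model-name PVC-Judge \
  --tensor-parallel-size 1 \
  --host 0.0.0.0 \
  --port 25930 \
  --dtype bfloat16 \
  --gpu-memory-utilization 0.80 \
  --max-model-len 48000
```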
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ **Local Inference:**
  ```bash
+ # Setup environment
  conda env create -f environments/pvc_judge.yml
  conda activate pvc_judge

+ # Run evaluation
  bash ./scripts/local_eval.sh vc_reward
+ ```
+
+ ## Citation
+
+ ```bibtex
+ @article{jiang2026geditbenchv2,
+   title={GEditBench v2: A Human-Aligned Benchmark for General Image Editing},
+   author={Zhangqi Jiang and Zheng Sun and Xianfang Zeng and Yufeng Yang and Xuanyang Zhang and Yongliang Wu and Wei Cheng and Gang Yu and Xu Yang and Bihan Wen},
+   journal={arXiv preprint arXiv:2603.28547},
+   year={2026}
+ }
  ```