UI-S1: Advancing GUI Automation via Semi-online Reinforcement Learning
Paper: [arXiv:2509.11543](https://arxiv.org/abs/2509.11543)
This is a hybrid quantized version of mPLUG/UI-S1-7B optimized for efficient GUI automation on consumer hardware.
- ✅ **Zero Vision Quality Loss**: vision tower completely preserved in BF16
- ✅ **Massive Memory Savings**: 68.7% size reduction
- ✅ **Consumer Hardware Ready**: runs on 16 GB VRAM GPUs
- ✅ **16k Context Support**: full context window with room to spare
| Metric | Original | Quantized |
|---|---|---|
| Model Size | 14.5 GB | 4.6 GB |
| VRAM Usage | ~14-15 GB | ~4.5-5.5 GB |
| Vision Quality | 100% | 100% (preserved) |
| Text Layers | BF16 | INT4 |
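
To verify the size reduction locally, you can check the on-disk size of the quantized checkpoint. This is a minimal sketch; it assumes you have already downloaded `quanto_model.safetensors` (the filename used in the loading code below) to the working directory.

```python
import os

# Sketch: report the on-disk size of the downloaded quantized checkpoint.
size_gb = os.path.getsize("quanto_model.safetensors") / 1e9
print(f"Quantized checkpoint: {size_gb:.1f} GB")
```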
## Usage

### Load the quantized model

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from quanto import safe_load, quantize, freeze, qint4

# Load the base architecture in BF16
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Hadidiz9/UI-S1-7B-Hybrid-W4-Quanto",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Load the quantized weights
state_dict = safe_load("quanto_model.safetensors")
model.load_state_dict(state_dict, strict=False)

# Requantize (restore the quanto layers), excluding the vision tower,
# embeddings, and output head so they stay in full precision
vision_keywords = ['visual', 'vision', 'image', 'patch', 'merger', 'projector', 'embed_tokens', 'lm_head']
exclude_modules = []
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        if any(k in name.lower() for k in vision_keywords):
            exclude_modules.append(name)

quantize(model, weights=qint4, exclude=exclude_modules)
freeze(model)
model.eval()

processor = AutoProcessor.from_pretrained("Hadidiz9/UI-S1-7B-Hybrid-W4-Quanto", trust_remote_code=True)
```
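
As a quick sanity check after loading, you can count how many linear layers were actually converted and report the resulting memory footprint. This is an optional sketch, not part of the required loading steps; it matches quanto's quantized linear modules by class name, so it simply reports zero if your quanto version names them differently.

```python
# Optional sanity check: count quantized linear layers and report memory use.
num_quantized = sum(1 for _, m in model.named_modules() if type(m).__name__ == "QLinear")
print(f"Quantized linear layers: {num_quantized}")
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```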
### Run inference on a screenshot

```python
from PIL import Image
from qwen_vl_utils import process_vision_info

# Load a screenshot
image = Image.open("screenshot.png")

# Prepare the chat messages
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe the UI elements."}
        ]
    }
]

# Build the prompt and pack the vision inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly generated tokens
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=128)
response = processor.batch_decode(
    [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)],
    skip_special_tokens=True
)[0]
print(response)
```
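
To reproduce the VRAM numbers from the table above on your own GPU, you can bracket a generation call with PyTorch's peak-memory counters. A minimal sketch, assuming a single CUDA device and the `model` and `inputs` objects from the example above:

```python
# Sketch: measure peak VRAM around one generation call (CUDA only).
torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    _ = model.generate(**inputs, max_new_tokens=128)
print(f"Peak VRAM: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
```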
**Note:** The quanto library is required for loading.

## Citation

Original model:
```bibtex
@article{lu2025ui,
  title={UI-S1: Advancing GUI Automation via Semi-online Reinforcement Learning},
  author={Lu, Zhengxi and others},
  journal={arXiv preprint arXiv:2509.11543},
  year={2025}
}
```
## License

Apache 2.0 (same as the base model).
**Base model:** [mPLUG/UI-S1-7B](https://huggingface.co/mPLUG/UI-S1-7B)