Tags: Text Generation, Transformers, Safetensors, Chinese, qwen2, qwen2.5, vision-language-model, dinov2, nsd, multimodal, conversational, custom_code, text-generation-inference
Instructions to use LinkRur/Qwen_2.5_1.5B_Instruct_Vision with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LinkRur/Qwen_2.5_1.5B_Instruct_Vision with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LinkRur/Qwen_2.5_1.5B_Instruct_Vision", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LinkRur/Qwen_2.5_1.5B_Instruct_Vision", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("LinkRur/Qwen_2.5_1.5B_Instruct_Vision", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LinkRur/Qwen_2.5_1.5B_Instruct_Vision with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LinkRur/Qwen_2.5_1.5B_Instruct_Vision"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LinkRur/Qwen_2.5_1.5B_Instruct_Vision",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- SGLang
How to use LinkRur/Qwen_2.5_1.5B_Instruct_Vision with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "LinkRur/Qwen_2.5_1.5B_Instruct_Vision" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LinkRur/Qwen_2.5_1.5B_Instruct_Vision",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "LinkRur/Qwen_2.5_1.5B_Instruct_Vision" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LinkRur/Qwen_2.5_1.5B_Instruct_Vision",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use LinkRur/Qwen_2.5_1.5B_Instruct_Vision with Docker Model Runner:
```shell
docker model run hf.co/LinkRur/Qwen_2.5_1.5B_Instruct_Vision
```
Qwen_2.5_1.5B_Instruct_Vision
A vision-language model built on DINOv2 and NSD (Normalized State Decomposition), which aligns visual and language features via hypersphere normalization.
Method

NSD Definition

Given a vector v ∈ R^D, NSD projects it onto the unit hypersphere S^{D-1}:

$$\hat{v} = \frac{v}{\lVert v \rVert_2}$$

After normalization, the inner product is equivalent to cosine similarity:

$$\langle \hat{u}, \hat{v} \rangle = \frac{u^\top v}{\lVert u \rVert_2\,\lVert v \rVert_2} = \cos(u, v)$$
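As a minimal NumPy sketch of the normalization above (illustrative only, not the model's own code):

```python
import numpy as np

def nsd_normalize(v: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Project a vector (or batch of row vectors) onto the unit hypersphere."""
    return v / np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), eps)

u = np.array([3.0, 4.0])
v = np.array([0.0, 2.0])
u_hat, v_hat = nsd_normalize(u), nsd_normalize(v)

# After normalization, the plain inner product IS the cosine similarity:
# u_hat = [0.6, 0.8], v_hat = [0.0, 1.0], so the dot product is 0.8.
print(float(u_hat @ v_hat))  # → 0.8
```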
Vision-Language Alignment

The DINOv2 output h ∈ R^{384} is mapped by a projection layer into Qwen's 1536-dimensional embedding space, then NSD-normalized onto S^{1535}:

$$z = W_2\,\sigma(W_1 h), \qquad \hat{z} = \frac{z}{\lVert z \rVert_2}$$

where W_1 ∈ R^{768×384} and W_2 ∈ R^{1536×768} are the projection-layer parameters and σ denotes the MLP activation.

Qwen's vocabulary embedding matrix E ∈ R^{V×1536} is normalized the same way:

$$\hat{E}_j = \frac{E_j}{\lVert E_j \rVert_2}$$

Each visual token retrieves its nearest neighbor in the vocabulary:

$$j^\star = \arg\max_j \langle \hat{z}, \hat{E}_j \rangle$$
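The projection-and-retrieval step can be sketched as follows. This uses toy dimensions and random weights (the real model uses 384 → 768 → 1536 and Qwen's vocabulary matrix), and assumes a ReLU activation in the MLP, which the card does not specify:

```python
import numpy as np

rng = np.random.default_rng(0)
D_VIS, D_HID, D_EMB, V = 8, 16, 32, 100   # toy sizes; real: 384, 768, 1536, vocab size

W1 = rng.standard_normal((D_HID, D_VIS))  # random stand-ins for trained weights
W2 = rng.standard_normal((D_EMB, D_HID))
E = rng.standard_normal((V, D_EMB))       # stand-in vocabulary embeddings

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def retrieve(h: np.ndarray) -> int:
    """Project a visual feature, normalize it, return the nearest vocab index."""
    z = W2 @ np.maximum(W1 @ h, 0.0)      # MLP projection (ReLU assumed)
    z_hat = normalize(z)
    sims = normalize(E) @ z_hat           # cosine similarity to every vocab row
    return int(np.argmax(sims))

h = rng.standard_normal(D_VIS)            # stand-in for a DINOv2 patch feature
idx = retrieve(h)
print(idx)
```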
Training

The projection layers and the last 4 DINOv2 blocks are fine-tuned with an InfoNCE contrastive loss:

$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{\exp(\langle \hat{z}_i, \hat{y}_i \rangle / \tau)}{\sum_{j=1}^{N} \exp(\langle \hat{z}_i, \hat{y}_j \rangle / \tau)}$$

where y_i is the Qwen embedding of class i and τ = 0.07 is the temperature.
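A reference NumPy implementation of this batched InfoNCE loss, assuming both inputs are already unit-normalized (illustrative, not the training code):

```python
import numpy as np

def info_nce(z_hat: np.ndarray, y_hat: np.ndarray, tau: float = 0.07) -> float:
    """InfoNCE over a batch: z_hat[i] should match y_hat[i] against all y_hat[j].

    Both arguments are (N, D) arrays of unit-normalized embeddings."""
    logits = (z_hat @ y_hat.T) / tau                 # (N, N) cosine sims / temperature
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))       # -log p(correct pair), averaged

# Sanity check: with perfectly matched orthonormal pairs the loss is near 0,
# and shuffling the targets makes it much larger.
eye = np.eye(4)
print(info_nce(eye, eye))                      # close to 0
print(info_nce(eye, np.roll(eye, 1, axis=0)))  # large
```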
Architecture

| Component | Description | Parameters |
|---|---|---|
| DINOv2 ViT-S/14 | Vision encoder (last 4 blocks fine-tuned) | 22M |
| MLP Projection | 384 → 768 → 1536 | 1.18M |
| Qwen2.5-1.5B-Instruct | Language model (frozen) | 1543.7M |
Performance

Visual-token accuracy on the tiny-imagenet-200 validation set (10,000 images):

| Top-K | Hit Rate |
|---|---|
| Top-1 | 72.43% |
| Top-3 | 80.02% |
| Top-5 | 81.18% |
| Top-10 | 81.67% |
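The Top-K hit rates above can be computed from per-image similarity scores as follows (a hypothetical helper, not the authors' evaluation script):

```python
import numpy as np

def topk_hit_rate(sims: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes.

    sims: (N, C) similarity scores; labels: (N,) ground-truth class indices."""
    topk = np.argsort(-sims, axis=1)[:, :k]        # indices of the k best classes
    hits = (topk == labels[:, None]).any(axis=1)   # is the true label among them?
    return float(hits.mean())

# Tiny example: 3 samples, 4 classes.
sims = np.array([[0.9, 0.1, 0.0, 0.0],    # label 0 → Top-1 hit
                 [0.2, 0.5, 0.4, 0.1],    # label 2 → Top-2 hit
                 [0.3, 0.2, 0.1, 0.0]])   # label 3 → miss until k = 4
labels = np.array([0, 2, 3])
print(topk_hit_rate(sims, labels, 1))  # 1/3
print(topk_hit_rate(sims, labels, 2))  # 2/3
```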
Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

# Load the model
model = AutoModelForCausalLM.from_pretrained(
    "LinkRur/Qwen_2.5_1.5B_Instruct_Vision",  # replace with your model name
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(
    "LinkRur/Qwen_2.5_1.5B_Instruct_Vision",  # replace with your model name
    trust_remote_code=True
)

# Inference
image = Image.open("photo.jpg").convert("RGB")
vis_desc, response = model.generate_with_image(image, "描述这张图片", tokenizer)  # prompt: "Describe this image"
print(response)
```
License
MIT