How to use teohyc/QwigLip-VLM with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("image-text-to-text", model="teohyc/QwigLip-VLM")

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("teohyc/QwigLip-VLM", dtype="auto")

How to use teohyc/QwigLip-VLM with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "teohyc/QwigLip-VLM"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "teohyc/QwigLip-VLM",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
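The same OpenAI-compatible endpoint can also be called from Python. A minimal sketch using the openai client package (assuming the vLLM server started above is reachable at localhost:8000; the API key is a placeholder since local vLLM does not require one):
# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server (no real API key needed).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="teohyc/QwigLip-VLM",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)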
How to use teohyc/QwigLip-VLM with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "teohyc/QwigLip-VLM" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "teohyc/QwigLip-VLM",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "teohyc/QwigLip-VLM" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "teohyc/QwigLip-VLM",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

How to use teohyc/QwigLip-VLM with Docker Model Runner:
docker model run hf.co/teohyc/QwigLip-VLM
A custom vision-language model built from scratch, inspired by the LLaVA architecture but using a custom MLP projector and LoRA fine-tuning for efficient training. Training data: https://huggingface.co/datasets/phiyodr/coco2017. Full repository: https://github.com/teohyc/qwiglip_vlm
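For context, the MLP projector maps SigLIP patch embeddings into the Qwen2 embedding space so that image tokens can be fed to the language model alongside text tokens. A minimal sketch of such a projector is below; the two-layer GELU design and the hidden sizes (768 for siglip-base-patch16-224, 896 for Qwen2-0.5B-Instruct) are assumptions here, not necessarily the exact layout of MLPProjector in vlm_model.py:
import torch.nn as nn

class SimpleMLPProjector(nn.Module):
    """Hypothetical two-layer MLP mapping vision features to the LLM hidden size."""
    def __init__(self, vision_dim: int = 768, llm_dim: int = 896):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_features):
        # image_features: (batch, num_image_tokens, vision_dim)
        # returns:        (batch, num_image_tokens, llm_dim)
        return self.proj(image_features)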
***** CHECK OUT inference.py FOR DETAILED INFERENCE EXAMPLE *****
import torch
from PIL import Image
from transformers import AutoTokenizer, AutoProcessor, AutoModel, Qwen2ForCausalLM
from peft import PeftModel
from vlm_model import MLPProjector, SiglipQwenVLM
# Configuration
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
LLM_NAME = "Qwen/Qwen2-0.5B-Instruct"           # base language model
VISION_NAME = "google/siglip-base-patch16-224"  # vision encoder
LORA_PATH = "lora_adapter"                      # LoRA adapter weights
PROJECTOR_PATH = "projector.pt"                 # trained MLP projector weights
NUM_IMAGE_TOKENS = 196                          # 14x14 patches for a 224px SigLIP input

# Refer to inference.py for the full code
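As a rough sketch of how these pieces fit together: the vision encoder, projector, LoRA-adapted LLM, and tokenizer/processor are loaded separately and then composed. The constructor arguments of MLPProjector and SiglipQwenVLM below are assumptions; inference.py in the repository is authoritative.
# Hypothetical wiring of the components -- see inference.py for the actual construction.
tokenizer = AutoTokenizer.from_pretrained(LLM_NAME)
processor = AutoProcessor.from_pretrained(VISION_NAME)

vision_encoder = AutoModel.from_pretrained(VISION_NAME)   # SigLIP vision tower
llm = Qwen2ForCausalLM.from_pretrained(LLM_NAME)          # base Qwen2 LLM
llm = PeftModel.from_pretrained(llm, LORA_PATH)           # apply the LoRA adapter

projector = MLPProjector()                                # constructor signature assumed
projector.load_state_dict(torch.load(PROJECTOR_PATH, map_location=DEVICE))

model = SiglipQwenVLM(vision_encoder, projector, llm).to(DEVICE).eval()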