# LightOnOCR-2-1B GGUF (Q8_0)

GGUF-quantized version of [lightonai/LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B).
## Files

- `LightOnOCR-2-1B-Q8_0.gguf` (610 MB) - language model (596M parameters, Q8_0 quantization)
- `mmproj-LightOnOCR-2-1B-Q8_0.gguf` (429 MB) - vision encoder (403M parameters, Q8_0 quantization)
## Usage

```bash
llama-server -hf staghado/LightOnOCR-2-1B-Q8_0-GGUF -c 8192 --temp 0.2 --top-k 0 --top-p 0.9
```

Note: the flags `--temp 0.2 --top-k 0 --top-p 0.9` set the default sampling parameters to match the original model.
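Once the server is running (on port 8080 unless you pass `--port`), a quick way to confirm it is ready is llama-server's health endpoint:

```bash
curl http://localhost:8080/health
```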
### API Example
```python
import base64

import requests

# Encode the input document for the data URL
with open('document.png', 'rb') as f:
    image_base64 = base64.b64encode(f.read()).decode()

# llama-server exposes an OpenAI-compatible API on port 8080 by default
response = requests.post('http://localhost:8080/v1/chat/completions', json={
    "model": "LightOnOCR-2-1B",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_base64}"}}
        ]
    }],
    "max_tokens": 1024,
    "temperature": 0.2,
    "top_k": 0,
    "top_p": 0.9
})
print(response.json()['choices'][0]['message']['content'])
```
Note: this model takes only an image as input; it does not accept text prompts.
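The same request can be issued from the shell. A minimal sketch, assuming the server is on the default port and the image is `document.png`; the payload is written to a file to avoid shell argument-length limits on large images:

```bash
IMG=$(base64 < document.png | tr -d '\n')
cat > request.json <<EOF
{
  "model": "LightOnOCR-2-1B",
  "messages": [{
    "role": "user",
    "content": [
      {"type": "image_url", "image_url": {"url": "data:image/png;base64,$IMG"}}
    ]
  }],
  "max_tokens": 1024,
  "temperature": 0.2,
  "top_k": 0,
  "top_p": 0.9
}
EOF
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  --data @request.json
```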
## Creating Quantized Versions

If you want to create your own quantized GGUF files:

### Prerequisites
```bash
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
python -m venv venv
source venv/bin/activate
pip install git+https://github.com/huggingface/transformers.git torch sentencepiece
```

Note: `transformers` must be installed from source until the next release includes LightOnOCR support.
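To verify the from-source install, print the installed version; a `.dev0` suffix indicates a source build:

```bash
python -c "import transformers; print(transformers.__version__)"
```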
### Conversion Steps

1. Download the original model:

   ```bash
   hf download lightonai/LightOnOCR-2-1B --repo-type model --local-dir ./models/LightOnOCR-2-1B
   ```

2. Convert the language model to Q8_0:

   ```bash
   python convert_hf_to_gguf.py ./models/LightOnOCR-2-1B --outtype q8_0 --outfile LightOnOCR-2-1B-Q8_0.gguf
   ```

3. Convert the vision encoder to Q8_0 (a sketch for testing the output follows this list):

   ```bash
   python convert_hf_to_gguf.py ./models/LightOnOCR-2-1B --mmproj --outtype q8_0 --outfile mmproj-LightOnOCR-2-1B-Q8_0.gguf
   ```
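To test the converted files, you can point llama-server at them directly rather than pulling from the Hub; this assumes both output files from the steps above sit in the current directory:

```bash
llama-server -m LightOnOCR-2-1B-Q8_0.gguf \
  --mmproj mmproj-LightOnOCR-2-1B-Q8_0.gguf \
  -c 8192 --temp 0.2 --top-k 0 --top-p 0.9
```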
### Notes

- Q8_0 offers a good quality/size balance: roughly half the size of the 16-bit original at near-lossless quality
- Requires the latest llama.cpp from the main branch
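As a sanity check on the sizes above: Q8_0 stores weights in blocks of 32 int8 values plus one fp16 scale, i.e. 34 bytes per 32 weights, or 8.5 bits per weight, so the language model file should come out around

$$596 \times 10^{6} \ \text{weights} \times \tfrac{8.5}{8} \ \text{bytes/weight} \approx 0.63 \ \text{GB},$$

in line with the listed 610 MB (the exact figure depends on MB vs MiB reporting, metadata, and any non-Q8_0 tensors).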
## Details

- Total: 1.01B parameters (vision: 403M + language: 596M + projector: 6M)
- Quantization: Q8_0 (8-bit)
- Tested on an M3 Mac: 413 tokens/sec prompt processing, 114 tokens/sec generation