NuMarkdown-8B-Thinking-AIO-GGUF

NuMarkdown-8B-Thinking from NuMind is an 8B-parameter, reasoning-powered OCR vision-language model fine-tuned from Qwen2.5-VL-7B. It was trained with supervised fine-tuning (SFT) on synthetic documents, followed by reinforcement learning (RL) with GRPO using layout-aware rewards. The model converts complex PDFs, scanned documents, and spreadsheets into clean, structured Markdown optimized for RAG workflows and knowledge bases: rather than merely extracting text, it interprets layout, formatting, multi-column reading order, merged and nested tables, mixed visual elements, and degraded scans.

Before producing parsing-ready Markdown, the model generates intermediate "thinking tokens" (20–500% of the final output length) to reason about document structure. It outperforms GPT-4o, OCRFlux, and other specialized systems on TrueSkill-based OCR-to-Markdown benchmarks while keeping its reasoning steps auditable for enterprise, legal, and archival use, and is released under the MIT License.

It can be deployed via Hugging Face Transformers with the legacy processor (use_fast=False) or as quantized GGUF files for CPU/GPU inference. The model excels at preserving spatial relationships and formatting fidelity where traditional OCR fails, making it well suited to document digitization pipelines that require little or no post-processing.
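Because the model emits its reasoning before the final answer, downstream pipelines typically strip the thinking block and keep only the Markdown. A minimal post-processing sketch, assuming the reasoning is wrapped in `<think>...</think>` and the final output in `<answer>...</answer>` tags (treat the exact tag names as an assumption; check the model's chat template and adjust if they differ):

```python
import re

def extract_markdown(raw_output: str) -> str:
    """Strip the model's reasoning and return only the final Markdown.

    Assumes reasoning is delimited by <think>...</think> and the final
    Markdown by <answer>...</answer> (hypothetical tag names; adapt to
    the actual chat template).
    """
    # Drop the reasoning block entirely.
    without_thinking = re.sub(r"<think>.*?</think>", "", raw_output, flags=re.DOTALL)
    # Prefer an explicit answer block if present; otherwise keep the rest.
    match = re.search(r"<answer>(.*?)</answer>", without_thinking, flags=re.DOTALL)
    return (match.group(1) if match else without_thinking).strip()

raw = ("<think>Two columns; read the left column first.</think>"
       "<answer># Invoice\n\n| Item | Qty |\n|---|---|\n| Pen | 2 |</answer>")
print(extract_markdown(raw))
```

Keeping the raw output (including the thinking block) in logs, and only stripping it at the presentation layer, preserves the auditable reasoning trail mentioned above.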

Quants Usage

(Sorted by size, not necessarily by quality. IQ quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Downloads last month: 146

GGUF details:
- Model size: 8B params
- Architecture: qwen2vl

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
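As a rough guide when choosing among these quantization levels, file size scales with bits per weight. A back-of-the-envelope estimate for an 8B-parameter model follows; real GGUF files are typically somewhat larger, since some tensors (e.g. embeddings) are kept at higher precision, and IQ quants use fractional effective bits per weight:

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Lower-bound file-size estimate: parameters x bits, in gigabytes.

    Ignores mixed-precision tensors and file metadata, so actual GGUF
    files are usually a bit larger.
    """
    return n_params * bits_per_weight / 8 / 1e9

# Estimate sizes for the quant levels listed above (8B parameters).
for bits in (2, 3, 4, 5, 6, 8, 16):
    print(f"{bits}-bit: ~{approx_gguf_size_gb(8e9, bits):.1f} GB")
```

For example, the 4-bit quant works out to roughly 4 GB of weights, while the 16-bit file is about 16 GB, which is why lower-bit quants are the usual choice for CPU or modest-VRAM inference.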


Model tree for prithivMLmods/NuMarkdown-8B-Thinking-AIO-GGUF

This model is one of 6 quantized versions derived from the base NuMarkdown-8B-Thinking model.