davanstrien (HF Staff) committed
Commit ddec3fc · verified · 1 parent: 7f61dee

Upload README.md with huggingface_hub

Files changed (1): README.md (+132 −19)

README.md CHANGED
# OCR UV Scripts

> Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.

Ready-to-run OCR scripts that work with `uv run` and HuggingFace Jobs - no setup required!

## 🚀 Quick Start with HuggingFace Jobs

[...]

## 📋 Available Scripts
 
### PaddleOCR-VL (`paddleocr-vl.py`) 🎯 Smallest model with task-specific modes!

Ultra-compact OCR using [PaddlePaddle/PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL) with only 0.9B parameters:

- 🎯 **Smallest model** - Only 0.9B parameters (even smaller than LightOnOCR!)
- 📝 **OCR mode** - General text extraction to markdown
- 📊 **Table mode** - HTML table recognition and extraction
- 📐 **Formula mode** - LaTeX mathematical notation
- 📈 **Chart mode** - Structured chart and diagram analysis
- 🌍 **Multilingual** - Support for multiple languages
- ⚡ **Fast initialization** - Tiny model size for quick startup
- 🔧 **ERNIE-4.5 based** - Different architecture from Qwen models

**Task Modes:**

- `ocr`: General text extraction (default)
- `table`: Table extraction to HTML
- `formula`: Mathematical formula to LaTeX
- `chart`: Chart and diagram analysis

**Quick start:**

```bash
# Basic OCR mode
hf jobs uv run --flavor l4x1 \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
  your-input-dataset your-output-dataset \
  --max-samples 100

# Table extraction
hf jobs uv run --flavor l4x1 \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
  documents tables-extracted \
  --task-mode table \
  --batch-size 32
```
### LightOnOCR (`lighton-ocr.py`) ⚡ Good one to test first since it's small and fast!

Fast and compact OCR using [lightonai/LightOnOCR-1B-1025](https://huggingface.co/lightonai/LightOnOCR-1B-1025):

[...]

- 🚀 **Production-ready**: 76.1% benchmark score, used in production

**Vocabulary sizes:**

- `151k`: Full vocabulary, all languages (default)
- `32k`: European languages, ~12% faster decoding
- `16k`: European languages, ~12% faster decoding

**Quick start:**

```bash
# Test on 100 samples with English text (32k vocab is fastest for European languages)
hf jobs uv run --flavor l4x1 \
  ...
  --temperature 0.0
```
 
### LightOnOCR-2 (`lighton-ocr2.py`) ⚡ Fastest OCR model!

Next-generation fast OCR using [lightonai/LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B) with RLVR training:

- ⚡ **7× faster than v1**: 42.8 pages/sec on H100 (vs 5.71 for v1)
- 🎯 **Higher accuracy**: 83.2% on OlmOCR-Bench (+7.1% vs v1)
- 🧠 **RLVR trained**: Eliminates repetition loops and formatting errors
- 📚 **Better dataset**: 2.5× larger training data with cleaner annotations
- 🌍 **Multilingual**: Optimized for European languages
- 📝 **LaTeX formulas**: Mathematical notation support
- 📊 **Table extraction**: Markdown table format
- 💪 **Production-ready**: Outperforms models 9× larger

**Quick start:**

```bash
# Test on 100 samples
hf jobs uv run --flavor a100-large \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
  your-input-dataset your-output-dataset \
  --batch-size 32 \
  --max-samples 100

# Full production run
hf jobs uv run --flavor a100-large \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
  your-input-dataset your-output-dataset \
  --batch-size 32
```
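The quoted throughput translates into short wall-clock times even for sizeable corpora. A back-of-envelope sketch (the 10,000-page corpus is a made-up example; real runs add model-load and upload overhead):

```shell
# ETA for a hypothetical 10,000-page dataset at the quoted 42.8 pages/sec (H100)
awk 'BEGIN { printf "%.0f seconds (~%.1f min)\n", 10000 / 42.8, 10000 / 42.8 / 60 }'
```

That is roughly four minutes of pure decode time before job startup overhead.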
### DeepSeek-OCR (`deepseek-ocr-vllm.py`)

Advanced document OCR using [deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) with visual-text compression:

[...]

- ⚡ **Fast batch processing** - vLLM acceleration

**Resolution Modes:**

- `tiny` (512×512): Fast, 64 vision tokens
- `small` (640×640): Balanced, 100 vision tokens
- `base` (1024×1024): High quality, 256 vision tokens
- `gundam` (dynamic): Adaptive multi-tile (default)

**Prompt Modes:**

- `document`: Convert to markdown with grounding (default)
- `image`: OCR any image with grounding
- `free`: Fast OCR without layout
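If you're unsure which fixed resolution mode to try before falling back to the default `gundam`, one rough rule of thumb is to match the mode to your page width. This is purely an illustrative heuristic, not logic taken from the script:

```shell
# Illustrative only: map an approximate page width in pixels to a resolution mode
pick_mode() {
  if [ "$1" -le 512 ]; then echo tiny
  elif [ "$1" -le 640 ]; then echo small
  elif [ "$1" -le 1024 ]; then echo base
  else echo gundam    # dynamic multi-tile handles anything larger
  fi
}

pick_mode 600     # prints: small
pick_mode 2400    # prints: gundam
```

Pass the chosen value via `--resolution-mode`; when in doubt, the `gundam` default adapts per page.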
 
[...]

- 🧩 **YAML metadata** - Structured front matter (language, rotation, content type)
- 🚀 **Based on Qwen2.5-VL-7B** - Fine-tuned with reinforcement learning

## 🆕 New Features

### Multi-Model Comparison Support

[...]

### Automatic Dataset Cards

Every OCR run now generates comprehensive dataset documentation including:

- Model configuration and parameters
- Processing statistics
- Column descriptions
 
[...]

No GPU? No problem! Run on HF infrastructure:

```bash
# PaddleOCR-VL - Smallest model (0.9B) with task modes
hf jobs uv run --flavor l4x1 \
  --secrets HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
  your-input-dataset your-output-dataset \
  --task-mode ocr \
  --max-samples 100

# PaddleOCR-VL - Extract tables from documents
hf jobs uv run --flavor l4x1 \
  --secrets HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
  documents tables-dataset \
  --task-mode table

# PaddleOCR-VL - Formula recognition
hf jobs uv run --flavor l4x1 \
  --secrets HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
  scientific-papers formulas-extracted \
  --task-mode formula \
  --batch-size 32

# DeepSeek-OCR - Real-world example (National Library of Scotland handbooks)
hf jobs uv run --flavor a100-large \
  -s HF_TOKEN \
  ...
```
 
[...]

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
  input-dataset output-dataset

# PaddleOCR-VL for task-specific OCR (smallest model!)
uv run paddleocr-vl.py documents extracted --task-mode ocr
uv run paddleocr-vl.py papers tables --task-mode table        # Extract tables
uv run paddleocr-vl.py textbooks formulas --task-mode formula # LaTeX formulas

# RolmOCR for fast text extraction
uv run rolm-ocr.py documents extracted-text
uv run rolm-ocr.py images texts --shuffle --max-samples 100   # Random sample
...
```
 
### Common Options (All Scripts)

| Option                     | Default            | Description                       |
| -------------------------- | ------------------ | --------------------------------- |
| `--image-column`           | `image`            | Column containing images          |
| `--batch-size`             | `32`/`16`\*        | Images processed together         |
| `--max-model-len`          | `8192`/`16384`\*\* | Max context length                |
| `--max-tokens`             | `4096`/`8192`\*\*  | Max output tokens                 |
| `--gpu-memory-utilization` | `0.8`              | GPU memory usage (0.0-1.0)        |
| `--split`                  | `train`            | Dataset split to process          |
| `--max-samples`            | None               | Limit samples (for testing)       |
| `--private`                | False              | Make output dataset private       |
| `--shuffle`                | False              | Shuffle dataset before processing |
| `--seed`                   | `42`               | Random seed for shuffling         |

\*RolmOCR and DoTS use batch size 16
\*\*RolmOCR uses 16384/8192
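`--gpu-memory-utilization` is the fraction of total VRAM handed to the vLLM engine. A quick sketch of what the default means in practice (the 24 GB figure assumes an L4 card; adjust for your flavor):

```shell
# VRAM vLLM will target on an assumed 24 GB L4 at the default 0.8 utilization
awk -v gb=24 -v util=0.8 'BEGIN { printf "%.1f GB\n", gb * util }'
```

Lower the value if the engine hits out-of-memory errors at startup; the headroom also has to fit activations for your chosen `--batch-size` and `--max-model-len`.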
### Script-Specific Options

**PaddleOCR-VL**:

- `--task-mode`: Task type - `ocr` (default), `table`, `formula`, or `chart`
- `--no-smart-resize`: Disable adaptive resizing (use original image size)
- `--output-column`: Override default column name (default: `paddleocr_[task_mode]`)
- Ultra-compact 0.9B model - fastest initialization!
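Since the default output column follows the `paddleocr_[task_mode]` pattern above, the four task modes map to predictable column names:

```shell
# Default output-column names implied by the paddleocr_[task_mode] pattern
for mode in ocr table formula chart; do
  echo "paddleocr_${mode}"
done
```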
**DeepSeek-OCR**:

- `--resolution-mode`: Quality level - `tiny`, `small`, `base`, `large`, or `gundam` (default)
- `--prompt-mode`: Task type - `document` (default), `image`, `free`, `figure`, or `describe`
- `--prompt`: Custom OCR prompt (overrides prompt-mode)
- ⚠️ **Important for HF Jobs**: Add `-e UV_TORCH_BACKEND=auto` for proper PyTorch installation
**RolmOCR**:

- Output column is auto-generated from model name (e.g., `rolmocr_text`)
- Use `--output-column` to override the default name
 
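The auto-generated name is essentially the lower-cased model basename plus a `_text` suffix. A sketch of that derivation (the `reducto/RolmOCR` model id and the exact rule are assumptions for illustration; the script's own logic is authoritative):

```shell
# Illustrative derivation of the default column name (e.g., rolmocr_text)
model_id="reducto/RolmOCR"   # assumed model id
base=$(basename "$model_id" | tr '[:upper:]' '[:lower:]')
echo "${base}_text"          # prints: rolmocr_text
```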
**DoTS.ocr**:

- `--prompt-mode`: Choose `ocr` (default), `layout-all`, or `layout-only`
- `--custom-prompt`: Override with custom prompt text
- `--output-column`: Output column name (default: `markdown`)