---
viewer: false
tags: [uv-script, ocr, vision-language-model, document-processing, hf-jobs]
---

# OCR UV Scripts

> Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.

Ready-to-run OCR scripts that work with `uv run` and HuggingFace Jobs - no setup required!

## 🚀 Quick Start with HuggingFace Jobs

Run OCR on any dataset without needing your own GPU:

```bash
# Quick test with 10 samples
hf jobs uv run --flavor l4x1 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    your-input-dataset your-output-dataset \
    --max-samples 10
```

That's it! The script will:

- ✅ Process the first 10 images from your dataset
- ✅ Add OCR results as a new `markdown` column
- ✅ Push the results to a new dataset
- 📊 View results at: `https://huggingface.co/datasets/[your-output-dataset]` (or load them in Python as sketched below)
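
Once the job finishes, you can pull the results straight into Python. A minimal sketch, assuming the default `markdown` output column (the repo id is a placeholder for your output dataset):

```python
from datasets import load_dataset

# Load the dataset produced by the OCR job (placeholder repo id)
ds = load_dataset("your-username/your-output-dataset", split="train")

print(ds.column_names)          # original columns plus the new "markdown" column
print(ds[0]["markdown"][:500])  # preview the first OCR result
```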

## 📋 Available Scripts

### PaddleOCR-VL-1.5 (`paddleocr-vl-1.5.py`) 🏆 SOTA with 6 task modes!

Ultra-compact SOTA OCR using [PaddlePaddle/PaddleOCR-VL-1.5](https://huggingface.co/PaddlePaddle/PaddleOCR-VL-1.5) with 94.5% accuracy:

- 🏆 **SOTA Performance** - 94.5% on OmniDocBench v1.5
- 🧩 **Ultra-compact** - Only 0.9B parameters
- 📝 **OCR mode** - General text extraction to markdown
- 📊 **Table mode** - HTML table recognition
- 📐 **Formula mode** - LaTeX mathematical notation
- 📈 **Chart mode** - Chart and diagram analysis
- 🔍 **Spotting mode** - Text spotting with localization (higher resolution)
- 🔖 **Seal mode** - Seal and stamp recognition
- 🌍 **Multilingual** - Support for multiple languages

**Task Modes:**

- `ocr`: General text extraction (default)
- `table`: Table extraction to HTML
- `formula`: Mathematical formula to LaTeX
- `chart`: Chart and diagram analysis
- `spotting`: Text spotting with localization
- `seal`: Seal and stamp recognition

**Quick start:**

```bash
# Basic OCR mode
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
    your-input-dataset your-output-dataset \
    --max-samples 100

# Table extraction
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
    documents tables-extracted \
    --task-mode table

# Seal recognition
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
    documents seals-extracted \
    --task-mode seal
```

### PaddleOCR-VL (`paddleocr-vl.py`) 🎯 Smallest model with task-specific modes!

Ultra-compact OCR using [PaddlePaddle/PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL) with only 0.9B parameters:

- 🎯 **Smallest model** - Only 0.9B parameters (even smaller than LightOnOCR!)
- 📝 **OCR mode** - General text extraction to markdown
- 📊 **Table mode** - HTML table recognition and extraction
- 📐 **Formula mode** - LaTeX mathematical notation
- 📈 **Chart mode** - Structured chart and diagram analysis
- 🌍 **Multilingual** - Support for multiple languages
- ⚡ **Fast initialization** - Tiny model size for quick startup
- 🔧 **ERNIE-4.5 based** - Different architecture from Qwen models

**Task Modes:**

- `ocr`: General text extraction (default)
- `table`: Table extraction to HTML
- `formula`: Mathematical formula to LaTeX
- `chart`: Chart and diagram analysis

**Quick start:**

```bash
# Basic OCR mode
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
    your-input-dataset your-output-dataset \
    --max-samples 100

# Table extraction
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
    documents tables-extracted \
    --task-mode table \
    --batch-size 32
```

### GLM-OCR (`glm-ocr.py`) 🏆 SOTA on OmniDocBench V1.5!

Compact high-performance OCR using [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) with 0.9B parameters:

- 🏆 **94.62% on OmniDocBench V1.5** - #1 overall ranking
- 🧠 **Multi-Token Prediction** - MTP loss + stable full-task RL for quality
- 📝 **Text recognition** - Clean markdown output
- 📐 **Formula recognition** - LaTeX mathematical notation
- 📊 **Table recognition** - Structured table extraction
- 🌍 **Multilingual** - zh, en, fr, es, ru, de, ja, ko
- ⚡ **Compact** - Only 0.9B parameters, MIT licensed
- 🔧 **CogViT + GLM** - Visual encoder with efficient token downsampling

**Task Modes:**

- `ocr`: Text recognition (default)
- `formula`: LaTeX formula recognition
- `table`: Table extraction

**Quick start:**

```bash
# Basic OCR
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
    your-input-dataset your-output-dataset \
    --max-samples 100

# Formula recognition
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
    scientific-papers formulas-extracted \
    --task formula

# Table extraction
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
    documents tables-extracted \
    --task table
```

### LightOnOCR (`lighton-ocr.py`) ⚡ Good one to test first since it's small and fast!

Fast and compact OCR using [lightonai/LightOnOCR-1B-1025](https://huggingface.co/lightonai/LightOnOCR-1B-1025):

- ⚡ **Fast**: 5.71 pages/sec on H100, ~6.25 images/sec on A100 with batch_size=4096
- 🎯 **Compact**: Only 1B parameters - quick to download and initialize
- 🌍 **Multilingual**: 3 vocabulary sizes for different use cases
- 📐 **LaTeX formulas**: Mathematical notation in LaTeX format
- 📊 **Table extraction**: Markdown table format
- 📝 **Document structure**: Preserves hierarchy and layout
- 🚀 **Production-ready**: 76.1% benchmark score, used in production

**Vocabulary sizes:**

- `151k`: Full vocabulary, all languages (default)
- `32k`: European languages, ~12% faster decoding
- `16k`: European languages, ~12% faster decoding

**Quick start:**

```bash
# Test on 100 samples with English text (32k vocab is fastest for European languages)
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr.py \
    your-input-dataset your-output-dataset \
    --vocab-size 32k \
    --batch-size 32 \
    --max-samples 100

# Full production run on A100 (can handle huge batches!)
hf jobs uv run --flavor a100-large \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr.py \
    your-input-dataset your-output-dataset \
    --vocab-size 32k \
    --batch-size 4096 \
    --temperature 0.0
```

### LightOnOCR-2 (`lighton-ocr2.py`) ⚡ Fastest OCR model!

Next-generation fast OCR using [lightonai/LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B) with RLVR training:

- ⚡ **7× faster than v1**: 42.8 pages/sec on H100 (vs 5.71 for v1)
- 🎯 **Higher accuracy**: 83.2% on OlmOCR-Bench (+7.1% vs v1)
- 🧠 **RLVR trained**: Eliminates repetition loops and formatting errors
- 📚 **Better dataset**: 2.5× larger training data with cleaner annotations
- 🌍 **Multilingual**: Optimized for European languages
- 📐 **LaTeX formulas**: Mathematical notation support
- 📊 **Table extraction**: Markdown table format
- 💪 **Production-ready**: Outperforms models 9× larger

**Quick start:**

```bash
# Test on 100 samples
hf jobs uv run --flavor a100-large \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
    your-input-dataset your-output-dataset \
    --batch-size 32 \
    --max-samples 100

# Full production run
hf jobs uv run --flavor a100-large \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
    your-input-dataset your-output-dataset \
    --batch-size 32
```

### DeepSeek-OCR (`deepseek-ocr-vllm.py`)

Advanced document OCR using [deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) with visual-text compression:

- 📐 **LaTeX equations** - Mathematical formulas in LaTeX format
- 📊 **Tables** - Extracted as HTML/markdown
- 📝 **Document structure** - Headers, lists, formatting preserved
- 🖼️ **Image grounding** - Spatial layout with bounding boxes
- 🔍 **Complex layouts** - Multi-column and hierarchical structures
- 🌍 **Multilingual** - Multiple language support
- 🎚️ **Resolution modes** - 5 presets for speed/quality trade-offs
- 💬 **Prompt modes** - 5 presets for different OCR tasks
- ⚡ **Fast batch processing** - vLLM acceleration

**Resolution Modes:**

- `tiny` (512×512): Fast, 64 vision tokens
- `small` (640×640): Balanced, 100 vision tokens
- `base` (1024×1024): High quality, 256 vision tokens
- `large` (1280×1280): Maximum quality, 400 vision tokens
- `gundam` (dynamic): Adaptive multi-tile (default)

**Prompt Modes:**

- `document`: Convert to markdown with grounding (default)
- `image`: OCR any image with grounding
- `free`: Fast OCR without layout
- `figure`: Parse figures from documents
- `describe`: Detailed image descriptions

### RolmOCR (`rolm-ocr.py`)

Fast general-purpose OCR using [reducto/RolmOCR](https://huggingface.co/reducto/RolmOCR) based on Qwen2.5-VL-7B:

- 🚀 **Fast extraction** - Optimized for speed and efficiency
- 📄 **Plain text output** - Clean, natural text representation
- 💪 **General-purpose** - Works well on various document types
- 🔥 **Large context** - Handles up to 16K tokens
- ⚡ **Batch optimized** - Efficient processing with vLLM

### Nanonets OCR (`nanonets-ocr.py`)

State-of-the-art document OCR using [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) that handles:

- 📐 **LaTeX equations** - Mathematical formulas preserved
- 📊 **Tables** - Extracted as HTML format
- 📝 **Document structure** - Headers, lists, formatting maintained
- 🖼️ **Images** - Captions and descriptions included
- ☑️ **Forms** - Checkboxes rendered as ☐/☑

### Nanonets OCR2 (`nanonets-ocr2.py`)

Next-generation Nanonets OCR using [nanonets/Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-3B) with improved accuracy:

- 🎯 **Enhanced quality** - 3.75B parameters for superior OCR accuracy
- 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
- 📊 **Advanced tables** - Improved HTML table extraction
- 📝 **Document structure** - Headers, lists, formatting maintained
- 🖼️ **Smart image captions** - Intelligent descriptions and captions
- ☑️ **Forms** - Checkboxes rendered as ☐/☑
- 🌍 **Multilingual** - Enhanced language support
- 🔧 **Based on Qwen2.5-VL** - Built on state-of-the-art vision-language model

### SmolDocling (`smoldocling-ocr.py`)

Ultra-compact document understanding using [ds4sd/SmolDocling-256M-preview](https://huggingface.co/ds4sd/SmolDocling-256M-preview) with only 256M parameters:

- 🏷️ **DocTags format** - Efficient XML-like representation
- 💻 **Code blocks** - Preserves indentation and syntax
- 🔢 **Formulas** - Mathematical expressions with layout
- 📊 **Tables & charts** - Structured data extraction
- 📐 **Layout preservation** - Bounding boxes and spatial info
- ⚡ **Ultra-fast** - Tiny model size for quick inference

### NuMarkdown (`numarkdown-ocr.py`)

Advanced reasoning-based OCR using [numind/NuMarkdown-8B-Thinking](https://huggingface.co/numind/NuMarkdown-8B-Thinking) that analyzes documents before converting to markdown:

- 🧠 **Reasoning Process** - Thinks through document layout before generation
- 📊 **Complex Tables** - Superior table extraction and formatting
- 📐 **Mathematical Formulas** - Accurate LaTeX/math notation preservation
- 🔍 **Multi-column Layouts** - Handles complex document structures
- ✨ **Thinking Traces** - Optional inclusion of reasoning process with `--include-thinking`

### DoTS.ocr (`dots-ocr.py`)

Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) with only 1.7B parameters:

- 🌍 **100+ Languages** - Extensive multilingual support
- 📝 **Simple OCR** - Clean text extraction (default mode)
- 📊 **Layout Analysis** - Optional structured output with bboxes and categories
- 📐 **Formula recognition** - LaTeX format support
- 🎯 **Compact** - Only 1.7B parameters, efficient on smaller GPUs
- 🔀 **Flexible prompts** - Switch between OCR, layout-all, and layout-only modes

### olmOCR2 (`olmocr2-vllm.py`)

High-quality document OCR using [allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) optimized with GRPO reinforcement learning:

- 🎯 **High accuracy** - 82.4 ± 1.1 on olmOCR-Bench (84.9% on math)
- 📐 **LaTeX equations** - Mathematical formulas in LaTeX format
- 📊 **Table extraction** - Structured table recognition
- 📑 **Multi-column layouts** - Complex document structures
- 🗜️ **FP8 quantized** - Efficient 8B model for faster inference
- 📜 **Degraded scans** - Works well on old/historical documents
- 📝 **Long text extraction** - Headers, footers, and full document content
- 🧩 **YAML metadata** - Structured front matter (language, rotation, content type)
- 🚀 **Based on Qwen2.5-VL-7B** - Fine-tuned with reinforcement learning

## 🆕 New Features

### Multi-Model Comparison Support

All scripts now include `inference_info` tracking for comparing multiple OCR models:

```bash
# First model
uv run rolm-ocr.py my-dataset my-dataset --max-samples 100

# Second model (appends to same dataset)
uv run nanonets-ocr.py my-dataset my-dataset --max-samples 100

# View all models used
python -c "import json; from datasets import load_dataset; ds = load_dataset('my-dataset', split='train'); print(json.loads(ds[0]['inference_info']))"
```
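
To eyeball the two transcriptions side by side, a small sketch along these lines works, assuming the default output columns (`markdown` from `nanonets-ocr.py`, and an auto-generated column such as `rolmocr_text` from `rolm-ocr.py`; adjust if you passed `--output-column`):

```python
from datasets import load_dataset

# Both runs above wrote their columns to the same dataset
ds = load_dataset("my-dataset", split="train")

row = ds[0]
print("Nanonets OCR:\n", row["markdown"][:300])  # column added by nanonets-ocr.py
print("RolmOCR:\n", row["rolmocr_text"][:300])   # default column name from rolm-ocr.py
```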

### Random Sampling

Get representative samples with the new `--shuffle` flag:

```bash
# Random 50 samples instead of first 50
uv run rolm-ocr.py ordered-dataset output --max-samples 50 --shuffle

# Reproducible random sampling
uv run nanonets-ocr.py dataset output --max-samples 100 --shuffle --seed 42
```

### Automatic Dataset Cards

Every OCR run now generates comprehensive dataset documentation including:

- Model configuration and parameters
- Processing statistics
- Column descriptions
- Reproduction instructions
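
The card is written to the output repo's README; you can also fetch it programmatically, for example with `huggingface_hub` (a small sketch, repo id is a placeholder):

```python
from huggingface_hub import DatasetCard

# Read the auto-generated card of an output dataset
card = DatasetCard.load("your-username/your-output-dataset")
print(card.text)  # model settings, processing stats, and reproduction instructions
```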

## 💻 Usage Examples

### Run on HuggingFace Jobs (Recommended)

No GPU? No problem! Run on HF infrastructure:

```bash
# PaddleOCR-VL - Smallest model (0.9B) with task modes
hf jobs uv run --flavor l4x1 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
    your-input-dataset your-output-dataset \
    --task-mode ocr \
    --max-samples 100

# PaddleOCR-VL - Extract tables from documents
hf jobs uv run --flavor l4x1 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
    documents tables-dataset \
    --task-mode table

# PaddleOCR-VL - Formula recognition
hf jobs uv run --flavor l4x1 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl.py \
    scientific-papers formulas-extracted \
    --task-mode formula \
    --batch-size 32

# GLM-OCR - SOTA 0.9B model (94.62% OmniDocBench)
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
    your-input-dataset your-output-dataset \
    --batch-size 16 \
    --max-samples 100

# DeepSeek-OCR - Real-world example (National Library of Scotland handbooks)
hf jobs uv run --flavor a100-large \
    -s HF_TOKEN \
    -e UV_TORCH_BACKEND=auto \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
    NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \
    davanstrien/handbooks-deep-ocr \
    --max-samples 100 \
    --shuffle \
    --resolution-mode large

# DeepSeek-OCR - Fast testing with tiny mode
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    -e UV_TORCH_BACKEND=auto \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
    your-input-dataset your-output-dataset \
    --max-samples 10 \
    --resolution-mode tiny

# DeepSeek-OCR - Parse figures from scientific papers
hf jobs uv run --flavor a100-large \
    -s HF_TOKEN \
    -e UV_TORCH_BACKEND=auto \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
    scientific-papers figures-extracted \
    --prompt-mode figure

# Basic OCR job with Nanonets
hf jobs uv run --flavor l4x1 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    your-input-dataset your-output-dataset

# DoTS.ocr - Multilingual OCR with compact 1.7B model
hf jobs uv run --flavor a100-large \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \
    davanstrien/ufo-ColPali \
    your-username/ufo-ocr \
    --batch-size 256 \
    --max-samples 1000 \
    --shuffle

# Real example with UFO dataset 🛸
hf jobs uv run \
    --flavor a10g-large \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    davanstrien/ufo-ColPali \
    your-username/ufo-ocr \
    --image-column image \
    --max-model-len 16384 \
    --batch-size 128

# Nanonets OCR2 - Next-gen quality with 3B model
hf jobs uv run \
    --flavor l4x1 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr2.py \
    your-input-dataset \
    your-output-dataset \
    --batch-size 16

# NuMarkdown with reasoning traces for complex documents
hf jobs uv run \
    --flavor l4x4 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/numarkdown-ocr.py \
    your-input-dataset your-output-dataset \
    --max-samples 50 \
    --include-thinking \
    --shuffle

# olmOCR2 - High-quality OCR with YAML metadata
hf jobs uv run \
    --flavor a100-large \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/olmocr2-vllm.py \
    your-input-dataset your-output-dataset \
    --batch-size 16 \
    --max-samples 100

# Private dataset with custom settings
hf jobs uv run --flavor l40sx1 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    private-input private-output \
    --private \
    --batch-size 32
```

### Python API

```python
from huggingface_hub import run_uv_job

job = run_uv_job(
    "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
    args=["input-dataset", "output-dataset", "--batch-size", "16"],
    flavor="l4x1"
)
```

### Run Locally (Requires GPU)

```bash
# Clone and run
git clone https://huggingface.co/datasets/uv-scripts/ocr
cd ocr
uv run nanonets-ocr.py input-dataset output-dataset

# Or run directly from URL
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
    input-dataset output-dataset

# PaddleOCR-VL for task-specific OCR (smallest model!)
uv run paddleocr-vl.py documents extracted --task-mode ocr
uv run paddleocr-vl.py papers tables --task-mode table  # Extract tables
uv run paddleocr-vl.py textbooks formulas --task-mode formula  # LaTeX formulas

# RolmOCR for fast text extraction
uv run rolm-ocr.py documents extracted-text
uv run rolm-ocr.py images texts --shuffle --max-samples 100  # Random sample

# Nanonets OCR2 for highest quality
uv run nanonets-ocr2.py documents ocr-results

```

## 📁 Works With

Any HuggingFace dataset containing images - documents, forms, receipts, books, handwriting.
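
If your pages are still in a local folder, the `imagefolder` loader is a quick way to get them into that shape - it produces the `image` column the scripts expect by default (a minimal sketch; paths and repo id are placeholders):

```python
from datasets import load_dataset

# Build a dataset from a folder of page images and push it to the Hub
ds = load_dataset("imagefolder", data_dir="./scans", split="train")
ds.push_to_hub("your-username/my-documents")  # pass private=True for sensitive material
```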

## 🎛️ Configuration Options

### Common Options (All Scripts)

| Option                     | Default            | Description                       |
| -------------------------- | ------------------ | --------------------------------- |
| `--image-column`           | `image`            | Column containing images          |
| `--batch-size`             | `32`/`16`\*        | Images processed together         |
| `--max-model-len`          | `8192`/`16384`\*\* | Max context length                |
| `--max-tokens`             | `4096`/`8192`\*\*  | Max output tokens                 |
| `--gpu-memory-utilization` | `0.8`              | GPU memory usage (0.0-1.0)        |
| `--split`                  | `train`            | Dataset split to process          |
| `--max-samples`            | None               | Limit samples (for testing)       |
| `--private`                | False              | Make output dataset private       |
| `--shuffle`                | False              | Shuffle dataset before processing |
| `--seed`                   | `42`               | Random seed for shuffling         |

\*RolmOCR and DoTS.ocr default to a batch size of 16
\*\*RolmOCR defaults to `--max-model-len 16384` and `--max-tokens 8192`

### Script-Specific Options

**PaddleOCR-VL-1.5**:

- `--task-mode`: Task type - `ocr` (default), `table`, `formula`, `chart`, `spotting`, or `seal`
- `--output-column`: Override default column name (default: `paddleocr_1.5_[task_mode]`)
- SOTA 94.5% accuracy on OmniDocBench v1.5
- Uses transformers backend (single image processing for stability)

**PaddleOCR-VL**:

- `--task-mode`: Task type - `ocr` (default), `table`, `formula`, or `chart`
- `--no-smart-resize`: Disable adaptive resizing (use original image size)
- `--output-column`: Override default column name (default: `paddleocr_[task_mode]`)
- Ultra-compact 0.9B model - fastest initialization!

**GLM-OCR**:

- `--task`: Task type - `ocr` (default), `formula`, or `table`
- `--repetition-penalty`: Repetition penalty (default: 1.1, from official SDK)
- Near-greedy sampling by default (temperature=0.01, top_p=0.00001) matching official SDK
- Requires vLLM nightly + transformers>=5.1.0 (handled automatically)

**DeepSeek-OCR**:

- `--resolution-mode`: Quality level - `tiny`, `small`, `base`, `large`, or `gundam` (default)
- `--prompt-mode`: Task type - `document` (default), `image`, `free`, `figure`, or `describe`
- `--prompt`: Custom OCR prompt (overrides prompt-mode)
- `--base-size`, `--image-size`, `--crop-mode`: Override resolution mode manually
- ⚠️ **Important for HF Jobs**: Add `-e UV_TORCH_BACKEND=auto` for proper PyTorch installation

**RolmOCR**:

- Output column is auto-generated from model name (e.g., `rolmocr_text`)
- Use `--output-column` to override the default name

**DoTS.ocr**:

- `--prompt-mode`: Choose `ocr` (default), `layout-all`, or `layout-only`
- `--custom-prompt`: Override with custom prompt text
- `--output-column`: Output column name (default: `markdown`)

💡 **Performance tip**: Increase batch size for faster processing (e.g., `--batch-size 256` on A100)