davanstrien (HF Staff) committed
Commit 9c9016c · 1 Parent(s): 26ce16b

Add FireRed-OCR script (2.1B, Qwen3-VL fine-tune)

- New firered-ocr.py using llm.chat() pattern from dots-ocr.py
- Model: FireRedTeam/FireRed-OCR (Apache 2.0)
- Tested: 10/10 on ufo-ColPali, L4 GPU, ~4.7 min
- Stable vLLM (>=0.15.1), no nightly needed
- README: added to model table and per-model docs
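
The `llm.chat()` pattern borrowed from dots-ocr.py wraps each page image as a base64 data URI inside an OpenAI-style chat message, then hands a whole batch of such conversations to vLLM at once. A minimal, stdlib-only sketch of that message shape (the prompt text and image bytes below are placeholders, not the real FireRed prompt):

```python
import base64


def make_ocr_message(png_bytes: bytes, prompt: str) -> list[dict]:
    """Wrap one page image and the OCR prompt as an OpenAI-style chat message."""
    data_uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode()
    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": data_uri}},
                {"type": "text", "text": prompt},
            ],
        }
    ]


# One conversation per image; llm.chat() accepts the whole list in one call.
batch = [make_ocr_message(b"<png bytes>", "Convert this page to Markdown.")]
```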

Files changed (2)
  1. README.md +22 -1
  2. firered-ocr.py +554 -0
README.md CHANGED
@@ -7,7 +7,7 @@ tags: [uv-script, ocr, vision-language-model, document-processing, hf-jobs]
 
 > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
 
-13 OCR models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
+14 OCR models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
 
 ## 🚀 Quick Start
 
@@ -41,6 +41,7 @@ That's it! The script will:
 | `lighton-ocr2.py` | [LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B) | 1B | vLLM | 7× faster than v1, RLVR trained |
 | `hunyuan-ocr.py` | [HunyuanOCR](https://huggingface.co/tencent/HunyuanOCR) | 1B | vLLM | Lightweight VLM |
 | `dots-ocr.py` | [DoTS.ocr](https://huggingface.co/Tencent/DoTS.ocr) | 1.7B | vLLM | 100+ languages |
+| `firered-ocr.py` | [FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR) | 2.1B | vLLM | Qwen3-VL fine-tune, Apache 2.0 |
 | `nanonets-ocr.py` | [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 2B | vLLM | LaTeX, tables, forms |
 | `dots-ocr-1.5.py` | [DoTS.ocr-1.5](https://huggingface.co/Tencent/DoTS.ocr-1.5) | 3B | vLLM | Updated multilingual model |
 | `nanonets-ocr2.py` | [Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-s) | 3B | vLLM | Next-gen, Qwen2.5-VL base |
@@ -401,6 +402,26 @@ Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/r
 - 🎯 **Compact** - Only 1.7B parameters, efficient on smaller GPUs
 - 🔀 **Flexible prompts** - Switch between OCR, layout-all, and layout-only modes
 
+### FireRed-OCR (`firered-ocr.py`)
+
+Document OCR using [FireRedTeam/FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR), a 2.1B model fine-tuned from Qwen3-VL-2B-Instruct:
+
+- 📝 **Structured Markdown** - Preserves headings, paragraphs, lists
+- 📐 **LaTeX formulas** - Inline and block math support
+- 📊 **HTML tables** - Table extraction with `<table>` tags
+- 🪶 **Lightweight** - 2.1B parameters, runs on an L4 GPU
+- 📜 **Apache 2.0** - Permissive license
+
+**Quick start:**
+
+```bash
+hf jobs uv run --flavor l4x1 \
+  -s HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/firered-ocr.py \
+  your-input-dataset your-output-dataset \
+  --max-samples 100
+```
+
 ### olmOCR2 (`olmocr2-vllm.py`)
 
 High-quality document OCR using [allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) optimized with GRPO reinforcement learning:
firered-ocr.py ADDED
@@ -0,0 +1,554 @@
+# /// script
+# requires-python = ">=3.11"
+# dependencies = [
+#     "datasets",
+#     "huggingface-hub",
+#     "pillow",
+#     "vllm>=0.15.1",
+#     "tqdm",
+#     "toolz",
+#     "torch",
+# ]
+#
+# ///
+
+"""
+Convert document images to markdown using FireRed-OCR with vLLM.
+
+FireRed-OCR is a 2.1B document OCR model fine-tuned from Qwen3-VL-2B-Instruct.
+It converts PDF/document images to structured Markdown with LaTeX formulas and
+HTML tables. Apache 2.0 licensed.
+
+Model: FireRedTeam/FireRed-OCR
+vLLM: Uses stable Qwen3-VL support (>=0.15.1)
+"""
+
+import argparse
+import base64
+import io
+import json
+import logging
+import os
+import sys
+from datetime import datetime
+from typing import Any, Dict, List, Union
+
+import torch
+from datasets import load_dataset
+from huggingface_hub import DatasetCard, login
+from PIL import Image
+from toolz import partition_all
+from tqdm.auto import tqdm
+from vllm import LLM, SamplingParams
+
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+
+# ────────────────────────────────────────────────────────────────
+# FireRed-OCR Prompt (from official conv_for_infer.py)
+# Source: https://github.com/FireRedTeam/FireRed-OCR/blob/main/conv_for_infer.py
+# ────────────────────────────────────────────────────────────────
+
+FIRERED_OCR_PROMPT = """You are an AI assistant specialized in converting PDF images to Markdown format. Please follow these instructions for the conversion:
+
+1. Text Processing:
+   - Accurately recognize all text content in the PDF image without guessing or inferring.
+   - Convert the recognized text into Markdown format.
+   - Maintain the original document structure, including headings, paragraphs, lists, etc.
+
+2. Mathematical Formula Processing:
+   - Convert all mathematical formulas to LaTeX format.
+   - Enclose inline formulas with \\( \\). For example: This is an inline formula \\( E = mc^2 \\)
+   - Enclose block formulas with \\[ \\]. For example: \\[ \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a} \\]
+
+3. Table Processing:
+   - Convert tables to HTML format.
+   - Wrap the entire table with <table> and </table>.
+
+4. Figure Handling:
+   - Ignore figures content in the PDF image. Do not attempt to describe or convert images.
+
+5. Output Format:
+   - Ensure the output Markdown document has a clear structure with appropriate line breaks between elements.
+   - For complex layouts, try to maintain the original document's structure and format as closely as possible.
+
+Please strictly follow these guidelines to ensure accuracy and consistency in the conversion. Your task is to accurately convert the content of the PDF image into Markdown format without adding any extra explanations or comments."""
+
+
+def check_cuda_availability():
+    """Check if CUDA is available and exit if not."""
+    if not torch.cuda.is_available():
+        logger.error("CUDA is not available. This script requires a GPU.")
+        logger.error("Please run on a machine with a CUDA-capable GPU.")
+        sys.exit(1)
+    else:
+        logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
+
+
+def make_ocr_message(
+    image: Union[Image.Image, Dict[str, Any], str],
+    prompt: str = FIRERED_OCR_PROMPT,
+) -> List[Dict]:
+    """Create chat message for OCR processing."""
+    # Convert to PIL Image if needed
+    if isinstance(image, Image.Image):
+        pil_img = image
+    elif isinstance(image, dict) and "bytes" in image:
+        pil_img = Image.open(io.BytesIO(image["bytes"]))
+    elif isinstance(image, str):
+        pil_img = Image.open(image)
+    else:
+        raise ValueError(f"Unsupported image type: {type(image)}")
+
+    # Convert to RGB
+    pil_img = pil_img.convert("RGB")
+
+    # Convert to base64 data URI
+    buf = io.BytesIO()
+    pil_img.save(buf, format="PNG")
+    data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
+
+    # Return message in vLLM format
+    return [
+        {
+            "role": "user",
+            "content": [
+                {"type": "image_url", "image_url": {"url": data_uri}},
+                {"type": "text", "text": prompt},
+            ],
+        }
+    ]
+
+
+def create_dataset_card(
+    source_dataset: str,
+    model: str,
+    num_samples: int,
+    processing_time: str,
+    batch_size: int,
+    max_model_len: int,
+    max_tokens: int,
+    gpu_memory_utilization: float,
+    image_column: str = "image",
+    split: str = "train",
+) -> str:
+    """Create a dataset card documenting the OCR process."""
+    model_name = model.split("/")[-1]
+
+    return f"""---
+tags:
+- ocr
+- document-processing
+- firered-ocr
+- markdown
+- uv-script
+- generated
+---
+
+# Document OCR using {model_name}
+
+This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using FireRed-OCR, a 2.1B model fine-tuned from Qwen3-VL-2B-Instruct.
+
+## Processing Details
+
+- **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
+- **Model**: [{model}](https://huggingface.co/{model})
+- **Number of Samples**: {num_samples:,}
+- **Processing Time**: {processing_time}
+- **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
+
+### Configuration
+
+- **Image Column**: `{image_column}`
+- **Output Column**: `markdown`
+- **Dataset Split**: `{split}`
+- **Batch Size**: {batch_size}
+- **Max Model Length**: {max_model_len:,} tokens
+- **Max Output Tokens**: {max_tokens:,}
+- **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
+
+## Model Information
+
+FireRed-OCR is a document OCR model that converts images to structured Markdown:
+- Fine-tuned from Qwen3-VL-2B-Instruct (2.1B parameters)
+- LaTeX formula support (inline and block)
+- HTML table extraction
+- Layout-aware text extraction
+- Apache 2.0 licensed
+
+## Dataset Structure
+
+The dataset contains all original columns plus:
+- `markdown`: The extracted text in markdown format
+- `inference_info`: JSON list tracking all OCR models applied to this dataset
+
+## Usage
+
+```python
+from datasets import load_dataset
+import json
+
+# Load the dataset
+dataset = load_dataset("{{output_dataset_id}}", split="{split}")
+
+# Access the markdown text
+for example in dataset:
+    print(example["markdown"])
+    break
+
+# View all OCR models applied to this dataset
+inference_info = json.loads(dataset[0]["inference_info"])
+for info in inference_info:
+    print(f"Column: {{info['column_name']}} - Model: {{info['model_id']}}")
+```
+
+## Reproduction
+
+This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) FireRed-OCR script:
+
+```bash
+uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/firered-ocr.py \\
+  {source_dataset} \\
+  <output-dataset> \\
+  --image-column {image_column} \\
+  --batch-size {batch_size} \\
+  --max-model-len {max_model_len} \\
+  --max-tokens {max_tokens} \\
+  --gpu-memory-utilization {gpu_memory_utilization}
+```
+
+Generated with [UV Scripts](https://huggingface.co/uv-scripts)
+"""
+
+
+def main(
+    input_dataset: str,
+    output_dataset: str,
+    image_column: str = "image",
+    batch_size: int = 16,
+    model: str = "FireRedTeam/FireRed-OCR",
+    max_model_len: int = 8192,
+    max_tokens: int = 8192,
+    gpu_memory_utilization: float = 0.8,
+    hf_token: str = None,
+    split: str = "train",
+    max_samples: int = None,
+    private: bool = False,
+    shuffle: bool = False,
+    seed: int = 42,
+    output_column: str = "markdown",
+    config: str = None,
+    create_pr: bool = False,
+):
+    """Process images from HF dataset through FireRed-OCR model."""
+
+    # Check CUDA availability first
+    check_cuda_availability()
+
+    # Track processing start time
+    start_time = datetime.now()
+
+    # Login to HF if token provided
+    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
+    if HF_TOKEN:
+        login(token=HF_TOKEN)
+
+    # Load dataset
+    logger.info(f"Loading dataset: {input_dataset}")
+    dataset = load_dataset(input_dataset, split=split)
+
+    # Validate image column
+    if image_column not in dataset.column_names:
+        raise ValueError(
+            f"Column '{image_column}' not found. Available: {dataset.column_names}"
+        )
+
+    # Shuffle if requested
+    if shuffle:
+        logger.info(f"Shuffling dataset with seed {seed}")
+        dataset = dataset.shuffle(seed=seed)
+
+    # Limit samples if requested
+    if max_samples:
+        dataset = dataset.select(range(min(max_samples, len(dataset))))
+        logger.info(f"Limited to {len(dataset)} samples")
+
+    # Initialize vLLM model
+    logger.info(f"Initializing vLLM with model: {model}")
+    logger.info("This may take a few minutes on first run...")
+    llm = LLM(
+        model=model,
+        trust_remote_code=True,
+        max_model_len=max_model_len,
+        gpu_memory_utilization=gpu_memory_utilization,
+        limit_mm_per_prompt={"image": 1},
+    )
+
+    sampling_params = SamplingParams(
+        temperature=0.0,  # Deterministic for OCR
+        max_tokens=max_tokens,
+    )
+
+    logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
+    logger.info(f"Output will be written to column: {output_column}")
+
+    # Process images in batches
+    all_outputs = []
+
+    for batch_indices in tqdm(
+        partition_all(batch_size, range(len(dataset))),
+        total=(len(dataset) + batch_size - 1) // batch_size,
+        desc="FireRed-OCR processing",
+    ):
+        batch_indices = list(batch_indices)
+        batch_images = [dataset[i][image_column] for i in batch_indices]
+
+        try:
+            # Create messages for batch
+            batch_messages = [
+                make_ocr_message(img, FIRERED_OCR_PROMPT) for img in batch_images
+            ]
+
+            # Process with vLLM
+            outputs = llm.chat(batch_messages, sampling_params)
+
+            # Extract outputs
+            for output in outputs:
+                text = output.outputs[0].text.strip()
+                all_outputs.append(text)
+
+        except Exception as e:
+            logger.warning(
+                f"Batch failed ({len(batch_images)} images), retrying individually: {e}"
+            )
+            for img in batch_images:
+                try:
+                    msg = make_ocr_message(img, FIRERED_OCR_PROMPT)
+                    out = llm.chat([msg], sampling_params)
+                    all_outputs.append(out[0].outputs[0].text.strip())
+                except Exception as img_e:
+                    logger.error(f"Image failed: {img_e}")
+                    all_outputs.append("[OCR ERROR]")
+
+    # Calculate processing time
+    processing_duration = datetime.now() - start_time
+    processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
+
+    # Add output column to dataset
+    logger.info(f"Adding '{output_column}' column to dataset")
+    dataset = dataset.add_column(output_column, all_outputs)
+
+    # Handle inference_info tracking (for multi-model comparisons)
+    inference_entry = {
+        "model_id": model,
+        "column_name": output_column,
+        "timestamp": datetime.now().isoformat(),
+    }
+
+    if "inference_info" in dataset.column_names:
+        # Append to existing inference info
+        logger.info("Updating existing inference_info column")
+
+        def update_inference_info(example):
+            try:
+                existing_info = (
+                    json.loads(example["inference_info"])
+                    if example["inference_info"]
+                    else []
+                )
+            except (json.JSONDecodeError, TypeError):
+                existing_info = []
+
+            existing_info.append(inference_entry)
+            return {"inference_info": json.dumps(existing_info)}
+
+        dataset = dataset.map(update_inference_info)
+    else:
+        # Create new inference_info column
+        logger.info("Creating new inference_info column")
+        inference_list = [json.dumps([inference_entry])] * len(dataset)
+        dataset = dataset.add_column("inference_info", inference_list)
+
+    # Push to hub
+    logger.info(f"Pushing to {output_dataset}")
+    dataset.push_to_hub(
+        output_dataset,
+        private=private,
+        token=HF_TOKEN,
+        **({"config_name": config} if config else {}),
+        create_pr=create_pr,
+        commit_message=f"Add {model} OCR results ({len(dataset)} samples)"
+        + (f" [{config}]" if config else ""),
+    )
+
+    # Create and push dataset card (skip for PR-based benchmark runs)
+    if not create_pr:
+        logger.info("Creating dataset card")
+        card_content = create_dataset_card(
+            source_dataset=input_dataset,
+            model=model,
+            num_samples=len(dataset),
+            processing_time=processing_time_str,
+            batch_size=batch_size,
+            max_model_len=max_model_len,
+            max_tokens=max_tokens,
+            gpu_memory_utilization=gpu_memory_utilization,
+            image_column=image_column,
+            split=split,
+        )
+
+        card = DatasetCard(card_content)
+        card.push_to_hub(output_dataset, token=HF_TOKEN)
+
+    logger.info("FireRed-OCR processing complete!")
+    logger.info(
+        f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
+    )
+    logger.info(f"Processing time: {processing_time_str}")
+
+
+if __name__ == "__main__":
+    # Show example usage if no arguments
+    if len(sys.argv) == 1:
+        print("=" * 80)
+        print("FireRed-OCR Document Processing")
+        print("=" * 80)
+        print("\n2.1B document OCR model (Qwen3-VL-2B fine-tune, Apache 2.0)")
+        print("\nFeatures:")
+        print("- Structured Markdown output")
+        print("- LaTeX formula support (inline and block)")
+        print("- HTML table extraction")
+        print("- Layout-aware text extraction")
+        print("\nExample usage:")
+        print("\n1. Basic OCR:")
+        print("   uv run firered-ocr.py input-dataset output-dataset")
+        print("\n2. With custom settings:")
+        print(
+            "   uv run firered-ocr.py docs analyzed-docs --batch-size 20 --max-samples 100"
+        )
+        print("\n3. Running on HF Jobs:")
+        print("   hf jobs uv run --flavor l4x1 \\")
+        print("       -s HF_TOKEN \\")
+        print(
+            "       https://huggingface.co/datasets/uv-scripts/ocr/raw/main/firered-ocr.py \\"
+        )
+        print("       input-dataset output-dataset")
+        print("\n" + "=" * 80)
+        print("\nFor full help, run: uv run firered-ocr.py --help")
+        sys.exit(0)
+
+    parser = argparse.ArgumentParser(
+        description="Document OCR using FireRed-OCR (2.1B, Qwen3-VL fine-tune)",
+        formatter_class=argparse.RawDescriptionHelpFormatter,
+        epilog="""
+Examples:
+  # Basic text OCR
+  uv run firered-ocr.py my-docs analyzed-docs
+
+  # Random sampling for testing
+  uv run firered-ocr.py large-dataset test --max-samples 50 --shuffle
+
+  # Benchmark mode (push as config with PR)
+  uv run firered-ocr.py source-data bench-repo --config firered-ocr --create-pr
+
+  # HF Jobs
+  hf jobs uv run --flavor l4x1 -s HF_TOKEN \\
+    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/firered-ocr.py \\
+    input-dataset output-dataset --max-samples 50
+""",
+    )
+
+    parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
+    parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
+    parser.add_argument(
+        "--image-column",
+        default="image",
+        help="Column containing images (default: image)",
+    )
+    parser.add_argument(
+        "--batch-size",
+        type=int,
+        default=16,
+        help="Batch size for processing (default: 16)",
+    )
+    parser.add_argument(
+        "--model",
+        default="FireRedTeam/FireRed-OCR",
+        help="Model to use (default: FireRedTeam/FireRed-OCR)",
+    )
+    parser.add_argument(
+        "--max-model-len",
+        type=int,
+        default=8192,
+        help="Maximum model context length (default: 8192)",
+    )
+    parser.add_argument(
+        "--max-tokens",
+        type=int,
+        default=8192,
+        help="Maximum tokens to generate (default: 8192)",
+    )
+    parser.add_argument(
+        "--gpu-memory-utilization",
+        type=float,
+        default=0.8,
+        help="GPU memory utilization (default: 0.8)",
+    )
+    parser.add_argument("--hf-token", help="Hugging Face API token")
+    parser.add_argument(
+        "--split", default="train", help="Dataset split to use (default: train)"
+    )
+    parser.add_argument(
+        "--max-samples",
+        type=int,
+        help="Maximum number of samples to process (for testing)",
+    )
+    parser.add_argument(
+        "--private", action="store_true", help="Make output dataset private"
+    )
+    parser.add_argument(
+        "--shuffle", action="store_true", help="Shuffle dataset before processing"
+    )
+    parser.add_argument(
+        "--seed",
+        type=int,
+        default=42,
+        help="Random seed for shuffling (default: 42)",
+    )
+    parser.add_argument(
+        "--output-column",
+        default="markdown",
+        help="Column name for output text (default: markdown)",
+    )
+    parser.add_argument(
+        "--config",
+        help="Config/subset name when pushing to Hub (for benchmarking multiple models in one repo)",
+    )
+    parser.add_argument(
+        "--create-pr",
+        action="store_true",
+        help="Create a pull request instead of pushing directly (for parallel benchmarking)",
+    )
+
+    args = parser.parse_args()
+
+    main(
+        input_dataset=args.input_dataset,
+        output_dataset=args.output_dataset,
+        image_column=args.image_column,
+        batch_size=args.batch_size,
+        model=args.model,
+        max_model_len=args.max_model_len,
+        max_tokens=args.max_tokens,
+        gpu_memory_utilization=args.gpu_memory_utilization,
+        hf_token=args.hf_token,
+        split=args.split,
+        max_samples=args.max_samples,
+        private=args.private,
+        shuffle=args.shuffle,
+        seed=args.seed,
+        output_column=args.output_column,
+        config=args.config,
+        create_pr=args.create_pr,
+    )