davanstrien (HF Staff) and Claude Opus 4.6 committed on
Commit 0bc9b0a · 1 Parent(s): 6c13a40

Update nanonets-ocr.py HF Jobs syntax and add smoke test notes

- Replace deprecated `hfjobs run` with current `hf jobs uv run` syntax
- Add OCR smoke test dataset design notes to CLAUDE.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
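
For reference, the syntax change in the `nanonets-ocr.py` usage text looks roughly like this (dataset names are the script's own placeholders):

```bash
# Deprecated (old usage text, removed in this commit)
hfjobs run \
  --flavor l4x1 \
  --secret HF_TOKEN=... \
  uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
  your-document-dataset \
  your-markdown-output

# Current (new usage text)
hf jobs uv run --flavor l4x1 \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
  your-document-dataset \
  your-markdown-output
```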

Files changed (2)
  1. CLAUDE.md +88 -20
  2. nanonets-ocr.py +16 -15
CLAUDE.md CHANGED
@@ -9,32 +9,27 @@
  - Tested and working on HF Jobs

  ### LightOnOCR-2-1B (`lighton-ocr2.py`)
- ⚠️ **Temporarily Broken** (2026-01-29)

- **Status:** vLLM nightly regression - image processor loading fails

- **What happened:**
- - Script was working with vLLM nightly `v0.15.0rc2.dev73`
- - Nightly updated to `v0.15.0rc2.dev81` and broke
- - Error: `OSError: Can't load image processor for 'lightonai/LightOnOCR-2-1B'`
- - Both nightly and stable vLLM 0.14.x have this issue now

- **Initial test results (before breakage):**
- - 8/10 samples had good OCR output
- - 2/10 samples showed repetition loops (likely due to max_tokens=6144)
- - Changed max_tokens default from 6144 → 4096 (per model card recommendation)

- **Fixes applied:**
- - `max_tokens`: 6144 → 4096 (model card recommends 4096 for arXiv papers)
- - Fixed pyarrow compatibility (>=17.0.0,<18.0.0)
- - Replaced deprecated `huggingface-hub[hf_transfer]` with `hf-xet`
-
- **To verify when vLLM is fixed:**
  ```bash
  hf jobs uv run --flavor a100-large \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
- davanstrien/ufo-ColPali davanstrien/lighton-ocr2-test-v3 \
  --max-samples 10 --shuffle --seed 42
  ```
 
@@ -44,6 +39,42 @@ hf jobs uv run --flavor a100-large \
  - Training: RLVR (Reinforcement Learning with Verifiable Rewards)
  - Performance: 83.2% on OlmOCR-Bench, 42.8 pages/sec on H100

  ## Pending Development

  ### DeepSeek-OCR-2 (Visual Causal Flow Architecture)
@@ -139,7 +170,44 @@ RESOLUTION_MODES = {

  ---

- **Last Updated:** 2026-01-29
  **Watch PRs:**
  - DeepSeek-OCR-2: https://github.com/vllm-project/vllm/pull/33165
- - LightOnOCR-2 regression: Check https://github.com/vllm-project/vllm/issues?q=LightOnOCR
 
  - Tested and working on HF Jobs

  ### LightOnOCR-2-1B (`lighton-ocr2.py`)
+ **Production Ready** (Fixed 2026-01-29)

+ **Status:** Working with vLLM nightly

+ **What was fixed:**
+ - Root cause was NOT vLLM - it was the deprecated `HF_HUB_ENABLE_HF_TRANSFER=1` env var
+ - The script was setting this env var, but the `hf_transfer` package no longer exists
+ - This caused download failures that manifested as "Can't load image processor" errors
+ - Fix: Removed the `HF_HUB_ENABLE_HF_TRANSFER=1` setting from the script
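
A minimal sketch of the pattern that was removed; the exact line in `lighton-ocr2.py` may have differed:

```python
# Before (removed): forcing the deprecated hf_transfer downloader.
# With the hf_transfer package no longer available, model downloads fail,
# which surfaces later as "Can't load image processor" when the repo
# can't be fetched.
# import os
# os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

# After: no env var at all; huggingface_hub with the hf-xet backend
# handles fast downloads by default.
from huggingface_hub import snapshot_download

snapshot_download("lightonai/LightOnOCR-2-1B")
```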
 
+ **Test results (2026-01-29):**
+ - 10/10 samples processed successfully
+ - Clean markdown output with proper headers and paragraphs
+ - Output dataset: `davanstrien/lighton-ocr2-test-v4`

+ **Example usage:**
  ```bash
  hf jobs uv run --flavor a100-large \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
+ davanstrien/ufo-ColPali output-dataset \
  --max-samples 10 --shuffle --seed 42
  ```
 
  - Training: RLVR (Reinforcement Learning with Verifiable Rewards)
  - Performance: 83.2% on OlmOCR-Bench, 42.8 pages/sec on H100

+ ### PaddleOCR-VL-1.5 (`paddleocr-vl-1.5.py`)
+ ✅ **Production Ready** (Added 2026-01-30)
+
+ **Status:** Working with transformers
+
+ **Note:** Uses transformers backend (not vLLM) because PaddleOCR-VL only supports vLLM in server mode, which doesn't fit the single-command UV script pattern. Images are processed one at a time for stability.
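
A rough sketch of the one-image-at-a-time loop this implies; the `AutoProcessor`/`AutoModelForImageTextToText` classes and the prompt are assumptions here, not confirmed details of `paddleocr-vl-1.5.py`:

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

MODEL_ID = "PaddlePaddle/PaddleOCR-VL-1.5"

# assumption: the generic image-text-to-text auto classes cover this model
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID, device_map="auto", trust_remote_code=True
)

def ocr_one(image, prompt: str) -> str:
    """Run OCR on a single PIL image; no batching, one image per forward pass."""
    messages = [{
        "role": "user",
        "content": [{"type": "image", "image": image},
                    {"type": "text", "text": prompt}],
    }]
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt",
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=2048)
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return processor.decode(new_tokens, skip_special_tokens=True)

images = []  # PIL images from the input dataset's image column would go here
# processed sequentially (no batching) for stability, mirroring the note above
results = [ocr_one(image, "OCR this page to markdown.") for image in images]
```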
+
+ **Test results (2026-01-30):**
+ - 10/10 samples processed successfully
+ - Processing time: ~50s per image on L4 GPU
+ - Output dataset: `davanstrien/paddleocr-vl15-final-test`
+
+ **Example usage:**
+ ```bash
+ hf jobs uv run --flavor l4x1 \
+ -s HF_TOKEN \
+ https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
+ davanstrien/ufo-ColPali output-dataset \
+ --max-samples 10 --shuffle --seed 42
+ ```
+
+ **Task modes:**
+ - `ocr` (default): General text extraction to markdown
+ - `table`: Table extraction to HTML format
+ - `formula`: Mathematical formula recognition to LaTeX
+ - `chart`: Chart and diagram analysis
+ - `spotting`: Text spotting with localization (uses higher resolution)
+ - `seal`: Seal and stamp recognition
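
For example, table extraction on the same test set would presumably look like this (same placeholder datasets as above):

```bash
hf jobs uv run --flavor l4x1 \
  -s HF_TOKEN \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/paddleocr-vl-1.5.py \
  davanstrien/ufo-ColPali output-dataset \
  --task table --max-samples 10
```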
+
+ **Model Info:**
+ - Model: `PaddlePaddle/PaddleOCR-VL-1.5`
+ - Size: 0.9B parameters (ultra-compact)
+ - Performance: 94.5% SOTA on OmniDocBench v1.5
+ - Backend: Transformers (single image processing)
+ - Requires: `transformers>=5.0.0`
+
  ## Pending Development

  ### DeepSeek-OCR-2 (Visual Causal Flow Architecture)
 

  ---

+ ## Future: OCR Smoke Test Dataset
+
+ **Status:** Idea (noted 2026-02-12)
+
+ Build a small curated dataset (`uv-scripts/ocr-smoke-test`?) with ~2-5 samples each from several diverse sources. Purpose: fast CI-style verification that the scripts still work after dependency updates, without downloading full datasets.
+
+ **Design goals:**
+ - Tiny (~20-30 images total) so the download takes seconds, not minutes
+ - Covers the axes that break things: document type, image quality, language, layout complexity
+ - Has ground-truth text where possible for quality regression checks
+ - All permissively licensed (CC0/CC-BY preferred)
+
+ **Candidate sources:**
+
+ | Source | What it covers | Why |
+ |--------|---------------|-----|
+ | `NationalLibraryOfScotland/medical-history-of-british-india` | Historical English, degraded scans | Has a hand-corrected `text` column for comparison. CC0. Already tested with GLM-OCR. |
+ | `davanstrien/ufo-ColPali` | Mixed modern documents | Already used as our go-to test set. Varied layouts. |
+ | Something with **tables** | Structured data extraction | Tests `--task table` modes. Maybe a financial report or census page. |
+ | Something with **formulas/LaTeX** | Math notation | Tests `--task formula`. arXiv pages or textbook scans. |
+ | Something **multilingual** (CJK, Arabic, etc.) | Non-Latin scripts | GLM-OCR claims zh/ja/ko support. Good to verify. |
+ | Something **handwritten** | Handwriting recognition | Edge case that reveals model limits. |
+
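A rough sketch of how the dataset could be assembled with `datasets` (column names, splits, and per-source counts are assumptions; only the first two repos above are concrete):

```python
from datasets import concatenate_datasets, load_dataset

# (repo_id, image column, rows to take) - all counts are placeholders
SOURCES = [
    ("NationalLibraryOfScotland/medical-history-of-british-india", "image", 5),
    ("davanstrien/ufo-ColPali", "image", 5),
]

subsets = []
for repo_id, image_column, n in SOURCES:
    # split slicing keeps the pull small; ground-truth text columns could be carried along too
    ds = load_dataset(repo_id, split=f"train[:{n}]")
    ds = ds.select_columns([image_column])
    if image_column != "image":
        ds = ds.rename_column(image_column, "image")
    ds = ds.add_column("source", [repo_id] * len(ds))
    subsets.append(ds)

concatenate_datasets(subsets).push_to_hub("uv-scripts/ocr-smoke-test")
```
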
+ **How it would work:**
+ ```bash
+ # Quick smoke test for any script
+ uv run glm-ocr.py uv-scripts/ocr-smoke-test smoke-out --max-samples 5
+ # Or a dedicated test runner that checks all scripts against it
+ ```
+
+ **Open questions:**
+ - Build as a proper HF dataset, or just a folder of images in the repo?
+ - Should we include expected output for regression testing (fragile if models change)?
+ - Could we add a `--smoke-test` flag to each script that auto-uses this dataset? (sketched below)
+ - Worth adding to HF Jobs scheduled runs for ongoing monitoring?
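
One possible shape for that flag, assuming an argparse-style CLI (the scripts' actual argument parsing may differ, and the dataset id is hypothetical):

```python
import argparse

# hypothetical shared dataset id from the notes above
SMOKE_TEST_DATASET = "uv-scripts/ocr-smoke-test"

parser = argparse.ArgumentParser(description="OCR a dataset of document images")
parser.add_argument("input_dataset")
parser.add_argument("output_dataset")
parser.add_argument("--max-samples", type=int, default=None)
parser.add_argument(
    "--smoke-test",
    action="store_true",
    help="Ignore input_dataset and run against the shared smoke-test set",
)
args = parser.parse_args()

if args.smoke_test:
    # swap in the tiny shared dataset and cap samples so the run stays fast
    args.input_dataset = SMOKE_TEST_DATASET
    args.max_samples = min(args.max_samples or 5, 5)
```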
+
+ ---
+
+ **Last Updated:** 2026-02-12
  **Watch PRs:**
  - DeepSeek-OCR-2: https://github.com/vllm-project/vllm/pull/33165
 
nanonets-ocr.py CHANGED
@@ -103,7 +103,7 @@ def create_dataset_card(
  ) -> str:
      """Create a dataset card documenting the OCR process."""
      model_name = model.split("/")[-1]
-
      return f"""---
  viewer: false
  tags:
@@ -218,7 +218,7 @@ def main(

      # Check CUDA availability first
      check_cuda_availability()
-
      # Track processing start time
      start_time = datetime.now()

@@ -299,10 +299,10 @@ def main(
      # Add markdown column to dataset
      logger.info("Adding markdown column to dataset")
      dataset = dataset.add_column("markdown", all_markdown)
-
      # Handle inference_info tracking
      logger.info("Updating inference_info...")
-
      # Check for existing inference_info
      if "inference_info" in dataset.column_names:
          # Parse existing info from first row (all rows have same info)
@@ -316,7 +316,7 @@ def main(
          dataset = dataset.remove_columns(["inference_info"])
      else:
          existing_info = []
-
      # Add new inference info
      new_info = {
          "column_name": "markdown",
@@ -328,10 +328,10 @@ def main(
          "max_model_len": max_model_len,
          "script": "nanonets-ocr.py",
          "script_version": "1.0.0",
-         "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py"
      }
      existing_info.append(new_info)
-
      # Add updated inference_info column
      info_json = json.dumps(existing_info, ensure_ascii=False)
      dataset = dataset.add_column("inference_info", [info_json] * len(dataset))
@@ -339,12 +339,12 @@ def main(
      # Push to hub
      logger.info(f"Pushing to {output_dataset}")
      dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
-
      # Calculate processing time
      end_time = datetime.now()
      processing_duration = end_time - start_time
      processing_time = f"{processing_duration.total_seconds() / 60:.1f} minutes"
-
      # Create and push dataset card
      logger.info("Creating dataset card...")
      card_content = create_dataset_card(
@@ -359,7 +359,7 @@ def main(
          image_column=image_column,
          split=split,
      )
-
      card = DatasetCard(card_content)
      card.push_to_hub(output_dataset, token=HF_TOKEN)
      logger.info("✅ Dataset card created and pushed!")
@@ -394,13 +394,14 @@ if __name__ == "__main__":
      print("\n3. Process a subset for testing:")
      print(" uv run nanonets-ocr.py large-dataset test-output --max-samples 10")
      print("\n4. Random sample from ordered dataset:")
-     print(" uv run nanonets-ocr.py ordered-dataset random-test --max-samples 50 --shuffle")
      print("\n5. Running on HF Jobs:")
-     print(" hfjobs run \\")
-     print(" --flavor l4x1 \\")
-     print(" --secret HF_TOKEN=... \\")
      print(
-         " uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \\"
      )
      print(" your-document-dataset \\")
      print(" your-markdown-output")
 
  ) -> str:
      """Create a dataset card documenting the OCR process."""
      model_name = model.split("/")[-1]
+
      return f"""---
  viewer: false
  tags:
 

      # Check CUDA availability first
      check_cuda_availability()
+
      # Track processing start time
      start_time = datetime.now()

 
      # Add markdown column to dataset
      logger.info("Adding markdown column to dataset")
      dataset = dataset.add_column("markdown", all_markdown)
+
      # Handle inference_info tracking
      logger.info("Updating inference_info...")
+
      # Check for existing inference_info
      if "inference_info" in dataset.column_names:
          # Parse existing info from first row (all rows have same info)
 
          dataset = dataset.remove_columns(["inference_info"])
      else:
          existing_info = []
+
      # Add new inference info
      new_info = {
          "column_name": "markdown",
 
          "max_model_len": max_model_len,
          "script": "nanonets-ocr.py",
          "script_version": "1.0.0",
+         "script_url": "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
      }
      existing_info.append(new_info)
+
      # Add updated inference_info column
      info_json = json.dumps(existing_info, ensure_ascii=False)
      dataset = dataset.add_column("inference_info", [info_json] * len(dataset))
 
      # Push to hub
      logger.info(f"Pushing to {output_dataset}")
      dataset.push_to_hub(output_dataset, private=private, token=HF_TOKEN)
+
      # Calculate processing time
      end_time = datetime.now()
      processing_duration = end_time - start_time
      processing_time = f"{processing_duration.total_seconds() / 60:.1f} minutes"
+
      # Create and push dataset card
      logger.info("Creating dataset card...")
      card_content = create_dataset_card(
 
          image_column=image_column,
          split=split,
      )
+
      card = DatasetCard(card_content)
      card.push_to_hub(output_dataset, token=HF_TOKEN)
      logger.info("✅ Dataset card created and pushed!")
 
      print("\n3. Process a subset for testing:")
      print(" uv run nanonets-ocr.py large-dataset test-output --max-samples 10")
      print("\n4. Random sample from ordered dataset:")
+     print(
+         " uv run nanonets-ocr.py ordered-dataset random-test --max-samples 50 --shuffle"
+     )
      print("\n5. Running on HF Jobs:")
+     print(" hf jobs uv run --flavor l4x1 \\")
+     print(" -s HF_TOKEN \\")
      print(
+         " https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \\"
      )
      print(" your-document-dataset \\")
      print(" your-markdown-output")