davanstrien (HF Staff) and Claude Opus 4.6 committed
Commit c37f5fc · 1 Parent(s): e1bde4f

Add glm-ocr-v2.py: incremental uploads + checkpoint/resume


CommitScheduler uploads parquet batches in background (results never
lost on crash/timeout). --resume picks up from last completed batch.
CleanupScheduler deletes local shards after upload to prevent disk fill.

Validated on full 2724-page Encyclopaedia Britannica run (~$5, 0 errors).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Files changed (1)
  1. glm-ocr-v2.py +1004 -0
glm-ocr-v2.py ADDED
@@ -0,0 +1,1004 @@
+# /// script
+# requires-python = ">=3.11"
+# dependencies = [
+#     "datasets>=3.1.0",
+#     "pyarrow>=17.0.0,<18.0.0",
+#     "huggingface-hub",
+#     "pillow",
+#     "vllm",
+#     "toolz",
+#     "torch",
+# ]
+#
+# [[tool.uv.index]]
+# url = "https://wheels.vllm.ai/nightly/cu129"
+#
+# [tool.uv]
+# prerelease = "allow"
+# override-dependencies = ["transformers>=5.1.0"]
+# ///
+
+"""
+Convert document images to markdown using GLM-OCR with vLLM.
+
+v2: Incremental uploads via CommitScheduler + checkpoint/resume support.
+Results are saved as parquet shards per batch and uploaded in the background,
+so a crash or upload failure never loses completed OCR work. Use --resume to
+pick up from the last completed batch after an interruption.
+
+GLM-OCR is a compact 0.9B-parameter OCR model achieving 94.62% on OmniDocBench V1.5.
+It uses a CogViT visual encoder with a GLM-0.5B language decoder and Multi-Token
+Prediction (MTP) loss for fast, accurate document parsing.
+
+NOTE: Requires vLLM nightly wheels from the cu129 variant (GLM-OCR added in v0.16.0,
+PR #33005) and transformers>=5.1.0 (GLM-OCR support landed in the stable release).
+Uses https://wheels.vllm.ai/nightly/cu129, which has x86_64 wheels.
+The first run may take a few minutes to download and install dependencies.
+
+Features:
+- 0.9B parameters (ultra-compact)
+- 94.62% on OmniDocBench V1.5 (SOTA for sub-1B models)
+- Text recognition with markdown output
+- LaTeX formula recognition
+- Table extraction (HTML format)
+- Multilingual: zh, en, fr, es, ru, de, ja, ko
+- MIT licensed
+- Incremental parquet uploads (v2) — never lose results
+- Checkpoint/resume (v2) — pick up where you left off
+
+Model: zai-org/GLM-OCR
+vLLM: Requires vLLM nightly build + transformers>=5.1.0
+Performance: 94.62% on OmniDocBench V1.5
+"""
+
+import argparse
+import base64
+import io
+import json
+import logging
+import os
+import sys
+import tempfile
+import time
+from datetime import datetime
+from pathlib import Path
+from typing import Any, Dict, List, Union
+
+import torch
+from datasets import load_dataset
+from huggingface_hub import CommitScheduler, DatasetCard, HfApi, login
+from PIL import Image
+from toolz import partition_all
+from vllm import LLM, SamplingParams
+
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+MODEL = "zai-org/GLM-OCR"
+
+# Task prompts as specified by the model
+TASK_PROMPTS = {
+    "ocr": "Text Recognition:",
+    "formula": "Formula Recognition:",
+    "table": "Table Recognition:",
+}
+
+# Metadata keys that must match between runs for --resume
+_RESUMABLE_KEYS = [
+    "input_dataset",
+    "split",
+    "shuffle",
+    "seed",
+    "max_samples",
+    "batch_size",
+    "source_dataset_sha",
+    "temperature",
+    "top_p",
+    "repetition_penalty",
+    "max_tokens",
+    "task",
+    "gpu_memory_utilization",
+    "max_model_len",
+]
+
+METADATA_FILENAME = "_run_metadata.json"
+
+
+class CleanupScheduler(CommitScheduler):
+    """CommitScheduler that deletes local parquet files after successful upload.
+
+    Prevents disk from filling up on long-running HF Jobs.
+    """
+
+    def push_to_hub(self):
+        parquet_files = list(self.folder_path.glob("train-*.parquet"))
+        if not parquet_files:
+            return None
+        result = super().push_to_hub()
+        if result is not None:
+            for f in parquet_files:
+                f.unlink(missing_ok=True)
+                logger.info(f"Cleaned up uploaded shard: {f.name}")
+        return result
+
+
+def check_cuda_availability():
+    """Check if CUDA is available and exit if not."""
+    if not torch.cuda.is_available():
+        logger.error("CUDA is not available. This script requires a GPU.")
+        logger.error("Please run on a machine with a CUDA-capable GPU.")
+        sys.exit(1)
+    else:
+        logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
+
+
+def make_ocr_message(
+    image: Union[Image.Image, Dict[str, Any], str],
+    task: str = "ocr",
+) -> List[Dict]:
+    """
+    Create chat message for OCR processing.
+
+    GLM-OCR uses a chat format with an image and a task prompt prefix.
+    Supported tasks: ocr, formula, table.
+    """
+    # Convert to PIL Image if needed
+    if isinstance(image, Image.Image):
+        pil_img = image
+    elif isinstance(image, dict) and "bytes" in image:
+        pil_img = Image.open(io.BytesIO(image["bytes"]))
+    elif isinstance(image, str):
+        pil_img = Image.open(image)
+    else:
+        raise ValueError(f"Unsupported image type: {type(image)}")
+
+    # Convert to RGB
+    pil_img = pil_img.convert("RGB")
+
+    # Convert to base64 data URI
+    buf = io.BytesIO()
+    pil_img.save(buf, format="PNG")
+    data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
+
+    prompt_text = TASK_PROMPTS.get(task, TASK_PROMPTS["ocr"])
+
+    return [
+        {
+            "role": "user",
+            "content": [
+                {"type": "image_url", "image_url": {"url": data_uri}},
+                {"type": "text", "text": prompt_text},
+            ],
+        }
+    ]
+
+
+def build_run_metadata(
+    *,
+    input_dataset: str,
+    split: str,
+    shuffle: bool,
+    seed: int,
+    max_samples: int | None,
+    batch_size: int,
+    source_dataset_sha: str,
+    temperature: float,
+    top_p: float,
+    repetition_penalty: float,
+    max_tokens: int,
+    task: str,
+    gpu_memory_utilization: float,
+    max_model_len: int,
+    total_batches: int,
+    total_samples: int,
+    model: str = MODEL,
+) -> dict:
+    """Build the run metadata dict for persistence."""
+    return {
+        "input_dataset": input_dataset,
+        "split": split,
+        "shuffle": shuffle,
+        "seed": seed,
+        "max_samples": max_samples,
+        "batch_size": batch_size,
+        "source_dataset_sha": source_dataset_sha,
+        "temperature": temperature,
+        "top_p": top_p,
+        "repetition_penalty": repetition_penalty,
+        "max_tokens": max_tokens,
+        "task": task,
+        "gpu_memory_utilization": gpu_memory_utilization,
+        "max_model_len": max_model_len,
+        "total_batches": total_batches,
+        "total_samples": total_samples,
+        "model": model,
+        "script": "glm-ocr-v2.py",
+        "created_at": datetime.now().isoformat(),
+    }
+
+
+def save_run_metadata(metadata: dict, folder: Path) -> Path:
+    """Save run metadata to the staging folder."""
+    path = folder / METADATA_FILENAME
+    path.write_text(json.dumps(metadata, indent=2))
+    return path
+
+
+def fetch_remote_metadata(api: HfApi, repo_id: str, token: str | None) -> dict | None:
+    """Download _run_metadata.json from the Hub dataset repo. Returns None if missing."""
+    try:
+        local_path = api.hf_hub_download(
+            repo_id=repo_id,
+            filename=f"data/{METADATA_FILENAME}",
+            repo_type="dataset",
+            token=token,
+        )
+        return json.loads(Path(local_path).read_text())
+    except Exception as e:
+        # Covers EntryNotFoundError (file missing) as well as network errors
+        logger.debug(f"Could not fetch remote metadata: {e}")
+        return None
+
+
+def find_completed_batches(api: HfApi, repo_id: str, token: str | None) -> set[int]:
+    """List completed batch numbers from existing parquet files on the Hub."""
+    completed = set()
+    try:
+        files = api.list_repo_tree(
+            repo_id=repo_id, path_in_repo="data", repo_type="dataset", token=token
+        )
+        for item in files:
+            name = getattr(item, "path", None) or str(item)
+            # Extract batch number from e.g. "data/train-00003-of-00043.parquet"
+            basename = name.split("/")[-1] if "/" in name else name
+            if basename.startswith("train-") and basename.endswith(".parquet"):
+                try:
+                    batch_num = int(basename.split("-")[1])
+                    completed.add(batch_num)
+                except (IndexError, ValueError):
+                    continue
+    except Exception as e:
+        logger.warning(f"Could not list remote files: {e}")
+    return completed
+
+
+def verify_run_metadata(current: dict, remote: dict) -> list[str]:
+    """Compare current run params against saved metadata. Returns list of mismatches."""
+    mismatches = []
+    for key in _RESUMABLE_KEYS:
+        current_val = current.get(key)
+        remote_val = remote.get(key)
+        if current_val != remote_val:
+            mismatches.append(f"  {key}: current={current_val!r}, saved={remote_val!r}")
+    return mismatches
+
+
+def create_dataset_card(
+    source_dataset: str,
+    model: str,
+    num_samples: int,
+    processing_time: str,
+    batch_size: int,
+    max_model_len: int,
+    max_tokens: int,
+    gpu_memory_utilization: float,
+    temperature: float,
+    top_p: float,
+    task: str,
+    image_column: str = "image",
+    split: str = "train",
+) -> str:
+    """Create a dataset card documenting the OCR process."""
+    model_name = model.split("/")[-1]
+    task_desc = {
+        "ocr": "text recognition",
+        "formula": "formula recognition",
+        "table": "table recognition",
+    }
+
+    return f"""---
+tags:
+- ocr
+- document-processing
+- glm-ocr
+- markdown
+- uv-script
+- generated
+---
+
+# Document OCR using {model_name}
+
+This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using GLM-OCR, a compact 0.9B OCR model achieving SOTA performance.
+
+## Processing Details
+
+- **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
+- **Model**: [{model}](https://huggingface.co/{model})
+- **Task**: {task_desc.get(task, task)}
+- **Number of Samples**: {num_samples:,}
+- **Processing Time**: {processing_time}
+- **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M")}
+
+### Configuration
+
+- **Image Column**: `{image_column}`
+- **Output Column**: `markdown`
+- **Dataset Split**: `{split}`
+- **Batch Size**: {batch_size}
+- **Max Model Length**: {max_model_len:,} tokens
+- **Max Output Tokens**: {max_tokens:,}
+- **Temperature**: {temperature}
+- **Top P**: {top_p}
+- **GPU Memory Utilization**: {gpu_memory_utilization:.1%}
+
+## Model Information
+
+GLM-OCR is a compact, high-performance OCR model:
+- 0.9B parameters
+- 94.62% on OmniDocBench V1.5
+- CogViT visual encoder + GLM-0.5B language decoder
+- Multi-Token Prediction (MTP) loss for efficiency
+- Multilingual: zh, en, fr, es, ru, de, ja, ko
+- MIT licensed
+
+## Dataset Structure
+
+The dataset contains all original columns plus:
+- `markdown`: The extracted text in markdown format
+- `inference_info`: JSON list tracking all OCR models applied to this dataset
+
+## Reproduction
+
+```bash
+uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr-v2.py \\
+    {source_dataset} \\
+    <output-dataset> \\
+    --image-column {image_column} \\
+    --batch-size {batch_size} \\
+    --task {task}
+```
+
+Generated with [UV Scripts](https://huggingface.co/uv-scripts) (glm-ocr-v2.py)
+"""
+
+
+def main(
+    input_dataset: str,
+    output_dataset: str,
+    image_column: str = "image",
+    batch_size: int = 16,
+    max_model_len: int = 8192,
+    max_tokens: int = 8192,
+    temperature: float = 0.01,
+    top_p: float = 0.00001,
+    repetition_penalty: float = 1.1,
+    gpu_memory_utilization: float = 0.8,
+    task: str = "ocr",
+    hf_token: str | None = None,
+    split: str = "train",
+    max_samples: int | None = None,
+    private: bool = False,
+    shuffle: bool = False,
+    seed: int = 42,
+    output_column: str = "markdown",
+    verbose: bool = False,
+    config: str | None = None,
+    create_pr: bool = False,
+    resume: bool = False,
+    force: bool = False,
+    upload_every: int = 5,
+):
+    """Process images from HF dataset through GLM-OCR model with incremental uploads."""
+
+    check_cuda_availability()
+
+    start_time = datetime.now()
+
+    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
+    if HF_TOKEN:
+        login(token=HF_TOKEN)
+
+    api = HfApi(token=HF_TOKEN)
+
+    # Validate task
+    if task not in TASK_PROMPTS:
+        logger.error(f"Unknown task '{task}'. Supported: {list(TASK_PROMPTS.keys())}")
+        sys.exit(1)
+
+    # Warn about --create-pr fallback
+    if create_pr:
+        logger.warning(
+            "CommitScheduler does not support PRs. "
+            "Falling back to v1 behavior (single push_to_hub at end)."
+        )
+        if resume:
+            logger.error("--resume is not compatible with --create-pr (v1 fallback).")
+            sys.exit(1)
+
+    logger.info(f"Using model: {MODEL}")
+    logger.info(f"Task: {task} (prompt: '{TASK_PROMPTS[task]}')")
+
+    # Get source dataset SHA for resume verification
+    logger.info(f"Fetching source dataset info: {input_dataset}")
+    source_info = api.dataset_info(input_dataset, token=HF_TOKEN)
+    source_sha = source_info.sha
+    logger.info(f"Source dataset SHA: {source_sha}")
+
+    # Load dataset
+    logger.info(f"Loading dataset: {input_dataset}")
+    dataset = load_dataset(input_dataset, split=split)
+
+    if image_column not in dataset.column_names:
+        raise ValueError(
+            f"Column '{image_column}' not found. Available: {dataset.column_names}"
+        )
+
+    if shuffle:
+        logger.info(f"Shuffling dataset with seed {seed}")
+        dataset = dataset.shuffle(seed=seed)
+
+    if max_samples:
+        dataset = dataset.select(range(min(max_samples, len(dataset))))
+        logger.info(f"Limited to {len(dataset)} samples")
+
+    total_samples = len(dataset)
+    total_batches = (total_samples + batch_size - 1) // batch_size
+
+    # Build metadata for this run
+    run_metadata = build_run_metadata(
+        input_dataset=input_dataset,
+        split=split,
+        shuffle=shuffle,
+        seed=seed,
+        max_samples=max_samples,
+        batch_size=batch_size,
+        source_dataset_sha=source_sha,
+        temperature=temperature,
+        top_p=top_p,
+        repetition_penalty=repetition_penalty,
+        max_tokens=max_tokens,
+        task=task,
+        gpu_memory_utilization=gpu_memory_utilization,
+        max_model_len=max_model_len,
+        total_batches=total_batches,
+        total_samples=total_samples,
+    )
+
+    # Resume logic
+    completed_batches: set[int] = set()
+    if resume and not force:
+        logger.info("Checking for existing run to resume...")
+        remote_meta = fetch_remote_metadata(api, output_dataset, HF_TOKEN)
+        if remote_meta is None:
+            logger.error(
+                f"No existing metadata found at {output_dataset}. "
+                "Cannot resume. Run without --resume to start fresh."
+            )
+            sys.exit(1)
+
+        mismatches = verify_run_metadata(run_metadata, remote_meta)
+        if mismatches:
+            logger.error("Run parameters do not match saved metadata:")
+            for m in mismatches:
+                logger.error(m)
+            logger.error("Use --force to ignore and start fresh.")
+            sys.exit(1)
+
+        completed_batches = find_completed_batches(api, output_dataset, HF_TOKEN)
+        if completed_batches:
+            logger.info(
+                f"Found {len(completed_batches)} completed batches: "
+                f"{sorted(completed_batches)}"
+            )
+        else:
+            logger.info("No completed batches found. Starting from beginning.")
+    elif force:
+        logger.info("--force: ignoring any existing data, starting fresh.")
+
+    # Initialize vLLM
+    logger.info("Initializing vLLM with GLM-OCR")
+    logger.info("This may take a few minutes on first run...")
+    llm = LLM(
+        model=MODEL,
+        trust_remote_code=True,
+        max_model_len=max_model_len,
+        gpu_memory_utilization=gpu_memory_utilization,
+        limit_mm_per_prompt={"image": 1},
+    )
+
+    sampling_params = SamplingParams(
+        temperature=temperature,
+        top_p=top_p,
+        max_tokens=max_tokens,
+        repetition_penalty=repetition_penalty,
+    )
+
+    # Inference info entry for this run
+    inference_entry = {
+        "model_id": MODEL,
+        "model_name": "GLM-OCR",
+        "column_name": output_column,
+        "timestamp": datetime.now().isoformat(),
+        "task": task,
+        "temperature": temperature,
+        "top_p": top_p,
+        "repetition_penalty": repetition_penalty,
+        "max_tokens": max_tokens,
+    }
+
+    logger.info(f"Processing {total_samples} images in batches of {batch_size}")
+    logger.info(f"Output will be written to column: {output_column}")
+
+    # --- create-pr fallback: v1 behavior (collect all, push once) ---
+    if create_pr:
+        all_outputs = []
+        processed = 0
+
+        for batch_num, batch_indices in enumerate(
+            partition_all(batch_size, range(total_samples))
+        ):
+            batch_indices = list(batch_indices)
+            batch_images = [dataset[i][image_column] for i in batch_indices]
+
+            logger.info(
+                f"Batch {batch_num + 1}/{total_batches} "
+                f"({processed}/{total_samples} images done)"
+            )
+
+            try:
+                batch_messages = [
+                    make_ocr_message(img, task=task) for img in batch_images
+                ]
+                outputs = llm.chat(batch_messages, sampling_params)
+                for output in outputs:
+                    text = output.outputs[0].text.strip()
+                    all_outputs.append(text)
+                processed += len(batch_images)
+            except Exception as e:
+                logger.error(f"Error processing batch: {e}")
+                all_outputs.extend(["[OCR ERROR]"] * len(batch_images))
+                processed += len(batch_images)
+
+        processing_duration = datetime.now() - start_time
+        processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
+
+        dataset = dataset.add_column(output_column, all_outputs)
+
+        # Inference info
+        if "inference_info" in dataset.column_names:
+
+            def update_inference_info(example):
+                try:
+                    existing = (
+                        json.loads(example["inference_info"])
+                        if example["inference_info"]
+                        else []
+                    )
+                except (json.JSONDecodeError, TypeError):
+                    existing = []
+                existing.append(inference_entry)
+                return {"inference_info": json.dumps(existing)}
+
+            dataset = dataset.map(update_inference_info)
+        else:
+            inference_list = [json.dumps([inference_entry])] * len(dataset)
+            dataset = dataset.add_column("inference_info", inference_list)
+
+        logger.info(f"Pushing to {output_dataset} (create-pr mode)")
+        max_retries = 3
+        for attempt in range(1, max_retries + 1):
+            try:
+                if attempt > 1:
+                    logger.warning("Disabling XET (fallback to HTTP upload)")
+                    os.environ["HF_HUB_DISABLE_XET"] = "1"
+                dataset.push_to_hub(
+                    output_dataset,
+                    private=private,
+                    token=HF_TOKEN,
+                    max_shard_size="500MB",
+                    **({"config_name": config} if config else {}),
+                    create_pr=True,
+                    commit_message=f"Add {MODEL} OCR results ({len(dataset)} samples)"
+                    + (f" [{config}]" if config else ""),
+                )
+                break
+            except Exception as e:
+                logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
+                if attempt < max_retries:
+                    delay = 30 * (2 ** (attempt - 1))
+                    logger.info(f"Retrying in {delay}s...")
+                    time.sleep(delay)
+                else:
+                    logger.error("All upload attempts failed.")
+                    sys.exit(1)
+
+        _push_dataset_card(
+            output_dataset=output_dataset,
+            input_dataset=input_dataset,
+            num_samples=total_samples,
+            processing_time=processing_time_str,
+            batch_size=batch_size,
+            max_model_len=max_model_len,
+            max_tokens=max_tokens,
+            gpu_memory_utilization=gpu_memory_utilization,
+            temperature=temperature,
+            top_p=top_p,
+            task=task,
+            image_column=image_column,
+            split=split,
+            token=HF_TOKEN,
+        )
+        _log_completion(
+            total_samples, processing_duration, processing_time_str, output_dataset
+        )
+        return
+
+    # --- v2 behavior: incremental parquet uploads via CommitScheduler ---
+    staging_dir = Path(tempfile.mkdtemp(prefix="glm-ocr-v2-"))
+    logger.info(f"Staging directory: {staging_dir}")
+
+    # Save metadata to staging dir so it gets uploaded with the first commit
+    save_run_metadata(run_metadata, staging_dir)
+
+    processed = 0
+    skipped = 0
+
+    with CleanupScheduler(
+        repo_id=output_dataset,
+        repo_type="dataset",
+        folder_path=staging_dir,
+        path_in_repo="data",
+        every=upload_every,
+        private=private,
+        token=HF_TOKEN,
+    ) as _scheduler:  # noqa: F841
+        for batch_num, batch_indices in enumerate(
+            partition_all(batch_size, range(total_samples))
+        ):
+            batch_indices = list(batch_indices)
+
+            # Skip already-completed batches on resume
+            if batch_num in completed_batches:
+                skipped += len(batch_indices)
+                logger.info(
+                    f"Batch {batch_num + 1}/{total_batches} — skipped (already uploaded)"
+                )
+                continue
+
+            batch_images = [dataset[i][image_column] for i in batch_indices]
+
+            logger.info(
+                f"Batch {batch_num + 1}/{total_batches} "
+                f"({processed + skipped}/{total_samples} images done, "
+                f"{skipped} skipped)"
+            )
+
+            try:
+                batch_messages = [
+                    make_ocr_message(img, task=task) for img in batch_images
+                ]
+                outputs = llm.chat(batch_messages, sampling_params)
+                batch_texts = [o.outputs[0].text.strip() for o in outputs]
+            except Exception as e:
+                logger.error(f"Error processing batch {batch_num + 1}: {e}")
+                batch_texts = ["[OCR ERROR]"] * len(batch_images)
+
+            # Build batch dataset from the source subset
+            batch_ds = dataset.select(batch_indices)
+            batch_ds = batch_ds.add_column(output_column, batch_texts)
+
+            # Handle inference_info per row
+            if "inference_info" in batch_ds.column_names:
+                info_values = []
+                for i in range(len(batch_ds)):
+                    raw = batch_ds[i]["inference_info"]
+                    try:
+                        existing = json.loads(raw) if raw else []
+                    except (json.JSONDecodeError, TypeError):
+                        existing = []
+                    existing.append(inference_entry)
+                    info_values.append(json.dumps(existing))
+                batch_ds = batch_ds.remove_columns("inference_info")
+                batch_ds = batch_ds.add_column("inference_info", info_values)
+            else:
+                info_values = [json.dumps([inference_entry])] * len(batch_ds)
+                batch_ds = batch_ds.add_column("inference_info", info_values)
+
+            # Save shard to staging dir
+            shard_name = f"train-{batch_num:05d}-of-{total_batches:05d}.parquet"
+            shard_path = staging_dir / shard_name
+            batch_ds.to_parquet(shard_path)
+            logger.info(f"Saved shard: {shard_name} ({len(batch_ds)} rows)")
+
+            processed += len(batch_indices)
+
+    # Context manager exit triggers final flush — blocks until upload completes
+    logger.info("All batches processed. Final upload flush complete.")
+
+    processing_duration = datetime.now() - start_time
+    processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
+
+    # Push dataset card as separate commit
+    _push_dataset_card(
+        output_dataset=output_dataset,
+        input_dataset=input_dataset,
+        num_samples=total_samples,
+        processing_time=processing_time_str,
+        batch_size=batch_size,
+        max_model_len=max_model_len,
+        max_tokens=max_tokens,
+        gpu_memory_utilization=gpu_memory_utilization,
+        temperature=temperature,
+        top_p=top_p,
+        task=task,
+        image_column=image_column,
+        split=split,
+        token=HF_TOKEN,
+    )
+
+    _log_completion(
+        total_samples, processing_duration, processing_time_str, output_dataset
+    )
+
+    if verbose:
+        import importlib.metadata
+
+        logger.info("--- Resolved package versions ---")
+        for pkg in ["vllm", "transformers", "torch", "datasets", "pyarrow", "pillow"]:
+            try:
+                logger.info(f"  {pkg}=={importlib.metadata.version(pkg)}")
+            except importlib.metadata.PackageNotFoundError:
+                logger.info(f"  {pkg}: not installed")
+        logger.info("--- End versions ---")
+
+
+def _push_dataset_card(
+    *,
+    output_dataset: str,
+    input_dataset: str,
+    num_samples: int,
+    processing_time: str,
+    batch_size: int,
+    max_model_len: int,
+    max_tokens: int,
+    gpu_memory_utilization: float,
+    temperature: float,
+    top_p: float,
+    task: str,
+    image_column: str,
+    split: str,
+    token: str | None,
+):
+    """Create and push the dataset card."""
+    logger.info("Creating dataset card")
+    card_content = create_dataset_card(
+        source_dataset=input_dataset,
+        model=MODEL,
+        num_samples=num_samples,
+        processing_time=processing_time,
+        batch_size=batch_size,
+        max_model_len=max_model_len,
+        max_tokens=max_tokens,
+        gpu_memory_utilization=gpu_memory_utilization,
+        temperature=temperature,
+        top_p=top_p,
+        task=task,
+        image_column=image_column,
+        split=split,
+    )
+    card = DatasetCard(card_content)
+    card.push_to_hub(output_dataset, token=token)
+
+
+def _log_completion(
+    total_samples: int,
+    processing_duration,
+    processing_time_str: str,
+    output_dataset: str,
+):
+    """Log final completion stats."""
+    logger.info("Done! GLM-OCR processing complete.")
+    logger.info(
+        f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
+    )
+    logger.info(f"Processing time: {processing_time_str}")
+    if processing_duration.total_seconds() > 0:
+        logger.info(
+            f"Processing speed: "
+            f"{total_samples / processing_duration.total_seconds():.2f} images/sec"
+        )
+
+
+if __name__ == "__main__":
+    if len(sys.argv) == 1:
+        print("=" * 70)
+        print("GLM-OCR Document Processing (v2 — incremental uploads)")
+        print("=" * 70)
+        print("\n0.9B OCR model - 94.62% on OmniDocBench V1.5")
+        print("\nv2 improvements:")
+        print("  - Incremental parquet uploads (never lose results)")
+        print("  - Checkpoint/resume (--resume)")
+        print("  - Background upload every N minutes (--upload-every)")
+        print("\nTask modes:")
+        print("  ocr     - Text recognition (default)")
+        print("  formula - LaTeX formula recognition")
+        print("  table   - Table extraction")
+        print("\nExamples:")
+        print("\n1. Basic OCR:")
+        print("   uv run glm-ocr-v2.py input-dataset output-dataset")
+        print("\n2. Resume after interruption:")
+        print("   uv run glm-ocr-v2.py input-dataset output-dataset --resume")
+        print("\n3. Running on HF Jobs:")
+        print("   hf jobs uv run --flavor l4x1 \\")
+        print("     -s HF_TOKEN \\")
+        print(
+            "     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr-v2.py \\"
+        )
+        print("     input-dataset output-dataset --batch-size 16")
+        print("\nFor full help: uv run glm-ocr-v2.py --help")
+        sys.exit(0)
+
+    parser = argparse.ArgumentParser(
+        description="Document OCR using GLM-OCR v2 (incremental uploads + resume)",
+        formatter_class=argparse.RawDescriptionHelpFormatter,
+        epilog="""
+Task modes:
+  ocr      Text recognition to markdown (default)
+  formula  LaTeX formula recognition
+  table    Table extraction
+
+v2 features:
+  Parquet shards uploaded incrementally via CommitScheduler.
+  Use --resume to pick up from the last completed batch.
+  Use --force to ignore existing data and start fresh.
+
+Examples:
+  uv run glm-ocr-v2.py my-docs analyzed-docs
+  uv run glm-ocr-v2.py docs results --task formula
+  uv run glm-ocr-v2.py large-dataset test --max-samples 50 --shuffle
+  uv run glm-ocr-v2.py large-dataset test --resume
+        """,
+    )
+
+    parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
+    parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
+    parser.add_argument(
+        "--image-column",
+        default="image",
+        help="Column containing images (default: image)",
+    )
+    parser.add_argument(
+        "--batch-size",
+        type=int,
+        default=16,
+        help="Batch size for processing (default: 16)",
+    )
+    parser.add_argument(
+        "--max-model-len",
+        type=int,
+        default=8192,
+        help="Maximum model context length (default: 8192)",
+    )
+    parser.add_argument(
+        "--max-tokens",
+        type=int,
+        default=8192,
+        help="Maximum tokens to generate (default: 8192, capped by max-model-len)",
+    )
+    parser.add_argument(
+        "--temperature",
+        type=float,
+        default=0.01,
+        help="Sampling temperature (default: 0.01, near-greedy for OCR accuracy)",
+    )
+    parser.add_argument(
+        "--top-p",
+        type=float,
+        default=0.00001,
+        help="Top-p sampling parameter (default: 0.00001, near-greedy)",
+    )
+    parser.add_argument(
+        "--repetition-penalty",
+        type=float,
+        default=1.1,
+        help="Repetition penalty to prevent loops (default: 1.1)",
+    )
+    parser.add_argument(
+        "--gpu-memory-utilization",
+        type=float,
+        default=0.8,
+        help="GPU memory utilization (default: 0.8)",
+    )
+    parser.add_argument(
+        "--task",
+        choices=["ocr", "formula", "table"],
+        default="ocr",
+        help="OCR task mode (default: ocr)",
+    )
+    parser.add_argument("--hf-token", help="Hugging Face API token")
+    parser.add_argument(
+        "--split", default="train", help="Dataset split to use (default: train)"
+    )
+    parser.add_argument(
+        "--max-samples",
+        type=int,
+        help="Maximum number of samples to process (for testing)",
+    )
+    parser.add_argument(
+        "--private", action="store_true", help="Make output dataset private"
+    )
+    parser.add_argument(
+        "--config",
+        help="Config/subset name when pushing to Hub (for benchmarking multiple models in one repo)",
+    )
+    parser.add_argument(
+        "--create-pr",
+        action="store_true",
+        help="Create a pull request instead of pushing directly (falls back to v1 single-push behavior)",
+    )
+    parser.add_argument(
+        "--shuffle", action="store_true", help="Shuffle dataset before processing"
+    )
+    parser.add_argument(
+        "--seed",
+        type=int,
+        default=42,
+        help="Random seed for shuffling (default: 42)",
+    )
+    parser.add_argument(
+        "--output-column",
+        default="markdown",
+        help="Column name for output text (default: markdown)",
+    )
+    parser.add_argument(
+        "--verbose",
+        action="store_true",
+        help="Log resolved package versions after processing (useful for pinning deps)",
+    )
+    # v2-specific args
+    parser.add_argument(
+        "--resume",
+        action="store_true",
+        help="Resume from last completed batch (requires matching run metadata on Hub)",
+    )
+    parser.add_argument(
+        "--force",
+        action="store_true",
+        help="Ignore existing data on Hub and start fresh (skips metadata check)",
+    )
+    parser.add_argument(
+        "--upload-every",
+        type=int,
+        default=5,
+        help="CommitScheduler upload interval in minutes (default: 5)",
+    )
+
+    args = parser.parse_args()
+
+    main(
+        input_dataset=args.input_dataset,
+        output_dataset=args.output_dataset,
+        image_column=args.image_column,
+        batch_size=args.batch_size,
+        max_model_len=args.max_model_len,
+        max_tokens=args.max_tokens,
+        temperature=args.temperature,
+        top_p=args.top_p,
+        repetition_penalty=args.repetition_penalty,
+        gpu_memory_utilization=args.gpu_memory_utilization,
+        task=args.task,
+        hf_token=args.hf_token,
+        split=args.split,
+        max_samples=args.max_samples,
+        private=args.private,
+        shuffle=args.shuffle,
+        seed=args.seed,
+        output_column=args.output_column,
+        verbose=args.verbose,
+        config=args.config,
+        create_pr=args.create_pr,
+        resume=args.resume,
+        force=args.force,
+        upload_every=args.upload_every,
+    )